WorldWideScience

Sample records for lyapunov-based recurrent neural

  1. Transient Stability Enhancement of Power Systems by Lyapunov-Based Recurrent Neural Networks UPFC Controllers

    Science.gov (United States)

    Chu, Chia-Chi; Tsai, Hung-Chi; Chang, Wei-Neng

    A Lyapunov-based recurrent neural network unified power flow controller (UPFC) is developed for improving the transient stability of power systems. First, a simple UPFC dynamical model, composed of a controllable shunt susceptance on the shunt side and an ideal complex transformer on the series side, is utilized to analyze UPFC dynamical characteristics. Secondly, we study the control configuration of the UPFC with two major blocks: the primary control and the supplementary control. The primary control is implemented by standard PI techniques when the power system is operating under normal conditions. The supplementary control becomes effective only when the power system is subjected to large disturbances. We propose a new Lyapunov-based UPFC controller for the classical single-machine-infinite-bus system for damping enhancement. In order to consider more complicated detailed generator models, we also propose a Lyapunov-based adaptive recurrent neural network controller to deal with such model uncertainties. This controller can be treated as a neural network approximation of Lyapunov control actions. In addition, this controller also provides online learning ability, adjusting the corresponding weights with the backpropagation algorithm built into the hidden layer. The proposed control scheme has been tested on two simple power systems. Simulation results demonstrate that the proposed control strategy is very effective for suppressing power swings even under severe system conditions.
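
    The classical single-machine-infinite-bus model underlying this controller can be sketched as a swing-equation simulation. All numerical values below (M, D, Pm, Pmax, the damping gain) are illustrative assumptions, not taken from the paper, and `extra_damping` only stands in for the damping effect of a supplementary control signal:

```python
import math

# Classical single-machine-infinite-bus (SMIB) swing equation:
#   M * delta'' = Pm - Pmax*sin(delta) - D * delta'
# Illustrative parameters (inertia, damping, mechanical and max electrical power).
M, D, Pm, Pmax = 0.1, 0.05, 0.8, 1.5
dt, steps = 0.001, 20000
EQ = math.asin(Pm / Pmax)           # stable equilibrium rotor angle

def late_swing(extra_damping):
    """Integrate from a disturbed angle (semi-implicit Euler) and return the
    peak deviation from equilibrium over the last quarter of the run.
    extra_damping stands in for a supplementary damping control signal."""
    delta, omega, peak = EQ + 0.5, 0.0, 0.0
    for k in range(steps):
        omega += dt * (Pm - Pmax * math.sin(delta) - (D + extra_damping) * omega) / M
        delta += dt * omega
        if k >= 3 * steps // 4:
            peak = max(peak, abs(delta - EQ))
    return peak

peak_plain = late_swing(0.0)    # lightly damped primary response
peak_damped = late_swing(0.3)   # with supplementary damping added
```

With added damping the post-disturbance swing dies out far faster, which is the qualitative effect the supplementary controller targets.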

  2. Chaotic diagonal recurrent neural network

    Institute of Scientific and Technical Information of China (English)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and we design its structure and learning algorithm. A multilayer feedforward neural network, a diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks.
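
    For readers unfamiliar with the architecture, a diagonal recurrent layer restricts each hidden unit's feedback to itself, so the recurrent weights form a vector rather than a full matrix. A minimal sketch with hand-picked scalar weights (hypothetical names, not the paper's design):

```python
import math

def diag_rnn_step(x, h, w_in, w_rec, w_out):
    """One step of a diagonal recurrent layer: each hidden unit feeds back
    only to itself (w_rec is the diagonal of a full recurrent matrix)."""
    h_new = [math.tanh(w_in[j] * x + w_rec[j] * h[j]) for j in range(len(h))]
    y = sum(w_out[j] * h_new[j] for j in range(len(h_new)))
    return h_new, y

# Two-unit example with hand-picked weights (not trained values):
h, y = diag_rnn_step(1.0, [0.0, 0.0], [0.5, -0.5], [0.1, 0.2], [1.0, 1.0])
```

The diagonal constraint keeps the parameter count linear in the number of hidden units, which is the architecture's main appeal over fully recurrent layers.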

  3. Recurrent neural collective classification.

    Science.gov (United States)

    Monner, Derek D; Reggia, James A

    2013-12-01

    With the recent surge in availability of data sets containing not only individual attributes but also relationships, classification techniques that take advantage of predictive relationship information have gained in popularity. The most popular existing collective classification techniques have a number of limitations: some of them generate arbitrary and potentially lossy summaries of the relationship data, whereas others ignore directionality and strength of relationships. Popular existing techniques make use of only direct neighbor relationships when classifying a given entity, ignoring potentially useful information contained in expanded neighborhoods of radius greater than one. We present a new technique that we call recurrent neural collective classification (RNCC), which avoids arbitrary summarization, uses information about relationship directionality and strength, and through recursive encoding, learns to leverage larger relational neighborhoods around each entity. Experiments with synthetic data sets show that RNCC can make effective use of relationship data for both direct and expanded neighborhoods. Further experiments demonstrate that our technique outperforms previously published results of several collective classification methods on a number of real-world data sets.

  4. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely recurrent neural network with rprop update algorithm and is applied for wave forecasting. Measured ocean waves off...

  5. Discontinuities in recurrent neural networks.

    Science.gov (United States)

    Gavaldá, R; Siegelmann, H T

    1999-04-01

    This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). The ARNN consists of a finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNNs augmented with a few simple discontinuous (e.g., threshold or zero-test) neurons. We argue that even with weights restricted to polynomial-time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous but instead employ other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN, which is known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.

  6. Interpretation of Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Pedersen, Morten With; Larsen, Jan

    1997-01-01

    This paper addresses techniques for interpretation and characterization of trained recurrent nets for time series problems. In particular, we focus on assessment of effective memory and suggest an operational definition of memory. Further we discuss the evaluation of learning curves. Various nume...

  7. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
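
    The matrix-valued dynamic-equation idea these networks build on can be illustrated, in heavily simplified form, for the ordinary inverse of a nonsingular matrix: integrate dV/dt = -gamma * A^T (A V - I) from a zero initial state, and the equilibrium A V = I gives V = A^{-1}. The paper's networks for general outer inverses with prescribed range and null space are more elaborate; the gain, step size and matrix A below are illustrative:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[4.0, 1.0], [2.0, 3.0]]      # det = 10, so A is nonsingular
I2 = [[1.0, 0.0], [0.0, 1.0]]
V = [[0.0, 0.0], [0.0, 0.0]]      # zero initial state, as in the paper
step = 0.01                       # gamma * dt of the Euler discretization
At = transpose(A)

# Euler-integrate dV/dt = -gamma * A^T (A V - I); the flow settles at A^{-1}.
for _ in range(20000):
    AV = matmul(A, V)
    E = [[AV[i][j] - I2[i][j] for j in range(2)] for i in range(2)]
    G = matmul(At, E)
    V = [[V[i][j] - step * G[i][j] for j in range(2)] for i in range(2)]

# V now approximates A^{-1} = [[0.3, -0.1], [-0.2, 0.4]]
```

The spectral condition mentioned in the abstract appears here as the requirement that the step size be small relative to the largest eigenvalue of A^T A, or the flow diverges.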

  8. A Direct Feedback Control Based on Fuzzy Recurrent Neural Network

    Institute of Scientific and Technical Information of China (English)

    Li Ming; Ma Xiaoping

    2002-01-01

    A direct feedback control system based on a fuzzy recurrent neural network is proposed, and a method of training the weights of the fuzzy recurrent neural network is designed by applying a modified contraction mapping genetic algorithm. Computer simulation results indicate that the fuzzy recurrent neural network controller has excellent dynamic and static performance.

  9. Markovian architectural bias of recurrent neural networks.

    Science.gov (United States)

    Tino, Peter; Cernanský, Michal; Benusková, Lubica

    2004-01-01

    In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information processing states even prior to training [1], [2]. By concentrating on activation clusters in RNNs, while not throwing away the continuous state space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models, called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during the training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as the "null" base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure. Index Terms: Complex symbolic sequences, information latching problem, iterative function systems, Markov models, recurrent neural networks (RNNs).

  10. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However, their role in large-scale sequence labelling systems has so far been auxiliary. The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  11. Multi-Dimensional Recurrent Neural Networks

    CERN Document Server

    Graves, Alex; Schmidhuber, Juergen

    2007-01-01

    Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping, and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.

  12. Incremental construction of LSTM recurrent neural network

    OpenAIRE

    Ribeiro, Evandsa Sabrine Lopes-Lima; Alquézar Mancho, René

    2002-01-01

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the net to remember significant events distant in the past input sequence, in order to solve long time lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTM networks have been compared. These methods include cascade and ...
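
    A minimal sketch of the LSTM memory block referred to here (standard input, forget and output gates; scalar toy weights, not the paper's GLSTM growing scheme) shows how the cell state can persist over long lags:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell_step(x, h, c, W):
    """One step of a single LSTM memory cell. W maps each gate to scalar
    weights (w_x, w_h, bias) — a toy parameterization for illustration."""
    i = sigmoid(W['i'][0] * x + W['i'][1] * h + W['i'][2])    # input gate
    f = sigmoid(W['f'][0] * x + W['f'][1] * h + W['f'][2])    # forget gate
    o = sigmoid(W['o'][0] * x + W['o'][1] * h + W['o'][2])    # output gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2])  # candidate
    c_new = f * c + i * g          # memory cell carries long-term state
    h_new = o * math.tanh(c_new)
    return h_new, c_new

# With the forget gate saturated open and the input gate shut, the cell
# state survives a long lag — the property the abstract relies on:
W = {'i': (0.0, 0.0, -10.0), 'f': (0.0, 0.0, 10.0),
     'o': (0.0, 0.0, 10.0), 'g': (1.0, 0.0, 0.0)}
h, c = 0.0, 1.0
for _ in range(100):
    h, c = lstm_cell_step(0.0, h, c, W)
```

After 100 steps the cell state is still close to its initial value of 1.0, whereas a plain tanh recurrence with comparable weights would have washed it out.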

  13. Segmented-memory recurrent neural networks.

    Science.gov (United States)

    Chen, Jinmiao; Chaudhari, Narendra S

    2009-08-01

    Conventional recurrent neural networks (RNNs) have difficulties in learning long-term dependencies. To tackle this problem, we propose an architecture called the segmented-memory recurrent neural network (SMRNN). A symbolic sequence is broken into segments and then presented as input to the SMRNN one symbol per cycle. The SMRNN uses separate internal states to store symbol-level context as well as segment-level context. The symbol-level context is updated for each input symbol; the segment-level context is updated after each segment. The SMRNN is trained using an extended real-time recurrent learning algorithm. We test the performance of the SMRNN on the information latching problem, the "two-sequence problem" and the problem of protein secondary structure (PSS) prediction. Our results indicate that the SMRNN performs better on long-term dependency problems than conventional RNNs. In addition, we theoretically analyze how the segmented memory of the SMRNN helps in learning long-term temporal dependencies, and we study the impact of the segment length.

  14. Analysis of Recurrent Analog Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    1998-06-01

    In this paper, an original rigorous analysis of recurrent analog neural networks built from opamp neurons is presented. The analysis, which starts from an approximate model of the operational amplifier, reveals the causes of possible non-stable states and makes it possible to determine the convergence properties of the network. Results of the analysis are discussed in order to enable the development of robust and fast analog networks. Special attention is paid to the influence of real circuit elements, and of the statistical parameters of the processed signals, on the parameters of the network.

  15. Adaptive Filtering Using Recurrent Neural Networks

    Science.gov (United States)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
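
    For contrast with the neural approach, the standard linear Kalman filter that the method generalizes can be sketched in scalar form (all gains and noise variances below are illustrative defaults):

```python
def kalman_step(x_est, p_est, z, a=1.0, q=0.01, h=1.0, r=0.1):
    """One predict/update cycle of the standard scalar Kalman filter.
    a: state transition, q: process noise variance, h: observation gain,
    r: measurement noise variance (illustrative values)."""
    # Predict: propagate the estimate and its variance through the model.
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # Update: blend the prediction with the measurement z via the gain.
    k = p_pred * h / (h * p_pred * h + r)        # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new

# Filtering a constant true state of 1.0 from repeated observations:
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_step(x, p, 1.0)
```

The linearity and Gaussian-noise assumptions are baked into the predict and update formulas above; the article's RNN-based filter is precisely a way of dropping them.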

  16. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  19. Precipitation Nowcast using Deep Recurrent Neural Network

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2016-12-01

    An accurate precipitation nowcast (0-6 hours) with fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most popular approaches to forecasting precipitation fall into two groups. One type of precipitation forecast relies on numerical modeling of the physical dynamics of the atmosphere, and the other is based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful Deep Recurrent Neural Network, the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. A 0-6 hour precipitation nowcast is then produced using the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, in which the CTBT nowcast serves as the raw input. Two case studies over the continental U.S. demonstrate the improvement of the proposed approach over a classical Feed Forward Neural Network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, its handling of vanishing gradients during model learning, and its ability to work with sparse data. The studies show that the LSTM model performs better than the other methods and is able to learn the temporal evolution of precipitation events over more than 1000 time lags. The uniqueness of the PERSIANN algorithm enables an alternative precipitation nowcast approach as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.

  20. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve (AUC) of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, an AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and an AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernova type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.

  1. Deep Recurrent Neural Networks for Supernovae Classification

    CERN Document Server

    Charnock, Tom

    2016-01-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae. The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC dataset (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.8%, an area under the Receiver Operating Characteristic curve (AUC) of 0.986 and an SPCC figure-of-merit F1 = 0.64. We also apply a pre-trained model to obtain classification probabilities as a function of time, and show it can give early indications of supernovae type. Our method is competitive with existing algorithms and has appl...

  2. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) assigns a probability to a word sequence and provides the solution to word prediction in a variety of information systems. A recurrent neural network (RNN) is well suited to learning the large-span dynamics of a word sequence in a continuous space. However, the training of an RNN-LM is an ill-posed problem because of the large number of parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in the Bayesian classification network is formed as a regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance obtained by applying the rapid BRNN-LM under different conditions.
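
    A zero-mean Gaussian prior on the parameters corresponds to adding an L2 penalty to the cross-entropy error, which is the MAP objective in its simplest form. A generic sketch of that idea (made-up scores and weights; not the paper's exact formulation):

```python
import math

def softmax(scores):
    m = max(scores)                      # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def regularized_xent(scores, target, weights, lam):
    """Cross-entropy error plus the L2 penalty contributed by a zero-mean
    Gaussian prior on the weights (the generic MAP objective):
        E(w) = -log p(target | scores) + (lam / 2) * ||w||^2
    """
    probs = softmax(scores)
    ce = -math.log(probs[target])
    penalty = 0.5 * lam * sum(w * w for w in weights)
    return ce + penalty

# Same prediction with and without the prior term (illustrative numbers):
e0 = regularized_xent([2.0, 0.5, -1.0], 0, [0.3, -0.4, 1.2], 0.0)
e1 = regularized_xent([2.0, 0.5, -1.0], 0, [0.3, -0.4, 1.2], 0.1)
```

The abstract's contribution lies beyond this baseline, in estimating the hyperparameter (here `lam`) by marginal likelihood rather than fixing it by hand.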

  3. Phenotyping of Clinical Time Series with LSTM Recurrent Neural Networks

    OpenAIRE

    Lipton, Zachary C.; Kale, David C.; Wetzell, Randall C.

    2015-01-01

    We present a novel application of LSTM recurrent neural networks to multilabel classification of diagnoses given variable-length time series of clinical measurements. Our method outperforms a strong baseline on a variety of metrics.

  4. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit applied to prediction of discrete time...

  5. Using Recurrent Neural Network for Learning Expressive Ontologies

    OpenAIRE

    Petrucci, Giulio; Ghidini, Chiara; Rospocher, Marco

    2016-01-01

    Recently, Neural Networks have been proven extremely effective in many natural language processing tasks such as sentiment analysis, question answering, or machine translation. Aiming to exploit such advantages in the Ontology Learning process, in this technical report we present a detailed description of a Recurrent Neural Network based system to be used to pursue such goal.

  6. An evolutionary approach to associative memory in recurrent neural networks

    CERN Document Server

    Fujita, Sh; Nishimura, H

    1994-01-01

    In this paper, we investigate associative memory in recurrent neural networks, based on the model of evolving neural networks proposed by Nolfi, Miglino and Parisi. The experimentally developed network has highly asymmetric synaptic weights and dilute connections, quite different from those of the Hopfield model. Some results on the effect of learning efficiency on the evolution are also presented.

  8. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    Science.gov (United States)

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and by comparing it with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Furthermore, the predictive power of the established model is tested on the SSE, TWSE, KOSPI, and Nikkei225 indices, and the corresponding statistical comparisons of these market indices are exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices.
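
    The Elman recurrence at the core of these models keeps a context layer holding the previous hidden activations. A toy forward pass with hand-picked weights (the paper's stochastic time effective function is not modeled here):

```python
import math

def elman_step(x, context, W_in, W_ctx, W_out, b_h, b_o):
    """One step of a basic Elman recurrent network: the context layer is a
    copy of the previous hidden activations. Toy sizes and weights."""
    n_h = len(b_h)
    hidden = [math.tanh(sum(W_in[j][k] * x[k] for k in range(len(x))) +
                        sum(W_ctx[j][k] * context[k] for k in range(n_h)) +
                        b_h[j])
              for j in range(n_h)]
    y = sum(W_out[k] * hidden[k] for k in range(n_h)) + b_o
    return y, hidden          # hidden becomes the next step's context

# Feed a short "price" sequence through the untrained toy network:
W_in = [[0.5], [-0.3]]
W_ctx = [[0.2, 0.1], [0.0, 0.4]]
W_out = [1.0, -1.0]
context = [0.0, 0.0]
outputs = []
for price in [1.0, 1.1, 0.9]:
    y, context = elman_step([price], context, W_in, W_ctx, W_out, [0.0, 0.0], 0.0)
    outputs.append(y)
```

Because the context feeds back each step, the third output depends on all three inputs, which is what lets the trained network pick up temporal structure in an index series.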

  9. Recurrent neural network for vehicle dead-reckoning

    Institute of Scientific and Technical Information of China (English)

    Ma Haibo; Zhang Liguo; Chen Yangzhou

    2008-01-01

    For vehicle integrated navigation systems, estimating the states of the dead reckoning (DR) unit in real time is much more difficult than for the other measuring sensors under indefinite noise and nonlinear characteristics. Compared with the well-known extended Kalman filter (EKF), the recurrent neural network proposed as a solution not only improves the location precision and the adaptive ability to resist disturbances, but also avoids calculating analytic derivatives and Jacobian matrices of the nonlinear system model. To test the performance of the recurrent neural network, both methods are used to estimate the state of the vehicle's DR navigation system. Simulation results show that the recurrent neural network is superior to the EKF and is a more suitable filtering method for vehicle DR navigation.

  10. A multilayer recurrent neural network for solving continuous-time algebraic Riccati equations.

    Science.gov (United States)

    Wang, Jun; Wu, Guang

    1998-07-01

    A multilayer recurrent neural network is proposed for solving continuous-time algebraic matrix Riccati equations in real time. The proposed recurrent neural network consists of four bidirectionally connected layers. Each layer consists of an array of neurons. The proposed recurrent neural network is shown to be capable of solving algebraic Riccati equations and synthesizing linear-quadratic control systems in real time. Analytical results on stability of the recurrent neural network and solvability of algebraic Riccati equations by use of the recurrent neural network are discussed. The operating characteristics of the recurrent neural network are also demonstrated through three illustrative examples.
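
    The continuous-time algebraic Riccati equation in question is A^T P + P A - P B R^{-1} B^T P + Q = 0. A scalar illustration of the dynamic-equation idea (integrate a flow whose equilibrium is the solution; the coefficients are illustrative, and the paper's four-layer matrix network is not reproduced here):

```python
def care_residual(p, a=-1.0, b=1.0, r=1.0, q=3.0):
    """Residual of the scalar continuous-time algebraic Riccati equation
        a*p + p*a - p*b*(1/r)*b*p + q = 0
    with illustrative coefficients."""
    return a * p + p * a - p * b * (1.0 / r) * b * p + q

# Integrate dp/dt = residual(p): the flow settles where the residual
# vanishes, which is how a recurrent network can compute the solution
# in real time.
p, dt = 0.0, 0.01
for _ in range(5000):
    p += dt * care_residual(p)

# Analytically, -2p - p^2 + 3 = 0 gives the stabilizing solution p = 1.
```

In the matrix case the same principle applies, but stability of the flow must be established separately, which is what the abstract's analytical results address.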

  11. The computational power of interactive recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2012-04-01

    In classical computation, rational- and real-weighted recurrent neural networks were shown to be respectively equivalent to and strictly more powerful than the standard Turing machine model. Here, we study the computational power of recurrent neural networks in a more biologically oriented computational framework, capturing the aspects of sequential interactivity and persistence of memory. In this context, we prove that so-called interactive rational- and real-weighted neural networks show the same computational powers as interactive Turing machines and interactive Turing machines with advice, respectively. A mathematical characterization of each of these computational powers is also provided. It follows from these results that interactive real-weighted neural networks can perform uncountably many more translations of information than interactive Turing machines, making them capable of super-Turing capabilities.

  12. Exponential Stability of Complex-Valued Memristive Recurrent Neural Networks.

    Science.gov (United States)

    Wang, Huamin; Duan, Shukai; Huang, Tingwen; Wang, Lidan; Li, Chuandong

    2017-03-01

    In this brief, we establish a novel complex-valued memristive recurrent neural network (CVMRNN) and study its stability. As a generalization of real-valued memristive neural networks, a CVMRNN can be separated into real and imaginary parts. By means of the M-matrix and Lyapunov function methods, the existence, uniqueness, and exponential stability of the equilibrium point for CVMRNNs are investigated, and sufficient conditions are presented. Finally, the effectiveness of the obtained results is illustrated by two numerical examples.

  13. Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks.

    Science.gov (United States)

    Bitzer, Sebastian; Kiebel, Stefan J

    2012-07-01

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
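    The fusion of an RNN generative model with Bayesian decoding can be illustrated with a generic extended Kalman filter, used here as a stand-in for the authors' derived update equations. The network, noise levels, and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
W = rng.standard_normal((n, n))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))   # contractive recurrent weights

q, r = 0.01, 0.09   # process / observation noise variances
T = 300

# Generative RNN: x_{t+1} = tanh(W x_t) + process noise;  y_t = x_t + obs noise
x = rng.standard_normal(n)
xs, ys = [], []
for _ in range(T):
    x = np.tanh(W @ x) + np.sqrt(q) * rng.standard_normal(n)
    xs.append(x)
    ys.append(x + np.sqrt(r) * rng.standard_normal(n))

# Extended Kalman filter: decode the hidden activity from the observations.
xh, P = np.zeros(n), np.eye(n)
est = []
for y in ys:
    J = np.diag(1.0 - np.tanh(W @ xh) ** 2) @ W   # Jacobian of tanh(W x)
    xh, P = np.tanh(W @ xh), J @ P @ J.T + q * np.eye(n)
    K = P @ np.linalg.inv(P + r * np.eye(n))       # Kalman gain
    xh = xh + K @ (y - xh)                         # prediction-error correction
    P = (np.eye(n) - K) @ P
    est.append(xh)

mse_est = np.mean([(e - t) ** 2 for e, t in zip(est, xs)])
mse_obs = np.mean([(y - t) ** 2 for y, t in zip(ys, xs)])
```

The correction term `K @ (y - xh)` plays the role of the prediction-error messages exchanged by the rRNN's neurons; the filtered estimate should track the hidden activity better than the raw noisy observations do.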

  14. Bach in 2014: Music Composition with Recurrent Neural Network

    OpenAIRE

    Liu, I-Ting; Ramakrishnan, Bhiksha

    2014-01-01

    We propose a framework for computer music composition that uses resilient propagation (RProp) and a long short-term memory (LSTM) recurrent neural network. In this paper, we show that the LSTM network properly learns the structure and characteristics of music pieces by demonstrating its ability to recreate music. We also show that predicting existing music using RProp outperforms backpropagation through time (BPTT).

  15. Active Control of Sound based on Diagonal Recurrent Neural Network

    NARCIS (Netherlands)

    Jayawardhana, Bayu; Xie, Lihua; Yuan, Shuqing

    2002-01-01

    Recurrent neural networks are known for their dynamic mapping and are better suited to nonlinear dynamical systems. A nonlinear controller may be needed in cases where the actuators exhibit nonlinear characteristics, or where the structure to be controlled exhibits nonlinear behavior. The fe

  16. Probing the basins of attraction of a recurrent neural network

    NARCIS (Netherlands)

    M. Heerema; W.A. van Leeuwen

    2000-01-01

    Analytical expressions for the weights $w_{ij}(b)$ of the connections of a recurrent neural network are found by taking explicitly into account basins of attraction, the size of which is characterized by a basin parameter $b$. It is shown that a network with $b \

  17. Chaotifying delayed recurrent neural networks via impulsive effects

    Science.gov (United States)

    Şaylı, Mustafa; Yılmaz, Enes

    2016-02-01

    In this paper, chaotification of delayed recurrent neural networks via chaotically changing moments of impulsive actions is considered. Sufficient conditions for the presence of Li-Yorke chaos with its ingredients proximality, frequent separation, and existence of infinitely many periodic solutions are theoretically proved. Finally, effectiveness of our theoretical results is illustrated by an example with numerical simulations.

  18. A recurrent neural network with ever changing synapses

    NARCIS (Netherlands)

    M. Heerema; W.A. van Leeuwen

    2000-01-01

    A recurrent neural network with noisy input is studied analytically, on the basis of a Discrete Time Master Equation. The latter is derived from a biologically realizable learning rule for the weights of the connections. In a numerical study it is found that the fixed points of the dynamics of the n

  20. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  1. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time.
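    The core recipe — fit a model of the vector field, then close the loop to obtain a recurrent network — can be sketched as follows. For brevity the "network" here is a linear least-squares fit rather than a trained feedforward net (an illustrative simplification), applied to a damped oscillator.

```python
import numpy as np

# True dynamical system: x' = A x (a damped oscillator).
A = np.array([[0.0, 1.0], [-1.0, -0.2]])

# Step 1: fit a model of the vector field from sampled (x, x') pairs.
# A linear least-squares fit stands in for the feedforward network here.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
Y = X @ A.T                                   # exact vector-field samples
M = np.linalg.lstsq(X, Y, rcond=None)[0].T    # fitted map: f(x) ~= M x

# Step 2: recast the fitted map as a recurrent network via Euler integration:
#   x_{t+1} = x_t + h * f(x_t)
def simulate(F, x0, h=0.01, steps=1000):
    x, traj = x0.copy(), [x0.copy()]
    for _ in range(steps):
        x = x + h * (F @ x)
        traj.append(x.copy())
    return np.array(traj)

true_traj = simulate(A, np.array([1.0, 0.0]))
net_traj = simulate(M, np.array([1.0, 0.0]))
err = np.max(np.abs(true_traj - net_traj))
```

With noiseless samples the fitted map recovers the true field almost exactly, so the recurrent simulation replicates the original system's trajectory; the paper's contribution is doing this for nonlinear fields with a trained network and continuous-time guarantees.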

  2. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2016-07-14

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It is therefore natural to ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order stability and fractional-order sensitivity characteristics.

  3. Efficient Training of Recurrent Neural Network with Time Delays.

    Science.gov (United States)

    Marom, Emanuel; Saad, David; Cohen, Barak

    1997-01-01

    Training recurrent neural networks to perform certain tasks is known to be difficult. Adding synaptic delays to the network properties makes the training task more difficult still. However, the disadvantage of a tougher training procedure is offset by the improved network performance. During our research on training neural networks with time delays we encountered a robust method for accomplishing the training task. The method is based on the adaptive simulated annealing (ASA) algorithm, which was found to be superior to other training algorithms. It requires no tuning and is fast enough to enable training on low-end platforms such as personal computers. The implementation of the algorithm is presented over a set of typical benchmark tests of training recurrent neural networks with time delays. Copyright 1996 Elsevier Science Ltd.
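    The flavor of annealing-based training can be shown on a toy delayed-echo task. This sketch uses plain simulated annealing in place of ASA (which adapts its own temperature and step schedules), and a linear tapped-delay "network" in place of a full recurrent one; the task, cooling schedule, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy task for a network with synaptic delays: reproduce y_t = x_{t-2}
# from a linear combination of the delayed inputs x_t, x_{t-1}, x_{t-2}, x_{t-3}.
x = rng.standard_normal(500)
y = np.roll(x, 2)
y[:2] = 0.0

def loss(w):
    pred = sum(w[d] * np.roll(x, d) for d in range(4))
    return np.mean((pred[4:] - y[4:]) ** 2)

# Simulated annealing over the delay weights: accept any improvement,
# and occasionally a worsening move, with geometrically cooled temperature.
w, temp = np.zeros(4), 1.0
best_w, best_loss = w.copy(), loss(w)
for _ in range(5000):
    cand = w + 0.1 * rng.standard_normal(4)
    d = loss(cand) - loss(w)
    if d < 0 or rng.random() < np.exp(-d / temp):
        w = cand
    if loss(w) < best_loss:
        best_w, best_loss = w.copy(), loss(w)
    temp *= 0.999

# best_w should approach [0, 0, 1, 0], i.e. the tap at delay 2.
```

No gradient information is used at any point, which is what makes annealing attractive when delays make backpropagated gradients awkward to derive.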

  4. Application of dynamic recurrent neural networks in nonlinear system identification

    Science.gov (United States)

    Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang

    2006-11-01

    An adaptive identification method using a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. Based on the idea that using the inner-state feedback of a dynamic network to describe the nonlinear kinetic characteristics of a system reflects its dynamic characteristics more directly, the method derives the recursive prediction error (RPE) learning algorithm of the SRNN, and improves the algorithm by studying the topological structure of the recursion layer without weight values. The simulation results indicate that this kind of neural network can be used in real-time control, due to its fewer weight values, simpler learning algorithm, higher identification speed, and higher model precision. It solves the problems of an intricate training algorithm and a slow convergence rate caused by the complicated topological structure of the usual dynamic recurrent neural network.

  5. Analysis of surface ozone using a recurrent neural network.

    Science.gov (United States)

    Biancofiore, Fabio; Verdecchia, Marco; Di Carlo, Piero; Tomassetti, Barbara; Aruffo, Eleonora; Busilacchio, Marcella; Bianco, Sebastiano; Di Tommaso, Sinibaldo; Colangeli, Carlo

    2015-05-01

    Hourly concentrations of ozone (O₃) and nitrogen dioxide (NO₂) were measured for 16 years, from 1998 to 2013, in a seaside town in central Italy, and the seasonal trends of O₃ and NO₂ recorded in this period have been studied. Furthermore, we used the data collected during one year (2005) to define the characteristics of a multiple linear regression model and a neural network model. Both models are used to model the hourly O₃ concentration under two scenarios: 1) using only meteorological parameters as inputs and 2) adding photochemical parameters to those of the first scenario. In order to evaluate the performance of the models, four statistical criteria are used: correlation coefficient, fractional bias, normalized mean squared error, and factor of two. All the criteria show that the neural network gives better results than the regression model in all the scenarios. Predictions of O₃ have been carried out by many authors using a feedforward neural architecture. In this paper we show that a recurrent architecture significantly improves the performance of neural predictors. Using only the meteorological parameters as input, the recurrent architecture performs better than the multiple linear regression model that uses both meteorological and photochemical data as input, making the recurrent neural network model a more useful tool in areas where only weather measurements are available. Finally, we used the neural network model to forecast the O₃ hourly concentrations 1, 3, 6, 12, 24 and 48 h ahead. The performance of the model in predicting O₃ levels is discussed. Emphasis is given to the possibility of using the neural network model operationally in areas where only meteorological data are available, in order to predict O₃ at sites where it has not yet been measured. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Iterative free-energy optimization for recurrent neural networks (INFERNO).

    Science.gov (United States)

    Pitti, Alexandre; Gaussier, Philippe; Quoy, Mathias

    2017-01-01

    The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generation of long neuronal chains. Using a stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network toward a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and sequence retrieval demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences of more than two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory that initiates flexible goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle.
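    The search principle — evaluate an input vector by the activity it evokes, and keep it only if the evaluation improves — can be sketched with greedy stochastic hill-climbing, used here as a stand-in for the paper's reinforcement signal. The network size, target, and perturbation scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
W = 0.9 / np.sqrt(n) * rng.standard_normal((n, n))   # fixed recurrent weights

def rollout(u, steps=30):
    """Activity trajectory evoked by a constant input vector u."""
    r = np.zeros(n)
    traj = []
    for _ in range(steps):
        r = np.tanh(W @ r + u)
        traj.append(r)
    return np.array(traj)

u_true = rng.standard_normal(n)      # hidden input generating the target
target = rollout(u_true)             # desired neuronal sequence

def score(u):
    return np.mean((rollout(u) - target) ** 2)

# Stochastic search: perturb the input vector, keep it only if the evoked
# activity moves closer to the target (the "reinforcement" evaluation).
u = np.zeros(n)
init = score(u)
for _ in range(2000):
    cand = u + 0.1 * rng.standard_normal(n)
    if score(cand) < score(u):
        u = cand
final = score(u)
```

In the paper this loop is coupled to an associative memory that stores good input vectors; the sketch isolates only the optimize-by-evaluation step.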

  8. Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks

    Science.gov (United States)

    Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi

    2016-07-01

    Existing deep convolutional neural networks (CNNs) have shown great success on image classification. CNNs mainly consist of convolutional and pooling layers, both of which operate on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information among sequential data, and they require only a limited number of network parameters. However, general RNNs can hardly be applied directly to non-sequential data. Thus, we propose hierarchical RNNs (HRNNs), in which each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections target modeling scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of more computational cost. In this manuscript, we integrate CNNs with HRNNs and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT indoor, and competitive results on ILSVRC 2012.

  9. Microscopic instability in recurrent neural networks

    Science.gov (United States)

    Yamanaka, Yuzuru; Amari, Shun-ichi; Shinomoto, Shigeru

    2015-03-01

    In a manner similar to the molecular chaos that underlies the stable thermodynamics of gases, a neuronal system may exhibit microscopic instability in individual neuronal dynamics while a macroscopic order of the entire population possibly remains stable. In this study, we analyze the microscopic stability of a network of neurons whose macroscopic activity obeys stable dynamics, expressing either monostable, bistable, or periodic state. We reveal that the network exhibits a variety of dynamical states for microscopic instability residing in a given stable macroscopic dynamics. The presence of a variety of dynamical states in such a simple random network implies more abundant microscopic fluctuations in real neural networks which consist of more complex and hierarchically structured interactions.

  10. A recurrent neural network for solving bilevel linear programming problem.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.

  11. Synchronization of an uncertain chaotic system via recurrent neural networks

    Institute of Scientific and Technical Information of China (English)

    谭文; 王耀南

    2005-01-01

    Incorporating distributed recurrent networks with high-order connections between neurons, the identification and synchronization problem of an unknown chaotic system in the presence of unmodelled dynamics is investigated. Based on Lyapunov stability theory, the weight-learning algorithm for the recurrent high-order neural network model is presented. Analytical results concerning the stability properties of the scheme are also obtained. An adaptive control law for eliminating the synchronization error of the uncertain chaotic plant is then developed via Lyapunov methodology. The proposed scheme is applied to model and synchronize an unknown Rossler system.

  12. Nonlinear system identification based on internal recurrent neural networks.

    Science.gov (United States)

    Puscasu, Gheorghe; Codres, Bogdan; Stancu, Alexandru; Murariu, Gabriel

    2009-04-01

    A novel approach for nonlinear complex system identification based on internal recurrent neural networks (IRNN) is proposed in this paper. The computational complexity of neural identification can be greatly reduced if the whole system is decomposed into several subsystems. This approach employs internal state estimation when no measurements coming from the sensors are available for the system states. A modified backpropagation algorithm is introduced in order to train the IRNN for nonlinear system identification. The performance of the proposed design approach is proven on a car simulator case study.

  13. Translation rescoring through recurrent neural network language models

    OpenAIRE

    PERIS ABRIL, ÁLVARO

    2014-01-01

    This work is framed within the field of Statistical Machine Translation, more specifically the language modeling challenge. In this area, the n-gram approach has classically predominated but, in recent years, different approaches have arisen to tackle this problem. One of these approaches is the use of artificial recurrent neural networks, which are supposed to outperform n-gram language models. The aim of this work is to test empirically these new language...

  14. Natural Language Video Description using Deep Recurrent Neural Networks

    Science.gov (United States)

    2015-11-23

    Subhashini Venugopalan, University of Texas at Austin.

  15. Web server's reliability improvements using recurrent neural networks

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan

    2012-01-01

    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity for the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (the...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...

  16. Training Input-Output Recurrent Neural Networks through Spectral Methods

    OpenAIRE

    Sedghi, Hanie; Anandkumar, Anima

    2016-01-01

    We consider the problem of training input-output recurrent neural networks (RNN) for sequence labeling tasks. We propose a novel spectral approach for learning the network parameters. It is based on decomposition of the cross-moment tensor between the output and a non-linear transformation of the input, based on score functions. We guarantee consistent learning with polynomial sample and computational complexity under transparent conditions such as non-degeneracy of model parameters, polynomi...

  17. On the Efficiency of Recurrent Neural Network Optimization Algorithms

    OpenAIRE

    Krause, Ben; Lu, Liang; Murray, Iain; Renals, Steve

    2015-01-01

    This study compares the sequential and parallel efficiency of training Recurrent Neural Networks (RNNs) with Hessian-free optimization versus a gradient descent variant. Experiments are performed using the long short-term memory (LSTM) architecture and the newly proposed multiplicative LSTM (mLSTM) architecture. Results demonstrate a number of insights into these architectures and optimization algorithms, including that Hessian-free optimization has the potential for large efficiency gains in a h...

  18. Homeostatic scaling of excitability in recurrent neural networks.

    Directory of Open Access Journals (Sweden)

    Michiel W H Remme

    Full Text Available Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity.
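    The mechanism can be illustrated with a small rate network in which each unit slowly adjusts an intrinsic threshold so that its activity settles at a target rate. The network size, target rate, and time constants below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
W = 0.5 / np.sqrt(n) * rng.standard_normal((n, n))   # recurrent weights
I_ext = rng.standard_normal(n)                        # heterogeneous drive

target = 0.3          # desired firing rate for every unit
eps = 0.02            # slow homeostatic time scale
theta = np.zeros(n)   # intrinsic excitability (threshold) per neuron
r = np.zeros(n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    r = sigmoid(W @ r + I_ext - theta)   # fast rate dynamics
    theta += eps * (r - target)          # slow homeostatic scaling of excitability

# After adaptation, every unit should sit near the target rate despite
# heterogeneous drive and recurrent coupling.
deviation = np.max(np.abs(r - target))
```

The separation of time scales (eps small relative to the rate dynamics) is exactly the kind of condition the paper analyzes: when the adaptation is too fast relative to the network dynamics, such integral-like control can destabilize the loop.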

  19. Predicting Chaotic Time Series Using Recurrent Neural Network

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jia-Shu; XIAO Xian-Ci

    2000-01-01

    A new method, the recurrent neural network (RNN), is introduced to predict chaotic time series. The effectiveness of using an RNN for making one-step and multi-step predictions is tested using remarkably few data points from computer-generated chaotic time series. Numerical results show that the RNN proposed here is a very powerful tool for the prediction of chaotic time series.
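    One-step prediction of a chaotic series with a recurrent network can be sketched with an echo-state-style reservoir, a simplification of the paper's RNN in which only the linear readout is trained. The chaotic map, reservoir size, and ridge penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Chaotic series: the logistic map x_{t+1} = 4 x_t (1 - x_t).
T = 1200
x = np.empty(T)
x[0] = 0.2
for t in range(T - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Echo-state-style recurrent reservoir; only the readout is trained.
n = 100
W = rng.standard_normal((n, n))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))   # contractive recurrent weights
w_in = rng.standard_normal(n)

h = np.zeros(n)
H = np.empty((T - 1, n))
for t in range(T - 1):
    h = np.tanh(W @ h + w_in * x[t])
    H[t] = h

# Ridge-regression readout: predict x_{t+1} from the reservoir state.
washout, split = 100, 900
Htr, ytr = H[washout:split], x[washout + 1:split + 1]
w_out = np.linalg.solve(Htr.T @ Htr + 1e-6 * np.eye(n), Htr.T @ ytr)

pred = H[split:] @ w_out                    # held-out one-step predictions
mse = np.mean((pred - x[split + 1:]) ** 2)
var = np.var(x[split + 1:])
```

The held-out one-step error should be a small fraction of the series variance; multi-step prediction would feed each prediction back as the next input, with error growing at the map's Lyapunov rate.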

  1. A Recurrent Neural Network for Warpage Prediction in Injection Molding

    Directory of Open Access Journals (Sweden)

    A. Alvarado-Iniesta

    2012-11-01

    Full Text Available Injection molding is classified as one of the most flexible and economical manufacturing processes with a high volume of plastic molded parts. Variations in the process are related to the vast number of factors acting during a regular production run, which directly impact the quality of the final products. A common quality problem in finished products is the presence of warpage. Thus, this study aimed to design a system based on recurrent neural networks to predict warpage defects in products manufactured through injection molding. Five process parameters are employed, chosen for being considered critical and having a great impact on the warpage of plastic components. This study used the finite element analysis software Moldflow to simulate the injection molding process and collect data in order to train and test the recurrent neural network. Recurrent neural networks were used to capture the dynamics of the process and, thanks to their memorization ability, predict warpage values accurately. Results show the designed network works well in prediction tasks, outperforming the predictions generated by feedforward neural networks.

  2. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network during two configurations corresponding to the early and late stages of the spacecraft's on-orbit assembly sequence. The proposed on-line mass-properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  3. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.

  4. Ideomotor feedback control in a recurrent neural network.

    Science.gov (United States)

    Galtier, Mathieu

    2015-06-01

The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should reliably predict its incoming stimuli; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show a promising performance of the approach, which can be turned into a local and more biologically plausible algorithm.

  5. A Recurrent Neural Network for Nonlinear Fractional Programming

    Directory of Open Access Journals (Sweden)

    Quan-Ju Zhang

    2012-01-01

Full Text Available This paper presents a novel continuous-time recurrent neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to further demonstrate the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.
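The abstract does not spell out the network equations, but a common projection-type recurrent network for interval-constrained minimization has the form dx/dt = P_Ω(x − α∇f(x)) − x, whose equilibria are the KKT points. The following sketch (the fractional objective, step sizes, and gains are our illustrative assumptions) shows its trajectory staying in the interval and settling at the optimum:

```python
import numpy as np

def project(x, lo, hi):
    """Projection onto the interval (box) constraint set."""
    return np.minimum(np.maximum(x, lo), hi)

def projection_network(grad, x0, lo, hi, alpha=0.05, dt=0.1, steps=5000):
    """Euler-integrate dx/dt = P_Omega(x - alpha*grad(x)) - x."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        x += dt * (project(x - alpha * grad(x), lo, hi) - x)
    return x

# Illustrative fractional objective f(x) = (x^2 + 1)/(x + 2) on [0, 4];
# its minimizer is x* = sqrt(5) - 2 (where the gradient below vanishes).
grad = lambda x: (x**2 + 4*x - 1) / (x + 2)**2
x_star = projection_network(grad, x0=np.array([3.0]), lo=0.0, hi=4.0)
```

Note how the state never leaves [0, 4]: the projection inside the dynamics enforces feasibility along the whole trajectory, the "primal" property the abstract mentions.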

  6. Convolutional neural networks for prostate cancer recurrence prediction

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Arora, Ashish; Kumar, Abhay; Gupta, Sanchit; Sethi, Amit; Gann, Peter H.

    2017-03-01

Accurate prediction of the treatment outcome is important for cancer treatment planning. We present an approach to predict prostate cancer (PCa) recurrence after radical prostatectomy using tissue images. We used a cohort whose case vs. control (recurrent vs. non-recurrent) status had been determined using post-treatment follow up. Further, to aid the development of novel biomarkers of PCa recurrence, cases and controls were paired based on matching of other predictive clinical variables such as Gleason grade, stage, age, and race. For this cohort, a tissue resection microarray with up to four cores per patient was available. The proposed approach is based on deep learning, and its novelty lies in the use of two separate convolutional neural networks (CNNs) - one to detect individual nuclei even in crowded areas, and the other to classify them. To detect nuclear centers in an image, the first CNN predicts the distance transform of the underlying (but unknown) multi-nuclear map from the input H&E image. The second CNN classifies the patches centered at nuclear centers into those belonging to cases or controls. Voting across patches extracted from image(s) of a patient yields the probability of recurrence for the patient. The proposed approach gave 0.81 AUC for a sample of 30 recurrent cases and 30 non-recurrent controls, after being trained on an independent set of 80 case-control pairs. If validated further, such an approach might help in choosing between a combination of treatment options such as active surveillance, radical prostatectomy, radiation, and hormone therapy. It can also generalize to the prediction of treatment outcomes in other cancers.
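The patch-voting step can be sketched as below. The abstract does not specify the exact aggregation rule, so the fraction-of-votes reading and the per-patch probabilities here are illustrative assumptions:

```python
import numpy as np

def patient_recurrence_probability(patch_probs, threshold=0.5):
    """Aggregate per-patch recurrence probabilities into a patient-level
    score by voting: the fraction of patches classified as recurrent."""
    votes = (np.asarray(patch_probs) >= threshold).astype(float)
    return float(votes.mean())

# Hypothetical per-patch CNN outputs for one patient (not the paper's data).
probs = [0.9, 0.7, 0.2, 0.8, 0.6]
score = patient_recurrence_probability(probs)  # 4 of 5 patches vote recurrent
```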

  7. Simultaneous perturbation learning rule for recurrent neural networks and its FPGA implementation.

    Science.gov (United States)

    Maeda, Yutaka; Wakamura, Masatoshi

    2005-11-01

    Recurrent neural networks have interesting properties and can handle dynamic information processing unlike ordinary feedforward neural networks. However, they are generally difficult to use because there is no convenient learning scheme. In this paper, a recursive learning scheme for recurrent neural networks using the simultaneous perturbation method is described. The detailed procedure of the scheme for recurrent neural networks is explained. Unlike ordinary correlation learning, this method is applicable to analog learning and the learning of oscillatory solutions of recurrent neural networks. Moreover, as a typical example of recurrent neural networks, we consider the hardware implementation of Hopfield neural networks using a field-programmable gate array (FPGA). The details of the implementation are described. Two examples of a Hopfield neural network system for analog and oscillatory targets are shown. These results show that the learning scheme proposed here is feasible.
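The FPGA implementation is beyond a short sketch, but the core of the scheme, the simultaneous perturbation gradient estimate, needs only two evaluations of the error regardless of dimension. The toy quadratic error, step sizes, and perturbation scale below are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sp_gradient(f, w, c=1e-2):
    """Estimate the gradient of f at w from just two evaluations, using a
    simultaneous Bernoulli +-1 perturbation of all coordinates at once."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    return (f(w + c * delta) - f(w - c * delta)) / (2 * c) * (1.0 / delta)

# Toy "network error": squared distance of weights w from an optimum w*.
w_star = np.array([1.0, -2.0, 0.5])
f = lambda w: float(np.sum((w - w_star) ** 2))

w = np.zeros(3)
for _ in range(2000):
    w -= 0.05 * sp_gradient(f, w)
```

Because only the scalar error value is needed (no backpropagated signals), this kind of rule maps naturally onto analog hardware and FPGAs, which is the abstract's motivation.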

  8. Fine-tuning and the stability of recurrent neural networks.

    Directory of Open Access Journals (Sweden)

    David MacNeil

    Full Text Available A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely-tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.

  9. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    Science.gov (United States)

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
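Miconi's exact rule is not reproduced here; the following is a generic reward-modulated perturbation sketch in the same spirit: learning is driven only by a delayed scalar reward delivered at the end of each trial and compared against a running baseline. Network sizes, rates, and the trivial hold-a-target task are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny recurrent network trained to hold a target output, guided only by a
# scalar reward at the end of each trial (no per-step error signal).
n, T, target = 20, 30, 0.6
w_in = rng.normal(0, 1, n)
w_rec = rng.normal(0, 1.0 / np.sqrt(n), (n, n))
w_out = np.zeros(n)

def trial(w_out):
    x = np.zeros(n)
    out = 0.0
    for _ in range(T):
        x = np.tanh(w_rec @ x + w_in * 1.0)
        out = w_out @ x
    return -(out - target) ** 2           # delayed, phasic reward

r_baseline = trial(w_out)
for _ in range(500):
    noise = rng.normal(0, 0.1, n)         # perturb the readout weights
    r = trial(w_out + noise)
    # Reward-modulated update: move along the perturbation if it helped.
    w_out += 0.5 * (r - r_baseline) * noise
    r_baseline += 0.1 * (r - r_baseline)  # running reward baseline
final_error = abs(trial(w_out))
```

The key property shared with the paper's rule is that no continuous, real-time error signal is ever used: credit is assigned from a single end-of-trial scalar.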

  10. Recurrent Neural Network for Single Machine Power System Stabilizer

    Directory of Open Access Journals (Sweden)

    Widi Aribowo

    2010-04-01

Full Text Available In this paper, a recurrent neural network (RNN) is used to design a power system stabilizer (PSS) due to its advantage of depending not only on the present input but also on past conditions. An RNN-PSS is able to capture the dynamic response of a system without any delays caused by external feedback, primarily through the internal feedback loop in the recurrent neuron. In this paper, the RNN-PSS consists of an RNN-identifier and an RNN-controller. The RNN-identifier functions as a tracker of the dynamic characteristics of the plant, while the RNN-controller is used to damp the system's low-frequency oscillations. Simulation results using MATLAB demonstrate that the RNN-PSS can successfully damp out oscillations and improve the performance of the system.

  11. Estimating Ads’ Click through Rate with Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Chen Qiao-Hong

    2016-01-01

Full Text Available With the development of the Internet, online advertising spreads across every corner of the world, and the ads' click-through rate (CTR) estimation is an important method to improve online advertising revenue. Compared with linear models, nonlinear models can learn much more complex relationships between a large number of nonlinear characteristics, so as to improve the accuracy of the estimation of the ads' CTR. The recurrent neural network (RNN) based on Long Short-Term Memory (LSTM) is an improved model of the feedback neural network with a ring structure, and it overcomes the vanishing-gradient problem of the general RNN. Experiments show that the RNN based on LSTM outperforms the linear models, and it can effectively improve the estimation of the ads' click-through rate.
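For context on the LSTM building block used here, a single cell step can be sketched in the standard formulation (the gate ordering, sizes, and random weights below are our choices, not the paper's model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell; gates are stacked as
    [input, forget, output, candidate] in W (input), U (recurrent), b."""
    n = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[0:n])          # input gate
    f = sigmoid(z[n:2*n])        # forget gate
    o = sigmoid(z[2*n:3*n])      # output gate
    g = np.tanh(z[3*n:4*n])      # candidate cell state
    c_new = f * c + i * g        # gated additive memory update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d, n = 3, 4                      # input and hidden sizes (illustrative)
W = rng.normal(0, 0.1, (4*n, d))
U = rng.normal(0, 0.1, (4*n, n))
b = np.zeros(4*n)
h, c = np.zeros(n), np.zeros(n)
for t in range(5):               # run a short input sequence
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
```

The additive update of `c_new` (rather than repeated squashing) is what mitigates the vanishing-gradient problem mentioned above.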

  12. Delay-slope-dependent stability results of recurrent neural networks.

    Science.gov (United States)

    Li, Tao; Zheng, Wei Xing; Lin, Chong

    2011-12-01

    By using the fact that the neuron activation functions are sector bounded and nondecreasing, this brief presents a new method, named the delay-slope-dependent method, for stability analysis of a class of recurrent neural networks with time-varying delays. This method includes more information on the slope of neuron activation functions and fewer matrix variables in the constructed Lyapunov-Krasovskii functional. Then some improved delay-dependent stability criteria with less computational burden and conservatism are obtained. Numerical examples are given to illustrate the effectiveness and the benefits of the proposed method.

  13. Blind Separation by Redundancy Reduction in a Recurrent Neural Network

    Institute of Scientific and Technical Information of China (English)

    LIU Ju; NIE Kaibao; HE Zhenya

    2001-01-01

In this paper, a novel information theory criterion is proposed for blind source separation based on a fully recurrent neural network, and a learning algorithm is then developed. A stochastic natural gradient descent algorithm is used in this algorithm. The proposed algorithm can ensure the maximization of transferred information when a Hebb term is introduced to express the derivative of information missing. At the same time, the mutual information of the outputs is minimized so as to make the outputs mutually statistically independent. The computer simulation shows the validity and the good performance of the proposed algorithm.
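The paper's exact criterion is not given in the abstract, but the flavor of stochastic natural-gradient separation can be sketched with the standard rule W += η (I − φ(y)yᵀ) W; the choice φ = tanh (suited to super-Gaussian sources), the mixing matrix, and all hyperparameters below are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent super-Gaussian (Laplace) sources, linearly mixed.
N = 20000
S = rng.laplace(size=(2, N))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whiten the mixtures.
d, E = np.linalg.eigh(np.cov(X))
V = E @ np.diag(d ** -0.5) @ E.T
Z = V @ X

# Natural-gradient separation: W += eta * (I - phi(y) y^T) W, phi = tanh.
W = np.eye(2)
for _ in range(1000):
    Y = W @ Z
    W += 0.05 * (np.eye(2) - np.tanh(Y) @ Y.T / N) @ W

P = W @ V @ A   # global mixing-then-unmixing transfer matrix
```

At a separating solution P approaches a scaled permutation: each output carries exactly one source, i.e., the outputs become mutually statistically independent, as in the abstract.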

  14. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle stage, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  15. Lambda and the edge of chaos in recurrent neural networks.

    Science.gov (United States)

    Seifter, Jared; Reggia, James A

    2015-01-01

    The idea that there is an edge of chaos, a region in the space of dynamical systems having special meaning for complex living entities, has a long history in artificial life. The significance of this region was first emphasized in cellular automata models when a single simple measure, λCA, identified it as a transitional region between order and chaos. Here we introduce a parameter λNN that is inspired by λCA but is defined for recurrent neural networks. We show through a series of systematic computational experiments that λNN generally orders the dynamical behaviors of randomly connected/weighted recurrent neural networks in the same way that λCA does for cellular automata. By extending this ordering to larger values of λNN than has typically been done with λCA and cellular automata, we find that a second edge-of-chaos region exists on the opposite side of the chaotic region. These basic results are found to hold under different assumptions about network connectivity, but vary substantially in their details. The results show that the basic concept underlying the lambda parameter can usefully be extended to other types of complex dynamical systems than just cellular automata.
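The abstract does not define λNN, so it is not reproduced here; however, the order-to-chaos transition it organizes can be exhibited with a quick proxy experiment: measure the divergence of nearby trajectories of a random tanh RNN as the weight gain g grows. The setup and thresholds below are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def divergence(g, n=200, steps=100, eps=1e-6):
    """Distance after `steps` between two trajectories of a random tanh RNN
    x_{t+1} = tanh(W x_t) started eps apart; tiny distance indicates ordered
    dynamics, O(1) distance indicates sensitive (chaotic) dynamics."""
    W = rng.normal(0, g / np.sqrt(n), (n, n))
    x = rng.normal(0, 1, n)
    y = x + eps * rng.normal(0, 1, n) / np.sqrt(n)
    for _ in range(steps):
        x, y = np.tanh(W @ x), np.tanh(W @ y)
    return float(np.linalg.norm(x - y))

d_ordered, d_chaotic = divergence(0.5), divergence(3.0)
```

Sweeping g between these extremes traces out the transition region; λNN plays an analogous organizing role over randomly connected/weighted networks in the paper.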

  16. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam

    2013-10-01

    Artificial neural networks have the abilities to learn by example and are capable of solving problems that are hard to solve using ordinary rule-based programming. They have many design parameters that affect their performance such as the number and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design of experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate the experiment configuration multiple times to overcome the randomness in the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that are on the Pareto optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.

  17. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments.

  18. A novel recurrent neural network with finite-time convergence for linear programming.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
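The paper's finite-time construction is not reproduced here; a plain (asymptotically convergent) projection-type recurrent network for a box-constrained LP illustrates the general recipe of solving linear programs by integrating neural dynamics. The dynamics and example are our assumptions:

```python
import numpy as np

def lp_network(c, lo, hi, x0, gamma=1.0, dt=0.05, steps=2000):
    """Euler-integrate dx/dt = P_[lo,hi](x - gamma*c) - x for the box LP
    min c^T x  s.t.  lo <= x <= hi; the equilibrium picks lo_i where
    c_i > 0 and hi_i where c_i < 0 (a vertex of the box)."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        x += dt * (np.clip(x - gamma * c, lo, hi) - x)
    return x

c = np.array([1.0, -2.0])
x_opt = lp_network(c, lo=0.0, hi=1.0, x0=np.array([0.5, 0.5]))
```

The contribution of the letter is precisely to replace this kind of asymptotic convergence with provable convergence in finite time, via a Lyapunov argument.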

  19. Recurrent Neural Network Approach Based on the Integral Representation of the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Živković, Ivan S; Wei, Yimin

    2015-10-01

    In this letter, we present the dynamical equation and corresponding artificial recurrent neural network for computing the Drazin inverse for arbitrary square real matrix, without any restriction on its eigenvalues. Conditions that ensure the stability of the defined recurrent neural network as well as its convergence toward the Drazin inverse are considered. Several illustrative examples present the results of computer simulations.

  20. Application of simple dynamic recurrent neural networks in solid granule flowrate modeling

    Science.gov (United States)

    Du, Yun; Sun, Huiqin; Tian, Qiang; Ren, Haiping; Zhang, Suying

    2008-10-01

This paper presents the modeling of solid granule flowrate with a simple dynamic recurrent neural network (SRNN). Because a fully dynamic recurrent neural network has an intricate structure and a slow training algorithm, a simple recurrent neural network without weight values on the recursion layer is studied. A recurrent prediction error (RPE) learning algorithm for the SRNN, which adjusts the weight and threshold values, is derived. The modeling results for the solid granule flowrate indicate that the model has a fast convergence rate and high precision, and can be used in real time.

  1. Recurrent neural networks-based multivariable system PID predictive control

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yan; WANG Fanzhen; SONG Ying; CHEN Zengqiang; YUAN Zhuzhi

    2007-01-01

A nonlinear proportional-integral-derivative (PID) controller is proposed on the basis of recurrent neural networks, due to the difficulty of tuning the parameters of a conventional PID controller. In the control process of a nonlinear multivariable system, a decoupling controller was constructed, which took advantage of multiple nonlinear PID controllers in parallel. With the idea of predictive control, two multivariable predictive control strategies were established. One strategy involved the use of the general minimum variance control function on the basis of a recursive multi-step predictive method. The other involved the adoption of multi-step predictive cost energy to train the weights of the decoupling controller. Simulation studies have shown the efficiency of these strategies.

  2. Dual extended Kalman filtering in recurrent neural networks(1).

    Science.gov (United States)

    Leung, Chi-Sing; Chan, Lai-Wan

    2003-03-01

In the classical deterministic Elman model, the estimation of parameters must be very accurate. Otherwise, the system performance is very poor. To improve the system performance, we can use a Kalman filtering algorithm to guide the operation of a trained recurrent neural network (RNN). In this case, during training, we need to estimate the state of the hidden layer, as well as the weights of the RNN. This paper discusses how to use dual extended Kalman filtering (DEKF) for this dual estimation and how to use the proposed DEKF for removing unimportant weights from a trained RNN. In our approach, one Kalman algorithm is used for estimating the state of the hidden layer, and one recursive least squares (RLS) algorithm is used for estimating the weights. After training, we use the error covariance matrix of the RLS algorithm to remove unimportant weights. Simulations showed that our approach is an effective joint learning-pruning method for RNNs under online operation.

  3. A recurrent neural network for adaptive beamforming and array correction.

    Science.gov (United States)

    Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen

    2016-08-01

In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is described as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values in the feasible region, which is derived from the arrays' state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under array mismatch situations. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to search for exact solutions under the condition of large-scale constraints.

  4. On-line learning algorithms for locally recurrent neural networks.

    Science.gov (United States)

    Campolucci, P; Uncini, A; Piazza, F; Rao, B D

    1999-01-01

This paper focuses on on-line learning procedures for locally recurrent neural networks, with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLNs). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose on-line version, causal recursive backpropagation (CRBP), presents some advantages with respect to the other on-line training methods. The new CRBP algorithm includes as particular cases backpropagation (BP), temporal backpropagation (TBP), backpropagation for sequences (BPS), and the Back-Tsoi algorithm, among others, thereby providing a unifying view on gradient calculation techniques for recurrent networks with local feedback. The only learning method that has been proposed for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and a higher speed of convergence with respect to the Back-Tsoi algorithm, which is supported by the theoretical development and confirmed by simulations. The computational complexity of CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with the new CRBP method. The simulations show that CRBP exhibits similar performance, and the detailed analysis of complexity reveals that CRBP is much simpler and easier to implement; e.g., CRBP is local in space and in time while RTRL is not local in space.

  5. Detecting behavioral microsleeps using EEG and LSTM recurrent neural networks.

    Science.gov (United States)

    Davidson, P R; Jones, R D; Peiris, M T

    2005-01-01

Lapses in visuomotor performance are often associated with behavioral microsleep events. Experiencing a lapse of this type while performing an important task can have catastrophic consequences. A warning system capable of reliably detecting patterns in EEG occurring before or during a lapse has the potential to save many lives. We are developing a behavioral microsleep detection system which employs Long Short-Term Memory (LSTM) recurrent neural networks. To train and validate the system, EEG, facial video, and tracking data were collected from 15 subjects performing a visuomotor tracking task for two 1-hour sessions. This provided behavioral information on lapse events with good temporal resolution. We developed an automated behavior rating system and trained it to estimate the mean opinion of 3 human raters on the likelihood of a lapse. We then trained an LSTM neural network to estimate the output of the lapse rating system given only EEG spectral data. The detection system was designed to operate in real time without calibration for individual subjects. Preliminary results show the system is not yet reliable enough for general use, but results from some tracking sessions encourage further investigation of the reported approach.

  6. Recurrent Neural Network Applications for Astronomical Time Series

    Science.gov (United States)

    Protopapas, Pavlos

    2017-06-01

The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition to this, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning process.
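As background, an ESN is cheap to sketch end to end: a fixed random reservoir with controlled spectral radius, plus a ridge-regression readout. Everything below (sizes, spectral radius, the regular sine task) is an illustrative assumption, not the talk's setup; the hyperparameters set here by hand are exactly the ones the talk tunes with Bayesian optimization:

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal echo state network: fixed random reservoir, trained readout only.
n, rho, washout = 100, 0.9, 100
W = rng.normal(0, 1, (n, n))
W *= rho / max(abs(np.linalg.eigvals(W)))   # set the spectral radius
w_in = rng.uniform(-0.5, 0.5, n)

u = np.sin(0.2 * np.arange(1200))           # signal to predict one step ahead
states = np.zeros((len(u), n))
x = np.zeros(n)
for t in range(len(u) - 1):
    x = np.tanh(w_in * u[t] + W @ x)
    states[t] = x

# Ridge-regression readout: predict u[t+1] from the reservoir state at t.
S, y = states[washout:1000], u[washout + 1:1001]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n), S.T @ y)

pred = states[1000:1199] @ w_out            # held-out one-step predictions
mse = float(np.mean((pred - u[1001:1200]) ** 2))
```

Because only the linear readout is trained, fitting is a single least-squares solve, which makes ESNs attractive for the large hyperparameter sweeps described in the talk.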

  7. Discussion of stability in a class of models on recurrent wavelet neural networks

    Institute of Scientific and Technical Information of China (English)

    DENG Ren; LI Zhu-xin; FAN You-hong

    2007-01-01

Based on wavelet neural networks (WNNs) and recurrent neural networks (RNNs), a class of models of recurrent wavelet neural networks (RWNNs) is proposed. The new networks possess the advantages of both WNNs and RNNs. In this paper, the asymptotic stability of RWNNs is studied via the Lyapunov theorem, and some theorems and formulae are given. The simulation results show the excellent performance of the networks in nonlinear dynamic system recognition.

  8. Optimal low-thrust spiral trajectories using Lyapunov-based guidance

    Science.gov (United States)

    Yang, Da-lin; Xu, Bo; Zhang, Lei

    2016-09-01

For an increasing number of electric propulsion systems used for real missions, it is very important to design optimal low-thrust spiral trajectories for these missions. However, it is particularly challenging to search for optimal low-thrust transfers. This paper describes an efficient optimal guidance scheme for the design of time-optimal and time-fixed fuel-optimal low-thrust spiral trajectories. The time-optimal solution is obtained with Lyapunov-based guidance, in which an artificial neural network (ANN) is adopted to implement control-gain steering and an evolutionary algorithm is used as the learning algorithm for the ANN. Moreover, the relative efficiency introduced in Q-law is analyzed, and a periapsis-and-apoapsis-centered burn structure is proposed for solving the time-fixed fuel-optimal low-thrust orbit transfer problem. In this guidance scheme, the ANN is adopted to determine the burn structure within each orbital revolution, and the optimal low-thrust orbit transfer problem is converted to a parameter optimization problem. This guidance scheme runs without an initial guess and provides closed-form solutions. In addition, Earth J2 perturbation and Earth-shadow eclipse effects are considered in this paper. Finally, a comparison with solutions given by the literature demonstrates the effectiveness of the proposed method.
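The ANN-tuned guidance itself is not reproduced here; the underlying Lyapunov idea can be shown in a deliberately simplified circular-orbit sketch, where V = ½(a − a_target)² and the sign of the tangential thrust is chosen to make dV/dt ≤ 0. The constants, tolerances, and the circular-orbit form of Gauss's variational equation are our simplifying assumptions:

```python
import math

# Simplified Lyapunov-based low-thrust guidance for semi-major axis raising:
# V = 0.5*(a - a_target)^2, and for a near-circular orbit Gauss's variational
# equation gives da/dt = 2*f*sqrt(a^3/mu) for tangential acceleration f.
mu = 3.986e14            # Earth's gravitational parameter [m^3/s^2]
f_max = 1e-3             # available thrust acceleration [m/s^2]
a, a_target = 7.0e6, 7.5e6
dt = 60.0

t = 0.0
while abs(a - a_target) > 100.0 and t < 1e7:
    f = f_max if a < a_target else -f_max   # this sign makes dV/dt <= 0
    a += dt * 2.0 * f * math.sqrt(a**3 / mu)
    t += dt
```

The paper's contribution is to replace this crude constant-gain steering with ANN-implemented gains (and a burn structure for the fuel-optimal case), but the stability logic is the same: thrust is always chosen to drive the Lyapunov function downhill.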

  9. Permitted and forbidden sets in discrete-time linear threshold recurrent neural networks.

    Science.gov (United States)

    Yi, Zhang; Zhang, Lei; Yu, Jiali; Tan, Kok Kiong

    2009-06-01

The concepts of permitted and forbidden sets enable a new perspective on memory in neural networks. Such concepts exhibit interesting dynamics in recurrent neural networks. This paper studies the basic theory of permitted and forbidden sets in linear threshold discrete-time recurrent neural networks. The linear threshold transfer function has been regarded as an adequate transfer function for recurrent neural networks. Networks with this transfer function form a class of hybrid analog and digital networks which are especially useful for perceptual computations. Networks in discrete time can directly provide algorithms for efficient implementation in digital hardware. The main contribution of this paper is to establish the foundations of permitted and forbidden sets. Necessary and sufficient conditions for the linear threshold discrete-time recurrent neural networks are obtained for complete convergence, existence of permitted and forbidden sets, and conditional multiattractivity, respectively. Simulation studies explore some possible interesting practical applications.
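A minimal sketch of the linear threshold dynamics studied here, in the easy contractive regime where complete convergence is guaranteed (the sizes, scaling, and contraction argument are our illustrative assumptions, not the paper's sharper necessary-and-sufficient conditions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete-time linear threshold recurrent network:
#   x(k+1) = max(0, W x(k) + h).
# Since max(0, .) is 1-Lipschitz, scaling the spectral norm of W below 1
# makes the map a contraction, so every trajectory converges to a unique
# fixed point (whose support is then a permitted set).
n = 10
W = rng.normal(0, 1, (n, n))
W *= 0.8 / np.linalg.norm(W, 2)   # spectral norm set to 0.8
h = rng.normal(0, 1, n)

x = rng.uniform(0, 1, n)
for _ in range(200):
    x_prev = x.copy()
    x = np.maximum(0.0, W @ x + h)
residual = float(np.linalg.norm(x - x_prev))
```

The subset of neurons left active at the fixed point is what the permitted/forbidden-set framework classifies; the paper's conditions go well beyond this contractive case, covering multistability as well.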

  10. Global exponential periodicity and stability of recurrent neural networks with multi-proportional delays.

    Science.gov (United States)

    Zhou, Liqun; Zhang, Yanyan

    2016-01-01

In this paper, a class of recurrent neural networks with multi-proportional delays is studied. A nonlinear transformation converts a class of recurrent neural networks with multi-proportional delays into a class of recurrent neural networks with constant delays and time-varying coefficients. By constructing a Lyapunov functional and establishing a delay differential inequality, several delay-dependent and delay-independent sufficient conditions are derived to ensure global exponential periodicity and stability of the system. Several examples and their simulations are given to illustrate the effectiveness of the obtained results.
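The nonlinear transformation mentioned above is, in the standard treatment of proportional delays, the change of variables $y(s) = x(e^{s})$. The following sketch (for a generic scalar node with nonlinearity $f$, our notation rather than the paper's) shows how a proportional delay $qt$ becomes the constant delay $\tau = -\ln q$ at the price of a time-varying coefficient $e^{s}$:

```latex
\dot{x}(t) = -a\,x(t) + f\bigl(x(qt)\bigr), \qquad 0 < q < 1 .
```

Substituting $t = e^{s}$ and $y(s) = x(e^{s})$, and using $x(qe^{s}) = x(e^{s+\ln q}) = y(s - \tau)$ with $\tau = -\ln q > 0$:

```latex
\dot{y}(s) = e^{s}\,\dot{x}(e^{s})
           = e^{s}\bigl[-a\,y(s) + f\bigl(y(s-\tau)\bigr)\bigr] .
```

The delay is now constant, but the coefficient $e^{s}$ is time-varying, which is exactly the class of systems the paper's Lyapunov functional is built for.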

  11. Lyapunov based nonlinear control of electrical and mechanical systems

    Science.gov (United States)

    Behal, Aman

fusing a filtered tracking error transformation with the dynamic oscillator design presented in [20]. The proposed tracking controller yields a GUUB result for the regulation problem also. In the final chapter, a nonlinear controller is designed for the kinematic model of an underactuated rigid spacecraft that ensures uniform, ultimately bounded (UUB) tracking provided the initial errors are selected sufficiently small. The result is achieved via a judicious formulation of the spacecraft kinematics and the novel design of a Lyapunov-based controller. It is also demonstrated how standard backstepping control techniques can be fused with the kinematic controller to solve the full-order regulation problem for an axisymmetric spacecraft. Simulation results are included to demonstrate the efficacy of the proposed algorithm.

    [1] It is to be noted that the controller presented in [16] was originally designed to obtain exponential rotor position/rotor flux tracking for the full-order induction motor model (i.e., stator current dynamics are included).

  12. Application of recurrent neural networks for drought projections in California

    Science.gov (United States)

    Le, J. A.; El-Askary, H. M.; Allali, M.; Struppa, D. C.

    2017-05-01

    We use recurrent neural networks (RNNs) to investigate the complex interactions between the long-term trend in dryness and a projected, short but intense, period of wetness due to the 2015-2016 El Niño. Although it was forecast that this El Niño season would bring significant rainfall to the region, our long-term projections of the Palmer Z Index (PZI) showed a continuing drought trend, contrasting with the 1998-1999 El Niño event. RNN training considered PZI data during 1896-2006, which was validated against the 2006-2015 period to evaluate the potential for extreme precipitation forecasting. We achieved a statistically significant correlation of 0.610 between forecasted and observed PZI on the validation set for a lead time of 1 month. This gives strong confidence in the forecasted precipitation indicator. The 2015-2016 El Niño season proved to be relatively weak compared with that of 1997-1998, with a peak PZI anomaly of 0.242 standard deviations below the historical average, continuing drought conditions.

  13. Railway Track Circuit Fault Diagnosis Using Recurrent Neural Networks.

    Science.gov (United States)

    de Bruin, Tim; Verbert, Kim; Babuska, Robert

    2017-03-01

    Timely detection and identification of faults in railway track circuits are crucial for the safety and availability of railway networks. In this paper, the use of the long-short-term memory (LSTM) recurrent neural network is proposed to accomplish these tasks based on the commonly available measurement signals. By considering the signals from multiple track circuits in a geographic area, faults are diagnosed from their spatial and temporal dependences. A generative model is used to show that the LSTM network can learn these dependences directly from the data. The network correctly classifies 99.7% of the test input sequences, with no false positive fault detections. In addition, the t-Distributed Stochastic Neighbor Embedding (t-SNE) method is used to examine the resulting network, further showing that it has learned the relevant dependences in the data. Finally, we compare our LSTM network with a convolutional network trained on the same task. From this comparison, we conclude that the LSTM network architecture is better suited for the railway track circuit fault detection and identification tasks than the convolutional network.
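
    For reference, the gating equations such an LSTM network is built from can be sketched in a few lines of numpy; the sizes, initialization, and random input sequence below are illustrative only:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. Gates: input (i), forget (f), output (o), candidate (g)."""
    n = h.shape[0]
    z = W @ x + U @ h + b                # stacked pre-activations, shape (4n,)
    i = 1 / (1 + np.exp(-z[:n]))         # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))      # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))    # output gate
    g = np.tanh(z[3*n:])                 # candidate cell update
    c_new = f * c + i * g                # cell state mixes old memory and new input
    h_new = o * np.tanh(c_new)           # hidden state exposed to the next layer
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(10):                      # run a short input sequence through the cell
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
print(h.shape, c.shape)                  # (5,) (5,)
```

    In a full classifier like the one described above, the final hidden state (or the sequence of hidden states) would feed a softmax layer over fault classes.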

  14. Multiplex visibility graphs to investigate recurrent neural network dynamics

    Science.gov (United States)

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-03-01

    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and, typically, based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Successively, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
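
    The horizontal visibility graph construction at the core of this method is easy to state: two time points are linked whenever every sample strictly between them lies below both. A minimal illustration (a naive O(n²) version, written for clarity rather than speed):

```python
def horizontal_visibility_graph(series):
    """Edge set of the horizontal visibility graph of a time series.

    Nodes are time indices; i and j (i < j) are linked iff every sample
    strictly between them is smaller than min(series[i], series[j]).
    """
    n = len(series)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))          # consecutive samples always see each other
        for j in range(i + 2, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

print(sorted(horizontal_visibility_graph([3, 1, 2])))   # [(0, 1), (0, 2), (1, 2)]
print(len(horizontal_visibility_graph([1, 2, 3])))      # 2: the middle point blocks (0, 2)
```

    In the multiplex framework described above, one such graph is built per neuron and the graphs are stacked as layers.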

  15. On the Emergent Properties of Recurrent Neural Networks at Criticality

    Science.gov (United States)

    Karimipanah, Yahya; Ma, Zhengyu; Wessel, Ralf

    Irregular spiking is a widespread phenomenon in neuronal activity in vivo. In addition, it has been shown that firing rate variability decreases after the onset of external stimuli. Since these are known as two universal features of cortical activity, it is natural to ask whether there is a universal mechanism underlying such phenomena. Independently, there has been mounting evidence that superficial layers of cortex operate near a second-order phase transition (critical point), which is manifested in the form of scale-free activity. However, despite the strong evidence for such a criticality hypothesis, little is known about how it can be leveraged to facilitate neural coding. As the decline in response variability is regarded as an essential mechanism to enhance coding efficiency, we asked whether the criticality hypothesis could bridge between scale-free activity and other ubiquitous features of cortical activity. Using a simple binary probabilistic model, we show that irregular spiking and the decline in response variability both arise as emergent properties of a recurrent network poised at criticality. Our results provide a unified explanation for the ubiquity of these two features, without the need to invoke any further mechanism.

  16. Memory in linear recurrent neural networks in continuous time.

    Science.gov (United States)

    Hermans, Michiel; Schrauwen, Benjamin

    2010-04-01

    Reservoir Computing is a novel technique which employs recurrent neural networks while circumventing difficult training algorithms. A very recent trend in Reservoir Computing is the use of real physical dynamical systems as implementation platforms, rather than the customary digital emulations. Physical systems operate in continuous time, creating a fundamental difference with the classic discrete-time definitions of Reservoir Computing. The specific goal of this paper is to study the memory properties of such systems, where we limit ourselves to linear dynamics. We develop an analytical model which allows the calculation of the memory function for continuous-time linear dynamical systems, which can be considered as networks of linear leaky integrator neurons. We then use this model to study the memory properties of different types of reservoir. We start with random connection matrices with a shifted eigenvalue spectrum, which perform very poorly. Next, we transform two specific reservoir types, which are known to give good performance in discrete time, to the continuous-time domain. Reservoirs based on a uniform spread of connection-matrix eigenvalues on the unit disk in discrete time give much better memory properties than reservoirs with random connection matrices, while reservoirs based on orthogonal connection matrices in discrete time are very robust against noise and have tunable memory properties. The overall results found in this work yield important insights into how to design networks for continuous time.
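
    The memory function studied here can also be probed numerically: drive a small linear reservoir with white noise, train a least-squares readout to reconstruct the input k steps in the past, and record the squared correlation. The sketch below is a toy discrete-time version, not the paper's continuous-time model; the reservoir size, the spectral radius of 0.9, and the orthogonal connection matrix are illustrative choices echoing the reservoir types discussed:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 20, 5000
# Orthogonal reservoir matrix scaled below 1, as in the well-performing discrete-time case.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
W = 0.9 * Q
w_in = rng.standard_normal(N)

u = rng.standard_normal(T)                 # white-noise input
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = W @ x + w_in * u[t]                # linear (non-leaky) reservoir update
    X[t] = x

def memory_k(k, washout=100):
    """Squared correlation between u(t-k) and its linear reconstruction from x(t)."""
    Xs, ys = X[washout:], u[washout - k:T - k]
    w, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # linear readout by least squares
    return np.corrcoef(Xs @ w, ys)[0, 1] ** 2

mf = [memory_k(k) for k in range(1, 11)]
print([round(m, 2) for m in mf])           # memory function values for delays 1..10
```

    Summing the memory function over all delays gives the memory capacity referred to in the abstract.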

  17. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically convergent to the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model structure, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm achieves good performance with fast computation and effective noise reduction.
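
    The filtering half of the pipeline can be sketched on a synthetic AR(2) "speech" signal; in this toy the AR coefficients are taken as known (robustly estimating them from the noisy signal is precisely the role the paper assigns to its recurrent network), and all signal parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
T, sv = 4000, 2.0
a = np.array([1.5, -0.8])                      # AR(2) "speech" model coefficients
x = np.zeros(T)
for t in range(2, T):
    x[t] = a[0] * x[t-1] + a[1] * x[t-2] + rng.standard_normal()
y = x + sv * rng.standard_normal(T)            # signal corrupted by white noise

# Kalman filter on the state s_t = [x_t, x_{t-1}].
F = np.array([[a[0], a[1]], [1.0, 0.0]])       # state transition
H = np.array([1.0, 0.0])                       # we observe the first state component
Qn = np.diag([1.0, 0.0])                       # process-noise covariance
R = sv ** 2                                    # observation-noise variance
s, P = np.zeros(2), np.eye(2)
x_hat = np.zeros(T)
for t in range(T):
    s, P = F @ s, F @ P @ F.T + Qn             # predict
    K = P @ H / (H @ P @ H + R)                # Kalman gain
    s = s + K * (y[t] - H @ s)                 # correct with the new noisy sample
    P = P - np.outer(K, H @ P)
    x_hat[t] = s[0]

print(np.mean((y - x) ** 2), np.mean((x_hat - x) ** 2))  # filtering lowers the MSE
```

    With the model parameters known, the filtered estimate has a markedly lower mean squared error than the raw noisy observation.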

  18. A one-layer recurrent neural network for support vector machine learning.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2004-04-01

    This paper presents a one-layer recurrent neural network for support vector machine (SVM) learning in pattern classification and regression. The SVM learning problem is first converted into an equivalent formulation, and then a one-layer recurrent neural network for SVM learning is proposed. The proposed neural network is guaranteed to obtain the optimal solution of support vector classification and regression. Compared with the existing two-layer neural network for the SVM classification, the proposed neural network has a low complexity for implementation. Moreover, the proposed neural network can converge exponentially to the optimal solution of SVM learning. The rate of the exponential convergence can be made arbitrarily high by simply turning up a scaling parameter. Simulation examples based on benchmark problems are discussed to show the good performance of the proposed neural network for SVM learning.
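
    The flavor of such networks can be conveyed with a toy discrete-time (Euler-discretized) projection dynamic on the SVM dual; this bias-free linear-kernel sketch is a generic construction of this kind, not the specific network of the paper:

```python
import numpy as np

# Tiny linearly separable problem; a bias-free linear SVM keeps the dual box-constrained only.
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -1.0], [-1.0, -3.0]])
yl = np.array([1.0, 1.0, -1.0, -1.0])
C = 10.0

Q = (yl[:, None] * yl[None, :]) * (X @ X.T)   # dual Hessian: y_i y_j <x_i, x_j>

# Projection dynamics alpha <- P_[0,C](alpha + eta * (1 - Q alpha));
# its fixed points are exactly the KKT points of the SVM dual.
alpha = np.zeros(4)
eta = 0.02                                    # small enough for stability (eta < 2 / lambda_max(Q))
for _ in range(5000):
    alpha = np.clip(alpha + eta * (1.0 - Q @ alpha), 0.0, C)

w = (alpha * yl) @ X                          # recover the primal weight vector
print(np.sign(X @ w))                         # recovers the labels [ 1.  1. -1. -1.]
```

    The state of the network is the dual vector alpha itself, which is what makes a one-layer realization possible.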

  19. Automatic Cloud Resource Scaling Algorithm based on Long Short-Term Memory Recurrent Neural Network

    National Research Council Canada - National Science Library

    Ashraf A. Shahin

    2016-01-01

    .... This paper has proposed dynamic threshold based auto-scaling algorithms that predict required resources using Long Short-Term Memory Recurrent Neural Network and auto-scale virtual resources based on predicted values...

  20. Speed up Training of the Recurrent Neural Network Based on Constrained Optimization Techniques

    Institute of Scientific and Technical Information of China (English)

    陈珂; 包威权; et al.

    1996-01-01

    In this paper, a constrained optimization technique is explored for a substantial problem: accelerating the training of a globally recurrent neural network. Unlike most previous methods for feedforward neural networks, the authors adopt a constrained optimization technique to improve the gradient-based algorithm of the globally recurrent neural network by adapting the learning rate during training. Using the recurrent network with the improved algorithm, experiments were performed on two real-world problems, namely, filtering additive noise in acoustic data and classifying temporal signals for speaker identification. The experimental results show that the recurrent neural network with the improved learning algorithm yields significantly faster training and achieves satisfactory performance.

  1. Simplified Gating in Long Short-term Memory (LSTM) Recurrent Neural Networks

    OpenAIRE

    Lu, Yuzhen; Salem, Fathi M.

    2017-01-01

    The standard LSTM recurrent neural networks, while very powerful in long-range-dependency sequence applications, have a highly complex structure and relatively many (adaptive) parameters. In this work, we present an empirical comparison between the standard LSTM recurrent neural network architecture and three new parameter-reduced variants obtained by eliminating combinations of the input signal, bias, and hidden unit signals from individual gating signals. The experiments on two sequence datasets ...

  2. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  3. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online...... correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking...

  4. A simplified recurrent neural network for pseudoconvex optimization subject to linear equality constraints

    Science.gov (United States)

    Qin, Sitian; Fan, Dejun; Su, Peng; Liu, Qinghe

    2014-04-01

    In this paper, the optimization techniques for solving pseudoconvex optimization problems are investigated. A simplified recurrent neural network is proposed according to the optimization problem. We prove that the optimal solution of the optimization problem is just the equilibrium point of the neural network, and vice versa if the equilibrium point satisfies the linear constraints. The proposed neural network is proven to be globally stable in the sense of Lyapunov and convergent to an exact optimal solution of the optimization problem. A numerical simulation is given to illustrate the global convergence of the neural network. Applications in business and chemistry are given to demonstrate the effectiveness of the neural network.
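
    A standard way to realize such a network is as a gradient flow premultiplied by the projector onto the null space of the constraint matrix; the toy problem below converges to the constrained optimum (the objective, constraint, and step size are mine, and a convex objective is in particular pseudoconvex):

```python
import numpy as np

# minimize f(x) = ||x||^2 subject to x1 + x2 = 2  (optimum at (1, 1))
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

# Projector onto the null space of A; the flow dx/dt = -P grad f(x)
# stays on {Ax = b} once started there, and its equilibria are the
# constrained critical points.
P = np.eye(2) - A.T @ np.linalg.inv(A @ A.T) @ A

x = np.array([2.0, 0.0])        # feasible initial point (A x = b)
dt = 0.05
for _ in range(500):
    grad = 2.0 * x              # grad f(x) for f(x) = ||x||^2
    x = x - dt * (P @ grad)     # Euler step of the projected gradient flow

print(np.round(x, 4))           # [1. 1.]
```

    The abstract's equivalence between optima and equilibria is visible here: the flow is stationary exactly where the projected gradient vanishes on the feasible set.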

  5. A novel compensation-based recurrent fuzzy neural network and its learning algorithm

    Institute of Scientific and Technical Information of China (English)

    WU Bo; WU Ke; LU JianHong

    2009-01-01

    Based on a detailed study of several kinds of fuzzy neural networks, we propose a novel compensation-based recurrent fuzzy neural network (CRFNN) by adding a recurrent element and a compensatory element to the conventional fuzzy neural network. Then, we propose a sequential learning method for the structure identification of the CRFNN in order to determine the fuzzy rules and their correlative parameters effectively. Furthermore, we improve the BP algorithm based on the characteristics of the proposed CRFNN to train the network. By modeling typical nonlinear systems, we draw the conclusion that the proposed CRFNN has excellent dynamic response and strong learning ability.

  6. Synchronization of Memristor-Based Coupling Recurrent Neural Networks With Time-Varying Delays and Impulses.

    Science.gov (United States)

    Zhang, Wei; Li, Chuandong; Huang, Tingwen; He, Xing

    2015-12-01

    Synchronization of an array of linearly coupled memristor-based recurrent neural networks with impulses and time-varying delays is investigated in this brief. Based on the Lyapunov function method, an extended Halanay differential inequality and a new delay impulsive differential inequality, some sufficient conditions are derived, which depend on impulsive and coupling delays to guarantee the exponential synchronization of the memristor-based recurrent neural networks. Impulses with and without delay and time-varying delay are considered for modeling the coupled neural networks simultaneously, which renders more practical significance of our current research. Finally, numerical simulations are given to verify the effectiveness of the theoretical results.

  7. Adaptive learning with guaranteed stability for discrete-time recurrent neural networks

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To avoid unstable learning, a stable adaptive learning algorithm was proposed for discrete-time recurrent neural networks. Unlike the dynamic gradient methods, such as the backpropagation through time and the real time recurrent learning, the weights of the recurrent neural networks were updated online in terms of Lyapunov stability theory in the proposed learning algorithm, so the learning stability was guaranteed. With the inversion of the activation function of the recurrent neural networks, the proposed learning algorithm can be easily implemented for solving varying nonlinear adaptive learning problems and fast convergence of the adaptive learning process can be achieved. Simulation experiments in pattern recognition show that only 5 iterations are needed for the storage of a 15×15 binary image pattern and only 9 iterations are needed for the perfect realization of an analog vector by an equilibrium state with the proposed learning algorithm.

  8. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    Institute of Scientific and Technical Information of China (English)

    Ma Qian-Li; Zheng Qi-Lun; Peng Hong; Zhong Tan-Wei; Qin Jiang-Wei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series; it estimates the proper parameters of phase-space reconstruction and optimizes the structure of recurrent neural networks by a co-evolutionary strategy. The searching space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method can dynamically combine the embedding method with the capability of the recurrent neural network to incorporate past experience due to internal recurrence. The effectiveness of CERNN is evaluated by using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.
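
    The phase-space reconstruction step mentioned above is the standard time-delay embedding; a minimal sketch (the embedding dimension and lag below are illustrative, not values from the paper):

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Time-delay embedding: row t is [x(t), x(t-tau), ..., x(t-(dim-1)*tau)]."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n]
                            for k in range(dim)])

x = np.sin(0.3 * np.arange(100))       # stand-in for a chaotic series
E = delay_embed(x, dim=3, tau=2)
print(E.shape)                         # (96, 3)
```

    Each embedded row then serves as one input vector to the recurrent predictor, with the embedding parameters left to the co-evolutionary search.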

  9. Computationally efficient locally-recurrent neural networks for online signal processing

    CERN Document Server

    Hussain, A; Shim, I

    1999-01-01

    A general class of computationally efficient locally recurrent networks (CERN) is described for real-time adaptive signal processing. The structure of the CERN is based on linear-in-the-parameters single-hidden-layer feedforward neural networks such as the radial basis function (RBF) network, the Volterra neural network (VNN) and the functionally expanded neural network (FENN), adapted to employ local output feedback. The corresponding learning algorithms are derived and key structural and computational complexity comparisons are made between the CERN and conventional recurrent neural networks. Two case studies are performed involving the real-time adaptive nonlinear prediction of real-world chaotic, highly non-stationary laser time series and an actual speech signal, which show that a recurrent FENN based adaptive CERN predictor can significantly outperform the corresponding feedforward FENN and conventionally employed linear adaptive filtering models. (13 refs).

  10. An attractor-based complexity measurement for Boolean recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E P

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.
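
    To make the idea of attractor dynamics concrete, one can exhaustively iterate the state space of a small Boolean recurrent network and collect its attractors (fixed points and cycles). The 3-neuron network below, with weights and thresholds chosen purely for illustration, has three fixed points and one period-2 cycle:

```python
from itertools import product

# Boolean recurrent network: x_i(t+1) = 1 iff sum_j W[i][j] * x_j(t) >= theta[i].
W = [[0, 1, -1],
     [1, 0, 1],
     [-1, 1, 0]]
theta = [1, 1, 1]

def step(state):
    return tuple(int(sum(w * s for w, s in zip(row, state)) >= th)
                 for row, th in zip(W, theta))

def attractor(state):
    """Follow the trajectory until it revisits a state; return the cycle, canonically rotated."""
    traj, index = [], {}
    while state not in index:
        index[state] = len(traj)
        traj.append(state)
        state = step(state)
    cycle = traj[index[state]:]
    k = cycle.index(min(cycle))                # rotate to a canonical starting point
    return tuple(cycle[k:] + cycle[:k])

attractors = {attractor(s) for s in product([0, 1], repeat=3)}
print(len(attractors))          # 4 (three fixed points and one period-2 cycle)
```

    The complexity measurement described above goes further, classifying not just the number of attractors but the structure of the cycles an equivalent ω-automaton can sustain.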

  11. An attractor-based complexity measurement for Boolean recurrent neural networks.

    Directory of Open Access Journals (Sweden)

    Jérémie Cabessa

    Full Text Available We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.

  12. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network guarantees to get the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  13. EMP response modeling of TVS based on the recurrent neural network

    Directory of Open Access Journals (Sweden)

    Zhiqiang JI

    2015-04-01

    Full Text Available Due to the larger workload in the implementation process and the poor consistency between test results and the actual situation when using transmission line pulse (TLP) testing methods, a modeling method based on recurrent neural networks is proposed for EMP response forecasting. Based on the TLP testing system, two categories of EMP are added: the machine-model ESD EMP and the human-metal-model ESD EMP. An Elman neural network, a Jordan neural network and their combination, namely the Elman-Jordan neural network, are established for response modeling of the NUP2105L transient voltage suppressor (TVS), forecasting the response under different EMPs. The simulation results show that the recurrent neural network achieves satisfactory modeling accuracy and high computational efficiency.

  14. Lyapunov-Based Control Scheme for Single-Phase Grid-Connected PV Central Inverters

    NARCIS (Netherlands)

    Meza, C.; Biel, D.; Jeltsema, D.; Scherpen, J. M. A.

    2012-01-01

    A Lyapunov-based control scheme for single-phase single-stage grid-connected photovoltaic central inverters is presented. Besides rendering the closed-loop system globally stable, the designed controller is able to deal with the system uncertainty that depends on the solar irradiance. A laboratory p

  16. Lyapunov based control of hybrid energy storage system in electric vehicles

    DEFF Research Database (Denmark)

    El Fadil, H.; Giri, F.; Guerrero, Josep M.

    2012-01-01

    This paper deals with a Lyapunov based control principle in a hybrid energy storage system for an electric vehicle. The storage system consists of a fuel cell (FC) as the main power source and a supercapacitor (SC) as an auxiliary power source. The power stage of energy conversion consists of a boost

  17. Predicting recurrent aphthous ulceration using genetic algorithms-optimized neural networks

    Directory of Open Access Journals (Sweden)

    Najla S Dar-Odeh

    2010-05-01

    Full Text Available Najla S Dar-Odeh1, Othman M Alsmadi2, Faris Bakri3, Zaer Abu-Hammour2, Asem A Shehabi3, Mahmoud K Al-Omiri1, Shatha M K Abu-Hammad4, Hamzeh Al-Mashni4, Mohammad B Saeed4, Wael Muqbil4, Osama A Abu-Hammad1 (1Faculty of Dentistry, 2Faculty of Engineering and Technology, 3Faculty of Medicine, University of Jordan, Amman, Jordan; 4Dental Department, University of Jordan Hospital, Amman, Jordan). Objective: To construct and optimize a neural network that is capable of predicting the occurrence of recurrent aphthous ulceration (RAU) based on a set of appropriate input data. Participants and methods: Artificial neural network (ANN) software employing genetic algorithms to optimize the architecture of the neural networks was used. Input and output data of 86 participants (predisposing factors and status of the participants with regard to recurrent aphthous ulceration) were used to construct and train the neural networks. The optimized neural networks were then tested using untrained data of a further 10 participants. Results: The optimized neural network, which produced the most accurate predictions for the presence or absence of recurrent aphthous ulceration, was found to employ: gender, hematological (with or without ferritin) and mycological data of the participants, frequency of tooth brushing, and consumption of vegetables and fruits. Conclusions: Factors appearing to be related to recurrent aphthous ulceration and appropriate for use as input data to construct ANNs that predict recurrent aphthous ulceration were found to include the following: gender, hemoglobin, serum vitamin B12, serum ferritin, red cell folate, salivary candidal colony count, frequency of tooth brushing, and the number of fruits or vegetables consumed daily. Keywords: artificial neural networks, recurrent aphthous ulceration, ulcer

  18. AN INTELLIGENT CONTROL SYSTEM BASED ON RECURRENT NEURAL FUZZY NETWORK AND ITS APPLICATION TO CSTR

    Institute of Scientific and Technical Information of China (English)

    JIA Li; YU Jinshou

    2005-01-01

    In this paper, an intelligent control system based on a recurrent neural fuzzy network is presented for complex, uncertain and nonlinear processes, in which a recurrent neural fuzzy network controller (RNFNC) is used to control a process adaptively and a recurrent neural network model based on a recursive predictive error algorithm (RNNM) is utilized to estimate the gradient information ∂y/∂u for optimizing the parameters of the controller. Compared with many neural fuzzy control systems, it uses a recurrent neural network to realize the fuzzy controller. Moreover, the recursive predictive error (RPE) algorithm is implemented to construct the RNNM online. Lastly, in order to evaluate the performance of the proposed control system, it is applied to a continuously stirred tank reactor (CSTR). Simulation comparisons, based on control effect and output error, with a general fuzzy controller and a feed-forward neural fuzzy network controller (FNFNC), are conducted. In addition, the rates of convergence of the RNNM using the RPE algorithm and a gradient learning algorithm respectively are also compared. The results show that the proposed control system is better for controlling uncertain and nonlinear processes.

  19. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    Directory of Open Access Journals (Sweden)

    Jie Wang

    2016-01-01

    (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices.

  20. A novel nonlinear adaptive filter using a pipelined second-order Volterra recurrent neural network.

    Science.gov (United States)

    Zhao, Haiquan; Zhang, Jiashu

    2009-12-01

    To enhance the performance and overcome the heavy computational complexity of recurrent neural networks (RNN), a novel nonlinear adaptive filter based on a pipelined second-order Volterra recurrent neural network (PSOVRNN) is proposed in this paper. A modified real-time recurrent learning (RTRL) algorithm for the proposed filter is derived in detail. The PSOVRNN comprises a number of simple small-scale second-order Volterra recurrent neural network (SOVRNN) modules. In contrast to the standard RNN, the modules of a PSOVRNN can operate simultaneously in a pipelined parallel fashion, which leads to a significant improvement in its total computational efficiency. Moreover, since each module of the PSOVRNN is a SOVRNN in which nonlinearity is introduced by the recursive second-order Volterra (RSOV) expansion, its performance can be further improved. Computer simulations have demonstrated that the PSOVRNN performs better than the pipelined recurrent neural network (PRNN) and the RNN for nonlinear colored-signal prediction and nonlinear channel equalization. However, the superiority of the PSOVRNN over the PRNN comes at the cost of increased computational complexity due to the nonlinear expansion introduced in each module.

  1. Identification and prediction of dynamic systems using an interactively recurrent self-evolving fuzzy neural network.

    Science.gov (United States)

    Lin, Yang-Yin; Chang, Jyh-Yeong; Lin, Chin-Teng

    2013-02-01

    This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed by external loops and internal feedback, feeding the rule firing strength of each rule to the other rules and to itself. The consequent part of the IRSFNN is of a Takagi-Sugeno-Kang (TSK) or functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of the fuzzy rules to improve the mapping ability. Unlike in a TSK-type fuzzy neural network, the FLNN in the consequent part is a nonlinear function of the input variables. An IRSFNN's learning starts with an empty rule base, and all rules are generated and learned online through simultaneous structure and parameter learning. An online clustering algorithm is effective in generating fuzzy rules. The consequent parameters are updated by a variable-dimensional Kalman filter algorithm. The premise and recurrent parameters are learned through a gradient descent algorithm. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.

  2. Dynamic Hand Gesture Recognition for Wearable Devices with Low Complexity Recurrent Neural Networks

    OpenAIRE

    Shin, Sungho; Sung, Wonyong

    2016-01-01

    Gesture recognition is a very essential technology for many wearable devices. While previous algorithms are mostly based on statistical methods including the hidden Markov model, we develop two dynamic hand gesture recognition techniques using low complexity recurrent neural network (RNN) algorithms. One is based on video signal and employs a combined structure of a convolutional neural network (CNN) and an RNN. The other uses accelerometer data and only requires an RNN. Fixed-point optimizat...

  3. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced-order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.

  5. Learning Topology and Dynamics of Large Recurrent Neural Networks

    Science.gov (United States)

    She, Yiyuan; He, Yuejia; Wu, Dapeng

    2014-11-01

    Large-scale recurrent networks have drawn increasing attention recently because of their capabilities in modeling a large variety of real-world phenomena and physical mechanisms. This paper studies how to identify all authentic connections and estimate system parameters of a recurrent network, given a sequence of node observations. This task becomes extremely challenging in modern network applications, because the available observations are usually very noisy and limited, and the associated dynamical system is strongly nonlinear. By formulating the problem as multivariate sparse sigmoidal regression, we develop simple-to-implement network learning algorithms, with rigorous convergence guarantee in theory, for a variety of sparsity-promoting penalty forms. A quantile variant of progressive recurrent network screening is proposed for efficient computation and allows for direct cardinality control of network topology in estimation. Moreover, we investigate recurrent network stability conditions in Lyapunov's sense, and integrate such stability constraints into sparse network learning. Experiments show excellent performance of the proposed algorithms in network topology identification and forecasting.
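
    The sparse-regression formulation can be illustrated with a minimal stand-in. This is not the paper's algorithm, but a plain proximal-gradient (soft-thresholding) loop that recovers a sparse weight matrix of a small sigmoidal network model from noisy node observations; sizes, noise level, and penalty are illustrative:

```python
import numpy as np

# Sketch of the core idea (a simplified stand-in for the paper's method):
# recover a sparse recurrent weight matrix from node observations of
#   x_{t+1} = tanh(W x_t) + noise
# via proximal gradient descent with an l1 (soft-thresholding) penalty.
rng = np.random.default_rng(0)
N, T = 10, 2000
W_true = rng.choice([0.0, 0.4, -0.4], size=(N, N), p=[0.8, 0.1, 0.1])

X = np.zeros((T, N))
for t in range(T - 1):
    X[t + 1] = np.tanh(W_true @ X[t]) + 0.1 * rng.standard_normal(N)

A, Y = X[:-1], X[1:]
W = np.zeros((N, N))
step, lam = 0.5, 0.001
for _ in range(1000):
    P = np.tanh(A @ W.T)
    grad = ((P - Y) * (1 - P ** 2)).T @ A / T    # smooth-loss gradient
    W = W - step * grad
    W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)  # soft-threshold

rel_err = np.linalg.norm(W - W_true) / np.linalg.norm(W_true)
print(f"relative error: {rel_err:.3f}")
```

    Thresholding the estimated magnitudes then yields the identified network topology; the paper's quantile screening additionally lets one control the recovered cardinality directly.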

  6. Recurrent Artificial Neural Networks and Finite State Natural Language Processing.

    Science.gov (United States)

    Moisl, Hermann

    It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for natural language processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…

  7. Homeostatic scaling of excitability in recurrent neural networks.

    NARCIS (Netherlands)

    M.W.H. Remme; W.J. Wadman

    2012-01-01

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which

  8. Synchronization control of memristor-based recurrent neural networks with perturbations.

    Science.gov (United States)

    Wang, Weiping; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian

    2014-05-01

    In this paper, the synchronization control of memristor-based recurrent neural networks with impulsive perturbations or boundary perturbations is studied. We find that the memristive connection weights have a certain relationship with the stability of the system. Some criteria are obtained to guarantee that memristive neural networks have a strong noise tolerance capability. Two kinds of controllers are designed so that the memristive neural networks with perturbations can converge to the equilibrium points, which evoke human memory patterns. The analysis in this paper employs differential inclusion theory and the Lyapunov functional method. Numerical examples are given to show the effectiveness of our results.

  9. Oscillation, Conduction Delays, and Learning Cooperate to Establish Neural Competition in Recurrent Networks.

    Science.gov (United States)

    Kato, Hideyuki; Ikeguchi, Tohru

    2016-01-01

    Specific memory might be stored in a subnetwork consisting of a small population of neurons. To select neurons involved in memory formation, neural competition might be essential. In this paper, we show that excitable neurons are competitive and organize into two assemblies in a recurrent network with spike timing-dependent synaptic plasticity (STDP) and axonal conduction delays. Neural competition is established by the cooperation of spontaneously induced neural oscillation, axonal conduction delays, and STDP. We also suggest that the competition mechanism in this paper is one of the basic functions required to organize memory-storing subnetworks into fine-scale cortical networks.

  10. Finite-time synchronization control of a class of memristor-based recurrent neural networks.

    Science.gov (United States)

    Jiang, Minghui; Wang, Shuangtao; Mei, Jun; Shen, Yanjun

    2015-03-01

    This paper presents global and local finite-time synchronization control laws for memristor neural networks. By utilizing the drive-response concept, differential inclusion theory, and the Lyapunov functional method, we establish several sufficient conditions for finite-time synchronization between the master and the corresponding slave memristor-based neural network under the designed controller. In comparison with existing results, the proposed stability conditions are new, and the obtained results extend some previous works on conventional recurrent neural networks. Two numerical examples are provided to illustrate the effectiveness of the design method.

  11. A recurrent neural network with exponential convergence for solving convex quadratic program and related linear piecewise equations.

    Science.gov (United States)

    Xia, Youshen; Feng, Gang; Wang, Jun

    2004-09-01

    This paper presents a recurrent neural network for solving strict convex quadratic programming problems and related linear piecewise equations. Compared with the existing neural networks for quadratic program, the proposed neural network has a one-layer structure with a low model complexity. Moreover, the proposed neural network is shown to have a finite-time convergence and exponential convergence. Illustrative examples further show the good performance of the proposed neural network in real-time applications.
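
    As a rough illustration of the idea (a standard projection-type recurrent network specialized to nonnegativity constraints, not necessarily the exact model proposed in the paper), such a network can be simulated by Euler-integrating its dynamics until the state settles at the KKT point:

```python
import numpy as np

# Sketch (not the paper's exact model): a projection-type recurrent network
# for min 1/2 x'Qx + c'x subject to x >= 0, integrated with forward Euler.
# Equilibria satisfy x = max(0, x - (Qx + c)), i.e. the KKT conditions.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
c = np.array([-1.0, -2.0])

x = np.zeros(2)
dt = 0.05
for _ in range(2000):
    x = x + dt * (np.maximum(0.0, x - (Q @ x + c)) - x)

print(x)  # approximate minimizer
```

    The right-hand side is a single projection of an affine map, which is what gives such networks their one-layer structure and low model complexity.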

  12. Lyapunov-based Low-thrust Optimal Orbit Transfer: An approach in Cartesian coordinates

    CERN Document Server

    Zhang, Hantian; Cao, Qingjie

    2014-01-01

    This paper presents a simple approach to low-thrust optimal-fuel and optimal-time transfer problems between two elliptic orbits using the Cartesian coordinates system. In this case, an orbit is described by its specific angular momentum and Laplace vectors with a free injection point. Trajectory optimization with the pseudospectral method and nonlinear programming are supported by the initial guess generated from the Chang-Chichka-Marsden Lyapunov-based transfer controller. This approach successfully solves several low-thrust optimal problems. Numerical results show that the Lyapunov-based initial guess overcomes the difficulty in optimization caused by the strong oscillation of variables in the Cartesian coordinates system. Furthermore, a comparison of the results shows that obtaining the optimal transfer solution through the polynomial approximation by utilizing Cartesian coordinates is easier than using orbital elements, which normally produce strongly nonlinear equations of motion. In this paper, the Eart...

  13. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints.

    Science.gov (United States)

    Qin, Sitian; Yang, Xiudong; Xue, Xiaoping; Song, Jiahui

    2017-10-01

    The pseudoconvex optimization problem, as an important class of nonconvex optimization problems, plays an important role in scientific and engineering applications. In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems with equality and inequality constraints. It is proved that, from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural network converges to an optimal solution of the related problem. Compared with related existing recurrent neural networks for pseudoconvex optimization problems, the proposed neural network does not need penalty parameters and has better convergence. Meanwhile, the proposed neural network is used to solve three nonsmooth optimization problems, and we make detailed comparisons with known related results. In the end, some numerical examples are provided to illustrate the effectiveness and performance of the proposed neural network.

  14. Stack- and Queue-like Dynamics in Recurrent Neural Networks

    OpenAIRE

    Grüning, A

    2006-01-01

    What dynamics do simple recurrent networks (SRNs) develop to represent stack-like and queue-like memories? SRNs have been widely used as models in cognitive science. However, they are interesting in their own right as non-symbolic computing devices from the viewpoints of analogue computing and dynamical systems theory. In this paper, SRNs are trained on two prototypical formal languages with recursive structures that need stack-like or queue-like memories for processing, respectively. The ev...

  15. Direction-of-change forecasting using a volatility-based recurrent neural network

    NARCIS (Netherlands)

    Bekiros, S.D.; Georgoutsos, D.A.

    2008-01-01

    This paper investigates the profitability of a trading strategy, based on recurrent neural networks, that attempts to predict the direction-of-change of the market in the case of the NASDAQ composite index. The sample extends over the period 8 February 1971 to 7 April 1998, while the sub-period 8 Ap

  16. Evaluation of Heart Rate Variability by Using Wavelet Transform and a Recurrent Neural Network

    Science.gov (United States)

    2007-11-02

    variability is proposed. This method combines the wavelet transform with a recurrent neural network. The features of the proposed method are as follows: 1. The wavelet transform is utilized for the feature extraction so that the local change of heart rate variability in the time-frequency domain can

  17. Folk music style modelling by recurrent neural networks with long short term memory units

    OpenAIRE

    Sturm, Bob; Santos, João Felipe; Korshunova, Iryna

    2015-01-01

    We demonstrate two generative models created by training a recurrent neural network (RNN) with three hidden layers of long short-term memory (LSTM) units. This extends past work in numerous directions, including training deeper models with nearly 24,000 high-level transcriptions of folk tunes. We discuss our on-going work.

  19. Adaptive classification of temporal signals in fixed-weights recurrent neural networks: an existence proof

    CERN Document Server

    Tyukin, Ivan; van Leeuwen, Cees

    2007-01-01

    We address the important theoretical question why a recurrent neural network with fixed weights can adaptively classify time-varied signals in the presence of additive noise and parametric perturbations. We provide a mathematical proof assuming that unknown parameters are allowed to enter the signal nonlinearly and the noise amplitude is sufficiently small.

  20. Congestion Control for ATM Networks Based on Diagonal Recurrent Neural Networks

    Institute of Scientific and Technical Information of China (English)

    HuangYunxian; YanWei

    1997-01-01

    An adaptive control model and its algorithms based on simple diagonal recurrent neural networks are presented for the dynamic congestion control in broadband ATM networks.Two simple dynamic queuing models of real networks are used to test the performance of the suggested control scheme.

  1. A one-layer recurrent neural network for constrained nonconvex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2015-01-01

    In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any neural state of the proposed neural network is convergent to its equilibrium point set which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performances of the proposed neural network.

  2. A one-layer recurrent neural network for constrained nonsmooth invex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2014-02-01

    Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network.

  3. Towards a Unified Recurrent Neural Network Theory:The Uniformly Pseudo-Projection-Anti-Monotone Net

    Institute of Scientific and Technical Information of China (English)

    Zong Ben XU; Chen QIAO

    2011-01-01

    In the past decades, various neural network models have been developed for modeling the behavior of the human brain or performing problem-solving by simulating the behavior of the human brain. Recurrent neural networks are the type of neural network used to model or simulate the associative memory behavior of human beings. A recurrent neural network (RNN) can generally be formalized as a dynamic system associated with two fundamental operators: one is the nonlinear activation operator deduced from the input-output properties of the involved neurons, and the other is the synaptic connections (a matrix) among the neurons. Through carefully examining the properties of the various activation functions used, we introduce a novel type of monotone operators, the uniformly pseudo-projection-anti-monotone (UPPAM) operators, to unify the various RNN models that have appeared in the literature. We develop a unified encoding and stability theory for the UPPAM network model when time is discrete. The established model and theory not only unify but also jointly generalize most of the known results on RNNs. The approach has taken a visible step towards the establishment of a unified mathematical theory of recurrent neural networks.

  4. Artificial neural network in studying factors of hepatic cancer recurrence after hepatectomy

    Institute of Scientific and Technical Information of China (English)

    HE Jia; HE Xian-min; ZHANG Zhi-jian

    2002-01-01

    Objective: To explore the factors affecting liver cancer recurrence after hepatectomy. Methods: BP artificial neural network-Cox regression was introduced to analyze the factors of recurrence in 1457 patients. Results: The factors statistically significant for liver cancer prognosis were selected. There were 18 factors selected by univariate analysis, and 9 factors selected by multivariate analysis. Conclusion: The 9 selected factors can be used as important indexes to evaluate the recurrence of liver cancer after hepatectomy. The artificial neural network is a better method to analyze clinical data, providing scientific and objective data for evaluating the prognosis of liver cancer.

  5. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
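
    The reservoir computing framework mentioned above can be sketched with a rate-based echo state network. This is a simplified stand-in for the paper's spatially extended spiking model; the network size, input signal, and regularization are illustrative:

```python
import numpy as np

# Schematic rate-based echo state network (not the paper's spiking model):
# a fixed random recurrent reservoir is driven by an input signal, and only
# a linear readout is trained (ridge regression) -- the core idea of
# reservoir computing.
rng = np.random.default_rng(0)
N = 200
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
w_in = rng.normal(0, 0.5, N)

t = np.arange(3000)
u = np.sin(2 * np.pi * t / 50)           # input signal
target = np.roll(u, -1)                  # next-step prediction target

states = np.zeros((len(t), N))
x = np.zeros(N)
for k in range(len(t)):
    x = np.tanh(W @ x + w_in * u[k])
    states[k] = x

# Ridge-regression readout, discarding an initial transient (washout)
washout = 200
S, y = states[washout:-1], target[washout:-1]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)

rmse = np.sqrt(np.mean((S @ w_out - y) ** 2))
print(f"readout training RMSE: {rmse:.4f}")
```

    Only the readout weights are learned; the recurrent and input weights stay fixed, which is what makes otherwise unreliable random recurrent networks trainable in this framework.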

  6. Delay-Dependent Stability Criteria of Uncertain Periodic Switched Recurrent Neural Networks with Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Xing Yin

    2011-01-01

    uncertain periodic switched recurrent neural networks with time-varying delays. When an uncertain discrete-time recurrent neural network is a periodic system, it is expressed as a switched neural network over the finite set of switching states. Based on the switched quadratic Lyapunov functional (SQLF) approach and the free-weighting matrix (FWM) approach, some linear matrix inequality criteria are found to guarantee the delay-dependent asymptotic stability of these systems. Two examples illustrate the exactness of the proposed criteria.

  7. Adaptive capability of recurrent neural networks with fixed weights for series-parallel system identification.

    Science.gov (United States)

    Lo, James Ting-Ho

    2009-11-01

    By a fundamental neural filtering theorem, a recurrent neural network with fixed weights is known to be capable of adapting to an uncertain environment. This letter reports some mathematical results on the performance of such adaptation for series-parallel identification of a dynamical system as compared with the performance of the best series-parallel identifier possible under the assumption that the precise value of the uncertain environmental process is given. In short, if an uncertain environmental process is observable (not necessarily constant) from the output of a dynamical system or constant (not necessarily observable), then a recurrent neural network exists as a series-parallel identifier of the dynamical system whose output approaches the output of an optimal series-parallel identifier using the environmental process as an additional input.

  8. Internal Representation of Task Rules by Recurrent Dynamics: The Importance of the Diversity of Neural Responses

    Science.gov (United States)

    Rigotti, Mattia; Rubin, Daniel Ben Dayan; Wang, Xiao-Jing; Fusi, Stefano

    2010-01-01

    Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation. PMID:21048899
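
    The core claim, that random connectivity yields useful mixed selectivity, can be illustrated with a toy setup (an illustrative construction, not the paper's simulation): a context-dependent response, here the XOR of stimulus and context, is not linearly separable in the raw inputs but becomes linearly readable after a layer of randomly connected nonlinear neurons:

```python
import numpy as np

# Toy illustration of the mixed-selectivity argument (setup is illustrative,
# not taken from the paper): respond only when stimulus XOR context is true.
# A linear readout of the raw (stimulus, context) inputs cannot solve this,
# but a readout of randomly connected nonlinear neurons can.
rng = np.random.default_rng(3)
patterns = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
target = np.array([0.0, 1.0, 1.0, 0.0])    # XOR of stimulus and context

G = rng.normal(0, 1, (50, 2))              # random synaptic strengths
b = rng.normal(0, 1, 50)
H = np.tanh(patterns @ G.T + b)            # mixed-selective responses

w = np.linalg.lstsq(H, target, rcond=None)[0]
acc = np.mean((H @ w > 0.5) == (target > 0.5))
print(f"readout accuracy with random mixed selectivity: {acc:.2f}")
```

    Each hidden unit responds to a random combination of stimulus and context, and a plain least-squares readout of those responses classifies all four input-context patterns correctly.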

  9. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much is still unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are further developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  10. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses

    Directory of Open Access Journals (Sweden)

    Mattia Rigotti

    2010-10-01

    Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.

  11. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.

  12. Dynamic stability conditions for Lotka-Volterra recurrent neural networks with delays.

    Science.gov (United States)

    Yi, Zhang; Tan, K K

    2002-07-01

    The Lotka-Volterra model of neural networks, derived from the membrane dynamics of competing neurons, has found successful applications in many "winner-take-all" types of problems. This paper studies the dynamic stability properties of general Lotka-Volterra recurrent neural networks with delays. Conditions for nondivergence of the neural networks are derived. These conditions are based on local inhibition in the networks, thereby allowing these networks to possess a multistability property. Multistability is a necessary property of a network that will enable important neural computations such as those governing the decision-making process. Under these nondivergence conditions, a compact set that globally attracts all the trajectories of a network can be computed explicitly. If the connection weight matrix of a network is symmetric in some sense, and the delays of the network are in L2 space, we can prove that the network will have the property of complete stability.
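
    The winner-take-all behavior referred to above can be reproduced with a minimal simulation (delay-free, with illustrative parameters): under sufficiently strong lateral inhibition, only the neuron receiving the largest input survives.

```python
import numpy as np

# Minimal sketch (parameters are illustrative, not from the paper):
# a Lotka-Volterra recurrent network with lateral inhibition acting as a
# winner-take-all circuit. Neuron i obeys
#   dx_i/dt = x_i * (b_i - x_i - mu * sum_{j != i} x_j),
# and for competition strength mu > 1 only the neuron with the largest
# input b_i survives.
b = np.array([1.0, 0.6, 0.3])   # external inputs
mu = 2.0                        # lateral inhibition strength
x = np.full(3, 0.1)             # identical positive initial states
dt = 0.01
for _ in range(20000):
    total = x.sum()
    x = x + dt * x * (b - x - mu * (total - x))

print(x)  # the first neuron wins; the others are driven to zero
```

    With mu = 2 there is no feasible coexistence equilibrium, so from a generic positive initial state the trajectory converges to the single-winner attractor, one of the multiple stable states the nondivergence conditions allow.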

  13. Simultaneous multichannel signal transfers via chaos in a recurrent neural network.

    Science.gov (United States)

    Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Furumai, Noriyuki; Nara, Shigetoshi

    2015-05-01

    We propose a neural network model that demonstrates the phenomenon of signal transfer between separated neuron groups via other chaotic neurons that show no apparent correlations with the input signal. The model is a recurrent neural network in which it is supposed that synchronous behavior between small groups of input and output neurons has been learned as fragments of high-dimensional memory patterns, and depletion of neural connections results in chaotic wandering dynamics. Computer experiments show that when a strong oscillatory signal is applied to an input group in the chaotic regime, the signal is successfully transferred to the corresponding output group, although no correlation is observed between the input signal and the intermediary neurons. Signal transfer is also observed when multiple signals are applied simultaneously to separate input groups belonging to different memory attractors. In this sense simultaneous multichannel communications are realized, and the chaotic neural dynamics acts as a signal transfer medium in which the signal appears to be hidden.

  14. A non-penalty recurrent neural network for solving a class of constrained optimization problems.

    Science.gov (United States)

    Hosseini, Alireza

    2016-01-01

    In this paper, we present a methodology to analyze the convergence of differential inclusion-based neural networks for solving nonsmooth optimization problems. For a general differential inclusion, we show that if its right-hand-side set-valued map satisfies certain conditions, then the solution trajectory of the differential inclusion converges to the optimal solution set of the corresponding optimization problem. Based on this methodology, we introduce a new recurrent neural network for solving nonsmooth optimization problems. The objective function need not be convex on R^n, nor does the new neural network model require any penalty parameter. We compare the new method with some penalty-based and non-penalty-based models. Moreover, for differentiable cases, we present the circuit diagram of the new neural network.

  15. Ship motion extreme short time prediction of ship pitch based on diagonal recurrent neural network

    Institute of Scientific and Technical Information of China (English)

    SHEN Yan; XIE Mei-ping

    2005-01-01

    A DRNN (diagonal recurrent neural network) and its RPE (recurrent prediction error) learning algorithm are proposed in this paper. The simple structure of the DRNN reduces the computational load. The RPE learning algorithm adjusts the weights along the Gauss-Newton direction; it requires neither second-order local derivatives nor matrix inversions, and its unbiasedness is proved. Applied to the extremely short-time prediction of large ship pitch, the method obtains satisfactory results. The prediction performance of this algorithm is compared with that of autoregression and the periodogram method, and the comparison shows that the proposed algorithm is feasible.

  16. Global robust dissipativity of interval recurrent neural networks with time-varying delay and discontinuous activations.

    Science.gov (United States)

    Duan, Lian; Huang, Lihong; Guo, Zhenyuan

    2016-07-01

    In this paper, the problems of robust dissipativity and robust exponential dissipativity are discussed for a class of recurrent neural networks with time-varying delay and discontinuous activations. We extend an invariance principle for the study of the dissipativity problem of delay systems to the discontinuous case. Based on the developed theory, some novel criteria for checking the global robust dissipativity and global robust exponential dissipativity of the addressed neural network model are established by constructing appropriate Lyapunov functionals and employing the theory of Filippov systems and matrix inequality techniques. The effectiveness of the theoretical results is shown by two examples with numerical simulations.

  17. Identification Simulation for Dynamical System Based on Genetic Algorithm and Recurrent Multilayer Neural Network

    Institute of Scientific and Technical Information of China (English)

    鄢田云; 张翠芳; 靳蕃

    2003-01-01

    An identification simulation for dynamical systems based on genetic algorithm (GA) and recurrent multilayer neural network (RMNN) is presented. In order to reduce the inputs of the model, the RMNN, which can remember and store some previous parameters, is used as the identifier, and for its high efficiency in optimization, the genetic algorithm is introduced to train the RMNN. Simulation results show the effectiveness of the proposed scheme: under the same training algorithm, the identification performance of the RMNN is superior to that of a nonrecurrent multilayer neural network (NRMNN).

  19. Stability Analysis for Recurrent Neural Networks with Time-varying Delay

    Institute of Scientific and Technical Information of China (English)

    Yuan-Yuan Wu; Yu-Qiang Wu

    2009-01-01

    This paper is concerned with the stability analysis for static recurrent neural networks (RNNs) with time-varying delay. By Lyapunov functional method and linear matrix inequality technique, some new delay-dependent conditions are established to ensure the asymptotic stability of the neural network. Expressed in linear matrix inequalities (LMIs), the proposed delay-dependent stability conditions can be checked using the recently developed algorithms. A numerical example is given to show that the obtained conditions can provide less conservative results than some existing ones.

  20. Dynamical stability analysis of delayed recurrent neural networks with ring structure

    Science.gov (United States)

    Zhang, Huaguang; Huang, Yujiao; Cai, Tiaoyang; Wang, Zhanshan

    2014-04-01

    In this paper, multistability is discussed for delayed recurrent neural networks with ring structure and multi-step piecewise linear activation functions. Sufficient criteria are obtained to check the existence of multiple equilibria. A lemma is proposed to explore the number and the cross-direction of purely imaginary roots of the characteristic equation corresponding to the neural network model. Stability of all the equilibria is investigated. The work improves and extends the existing stability results in the literature. Finally, two examples are given to illustrate the effectiveness of the obtained results.

  1. Design and analysis of a novel chaotic diagonal recurrent neural network

    Science.gov (United States)

    Wang, Libiao; Meng, Zhuo; Sun, Yize; Guo, Lei; Zhou, Mingxing

    2015-09-01

    A chaotic neural network model with logistic mapping is proposed to improve the performance of the conventional diagonal recurrent neural network. The network shows rich dynamic behaviors that help it escape from a local minimum and reach the global minimum easily. Then, a simple parameter-modulated chaos controller is adopted to enhance the convergence speed of the network. Furthermore, an adaptive learning algorithm with a robust adaptive dead-zone vector is designed to improve the generalization performance of the network, and weight convergence for the network with the adaptive dead-zone vectors is proved in the sense of Lyapunov functions. Finally, numerical simulations are carried out to demonstrate the correctness of the theory.
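
    The escape mechanism can be illustrated apart from the full network: a logistic-map sequence perturbs gradient descent on a double-well loss, and annealing the perturbation amplitude plays the role of a simple chaos controller. This is an illustrative sketch with invented constants and loss function, not the paper's DRNN.

```python
import numpy as np

# Double-well toy loss: local minimum near w = +1, global minimum near w = -1.
def loss(w):  return (w**2 - 1.0)**2 + 0.3*w
def grad(w):  return 4.0*w*(w**2 - 1.0) + 0.3

z, w, amp, lr = 0.7, 0.9, 0.5, 0.01
best_w, best_f = w, loss(w)
for _ in range(3000):
    z = 4.0*z*(1.0 - z)                 # logistic map in the chaotic regime
    w += -lr*grad(w) + amp*(z - 0.5)    # gradient step plus chaotic search kick
    amp *= 0.998                        # anneal: chaos -> plain gradient descent
    if loss(w) < best_f:
        best_w, best_f = w, loss(w)
print(best_f < loss(0.9))               # True
```

    The chaotic kicks let the search jump around (and possibly between) basins while the amplitude is large; as the amplitude decays, ordinary gradient descent takes over and the iterate settles into a minimum.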

  2. A Study on Protein Residue Contacts Prediction by Recurrent Neural Network

    Institute of Scientific and Technical Information of China (English)

    Liu Gui-xia; Zhu Yuan-xian; Zhou Wen-gang; Huang Yan-xin; Zhou Chun-guang; Wang Rong-xing

    2005-01-01

    A new method is described for using a recurrent neural network with bias units to predict contact maps in proteins. The main inputs to the neural network include pairwise residue information; residue classification by hydrophobicity, polarity, acidity, and basicity; secondary-structure information; and the residue separation between two residues. In our work, a dataset composed of 53 globulin proteins of known 3D structure was used. An average predictive accuracy of 0.29 was obtained. Our results demonstrate the viability of the approach for predicting contact maps.

  3. A recurrent neural network for solving a class of generalized convex optimization problems.

    Science.gov (United States)

    Hosseini, Alireza; Wang, Jun; Hosseini, S Mohammad

    2013-08-01

    In this paper, we propose a penalty-based recurrent neural network for solving a class of constrained optimization problems with generalized convex objective functions. The model has a simple structure described by using a differential inclusion. It is also applicable to any nonsmooth optimization problem with affine equality and convex inequality constraints, provided that the objective function is regular and pseudoconvex on the feasible region of the problem. It is proven herein that the state vector of the proposed neural network globally converges to and stays thereafter in the feasible region in finite time, and converges to the optimal solution set of the problem.

  4. Nonlinear dynamics of direction-selective recurrent neural media.

    Science.gov (United States)

    Xie, Xiaohui; Giese, Martin A

    2002-05-01

    The direction selectivity of cortical neurons can be accounted for by asymmetric lateral connections. Such lateral connectivity leads to a network dynamics with characteristic properties that can be exploited for distinguishing in neurophysiological experiments this mechanism for direction selectivity from other possible mechanisms. We present a mathematical analysis for a class of direction-selective neural models with asymmetric lateral connections. Contrasting with earlier theoretical studies that have analyzed approximations of the network dynamics by neglecting nonlinearities using methods from linear systems theory, we study the network dynamics with nonlinearity taken into consideration. We show that asymmetrically coupled networks can stabilize stimulus-locked traveling pulse solutions that are appropriate for the modeling of the responses of direction-selective neurons. In addition, our analysis shows that outside a certain regime of stimulus speeds the stability of these solutions breaks down, giving rise to lurching activity waves with specific spatiotemporal periodicity. These solutions, and the bifurcation by which they arise, cannot be easily accounted for by classical models for direction selectivity.

  5. Understanding Gating Operations in Recurrent Neural Networks through Opinion Expression Extraction

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2016-08-01

    Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as one of the word-level sequence labeling problems. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM selecting the source of information with its flexible connections and multiplicative gating operations.
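
    The multiplicative gating operations in question are those of a standard LSTM cell. A minimal NumPy sketch of a single cell (random illustrative weights and sizes, not the paper's trained model) makes the element-wise gate products explicit:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
nx, nh = 4, 3
# One parameter matrix per gate, each acting on the concatenation [x_t, h_{t-1}].
Wf, Wi, Wo, Wc = (rng.standard_normal((nh, nx + nh)) * 0.1 for _ in range(4))
bf = np.ones(nh)                  # common trick: bias the forget gate open
bi = bo = bc = np.zeros(nh)

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(Wf @ z + bf)      # forget gate
    i = sigmoid(Wi @ z + bi)      # input gate
    o = sigmoid(Wo @ z + bo)      # output gate
    g = np.tanh(Wc @ z + bc)      # candidate cell state
    c = f * c + i * g             # multiplicative gating of the memory cell
    h = o * np.tanh(c)            # gated exposure of the cell state
    return h, c

h, c = np.zeros(nh), np.zeros(nh)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(nx), h, c)
print(h.shape, c.shape)           # (3,) (3,)
```

    The cell state c is updated purely through element-wise products with the gate activations, which is the mechanism the paper probes when analyzing how the LSTM selects its source of information.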

  6. Delay dependent stability criteria for recurrent neural networks with time varying delays

    Institute of Scientific and Technical Information of China (English)

    Zhanshan WANG; Huaguang ZHANG

    2009-01-01

    This paper aims to present some delay-dependent global asymptotic stability criteria for recurrent neural networks with time-varying delays. The obtained results have no restriction on the magnitude of the derivative of the time-varying delay, and can be easily checked due to the form of linear matrix inequality. By comparison with some previous results, the obtained results are less conservative. A numerical example is utilized to demonstrate the effectiveness of the obtained results.

  7. Complex Dynamical Network Control for Trajectory Tracking Using Delayed Recurrent Neural Networks

    Directory of Open Access Journals (Sweden)

    Jose P. Perez

    2014-01-01

    In this paper, the problem of trajectory tracking is studied. Based on V-stability and Lyapunov theory, a control law that achieves the global asymptotic stability of the tracking error between a delayed recurrent neural network and a complex dynamical network is obtained. To illustrate the analytic results, we present a tracking simulation of a dynamical network whose nodes are one Lorenz dynamical system and three identical Chen dynamical systems.

  8. Stability of Stochastic Reaction-Diffusion Recurrent Neural Networks with Unbounded Distributed Delays

    Directory of Open Access Journals (Sweden)

    Chuangxia Huang

    2011-01-01

    Stability of reaction-diffusion recurrent neural networks (RNNs) with continuously distributed delays and stochastic influence is considered. Some new sufficient conditions to guarantee the almost sure exponential stability and mean square exponential stability of an equilibrium solution are obtained, respectively. Lyapunov's functional method, M-matrix properties, some inequality techniques, and the nonnegative semimartingale convergence theorem are used in our approach. The obtained conclusions improve some published results.

  9. Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition

    OpenAIRE

    Li, Xiangang; Wu, Xihong

    2014-01-01

    Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, in this research, deep extensions on LSTM are investigated considering that deep hierarchical model has turned out to be more efficient than a shallow one. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed an...

  10. Generalized cost-criterion-based learning algorithm for diagonal recurrent neural networks

    Science.gov (United States)

    Wang, Yongji; Wang, Hong

    2000-05-01

    A new generalized-cost-criterion-based learning algorithm for diagonal recurrent neural networks is presented; it takes the form of a recursive prediction error (RPE) algorithm and has second-order convergence. A guideline for the choice of the optimal learning rate is derived from the convergence analysis. The application of this method to dynamic modeling of typical chemical processes shows that the generalized-cost-criterion-trained RPE has higher modeling precision than a BP-trained MLP and the quadratic-cost-criterion-trained RPE (QRPE).

  11. Delay-Dependent Exponential Stability of Stochastic Delayed Recurrent Neural Networks with Markovian Switching

    Institute of Scientific and Technical Information of China (English)

    LIU Hai-feng; WANG Chun-hua; WEI Guo-liang

    2008-01-01

    The exponential stability problem is investigated for a class of stochastic recurrent neural networks with time delay and Markovian switching. By using Itô's differential formula and Lyapunov stability theory, a sufficient condition for the solvability of this problem is derived in terms of linear matrix inequalities, which can be easily checked by resorting to available software packages. A numerical example and its simulation are exploited to demonstrate the effectiveness of the proposed results.

  12. INFLUENCE OF NOISE AND DELAY ON REACTION-DIFFUSION RECURRENT NEURAL NETWORKS

    Institute of Scientific and Technical Information of China (English)

    Li Wu

    2006-01-01

    In this paper, the influence of noise and delay upon the stability property of reaction-diffusion recurrent neural networks (RNNs) with time-varying delay is discussed. New and easily verifiable conditions to guarantee the mean-value exponential stability of an equilibrium solution are derived. The rate of exponential convergence can be estimated by means of a simple computation based on these criteria.

  13. Non-Minimum Phase Nonlinear System Predictive Control Based on Local Recurrent Neural Networks

    Institute of Scientific and Technical Information of China (English)

    张燕; 陈增强; 袁著祉

    2003-01-01

    After a recursive multi-step-ahead predictor for nonlinear systems based on local recurrent neural networks is introduced, an intelligent PID controller is adopted to correct the errors, including identified model errors and the errors accumulated in the recursive process. Characterized by predictive control, this method achieves good control accuracy and robustness. A simulation study shows that this control algorithm is very effective.

  14. Particle Swarm Optimization Recurrent Neural Network Based Z-source Inverter Fed Induction Motor Drive

    OpenAIRE

    R. Selva Santhose Kumar; S.M. Girirajkumar

    2014-01-01

    In this study, a Particle Swarm Optimization (PSO) Recurrent Neural Network (RNN) based Z-source inverter fed induction motor drive is proposed. The proposed method is used to enhance the performance of the induction motor while reducing the Total Harmonic Distortion (THD) and eliminating the oscillation period of the stator current, torque and speed. Here, the PSO technique uses the induction motor speed and reference speed as the input parameters. From the input parameters, it optim...

  15. Diagonal recurrent neural network based adaptive control of nonlinear dynamical systems using Lyapunov stability criterion.

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P

    2017-03-01

    In this paper, adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as a controller. Also, in example 4, the FCRNN is investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, on which the proposed controller is applied. The results so obtained show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
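
    The structural difference from a fully connected recurrent network is that each hidden neuron of a DRNN feeds back only to itself, so the recurrent weights form a vector rather than a matrix. A minimal forward-pass sketch (all dimensions and weights are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nh = 2, 5
Wx = rng.standard_normal((nh, nx)) * 0.3   # input -> hidden weights
wd = rng.uniform(-0.5, 0.5, nh)            # diagonal (self-recurrent) weights
wo = rng.standard_normal(nh) * 0.3         # hidden -> scalar output weights

def drnn_step(x, s):
    # Each hidden neuron feeds back only to itself: wd * s, not a full W @ s.
    s = np.tanh(Wx @ x + wd * s)
    return wo @ s, s

s = np.zeros(nh)
ys = []
for t in range(4):
    y, s = drnn_step(np.array([np.sin(t), np.cos(t)]), s)
    ys.append(y)
print(len(ys))   # 4
```

    With only nh self-recurrent weights instead of an nh-by-nh matrix, the network keeps its memory of past inputs while remaining much cheaper to train, which is what makes the Lyapunov-based update rules tractable.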

  16. Bifurcation analysis on a generalized recurrent neural network with two interconnected three-neuron components

    Energy Technology Data Exchange (ETDEWEB)

    Hajihosseini, Amirhossein, E-mail: hajihosseini@khayam.ut.ac.ir [School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5746 (Iran, Islamic Republic of); Center of Excellence in Biomathematics, School of Mathematics, Statistics and Computer Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of); Maleki, Farzaneh, E-mail: farzanmaleki83@khayam.ut.ac.ir [School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of); School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5746 (Iran, Islamic Republic of); Center of Excellence in Biomathematics, School of Mathematics, Statistics and Computer Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of); Rokni Lamooki, Gholam Reza, E-mail: rokni@khayam.ut.ac.ir [School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of); School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5746 (Iran, Islamic Republic of); Center of Excellence in Biomathematics, School of Mathematics, Statistics and Computer Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of)

    2011-11-15

    Highlights: > We construct a recurrent neural network by generalizing a specific n-neuron network. > Several codimension 1 and 2 bifurcations take place in the newly constructed network. > The newly constructed network has higher capabilities to learn periodic signals. > The normal form theorem is applied to investigate dynamics of the network. > A series of bifurcation diagrams is given to support theoretical results. - Abstract: A class of recurrent neural networks is constructed by generalizing a specific class of n-neuron networks. It is shown that the newly constructed network experiences generic pitchfork and Hopf codimension one bifurcations. It is also proved that the emergence of generic Bogdanov-Takens, pitchfork-Hopf and Hopf-Hopf codimension two, and the degenerate Bogdanov-Takens bifurcation points in the parameter space is possible due to the intersections of codimension one bifurcation curves. The occurrence of bifurcations of higher codimensions significantly increases the capability of the newly constructed recurrent neural network to learn broader families of periodic signals.

  17. Interline power flow controller (IPFC) based damping recurrent neural network controllers for enhancing stability

    Energy Technology Data Exchange (ETDEWEB)

    Banaei, M.R., E-mail: m.banaei@azaruniv.ed [Electrical Engineering Department, Faculty of Engineering, Azarbaijan University of Tarbiat Moallem, Tabriz (Iran, Islamic Republic of); Kami, A. [Electrical Engineering Department, Faculty of Engineering, Azarbaijan University of Tarbiat Moallem, Tabriz (Iran, Islamic Republic of)

    2011-07-15

    Highlights: > A method is presented to improve power system stability using IPFC. > Recurrent neural network controllers damp oscillations in a power system. > Training is based on back propagation with adaptive training parameters. > Selection of the most effective damping control signal is carried out using the SVD method. -- Abstract: This paper presents a method to improve power system stability using IPFC-based, online-learning recurrent neural network controllers for damping oscillations in a power system. Parameters of the damping controllers equipping the IPFC are conventionally tuned using mathematical methods; such control parameters are therefore often fixed and set for particular system configurations or operating points. A multilayer recurrent neural network, which can be tuned for changing system conditions, is used in this paper to damp the oscillations effectively. Training is based on back propagation with adaptive training parameters. This controller is tested against variations in system loading and a fault in the power system, and its performance is compared with that of a controller whose parameters are set by the phase compensation method. Selection of the most effective damping control signal for the design of a robust IPFC damping controller is carried out through the singular value decomposition (SVD) method. Simulation studies show the superior robustness and stabilizing effect of the proposed controller in comparison with the phase compensation method.

  18. Model for a flexible motor memory based on a self-active recurrent neural network.

    Science.gov (United States)

    Boström, Kim Joris; Wagner, Heiko; Prieske, Markus; de Lussanet, Marc

    2013-10-01

    Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model focused on a flexible motor memory for the storage of elementary movement patterns into the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it is capable of continuously modulating the stored patterns. The modulation consists in an approximately linear inter- and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", which amounts to the permanent shifting of computational load from higher- to lower-level neural structures and might help to explain why humans are able to execute learned skills in a fluent and flexible manner without the need for attention to the details of the movement.
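
    The reservoir computing approach mentioned here can be sketched with a small echo-state-style network: a fixed random recurrent layer is driven by an input signal, and only a linear readout is trained (by ridge regression) to produce a target pattern. This is an illustrative miniature, not the authors' thousand-neuron self-active model; the task (a 90-degree phase shift, which requires memory) is invented.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, washout = 200, 500, 100
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
w_in = rng.uniform(-0.5, 0.5, N)

u = np.sin(0.2 * np.arange(T))    # input pattern
y = np.cos(0.2 * np.arange(T))    # target: phase-shifted copy (needs memory)

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])   # fixed, untrained reservoir dynamics
    states[t] = x

X, Y = states[washout:], y[washout:]   # discard the initial transient
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)  # ridge readout
rmse = np.sqrt(np.mean((X @ w_out - Y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```

    Because only the readout weights w_out are learned, several patterns can be stored as separate readouts (or selected by command inputs) over one shared reservoir, which is the property the motor-memory model builds on.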

  19. Improved Generalization in Recurrent Neural Networks Using the Tangent Plane Algorithm

    Directory of Open Access Journals (Sweden)

    P May

    2014-01-01

    The tangent plane algorithm for real time recurrent learning (TPA-RTRL) is an effective online training method for fully recurrent neural networks. TPA-RTRL uses the method of approaching tangent planes to accelerate the learning processes. Compared to the original gradient descent real time recurrent learning algorithm (GD-RTRL) it is very fast and avoids problems like local minima of the search space. However, the TPA-RTRL algorithm actively encourages the formation of large weight values that can be harmful to generalization. This paper presents a new TPA-RTRL variant that encourages small weight values to decay to zero by using a weight elimination procedure built into the geometry of the algorithm. Experimental results show that the new algorithm gives good generalization over a range of network sizes whilst retaining the fast convergence speed of the TPA-RTRL algorithm.

  20. Recurrent RBFN-based fuzzy neural network control for X-Y-theta motion control stage using linear ultrasonic motors.

    Science.gov (United States)

    Lin, Faa-Jeng; Shieh, Po-Huang

    2006-12-01

    A recurrent radial basis function network (RBFN) based fuzzy neural network (FNN) control system is proposed to control the position of an X-Y-theta motion control stage using linear ultrasonic motors (LUSMs) to track various contours in this study. The proposed recurrent RBFN-based FNN combines the merits of the self-constructing fuzzy neural network (SCFNN), the recurrent neural network (RNN), and the RBFN. Moreover, the structure and parameter learning phases of the recurrent RBFN-based FNN are performed concurrently and online. The structure learning is based on the partition of the input space, and the parameter learning is based on the supervised gradient descent method using a delta adaptation law. The experimental results for various contours show that the dynamic behaviors of the proposed recurrent RBFN-based FNN control system are robust with regard to uncertainties.

  1. An R implementation of a Recurrent Neural Network Trained by Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Bogdan Oancea

    2016-06-01

    Nowadays there are several techniques used for forecasting, with different performances and accuracies. One of the best-performing techniques for time series prediction is neural networks. The accuracy of the predictions greatly depends on the network architecture and training method. In this paper we describe an R implementation of a recurrent neural network trained by the Extended Kalman Filter. For the implementation of the network we used the Matrix package, which allows efficient vector-matrix and matrix-matrix operations. We tested the performance of our R implementation by comparing it with a pure C++ implementation, and showed that R can achieve about 75% of the performance of the C++ program. Considering the other advantages of R, our results recommend R as a serious alternative to classical programming languages for high performance implementations of neural networks.
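
    In EKF training, the network weights are treated as the filter state and the network output as the measurement. The following compact sketch (in NumPy rather than R) trains a tiny diagonal-recurrent toy network to predict a sine one step ahead; the finite-difference measurement Jacobian and the choice to hold the carried hidden state fixed during linearization are simplifications of our choosing, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
nh = 4
w = rng.standard_normal(3 * nh) * 0.1        # flattened [Wx | wd | wo]

def step(w, u, s):
    Wx, wd, wo = w[:nh], w[nh:2*nh], w[2*nh:]
    s_new = np.tanh(Wx * u + wd * s)         # diagonal-recurrent hidden layer
    return wo @ s_new, s_new

P = 0.1 * np.eye(3 * nh)                     # weight covariance
Q, R = 1e-4 * np.eye(3 * nh), 0.05           # process / measurement noise
s, eps, errs = np.zeros(nh), 1e-5, []
for t in range(400):
    u = np.sin(0.3 * t)
    d = np.sin(0.3 * (t + 1))                # target: one-step-ahead prediction
    y, s_new = step(w, u, s)
    # Measurement Jacobian dy/dw by finite differences (hidden state held fixed).
    H = np.zeros(3 * nh)
    for i in range(3 * nh):
        wp = w.copy(); wp[i] += eps
        yp, _ = step(wp, u, s)
        H[i] = (yp - y) / eps
    S = H @ P @ H + R                        # innovation variance (scalar)
    K = P @ H / S                            # Kalman gain
    w = w + K * (d - y)                      # EKF weight update
    P = P - np.outer(K, H @ P) + Q
    errs.append(abs(d - y))
    s = s_new
print(f"mean abs error, first 50 steps: {np.mean(errs[:50]):.3f}")
print(f"mean abs error, last 50 steps:  {np.mean(errs[-50:]):.3f}")
```

    The per-sample covariance update is what gives EKF training its fast convergence relative to plain gradient descent, at the cost of maintaining the weight covariance matrix P.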

  2. Complete stability of delayed recurrent neural networks with Gaussian activation functions.

    Science.gov (United States)

    Liu, Peng; Zeng, Zhigang; Wang, Jun

    2017-01-01

    This paper addresses the complete stability of delayed recurrent neural networks with Gaussian activation functions. By means of the geometrical properties of the Gaussian function and algebraic properties of nonsingular M-matrices, some sufficient conditions are obtained to ensure that for an n-neuron neural network, there are exactly 3^k equilibrium points with 0≤k≤n, among which 2^k equilibrium points are locally exponentially stable and 3^k-2^k are unstable. Moreover, it is concluded that all the states converge to one of the equilibrium points; i.e., the neural networks are completely stable. The derived conditions herein can be easily tested. Finally, a numerical example is given to illustrate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Undamped Oscillations Generated by Hopf Bifurcations in Fractional-Order Recurrent Neural Networks With Caputo Derivative.

    Science.gov (United States)

    Xiao, Min; Zheng, Wei Xing; Jiang, Guoping; Cao, Jinde

    2015-12-01

    In this paper, a fractional-order recurrent neural network is proposed and several topics related to the dynamics of such a network are investigated, such as the stability, Hopf bifurcations, and undamped oscillations. The stability domain of the trivial steady state is completely characterized with respect to network parameters and orders of the commensurate-order neural network. Based on the stability analysis, the critical values of the fractional order are identified, where Hopf bifurcations occur and a family of oscillations bifurcate from the trivial steady state. Then, the parametric range of undamped oscillations is also estimated and the frequency and amplitude of oscillations are determined analytically and numerically for such commensurate-order networks. Meanwhile, it is shown that the incommensurate-order neural network can also exhibit a Hopf bifurcation as the network parameter passes through a critical value which can be determined exactly. The frequency and amplitude of bifurcated oscillations are determined.

  4. Robust recurrent neural network modeling for software fault detection and correction prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Q.P. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: g0305835@nus.edu.sg; Xie, M. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: mxie@nus.edu.sg; Ng, S.H. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: isensh@nus.edu.sg; Levitin, G. [Israel Electric Corporation, Reliability and Equipment Department, R and D Division, Haifa 31000 (Israel)]. E-mail: levitin@iec.co.il

    2007-03-15

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with fault correction treated as a delayed process. On the other hand, the artificial neural network model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are carried out on a real data set.

  5. Robust exponential stability analysis of a larger class of discrete-time recurrent neural networks

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The robust exponential stability of a larger class of discrete-time recurrent neural networks (RNNs) is explored in this paper. A novel neural network model, named standard neural network model (SNNM), is introduced to provide a general framework for stability analysis of RNNs. Most of the existing RNNs can be transformed into SNNMs to be analyzed in a unified way. Applying Lyapunov stability theory and the S-Procedure technique, two useful criteria of robust exponential stability for the discrete-time SNNMs are derived. The conditions presented are formulated as linear matrix inequalities (LMIs) to be easily solved using existing efficient convex optimization techniques. An example is presented to demonstrate the transformation procedure and the effectiveness of the results.
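    The Lyapunov machinery behind such LMI criteria can be sketched for the linear part of a discrete-time network (the matrix A below is illustrative, not the SNNM of the paper):

```python
import numpy as np

# Discrete-time Lyapunov test underlying such LMI criteria: x_{k+1} = A x_k
# is exponentially stable iff A'PA - P = -Q has a solution P > 0 for Q > 0.
A = np.array([[0.5, 0.2],
              [-0.1, 0.6]])
Q = np.eye(2)

# vec(A' P A) = kron(A', A') vec(P), so solve (I - kron(A', A')) vec(P) = vec(Q).
n = A.shape[0]
K = np.eye(n * n) - np.kron(A.T, A.T)
P = np.linalg.solve(K, Q.reshape(-1)).reshape(n, n)

eigs = np.linalg.eigvalsh((P + P.T) / 2)
print(np.all(eigs > 0))   # True: P positive definite => exponential stability
```

    Full SNNM criteria couple this with sector bounds on the activation; a convex-optimization package would be needed for the general LMI case.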

  6. LYAPUNOV-Based Sensor Failure Detection and Recovery for the Reverse Water Gas Shift Process

    Science.gov (United States)

    Haralambous, Michael G.

    2002-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, the models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.
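    A toy analogue of the report's residual-based idea, with an invented scalar plant, observer gain, and threshold, might look as follows:

```python
import math

# Toy analogue of residual-based sensor failure detection (the plant,
# observer gain, and threshold below are invented for illustration).
# Plant: dx/dt = -x + u, sensed as y = x (+ bias after the fault).
dt, T_fault = 0.01, 5.0
L_gain, threshold = 2.0, 0.1

x, xhat = 1.0, 0.0
flagged_at = None
for k in range(1000):
    t = k * dt
    u = math.sin(t)
    y = x + (0.5 if t >= T_fault else 0.0)    # sensor bias appears at t = 5
    r = y - xhat                              # residual (scalar inequality check)
    if abs(r) > threshold and t > 1.0 and flagged_at is None:
        flagged_at = t                        # ignore the initial observer transient
    # integrate plant and Luenberger observer (forward Euler)
    x += dt * (-x + u)
    xhat += dt * (-xhat + u + L_gain * r)
print(flagged_at)  # fault flagged shortly after t = 5
```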

  7. Passivity/Lyapunov based controller design for trajectory tracking of flexible joint manipulators

    Science.gov (United States)

    Sicard, Pierre; Wen, John T.; Lanari, Leonardo

    1992-01-01

    A passivity and Lyapunov based approach for the control design for the trajectory tracking problem of flexible joint robots is presented. The basic structure of the proposed controller is the sum of a model-based feedforward and a model-independent feedback. Feedforward selection and solution are analyzed for a general model for flexible joints, and for more specific and practical model structures. Passivity theory is used to design a motor state-based controller in order to input-output stabilize the error system formed by the feedforward. Observability conditions for asymptotic stability are stated and verified. In order to accommodate modeling uncertainties and to allow for the implementation of a simplified feedforward compensation, the stability of the system is analyzed in the presence of approximations in the feedforward by using a Lyapunov based robustness analysis. It is shown that under certain conditions, e.g., when the desired trajectory varies slowly enough, stability is maintained for various approximations of a canonical feedforward.

  9. Adaptive Flutter Suppression for a Fighter Wing via Recurrent Neural Networks over a Wide Transonic Range

    Directory of Open Access Journals (Sweden)

    Haojie Liu

    2016-01-01

    Full Text Available The paper presents a digital adaptive controller of recurrent neural networks for the active flutter suppression of a wing structure over a wide transonic range. The basic idea behind the controller is as follows. At first, the parameters of the recurrent neural networks, such as the number of neurons and the learning rate, are initially determined so as to suppress the flutter under a specific flight condition in the transonic regime. Then, the controller automatically adjusts itself to a new flight condition by updating the synaptic weights of the networks online via the real-time recurrent learning algorithm. Hence, the controller is able to suppress the aeroelastic instability of the wing structure over a range of flight conditions in the transonic regime. To demonstrate the effectiveness and robustness of the controller, the aeroservoelastic model of a typical fighter wing with a tip missile was established and a single-input/single-output controller was synthesized. Numerical simulations of the open-loop and closed-loop aeroservoelastic system were performed to demonstrate the efficacy of the adaptive controller with respect to changes of flight parameters in the transonic regime.

  10. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  11. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    Directory of Open Access Journals (Sweden)

    Erik Marchi

    2017-01-01

    Full Text Available In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases.

  12. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    Science.gov (United States)

    Vesperini, Fabio; Schuller, Björn

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases. PMID:28182121

  13. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ordóñez

    2016-01-01

    Full Text Available Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation.

  14. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.

  15. A generalized LSTM-like training algorithm for second-order recurrent neural networks.

    Science.gov (United States)

    Monner, Derek; Reggia, James A

    2012-01-01

    The long short-term memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM's original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the generalized long short-term memory (LSTM-g) training algorithm, which provides LSTM-like locality while being applicable without modification to a much wider range of second-order network architectures. With LSTM-g, all units have an identical set of operating instructions for both activation and learning, subject only to the configuration of their local environment in the network; this is in contrast to the original LSTM training algorithm, where each type of unit has its own activation and training instructions. When applied to LSTM architectures with peephole connections, LSTM-g takes advantage of an additional source of back-propagated error which can enable better performance than the original algorithm. Enabled by the broad architectural applicability of LSTM-g, we demonstrate that training recurrent networks engineered for specific tasks can produce better results than single-layer networks. We conclude that LSTM-g has the potential to both improve the performance and broaden the applicability of spatially and temporally local gradient-based training algorithms for recurrent neural networks.
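    For orientation, a forward pass of a standard peephole LSTM cell, the architecture class LSTM-g targets, can be sketched as follows (random weights, and this is not the LSTM-g learning rule itself):

```python
import numpy as np

# Forward pass of a single peephole LSTM cell (illustrative only; this
# shows the architecture LSTM-g applies to, not the LSTM-g algorithm).
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {k: rng.normal(0, 0.1, (n_hid, n_in + n_hid)) for k in ("i", "f", "o", "g")}
b = {k: np.zeros(n_hid) for k in ("i", "f", "o", "g")}
p = {k: rng.normal(0, 0.1, n_hid) for k in ("i", "f", "o")}  # peephole weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z + b["i"] + p["i"] * c)      # input gate peeks at c
    f = sigmoid(W["f"] @ z + b["f"] + p["f"] * c)      # forget gate peeks at c
    g = np.tanh(W["g"] @ z + b["g"])                   # candidate cell input
    c_new = f * c + i * g
    o = sigmoid(W["o"] @ z + b["o"] + p["o"] * c_new)  # output gate peeks at c_new
    return o * np.tanh(c_new), c_new

h = c = np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c)
print(h.shape, np.all(np.abs(h) < 1.0))  # (4,) True: outputs stay in (-1, 1)
```

    The peephole terms are the "additional source of back-propagated error" the abstract refers to.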

  16. Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.

    Science.gov (United States)

    Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng

    2016-02-01

    This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.

  17. Low-complexity nonlinear adaptive filter based on a pipelined bilinear recurrent neural network.

    Science.gov (United States)

    Zhao, Haiquan; Zeng, Xiangping; He, Zhengyou

    2011-09-01

    To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architecture of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules that are cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Since the modules of the PBLRNN can be executed simultaneously in a pipelined parallel fashion, a significant improvement in computational efficiency results. Moreover, due to the nesting of modules, the performance of the PBLRNN can be further improved. To suit the modular architecture, a modified adaptive amplitude real-time recurrent learning algorithm is derived based on the gradient descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance compared to the single BLRNN and RNN models.
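    A single bilinear recurrent neuron, the building block such BLRNN modules stack, can be sketched with arbitrary coefficients (a minimal illustration, not the paper's filter):

```python
import math

# One bilinear recurrent neuron (illustrative coefficients):
#   y[n] = tanh(a*y[n-1] + c*x[n] + b*y[n-1]*x[n])
# The cross term b*y[n-1]*x[n] is the "bilinear" part that plain
# linear-in-state recurrent neurons lack.
a, b, c = 0.5, 0.3, 1.0

def blrnn_run(xs):
    y, out = 0.0, []
    for x in xs:
        y = math.tanh(a * y + c * x + b * y * x)
        out.append(y)
    return out

ys = blrnn_run([math.sin(0.3 * n) for n in range(50)])
print(max(abs(v) for v in ys) < 1.0)  # True: tanh keeps the output bounded
```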

  18. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    Directory of Open Access Journals (Sweden)

    Eduard Grinke

    2015-10-01

    Full Text Available Walking animals, like insects, with little neural computing can effectively perform complex behaviors. They can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations as well as escaping from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.

  19. Improving protein disorder prediction by deep bidirectional long short-term memory recurrent neural networks.

    Science.gov (United States)

    Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-03-01

    Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenging problem in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have implemented deep bidirectional LSTM recurrent neural networks in the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, has steadily improved over a similar method using a traditional, window-based neural network (SPINE-D) in all datasets tested without separate training on short and long disordered regions. Independent tests on four other datasets, including the datasets from critical assessment of structure prediction (CASP) techniques and >10 000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTM with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-Disorder is available as a web server and as a standalone program at: http://sparks-lab.org/server/SPOT-disorder/index.php . j.hanson@griffith.edu.au or yuedong.yang@griffith.edu.au or yaoqi.zhou@griffith.edu.au. Supplementary data are available at Bioinformatics online.

  20. Iterative prediction of chaotic time series using a recurrent neural network

    Energy Technology Data Exchange (ETDEWEB)

    Essawy, M.A.; Bodruzzaman, M. [Tennessee State Univ., Nashville, TN (United States). Dept. of Electrical and Computer Engineering; Shamsi, A.; Noel, S. [USDOE Morgantown Energy Technology Center, WV (United States)

    1996-12-31

    Chaotic systems are known for their unpredictability due to their sensitive dependence on initial conditions. When only time series measurements from such systems are available, neural network based models are preferred due to their simplicity, availability, and robustness. However, the type of neural network used should be capable of modeling the highly non-linear behavior and the multi-attractor nature of such systems. In this paper the authors use a special type of recurrent neural network called the "Dynamic System Imitator (DSI)", which has been proven to be capable of modeling very complex dynamic behaviors. The DSI is a fully recurrent neural network that is specially designed to model a wide variety of dynamic systems. The prediction method presented in this paper is based upon predicting one step ahead in the time series, and using that predicted value to iteratively predict the following steps. This method was applied to chaotic time series generated from the logistic, Henon, and the cubic equations, in addition to experimental pressure drop time series measured from a Fluidized Bed Reactor (FBR), which is known to exhibit chaotic behavior. The time behavior and state space attractor of the actual and network synthetic chaotic time series were analyzed and compared. The correlation dimension and the Kolmogorov entropy for both the original and network synthetic data were computed. They were found to resemble each other, confirming the success of the DSI based chaotic system modeling.
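    The iterate-one-step-ahead scheme can be demonstrated on the logistic map, with a quadratic least-squares fit standing in for the DSI network (a sketch under that substitution, not the paper's model):

```python
import numpy as np

# Iterative (closed-loop) prediction on the logistic map x_{n+1} = 4x(1-x).
# A quadratic least-squares fit stands in for the DSI network here; the
# prediction scheme (feed each prediction back as the next input) is the
# same one described above.
xs = [0.3]
for _ in range(200):
    xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
xs = np.array(xs)

coeffs = np.polyfit(xs[:150], xs[1:151], deg=2)   # one-step-ahead model

x, preds = xs[150], []
for _ in range(5):                                # iterate the model's output
    x = np.polyval(coeffs, x)
    preds.append(x)

true = xs[151:156]
print(np.max(np.abs(np.array(preds) - true)))  # tiny: the exact map is quadratic
```

    With a chaotic map, any one-step model error grows exponentially under iteration, which is why the paper compares attractor statistics (correlation dimension, Kolmogorov entropy) rather than long-horizon pointwise error.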

  1. Reinforced recurrent neural networks for multi-step-ahead flood forecasts

    Science.gov (United States)

    Chen, Pin-An; Chang, Li-Chiu; Chang, Fi-John

    2013-08-01

    Because true values are not available at every time step in an online learning algorithm for multi-step-ahead (MSA) forecasts, an MSA reinforced real-time recurrent learning algorithm for recurrent neural networks (R-RTRL NN) is proposed. The main merit of the proposed method is to repeatedly adjust model parameters with the current information, including the latest observed values and the model's outputs, to enhance the reliability and the forecast accuracy of the proposed method. The sequential formulation of the R-RTRL NN is derived. To demonstrate its reliability and effectiveness, the proposed R-RTRL NN is implemented to make 2-, 4- and 6-step-ahead forecasts for a famous benchmark chaotic time series and a reservoir flood inflow series in North Taiwan. For comparison purposes, three comparative neural networks (two dynamic and one static) were implemented. Numerical and experimental results indicate that the R-RTRL NN not only achieves superior performance to the comparative networks but also significantly improves the precision of MSA forecasts for both the chaotic time series and the reservoir inflow case during typhoon events, with effective mitigation of the time-lag problem.

  2. Nonlinear Lyapunov-based boundary control of distributed heat transfer mechanisms in membrane distillation plant

    KAUST Repository

    Eleiwi, Fadi

    2015-07-01

    This paper presents a nonlinear Lyapunov-based boundary control for the temperature difference across the boundary layers of a membrane distillation process. The heat transfer mechanisms inside the process are modeled with a 2D advection-diffusion equation. The model is semi-discretized in space, and a nonlinear state-space representation is provided. The control is designed to force the temperature difference along the membrane sides to track a desired reference asymptotically, and hence a desired flux would be generated. Certain constraints are put on the control law inputs to be within an economic range of energy supplies. The effect of the controller gain is discussed. Simulations with real process parameters for the model and the controller are provided. © 2015 American Automatic Control Council.
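    A minimal 1D analogue of the semi-discretization step (made-up parameters, not the paper's 2D model):

```python
import numpy as np

# Semi-discretization sketch (a 1D analogue of the paper's 2D model,
# with made-up parameters): u_t = D*u_xx - v*u_x on (0, 1) with u = 0
# at both boundaries; central differences on the interior grid give
# the state-space form dx/dt = A x used for the control design.
n, D, v = 20, 1.0e-2, 5.0e-2
h = 1.0 / (n + 1)

A = np.zeros((n, n))
for i in range(n):
    A[i, i] = -2.0 * D / h**2
    if i > 0:
        A[i, i - 1] = D / h**2 + v / (2 * h)   # from the u_xx and -v*u_x stencils
    if i < n - 1:
        A[i, i + 1] = D / h**2 - v / (2 * h)

print(np.max(np.linalg.eigvals(A).real) < 0)   # True: open-loop dynamics decay
```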

  3. A unifying Lyapunov-based framework for the event-triggered control of nonlinear systems

    CERN Document Server

    Postoyan, Romain; Nesic, Dragan; Tabuada, Paulo

    2011-01-01

    We present a prescriptive framework for the event-triggered control of nonlinear systems. Rather than closing the loop periodically, as traditionally done in digital control, in event-triggered implementations the loop is closed according to a state-dependent criterion. Event-triggered control is especially well suited for embedded systems and networked control systems since it reduces the amount of resources needed for control such as communication bandwidth. By modeling the event-triggered implementations as hybrid systems, we provide Lyapunov-based conditions to guarantee the stability of the resulting closed-loop system and explain how they can be utilized to synthesize event-triggering rules. We illustrate the generality of the approach by showing how it encompasses several existing event-triggering policies and by developing new strategies which further reduce the resources needed for control.
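    A scalar instance of the event-triggered idea, with an invented plant, gain, and trigger threshold:

```python
# Scalar event-triggered loop in the spirit described above (plant, gain,
# and trigger threshold are invented): plant dx/dt = x + u with u = -2*x_hat,
# where the held value x_hat is updated only at events.  The trigger fires
# when the measurement error exceeds a fraction of the state, which keeps
# the Lyapunov function V = x^2/2 decreasing between events.
dt, sigma = 1e-3, 0.25
x = x_hat = 1.0
events = 0
for k in range(5000):                    # simulate 5 seconds, forward Euler
    if abs(x_hat - x) >= sigma * abs(x): # state-dependent trigger
        x_hat = x                        # close the loop only now
        events += 1
    u = -2.0 * x_hat
    x += dt * (x + u)
print(abs(x) < 0.05, events)  # stabilized with only a few dozen updates
```

    The point of the framework is exactly this trade: far fewer loop closures than periodic sampling would need, with stability still guaranteed by the Lyapunov condition.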

  4. Lyapunov-based boundary feedback control in multi-reach canals

    Institute of Scientific and Technical Information of China (English)

    CEN LiHui; XI YuGeng

    2009-01-01

    This paper presents a Lyapunov-based approach to design the boundary feedback control for an open-channel network composed of a cascade of multi-reach canals, each described by a pair of Saint-Venant equations. The weighted sum of entropies of the multi-reaches is adopted to construct the Lyapunov function. The time derivative of the Lyapunov function is expressed by the water depth variations at the gate boundaries, based on which a class of boundary feedback controllers is presented to guarantee the local asymptotic closed-loop stability. The advantage of this approach is that only the water level depths at the gate boundaries are measured as the feedback.

  5. A Lyapunov-based three-axis attitude intelligent control approach for unmanned aerial vehicle

    Institute of Scientific and Technical Information of China (English)

    A.H. Mazinan

    2015-01-01

    A novel Lyapunov-based three-axis attitude intelligent control approach via an allocation scheme is considered in the proposed research to deal with the kinematics and dynamics of unmanned aerial vehicle systems. There is a consensus among experts of this field that new outcomes in modeling and controlling the present complicated systems are highly appreciated with respect to the state-of-the-art. The control scheme presented here is organized as a new integration of linear and nonlinear control approaches: the angular velocities in the three axes of the system are dealt with in the inner closed-loop control, and the corresponding rotation angles are dealt with in the outer closed-loop control. The linear control in the outer loop is first designed through a proportional-derivative based linear quadratic regulator (PD-based LQR) approach with optimum coefficients, while the nonlinear control in the corresponding inner loop is then realized through a Lyapunov-based approach in the presence of uncertainties and disturbances. To complete the inner closed-loop control, a pulse-width pulse-frequency (PWPF) modulator is employed to handle the on-off thrusters. Furthermore, the number of these on-off thrusters may be increased in line with the investigated control efforts to provide accurate overall performance of the system, where the control allocation scheme is realized in the proposed strategy. The dynamics and kinematics of the unmanned aerial vehicle systems are investigated through the quaternion matrix and its corresponding vector to avoid singularity of the results. At the end, the investigated outcomes are presented in comparison with a number of potential benchmarks to verify the performance of the approach.
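    The singularity-free quaternion kinematics the abstract alludes to can be sketched as follows (illustrative body rate and integrator, not the paper's controller):

```python
import numpy as np

# Quaternion attitude kinematics (the singularity-free representation the
# abstract refers to): q_dot = 0.5 * Omega(w) @ q, scalar-first convention,
# integrated here for a constant body rate about the z-axis.
def omega_matrix(w):
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

q = np.array([1.0, 0.0, 0.0, 0.0])       # identity attitude
w = np.array([0.0, 0.0, 1.0])            # 1 rad/s about body z
dt = 1e-4
for _ in range(10000):                   # integrate for 1 s (Euler + renormalize)
    q = q + 0.5 * dt * omega_matrix(w) @ q
    q /= np.linalg.norm(q)               # stay on the unit sphere

theta = 1.0                              # rotation angle after 1 s
print(abs(q[0] - np.cos(theta / 2)) < 1e-3, abs(q[3] - np.sin(theta / 2)) < 1e-3)
```

    Unlike Euler angles, this parameterization has no gimbal-lock singularity for any attitude.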

  6. A Multilayer Recurrent Fuzzy Neural Network for Accurate Dynamic System Modeling

    Institute of Scientific and Technical Information of China (English)

    LIU He; HUANG Dao

    2008-01-01

    A multilayer recurrent fuzzy neural network (MRFNN) is proposed for accurate dynamic system modeling. The proposed MRFNN has six layers combined with the T-S fuzzy model. The recurrent structures are formed by local feedback connections in the membership layer and the rule layer. With these feedbacks, the fuzzy sets are time-varying and the temporal problem of dynamic systems can be solved well. The parameters of the MRFNN are learned by chaotic search (CS) and least square estimation (LSE) simultaneously, where CS tunes the premise parameters and LSE updates the consequent coefficients accordingly. Simulation results show the proposed approach is effective for dynamic system modeling with high accuracy.

  7. Multicomponent Kinetic Determination by Wavelet Packet Transform Based Elman Recurrent Neural Network Method

    Institute of Scientific and Technical Information of China (English)

    REN Shou-xin; GAO Ling

    2004-01-01

    This paper covers a novel method named wavelet packet transform based Elman recurrent neural network (WPTERNN) for the simultaneous kinetic determination of periodate and iodate. The wavelet packet representations of signals provide a local time-frequency description, so the quality of noise removal can be improved in the wavelet packet domain. The Elman recurrent network was applied to non-linear multivariate calibration. In this case, by means of optimization, the wavelet function, decomposition level, and number of hidden nodes for the WPTERNN method were selected as D4, 5, and 5, respectively. A program, PWPTERNN, was designed to perform the multicomponent kinetic determination. The relative standard errors of prediction (RSEP) for all components with WPTERNN, Elman RNN, and PLS were 3.23%, 11.8%, and 10.9%, respectively. The experimental results show that the method outperforms the others.
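
    As background, an Elman network feeds the previous hidden activations back through a context layer at each time step. A minimal sketch with hypothetical layer sizes and random weights (not the paper's calibrated model):

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanRNN:
    """Minimal Elman network: the hidden state is fed back via a context layer."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context (recurrent) weights
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        # The context layer holds the previous hidden activations,
        # giving the network short-term memory of the input sequence.
        self.h = np.tanh(self.W_in @ x + self.W_ctx @ self.h)
        return self.W_out @ self.h

net = ElmanRNN(n_in=1, n_hidden=5, n_out=1)
outputs = [net.step(np.array([u])) for u in [0.1, 0.5, -0.2]]
```

    In a calibration setting like the one described, the inputs would be the (wavelet-denoised) kinetic signal samples and the output the predicted analyte concentration.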

  8. Identification of Jets Containing b-Hadrons with Recurrent Neural Networks at the ATLAS Experiment

    CERN Document Server

    CERN. Geneva

    2017-01-01

    A novel b-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS Experiment. This talk presents the expected performance of RNN-based b-tagging in simulated $t\bar{t}$ events. The RNN-based b-tagging processes properties of tracks associated to jets, represented as sequences. In contrast to traditional impact-parameter-based b-tagging algorithms, which assume the tracks of jets are independent of each other, RNN-based b-tagging can exploit the spatial and kinematic correlations of tracks originating from the same b-hadrons. The neural network nature of the tagging algorithm also allows the flexibility of extending the input features to include more track properties than can be effectively used in traditional algorithms.

  9. Object class segmentation of RGB-D video using recurrent convolutional neural networks.

    Science.gov (United States)

    Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven

    2017-04-01

    Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and take advantage of local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Nonlinear Recurrent Neural Network Predictive Control for Energy Distribution of a Fuel Cell Powered Robot

    Directory of Open Access Journals (Sweden)

    Qihong Chen

    2014-01-01

    Full Text Available This paper presents a neural network predictive control strategy to optimize power distribution for a fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system by employing a time-variant autoregressive moving average with exogenous input (ARMAX) model, using a recurrent neural network to represent the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed as operating-state-dependent time-varying local linear behavior in this framework, a linear constrained model predictive control algorithm is developed to optimize the power splitting between the fuel cell and the ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuation of the fuel cell current. Experiment and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and the ultracapacitor and limit the rate of change of the fuel cell current, so as to extend the lifetime of the fuel cell.

  11. Nonlinear recurrent neural network predictive control for energy distribution of a fuel cell powered robot.

    Science.gov (United States)

    Chen, Qihong; Long, Rong; Quan, Shuhai; Zhang, Liyan

    2014-01-01

    This paper presents a neural network predictive control strategy to optimize power distribution for a fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system by employing a time-variant autoregressive moving average with exogenous input (ARMAX) model, using a recurrent neural network to represent the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed as operating-state-dependent time-varying local linear behavior in this framework, a linear constrained model predictive control algorithm is developed to optimize the power splitting between the fuel cell and the ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuation of the fuel cell current. Experiment and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and the ultracapacitor and limit the rate of change of the fuel cell current, so as to extend the lifetime of the fuel cell.

  12. Continuous attractors of Lotka-Volterra recurrent neural networks with infinite neurons.

    Science.gov (United States)

    Yu, Jiali; Yi, Zhang; Zhou, Jiliu

    2010-10-01

    Continuous attractors of Lotka-Volterra recurrent neural networks (LV RNNs) with infinite neurons are studied in this brief. A continuous attractor is a collection of connected equilibria, and it has been recognized as a suitable model for describing the encoding of continuous stimuli in neural networks. The existence of continuous attractors depends on many factors, such as the connectivity and the external inputs of the network. A continuous attractor can be stable or unstable. It is shown in this brief that an LV RNN can possess multiple continuous attractors if the synaptic connections and the external inputs are Gaussian-like in shape. Moreover, both stable and unstable continuous attractors can coexist in a network. Explicit expressions of the continuous attractors are calculated. Simulations are employed to illustrate the theory.
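
    The Lotka-Volterra dynamics underlying such networks can be simulated directly. A toy sketch under assumed parameters (no lateral connections, constant input), showing that the state stays nonnegative and settles to an equilibrium:

```python
import numpy as np

def lv_rnn_step(x, W, h, dt=0.01):
    # Lotka-Volterra RNN dynamics: dx_i/dt = x_i * (-x_i + (W x)_i + h_i),
    # discretized with an Euler step. The multiplicative form keeps x_i >= 0.
    return x + dt * x * (-x + W @ x + h)

n = 5
W = np.zeros((n, n))          # no lateral connections in this toy example
h = np.full(n, 2.0)           # constant external input
x = np.full(n, 0.1)           # small positive initial state
for _ in range(5000):
    x = lv_rnn_step(x, W, h)
```

    With these assumed parameters each unit follows a logistic curve toward the equilibrium x_i = 2; richer choices of W and h are what produce the connected families of equilibria (continuous attractors) studied in the paper.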

  13. RECURRENT NEURAL NETWORK MODEL BASED ON PROJECTIVE OPERATOR AND ITS APPLICATION TO OPTIMIZATION PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The recurrent neural network (RNN) model based on a projective operator was studied. Different from former studies, the value region of the projective operator in the neural network in this paper is a general closed convex subset of n-dimensional Euclidean space and is not a compact convex set in general; that is, the value region of the projective operator may be unbounded. It was proved that the network has a global solution and that its solution trajectory converges to some equilibrium set whenever the objective function satisfies certain conditions. The model was then applied to continuously differentiable optimization and to nonlinear or implicit complementarity problems. In addition, simulation experiments confirm the efficiency of the RNN.
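
    Projection-based recurrent dynamics of this kind can be sketched as a projected gradient flow onto an unbounded closed convex set (here the nonnegative orthant), discretized with Euler steps. This is a generic illustration under assumed dynamics and a made-up objective, not the paper's exact model:

```python
import numpy as np

def project_nonneg(y):
    # Projection onto the closed convex (unbounded) set Omega = R^n_+.
    return np.maximum(y, 0.0)

def rnn_flow(x, grad_f, alpha=0.5, dt=0.05, steps=2000):
    # Projection-type recurrent network: dx/dt = P_Omega(x - alpha * grad f(x)) - x.
    # Equilibria satisfy x = P_Omega(x - alpha * grad f(x)), the KKT condition.
    for _ in range(steps):
        x = x + dt * (project_nonneg(x - alpha * grad_f(x)) - x)
    return x

# f(x) = ||x - c||^2 / 2 with c = (1, -1): the minimiser over R^2_+ is (1, 0).
c = np.array([1.0, -1.0])
x_star = rnn_flow(np.array([3.0, 3.0]), lambda x: x - c)
```

    The trajectory converges to the constrained minimiser even though Omega is unbounded, mirroring the convergence-to-an-equilibrium-set result described in the abstract.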

  14. A TWO-LAYER RECURRENT NEURAL NETWORK BASED APPROACH FOR OVERLAY MULTICAST

    Institute of Scientific and Technical Information of China (English)

    Liu Shidong; Zhang Shunyi; Zhou Jinquan; Qiu Gong'an

    2008-01-01

    Overlay multicast has become one of the most promising multicast solutions for IP networks, and the Neural Network (NN) has been a good candidate for searching for optimal solutions to the constrained shortest routing path, by virtue of its powerful capacity for parallel computation. Though the traditional Hopfield NN can tackle the optimization problem, it is incapable of dealing with large-scale networks due to the large number of neurons. In this paper, a neural network for overlay multicast tree computation is presented to reliably implement the routing algorithm in real time. The neural network is constructed as a two-layer recurrent architecture, comprising Independent Variable Neurons (IDVN) and Dependent Variable Neurons (DVN), according to the independence of the decision variables associated with the edges in the directed graph. Compared with heuristic routing algorithms, it is characterized by shorter computational time, fewer neurons, and better precision.

  15. Prenatal Diagnosis, Fetal Surgery, Recurrence Risk and Differential Diagnosis of Neural Tube Defects

    Directory of Open Access Journals (Sweden)

    Chih-Ping Chen

    2008-09-01

    Full Text Available Prenatal screening with α-fetoprotein (AFP and ultrasonography have allowed the prenatal diagnosis of neural tube defects (NTDs in current obstetric care, and open spina bifida has been considered a potential candidate for in utero treatment in modern pediatric surgery. This article provides an overview of maternal serum AFP screening, amniotic fluid AFP assays, amniotic fluid acetylcholinesterase immunoassays and level II ultrasound for NTDs, prenatal repair of fetal myelomeningocele, recurrence risk of NTDs, and differential diagnosis of NTDs on prenatal ultrasound.

  16. Chaos control and synchronization, with input saturation, via recurrent neural networks.

    Science.gov (United States)

    Sanchez, Edgar N; Ricalde, Luis J

    2003-01-01

    This paper deals with the adaptive tracking problem of non-linear systems in the presence of unknown parameters, unmodelled dynamics, and input saturation. A high-order recurrent neural network is used to identify the unknown system, and a learning law is obtained using the Lyapunov methodology. Then a stabilizing control law for the reference tracking error dynamics is developed using the Lyapunov methodology and the Sontag control law. Tracking error boundedness is established as a function of a design parameter. The new approach is illustrated by examples of complex dynamical systems: chaos control and synchronization.

  17. Recurrent Neural Networks for Polyphonic Sound Event Detection in Real Life Recordings

    OpenAIRE

    Parascandolo, Giambattista; Huttunen, Heikki; Virtanen, Tuomas

    2016-01-01

    In this paper we present an approach to polyphonic sound event detection in real life recordings based on bi-directional long short term memory (BLSTM) recurrent neural networks (RNNs). A single multilabel BLSTM RNN is trained to map acoustic features of a mixture signal consisting of sounds from multiple classes, to binary activity indicators of each event class. Our method is tested on a large database of real-life recordings, with 61 classes (e.g. music, car, speech) from 10 different ever...

  18. An alternative recurrent neural network for solving variational inequalities and related optimization problems.

    Science.gov (United States)

    Hu, Xiaolin; Zhang, Bo

    2009-12-01

    There exist many recurrent neural networks for solving optimization-related problems. In this paper, we present a method for deriving such networks from existing ones by changing connections between computing blocks. Although the dynamic systems may become much different, some distinguished properties may be retained. One example is discussed to solve variational inequalities and related optimization problems with mixed linear and nonlinear constraints. A new network is obtained from two classical models by this means, and its performance is comparable to its predecessors. Thus, an alternative choice for circuits implementation is offered to accomplish such computing tasks.

  19. A generalized LSTM-like training algorithm for second-order recurrent neural networks

    OpenAIRE

    Monner, Derek; Reggia, James A.

    2011-01-01

    The Long Short Term Memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM's original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the Generalized Long Short-Term Memory (LSTM-g) trainin...
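
    For reference, the gating equations of a standard LSTM cell, the architecture this abstract generalizes, can be sketched as follows; the sizes and random weights are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, params):
    # One step of a standard LSTM cell: input, forget, and output gates
    # control what is written to, kept in, and read out of the cell state c.
    Wi, Wf, Wo, Wg = params
    z = np.concatenate([x, h])   # current input and previous hidden state
    i = sigmoid(Wi @ z)          # input gate
    f = sigmoid(Wf @ z)          # forget gate
    o = sigmoid(Wo @ z)          # output gate
    g = np.tanh(Wg @ z)          # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
params = [rng.normal(0, 0.1, (n_hid, n_in + n_hid)) for _ in range(4)]
h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(10):              # run the cell over a short random input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, params)
```

    The multiplicative gate-times-signal interactions are what make the architecture "second-order"; the cell state c carries information across many time steps without vanishing gradients.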

  20. Passivity analysis for memristor-based recurrent neural networks with discrete and distributed delays.

    Science.gov (United States)

    Guodong Zhang; Yi Shen; Quan Yin; Junwei Sun

    2015-01-01

    In this paper, based on the knowledge of memristor and recurrent neural networks (RNNs), the model of the memristor-based RNNs with discrete and distributed delays is established. By constructing proper Lyapunov functionals and using inequality technique, several sufficient conditions are given to ensure the passivity of the memristor-based RNNs with discrete and distributed delays in the sense of Filippov solutions. The passivity conditions here are presented in terms of linear matrix inequalities, which can be easily solved by using Matlab Tools. In addition, the results of this paper complement and extend the earlier publications. Finally, numerical simulations are employed to illustrate the effectiveness of the obtained results.

  1. Symmetric sequence processing in a recurrent neural network model with a synchronous dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Metz, F L; Theumann, W K [Instituto de Fisica, Universidade Federal do Rio Grande do Sul, Caixa Postal 15051, 91501-970 Porto Alegre (Brazil)], E-mail: fernando@itf.fys.kuleuven.be, E-mail: theumann@if.ufrgs.br

    2009-09-25

    The synchronous dynamics and the stationary states of a recurrent attractor neural network model with competing synapses between symmetric sequence processing and Hebbian pattern reconstruction are studied in this work allowing for the presence of a self-interaction for each unit. Phase diagrams of stationary states are obtained exhibiting phases of retrieval, symmetric and period-two cyclic states as well as correlated and frozen-in states, in the absence of noise. The frozen-in states are destabilized by synaptic noise and well-separated regions of correlated and cyclic states are obtained. Excitatory or inhibitory self-interactions yield enlarged phases of fixed-point or cyclic behaviour.

  2. Adaptive recurrent neural network control of uncertain constrained nonholonomic mobile manipulators

    Science.gov (United States)

    Wang, Z. P.; Zhou, T.; Mao, Y.; Chen, Q. J.

    2014-02-01

    In this article, motion/force control problem of a class of constrained mobile manipulators with unknown dynamics is considered. The system is subject to both holonomic and nonholonomic constraints. An adaptive recurrent neural network controller is proposed to deal with the unmodelled system dynamics. The proposed control strategy guarantees that the system motion asymptotically converges to the desired manifold while the constraint force remains bounded. In addition, an adaptive method is proposed to identify the contact surface. Simulation studies are carried out to verify the validation of the proposed approach.

  3. Using recurrent neural networks to optimize dynamical decoupling for quantum memory

    Science.gov (United States)

    August, Moritz; Ni, Xiaotong

    2017-01-01

    We utilize machine learning models that are based on recurrent neural networks to optimize dynamical decoupling (DD) sequences. Dynamical decoupling is a relatively simple technique for suppressing the errors in quantum memory for certain noise models. In numerical simulations, we show that with minimum use of prior knowledge and starting from random sequences, the models are able to improve over time and eventually output DD sequences with performance better than that of the well known DD families. Furthermore, our algorithm is easy to implement in experiments to find solutions tailored to the specific hardware, as it treats the figure of merit as a black box.

  4. A one-layer recurrent neural network with a discontinuous hard-limiting activation function for quadratic programming.

    Science.gov (United States)

    Liu, Q; Wang, J

    2008-04-01

    In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.

  5. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  6. Modeling the motor cortex: Optimality, recurrent neural networks, and spatial dynamics.

    Science.gov (United States)

    Tanaka, Hirokazu

    2016-03-01

    Specialization of motor function in the frontal lobe was first discovered in the seminal experiments by Fritsch and Hitzig, and subsequently by Ferrier, in the 19th century. It is, however, ironic that the functional and computational role of the motor cortex still remains unresolved. A computational understanding of the motor cortex amounts to understanding what movement variables the motor neurons represent (the movement representation problem) and how such movement variables are computed through interaction with anatomically connected areas (the neural computation problem). Electrophysiological experiments in the 20th century demonstrated that neural activities in the motor cortex correlate with a number of motor-related and cognitive variables, thereby igniting the controversy over movement representations in the motor cortex. Despite substantial experimental efforts, the overwhelming complexity found in neural activities has impeded our understanding of how movements are represented in the motor cortex. Recent progress in computational modeling has rekindled this controversy in the 21st century. Here, I review the recent developments in computational models of the motor cortex, with a focus on optimality models, recurrent neural network models, and spatial dynamics models. Although individual models provide consistent pictures within their domains, our current understanding of the functions of the motor cortex is still fragmented.

  7. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.

  8. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    Science.gov (United States)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many-degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles
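
    The "online correlation-based learning with synaptic scaling" mentioned in the abstract can be illustrated with a toy single-synapse update. The particular functional form and constants below are assumptions for illustration, not the authors' exact rule: a Hebbian term grows the weight with correlated pre/post activity, while a scaling term pulls it back toward a target to keep activity bounded.

```python
import numpy as np

def update_weights(w, pre, post, mu=0.01, gamma=0.005, w_target=1.0):
    # Correlation-based (Hebbian) term plus a synaptic-scaling term that
    # drags the weight toward w_target whenever the postsynaptic unit fires.
    dw = mu * pre * post + gamma * (w_target - w) * post
    return w + dw

w = 0.2
for _ in range(1000):
    pre, post = 1.0, 1.0          # perfectly correlated activity
    w = update_weights(w, pre, post)
```

    With these assumed constants the two terms balance at w = w_target + mu/gamma = 3.0, so the weight grows under correlation but remains bounded instead of diverging, which is the role synaptic scaling plays in the paper's sensory network.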

  9. Decentralized Identification and Control in Real-Time of a Robot Manipulator via Recurrent Wavelet First-Order Neural Network

    Directory of Open Access Journals (Sweden)

    Luis A. Vázquez

    2015-01-01

    Full Text Available A decentralized recurrent wavelet first-order neural network (RWFONN) structure is presented. The use of a Morlet wavelet activation function allows proposing a continuous-time neural structure with a single layer and a single neuron to identify online, in a series-parallel configuration using the filtered error (FE) training algorithm, the dynamic behavior of each joint of a two-degree-of-freedom (DOF) vertical robot manipulator, whose parameters, such as friction and inertia, are unknown. Based on the RWFONN subsystem, a decentralized neural controller is designed via a backstepping approach. The performance of the decentralized wavelet neural controller is validated via real-time results.

  10. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    Science.gov (United States)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

    A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.
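
    The closed-loop (parallel-mode) prediction that makes NARX generalisation demanding, where the model's own outputs are fed back as the autoregressive inputs, can be sketched generically. The lag orders and the linear map standing in for the trained network are assumptions for illustration:

```python
import numpy as np

def narx_predict(f, y_hist, u_hist, u_future):
    # Parallel-mode NARX prediction: past *predicted* outputs, not
    # measurements, are fed back as the autoregressive inputs.
    y_hist = list(y_hist)
    preds = []
    for u in u_future:
        u_hist = u_hist[1:] + [u]                 # slide the input window
        x = np.array(y_hist[-2:] + u_hist[-2:])   # 2 output lags, 2 input lags
        y = f(x)
        preds.append(y)
        y_hist.append(y)                          # feed the prediction back
    return preds

# A hypothetical stable linear map standing in for the trained NARX network.
f = lambda x: 0.5 * x[0] + 0.2 * x[1] + 0.3 * x[2] + 0.1 * x[3]
preds = narx_predict(f, y_hist=[0.0, 0.0], u_hist=[0.0, 0.0], u_future=[1.0] * 20)
```

    Because prediction errors are fed back, any model mismatch can compound over the cycle, which is why the paper stresses accuracy under generalisation conditions rather than one-step-ahead fit.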

  11. Enhancing the Authentication of Bank Cheque Signatures by Implementing Automated System Using Recurrent Neural Network

    CERN Document Server

    Rao, Mukta; Dhaka, Vijaypal Singh

    2010-01-01

    The associative memory feature of the Hopfield-type recurrent neural network is used for pattern storage and pattern authentication. This paper outlines an optimization relaxation approach for signature verification based on the Hopfield neural network (HNN), which is a recurrent network. The standard sample signature of the customer is cross-matched with the one supplied on the cheque. The difference percentage is obtained by counting the differing pixels in the two images. The network topology is built so that each pixel in the difference image is a neuron in the network. Each neuron is categorized by its state, which in turn signifies whether the particular pixel has changed. The network converges to a stable condition based on the energy function derived in experiments. The Hopfield model allows each node to take on two binary state values (changed/unchanged) for each pixel. The performance of the proposed technique is evaluated by applying it to various binary and gray-scale images. This paper con...
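
    The Hopfield mechanism the paper relies on, Hebbian storage followed by iterated thresholded updates, can be sketched in a few lines. The 8-pixel bipolar pattern below is a made-up stand-in for a stored signature image:

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian storage: W is the sum of outer products of bipolar (+1/-1)
    # patterns, with the self-connections (diagonal) zeroed out.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    # Synchronous updates; each neuron takes a binary (+1/-1) state.
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1.0, -1.0)
    return state

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]                      # corrupt one "pixel"
recovered = recall(W, noisy)
```

    The corrupted input relaxes back to the stored pattern as the network descends its energy function, which is the associative-memory behaviour exploited for cross-matching signatures.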

  12. Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network.

    Science.gov (United States)

    Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Yu, Junzhi; Tan, Min

    2016-02-01

    The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by shape theory, which can generate a set of feasible formations that share the same relative relation among robots. An optimal formation means finding the formation from the feasible set that has the minimum distance to the initial formation of the multirobot system. The formation problem is thus transformed into an optimization problem. In addition, the orientation, scale, and admissible range of the formation can also be considered as constraints in the optimization problem. Furthermore, if all robots are identical, their positions in the system are exchangeable, and each robot does not necessarily move to one specific position in the formation. In this case, the optimal formation problem becomes a combinatorial optimization problem, whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinatorial optimization problem can be approximately transformed into a convex optimization problem. Due to the involvement of the Euclidean norm in the distance, the objective functions of these optimization problems are nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.

  13. Multistability of delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions

    Science.gov (United States)

    Huang, Yu-Jiao; Hu, Hai-Gen

    2015-12-01

    In this paper, the multistability issue is discussed for delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions. Based on a fixed-point theorem and the definition of stability, sufficient criteria are established for the existence and stability of multiple equilibria of complex-valued recurrent neural networks. The number of stable equilibria is larger than that of real-valued recurrent neural networks, which can be used to achieve high-capacity associative memories. One numerical example is provided to show the effectiveness and superiority of the presented results. Project supported by the National Natural Science Foundation of China (Grant Nos. 61374094 and 61503338) and the Natural Science Foundation of Zhejiang Province, China (Grant No. LQ15F030005).

  14. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    Science.gov (United States)

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

    The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies

  15. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks.

    Science.gov (United States)

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R

    2015-10-01

    The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.

  16. Application of Recurrent Wavelet Neural Networks to the Digital Communications Channel Blind Equalization

    Institute of Scientific and Technical Information of China (English)

    HeShichun; HeZhenya

    1997-01-01

    This paper investigates the application of a Recurrent Wavelet Neural Network (RWNN) to the blind equalization of nonlinear communication channels. In contrast to previously introduced wavelet networks, the RWNN is well suited for use in real-time adaptive signal processing. Furthermore, the RWNN has the advantage that a priori information about the underlying system need not be known: the dynamics of the system are configured in the recurrent connections, and the network approximates the system over time. An RWNN-based structure and a novel training approach for blind equalization are proposed, and their performance is evaluated via computer simulations for a nonlinear communication channel model. It is shown that the RWNN blind equalizer performs much better than blind equalizers based on the linear Constant Modulus Algorithm (CMA) and Recurrent Radial Basis Function (RRBF) networks in the nonlinear channel case. The small size and high performance of the RWNN equalizer make it suitable for high-speed channel blind equalization.
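    The CMA baseline mentioned above adapts an FIR equalizer so that the output magnitude approaches a constant, without any training symbols. A minimal real-valued sketch under assumed settings (BPSK symbols, a hypothetical two-tap channel, center-spike initialization), not the paper's RWNN:

```python
import random

random.seed(0)
N, L, mu, R2 = 3000, 3, 0.005, 1.0   # symbols, taps, step size, target modulus
s = [random.choice((-1.0, 1.0)) for _ in range(N)]             # BPSK source
x = [s[n] + 0.4 * (s[n - 1] if n else 0.0) for n in range(N)]  # channel [1, 0.4]

w = [0.0, 1.0, 0.0]                  # centre-spike initialization
costs = []
for n in range(L - 1, N):
    u = x[n - L + 1:n + 1][::-1]     # current tap-delay-line contents
    y = sum(wi * ui for wi, ui in zip(w, u))
    e = y * (y * y - R2)             # CMA gradient term
    w = [wi - mu * e * ui for wi, ui in zip(w, u)]
    costs.append((y * y - R2) ** 2)

early = sum(costs[:500]) / 500
late = sum(costs[-500:]) / 500       # dispersion shrinks as w adapts
```

    The stochastic-gradient update needs no reference signal, only the known constant modulus of the source constellation, which is what makes the equalization "blind".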

  17. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Full Text Available Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the
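    The three-threshold rule described above can be written down directly: synapses with active inputs are untouched when the local field is above the highest or below the lowest threshold, potentiated between the intermediate and highest thresholds, and depressed between the lowest and intermediate thresholds. A sketch with hypothetical threshold values and learning rate (the paper's values differ):

```python
def three_threshold_update(w, active, field,
                           th_low=-1.0, th_mid=0.0, th_high=1.0, lr=0.1):
    """Update weights of ACTIVE inputs as a function of the local field."""
    if field > th_high or field < th_low:
        return list(w)                       # no plasticity outside the band
    delta = lr if field > th_mid else -lr    # potentiate above, depress below
    return [wi + delta if a else wi for wi, a in zip(w, active)]

w = [0.5, 0.5, 0.5]
active = [True, False, True]
```

    Only the synapses whose inputs are active are ever modified, so the rule uses purely local information, as the abstract emphasizes.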

  18. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Science.gov (United States)

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-08-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored

  19. Reward-based training of recurrent neural networks for cognitive and value-based tasks

    Science.gov (United States)

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2017-01-01

    Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task. DOI: http://dx.doi.org/10.7554/eLife.21492.001 PMID:28084991

  20. Speed Control of BLDC Motor Based on Recurrent Wavelet Neural Network

    Directory of Open Access Journals (Sweden)

    Adel A. Obed

    2014-12-01

    Full Text Available In recent years, artificial intelligence techniques such as wavelet neural networks have been applied to control the speed of the BLDC motor drive. The BLDC motor is a multivariable and nonlinear system due to variations in stator resistance and moment of inertia. Therefore, it is not easy to obtain a good performance by applying a conventional PID controller. The Recurrent Wavelet Neural Network (RWNN) is proposed, in this paper, with a PID controller in parallel to produce a modified controller called the RWNN-PID controller, which combines the capability of artificial neural networks to learn from the BLDC motor drive with the capability of wavelet decomposition for identification and control of dynamic systems, and which also has the ability of self-learning and self-adapting. The proposed controller is applied to controlling the speed of the BLDC motor and provides a better performance than conventional controllers over a wide range of speed. The parameters of the proposed controller are optimized using the Particle Swarm Optimization (PSO) algorithm. Simulation results show that the BLDC motor drive with the RWNN-PID controller achieves better performance and stability compared with conventional PID and classical WNN-PID controllers.

  1. Particle Swarm Optimization Recurrent Neural Network Based Z-source Inverter Fed Induction Motor Drive

    Directory of Open Access Journals (Sweden)

    R. Selva Santhose Kumar

    2014-06-01

    Full Text Available In this study, a Particle Swarm Optimization (PSO) and Recurrent Neural Network (RNN) based Z-Source Inverter fed induction motor drive is proposed. The proposed method is used to enhance the performance of the induction motor while reducing the Total Harmonic Distortion (THD) and eliminating the oscillation period of the stator current, torque and speed. Here, the PSO technique uses the induction motor speed and reference speed as the input parameters. From the input parameters, it optimizes the gain of the PI controller and generates the reference quadrature-axis current. By using the RNN, the reference three-phase current for accurate control pulses of the voltage source inverter is predicted. The RNN is trained with the motor's actual quadrature-axis current and the reference quadrature-axis current as inputs, and the corresponding reference three-phase current as the target, using a supervised learning process. The proposed technique is implemented in the MATLAB/SIMULINK platform and its effectiveness is analyzed by comparison with other techniques such as the PSO-Radial Biased Neural Network (RBNN) and PSO-Artificial Neural Network (ANN). The comparison results demonstrate the superiority of the proposed approach and confirm its potential to solve the problem.
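    PSO tunes the PI gains by treating each candidate (Kp, Ki) pair as a particle that is attracted to its own best position and the swarm's best. A generic sketch on a hypothetical surrogate cost surface (in the paper, the cost would come from simulating the drive's speed error), not the authors' implementation:

```python
import random

random.seed(1)

def cost(kp, ki):
    """Hypothetical surrogate for the drive's control error; minimum at (2, 0.5)."""
    return (kp - 2.0) ** 2 + (ki - 0.5) ** 2

n, iters = 20, 100
pos = [[random.uniform(0, 5), random.uniform(0, 2)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: cost(*p))[:]

for _ in range(iters):
    for i in range(n):
        for d in range(2):  # standard PSO velocity and position update
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(*pos[i]) < cost(*pbest[i]):
            pbest[i] = pos[i][:]
            if cost(*pos[i]) < cost(*gbest):
                gbest = pos[i][:]
```

    The inertia weight 0.7 and acceleration coefficients 1.5 are common defaults that keep the swarm in a convergent regime; the swarm's best (Kp, Ki) ends near the surrogate's minimum.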

  2. Echo state property linked to an input: exploring a fundamental characteristic of recurrent neural networks.

    Science.gov (United States)

    Manjunath, G; Jaeger, H

    2013-03-01

    The echo state property is a key for the design and training of recurrent neural networks within the paradigm of reservoir computing. In intuitive terms, this is a passivity condition: a network having this property, when driven by an input signal, will become entrained by the input and develop an internal response signal. This excited internal dynamics can be seen as a high-dimensional, nonlinear, unique transform of the input with a rich memory content. This view has implications for understanding neural dynamics beyond the field of reservoir computing. Available definitions and theorems concerning the echo state property, however, are of little practical use because they do not relate the network response to temporal or statistical properties of the driving input. Here we present a new definition of the echo state property that directly connects it to such properties. We derive a fundamental 0-1 law: if the input comes from an ergodic source, the network response has the echo state property with probability one or zero, independent of the given network. Furthermore, we give a sufficient condition for the echo state property that connects statistical characteristics of the input to algebraic properties of the network connection matrix. The mathematical methods that we employ are freshly imported from the young field of nonautonomous dynamical systems theory. Since these methods are not yet well known in neural computation research, we introduce them in some detail. As a side story, we hope to demonstrate the eminent usefulness of these methods.
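    The echo state property can be demonstrated by driving two copies of the same reservoir, started from different states, with an identical input: when the recurrent map is a contraction, the two state trajectories converge and the initial condition is forgotten. A small self-contained sketch; for simplicity the contraction is enforced by bounding each row's absolute sum (a cruder condition than the spectral/algebraic conditions the paper derives):

```python
import math
import random

random.seed(0)
N = 10
# Random recurrent weights, rescaled so each row's abs-sum is 0.9 < 1.
W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
for row in W:
    s = sum(abs(v) for v in row)
    for j in range(N):
        row[j] *= 0.9 / s
w_in = [random.uniform(-1, 1) for _ in range(N)]

def step(x, u):
    """x(t+1) = tanh(W x(t) + w_in u(t)); tanh is 1-Lipschitz."""
    return [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + w_in[i] * u)
            for i in range(N)]

xa = [1.0] * N          # two different initial states...
xb = [-1.0] * N
for t in range(100):    # ...driven by the same input sequence
    u = math.sin(0.3 * t)
    xa, xb = step(xa, u), step(xb, u)

dist = max(abs(a - b) for a, b in zip(xa, xb))
```

    Because tanh is 1-Lipschitz and the rows sum to 0.9 in absolute value, the max-norm distance between the two trajectories shrinks by at least a factor 0.9 per step, so after 100 steps it is negligible: the reservoir state is a function of the input history, not the initial state.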

  3. New delay-dependent criterion for the stability of recurrent neural networks with time-varying delay

    Institute of Scientific and Technical Information of China (English)

    ZHANG HuaGuang; WANG ZhanShan

    2009-01-01

    This paper is concerned with the global asymptotic stability of a class of recurrent neural networks with interval time-varying delay. By constructing a suitable Lyapunov functional, a new criterion is established to ensure the global asymptotic stability of the concerned neural networks, which can be expressed in the form of a linear matrix inequality and is independent of the size of the derivative of the time-varying delay. Two numerical examples show the effectiveness of the obtained results.
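    The paper's LMI criterion covers general delayed networks; the flavor of such stability results can be seen on a single delayed neuron x'(t) = -a x(t) + b tanh(x(t - τ)), which is globally asymptotically stable whenever a > |b| — a simpler, delay-independent scalar condition, not the paper's criterion. A simulation sketch using a history buffer for the delay:

```python
import math

a, b, tau, dt = 2.0, 1.0, 1.0, 0.01
delay_steps = int(tau / dt)
hist = [1.0] * (delay_steps + 1)   # constant initial history x(t) = 1, t <= 0

for _ in range(int(20.0 / dt)):    # Euler integration over 20 time units
    x, x_del = hist[-1], hist[-1 - delay_steps]
    x_new = x + dt * (-a * x + b * math.tanh(x_del))
    hist.append(x_new)

# With a > |b| the origin is the unique equilibrium and x(t) -> 0,
# regardless of the delay tau.
```

    Intuitively, the leak term -a x dominates the delayed feedback because |b tanh(x)| <= |b||x| < a|x|, so the state decays to the origin for any delay.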

  4. Direct Lyapunov-based control law design for spacecraft attitude maneuvers

    Institute of Scientific and Technical Information of China (English)

    HU Likun; ANG Qingchao

    2006-01-01

    A direct Lyapunov-based control law is presented to achieve on-orbit stability for spacecraft attitude maneuvers. The spacecraft attitude kinematic and dynamic equations are coupled, nonlinear, and multi-input multi-output (MIMO), which complicates controller design. Orbit angular rates are taken into account in the kinematic equations, and the influence of gravity-gradient moments and disturbance moments on the spacecraft attitude is considered in the dynamic equations to approximate the practical environment, which increases the problem complexity to some extent. Based on attitude tracking errors and angular rates, a Lyapunov function is constructed, from which the stabilizing feedback control law is deduced via Lie derivation of the Lyapunov function. The proposed method can deal with the case where the spacecraft is subjected to mass property variations or centroidal inertia matrix variations due to fuel consumption or flexibility, and to disturbance moments, which shows that the proposed controller is robust for spacecraft attitude maneuvers. Both the unlimited and the limited controller are considered in simulations. Simulation results validate the effectiveness and feasibility of the proposed method.

  5. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator.

    Science.gov (United States)

    Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy

    2013-01-01

    The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.
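    The DRNN's training signals are sums of the first three harmonics of each sagittal elevation angle. A sketch of such a generator with hypothetical amplitudes and phases (the paper extracts them from recorded gait at each walking speed):

```python
import math

def elevation_angle(t, f0, harmonics):
    """Sum of the first harmonics of the stride frequency f0.
    harmonics: list of (amplitude, phase) pairs for k = 1, 2, 3."""
    return sum(a * math.sin(2.0 * math.pi * k * f0 * t + ph)
               for k, (a, ph) in enumerate(harmonics, start=1))

f0 = 1.0                                          # hypothetical stride frequency, Hz
thigh = [(20.0, 0.0), (4.0, 1.2), (1.5, -0.6)]    # hypothetical (deg, rad) per harmonic
signal = [elevation_angle(0.01 * i, f0, thigh) for i in range(200)]
```

    Because every component is a harmonic of the same stride frequency, the composite angle repeats exactly once per stride period 1/f0, which is the oscillatory structure the DRNN is trained to reproduce.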

  6. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator

    Directory of Open Access Journals (Sweden)

    Thomas Hoellinger

    2013-05-01

    Full Text Available The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.

  7. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.

    Directory of Open Access Journals (Sweden)

    H Francis Song

    2016-02-01

    Full Text Available The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, "trained" networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale's principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. 
Our results demonstrate the wide range of neural
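    Dale's principle can be built into an RNN by parameterizing the weights as a nonnegative matrix combined with a fixed sign per unit, so each unit's outgoing weights share one sign. A minimal sketch of this parameterization (the cited framework implements it with Theano; plain Python here, with hypothetical parameter values, purely for illustration):

```python
def dale_weights(free_w, signs):
    """Effective weights: column j (unit j's outgoing weights) gets sign
    signs[j]; magnitudes come from the unconstrained parameters free_w."""
    return [[abs(w) * signs[j] for j, w in enumerate(row)] for row in free_w]

# 3 excitatory units (+1) and 1 inhibitory unit (-1), hypothetical params.
signs = [1, 1, 1, -1]
free_w = [[0.2, -0.5, 0.1, 0.3],
          [-0.4, 0.0, 0.2, -0.1],
          [0.3, 0.1, -0.2, 0.4],
          [0.1, -0.3, 0.5, 0.2]]
W = dale_weights(free_w, signs)
```

    Because the constraint lives in the parameterization rather than in a penalty, gradient descent on free_w can never violate the excitatory/inhibitory assignment during training.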

  8. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.

    Science.gov (United States)

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2016-02-01

    The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, "trained" networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale's principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. 
Our results demonstrate the wide range of neural activity patterns

  9. SHORT-TERM ELECTRICITY CONSUMPTION FORECASTING WITH DOUBLE SEASONAL ARIMA AND ELMAN-RECURRENT NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Suhartono Suhartono

    2009-07-01

    Full Text Available Neural network (NN) is one of many methods used to predict hourly electricity consumption in many countries. The NN method used in many previous studies is the Feed-Forward Neural Network (FFNN), or Autoregressive Neural Network (AR-NN). The AR-NN model is not able to capture and explain the effect of moving average (MA) order on a time series of data. This research was conducted with the purpose of reviewing the application of another type of NN, the Elman-Recurrent Neural Network (Elman-RNN), which can explain the MA order effect, and comparing its prediction accuracy with multiple seasonal ARIMA (Autoregressive Integrated Moving Average) models. As a case study, we used hourly electricity consumption data in Mengare, Gresik. The analysis showed that the best double seasonal ARIMA model for short-term forecasting of the case-study data is ARIMA([1,2,3,4,6,7,9,10,14,21,33],1,8)(0,1,1)^24(1,1,0)^168. This model produces white-noise residuals, but they do not have a normal distribution due to suspected outliers. Iterative outlier detection produced 14 innovation outliers. Four Elman-RNN input configurations were examined and tested for forecasting the data: inputs according to the ARIMA lags; the ARIMA lags plus 14 outlier dummies; lags at multiples of 24 up to lag 480; and lag 1 plus lags at multiples of 24 plus 1. All four networks use one hidden layer with a tangent sigmoid activation function and one output with a linear function. The comparison of forecast accuracy through out-of-sample MAPE values showed that the fourth network, namely Elman-RNN(22, 3, 1), is the best model for short-term forecasting of hourly electricity consumption in Mengare, Gresik.
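    Unlike a feed-forward AR-NN, an Elman network feeds its hidden layer back through a context layer, so the same input value can produce different outputs depending on history — which is what lets it capture MA-like effects. A one-unit sketch with hand-picked weights, not the fitted network from the study:

```python
import math

def elman_forward(inputs, wx=1.0, wh=0.5, wy=1.0):
    """Single-unit Elman RNN: h_t = tanh(wx*x_t + wh*h_{t-1}), y_t = wy*h_t."""
    h, ys = 0.0, []
    for x in inputs:
        h = math.tanh(wx * x + wh * h)   # context term wh*h carries history
        ys.append(wy * h)
    return ys

ys = elman_forward([1.0, 0.0, 1.0])
# The identical inputs at positions 0 and 2 yield different outputs,
# because the hidden state differs: the network has memory.
```

    Dropping the context weight (wh = 0) reduces this to a feed-forward AR-NN-style map, where identical inputs always give identical outputs.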

  10. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Science.gov (United States)

    Durstewitz, Daniel

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. 
    In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable the recovery of relevant aspects
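    The latent dynamics of a PLRNN combine a linear part with a rectified-linear recurrent part, z(t+1) = A z(t) + W φ(z(t)) + h with φ(z) = max(z, 0). A minimal deterministic sketch (the paper additionally models process and observation noise and fits the parameters by EM); the small weights chosen here make the map a contraction, so the iteration settles on a fixed point:

```python
def relu(v):
    return [max(x, 0.0) for x in v]

def plrnn_step(z, A, W, h):
    """z(t+1) = A z(t) + W relu(z(t)) + h, for a 2-D latent state."""
    phi = relu(z)
    return [sum(A[i][j] * z[j] + W[i][j] * phi[j] for j in range(2)) + h[i]
            for i in range(2)]

A = [[0.5, 0.0], [0.0, 0.5]]       # stable linear part
W = [[0.0, 0.2], [-0.2, 0.0]]      # piecewise-linear coupling
h = [0.5, -0.3]
z = [2.0, -1.0]
for _ in range(300):
    z_prev, z = z, plrnn_step(z, A, W, h)

residual = max(abs(a - b) for a, b in zip(z, z_prev))
```

    With larger weights the same update can instead produce multistability or oscillations, which is why the fitted models' proximity to a bifurcation point, as reported above, is dynamically meaningful.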

  11. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Directory of Open Access Journals (Sweden)

    Daniel Durstewitz

    2017-06-01

    Full Text Available The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover

  12. Methylenetetrahydrofolate reductase mutations, a genetic cause for familial recurrent neural tube defects

    Directory of Open Access Journals (Sweden)

    Laxmi V Yaliwal

    2012-01-01

    Full Text Available Methylenetetrahydrofolate reductase (MTHFR) gene mutations have been implicated as risk factors for neural tube defects (NTDs). The best-characterized MTHFR genetic mutation, 677C→T, is associated with a 2-4-fold increased risk of NTDs if the patient is homozygous for this mutation. This risk factor is modulated by folate levels in the body. A second mutation in the MTHFR gene is an A→C transition at position 1298. The 1298A→C mutation is also a risk factor for NTDs, but with a smaller relative risk than the 677C→T mutation. Under conditions of low folate intake or high folate requirements, such as pregnancy, this mutation could become of clinical importance. We present a case report of a patient with an MTHFR genetic mutation who presented with recurrent familial pregnancy losses due to anencephaly/NTDs.

  13. Identification of Jets Containing $b$-Hadrons with Recurrent Neural Networks at the ATLAS Experiment

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    A novel $b$-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS experiment at the CERN Large Hadron Collider. The RNN based $b$-tagging algorithm processes charged particle tracks associated to jets without reliance on secondary vertex finding, and can augment existing secondary-vertex based taggers. In contrast to traditional impact-parameter-based $b$-tagging algorithms which assume that tracks associated to jets are independent from each other, the RNN based $b$-tagging algorithm can exploit the spatial and kinematic correlations between tracks which are initiated from the same $b$-hadrons. This new approach also accommodates an extended set of input variables. This note presents the expected performance of the RNN based $b$-tagging algorithm in simulated $t \\bar t$ events at $\\sqrt{s}=13$ TeV.

  14. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by the physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although they may benefit related recognition tasks, it is not easy to accurately detect such temporal segments. An automatic temporal segment detection framework using bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, which synthesizes the local and global temporal-spatial information more efficiently, is presented. The framework is evaluated in detail over the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for solving the problem of temporal segment detection.

  15. Using LSTM recurrent neural networks for detecting anomalous behavior of LHC superconducting magnets

    CERN Document Server

    Wielgosz, Maciej; Mertik, Matej

    2016-01-01

    The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyses voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when a misbehavior of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system and are updated and monitored by human operators. This paper proposes a novel approach to monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art Deep Learning algorithms. Consequently, the authors of the paper decided to examine the performance of LSTM recurrent neural networks for anomaly detection in voltage time series of the magnets. In order to address this challenging task, different network architectures and hyper-parameters were used to achieve the best possible performance of the solution. The regre...
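
    The scoring stage of such a prediction-based anomaly detector can be sketched independently of the network itself: the LSTM produces one-step-ahead predictions of the voltage series, and time steps whose prediction error is an outlier are flagged. The mean-plus-k-standard-deviations threshold below is a common generic choice, not necessarily the rule used in the paper:

```python
def anomaly_flags(actual, predicted, k=3.0):
    """Flag time steps whose prediction error exceeds mean + k*std of the
    residuals -- a simple stand-in for the scoring stage of an LSTM-based
    anomaly detector (the network producing `predicted` is not shown)."""
    errors = [abs(a - p) for a, p in zip(actual, predicted)]
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / len(errors)
    threshold = mean + k * var ** 0.5
    return [e > threshold for e in errors]

# A stable signal with one voltage spike against flat predictions:
actual = [1.0, 1.1, 0.9, 1.0, 5.0, 1.0]
predicted = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
flags = anomaly_flags(actual, predicted, k=2.0)  # only the spike at index 4 is flagged
```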

  16. Convergence study in extended Kalman filter-based training of recurrent neural networks.

    Science.gov (United States)

    Wang, Xiaoyu; Huang, Yong

    2011-04-01

    Recurrent neural network (RNN) has emerged as a promising tool in modeling nonlinear dynamical systems, but the training convergence is still of concern. This paper aims to develop an effective extended Kalman filter-based RNN training approach with a controllable training convergence. The training convergence problem during extended Kalman filter-based RNN training has been proposed and studied by adapting two artificial training noise parameters: the covariance of measurement noise (R) and the covariance of process noise (Q) of Kalman filter. The R and Q adaption laws have been developed using the Lyapunov method and the maximum likelihood method, respectively. The effectiveness of the proposed adaption laws has been tested using a nonlinear dynamical benchmark system and further applied in cutting tool wear modeling. The results show that the R adaption law can effectively avoid the divergence problem and ensure the training convergence, whereas the Q adaption law helps improve the training convergence speed.
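
    The role of the two artificial training-noise covariances R and Q can be seen in a scalar sketch, where a linear one-parameter model stands in for the RNN. In the actual method, H would be the Jacobian of the network output with respect to its weights; the values below are toy assumptions:

```python
def ekf_weight_update(w, P, x, y, R, Q):
    """One extended-Kalman-filter update for the scalar model y ~ w * x.
    R (measurement-noise covariance) and Q (process-noise covariance) are
    the two training-noise knobs whose adaptation laws the paper derives."""
    H = x                        # Jacobian of the model output w.r.t. w
    y_hat = w * x                # predicted measurement
    S = H * P * H + R            # innovation covariance
    K = P * H / S                # Kalman gain
    w_new = w + K * (y - y_hat)  # weight (state) update
    P_new = P - K * H * P + Q    # covariance update; Q keeps the filter responsive
    return w_new, P_new

# Fit y = 2x from three noise-free samples; w converges toward 2
w, P = 0.0, 1.0
for x, y in [(1.0, 2.0), (2.0, 4.0), (1.0, 2.0)]:
    w, P = ekf_weight_update(w, P, x, y, R=0.1, Q=0.0)
```

Larger R slows the updates (more trust in the current weights), while nonzero Q prevents P from collapsing to zero, which is the convergence-vs-divergence trade-off the adaptation laws manage.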

  17. Recurrent fuzzy neural network backstepping control for the prescribed output tracking performance of nonlinear dynamic systems.

    Science.gov (United States)

    Han, Seong-Ik; Lee, Jang-Myung

    2014-01-01

    This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement on the approximation performances in order to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system was confirmed based on the Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator.

  18. Robust passivity analysis for discrete-time recurrent neural networks with mixed delays

    Science.gov (United States)

    Huang, Chuan-Kuei; Shu, Yu-Jeng; Chang, Koan-Yuh; Shou, Ho-Nien; Lu, Chien-Yu

    2015-02-01

    This article considers the robust passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with mixed time-delays and uncertain parameters. The mixed time-delays consist of both discrete time-varying and distributed time-delays in a given range, and the uncertain parameters are norm-bounded. The activation functions are assumed to be globally Lipschitz continuous. Based on a new bounding technique and an appropriate type of Lyapunov functional, a sufficient condition is derived that guarantees the desired robust passivity of the DRNNs, expressed in terms of a family of linear matrix inequalities (LMIs). Some free-weighting matrices are introduced to reduce the conservatism of the criterion by using the bounding technique. A numerical example is given to illustrate the effectiveness and applicability of the result.

  19. Distributed Fault Detection in Sensor Networks using a Recurrent Neural Network

    CERN Document Server

    Obst, Oliver

    2009-01-01

    In long-term deployments of sensor networks, monitoring the quality of gathered data is a critical issue. Over the time of deployment, sensors are exposed to harsh conditions, causing some of them to fail or to deliver less accurate data. If such a degradation remains undetected, the usefulness of a sensor network can be greatly reduced. We present an approach that learns spatio-temporal correlations between different sensors, and makes use of the learned model to detect misbehaving sensors by using distributed computation and only local communication between nodes. We introduce SODESN, a distributed recurrent neural network architecture, and a learning method to train SODESN for fault detection in a distributed scenario. Our approach is evaluated using data from different types of sensors and is able to work well even with less-than-perfect link qualities and more than 50% of failed nodes.

  20. D-optimal Bayesian Interrogation for Parameter and Noise Identification of Recurrent Neural Networks

    CERN Document Server

    Poczos, Barnabas

    2008-01-01

    We introduce a novel online Bayesian method for the identification of a family of noisy recurrent neural networks (RNNs). We develop a Bayesian active learning technique in order to optimize the interrogating stimuli given past experiences. In particular, we consider the unknown parameters as stochastic variables and use the D-optimality principle, also known as the "infomax method", to choose optimal stimuli. We apply a greedy technique to maximize the information gain concerning network parameters at each time step. We also derive the D-optimal estimation of the additive noise that perturbs the dynamical system of the RNN. Our analytical results are approximation-free. The analytic derivation gives rise to attractive quadratic update rules.

  1. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays

    Science.gov (United States)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2016-07-01

    This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design a robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.

  2. Application of real time recurrent neural network for detection of small natural earthquakes in Poland

    Science.gov (United States)

    Wiszniowski, Jan; Plesiewicz, Beata; Trojanowski, Jacek

    2014-06-01

    This study is an application of a Real Time Recurrent Neural Network (RTRN) to the detection of small natural seismic events in Poland. Most of the events studied are from the Podhale region, with magnitudes of 0.4 to 2.5. The population distribution of the region required that seismic signals be recorded using temporary stations deployed in populated areas. As a consequence, the high level of seismic noise that cannot be removed by filtration made it impossible to detect small events with STA/LTA-based algorithms. The presence of high noise requires an alternative method of seismic detection capable of recognizing small seismic events. We applied the RTRN, which can potentially detect seismic signals in the frequency domain as well as in the phase arrival times. Results on small local seismic events showed that the RTRN is able to correctly detect most of the events with fewer false detections than STA/LTA methods.
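
    For reference, the STA/LTA baseline that the RTRN is compared against can be sketched as follows. It flags samples where the short-term average of signal energy rises sharply relative to the long-term average; the window lengths and threshold below are illustrative, not the study's settings:

```python
def sta_lta(signal, n_sta, n_lta, threshold):
    """Classic STA/LTA trigger: flag sample i when the short-term average of
    signal energy divided by the long-term average exceeds `threshold`."""
    triggers = []
    for i in range(n_lta, len(signal)):
        sta = sum(s * s for s in signal[i - n_sta:i]) / n_sta  # short window
        lta = sum(s * s for s in signal[i - n_lta:i]) / n_lta  # long window
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Low-amplitude noise followed by a burst: the burst raises the STA
# much faster than the LTA, so the ratio spikes at event onset.
noise = [0.1, -0.1] * 10
event = [2.0, -2.0] * 3
triggers = sta_lta(noise + event, n_sta=3, n_lta=10, threshold=3.0)
```

The weakness the study points out is visible here: if the noise amplitude approaches the event amplitude, the ratio never clears the threshold, which is why a learned detector is needed at these stations.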

  3. Adaptive dynamic surface control of flexible-joint robots using self-recurrent wavelet neural networks.

    Science.gov (United States)

    Yoo, Sung Jin; Park, Jin Bae; Choi, Yoon Ho

    2006-12-01

    A new method for the robust control of flexible-joint (FJ) robots with model uncertainties in both robot dynamics and actuator dynamics is proposed. The proposed control system is a combination of the adaptive dynamic surface control (DSC) technique and the self-recurrent wavelet neural network (SRWNN). The adaptive DSC technique provides the ability to overcome the "explosion of complexity" problem in backstepping controllers. The SRWNNs are used to observe the arbitrary model uncertainties of FJ robots, and all their weights are trained online. From the Lyapunov stability analysis, their adaptation laws are induced, and the uniformly ultimately boundedness of all signals in a closed-loop adaptive system is proved. Finally, simulation results for a three-link FJ robot are utilized to validate the good position tracking performance and robustness against payload uncertainties and external disturbances of the proposed control system.

  4. An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An incremental time-delay neural network based on synapse growth, which is suitable for dynamic control and learning in autonomous robots, is proposed to improve the learning and retrieving performance of a dynamical recurrent associative memory architecture. The model allows steady and continuous establishment of associative memory for spatio-temporal regularities and time series in discrete sequences of inputs. The inserted hidden units can be taken as long-term memories that expand the capacity of the network and may sometimes fade away under certain conditions. Preliminary experiments have shown that this incremental network may be a promising approach to endowing autonomous robots with the ability to adapt to new data without destroying the learned patterns. The system also benefits from its potentially chaotic character for emergence.

  5. Implementation of recurrent artificial neural networks for nonlinear dynamic modeling in biomedical applications.

    Science.gov (United States)

    Stošovic, Miona V Andrejevic; Litovski, Vanco B

    2013-11-01

    Simulation is indispensable during the design of many biomedical prostheses that are based on fundamental electrical and electronic actions. However, simulation necessitates the use of adequate models. The main difficulties related to the modeling of such devices are their nonlinearity and dynamic behavior. Here we report the application of recurrent artificial neural networks for modeling of a nonlinear, two-terminal circuit equivalent to a specific implantable hearing device. The method is general in the sense that any nonlinear dynamic two-terminal device or circuit may be modeled in the same way. The model generated was successfully used for simulation and optimization of a driver (operational amplifier)-transducer ensemble. This confirms our claim that in addition to the proper design and optimization of the hearing actuator, optimization in the electronic domain, at the electronic driver circuit-to-actuator interface, should take place in order to achieve best performance of the complete hearing aid.

  6. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    Science.gov (United States)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

    The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such phenomena currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that long learning periods are important in order to improve the network learning capacity, and we discuss this ability in the presence of distinct inhibitory currents.

  7. Neural processing of short-term recurrence in songbird vocal communication.

    Directory of Open Access Journals (Sweden)

    Gabriël J L Beckers

    Full Text Available BACKGROUND: Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication. METHODOLOGY/PRINCIPAL FINDINGS: We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented to anesthetized birds sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges leads to widespread and significant modulation in the strength of neural responses. Such modulation is highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area. CONCLUSIONS/SIGNIFICANCE: Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.

  8. Foundations of implementing the competitive layer model by Lotka-Volterra recurrent neural networks.

    Science.gov (United States)

    Yi, Zhang

    2010-03-01

    The competitive layer model (CLM) can be described by an optimization problem. The problem can be further formulated by an energy function, called the CLM energy function, in the subspace of the nonnegative orthant. The set of minimum points of the CLM energy function forms the set of solutions of the CLM problem. Solving the CLM problem means finding such solutions. Recurrent neural networks (RNNs) can be used to implement the CLM to solve the CLM problem. The key point is to make the set of minimum points of the CLM energy function correspond exactly to the set of stable attractors of the recurrent neural networks. This paper proposes to use Lotka-Volterra RNNs (LV RNNs) to implement the CLM. The contribution of this paper is to establish foundations for implementing the CLM by LV RNNs, in three main parts. The first part is on the CLM energy function: necessary and sufficient conditions for minimum points of the CLM energy function are established by detailed study. The second part is on the convergence of the proposed model of the LV RNNs: it is proven that the trajectories of interest are convergent. The third part is the most important: it proves that the set of stable attractors of the proposed LV RNN equals exactly the set of minimum points of the CLM energy function in the nonnegative orthant. Thus, the LV RNNs can be used to solve the CLM problem. It is believed that by establishing such basic rigorous theories, more interesting applications of the CLM can be found.
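
    The flavor of Lotka-Volterra RNN dynamics can be illustrated with a small Euler-integrated example. The two-unit competition below is a generic illustration of how such dynamics stay in the nonnegative orthant and settle onto a stable attractor; it is not the CLM weight construction from the paper:

```python
def lv_rnn(x, h, W, dt=0.01, steps=5000):
    """Euler integration of Lotka-Volterra RNN dynamics
    dx_i/dt = x_i * (h_i + sum_j W[i][j] * x_j), which remain in the
    nonnegative orthant when started there."""
    n = len(x)
    for _ in range(steps):
        dx = [x[i] * (h[i] + sum(W[i][j] * x[j] for j in range(n)))
              for i in range(n)]
        x = [max(0.0, x[i] + dt * dx[i]) for i in range(n)]  # clamp for safety
    return x

# Two mutually inhibiting units (cross-inhibition stronger than self-inhibition):
# the unit with the larger initial activity wins, the other is driven to zero.
h = [1.0, 1.0]
W = [[-1.0, -2.0], [-2.0, -1.0]]
x = lv_rnn([0.6, 0.4], h, W)
```

This winner-take-all behavior, where stable attractors are vertices of the feasible region, is the basic mechanism the paper generalizes to make the LV RNN's attractors coincide with the CLM energy minima.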

  9. Chaos and asymptotical stability in discrete-time recurrent neural networks with generalized input-output function

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We theoretically investigate the asymptotical stability, local bifurcations and chaos of discrete-time recurrent neural networks whose input-output function is defined as a generalized sigmoid function, such as v_i = tanh(μ_i u_i), etc. Numerical simulations are also provided to demonstrate the theoretical results.

  10. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems

    Energy Technology Data Exchange (ETDEWEB)

    Talathi, S. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-06-05

    Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in a broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data in order to evaluate our method and demonstrate very promising evaluation results with overall accuracy close to 100%. We also systematically investigate the application of our method for early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration.
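
    The GRU cell at the core of such a detector follows the standard gated update (Cho et al. formulation). The scalar toy weights below are illustrative assumptions; a real seizure detector learns vector-valued weights from EEG features:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def gru_cell(x, h, p):
    """Single scalar GRU cell. `p` holds the six weights (toy values here)."""
    z = sigmoid(p["wz"] * x + p["uz"] * h)            # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h)            # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_cand                 # gated state update

p = {"wz": 1.0, "uz": 0.0, "wr": 1.0, "ur": 0.0, "wh": 1.0, "uh": 1.0}
h = 0.0
for x in [0.5, 0.5, 0.5]:  # a short input sequence (e.g., EEG feature values)
    h = gru_cell(x, h, p)
```

The gates let the cell decide, per time step, how much history to keep, which is what makes GRU RNNs effective on long EEG sequences without the vanishing-gradient problems of plain RNNs.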

  11. Single Layer Recurrent Neural Network for detection of swarm-like earthquakes in W-Bohemia/Vogtland-the method

    Science.gov (United States)

    Doubravová, Jana; Wiszniowski, Jan; Horálek, Josef

    2016-08-01

    In this paper, we present a new method for local detection of swarm-like earthquakes based on neural networks. The proposed algorithm uses a unique neural network architecture: it combines features used in other neural network concepts, such as the Real Time Recurrent Network and the Nonlinear Autoregressive Neural Network, to achieve good detection performance. We use recurrence combined with various delays applied to the recurrent inputs, so the network remembers the history of many samples. This method has been tested on data from a local seismic network in West Bohemia with promising results. We found that phases not picked in the training data diminish the detection capability of the neural network, and proper preparation of training data is therefore fundamental. To train the network, we define a parameter called the learning importance weight of events and show that it affects the number of acceptable solutions achieved by many trials of the Back Propagation Through Time algorithm. We also compare training stations individually with training all of them simultaneously, and we conclude that joint training gives better results for some stations than training a single station.

  12. Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning.

    Directory of Open Access Journals (Sweden)

    Jürgen Eser

    Full Text Available Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during its self-organization, developing into a regime where only a few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
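
    The perturbation-growth estimate of the maximum Lyapunov exponent used to characterize chaos can be sketched on a stand-in system. Here the logistic map replaces the SORN purely for illustration, since its exponent is known analytically (ln 2 at r = 4):

```python
import math

def max_lyapunov(f, x0, eps=1e-8, n_steps=5000, n_transient=100):
    """Estimate the maximum Lyapunov exponent of a 1-D map from the growth of
    a small perturbation, renormalized to size eps at every step."""
    x = x0
    for _ in range(n_transient):  # discard transient behavior
        x = f(x)
    total = 0.0
    for _ in range(n_steps):
        x_pert = f(x + eps)       # perturbed trajectory, one step
        x = f(x)                  # reference trajectory, one step
        total += math.log(abs(x_pert - x) / eps)  # local expansion rate
    return total / n_steps

# Fully chaotic logistic map at r = 4; exponent should come out near ln 2
lam = max_lyapunov(lambda x: 4.0 * x * (1.0 - x), 0.3)
```

A positive average log-growth rate indicates chaos (perturbations amplified), while a negative one indicates the ordered regime; the paper tracks how this quantity decreases as the SORN self-organizes.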

  13. Emergence of hierarchical structure mirroring linguistic composition in a recurrent neural network.

    Science.gov (United States)

    Hinoshita, Wataru; Arie, Hiroaki; Tani, Jun; Okuno, Hiroshi G; Ogata, Tetsuya

    2011-05-01

    We show that a Multiple Timescale Recurrent Neural Network (MTRNN) can acquire the capabilities to recognize, generate, and correct sentences by self-organizing in a way that mirrors the hierarchical structure of sentences: characters grouped into words, and words into sentences. The model can control which sentence to generate depending on its initial states (generation phase) and the initial states can be calculated from the target sentence (recognition phase). In an experiment, we trained our model over a set of unannotated sentences from an artificial language, represented as sequences of characters. Once trained, the model could recognize and generate grammatical sentences, even if they were not learned. Moreover, we found that our model could correct a few substitution errors in a sentence, and the correction performance was improved by adding the errors to the training sentences in each training iteration with a certain probability. An analysis of the neural activations in our model revealed that the MTRNN had self-organized, reflecting the hierarchical linguistic structure by taking advantage of the differences in timescale among its neurons: in particular, neurons that change the fastest represented "characters", those that change more slowly, "words", and those that change the slowest, "sentences".

  14. Fault Detection and Isolation of Wind Energy Conversion Systems using Recurrent Neural Networks

    Directory of Open Access Journals (Sweden)

    N. Talebi

    2014-07-01

    Full Text Available The reliability of Wind Energy Conversion Systems (WECSs) is greatly important with regard to extracting the maximum amount of available wind energy. In order to accurately study WECSs during the occurrence of faults and to explore the impact of faults on each component of a WECS, a detailed model is required in which the mechanical and electrical parts of the WECS are properly involved. In addition, a Fault Detection and Isolation System (FDIS) is required by which occurring faults can be diagnosed at the appropriate time, through fast and accurate detection and isolation followed by appropriate actions, in order to ensure safe system operation and avoid heavy economic losses. In this paper, by utilizing a comprehensive dynamic model of the WECS, an FDIS is presented using dynamic recurrent neural networks. In industrial processes, dynamic neural networks are known as a good mathematical tool for fault detection. Simulation results show that the proposed FDIS detects faults of the generator's angular velocity sensor, pitch angle sensors and pitch actuators appropriately. The suggested FDIS is capable of detecting and isolating faults quickly while having a very low false alarm rate. The presented FDIS scheme can be used to identify faults in other parts of the WECS.

  15. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

    Science.gov (United States)

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made these former results hardly reproducible. Further, we extend those previous experiments modeling unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that a LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), proving that with as little as 0.5s an accuracy of over 50% can be achieved.

  16. A biologically plausible learning rule for the Infomax on recurrent neural networks.

    Science.gov (United States)

    Hayakawa, Takashi; Kaneko, Takeshi; Aoyagi, Toshio

    2014-01-01

    A fundamental issue in neuroscience is to understand how neuronal circuits in the cerebral cortex play their functional roles through their characteristic firing activity. Several characteristics of spontaneous and sensory-evoked cortical activity have been reproduced by Infomax learning of neural networks in computational studies. There are, however, still few models of the underlying learning mechanisms that allow cortical circuits to maximize information and produce the characteristics of spontaneous and sensory-evoked cortical activity. In the present article, we derive a biologically plausible learning rule for the maximization of information retained through time in dynamics of simple recurrent neural networks. Applying the derived learning rule in a numerical simulation, we reproduce the characteristics of spontaneous and sensory-evoked cortical activity: cell-assembly-like repeats of precise firing sequences, neuronal avalanches, spontaneous replays of learned firing sequences and orientation selectivity observed in the primary visual cortex. We further discuss the similarity between the derived learning rule and the spike timing-dependent plasticity of cortical neurons.

  17. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Ruben Zazo

    Full Text Available Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made these former results hardly reproducible. Further, we extend those previous experiments modeling unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that a LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), proving that with as little as 0.5s an accuracy of over 50% can be achieved.

  18. Global Exponential Stability for Complex-Valued Recurrent Neural Networks With Asynchronous Time Delays.

    Science.gov (United States)

    Liu, Xiwei; Chen, Tianping

    2016-03-01

    In this paper, we investigate the global exponential stability of complex-valued recurrent neural networks with asynchronous time delays by decomposing the complex-valued networks into real and imaginary parts and constructing an equivalent real-valued system. The network model is described by a continuous-time equation. There are two main differences between this paper and previous works: 1) time delays can be asynchronous, i.e., delays between different nodes can differ, which makes our model more general; and 2) we prove the exponential convergence directly, while the existence and uniqueness of the equilibrium point is just a direct consequence of the exponential convergence. Using three generalized norms, we present some sufficient conditions for the uniqueness and global exponential stability of the equilibrium point of delayed complex-valued neural networks. These conditions are less restrictive because we take into account the excitatory and inhibitory effects between neurons, so previous results by other researchers can be extended. Finally, some numerical simulations are given to demonstrate the correctness of the obtained results.

  19. Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations

    Science.gov (United States)

    ElSaid, AbdElRahman Ahmed

    This thesis examines building viable Recurrent Neural Networks (RNNs) using Long Short-Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline, containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction than analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time series data, which can further improve predictions of future vibration values. LSTM RNNs were used over traditional RNNs, as the latter suffer from vanishing/exploding gradients when trained with backpropagation. The study managed to predict vibration values 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51%, and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems, so that suitable actions can be taken before the occurrence of excess vibration to avoid unfavorable situations during flight.
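    The gating mechanism that gives LSTMs their "memory" of previous time-series data can be sketched as a single numpy step. The dimensions and random weights below are illustrative, not taken from the thesis; the additive cell-state path is what mitigates the vanishing-gradient problem mentioned above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias; gates stacked as [input, forget, output, cell]."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate: controls long-range memory
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate cell update
    c_new = f * c + i * g      # additive cell path eases gradient flow
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
D, H = 3, 4                        # e.g. 3 input features, 4 hidden units
W = rng.normal(0, 0.1, (4*H, D))
U = rng.normal(0, 0.1, (4*H, H))
b = np.zeros(4*H)

h, c = np.zeros(H), np.zeros(H)
series = rng.normal(size=(10, D))  # toy stand-in for flight-record features
for x in series:
    h, c = lstm_step(x, h, c, W, U, b)
print(h)  # final hidden state would feed a regression head for vibration
```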

  20. Initialization and self-organized optimization of recurrent neural network connectivity.

    Science.gov (United States)

    Boedecker, Joschka; Obst, Oliver; Mayer, N Michael; Asada, Minoru

    2009-10-01

    Reservoir computing (RC) is a recent paradigm in the field of recurrent neural networks. Networks in RC have a sparsely and randomly connected fixed hidden layer, and only output connections are trained. RC networks have recently received increased attention as a mathematical model for generic neural microcircuits to investigate and explain computations in neocortical columns. Applied to specific tasks, however, their fixed random connectivity leads to significant variation in performance. Few problem-specific optimization procedures are known, which would be important for engineering applications, but also for understanding how networks in biology are shaped to be optimally adapted to the requirements of their environment. We study a general network initialization method using permutation matrices and derive a new unsupervised learning rule based on intrinsic plasticity (IP). The IP-based learning uses only local information, and its aim is to improve network performance in a self-organized way. Using three different benchmarks, we show that networks with permutation matrices for the reservoir connectivity have much more persistent memory than networks initialized by the other methods, but are also able to perform highly nonlinear mappings. We also show that IP based on sigmoid transfer functions is limited with respect to the output distributions that can be achieved.
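    A minimal sketch of the permutation-matrix initialization, assuming a standard echo-state-style update (the sizes and scaling are illustrative): a permutation matrix is orthogonal, so all its singular values are equal and no state-space direction decays faster than another, which is one way to see its persistent memory.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20

# Reservoir weights as a scaled permutation matrix: each unit passes its
# state to exactly one other unit, forming cycles that carry information
# over many steps.
perm = rng.permutation(N)
W = np.zeros((N, N))
W[perm, np.arange(N)] = 0.95      # spectral radius 0.95 < 1 for stability

# Orthogonality check: all singular values equal the scaling factor,
# so the linear part of the dynamics contracts isotropically.
s = np.linalg.svd(W, compute_uv=False)
print(s.min(), s.max())

# Echo-state update with tanh nonlinearity and a scalar input signal.
x = np.zeros(N)
w_in = rng.normal(0, 0.5, N)
for t in range(50):
    u = np.sin(0.2 * t)
    x = np.tanh(W @ x + w_in * u)
```

Only a linear readout on `x` would be trained in an actual RC setup.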

  1. A novel recurrent neural network forecasting model for power intelligence center

    Institute of Scientific and Technical Information of China (English)

    LIU Ji-cheng; NIU Dong-xiao

    2008-01-01

    In order to accurately forecast the load of a power system and enhance the stability of the power network, a novel unascertained-mathematics-based recurrent neural network (UMRNN) for a power intelligence center (PIC) was created through three steps. First, by combining with the general project uncertain element transmission theory (GPUET), the basic definitions of stochastic, fuzzy, and grey uncertain elements were given based on the principal types of uncertain information. Second, a power dynamic alliance was established, including four sectors: generation, transmission, distribution, and customers. The key factors were amended according to the four transmission topologies of uncertain elements, and the new factors entered the power intelligence center as the input elements. Finally, in the intelligent processing setting of the PIC, the novel load forecasting model was built by applying uncertain and recursive processing to the network's input values and combining unascertained mathematics. Three different approaches were used to forecast the load of an eastern regional power grid in China. The root mean square error (ERMS) demonstrates that the forecasting accuracy of the proposed UMRNN model is 3% higher than that of a BP neural network (BPNN), and 5% higher than that of an autoregressive integrated moving average (ARIMA) model. An example also shows that the average relative error for the first quarter of 2008 forecasted by UMRNN is only 2.59%, which is highly precise.

  2. Competition and Collaboration in Cooperative Coevolution of Elman Recurrent Neural Networks for Time-Series Prediction.

    Science.gov (United States)

    Chandra, Rohitash

    2015-12-01

    Collaboration enables weak species to survive in an environment where different species compete for limited resources. Cooperative coevolution (CC) is a nature-inspired optimization method that divides a problem into subcomponents and evolves them while genetically isolating them. Problem decomposition is an important aspect of using CC for neuroevolution. CC employs different problem decomposition methods to decompose the neural network training problem into subcomponents. Different problem decomposition methods have features that are helpful at different stages of the evolutionary process. Adaptation, collaboration, and competition are needed in CC, as multiple subpopulations are used to represent the problem. It is important to add collaboration and competition in CC. This paper presents a competitive CC method for training recurrent neural networks for chaotic time-series prediction. Two different instances of the competitive method are proposed that employ different problem decomposition methods to enforce island-based competition. The results show improvement in the performance of the proposed methods in most cases when compared with standalone CC and other methods from the literature.

  3. Recurrent fuzzy neural network by using feedback error learning approaches for LFC in interconnected power system

    Energy Technology Data Exchange (ETDEWEB)

    Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari [Department of Electrical Engineering, K.N. Toosi University of Technology, Intelligent System Lab, Tehran (Iran)

    2009-04-15

    In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for the load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in the feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. a proportional-integral-derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system, and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, once the INFC learns the inverse of the controlled system, the reference signal is tracked properly. Generally, the CFC is designed at the nominal operating conditions of the system and therefore fails to provide the best control performance, as well as global stability, over a wide range of changes in the operating conditions. So, in this study a supervised controller (SC), a lookup-table-based controller, is introduced for tuning the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to the operating conditions. Moreover, to improve the performance of the overall system, a recurrent fuzzy neural network (RFNN) is adopted in the INFC instead of the conventional neural network used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through several performance indices. (author)

  4. Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning.

    Science.gov (United States)

    Eser, Jürgen; Zheng, Pengsheng; Triesch, Jochen

    2014-01-01

    Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics, which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during self-organization, developing into a regime where only a few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
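    The degree of chaos referred to above is typically quantified by the maximum Lyapunov exponent, estimated from the average growth rate of a small perturbation. A generic numerical sketch (not the SORN model itself), applied to a chaotic and a contracting one-dimensional map:

```python
import numpy as np

def max_lyapunov(f, x0, n_steps=500, eps=1e-8):
    """Estimate the maximum Lyapunov exponent of map f by tracking the
    growth of a tiny perturbation, renormalizing it at every step."""
    x = np.array(x0, dtype=float)
    d = eps * np.ones_like(x) / np.sqrt(x.size)
    total = 0.0
    for _ in range(n_steps):
        x_pert = x + d
        x, x_pert = f(x), f(x_pert)
        diff = x_pert - x
        dist = np.linalg.norm(diff)
        total += np.log(dist / eps)
        d = diff * (eps / dist)    # renormalize the perturbation
    return total / n_steps

# Chaotic logistic map (r = 4): known exponent ln 2 ≈ 0.693
lam_chaotic = max_lyapunov(lambda x: 4 * x * (1 - x), [0.3])
# Contracting map: negative exponent, perturbations die out
lam_stable = max_lyapunov(lambda x: 0.5 * x, [0.3])
print(lam_chaotic, lam_stable)
```

A positive estimate indicates amplified perturbations (chaos); the SORN result above corresponds to this estimate drifting downward during self-organization.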

  5. Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays.

    Science.gov (United States)

    Guo, Zhenyuan; Wang, Jun; Yan, Zheng

    2013-12-01

    This paper addresses the global exponential dissipativity of memristor-based recurrent neural networks with time-varying delays. By constructing proper Lyapunov functionals and using M-matrix theory and the LaSalle invariant principle, the sets of global exponential dissipativity are characterized parametrically. It is proven herein that there are 2^(2n^2-n) equilibria for an n-neuron memristor-based neural network and that they are located in the derived globally attractive sets. It is also shown that memristor-based recurrent neural networks with time-varying delays are stabilizable at the origin of the state space by using a linear state feedback control law with appropriate gains. Finally, two numerical examples are discussed in detail to illustrate the characteristics of the results. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling.

    Science.gov (United States)

    Lu, Wenlian; Zheng, Ren; Chen, Tianping

    2016-03-01

    In this paper, we discuss outer-synchronization of the asymmetrically connected recurrent time-varying neural networks. By using both centralized and decentralized discretization data sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference of any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples in centralized and decentralized principles are proved to be positive, which guarantees exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results.

  7. Matrix measure method for global exponential stability of complex-valued recurrent neural networks with time-varying delays.

    Science.gov (United States)

    Gong, Weiqiang; Liang, Jinling; Cao, Jinde

    2015-10-01

    In this paper, based on the matrix measure method and the Halanay inequality, the global exponential stability problem is investigated for complex-valued recurrent neural networks with time-varying delays. Without constructing any Lyapunov functions, several sufficient criteria are obtained to ascertain the global exponential stability of the addressed complex-valued neural networks under different activation functions. Here, the activation functions are no longer assumed to be differentiable, which is always demanded in related references. In addition, the obtained results are easy to verify and implement in practice. Finally, two examples are given to illustrate the effectiveness of the obtained results.
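    For reference, the matrix measure (logarithmic norm) used in such criteria can be computed directly. Unlike a norm it can be negative, and a negative measure certifies exponential contraction of the linear part without constructing a Lyapunov function. A sketch of the standard infinity-norm and 2-norm measures (real-valued case for simplicity; the paper treats the complex-valued setting):

```python
import numpy as np

def mu_inf(A):
    """Matrix measure induced by the infinity norm:
    mu_inf(A) = max_i ( a_ii + sum_{j != i} |a_ij| )."""
    A = np.asarray(A, dtype=float)
    off = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return float(np.max(np.diag(A) + off))

def mu_2(A):
    """Matrix measure induced by the 2-norm:
    largest eigenvalue of the symmetric part (A + A^T) / 2."""
    A = np.asarray(A, dtype=float)
    return float(np.max(np.linalg.eigvalsh((A + A.T) / 2)))

# mu(A) < 0 implies every trajectory of dx/dt = A x contracts exponentially.
A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
print(mu_inf(A), mu_2(A))
```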

  8. Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.

    Directory of Open Access Journals (Sweden)

    Daniel Bush

    Full Text Available The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode for a range of spatial and non-spatial stimuli. It has also been demonstrated that the phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotyped learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta-coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phases, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.

  9. Adaptive Sliding Mode Control of Dynamic Systems Using Double Loop Recurrent Neural Network Structure.

    Science.gov (United States)

    Fei, Juntao; Lu, Cheng

    2017-03-06

    In this paper, an adaptive sliding mode control system using a double loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops, where the firing weights and the output signal calculated in the last step are stored and used as the feedback signals in each feedback loop. Since the new structure combines the advantages of internal-feedback and external-feedback NNs, it can acquire the internal state information while the output signal is also captured; thus the newly designed DLRNN can achieve better approximation performance than regular NNs without feedback loops or regular RNNs with a single feedback loop. The proposed DLRNN structure is employed in an equivalent controller to approximate the unknown nonlinear system dynamics, and the parameters of the DLRNN are updated online by adaptive laws to obtain favorable approximation performance. To investigate the effectiveness of the proposed controller, the designed adaptive sliding mode controller with the DLRNN is applied to a z-axis microelectromechanical system gyroscope to control the vibrating dynamics of the proof mass. Simulation results demonstrate that the proposed methodology can achieve good tracking properties, and comparisons of the approximation performance between the radial basis function NN, the RNN, and the DLRNN show that the DLRNN can accurately estimate the unknown dynamics quickly while its internal states remain more stable.

  10. Applying long short-term memory recurrent neural networks to intrusion detection

    Directory of Open Access Journals (Sweden)

    Ralf C. Staudemeyer

    2015-07-01

    Full Text Available We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks with the training data provided by the DARPA/KDD Cup ’99 challenge. To identify suitable LSTM-RNN network parameters and structure we experimented with various network topologies. We found that networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We also applied forget gates and shortcut connections. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets, respectively. We evaluated different feature sets for the detection of all attacks within one network and also trained networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results of strong static classifiers. With 93.82% accuracy and 22.13 cost, LSTM outperforms the winning entries of the KDD Cup ’99 challenge by far. This is due to the fact that LSTM learns to look back in time and correlate consecutive connection records. For the first time ever, we have demonstrated the usefulness of LSTM networks to intrusion detection.

  11. Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.

    Science.gov (United States)

    Okamoto, Hiroshi

    2016-08-01

    Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known.

  12. Migration and differentiation of neural progenitor cells after recurrent laryngeal nerve avulsion in rats.

    Directory of Open Access Journals (Sweden)

    Wan Zhao

    Full Text Available To investigate migration and differentiation of neural progenitor cells (NPCs) from the ependymal layer to the nucleus ambiguus (NA) after recurrent laryngeal nerve (RLN) avulsion. All of the animals received a CM-DiI injection in the left lateral ventricle. Forty-five adult rats were subjected to a left RLN avulsion injury, and nine rats were used as controls. 5-Bromo-2-deoxyuridine (BrdU) was injected intraperitoneally. Immunohistochemical analyses were performed in the brain stems at different time points after RLN injury. After RLN avulsion, the CM-DiI+ NPCs from the ependymal layer migrated to the lesioned NA. CM-DiI+/GFAP+ astrocytes, CM-DiI+/DCX+ neuroblasts and CM-DiI+/NeuN+ neurons were observed in the migratory stream. However, the ipsilateral NA included only CM-DiI+ astrocytes, not newborn neurons. After RLN avulsion, the NPCs in the ependymal layer of the 4th ventricle or central canal attempt to restore the damaged NA. We first confirm that the migratory stream includes both neurons and glia differentiated from the NPCs. However, only differentiated astrocytes are successfully incorporated into the NA. The presence of both cell types in the migratory process may play a role in repairing RLN injuries.

  13. Migration and differentiation of neural progenitor cells after recurrent laryngeal nerve avulsion in rats.

    Science.gov (United States)

    Zhao, Wan; Xu, Wen

    2014-01-01

    To investigate migration and differentiation of neural progenitor cells (NPCs) from the ependymal layer to the nucleus ambiguus (NA) after recurrent laryngeal nerve (RLN) avulsion. All of the animals received a CM-DiI injection in the left lateral ventricle. Forty-five adult rats were subjected to a left RLN avulsion injury, and nine rats were used as controls. 5-Bromo-2-deoxyuridine (BrdU) was injected intraperitoneally. Immunohistochemical analyses were performed in the brain stems at different time points after RLN injury. After RLN avulsion, the CM-DiI+ NPCs from the ependymal layer migrated to the lesioned NA. CM-DiI+/GFAP+ astrocytes, CM-DiI+/DCX+ neuroblasts and CM-DiI+/NeuN+ neurons were observed in the migratory stream. However, the ipsilateral NA included only CM-DiI+ astrocytes, not newborn neurons. After RLN avulsion, the NPCs in the ependymal layer of the 4th ventricle or central canal attempt to restore the damaged NA. We first confirm that the migratory stream includes both neurons and glia differentiated from the NPCs. However, only differentiated astrocytes are successfully incorporated into the NA. The presence of both cell types in the migratory process may play a role in repairing RLN injuries.

  14. Disease named entity recognition by combining conditional random fields and bidirectional recurrent neural networks

    Science.gov (United States)

    Wei, Qikang; Chen, Tao; Xu, Ruifeng; He, Yulan; Gui, Lin

    2016-01-01

    The recognition of disease and chemical named entities in scientific articles is a very important subtask in information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of disease named entities is considerably more difficult than that of chemical names. Although some remarkable chemical named entity recognition systems are available online, such as ChemSpot and tmChem, publicly available recognition systems for disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on a conditional random fields model with a rule-based post-processing module. The other is based on bidirectional recurrent neural networks. The named entities recognized by each DNER model are then fed into a support vector machine classifier for combining results. Finally, each recognized disease named entity is normalized to a medical subject heading disease name by using a vector space model based method. Experimental results show that using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level, respectively, on the testing data of the chemical-disease relation task in BioCreative V. Database URL: http://219.223.252.210:8080/SS/cdr.html PMID:27777244

  15. Using LSTM recurrent neural networks for monitoring the LHC superconducting magnets

    Science.gov (United States)

    Wielgosz, Maciej; Skoczeń, Andrzej; Mertik, Matej

    2017-09-01

    The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyzes voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when a misbehavior of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system and are updated and monitored by human operators. This paper proposes a novel approach to monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art deep learning algorithms. Consequently, the authors of the paper decided to examine the performance of LSTM recurrent neural networks for modeling the voltage time series of the magnets. In order to address this challenging task, different network architectures and hyper-parameters were used to achieve the best possible performance of the solution. The regression results were measured in terms of RMSE for different numbers of future steps and history lengths taken into account for the prediction. The best result of RMSE = 0.00104 was obtained for a network of 128 LSTM cells within the internal layer and a 16-step history buffer.
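    The history-length/future-steps setup described above amounts to a sliding-window regression dataset. A sketch with a synthetic series (the window sizes and the sine series are illustrative, not the LHC data); a persistence baseline gives an RMSE reference that any useful model should beat:

```python
import numpy as np

def make_windows(series, history, horizon):
    """Build (input, target) pairs: predict the value `horizon` steps
    ahead from the previous `history` samples."""
    X, y = [], []
    for t in range(history, len(series) - horizon + 1):
        X.append(series[t - history:t])
        y.append(series[t + horizon - 1])
    return np.array(X), np.array(y)

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Toy stand-in for a magnet voltage time series
t = np.arange(200)
series = np.sin(0.1 * t) + 0.01 * np.random.default_rng(0).normal(size=200)

X, y = make_windows(series, history=16, horizon=4)
print(X.shape, y.shape)

# Persistence baseline: predict the last observed value in each window
baseline = rmse(X[:, -1], y)
print(baseline)
```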

  16. Automatic Estimation of the Dynamics of Channel Conductance Using a Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Masaaki Takahashi

    2009-01-01

    Full Text Available In order to simulate neuronal electrical activities, we must estimate the dynamics of channel conductances from physiological experimental data. However, this approach requires the formulation of differential equations that express the time course of channel conductance. On the other hand, if the dynamics are automatically estimated, neuronal activities can be easily simulated. By using a recurrent neural network (RNN), it is possible to estimate the dynamics of channel conductances without formulating the differential equations. In the present study, we estimated the dynamics of the Na+ and K+ conductances of a squid giant axon using two different fully connected RNNs and were able to reproduce various neuronal activities of the axon. The reproduced activities were an action potential, a threshold, a refractory phenomenon, a rebound action potential, and periodic action potentials under constant stimulation. RNNs can be trained using channels other than the Na+ and K+ channels. Therefore, using our RNN estimation method, the dynamics of channel conductance can be automatically estimated and the neuronal activities can be simulated using the channel RNNs. An RNN can be a useful tool to estimate the dynamics of the channel conductance of a neuron, and by using the method presented here, it is possible to simulate neuronal activities more easily than with previous methods.

  17. Study of Sentiment Classification for Chinese Microblog Based on Recurrent Neural Network

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yangsen; JIANG Yuru; TONG Yixuan

    2016-01-01

    The sentiment classification of Chinese microblog posts is a meaningful topic. Many studies have been done based on rule-based and bag-of-words methods, and understanding the structural information of a sentence is the next target. We propose a sentiment classification method based on a Recurrent neural network (RNN). We adopt distributed word representations to construct a vector for each word in a sentence, then train fixed-dimension sentence vectors for sentences of different lengths with the RNN, so that the sentence vectors contain both word semantic features and word sequence features; finally, a softmax regression classifier in the output layer predicts each sentence's sentiment orientation. Experimental results revealed that our method can understand the structural information of negative and double-negative sentences and achieves better accuracy. This way of calculating sentence vectors helps to learn the deep structure of sentences and will be valuable for different research areas.
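    A minimal sketch of the idea: an Elman-style RNN folds a variable-length word sequence into a fixed-dimension sentence vector, which a softmax layer then classifies. All sizes and the random (untrained) weights below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
VOCAB, DIM, HID, CLASSES = 50, 8, 16, 2

# Distributed word representations (random here; learned in practice)
E = rng.normal(0, 0.1, (VOCAB, DIM))
W_xh = rng.normal(0, 0.1, (HID, DIM))
W_hh = rng.normal(0, 0.1, (HID, HID))
W_hy = rng.normal(0, 0.1, (CLASSES, HID))

def sentence_vector(word_ids):
    """Run an Elman RNN over the word sequence; the final hidden state is
    a fixed-dimension sentence vector regardless of sentence length."""
    h = np.zeros(HID)
    for w in word_ids:
        h = np.tanh(W_xh @ E[w] + W_hh @ h)   # word order influences h
    return h

def predict_sentiment(word_ids):
    h = sentence_vector(word_ids)
    z = W_hy @ h
    p = np.exp(z - z.max())
    return p / p.sum()                         # softmax over 2 orientations

p_short = predict_sentiment([3, 17, 42])
p_long = predict_sentiment([3, 17, 42, 9, 21, 8, 30])
print(p_short, p_long)   # same output dimension for different lengths
```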

  18. A Recurrent Neural Network Approach to Rear Vehicle Detection Which Considered State Dependency

    Directory of Open Access Journals (Sweden)

    Kayichirou Inagaki

    2003-08-01

    Full Text Available Vision-based detection often fails when the acquired image quality is reduced by changing optical environments. In addition, the shape of a vehicle in images taken from vision sensors changes as the vehicle approaches. Vehicle detection methods are required to perform successfully under these conditions. However, conventional methods do not cope well with rapidly varying brightness conditions. We suggest a new detection method that compensates for these conditions in monocular vision-based vehicle detection. The suggested method employs a Recurrent Neural Network (RNN), which has been applied for spatiotemporal processing. The RNN is able to respond to consecutive scenes involving the target vehicle and can track the movements of the target through the effect of past network states. The suggested method is particularly beneficial in environments with sudden, extreme variations such as bright sunlight and shade. Finally, we demonstrate the effectiveness of the state-dependent RNN-based method by comparing its detection results with those of a Multi-Layered Perceptron (MLP).

  19. Using Layer Recurrent Neural Network to Generate Pseudo Random Number Sequences

    Directory of Open Access Journals (Sweden)

    Veena Desai

    2012-03-01

    Full Text Available Pseudo Random Numbers (PRNs) are required for many cryptographic applications. This paper proposes a new method for generating PRNs using a Layer Recurrent Neural Network (LRNN). The proposed technique generates PRNs from the weight matrix obtained from the layer weights of the LRNN. The LRNN random number generator (RNG) uses a short keyword as a seed and generates a long pseudo-random sequence. The number of bits generated in the PRN sequence depends on the number of neurons in the input layer of the LRNN. The generated PRN sequence changes with a change in the training function of the LRNN. The sequences generated are a function of the keyword, the initial state of the network, and the training function. In our implementation the PRN sequences have been generated using three training functions: 1) scaled gradient descent, 2) Levenberg-Marquardt (TRAINLM), and 3) TRAINBFG. The generated sequences are tested for randomness using the ENT and NIST test suites. The ENT test can be applied to sequences of small size. NIST has 16 tests for random numbers. The LRNN-generated PRNs pass 11 tests, show no observations for 4 tests, and fail 1 test when subjected to NIST. This paper presents the test results for random number sequences ranging from 25 bits to 1000 bits, generated using the LRNN.
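    A sketch of the weight-matrix-to-bitstream idea under stated assumptions: a random matrix stands in for the trained LRNN layer weights, one bit is extracted per weight from a low-order digit of its magnitude (this extraction rule is hypothetical, not the paper's), and a crude monobit frequency check in the spirit of the ENT/NIST suites is applied.

```python
import numpy as np

# Stand-in for a trained LRNN layer-weight matrix; in the paper the
# weights result from training on a short keyword seed.
rng = np.random.default_rng(12345)
W = rng.normal(size=(25, 40))

# Hypothetical extraction rule: take the parity of a low-order decimal
# digit of each weight's magnitude, which is far less structured than
# e.g. the sign pattern of the weights.
digits = np.floor(np.abs(W) * 1e6).astype(np.int64) % 10
bits = (digits % 2).flatten()
print(len(bits))   # 1000 bits, matching the top of the 25..1000-bit range

# Crude monobit frequency check: the proportion of ones should be
# near 0.5 for a random-looking sequence.
ones = bits.mean()
print(ones)
```

The real suites (ENT, NIST SP 800-22) apply many further tests such as runs, serial correlation, and entropy estimates.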

  20. Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.

    Science.gov (United States)

    Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam

    2016-01-01

    We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and the fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to accurately extract the underlying dynamics present in the time series expression data. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology has first been applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we have implemented our proposed framework on experimental (in vivo) datasets. Finally, we have investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we have implemented our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with a maximum of just over 15% deterioration in the worst case.
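    The Recurrent Neural Network formalism referred to here is commonly written as a discrete-time update in which each gene's expression is driven by a squashed weighted sum of all genes' expressions minus a decay term. A minimal simulation sketch (the weights, biases, decay rates, and two-gene topology below are invented for illustration, not taken from the paper):

```python
import numpy as np

def simulate_grn_rnn(W, beta, lam, x0, steps=50, dt=0.1):
    """Simulate the classic RNN formalism for gene regulation (a common
    form in this literature; the papers' exact variants may differ):
        x(t+dt) = x(t) + dt * (sigmoid(W @ x(t) + beta) - lam * x(t))
    W[i, j] is the regulatory effect of gene j on gene i; lam is decay."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (sigmoid(W @ x + beta) - lam * x)
        traj.append(x.copy())
    return np.array(traj)

# Toy 2-gene network: gene 1 is repressed by gene 1's activator, etc.
W = np.array([[0.0, -2.0],
              [3.0,  0.0]])
traj = simulate_grn_rnn(W, beta=np.array([0.5, -1.0]),
                        lam=np.array([1.0, 1.0]), x0=[0.1, 0.1])
```

    Training (the swarm-intelligence part of the paper) would fit W, beta, and lam so that such trajectories match the measured expression time series.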

  1. Optimal sensorimotor integration in recurrent cortical networks: a neural implementation of Kalman filters.

    Science.gov (United States)

    Denève, Sophie; Duhamel, Jean-René; Pouget, Alexandre

    2007-05-23

    Several behavioral experiments suggest that the nervous system uses an internal model of the dynamics of the body to implement a close approximation to a Kalman filter. This filter can be used to perform a variety of tasks nearly optimally, such as predicting the sensory consequence of motor action, integrating sensory and body posture signals, and computing motor commands. We propose that the neural implementation of this Kalman filter involves recurrent basis function networks with attractor dynamics, a kind of architecture that can be readily mapped onto cortical circuits. In such networks, the tuning curves to variables such as arm velocity are remarkably noninvariant in the sense that the amplitude and width of the tuning curves of a given neuron can vary greatly depending on other variables such as the position of the arm or the reliability of the sensory feedback. This property could explain some puzzling properties of tuning curves in the motor and premotor cortex, and it leads to several new predictions.
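    For reference, the normative computation that the authors argue such recurrent basis-function networks approximate is the Kalman filter. A textbook scalar version (the constant-state model and noise levels here are chosen purely for illustration):

```python
import numpy as np

def kalman_filter_1d(zs, a=1.0, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Textbook scalar Kalman filter, the normative computation the
    recurrent network is argued to approximate.
    State model:  x_t = a*x_{t-1} + w,  w ~ N(0, q)
    Observation:  z_t = x_t + v,        v ~ N(0, r)"""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict step
        x = a * x
        p = a * a * p + q
        # Update step
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
truth = 2.0
zs = truth + rng.standard_normal(200)   # noisy measurements of a constant state
est = kalman_filter_1d(zs)
```

    Tracking a constant hidden state from noisy measurements, the filtered estimate ends up far less noisy than the raw observations.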

  2. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    Science.gov (United States)

    Hu, Jin; Wang, Jun

    2015-06-01

    In recent years, complex-valued recurrent neural networks have been developed and analysed in depth in view of their good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to utilize a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results of several numerical examples are delineated to illustrate the theoretical results, and an application to associative memory is also given.
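    A discrete-time complex-valued recurrent iteration of the kind analysed here can be sketched as follows. The split real/imaginary tanh activation and the small-gain weights are illustrative choices; when a contraction-type sufficient condition holds, trajectories from different initial states converge to the same attractor, in the spirit of the global stability results:

```python
import numpy as np

def iterate_cvrnn(W, b, z0, steps=200):
    """Iterate a discrete-time complex-valued RNN
        z(t+1) = W @ f(z(t)) + b,
    with a bounded activation applied to real and imaginary parts
    separately (one common choice in this literature)."""
    f = lambda z: np.tanh(z.real) + 1j * np.tanh(z.imag)
    z = np.array(z0, dtype=complex)
    for _ in range(steps):
        z = W @ f(z) + b
    return z

# Small-gain weights satisfy a contraction-type sufficient condition,
# so trajectories from very different initial states converge together.
W = 0.2 * np.array([[0.5 + 0.1j, -0.3 + 0j],
                    [0.2j, 0.4 - 0.2j]])
b = np.array([0.5 - 0.5j, -0.25 + 1j])
z_a = iterate_cvrnn(W, b, [10 + 10j, -7j])
z_b = iterate_cvrnn(W, b, [-3 + 0j, 2 + 1j])
```

    Both runs settle on the same equilibrium, a discrete-time analogue of global exponential stability.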

  3. EXISTENCE AND EXPONENTIAL STABILITY OF ALMOST PERIODIC SOLUTIONS TO BAM RECURRENT NEURAL NETWORKS WITH TRANSMISSION DELAYS AND CONTINUOUSLY DISTRIBUTED DELAYS

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    In this paper, a class of bidirectional associative memory (BAM) recurrent neural networks with delays is studied. By a fixed point theorem and a Lyapunov functional, some new sufficient conditions for the existence, uniqueness and global exponential stability of the almost periodic solutions are established. These conditions are easy to verify, and our results complement the previously known results.

  4. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular but simple approaches to model network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is implemented with a Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators. Moreover, the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method incurs a higher computational time complexity in both cases due to the hybrid optimization process.
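    Of the two metaheuristics combined above, the Flower Pollination Algorithm handles continuous parameter optimization. A generic minimization sketch of its global/local pollination scheme follows; the Cuckoo Search hybridization and the RNN-specific fitness function are not reproduced, and the heavy-tailed Cauchy step is a simple stand-in for a true Levy flight:

```python
import numpy as np

def flower_pollination(objective, dim, n=20, iters=200, p=0.8, seed=0):
    """Minimal Flower Pollination Algorithm sketch (Yang's scheme in
    outline only; the paper hybridizes it with Cuckoo Search)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-2, 2, (n, dim))
    fit = np.array([objective(x) for x in pop])
    best = pop[fit.argmin()].copy()
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:
                # Global pollination: heavy-tailed step toward the best flower.
                step = rng.standard_cauchy(dim) * 0.01
                cand = pop[i] + step * (best - pop[i])
            else:
                # Local pollination: mix with two random flowers.
                j, k = rng.integers(0, n, 2)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            if objective(cand) < fit[i]:       # greedy acceptance
                pop[i], fit[i] = cand, objective(cand)
                if fit[i] < objective(best):
                    best = pop[i].copy()
    return best

# Minimize the sphere function as a stand-in for the RNN training error.
best = flower_pollination(lambda x: float(np.sum(x ** 2)), dim=3)
```

    In the paper's setting, `objective` would be the RNN model's prediction error on the expression data rather than this toy sphere function.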

  5. A QoS Provisioning Recurrent Neural Network based Call Admission Control for beyond 3G Networks

    Directory of Open Access Journals (Sweden)

    Ramesh Babu H. S.

    2010-03-01

    Full Text Available The Call Admission Control (CAC) is one of the Radio Resource Management (RRM) techniques that plays an influential role in ensuring the desired Quality of Service (QoS) to the users and applications in next generation networks. This paper proposes a fuzzy neural approach for making the call admission control decision in multi-class traffic based Next Generation Wireless Networks (NGWN). The proposed Fuzzy Neural call admission control (FNCAC) scheme is an integrated CAC module that combines the linguistic control capabilities of the fuzzy logic controller and the learning capabilities of the neural networks. The model is based on recurrent radial basis function networks, which have better learning and adaptability and can be used to develop an intelligent system to handle the incoming traffic in a heterogeneous network environment. The simulation results are optimistic and indicate that the proposed FNCAC algorithm performs better than the other two methods, and the call blocking probability is minimal when compared to the other two methods.

  6. A QoS Provisioning Recurrent Neural Network based Call Admission Control for beyond 3G Networks

    CERN Document Server

    S., Ramesh Babu H; S, Satyanarayana P

    2010-01-01

    The Call Admission Control (CAC) is one of the Radio Resource Management (RRM) techniques that plays an influential role in ensuring the desired Quality of Service (QoS) to the users and applications in next generation networks. This paper proposes a fuzzy neural approach for making the call admission control decision in multi-class traffic based Next Generation Wireless Networks (NGWN). The proposed Fuzzy Neural call admission control (FNCAC) scheme is an integrated CAC module that combines the linguistic control capabilities of the fuzzy logic controller and the learning capabilities of the neural networks. The model is based on recurrent radial basis function networks, which have better learning and adaptability and can be used to develop an intelligent system to handle the incoming traffic in a heterogeneous network environment. The simulation results are optimistic and indicate that the proposed FNCAC algorithm performs better than the other two methods, and the call blocking probability is minimal when compared to the other two methods.

  7. Identifying time-delayed gene regulatory networks via an evolvable hierarchical recurrent neural network.

    Science.gov (United States)

    Kordmahalleh, Mina Moradi; Sefidmazgi, Mohammad Gorji; Harrison, Scott H; Homaifar, Abdollah

    2017-01-01

    The modeling of genetic interactions within a cell is crucial for a basic understanding of physiology and for applied areas such as drug design. Interactions in gene regulatory networks (GRNs) include effects of transcription factors, repressors, small metabolites, and microRNA species. In addition, the effects of regulatory interactions are not always simultaneous, but can occur after a finite time delay, or as a combined outcome of simultaneous and time delayed interactions. Powerful biotechnologies have been rapidly and successfully measuring levels of genetic expression to illuminate different states of biological systems. This has led to an ensuing challenge to improve the identification of specific regulatory mechanisms through regulatory network reconstructions. Solutions to this challenge will ultimately help to spur forward efforts based on the usage of regulatory network reconstructions in systems biology applications. We have developed a hierarchical recurrent neural network (HRNN) that identifies time-delayed gene interactions using time-course data. A customized genetic algorithm (GA) was used to optimize hierarchical connectivity of regulatory genes and a target gene. The proposed design provides a non-fully connected network with the flexibility of using recurrent connections inside the network. These features and the non-linearity of the HRNN facilitate the process of identifying temporal patterns of a GRN. Our HRNN method was implemented with the Python language. It was first evaluated on simulated data representing linear and nonlinear time-delayed gene-gene interaction models across a range of network sizes and variances of noise. We then further demonstrated the capability of our method in reconstructing GRNs of the Saccharomyces cerevisiae synthetic network for in vivo benchmarking of reverse-engineering and modeling approaches (IRMA). We compared the performance of our method to TD-ARACNE, HCC-CLINDE, TSNI and ebdbNet across different network
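    The customized GA described above searches over which regulators connect to a target gene, i.e. over binary connectivity masks. A toy version with a stand-in fitness function (the real method scores candidate connectivities by HRNN prediction error, which is not reproduced here; all parameter values are illustrative):

```python
import numpy as np

def ga_select_regulators(score, n_genes, pop=30, gens=60, seed=2):
    """Toy genetic algorithm over binary connectivity masks, echoing the
    paper's use of a GA to choose which regulators connect to a target.
    `score` is maximized; here it is a stand-in, not the HRNN error."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, (pop, n_genes))
    for _ in range(gens):
        fit = np.array([score(ind) for ind in P])
        # Binary tournament selection.
        idx = rng.integers(0, pop, (pop, 2))
        parents = P[np.where(fit[idx[:, 0]] > fit[idx[:, 1]],
                             idx[:, 0], idx[:, 1])]
        # One-point crossover on consecutive pairs, then bit-flip mutation.
        children = parents.copy()
        cut = rng.integers(1, n_genes, pop)
        for i in range(0, pop - 1, 2):
            c = cut[i]
            children[i, c:], children[i + 1, c:] = (parents[i + 1, c:].copy(),
                                                    parents[i, c:].copy())
        flips = rng.random(children.shape) < 0.02
        P = np.where(flips, 1 - children, children)
    fit = np.array([score(ind) for ind in P])
    return P[fit.argmax()]

# Stand-in fitness: negative Hamming distance to a known "true" mask.
target = np.array([1, 0, 0, 1, 0, 1, 0, 0])
best = ga_select_regulators(lambda m: -int(np.sum(m != target)), 8)
```

    In the paper's setting, each mask would define which regulator genes feed the HRNN for the target gene, and `score` would reward masks whose trained HRNN best fits the time-course data.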

  8. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human–Robot Interaction

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language–behavior relationships and the temporal patterns of interaction. Here, “internal dynamics” refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human’s linguistic instruction. After learning, the network actually formed the attractor structure representing both language–behavior relationships and the task’s temporal pattern in its internal dynamics. In the dynamics, language–behavior mapping was achieved by the branching structure. Repetition of human’s instruction and robot’s behavioral response was represented as the cyclic structure, and besides, waiting to a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases. PMID:27471463

  9. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of human's instruction and robot's behavioral response was represented as the cyclic structure, and besides, waiting to a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  10. Recurrent neural networks for building energy use prediction and system identification -- A progress report

    Energy Technology Data Exchange (ETDEWEB)

    Kreider, J.F.; Curtiss, P.; Dodier, R.; Krarti, M. [Univ. of Colorado, Boulder, CO (United States); Claridge, D.E.; Haberl, J.S. [Texas A and M Univ., College Station, TX (United States). Dept. of Mechanical Engineering

    1995-11-01

    Following several successful applications of feedforward neural networks (NNs) to the building energy prediction problem, a more difficult problem has been addressed recently: namely, the prediction of building energy consumption well into the future without knowledge of immediately past energy consumption. This paper reports results of a recent study of six months of hourly data recorded at the Zachry Engineering Center (ZEC) in College Station, Texas. An early study demonstrated the success of NNs used as predictors for hourly consumption of electricity, chilled water and hot water for the ZEC. Relatively simple networks with fewer than a dozen inputs were able to predict these three hourly, whole-building energy end uses to within errors of 5-10% RMS, with the difference depending on the specifics of energy type and time of year. These predictions were made for selected future months given network training data of between one and three past months. Inputs to these networks included measured energy consumption for one or two immediately past hours. Such data are available, for example, if one is trying to conduct hourly diagnostics on heating, ventilating and air conditioning (HVAC) systems in commercial buildings. The success of this study prompted a second study of a more difficult problem. In this case, the goal was to predict energy consumption into the future without knowledge of consumption of the various energies for the immediate past. Such a prediction is of value when estimating what a building, retrofitted with energy conservation features, would have consumed had it not been retrofitted. This prediction can be compared to actual consumption to estimate the savings, if any, that accrue due to the installation of the energy conservation subsystems or components. Because one is predicting for several months, not for one hour, into the future, the problem is more difficult. Results presented show that recurrent NNs can be used for this prediction task.

  11. Physiological modules for generating discrete and rhythmic movements: action identification by a dynamic recurrent neural network.

    Directory of Open Access Journals (Sweden)

    Ana eBengoetxea

    2014-09-01

    Full Text Available In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by PCA in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three different fundamental modules, one regulating the co-activation of all muscles over the time span of the movement and two others eliciting patterns of reciprocal activation operating in orthogonal directions.

  12. Recognition of the physiological actions of the triphasic EMG pattern by a dynamic recurrent neural network.

    Science.gov (United States)

    Cheron, Guy; Cebolla, Ana Maria; Bengoetxea, Ana; Leurs, Françoise; Dan, Bernard

    2007-03-06

    Triphasic electromyographic (EMG) patterns with a sequence of activity in agonist (AG1), antagonist (ANT) and again in agonist (AG2) muscles are characteristic of ballistic movements. They have been studied in terms of rectangular pulse-width or pulse-height modulation. In order to take into account the complexity of the EMG signal within the bursts, we used a dynamic recurrent neural network (DRNN) for the identification of this pattern in subjects performing fast elbow flexion movements. Biceps and triceps EMGs were fed to all 35 fully-connected hidden units of the DRNN for mapping onto elbow angular acceleration signals. DRNN training was supervised, involving learning rule adaptations of synaptic weights and time constants of each unit. We demonstrated that the DRNN is able to perfectly reproduce the acceleration profile of the ballistic movements. Then we tested the physiological plausibility of all the networks that reached an error level below 0.001 by selectively increasing the amplitude of each burst of the triphasic pattern and evaluating the effects on the simulated accelerating profile. Nineteen percent of these simulations reproduced the physiological action classically attributed to the 3 EMG bursts: AG1 increase showed an increase of the first accelerating pulse, ANT an increase of the braking pulse and AG2 an increase of the clamping pulse. These networks also recognized the physiological function of the time interval between AG1 and ANT, reproducing the linear relationship between time interval and movement amplitude. This task-dynamics recognition has implications for the development of DRNN as diagnostic tools and prosthetic controllers.

  13. Learning a Transferable Change Rule from a Recurrent Neural Network for Land Cover Change Detection

    Directory of Open Access Journals (Sweden)

    Haobo Lyu

    2016-06-01

    Full Text Available When exploited in remote sensing analysis, a reliable change rule with transfer ability can detect changes accurately and be applied widely. However, in practice, the complexity of land cover changes makes it difficult to use only one change rule or change feature learned from a given multi-temporal dataset to detect any other new target images without applying other learning processes. In this study, we consider the design of an efficient change rule having transferability to detect both binary and multi-class changes. The proposed method relies on an improved Long Short-Term Memory (LSTM) model to acquire and record the change information of long-term sequence remote sensing data. In particular, a core memory cell is utilized to learn the change rule from the information concerning binary changes or multi-class changes. Three gates are utilized to control the input, output and update of the LSTM model for optimization. In addition, the learned rule can be applied to detect changes and transfer the change rule from one learned image to another new target multi-temporal image. In this study, binary experiments, transfer experiments and multi-class change experiments are exploited to demonstrate the superiority of our method. Three contributions of this work can be summarized as follows: (1) the proposed method can learn an effective change rule to provide reliable change information for multi-temporal images; (2) the learned change rule has good transferability for detecting changes in new target images without any extra learning process, and the new target images should have a multi-spectral distribution similar to that of the training images; and (3) to the authors' best knowledge, this is the first time that deep learning in recurrent neural networks is exploited for change detection. In addition, under the framework of the proposed method, changes can be detected under both binary detection and multi-class change detection.
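    The gated memory-cell update at the heart of the LSTM model used here can be written out explicitly. Below is a single step in NumPy with randomly initialized weights (layer sizes and initialization are arbitrary; the paper's trained change-detection model is of course much larger):

```python
import numpy as np

def lstm_step(x, h, c, p):
    """One LSTM step with the three gates named in the abstract: input,
    forget (updating the core memory cell), and output."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h + p["bi"])   # input gate
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h + p["bf"])   # forget gate
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h + p["bo"])   # output gate
    g = np.tanh(p["Wg"] @ x + p["Ug"] @ h + p["bg"])   # candidate memory
    c_new = f * c + i * g        # core memory cell update
    h_new = o * np.tanh(c_new)   # gated hidden output
    return h_new, c_new

rng = np.random.default_rng(1)
d_in, d_h = 4, 3
p = {}
for name in ("i", "f", "o", "g"):
    p["W" + name] = 0.1 * rng.standard_normal((d_h, d_in))
    p["U" + name] = 0.1 * rng.standard_normal((d_h, d_h))
    p["b" + name] = np.zeros(d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for _ in range(5):   # feed five time steps of a multi-temporal sequence
    h, c = lstm_step(rng.standard_normal(d_in), h, c, p)
```

    In the change-detection setting, `x` at each step would be the multi-spectral feature vector of one acquisition date, and the cell state `c` accumulates the change information across dates.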

  14. Physiological modules for generating discrete and rhythmic movements: action identification by a dynamic recurrent neural network.

    Science.gov (United States)

    Bengoetxea, Ana; Leurs, Françoise; Hoellinger, Thomas; Cebolla, Ana M; Dan, Bernard; McIntyre, Joseph; Cheron, Guy

    2014-01-01

    In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by PCA in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three different fundamental modules, one regulating the co-activation of all muscles over the time span of the movement and two others eliciting patterns of reciprocal activation operating in orthogonal directions.

  15. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2016-07-01

    Full Text Available To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of human's instruction and robot's behavioral response was represented as the cyclic structure, and besides, waiting to a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  16. Electronic realisation of recurrent neural network for solving simultaneous linear equations

    Science.gov (United States)

    Wang, J.

    1992-02-01

    An electronic neural network for solving simultaneous linear equations is presented. The proposed electronic neural network is able to generate real-time solutions to large-scale problems. The operating characteristics of an op-amp-based neural network are demonstrated via an illustrative example.
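    The standard recurrent-network formulation for solving A x = b is a gradient flow on the residual, dx/dt = -A^T (A x - b), which an op-amp circuit can realize in analog hardware. A discrete Euler sketch of those dynamics (the specific A, b, step size, and iteration count below are illustrative):

```python
import numpy as np

def solve_linear_recurrent(A, b, dt=0.01, steps=5000):
    """Euler-discretized gradient-flow network dx/dt = -A.T @ (A x - b),
    the standard recurrent-network formulation for solving A x = b.
    Each "neuron" integrates a weighted residual until equilibrium."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - dt * (A.T @ (A @ x - b))
    return x

# 3x + y = 9, x + 2y = 8  ->  x = 2, y = 3
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = solve_linear_recurrent(A, b)
```

    At equilibrium the residual A x - b vanishes, so the settled network state is the solution; the analog circuit reaches the same equilibrium in continuous time.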

  17. Distributed Recurrent Neural Forward Models with Synaptic Adaptation and CPG-based control for Complex Behaviors of Walking Robots

    Directory of Open Access Journals (Sweden)

    Sakyasingha eDasgupta

    2015-09-01

    Full Text Available Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps as well as climbing over high obstacles. Furthermore we demonstrate that the newly developed recurrent network based approach to sensorimotor prediction outperforms the previous state-of-the-art adaptive neuron forward models.

  18. Distributed recurrent neural forward models with synaptic adaptation and CPG-based control for complex behaviors of walking robots.

    Science.gov (United States)

    Dasgupta, Sakyasingha; Goldschmidt, Dennis; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanism thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here, an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps, leg damage adaptations, as well as climbing over high obstacles. 
Furthermore, we demonstrate that the newly developed recurrent network based approach to online forward models outperforms the adaptive neuron forward models.

  19. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    Directory of Open Access Journals (Sweden)

    S. N. Naikwad

    2009-01-01

    Full Text Available A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor (CSTR) exhibits complex nonlinear behavior, as the reaction is exothermic. A literature review shows that process control of CSTRs using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes temporal relationships in its input-output mappings, a time lagged recurrent neural network is particularly suited for the identification task. The standard back propagation algorithm with a momentum term is adopted in this model. Various parameters, such as the number of processing elements, number of hidden layers, training and testing percentages, learning rule, and transfer functions in the hidden and output layers, are investigated on the basis of performance measures such as MSE, NMSE, and the correlation coefficient on the test data set. Finally, the effects of different norms are tested along with variations of the gamma memory filter. It is demonstrated that the dynamic NN model has a remarkable system identification capability for the problems considered in this paper. Thus the FTLR NN with gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is the major contribution of this paper.
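    The gamma memory that gives the FTLR NN its temporal depth can be sketched as a cascade of leaky integrators; the following minimal pure-Python version is illustrative only (the tap count and the parameter mu are placeholders, not values from the paper):

```python
def gamma_memory(x, taps=3, mu=0.5):
    """Run a gamma memory filter (a cascade of leaky integrators) over a
    signal. Returns, per time step, the tap outputs g_1..g_taps, which
    form a depth-adjustable short-term memory of the input sequence x."""
    g = [0.0] * (taps + 1)           # g[0] holds the current input
    out = []
    for sample in x:
        prev = g[:]                  # state at time n-1
        g[0] = sample
        for k in range(1, taps + 1):
            # g_k[n] = (1 - mu) * g_k[n-1] + mu * g_{k-1}[n-1]
            g[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
        out.append(g[1:])
    return out
```

An impulse fed into the filter decays geometrically through the taps, so each tap holds a progressively older, smoother view of the input.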

  20. A system of recurrent neural networks for modularising, parameterising and dynamic analysis of cell signalling networks.

    Science.gov (United States)

    Samarasinghe, S; Ling, H

    2017-02-04

    In this paper, we show how to extend our previously proposed novel continuous time Recurrent Neural Networks (RNN) approach, which retains the advantage of continuous dynamics offered by Ordinary Differential Equations (ODE) while enabling parameter estimation through adaptation, to larger signalling networks using a modular approach. Specifically, the signalling network is decomposed into several sub-models based on important temporal events in the network. Each sub-model is represented by the proposed RNN and trained using data generated from the corresponding ODE model. Trained sub-models are assembled into a whole system RNN which is then subjected to systems dynamics and sensitivity analyses. The concept is illustrated by application to the G1/S transition in the cell cycle using the Iwamoto et al. (2008) ODE model. We decomposed the G1/S network into 3 sub-models: (i) E2F transcription factor release; (ii) E2F and CycE positive feedback loop for elevating cyclin levels; and (iii) E2F and CycA negative feedback to degrade E2F. The trained sub-models accurately represented system dynamics and parameters were in good agreement with the ODE model. The whole system RNN, however, revealed a couple of parameters contributing to compounding errors due to feedback, and required refinement to sub-model 2. These related to the reversible reaction between CycE/CDK2 and p27, its inhibitor. The revised whole system RNN model very accurately matched the dynamics of the ODE system. Local sensitivity analysis of the whole system model further revealed the dominant influence of the above two parameters in perturbing the G1/S transition, giving support to a recent hypothesis that the release of the inhibitor p27 from the Cyc/CDK complex triggers cell cycle stage transition. To make the model useful in a practical setting, we modified each RNN sub-model with a time relay switch to facilitate larger interval input data (≈20 min) (the original model used data for 30 s or less) and retrained them that produced

  1. Novel recurrent neural network for modelling biological networks: oscillatory p53 interaction dynamics.

    Science.gov (United States)

    Ling, Hong; Samarasinghe, Sandhya; Kulasiri, Don

    2013-12-01

    Understanding the control of cellular networks consisting of gene and protein interactions and their emergent properties is a central activity of Systems Biology research. For this, continuous, discrete, hybrid, and stochastic methods have been proposed. Currently, the most common approach to modelling accurate temporal dynamics of networks is ordinary differential equations (ODE). However, critical limitations of ODE models are the difficulty of kinetic parameter estimation and the numerical solution of a large number of equations, making them better suited to smaller systems. In this article, we introduce a novel recurrent artificial neural network (RNN) that addresses the above limitations and produces a continuous model that easily estimates parameters from data, can handle a large number of molecular interactions, and quantifies temporal dynamics and emergent systems properties. This RNN is based on a system of ODEs representing molecular interactions in a signalling network. Each neuron represents the concentration change of one molecule, described by an ODE. The weights of the RNN correspond to the kinetic parameters in the system and can be adjusted incrementally during network training. The method is applied to the p53-Mdm2 oscillation system - a crucial component of the DNA damage response pathways activated by a damage signal. Simulation results indicate that the proposed RNN can successfully represent the behaviour of the p53-Mdm2 oscillation system and solve the parameter estimation problem with high accuracy. Furthermore, we present a modified form of the RNN that estimates parameters and captures systems dynamics from sparse data collected over relatively large time steps. We also investigate the robustness of the p53-Mdm2 system using the trained RNN under various levels of parameter perturbation to gain a greater understanding of the control of the p53-Mdm2 system. Its outcomes on robustness are consistent with the current biological knowledge of this system. As more
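    The core idea of mapping one neuron to one molecular ODE can be sketched with a toy two-species negative feedback loop; the equations and rate constants below are illustrative stand-ins, not the actual p53-Mdm2 model:

```python
def simulate(p0=0.1, m0=0.1, a=1.0, b=1.0, c=1.0, d=1.0,
             dt=0.01, steps=10000):
    """Euler-integrate a two-neuron RNN whose units mirror the ODEs
        dp/dt = a - b*m*p   (p53-like species, repressed by m)
        dm/dt = c*p - d*m   (Mdm2-like species, induced by p).
    The weights a..d play the role of kinetic parameters."""
    p, m = p0, m0
    for _ in range(steps):
        dp = a - b * m * p
        dm = c * p - d * m
        p += dt * dp
        m += dt * dm
    return p, m
```

With these rates the loop spirals into the steady state p = m = 1; training such an RNN amounts to adjusting a..d so the trajectory matches measured concentrations.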

  2. Synchronization of chaotic systems and identification of nonlinear systems by using recurrent hierarchical type-2 fuzzy neural networks.

    Science.gov (United States)

    Mohammadzadeh, Ardashir; Ghaemi, Sehraneh

    2015-09-01

    This paper proposes a novel approach for training recurrent hierarchical interval type-2 fuzzy neural networks (RHT2FNN) based on square-root cubature Kalman filters (SCKF). The SCKF algorithm is used to adjust the premise part of the type-2 FNN, the defuzzification weights, and the feedback weights. The recurrence property of the proposed network is that the output of each membership function is fed back to itself. The proposed RHT2FNN is employed in a sliding mode control scheme for the synchronization of chaotic systems. Unknown functions in the sliding mode control approach are estimated by the RHT2FNN. Another application of the proposed RHT2FNN is the identification of dynamic nonlinear systems. The effectiveness of the proposed network and its learning algorithm is verified by several simulation examples. Furthermore, the universal approximation property of RHT2FNNs is also shown.

  3. Design and analysis of high-capacity associative memories based on a class of discrete-time recurrent neural networks.

    Science.gov (United States)

    Zeng, Zhigang; Wang, Jun

    2008-12-01

    This paper presents a design method for synthesizing associative memories based on discrete-time recurrent neural networks. The proposed procedure enables both hetero- and autoassociative memories to be synthesized with high storage capacity and assured global asymptotic stability. The stored patterns are retrieved by feeding probes via external inputs rather than initial conditions. As typical representatives, discrete-time cellular neural networks (CNNs) designed with space-invariant cloning templates are examined in detail. In particular, it is shown that the procedure herein can determine the input matrix of any CNN based on a space-invariant cloning template which involves only a few design parameters. Two specific examples and many experimental results are included to demonstrate the characteristics and performance of the designed associative memories.
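    The key design choice above — retrieving patterns by a persistent external probe instead of an initial condition — can be illustrated with a toy discrete-time network using Hebbian weights (the scaling beta and network size are illustrative and not part of the paper's synthesis procedure):

```python
def hebbian(patterns):
    """Outer-product (Hebbian) weights storing the given +/-1 patterns."""
    n = len(patterns[0])
    return [[sum(p[i] * p[j] for p in patterns) if i != j else 0
             for j in range(n)] for i in range(n)]

def recall(W, probe, steps=10, beta=0.5):
    """Associative recall in a discrete-time recurrent net where the probe
    enters as a persistent external input (scaled by beta) rather than as
    the initial state; the state starts neutral and settles on a stored
    pattern."""
    n = len(probe)
    x = [0] * n
    sgn = lambda v: 1 if v >= 0 else -1
    for _ in range(steps):
        x = [sgn(sum(W[i][j] * x[j] for j in range(n)) + beta * probe[i])
             for i in range(n)]
    return x
```

A probe with one corrupted bit still settles on the stored pattern, because the recurrent weights outvote the corrupted external input.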

  4. Interval Type-2 Recurrent Fuzzy Neural System for Nonlinear Systems Control Using Stable Simultaneous Perturbation Stochastic Approximation Algorithm

    Directory of Open Access Journals (Sweden)

    Ching-Hung Lee

    2011-01-01

    Full Text Available This paper proposes a new type of fuzzy neural system, denoted IT2RFNS-A (interval type-2 recurrent fuzzy neural system with asymmetric membership functions), for nonlinear system identification and control. To enhance the performance and approximation ability, the triangular asymmetric fuzzy membership function (AFMF) and a TSK-type consequent part are adopted for the IT2RFNS-A. The gradient information of the IT2RFNS-A is not easy to obtain due to the asymmetric membership functions and interval-valued sets. The corresponding stable learning rule is derived by the simultaneous perturbation stochastic approximation (SPSA) algorithm, which guarantees the convergence and stability of the closed-loop system. Simulation and comparison results for chaotic system identification and the control of Chua's chaotic circuit are shown to illustrate the feasibility and effectiveness of the proposed method.
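    SPSA itself is compact enough to sketch: it perturbs all parameters simultaneously with random signs and needs only two loss evaluations per iteration, which is what makes it attractive when gradients (as for the IT2RFNS-A) are hard to obtain. The gains and decay exponents below are commonly cited defaults, not values from the paper:

```python
import random

def spsa_minimize(f, theta, iters=500, a=0.1, c=0.1, seed=0):
    """Simultaneous Perturbation Stochastic Approximation: estimates the
    gradient from two loss evaluations per step, independent of the
    number of parameters, so no analytic gradient of f is required."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602            # standard gain decay schedules
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = (f(plus) - f(minus)) / (2.0 * ck)
        theta = [t - ak * diff / d for t, d in zip(theta, delta)]
    return theta
```

Minimizing a two-dimensional quadratic shows the method homing in on the optimum without ever evaluating a derivative.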

  5. Determining the amount of anesthetic medicine to be applied by using Elman's recurrent neural networks via resilient back propagation.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with resilient back propagation have been used to determine the depth of anesthesia in the continuation stage of anesthesia and to estimate the amount of medicine to be applied at that moment. From 30 patients, 57 distinct EEG recordings were collected prior to and during anaesthesia at different levels. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. Prediction has been made by means of the ANN. The inputs used for training and testing the ANN were the previous anaesthesia amount, total power/normal power and total power/previous. The system has been able to produce correct responses with an average accuracy of 95%. This method is also computationally fast, and acceptable real-time clinical performance has been obtained.
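    Resilient back propagation, the training rule used above, adapts a separate step size per weight from the sign of the gradient alone; a minimal sketch of the variant without weight backtracking (the hyperparameters are the usual defaults, not the paper's):

```python
def rprop_minimize(grad, theta, steps=60, eta_plus=1.2, eta_minus=0.5,
                   step0=0.1, step_max=1.0, step_min=1e-6):
    """Resilient backpropagation: each parameter keeps its own step size,
    grown when the gradient sign is stable and shrunk when it flips;
    only the sign of the gradient is used, never its magnitude."""
    theta = list(theta)
    step = [step0] * len(theta)
    prev = [0.0] * len(theta)
    for _ in range(steps):
        g = grad(theta)
        for i in range(len(theta)):
            if prev[i] * g[i] > 0:           # same sign: accelerate
                step[i] = min(step[i] * eta_plus, step_max)
            elif prev[i] * g[i] < 0:         # sign flip: back off
                step[i] = max(step[i] * eta_minus, step_min)
            if g[i] > 0:
                theta[i] -= step[i]
            elif g[i] < 0:
                theta[i] += step[i]
            prev[i] = g[i]
    return theta
```

Driving a one-dimensional quadratic to its minimum shows the step size growing while the gradient sign is stable and collapsing once the minimum is overshot.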

  6. On the Nature of the Intrinsic Connectivity of the Cat Motor Cortex: Evidence for a Recurrent Neural Network Topology

    DEFF Research Database (Denmark)

    Capaday, Charles; Ethier, C; Brizzi, L

    2009-01-01

    Capaday C, Ethier C, Brizzi L, Sik A, van Vreeswijk C, Gingras D. On the nature of the intrinsic connectivity of the cat motor cortex: evidence for a recurrent neural network topology. J Neurophysiol 102: 2131-2141, 2009. First published July 22, 2009; doi: 10.1152/jn.91319.2008. The details and functional significance of the intrinsic horizontal connections between neurons in the motor cortex (MCx) remain to be clarified. To further elucidate the nature of this intracortical connectivity pattern, experiments were done on the MCx of three cats. The anterograde tracer biocytin was ejected…

  7. Reducing interferences in wireless communication systems by mobile agents with recurrent neural networks-based adaptive channel equalization

    Science.gov (United States)

    Beritelli, Francesco; Capizzi, Giacomo; Lo Sciuto, Grazia; Napoli, Christian; Tramontana, Emiliano; Woźniak, Marcin

    2015-09-01

    Solving the channel equalization problem in communication systems is based on adaptive filtering algorithms. Today, Mobile Agents (MAs) with Recurrent Neural Networks (RNNs) can also be adopted for effective interference reduction in modern wireless communication systems (WCSs). In this paper, MAs with RNNs are proposed as a novel computing algorithm, called MAs-RNNs, for reducing interferences in WCSs by performing adaptive channel equalization. We implement this new paradigm for interference reduction. Simulation results and evaluations demonstrate the effectiveness of this approach and show that better transmission performance in wireless communication networks can be achieved by using the MAs-RNNs based adaptive filtering algorithm.

  8. Robust stability analysis of Takagi-Sugeno uncertain stochastic fuzzy recurrent neural networks with mixed time-varying delays

    Institute of Scientific and Technical Information of China (English)

    M.Syed Ali

    2011-01-01

    In this paper, the global stability of Takagi-Sugeno (TS) uncertain stochastic fuzzy recurrent neural networks with discrete and distributed time-varying delays (TSUSFRNNs) is considered. A novel LMI-based stability criterion is obtained by using Lyapunov functional theory to guarantee the asymptotic stability of TSUSFRNNs. The proposed stability conditions are demonstrated through numerical examples. Furthermore, the restrictive requirement that the time derivative of the time-varying delays must be smaller than one is removed. Comparison results show that the proposed method guarantees a wider stability region than other methods available in the existing literature.

  9. Dynamic recurrent Elman neural network based on immune clonal selection algorithm

    Science.gov (United States)

    Wang, Limin; Han, Xuming; Li, Ming; Sun, Haibo; Li, Qingzhao

    2012-04-01

    Because the immune clonal selection algorithm combined with a dynamic threshold strategy has an advantage in optimizing multiple parameters, a novel approach that uses it to optimize the dynamic recurrent Elman neural network is proposed in this paper. The concrete structure of the recurrent neural network, the connection weights, the initial values of the context units, etc. are obtained automatically by evolutionary training and learning. The approach thus realizes the automatic construction and design of dynamic recurrent Elman neural networks, and provides a new and effective way of optimizing dynamic recurrent neural networks with the immune clonal selection algorithm.

  10. Modeling of PEM Fuel Cell Stack System using Feed-forward and Recurrent Neural Networks for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Mr. M. Karthik

    2014-05-01

    Full Text Available Artificial Neural Networks (ANNs) have become a significant modeling tool for predicting the performance of complex systems, providing an appropriate mapping between input and output variables without requiring any empirical relationship, owing to their intrinsic properties. This paper is focused on the modeling of a Proton Exchange Membrane (PEM) Fuel Cell system using Artificial Neural Networks, especially for automotive applications. Three different neural networks, the Static Feed Forward Network (SFFN), Cascaded Feed Forward Network (CFFN) and Fully Connected Dynamic Recurrent Network (FCRN), are discussed in this paper for modeling the PEM Fuel Cell system. A numerical analysis is carried out between the three neural network architectures for predicting the output performance of the PEM Fuel Cell. The performance of the proposed networks is evaluated using various error criteria such as Mean Square Error, Mean Absolute Percentage Error, Mean Absolute Error, coefficient of correlation and iteration values. The optimum network with high performance indices (low prediction error and iteration values) can be used as an ancillary model in developing the PEM Fuel Cell powered vehicle system. The development of the fuel cell driven vehicle model also incorporates the modeling of the DC-DC power converter and vehicle dynamics. Finally, the performance of the electric vehicle model is analyzed for two different drive cycles, M-NEDC and M-UDDS.

  11. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A;

    1999-01-01

    part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...

  12. An adaptive PID like controller using mix locally recurrent neural network for robotic manipulator with variable payload.

    Science.gov (United States)

    Sharma, Richa; Kumar, Vikas; Gaur, Prerna; Mittal, A P

    2016-05-01

    Being a complex, non-linear and coupled system, a robotic manipulator cannot be effectively controlled using a classical proportional-integral-derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, the gains of the PID controller should be conservatively tuned and should adapt to process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes which act as proportional, integral and derivative nodes. The gains of the mix locally recurrent neural network based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than being assigned randomly. A sequential learning based least square algorithm is then investigated for the on-line adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against plant parameter uncertainties and external disturbances for both links of the two link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using the Lyapunov stability criteria. A performance comparison is carried out among the MLRNNPID controller, the CSA optimized NNPID (OPTNNPID) controller and the CSA optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller.

  13. Nonlinear control for systems containing input uncertainty via a Lyapunov-based approach

    Science.gov (United States)

    Mackunis, William

    Controllers are often designed based on the assumption that a control actuation can be directly applied to the system. This assumption may not be valid, however, for systems containing parametric input uncertainty or unmodeled actuator dynamics. In this dissertation, a tracking control methodology is proposed for aircraft and aerospace systems for which the corresponding dynamic models contain uncertainty in the control actuation. The dissertation will focus on five problems of interest: (1) adaptive CMG-actuated satellite attitude control in the presence of inertia uncertainty and uncertain CMG gimbal friction; (2) adaptive neural network (NN)-based satellite attitude control for CMG-actuated small-sats in the presence of uncertain satellite inertia, nonlinear disturbance torques, uncertain CMG gimbal friction, and nonlinear electromechanical CMG actuator disturbances; (3) dynamic inversion (DI) control for aircraft systems containing parametric input uncertainty and additive, nonlinearly parameterizable (non-LP) disturbances; (4) adaptive dynamic inversion (ADI) control for aircraft systems as described in (3); and (5) adaptive output feedback control for aircraft systems as described in (3) and (4).

  14. Wind Turbine Driving a PM Synchronous Generator Using Novel Recurrent Chebyshev Neural Network Control with the Ideal Learning Rate

    Directory of Open Access Journals (Sweden)

    Chih-Hong Lin

    2016-06-01

    Full Text Available A permanent magnet (PM) synchronous generator system driven by a wind turbine (WT) and connected to the smart grid via an AC-DC converter and a DC-AC converter is controlled by the novel recurrent Chebyshev neural network (NN) and amended particle swarm optimization (PSO) to regulate the output power and output voltage of the two power converters in this study. Because a PM synchronous generator system driven by a WT is an unknown, non-linear and time-varying dynamic system, an online-trained novel recurrent Chebyshev NN control system is developed to regulate the DC voltage of the AC-DC converter and the AC voltage of the DC-AC converter connected with the smart grid. Furthermore, the variable learning rate of the novel recurrent Chebyshev NN is regulated according to a discrete-type Lyapunov function to improve the control performance and enhance the convergence speed. Finally, some experimental results are shown to verify the effectiveness of the proposed control method for a WT driving a PM synchronous generator system in a smart grid.
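    The functional expansion at the heart of a Chebyshev NN is the three-term recurrence for the Chebyshev polynomials, which the network uses as a nonlinear input basis in place of extra hidden layers; a minimal sketch (the basis size is illustrative):

```python
def chebyshev_basis(x, n):
    """First n Chebyshev polynomials T_0..T_{n-1} evaluated at x via the
    recurrence T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x). A Chebyshev NN expands
    each input in this basis before applying its weighted sum."""
    t = [1.0, x]                     # T_0 = 1, T_1 = x
    for _ in range(2, n):
        t.append(2.0 * x * t[-1] - t[-2])
    return t[:n]
```

For x in [-1, 1] the identity T_n(cos t) = cos(n t) makes the basis cheap to compute yet strongly nonlinear.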

  15. Weak signal detection and propagation in diluted feed-forward neural network with recurrent excitation and inhibition

    Science.gov (United States)

    Wang, Jiang; Han, Ruixue; Wei, Xilei; Qin, Yingmei; Yu, Haitao; Deng, Bin

    2016-12-01

    Reliable signal propagation across distributed brain areas provides the basis for neural circuit function. Modeling studies on cortical circuits have shown that multilayered feed-forward networks (FFNs), if strongly and/or densely connected, can enable robust signal propagation. However, cortical networks are typically neither densely connected nor have strong synapses. This paper investigates under which conditions spiking activity can be propagated reliably across diluted FFNs. Extending previous works, we model each layer as a recurrent sub-network constituting both excitatory (E) and inhibitory (I) neurons and consider the effect of interactions between local excitation and inhibition on signal propagation. It is shown that elevation of cellular excitation-inhibition (EI) balance in the local sub-networks (layers) softens the requirement for dense/strong anatomical connections and thereby promotes weak signal propagation in weakly connected networks. By means of iterated maps, we show how elevated local excitability state compensates for the decreased gain of synchrony transfer function that is due to sparse long-range connectivity. Finally, we report that modulations of EI balance and background activity provide a mechanism for selectively gating and routing neural signal. Our results highlight the essential role of intrinsic network states in neural computation.
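    The iterated-map argument used above can be sketched by treating each layer's activity as a single number passed through a sigmoidal synchrony transfer function; the gain parameter lumps together connection density, synaptic strength and EI balance (all constants here are illustrative):

```python
import math

def propagate(a0, gain, layers=20, theta=0.5, beta=10.0):
    """Iterated-map view of feed-forward propagation: the activity a_k of
    layer k drives layer k+1 through a sigmoidal transfer function. The
    gain stands in for connection density, synaptic strength and the
    local excitation-inhibition balance."""
    f = lambda a: 1.0 / (1.0 + math.exp(-beta * (gain * a - theta)))
    a = a0
    trace = [a]
    for _ in range(layers):
        a = f(a)
        trace.append(a)
    return trace
```

With elevated gain a weak input grows toward the high fixed point across layers, while with a dilute-network gain the same input dies out.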

  16. Modeling the dynamics of the lead bismuth eutectic experimental accelerator driven system by an infinite impulse response locally recurrent neural network

    Energy Technology Data Exchange (ETDEWEB)

    Zio, Enrico; Pedroni, Nicola; Broggi, Matteo; Golea, Lucia Roxana [Polytechnic of Milan, Milan (Italy)

    2009-12-15

    In this paper, an infinite impulse response locally recurrent neural network (IIR-LRNN) is employed for modelling the dynamics of the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS). The network is trained by recursive back-propagation (RBP) and its ability to estimate transients is tested under various conditions. The results demonstrate the robustness of the locally recurrent scheme in the reconstruction of complex nonlinear dynamic relationships.

  17. Analysis and design of associative memories based on recurrent neural networks with linear saturation activation functions and time-varying delays.

    Science.gov (United States)

    Zeng, Zhigang; Wang, Jun

    2007-08-01

    In this letter, some sufficient conditions are obtained to guarantee that recurrent neural networks with linear saturation activation functions and time-varying delays have multiple equilibria located in the saturation region and on the boundaries of the saturation region. These results on pattern characterization are used to analyze and design autoassociative memories that are directly based on the parameters of the neural networks. Moreover, a formula for the number of spurious equilibria is also derived. Four design procedures for recurrent neural networks with linear saturation activation functions and time-varying delays are developed based on the stability results. Two of these procedures allow the neural network to be capable of learning and forgetting. Finally, simulation results demonstrate the validity and characteristics of the proposed approach.

  18. Large-Signal Lyapunov-Based Stability Analysis of DC/AC Inverters and Inverter-Based Microgrids

    Science.gov (United States)

    Kabalan, Mahmoud

    Microgrid stability studies have been largely based on small-signal linearization techniques. However, the validity and magnitude of the linearization domain is limited to small perturbations. Thus, there is a need to examine microgrids with large-signal nonlinear techniques to fully understand their stability. Large-signal stability analysis can be accomplished by Lyapunov-based mathematical methods. These Lyapunov methods estimate the domain of asymptotic stability of the studied system. A survey of Lyapunov-based large-signal stability studies showed that few large-signal studies have been completed on either individual systems (dc/ac inverters, dc/dc rectifiers, etc.) or microgrids. The research presented in this thesis addresses the large-signal stability of droop-controlled dc/ac inverters and inverter-based microgrids. Dc/ac power electronic inverters are what make microgrids technically feasible. Thus, as a prelude to examining the stability of microgrids, the research presented in Chapter 3 analyzes the stability of inverters. First, the 13th-order large-signal nonlinear model of a droop-controlled dc/ac inverter connected to an infinite bus is presented. The singular perturbation method is used to decompose the nonlinear model into 11th-, 9th-, 7th-, 5th-, 3rd- and 1st-order models. Each model ignores certain control or structural components of the full order model. The aim of the study is to understand the accuracy and validity of the reduced order models in replicating the performance of the full order nonlinear model. The performance of each model is studied in three different areas: time domain simulations, Lyapunov's indirect method and domain of attraction estimation. The work aims to present the best model to use in each of the three domains of study.
Results show that certain reduced order models are capable of accurately reproducing the performance of the full order model while others can be used to gain insights into those three areas of

  19. Neural Network Based on Recurrent T-S Fuzzy Model

    Institute of Scientific and Technical Information of China (English)

    宋春宁; 刘少东

    2013-01-01

    Dynamic recursive elements are added to a conventional T-S fuzzy neural network to propose a neural network based on a recurrent T-S fuzzy model. For system identification, an unsupervised clustering algorithm and a dynamic back-propagation algorithm are applied to train the parameters of this recurrent neural network, and the approximation capability of the fuzzy neural network is proved. Comparing the identification results with those of the conventional T-S fuzzy model shows that the recurrent T-S fuzzy neural network performs better in nonlinear system identification.

  20. Mining e-cigarette adverse events in social media using Bi-LSTM recurrent neural network with word embedding representation.

    Science.gov (United States)

    Xie, Jiaheng; Liu, Xiao; Dajun Zeng, Daniel

    2017-05-13

    Recent years have seen increased worldwide popularity of e-cigarette use. However, the risks of e-cigarettes are underexamined. Most e-cigarette adverse event studies have achieved low detection rates due to limited subject sample sizes in the experiments and surveys. Social media provides a large data repository of consumers' e-cigarette feedback and experiences, which are useful for e-cigarette safety surveillance. However, it is difficult to automatically interpret the informal and nontechnical consumer vocabulary about e-cigarettes in social media. This issue hinders the use of social media content for e-cigarette safety surveillance. Recent developments in deep neural network methods have shown promise for named entity extraction from noisy text. Motivated by these observations, we aimed to design a deep neural network approach to extract e-cigarette safety information in social media. Our deep neural language model utilizes word embedding as the representation of text input and recognizes named entity types with the state-of-the-art Bidirectional Long Short-Term Memory (Bi-LSTM) Recurrent Neural Network. Our Bi-LSTM model achieved the best performance compared to 3 baseline models, with a precision of 94.10%, a recall of 91.80%, and an F-measure of 92.94%. We identified 1591 unique adverse events and 9930 unique e-cigarette components (i.e., chemicals, flavors, and devices) from our research testbed. Although the conditional random field baseline model had slightly better precision than our approach, our Bi-LSTM model achieved much higher recall, resulting in the best F-measure. Our method can be generalized to extract medical concepts from social media for other medical applications.
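    The Bi-LSTM tagger above is built from LSTM cells; one gated update, reduced to scalars for readability, can be sketched as follows (the weight names are illustrative — a real tagger uses vector states, word-embedding inputs and a softmax tag layer on the concatenated forward and backward hidden states):

```python
import math

def lstm_step(x, h, c, W):
    """One step of an LSTM cell with scalar input and state: the input,
    forget and output gates decide what enters, stays in and leaves the
    cell memory c. A Bi-LSTM runs one such cell left-to-right and a
    second right-to-left, then concatenates their hidden states."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = sig(W['wi'] * x + W['ui'] * h + W['bi'])        # input gate
    f = sig(W['wf'] * x + W['uf'] * h + W['bf'])        # forget gate
    o = sig(W['wo'] * x + W['uo'] * h + W['bo'])        # output gate
    g = math.tanh(W['wg'] * x + W['ug'] * h + W['bg'])  # candidate value
    c = f * c + i * g                                   # memory update
    h = o * math.tanh(c)                                # exposed state
    return h, c
```

With all weights at zero every gate sits at 0.5, so the memory is simply halved each step — a quick sanity check on the gating arithmetic.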

  1. RM-SORN: a reward-modulated self-organizing recurrent neural network.

    Science.gov (United States)

    Aswolinskiy, Witali; Pipa, Gordon

    2015-01-01

    Neural plasticity plays an important role in learning and memory. Reward-modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of Intrinsic Plasticity (IP) and reward-modulated Spike-Timing-Dependent Plasticity (STDP). IP enables the network to explore possible output sequences and STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules, and rewarding strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning and whether reward-modulated self-organization can explain the amazing capabilities of the brain.
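    The interaction of STDP with a reward signal can be sketched for a single synapse with an eligibility trace; the trace decay and learning rate below are illustrative, and the sketch omits IP and the full spiking network of RM-SORN:

```python
def rm_stdp(pre, post, rewards, lr=0.1, a_plus=1.0, a_minus=1.0,
            tau=0.9, w0=0.5):
    """Reward-modulated STDP sketch for one synapse: pairwise STDP updates
    accumulate in a decaying eligibility trace and are committed to the
    weight only when scaled by the (possibly delayed) reward signal."""
    w, elig = w0, 0.0
    x_pre, x_post = 0.0, 0.0           # exponential spike traces
    for s_pre, s_post, r in zip(pre, post, rewards):
        x_pre = tau * x_pre + s_pre
        x_post = tau * x_post + s_post
        # pre-before-post potentiates, post-before-pre depresses
        elig = tau * elig + a_plus * x_pre * s_post - a_minus * x_post * s_pre
        w += lr * r * elig
    return w
```

A pre-before-post pairing followed by reward potentiates the synapse; the reverse spike ordering under the same reward depresses it.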

  2. Meckel Gruber syndrome--a single gene cause of recurrent neural tube defects.

    Science.gov (United States)

    de Silva, D; Suriyawansa, D; Mangalika, M; Samarasinghe, D

    2001-03-01

    Meckel Gruber syndrome (MGS), an autosomal recessive disorder characterised by posterior encephalocoele, multicystic kidneys and post-axial polydactyly, should be recognised by obstetricians and paediatricians so that parents can be counselled regarding the 25% recurrence risk. We report a consanguineous family with MGS affecting three infants.

  3. Comparative analysis of Recurrent and Finite Impulse Response Neural Networks in Time Series Prediction

    Directory of Open Access Journals (Sweden)

    Milos Miljanovic

    2012-02-01

    Full Text Available The purpose of this paper is to evaluate two different neural network architectures used for solving temporal problems, i.e. time series prediction. The data sets in this project include Mackey-Glass, Sunspots, and Standard & Poor's 500, the stock market index. The paper also presents a comparison of the two networks and their performance.

  4. Finite time-Lyapunov based approach for robust adaptive control of wind-induced oscillations in power transmission lines

    Science.gov (United States)

    Ghabraei, Soheil; Moradi, Hamed; Vossoughi, Gholamreza

    2016-06-01

    Large-amplitude oscillation of power transmission lines, also known as the galloping phenomenon, has hazardous consequences such as short circuits and failure of the transmission line. In this article, to suppress these undesirable vibrations, the governing equations of the transmission line are first derived via the mode summation technique. Then, because large-amplitude vibrations occur, nonlinear quadratic and cubic terms are included in the derived linear equations. To suppress the vibrations, an arbitrary number of piezoelectric actuators is assumed to exert the actuation forces. Afterwards, a Lyapunov-based approach is proposed for robust adaptive suppression of the undesirable vibrations in finite time. To compensate for the supposed parametric uncertainties with unknown bounds, proper adaptation laws are introduced. To avoid the devastating consequences of the vibrations as quickly as possible, appropriate control laws are designed. Vibration suppression in finite time under the proposed adaptation and control laws is mathematically proved via Lyapunov finite-time stability theory. Finally, to illustrate and validate the efficiency and robustness of the proposed finite-time control scheme, a parametric case study with three piezoelectric actuators is performed. The proposed active control strategy is observed to be more efficient and robust than passive control methods.

  5. Stability analysis of autonomous space systems in the presence of large disturbances: A Lyapunov-based constrained control strategy.

    Science.gov (United States)

    Mazinan, A H

    2016-03-01

    The research addresses a Lyapunov-based constrained control strategy for an autonomous space system in the presence of large disturbances. The autonomous space system under control is first represented through a dynamics model, and the proposed control strategy is then fully investigated with a focus on the three-axis detumbling and the corresponding pointing mode control approaches. The three-axis detumbling mode control approach is designed to drive the unwanted angular rates of the system to zero while the saturation of the actuators is taken into consideration. Analogously, the three-axis pointing mode control approach is designed to drive the rotational angles of the system to their desired values. The contribution of the research is a control law built on a new candidate Lyapunov function that handles the rotational angles and the related angular rates of the autonomous space system, improving on the state of the art. A series of experiments is carried out to assess the efficiency of the proposed control strategy, and a number of benchmarks are realized under the same conditions to verify and guarantee the strategy's performance in both modes of control.

  6. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    Science.gov (United States)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition and handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. We explore not only low-level combination (feature space combination) but also high-level combination (decoding combination) and mid-level combination (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.

  7. Novel delay-distribution-dependent stability analysis for continuous-time recurrent neural networks with stochastic delay

    Institute of Scientific and Technical Information of China (English)

    Wang Shen-Quan; Feng Jian; Zhao Qing

    2012-01-01

    In this paper, the problem of delay-distribution-dependent stability is investigated for continuous-time recurrent neural networks (CRNNs) with stochastic delay. Different from the common assumptions on time delays, it is assumed that the probability distribution of the delay taking values in some intervals is known a priori. By making full use of the information concerning the probability distribution of the delay and by using a tighter bounding technique (the reciprocally convex combination method), less conservative sufficient conditions for asymptotic mean-square stability are derived in terms of linear matrix inequalities (LMIs). Two numerical examples show that our results are better than the existing ones.

  8. A Novel Method in Two-Step-Ahead Weight Adjustment of Recurrent Neural Networks: Application in Market Forecasting

    Directory of Open Access Journals (Sweden)

    Narges Talebi Motlagh

    2016-07-01

    Full Text Available Gold price prediction is a complex and severely difficult nonlinear problem. Real-time price prediction, a principle of many economic models, is one of the most challenging tasks for economists since the context of the financial agents is often dynamic. Since direction prediction is important in financial time series, in this work an innovative Recurrent Neural Network (RNN) is utilized to obtain accurate Two-Step-Ahead (2SA) prediction results and improve the forecasting performance for the gold market. The training method of the proposed network combines an adaptive learning rate algorithm with a linear combination of Directional Symmetry (DS) in the training phase. The proposed method has been developed for online and offline applications. Simulations and experiments on daily gold market data and the benchmark time series of Lorenz and Rossler show the high efficiency of the proposed method, which can forecast future gold prices precisely.

  9. Dissipativity analysis of stochastic memristor-based recurrent neural networks with discrete and distributed time-varying delays.

    Science.gov (United States)

    Radhika, Thirunavukkarasu; Nagamani, Gnaneswaran

    2016-01-01

    In this paper, based on the theory of memristor-based recurrent neural networks (MRNNs), a model of stochastic MRNNs with discrete and distributed delays is established. In real nervous systems and in the implementation of very large-scale integration (VLSI) circuits, noise is unavoidable, which leads to the stochastic model of the MRNNs. In this model, the delay interval is decomposed into two subintervals by using a tuning parameter α with 0 < α < 1, and the dissipativity of the stochastic MRNNs with discrete and distributed delays is analyzed in the sense of Filippov solutions. Using stochastic analysis theory and Itô's formula for stochastic differential equations, we establish sufficient conditions for the dissipativity criterion. The dissipativity and passivity conditions are presented in terms of linear matrix inequalities, which can be easily solved using Matlab tools. Finally, three numerical examples with simulations are presented to demonstrate the effectiveness of the theoretical results.

  10. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences.

    Science.gov (United States)

    Quang, Daniel; Xie, Xiaohui

    2016-06-20

    Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ.
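    The conv-then-recurrent pipeline that DanQ describes can be sketched in miniature. This is an illustrative numpy-only toy, not the DanQ implementation: random filter weights stand in for learned motif detectors, a plain tanh bidirectional RNN stands in for the BLSTM, and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(seq):
    # Map A/C/G/T to one-hot rows of shape (len(seq), 4).
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((len(seq), 4))
    for i, ch in enumerate(seq):
        x[i, idx[ch]] = 1.0
    return x

def conv_motif_scan(x, filters):
    # Cross-correlate each (k, 4) filter along the sequence (valid mode), then ReLU.
    n_f, k, _ = filters.shape
    L = x.shape[0] - k + 1
    out = np.zeros((L, n_f))
    for f in range(n_f):
        for t in range(L):
            out[t, f] = np.sum(x[t:t + k] * filters[f])
    return np.maximum(out, 0.0)

def max_pool(x, width):
    # Non-overlapping temporal max pooling.
    L = x.shape[0] // width
    return x[:L * width].reshape(L, width, -1).max(axis=1)

def simple_birnn(x, Wf, Wb, U):
    # A plain tanh RNN run forward and backward (stand-in for the BLSTM layer).
    h_f = np.zeros(U.shape[0]); h_b = np.zeros(U.shape[0])
    for t in range(x.shape[0]):
        h_f = np.tanh(Wf @ x[t] + U @ h_f)
    for t in reversed(range(x.shape[0])):
        h_b = np.tanh(Wb @ x[t] + U @ h_b)
    return np.concatenate([h_f, h_b])

seq = "ACGTGACGTAGCTAGCTAACGT"
x = one_hot(seq)
filters = rng.normal(size=(8, 5, 4))        # 8 "motif detectors" of width 5 (random, untrained)
fm = max_pool(conv_motif_scan(x, filters), 2)
Wf = rng.normal(size=(6, 8)); Wb = rng.normal(size=(6, 8))
U = rng.normal(size=(6, 6)) * 0.1
h = simple_birnn(fm, Wf, Wb, U)
p = 1.0 / (1.0 + np.exp(-(rng.normal(size=h.size) @ h)))   # sigmoid output head
```

The structure mirrors the abstract's division of labor: the convolution stage responds to local subsequences (motifs), while the recurrent stage summarizes their order along the sequence before the sigmoid prediction head.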

  11. Bioelectric signal classification using a recurrent probabilistic neural network with time-series discriminant component analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shima, Keisuke; Shibanoki, Taro; Kurita, Yuichi; Tsuji, Toshio

    2013-01-01

    This paper outlines a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower-dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model that incorporates a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into a neural network so that parameters can be obtained appropriately as network coefficients according to a backpropagation-through-time-based training algorithm. The network is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. In the experiments conducted during the study, the validity of the proposed network was demonstrated for EEG signals.

  12. Event-Triggered State Estimation for a Class of Delayed Recurrent Neural Networks with Sampled-Data Information

    Directory of Open Access Journals (Sweden)

    Hongjie Li

    2012-01-01

    Full Text Available The paper investigates the state estimation problem for a class of recurrent neural networks with sampled-data information and time-varying delays. The main purpose is to estimate the neuron states through output sampled measurements. A novel event-triggered scheme is proposed, which can lead to a significant reduction of the information communication burden in the network; the feature of this scheme is that whether or not the sampled data should be transmitted is determined by the current sampled data and the error between the current sampled data and the latest transmitted data. By using a delayed-input approach, the error dynamics are rendered equivalent to a dynamic system with two different time-varying delays. Based on the Lyapunov-Krasovskii functional approach, a state estimator for the considered neural networks can be obtained by solving some linear matrix inequalities, which is easily facilitated by standard numerical software. Finally, a numerical example is provided to show the effectiveness of the proposed event-triggered scheme.

  13. Real-time multi-step-ahead water level forecasting by recurrent neural networks for urban flood control

    Science.gov (United States)

    Chang, Fi-John; Chen, Pin-An; Lu, Ying-Ray; Huang, Eric; Chang, Kai-Yao

    2014-09-01

    Urban flood control is a crucial task, which commonly faces fast rising peak flows resulting from urbanization. To mitigate future flood damages, it is imperative to construct an on-line accurate model to forecast inundation levels during flood periods. The Yu-Cheng Pumping Station located in Taipei City of Taiwan is selected as the study area. Firstly, historical hydrologic data are fully explored by statistical techniques to identify the time span of rainfall affecting the rise of the water level in the floodwater storage pond (FSP) at the pumping station. Secondly, effective factors (rainfall stations) that significantly affect the FSP water level are extracted by the Gamma test (GT). Thirdly, one static artificial neural network (ANN) (backpropagation neural network-BPNN) and two dynamic ANNs (Elman neural network-Elman NN; nonlinear autoregressive network with exogenous inputs-NARX network) are used to construct multi-step-ahead FSP water level forecast models through two scenarios, in which scenario I adopts rainfall and FSP water level data as model inputs while scenario II adopts only rainfall data as model inputs. The results demonstrate that the GT can efficiently identify the effective rainfall stations as important inputs to the three ANNs; the recurrent connections from the output layer (NARX network) have a stronger effect on the output than those from the hidden layer (Elman NN); and the NARX network performs the best in real-time forecasting. The NARX network produces coefficients of efficiency within 0.9-0.7 (scenario I) and 0.7-0.5 (scenario II) in the testing stages for 10-60-min-ahead forecasts, respectively. This study suggests that the proposed NARX models can be valuable and beneficial to the government authority for urban flood control.
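    The defining structure of a NARX model — regressing the current output on lagged outputs and lagged exogenous inputs — can be illustrated with a linear least-squares toy. This is a hypothetical sketch on synthetic data, not the authors' trained network: the "rainfall" and "water level" series are made up, and a linear regression stands in for the neural network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data standing in for rainfall (exogenous input u) and pond level y.
T = 400
u = np.abs(rng.normal(size=T))
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.8 * u[t-1] + 0.3 * u[t-2]
y += 0.01 * rng.normal(size=T)            # small measurement noise

def narx_design(u, y, lags):
    # Each row: [y(t-1)..y(t-lags), u(t-1)..u(t-lags)]; target: y(t).
    rows, target = [], []
    for t in range(lags, len(y)):
        rows.append(np.r_[y[t-lags:t][::-1], u[t-lags:t][::-1]])
        target.append(y[t])
    return np.array(rows), np.array(target)

X, z = narx_design(u, y, lags=2)
w, *_ = np.linalg.lstsq(X, z, rcond=None)  # linear NARX weights (NN stand-in)
pred = X @ w
rmse = np.sqrt(np.mean((pred - z) ** 2))
```

A trained NARX network replaces the linear map `X @ w` with a nonlinear one, and in closed-loop forecasting the predicted `y` values are fed back in place of measurements for the multi-step-ahead horizon.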

  14. Building energy use prediction and system identification using recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Kreider, J.F.; Curtiss, P.; Dodier, R.; Krarti, M. [Univ. of Colorado, Boulder, CO (United States); Claridge, D.E.; Haberl, J.S. [Texas A and M Univ., College Station, TX (United States). Dept. of Mechanical Engineering

    1995-08-01

    Following several successful applications of feedforward neural networks (NNs) to the building energy prediction problem a more difficult problem has been addressed recently: namely, the prediction of building energy consumption well into the future without knowledge of immediately past energy consumption. This paper will report results on a recent study of six months of hourly data recorded at the Zachry Engineering Center (ZEC) in College Station, TX. Also reported are results on finding the R and C values for buildings from networks trained on building data.

  15. Schema generation in recurrent neural nets for intercepting a moving target.

    Science.gov (United States)

    Fleischer, Andreas G

    2010-06-01

    The grasping of a moving object requires the development of a motor strategy to anticipate the trajectory of the target and to compute an optimal course of interception. During the performance of perception-action cycles, a preprogrammed prototypical movement trajectory, a motor schema, may highly reduce the control load. Subjects were asked to hit a target that was moving along a circular path by means of a cursor. Randomized initial target positions and velocities were detected in the periphery of the eyes, resulting in a saccade toward the target. Even when the target disappeared, the eyes followed the target's anticipated course. The Gestalt of the trajectories was dependent on target velocity. The prediction capability of the motor schema was investigated by varying the visibility range of cursor and target. Motor schemata were determined to be of limited precision, and therefore visual feedback was continuously required to intercept the moving target. To intercept a target, the motor schema caused the hand to aim ahead and to adapt to the target trajectory. The control of cursor velocity determined the point of interception. From a modeling point of view, a neural network was developed that allowed the implementation of a motor schema interacting with feedback control in an iterative manner. The neural net of the Wilson type consists of an excitation-diffusion layer allowing the generation of a moving bubble. This activation bubble runs down an eye-centered motor schema and causes a planar arm model to move toward the target. A bubble provides local integration and straightening of the trajectory during repetitive moves. The schema adapts to task demands by learning and serves as forward controller. On the basis of these model considerations the principal problem of embedding motor schemata in generalized control strategies is discussed.

  16. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  17. Training Recurrent Neural Networks With the Levenberg-Marquardt Algorithm for Optimal Control of a Grid-Connected Converter.

    Science.gov (United States)

    Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo

    2015-09-01

    This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
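    The LM update the paper builds on blends Gauss-Newton and gradient-descent behavior through a damping factor. A minimal sketch on a toy two-parameter curve fit (analytic Jacobian, synthetic noiseless data; nothing here is the paper's FATT machinery for RNN Jacobians):

```python
import numpy as np

x = np.linspace(-2, 2, 50)
a_true, b_true = 1.5, 0.8
y = a_true * np.tanh(b_true * x)          # synthetic target curve

def residual(p):
    a, b = p
    return a * np.tanh(b * x) - y

def jacobian(p):
    a, b = p
    th = np.tanh(b * x)
    J = np.empty((x.size, 2))
    J[:, 0] = th                          # d r / d a
    J[:, 1] = a * (1 - th**2) * x         # d r / d b
    return J

p = np.array([0.5, 0.1])                  # deliberately poor initial guess
mu = 1e-2                                 # LM damping factor
for _ in range(100):
    r = residual(p); J = jacobian(p)
    # Damped normal equations: (J'J + mu*I) step = J' r
    step = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ r)
    p_new = p - step
    if np.sum(residual(p_new)**2) < np.sum(r**2):
        p, mu = p_new, mu * 0.7           # accept: trust Gauss-Newton direction more
    else:
        mu *= 2.0                         # reject: move toward gradient descent
```

For an RNN, the expensive part is forming the Jacobian of all time-step errors with respect to the weights; that is exactly what the paper's FATT algorithm computes by accumulating derivatives forward through time.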

  18. Long Short-Term Memory Projection Recurrent Neural Network Architectures for Piano’s Continuous Note Recognition

    Directory of Open Access Journals (Sweden)

    YuKang Jia

    2017-01-01

    Full Text Available Long Short-Term Memory (LSTM) is a kind of Recurrent Neural Network (RNN) suited to time series, which has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM that further optimizes the speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper we combine them with Connectionist Temporal Classification (CTC) to study continuous piano note recognition for robotics. Based on the Beijing Forestry University music library, we conduct experiments to show the recognition rates and numbers of iterations of single-layer LSTM, single-layer LSTMP, and Deep LSTM (DLSTM, LSTM with multiple layers). The single-layer LSTMP performs much better than the single-layer LSTM in both training time and recognition rate: LSTMP has fewer parameters and therefore reduces the training time, and, benefiting from the projection layer, it also achieves better performance. The best recognition rate of LSTMP is 99.8%. As for DLSTM, the recognition rate can reach 100% because of the effectiveness of the deep structure, but compared with the single-layer LSTMP, DLSTM needs more training time.
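    The projection layer that distinguishes LSTMP from plain LSTM shows up in a single forward step: the vector fed back into the recurrence is the projected state r, which is smaller than the full hidden state h. A hypothetical numpy sketch with made-up sizes and random, untrained weights:

```python
import numpy as np

rng = np.random.default_rng(3)

def lstmp_step(x, r_prev, c_prev, W, Wp):
    # W packs input/forget/output/candidate weights: shape (4H, X + R).
    H = c_prev.size
    z = W @ np.concatenate([x, r_prev])
    i = 1 / (1 + np.exp(-z[0:H]))         # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))     # output gate
    g = np.tanh(z[3*H:4*H])               # cell candidate
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    r = Wp @ h                            # projection: R < H shrinks the recurrent state
    return r, c

X, H, R = 5, 16, 4                        # input, cell, and projection sizes (made up)
W = rng.normal(size=(4*H, X + R)) * 0.1
Wp = rng.normal(size=(R, H)) * 0.1
r = np.zeros(R); c = np.zeros(H)
for t in range(10):                       # run a short random input sequence
    r, c = lstmp_step(rng.normal(size=X), r, c, W, Wp)
```

Because the recurrent weight block multiplies r (size R) rather than h (size H), the parameter count of the recurrence drops roughly by the factor R/H, which is where LSTMP's training-time advantage comes from.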

  19. Brain Machine Interface: Analysis of segmented EEG Signal Classification Using Short-Time PCA and Recurrent Neural Networks

    Directory of Open Access Journals (Sweden)

    C. R. Hema

    2008-01-01

    Full Text Available A brain machine interface provides a communication channel between the human brain and an external device. Brain interfaces are studied to provide rehabilitation to patients with neurodegenerative diseases; such patients lose all communication pathways except their sensory and cognitive functions. One possible rehabilitation method for these patients is a brain machine interface (BMI) for communication; the BMI uses the electrical activity of the brain detected by scalp EEG electrodes. Classification of EEG signals extracted during mental tasks is one technique for designing a BMI. In this paper a BMI design using five mental tasks from two subjects is studied, with a combination of two tasks studied per subject. An Elman recurrent neural network is proposed for classification of the EEG signals. Two feature extraction algorithms, using overlapped and non-overlapped signal segments, are analyzed. Principal component analysis is used to extract features from the EEG signal segments. The classification performance of overlapping EEG signal segments is observed to be better in terms of average classification, with a range of 78.5% to 100%, while the non-overlapping EEG signal segments show better classification in terms of maximum classification.
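    The overlapped versus non-overlapped segmentation plus PCA feature extraction described above can be sketched as follows. This is an illustrative toy: the signal is random noise standing in for one EEG channel, and the segment width, step, and component count are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

def segment(signal, width, step):
    # step < width gives overlapping windows; step == width gives disjoint ones.
    starts = range(0, len(signal) - width + 1, step)
    return np.array([signal[s:s + width] for s in starts])

def pca_features(segments, n_comp):
    # Project mean-centred segments onto the top principal directions (via SVD).
    Xc = segments - segments.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T

eeg = rng.normal(size=1000)               # stand-in for one EEG channel
over = segment(eeg, width=100, step=50)   # 50% overlap
non = segment(eeg, width=100, step=100)   # no overlap
f_over = pca_features(over, n_comp=5)     # feature vectors fed to the classifier
f_non = pca_features(non, n_comp=5)
```

Overlapping yields nearly twice as many training segments from the same recording, which is one plausible reason the paper observes better average classification for the overlapped scheme.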

  20. Word embeddings and recurrent neural networks based on Long-Short Term Memory nodes in supervised biomedical word sense disambiguation.

    Science.gov (United States)

    Jimeno Yepes, Antonio

    2017-09-01

    Word sense disambiguation helps identify the proper sense of ambiguous words in text. With large terminologies such as the UMLS Metathesaurus, ambiguities appear and highly effective disambiguation methods are required. Supervised learning methods are one approach to performing disambiguation: features extracted from the context of an ambiguous word are used to identify its proper sense. The type of features has an impact on machine learning methods and thus affects disambiguation performance. In this work, we have evaluated several types of features derived from the context of the ambiguous word, and we have also explored more global features derived from MEDLINE using word embeddings. Results show that word embeddings improve the performance of more traditional features and also allow using recurrent neural network classifiers based on Long-Short Term Memory (LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets a new state-of-the-art performance with a macro accuracy of 95.97 in the MSH WSD data set. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Recurrent-Neural-Network-Based Multivariable Adaptive Control for a Class of Nonlinear Dynamic Systems With Time-Varying Delay.

    Science.gov (United States)

    Hwang, Chih-Lyang; Jan, Chau

    2016-02-01

    First, an approximate nonlinear autoregressive moving average (NARMA) model is employed to represent a class of multivariable nonlinear dynamic systems with time-varying delay. The known disadvantages of robust control for the NARMA model are as follows: 1) suitable control parameters for larger time delays are more sensitive for achieving desirable performance; 2) it only deals with bounded uncertainty; and 3) the nominal NARMA model must be learned in advance. Due to the dynamic feature of the NARMA model, a recurrent neural network (RNN) is applied online to learn it. However, the system performance deteriorates due to poor learning of larger variations of the system vector functions. In this situation, a simple network is employed to compensate for the upper bound of the residue caused by the linear parameterization of the approximation error of the RNN. An e-modification learning law with a projection for the weight matrix is applied to guarantee its boundedness without persistent excitation. Under suitable conditions, semiglobally ultimately bounded tracking with boundedness of the estimated weight matrix is obtained by the proposed RNN-based multivariable adaptive control. Finally, simulations are presented to verify the effectiveness and robustness of the proposed control.

  2. Recurrent-neural-network-based identification of a cascade hydraulic actuator for closed-loop automotive power transmission control

    Energy Technology Data Exchange (ETDEWEB)

    You, Seung Han [Hyundai Motor Company, Seoul (Korea, Republic of); Hahn, Jin Oh [University of Alberta, Edmonton (Canada)

    2012-05-15

    By virtue of their ease of operation compared with conventional manual transmissions, automatic transmissions are commonly used as the power transmission control system in today's passenger cars. In line with this trend, research on closed-loop automatic transmission control has been carried out extensively to improve ride quality and fuel economy. State-of-the-art power transmission control algorithms may have limited performance because they rely on the steady-state characteristics of the hydraulic actuator rather than fully exploiting its dynamic characteristics. Since the ultimate viability of closed-loop power transmission control is dominated by precise pressure control at the level of the hydraulic actuator, closed-loop control can potentially attain superior efficacy if the hydraulic actuator can be easily incorporated into model-based observer/controller design. In this paper, we propose to use a recurrent neural network (RNN) to establish a nonlinear empirical model of a cascade hydraulic actuator in a passenger car automatic transmission, which has the potential to be easily incorporated in designing observers and controllers. Experimental analysis is performed to grasp key system characteristics, based on which a nonlinear system identification procedure is carried out. Extensive experimental validation of the established model suggests that it has superb one-step-ahead prediction capability over the appropriate frequency range, making it an attractive approach for model-based observer/controller design applications in automotive systems.

  3. Time-space analysis in photoelasticity images using recurrent neural networks to detect zones with stress concentration

    Science.gov (United States)

    Briñez de León, Juan C.; Restrepo M., Alejandro; Branch, John W.

    2016-09-01

    Digital photoelasticity is based on image analysis techniques that describe the stress distribution in birefringent materials subjected to mechanical loads. However, the optical assemblies for capturing the images, the steps to extract the information, and the ambiguities of the results limit the analysis in zones with stress concentrations. These zones contain stress values that could produce a failure, making their identification important. This paper identifies zones with stress concentration in a sequence of photoelasticity images captured from a circular disc under diametral compression. For the capturing process, a plane polariscope was assembled around the disc, and a digital camera stored the temporal fringe colors generated during load application. Stress concentration zones were identified by modeling the temporal intensities captured by every pixel in the sequence. In this case, an Elman recurrent neural network was trained to model the temporal intensities. Pixel positions near the stress concentration zones yielded different trained network parameters compared with pixel positions belonging to zones of lower stress concentration.

  4. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.
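    The idea of letting a recurrent network's dynamics settle on a QP solution can be illustrated with the simplest case: an unconstrained strictly convex QP solved by Euler-integrating gradient dynamics. This is a toy sketch, not the paper's projection-type RNN for the unified constrained QP; Q, c, and the gains are made up.

```python
import numpy as np

# A small strictly convex QP: minimize 0.5 * x'Qx + c'x.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])

x = np.zeros(2)                       # network state, initialized at the origin
gamma, dt = 1.0, 0.01
for _ in range(2000):                 # Euler-integrate x' = -gamma * (Qx + c)
    x = x - dt * gamma * (Q @ x + c)

x_star = np.linalg.solve(Q, -c)       # analytic minimizer for comparison
```

For the constrained QP in the paper, the same principle applies but the state is projected onto the feasible set at each step, and the equilibrium of the network dynamics coincides with the KKT point of the QP.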

  5. Delay-Range-Dependent Global Robust Passivity Analysis of Discrete-Time Uncertain Recurrent Neural Networks with Interval Time-Varying Delay

    Directory of Open Access Journals (Sweden)

    Chien-Yu Lu

    2009-01-01

    Full Text Available This paper presents a passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with norm-bounded time-varying parameter uncertainties and interval time-varying delay. The activation functions are assumed to be globally Lipschitz continuous. Based on an appropriate type of Lyapunov functional, sufficient passivity conditions for the DRNNs are derived in terms of a family of linear matrix inequalities (LMIs). Two numerical examples are given to illustrate the effectiveness and applicability of the results.

  6. Multistability analysis of a general class of recurrent neural networks with non-monotonic activation functions and time-varying delays.

    Science.gov (United States)

    Liu, Peng; Zeng, Zhigang; Wang, Jun

    2016-07-01

    This paper addresses multistability for a general class of recurrent neural networks with time-varying delays. Without assuming linearity or monotonicity of the activation functions, several new sufficient conditions are obtained to ensure the existence of (2K+1)^n equilibrium points, and the exponential stability of (K+1)^n equilibrium points among them, for n-neuron neural networks, where K is a positive integer determined jointly by the type of activation functions and the parameters of the neural network. The obtained results generalize and improve upon earlier publications. Furthermore, the attraction basins of these exponentially stable equilibrium points are estimated. It is revealed that the attraction basins of these exponentially stable equilibrium points can be larger than their originally partitioned subsets. Finally, three illustrative numerical examples show the effectiveness of the theoretical results.

  7. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm are used to determine the depth of anesthesia in the continuation stage of anesthesia and to estimate the amount of anesthetic medicine to be applied at that moment. Feed-forward neural networks are also used for comparison, and the conjugate gradient algorithm is compared with back propagation (BP) for training of the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The sigmoid activation function is used in the hidden layer and the output layer. EEG data have been recorded with a Nihon Kohden 9200 brand 22-channel EEG device; the international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in assembling the recording electrodes, and the EEG data have been sampled once every 2 milliseconds. The artificial neural network has been designed so as to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The inputs are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the EEG segment at that moment in the same range to the total PSD power of an EEG segment taken prior to the anesthesia.
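
    The input-feature computation described above can be sketched on synthetic data (real EEG and the exact 60-feature layout are not reproduced; the 10 Hz test tone and the noise level are assumptions):

```python
import numpy as np

# A 10-s segment sampled every 2 ms (fs = 500 Hz) is reduced to a
# periodogram-style PSD estimate, and the 1-50 Hz band power is compared
# with a baseline ("pre-anesthesia") segment.
fs = 500.0
t = np.arange(0, 10.0, 1.0 / fs)                 # 10-second segment
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.standard_normal(t.size)

def band_psd(sig, fs, lo=1.0, hi=50.0):
    """Squared-magnitude spectrum restricted to the lo-hi Hz band."""
    psd = np.abs(np.fft.rfft(sig)) ** 2 / (fs * sig.size)
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return freqs[mask], psd[mask]

freqs, psd = band_psd(x, fs)
baseline = np.sin(2 * np.pi * 10.0 * t)          # stand-in baseline segment
_, psd0 = band_psd(baseline, fs)
ratio = psd.sum() / psd0.sum()                   # relative band-power feature
print(freqs[np.argmax(psd)], ratio)              # dominant frequency ~10 Hz
```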

  8. An Improved Recurrent Neural Network for Complex-Valued Systems of Linear Equation and Its Application to Robotic Motion Tracking.

    Science.gov (United States)

    Ding, Lei; Xiao, Lin; Liao, Bolin; Lu, Rongbo; Peng, Hua

    2017-01-01

    To obtain the online solution of complex-valued systems of linear equations in the complex domain with higher precision and a higher convergence rate, a new neural network based on the Zhang neural network (ZNN) is investigated in this paper. First, this new neural network for complex-valued systems of linear equations is proposed and theoretically proved to be convergent within finite time. Then, illustrative results show that the new neural network model has higher precision and a higher convergence rate than the gradient neural network (GNN) model and the ZNN model. Finally, the proposed method is applied to controlling a robot through the underlying complex-valued system of linear equations, and the simulation results verify the effectiveness and superiority of the new neural network.
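
    As a rough numerical illustration, the gradient neural network (GNN) baseline mentioned above can be integrated directly in the complex domain; the matrix, gain, and step size below are assumptions, and the paper's finite-time activation is not reproduced:

```python
import numpy as np

# GNN dynamics for a complex-valued linear system Ax = b:
#   dx/dt = -gamma * A^H (A x - b),
# which drives the residual to zero for invertible A. (The improved ZNN in
# the paper adds a specially designed activation to reach the solution in
# finite time; that refinement is not shown here.)
A = np.array([[2 + 1j, 1 + 0j],
              [0.5j,   1 + 2j]])
b = np.array([1 + 1j, 2 - 1j])

gamma, dt = 1.0, 0.05
x = np.zeros(2, dtype=complex)
for _ in range(4000):                       # Euler integration
    x -= dt * gamma * A.conj().T @ (A @ x - b)

print(np.max(np.abs(A @ x - b)))            # residual ~ 0
```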

  9. Existence and uniqueness of pseudo almost-periodic solutions of recurrent neural networks with time-varying coefficients and mixed delays.

    Science.gov (United States)

    Ammar, Boudour; Chérif, Farouk; Alimi, Adel M

    2012-01-01

    This paper is concerned with the existence and uniqueness of pseudo almost-periodic solutions to recurrent delayed neural networks. Several conditions guaranteeing the existence and uniqueness of such solutions are obtained in a suitable convex domain. Furthermore, several methods are applied to establish sufficient criteria for the globally exponential stability of this system. The approaches are based on constructing suitable Lyapunov functionals and the well-known Banach contraction mapping principle. Moreover, the attractivity and exponential stability of the pseudo almost-periodic solution are also considered for the system. A numerical example is given to illustrate the effectiveness of our results.

  10. Impulsive control for existence, uniqueness, and global stability of periodic solutions of recurrent neural networks with discrete and continuously distributed delays.

    Science.gov (United States)

    Li, Xiaodi; Song, Shiji

    2013-06-01

    In this paper, a class of recurrent neural networks with discrete and continuously distributed delays is considered. Sufficient conditions for the existence, uniqueness, and global exponential stability of a periodic solution are obtained by using contraction mapping theorem and stability theory on impulsive functional differential equations. The proposed method, which differs from the existing results in the literature, shows that network models may admit a periodic solution which is globally exponentially stable via proper impulsive control strategies even if it is originally unstable or divergent. Two numerical examples and their computer simulations are offered to show the effectiveness of our new results.

  11. A new recurrent neural network for solving convex quadratic programming problems with an application to the k-winners-take-all problem.

    Science.gov (United States)

    Hu, Xiaolin; Zhang, Bo

    2009-04-01

    In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features global convergence property under weak conditions, low structural complexity, and no calculation of matrix inverse. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that by some variable substitution, the proposed network turns out to be an existing model for solving minimax problems. In this sense, it can be also viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all (k-WTA) network with O(n) complexity is designed, which is characterized by simple structure, global convergence, and capability to deal with some ill cases. Numerical simulations are provided to validate the theoretical results obtained. More importantly, the network design method proposed in this paper has great potential to inspire other competitive inventions along the same line.
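
    A toy continuous-time k-WTA sketch (not the paper's O(n) model): each output is a steep sigmoid of its input minus a shared threshold, and a single recurrent threshold variable integrates the constraint that exactly k outputs be active. All parameters below are illustrative.

```python
import numpy as np

def kwta(u, k, eps=0.01, dt=0.05, steps=4000):
    """Select the k largest entries of u via a scalar recurrent threshold."""
    theta = 0.0
    out = np.zeros_like(u)
    for _ in range(steps):
        out = 1.0 / (1.0 + np.exp(-(u - theta) / eps))  # steep sigmoid outputs
        theta += dt * (out.sum() - k)                   # raise/lower threshold
    return out

u = np.array([0.1, 0.9, 0.5, 0.3, 0.8])
out = kwta(u, k=2)
print(np.round(out, 3))    # ~1 at the two largest inputs, ~0 elsewhere
```

The threshold settles in the gap between the k-th and (k+1)-th largest inputs, at which point the output sum equals k and the dynamics stall there.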

  12. Sliding Mode Control of a Class of Uncertain Nonlinear Time-Delay Systems Using LMI and TS Recurrent Fuzzy Neural Network

    Science.gov (United States)

    Chiang, Tung-Sheng; Chiu, Chian-Song

    This paper proposes sliding mode control using LMI techniques and an adaptive recurrent fuzzy neural network (RFNN) for a class of uncertain nonlinear time-delay systems. First, a novel TS recurrent fuzzy neural network (TS-RFNN) is developed to provide more flexible and powerful compensation of system uncertainty. Then, TS-RFNN-based sliding mode control is proposed for uncertain time-delay systems. In detail, the sliding surface design is derived to cope with the non-Isidori-Byrnes canonical form of the dynamics, unknown delay time, and mismatched uncertainties. Based on the Lyapunov-Krasovskii method, the asymptotic stability condition of the sliding motion is formulated as a Linear Matrix Inequality (LMI) problem which is independent of the time-varying delay. Furthermore, input coupling uncertainty is also taken into consideration. The overall controlled system achieves asymptotic stability even under poor modeling. The contributions include: i) the asymptotic sliding surface is designed by solving a simple and legible delay-independent LMI; and ii) the TS-RFNN is more realizable (due to fewer fuzzy rules being used). Finally, simulation results demonstrate the validity of the proposed control scheme.

  13. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    National Research Council Canada - National Science Library

    Naikwad, S. N; Dudul, S. V

    2009-01-01

    It is noted from the literature review that process control of the CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available...

  14. Recurrent networks for wave forecasting

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper presents an application of the Artificial Neural Network, namely Backpropagation Recurrent Neural Network (BRNN) with rprop update algorithm for wave forecasting...

  15. A RECURRENT ELMAN NEURAL NETWORK-BASED APPROACH TO DETECT THE PRESENCE OF EPILEPTIC ATTACK IN ELECTROENCEPHALOGRAM (EEG) SIGNALS

    Directory of Open Access Journals (Sweden)

    Mr.S.Sundaram

    2014-10-01

    Epileptic attacks are detected largely through the analysis of electroencephalogram (EEG) signals. EEG recordings generate very bulky data which require skilled and careful analysis. This analysis can be automated based on an Elman neural network by using a time-frequency domain characteristic of the EEG signal called approximate entropy (ApEn). The method consists of EEG data collection, feature extraction and classification. EEG data from normal persons and epilepsy-affected persons were collected, digitized and then fed into the Elman neural network. The proposed system is thus a neural-network-based automated epileptic EEG detection system that uses approximate entropy (ApEn) as the input feature. ApEn [1] is a statistical parameter that measures the predictability of the current amplitude values of a physiological signal based on its previous amplitude values. It is known that the value of ApEn drops sharply during an epileptic attack [2], and this fact is used in the proposed system. A particular type of neural network, namely the Elman neural network, is considered in this paper. The experimental results show that the proposed approach efficiently detects the presence of epileptic seizures [3] in EEG signals with reasonable accuracy.
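
    Approximate entropy itself is straightforward to compute with the standard pair-counting definition; the signals and the choices m = 2, r = 0.2 × std below are illustrative, not the paper's EEG data:

```python
import numpy as np

# Standard ApEn: compare the regularity of m-length and (m+1)-length
# patterns; predictable signals score low, irregular signals score high.
def apen(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                      # tolerance for pattern matches

    def phi(mm):
        n = x.size - mm + 1
        pats = np.array([x[i:i + mm] for i in range(n)])
        counts = [
            np.mean(np.max(np.abs(pats - pats[i]), axis=1) <= r)
            for i in range(n)                   # self-match keeps counts > 0
        ]
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 300)
regular = np.sin(t)                      # predictable signal -> low ApEn
irregular = rng.standard_normal(300)     # white noise -> high ApEn
print(apen(regular), apen(irregular))    # ApEn drops for the regular signal
```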

  16. Use of Family History Information for Neural Tube Defect Prevention: Integration into State-Based Recurrence Prevention Programs

    Science.gov (United States)

    Green, Ridgely Fisk; Ehrhardt, Joan; Ruttenber, Margaret F.; Olney, Richard S.

    2011-01-01

    A family history of neural tube defects (NTDs) can increase the risk of a pregnancy affected by an NTD. Periconceptional folic acid use decreases this risk. Purpose: Our objective was to determine whether second-degree relatives of NTD-affected children showed differences in folic acid use compared with the general population and to provide them…

  17. A visual sense of number emerges from the dynamics of a recurrent on-center off-surround neural network.

    Science.gov (United States)

    Sengupta, Rakesh; Surampudi, Bapi Raju; Melcher, David

    2014-09-25

    It has been proposed that the ability of humans to quickly perceive numerosity involves a visual sense of number. Different paradigms of enumeration and numerosity comparison have produced a gamut of behavioral and neuroimaging data, but there has been no unified conceptual framework that can explain results across the entire range of numerosity. The current work addresses the ongoing debate concerning whether the same mechanism operates for enumeration of small and large numbers, through a computational approach. We describe the workings of a single-layered, fully connected network characterized by self-excitation and recurrent inhibition that operates at both subitizing and estimation ranges. We show that such a network can account for classic numerical cognition effects (the distance effect, Fechner's law, the Weber fraction for numerosity comparison) through the network's steady-state activation response across different recurrent inhibition values. The model also accounts for fMRI data previously reported for different enumeration-related tasks, and it allows us to generate an estimate of the pattern of reaction times in enumeration tasks. Overall, these findings suggest that a single network architecture can account for both small and large number processing.
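
    A generic sketch of such self-excitation/recurrent-inhibition dynamics (the parameters and the rectified saturating rate function are assumptions, not the paper's fitted model) shows the compressive effect of shared inhibition: units driven by an item settle at a lower rate when more items are presented.

```python
import numpy as np

# Fully connected network: each unit self-excites and all units share a
# recurrent inhibition proportional to the mean population rate.
def steady_state(n_items, n_units=20, w_exc=0.5, w_inh=1.0,
                 dt=0.05, steps=4000):
    f = lambda v: np.tanh(np.maximum(v, 0.0))     # rectified, saturating rate
    I = np.zeros(n_units)
    I[:n_items] = 1.0                             # one input per "item"
    x = np.zeros(n_units)
    for _ in range(steps):                        # Euler integration to rest
        r = f(x)
        dx = -x + w_exc * r - w_inh * r.mean() + I
        x += dt * dx
    return x[:n_items].mean()                     # activation of driven units

a2, a8 = steady_state(2), steady_state(8)
print(a2, a8)   # per-item activation compresses as numerosity grows
```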

  18. Multiple data fusion for rainfall estimation using a NARX-based recurrent neural network - the development of the REIINN model

    Science.gov (United States)

    Ang, M. R. C. O.; Gonzalez, R. M.; Castro, P. P. M.

    2014-03-01

    Rainfall, one of the important elements of the hydrologic cycle, is also the most difficult to model. Accurate rainfall estimation is therefore necessary, especially in localized catchment areas where the variability of rainfall is extremely high. Moreover, early warning of severe rainfall through timely and accurate estimation and forecasting could help prevent disasters from flooding. This paper presents the development of two rainfall estimation models that utilize a NARX-based neural network architecture, namely REIINN 1 and REIINN 2. These REIINN models, or Rainfall Estimation by Information Integration using Neural Networks, were trained using MTSAT cloud-top temperature (CTT) images and rainfall rates from the combined rain gauge and TMPA 3B40RT datasets. Model performance was assessed using two metrics: root mean square error (RMSE) and the correlation coefficient (R). REIINN 1 yielded an RMSE of 8.1423 mm/3h and an overall R of 0.74652, while REIINN 2 yielded an RMSE of 5.2303 and an overall R of 0.90373. The results, especially those of REIINN 2, are very promising for satellite-based rainfall estimation at a catchment scale. It is believed that model performance and accuracy will greatly improve with denser and more spatially distributed in-situ rainfall measurements to calibrate the model with. The models proved the viability of remote sensing images, with their good spatial coverage, near-real-time availability, and relatively low acquisition cost, as an alternative source for rainfall estimation to complement existing ground-based measurements.
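
    The two assessment metrics are standard and can be computed as follows on illustrative numbers (the REIINN estimates themselves are not reproduced):

```python
import numpy as np

# Root mean square error and Pearson correlation, the two metrics above.
def rmse(est, obs):
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    return np.sqrt(np.mean((est - obs) ** 2))

def corrcoef(est, obs):
    return np.corrcoef(est, obs)[0, 1]

obs = np.array([0.0, 2.0, 4.0, 10.0, 6.0])    # e.g. gauge rainfall, mm/3h
est = np.array([1.0, 2.5, 3.0, 9.0, 7.0])     # e.g. model estimate, mm/3h
print(rmse(est, obs), corrcoef(est, obs))
```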

  19. A recurrent neural network approach to quantitatively studying solar wind effects on TEC derived from GPS; preliminary results

    Directory of Open Access Journals (Sweden)

    J. B. Habarulema

    2009-05-01

    This paper describes the search for the parameter(s) to represent solar wind effects in Global Positioning System total electron content (GPS TEC) modelling using the technique of neural networks (NNs). A study is carried out by including the solar wind velocity (Vsw), proton number density (Np) and the Bz component of the interplanetary magnetic field (IMF Bz) obtained from the Advanced Composition Explorer (ACE) satellite as separate inputs to the NN, each along with the day number of the year (DN), hour (HR), a 4-month running mean of the daily sunspot number (R4) and the running mean of the previous eight 3-hourly magnetic A index values (A8). Hourly GPS TEC values derived from a dual-frequency receiver located at Sutherland (32.38° S, 20.81° E), South Africa, for 8 years (2000–2007) have been used to train the Elman neural network (ENN), and the result has been used to predict TEC variations for a GPS station located at Cape Town (33.95° S, 18.47° E). Quantitative results indicate that each of the parameters considered may have some degree of influence on GPS TEC at certain periods, although a decrease in prediction accuracy is also observed for some parameters on different days and in different seasons. It is also evident that there is still difficulty in predicting TEC values during disturbed conditions. The improvements and degradations in prediction accuracy are both close to the benchmark values, which lends weight to the belief that diurnal, seasonal, solar and magnetic variabilities may be the major determinants of TEC variability.

  20. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks.

    Science.gov (United States)

    Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto

    2014-01-01

    Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have so far been studied mainly at the single-neuron level. To investigate how these synaptic models affect network activity, we compared the single-neuron and neural-population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks so defined displayed an excellent and robust match of first-order statistics (average single-neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second-order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlations between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band, and carried spectral information about the input that was higher and spread over a broader range of frequencies. These results suggest that the second-order statistics of network dynamics depend strongly on the choice of synaptic model.
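
    A single-neuron sketch of the two synapse models (all constants are illustrative, not the network parameters of the study) shows the key structural difference: the conductance-based drive depends on the membrane potential, while the current-based drive does not.

```python
# LIF neuron with an exponentially decaying synaptic variable that is
# interpreted either as a conductance g(t) (drive g*(Ee - V)) or as an
# injected current (drive independent of V). Input spikes arrive every 5 ms.
def lif(conductance_based, T=500.0, dt=0.1):
    EL, Ee, Vth, Vreset = -70.0, 0.0, -50.0, -70.0   # mV
    tau_m, tau_s = 20.0, 5.0                         # ms
    V, syn, spikes = EL, 0.0, 0
    for step in range(round(T / dt)):
        if step % 50 == 0:                           # input spike every 5 ms
            syn += 0.03 if conductance_based else 1.5
        syn -= dt * syn / tau_s                      # exponential decay
        if conductance_based:
            dV = (EL - V) / tau_m + syn * (Ee - V)   # syn acts as g(t), 1/ms
        else:
            dV = (EL - V) / tau_m + syn              # syn acts as I(t), mV/ms
        V += dt * dV
        if V >= Vth:                                 # threshold crossing
            V, spikes = Vreset, spikes + 1
    return spikes

print(lif(False), lif(True))   # both synapse models fire under this drive
```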

  1. Potentiation decay of synapses and length distributions of synfire chains self-organized in recurrent neural networks.

    Science.gov (United States)

    Miller, Aaron; Jin, Dezhe Z

    2013-12-01

    Synfire chains are thought to underlie precisely timed sequences of spikes observed in various brain regions and across species. How they are formed is not understood. Here we analyze self-organization of synfire chains through the spike-timing dependent plasticity (STDP) of the synapses, axon remodeling, and potentiation decay of synaptic weights in networks of neurons driven by noisy external inputs and subject to dominant feedback inhibition. Potentiation decay is the gradual, activity-independent reduction of synaptic weights over time. We show that potentiation decay enables a dynamic and statistically stable network connectivity when neurons spike spontaneously. Periodic stimulation of a subset of neurons leads to formation of synfire chains through a random recruitment process, which terminates when the chain connects to itself and forms a loop. We demonstrate that chain length distributions depend on the potentiation decay. Fast potentiation decay leads to long chains with wide distributions, while slow potentiation decay leads to short chains with narrow distributions. We suggest that the potentiation decay, which corresponds to the decay of early long-term potentiation of synapses, is an important synaptic plasticity rule in regulating formation of neural circuitry through STDP.
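
    The two plasticity ingredients can be sketched as a pair-based STDP rule plus an activity-independent multiplicative decay; the amplitudes, time constants, decay rate, and weight bounds are illustrative assumptions, not the paper's values.

```python
import math

# Pair-based STDP: pre-before-post (delta_t > 0) potentiates,
# post-before-pre depresses, both with exponential timing dependence.
def stdp(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)    # potentiation
    return -a_minus * math.exp(delta_t / tau)       # depression

def run(n_events, delta_t=None, decay=0.001, w0=0.5, w_max=1.0):
    w = w0
    for _ in range(n_events):
        if delta_t is not None:
            w += stdp(delta_t)                      # one pre/post pairing
        w *= 1.0 - decay                            # potentiation decay
        w = min(max(w, 0.0), w_max)                 # keep weight in bounds
    return w

w_paired = run(200, delta_t=5.0)   # repeated causal pairing -> strong synapse
w_idle = run(200)                  # decay alone -> weight relaxes
print(w_paired, w_idle)
```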

  2. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    Science.gov (United States)

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires lower computational resources.

  3. Design of a decoupled AP1000 reactor core control system using digital proportional–integral–derivative (PID) control based on a quasi-diagonal recurrent neural network (QDRNN)

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Xinyu, E-mail: xyuwei@mail.xjtu.edu.cn; Wang, Pengfei, E-mail: pengfeixiaoli@yahoo.cn; Zhao, Fuyu, E-mail: fuyuzhao_xj@163.com

    2016-08-01

    Highlights: • We establish a disperse dynamic model for AP1000 reactor core. • A digital PID control based on QDRNN is used to design a decoupling control system. • The decoupling performance is verified and discussed. • The decoupling control system is simulated under the load following operation. - Abstract: The control system of the AP1000 reactor core uses the mechanical shim (MSHIM) strategy, which includes a power control subsystem and an axial power distribution control subsystem. To address the strong coupling between the two subsystems, an interlock between the two subsystems is used, which can only alleviate but not eliminate the coupling. Therefore, sometimes the axial offset (AO) cannot be controlled tightly, and the flexibility of load-following operation is limited. Thus, the decoupling of the original AP1000 reactor core control system is the focus of this paper. First, a two-node disperse dynamic model is established for the AP1000 reactor core to use PID control. Then, a digital PID control system based on a quasi-diagonal recurrent neural network (QDRNN) is designed to decouple the original system. Finally, the decoupling of the control system is verified by the step signal and load-following condition. The results show that the designed control system can decouple the original system as expected and the AO can be controlled much more tightly. Moreover, the flexibility of the load following is increased.
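
    The discrete PID building block is standard; a minimal digital PID loop on a toy first-order plant (the gains and the plant are assumptions; the QDRNN tuning and the two-node core model are not reproduced) looks like this:

```python
# Digital PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt, applied to a first-order
# plant dx/dt = -x + u, integrated with a simple Euler step.
def simulate(kp=2.0, ki=1.0, kd=0.1, dt=0.01, steps=2000, setpoint=1.0):
    x = 0.0                        # plant state
    integral, prev_err = 0.0, setpoint
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt       # integral term (removes steady-state error)
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv
        x += dt * (-x + u)         # Euler step of the plant
    return x

print(simulate())                  # settles near the setpoint 1.0
```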

  4. An adaptive recurrent neural-network controller using a stabilization matrix and predictive inputs to solve a tracking problem under disturbances.

    Science.gov (United States)

    Fairbank, Michael; Li, Shuhui; Fu, Xingang; Alonso, Eduardo; Wunsch, Donald

    2014-01-01

    We present a recurrent neural-network (RNN) controller designed to solve the tracking problem for control systems. We demonstrate that a major difficulty in training any RNN is the problem of exploding gradients, and we propose a solution to this in the case of tracking problems, by introducing a stabilization matrix and by using carefully constrained context units. This solution allows us to achieve consistently lower training errors, and hence allows us to more easily introduce adaptive capabilities. The resulting RNN is one that has been trained off-line to be rapidly adaptive to changing plant conditions and changing tracking targets. The case study we use is a renewable-energy generator application; that of producing an efficient controller for a three-phase grid-connected converter. The controller we produce can cope with the random variation of system parameters and fluctuating grid voltages. It produces tracking control with almost instantaneous response to changing reference states, and virtually zero oscillation. This compares very favorably to the classical proportional-integral (PI) controllers, which we show produce a much slower response and settling time. In addition, the RNN we propose exhibits better learning stability and convergence properties, and can exhibit faster adaptation, than has been achieved with adaptive critic designs.
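
    The exploding-gradient problem can be seen in a linear recurrence, where the backpropagated Jacobian product over T steps is W^T: its norm blows up when the spectral radius of W exceeds 1. Rescaling the recurrent matrix below radius 1 (a crude stand-in for the stabilization idea, not the paper's construction) bounds the product.

```python
import numpy as np

# Build a symmetric matrix normalized to spectral radius 1, so that the
# 2-norm of its T-th power is exactly radius**T when rescaled.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
S = 0.5 * (M + M.T)                       # symmetric: norm equals radius
S /= np.max(np.abs(np.linalg.eigvalsh(S)))

T = 50
norm = lambda W: np.linalg.norm(np.linalg.matrix_power(W, T), 2)
print(norm(1.5 * S))   # ~1.5**50: the Jacobian product explodes
print(norm(0.9 * S))   # ~0.9**50: constrained radius keeps it bounded
```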

  5. Reversal of rocuronium-induced neuromuscular blockade by sugammadex allows for optimization of neural monitoring of the recurrent laryngeal nerve.

    Science.gov (United States)

    Lu, I-Cheng; Wu, Che-Wei; Chang, Pi-Ying; Chen, Hsiu-Ya; Tseng, Kuang-Yi; Randolph, Gregory W; Cheng, Kuang-I; Chiang, Feng-Yu

    2016-04-01

    The use of a neuromuscular blocking agent may affect intraoperative neuromonitoring (IONM) during thyroid surgery. An enhanced neuromuscular-blockade (NMB) recovery protocol was investigated in a porcine model and subsequently clinically applied during human thyroid neural monitoring surgery. Prospective animal and retrospective clinical study. In the animal experiment, 12 piglets were injected with rocuronium 0.6 mg/kg and randomly allocated to receive normal saline, sugammadex 2 mg/kg, or sugammadex 4 mg/kg to compare the recovery of laryngeal electromyography (EMG). In a subsequent clinical application study, 50 patients who underwent thyroidectomy with IONM followed an enhanced NMB recovery protocol: rocuronium 0.6 mg/kg at anesthesia induction and sugammadex 2 mg/kg at the operation start. The train-of-four (TOF) ratio was used for continuous quantitative monitoring of neuromuscular transmission. In our porcine model, it took 49 ± 15, 13.2 ± 5.6, and 4.2 ± 1.5 minutes for the 80% recovery of laryngeal EMG after injection of saline, sugammadex 2 mg/kg, and sugammadex 4 mg/kg, respectively. In the subsequent clinical human application, the TOF ratio recovered from 0 to >0.9 within 5 minutes after administration of sugammadex 2 mg/kg at the operation start. All patients had positive and high EMG amplitude at the early stage of the operation, and intubation was without difficulty in 96% of patients. Both porcine modeling and clinical human application demonstrated that sugammadex 2 mg/kg allows effective and rapid restoration of neuromuscular function suppressed by rocuronium. Implementation of this enhanced NMB recovery protocol assures optimal conditions for tracheal intubation as well as IONM in thyroid surgery.

  6. Multivariable Nonlinear Proportional-Integral-Derivative Decoupling Control Based on Recurrent Neural Networks

    Institute of Scientific and Technical Information of China (English)

    张燕; 陈增强; 杨鹏; 袁著祉

    2004-01-01

    A nonlinear proportional-integral-derivative (PID) controller is constructed based on recurrent neural networks. In the control of nonlinear multivariable systems, several nonlinear PID controllers are adopted in parallel. Under a decoupling cost function, a decoupling control strategy is proposed. The stability condition of the controller is then presented based on Lyapunov theory. Simulation examples are given to show the effectiveness of the proposed decoupling control.

  7. Recurrent recurrent gallstone ileus.

    Science.gov (United States)

    Hussain, Z; Ahmed, M S; Alexander, D J; Miller, G V; Chintapatla, S

    2010-07-01

    We describe the second reported case of three consecutive episodes of gallstone ileus and ask whether recurrent gallstone ileus justifies definitive surgery to the fistula itself or can be safely managed by repeated enterotomies.

  8. Recurrent Spatial Transformer Networks

    DEFF Research Database (Denmark)

    Sønderby, Søren Kaae; Sønderby, Casper Kaae; Maaløe, Lars;

    2015-01-01

    We integrate the recently proposed spatial transformer network (SPN) [Jaderberg et. al 2015] into a recurrent neural network (RNN) to form an RNN-SPN model. We use the RNN-SPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single digit error of 1.5% compared to 2...

  9. Speech Recognition Model Based on Recurrent Neural Networks

    Institute of Scientific and Technical Information of China (English)

    朱小燕; 王昱; 徐伟

    2001-01-01

    In recent years, speech recognition technology based on hidden Markov models (HMMs) has developed greatly. However, the HMM has certain limitations, and how to overcome the problems caused by its first-order assumption and independence assumption has been a hot research topic. Introducing neural network methods into speech recognition is one way around these limitations. This paper applies recurrent neural networks to Mandarin speech recognition, modifies the original network model, and proposes a corresponding training method. Experimental results show that the model has good continuous-signal processing performance, comparable to the traditional HMM model, and that the new training strategy improves training speed while clearly improving the classification performance of the model. To overcome some weaknesses of the hidden Markov model in speech recognition, HMM/NN hybrid systems have been explored by many researchers in recent years. In previous HMM/NN hybrid systems, the neural networks adopted were mostly multilayer perceptrons (MLPs). In our system, recurrent neural networks (RNNs) were used in place of the MLP as the syllable probability estimator. An RNN is an MLP incorporating feedback, which can transport the output of some neurons to other neurons or back to themselves. The incorporation of feedback into an MLP gives the net the ability to efficiently process the context information of a time sequence, which is especially useful for speech recognition. In this paper, the architecture of the RNN is modified and a corresponding training scheme is presented. The following techniques have been adopted in our system. 1. A network with a single layer has been adopted, while the content of the feedback differs from the networks used by previous researchers; i.e., the external output is included in the feedback, not just the internal state output. 2. The training algorithm adopted in our system is the back-propagation through time (BPTT) algorithm. In the common BPTT algorithm, the initial feedback values are set arbitrarily according to experience. This means that the initial feedback is not specific to the problem we are dealing with
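
    The modified feedback described in technique 1 can be sketched as a single recurrent step whose feedback includes the external softmax output as well as the hidden state (the sizes and random weights are illustrative; BPTT training is not shown):

```python
import numpy as np

# Single-layer recurrent step: the hidden update sees the input, the previous
# hidden state, and the previous external (class-probability) output.
rng = np.random.default_rng(0)
n_in, n_h, n_out = 12, 8, 4
Wx = 0.1 * rng.standard_normal((n_h, n_in))
Wh = 0.1 * rng.standard_normal((n_h, n_h))
Wy = 0.1 * rng.standard_normal((n_h, n_out))   # feedback from external output
Wo = 0.1 * rng.standard_normal((n_out, n_h))

def step(x, h, y):
    h_new = np.tanh(Wx @ x + Wh @ h + Wy @ y)  # output fed back with state
    z = Wo @ h_new
    y_new = np.exp(z - z.max())
    y_new /= y_new.sum()                       # softmax class probabilities
    return h_new, y_new

h, y = np.zeros(n_h), np.full(n_out, 1.0 / n_out)
for x in rng.standard_normal((5, n_in)):       # a 5-frame input sequence
    h, y = step(x, h, y)
print(y)                                       # probabilities summing to 1
```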

  10. Recurrent varicocele

    Directory of Open Access Journals (Sweden)

    Katherine Rotker

    2016-01-01

    Varicocele recurrence is one of the most common complications associated with varicocele repair. A systematic review was performed to evaluate varicocele recurrence rates, anatomic causes of recurrence, and methods of management of recurrent varicoceles. The PubMed database was searched using the keywords "recurrent" and "varicocele" as well as the MeSH criteria "recurrent" and "varicocele". Articles were excluded if they were not in English, represented single case reports, focused solely on subclinical varicocele, or focused solely on a pediatric population (age <18). Rates of recurrence vary with the technique of varicocele repair, from 0% to 35%. The anatomy of recurrence can be defined by venography. Management of varicocele recurrence can be surgical or via embolization.

  11. Research on Estimation of Ads Click Rate Based on Recurrent Neural Network

    Institute of Scientific and Technical Information of China (English)

    陈巧红; 孙超红; 余仕敏; 贾宇波

    2016-01-01

    In order to improve the estimation accuracy of the ads click rate and thus improve the revenue of online advertising, feature extraction and dimensionality reduction were applied to the advertising data. An improved recurrent neural network based on LSTM was then used as the click-rate estimation model, with stochastic gradient descent as the optimization algorithm and cross entropy as the objective function. Experiments show that, compared with logistic regression, the BP neural network, and a conventional recurrent neural network, the improved LSTM-based recurrent neural network can effectively improve the estimation accuracy of the ads click rate. It not only helps advertising service providers develop reasonable pricing strategies, but also helps advertisers place advertisements reasonably, so that the revenue of each role in the advertising industry chain is maximized.
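
    A minimal numpy LSTM cell of the kind underlying such a model can be sketched as follows (the sizes, weights, and logistic click-probability head are illustrative assumptions; SGD/cross-entropy training is not shown):

```python
import numpy as np

# One LSTM cell processing a sequence of feature vectors, followed by a
# logistic head producing a click probability.
rng = np.random.default_rng(0)
n_in, n_h = 6, 5
W = 0.1 * rng.standard_normal((4 * n_h, n_in + n_h))   # gates: i, f, o, g
b = np.zeros(4 * n_h)
w_out = 0.1 * rng.standard_normal(n_h)                 # logistic head weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, o = (sigmoid(z[k * n_h:(k + 1) * n_h]) for k in range(3))
    g = np.tanh(z[3 * n_h:])
    c_new = f * c + i * g           # gated cell-state update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

h, c = np.zeros(n_h), np.zeros(n_h)
for x in rng.standard_normal((7, n_in)):   # a sequence of feature vectors
    h, c = lstm_step(x, h, c)
p_click = sigmoid(w_out @ h)               # predicted click probability
print(p_click)                             # a value in (0, 1)
```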

  12. 一种递归模糊神经网络的广义预测控制方法%A Generalized Predictive Control Method Using a Recurrent Fuzzy Neural Network

    Institute of Scientific and Technical Information of China (English)

    李国勇; 刘鹏

    2012-01-01

    A recurrent fuzzy neural network (RFNN) is constructed in which the ability to handle input information is enhanced by adding a vector-adjustment layer. Based on the designed RFNN, a discrete multi-step fuzzy prediction model of the nonlinear system is established. This model is used to predict the system output, and the corresponding predictive control law is obtained with an existing predictive control algorithm. Simulation results indicate that the method achieves high control precision as well as a certain degree of disturbance rejection.

  13. 基于递归模糊神经网络的污水处理控制方法%Wastewater treatment control method based on recurrent fuzzy neural network

    Institute of Scientific and Technical Information of China (English)

    韩改堂; 乔俊飞; 韩红桂

    2016-01-01

    To address the nonlinear and highly time-varying nature of wastewater treatment processes, a multivariable control method based on a recurrent fuzzy neural network (RFNN) is proposed. The RFNN controller adaptively attains the required control accuracy for the manipulated variables. It trains the network parameters by augmenting the conventional BP learning algorithm with an adaptive learning rate and a momentum term, which keeps the network from falling into local optima and improves the control accuracy of the system. Finally, dynamic simulation experiments on the benchmark simulation model (BSM1), controlling the dissolved oxygen concentration in the fifth zone and the nitrate nitrogen concentration in the second zone, validate the effectiveness of the method. Compared with PID, a feedforward neural network, and a conventional recurrent neural network, the experimental results show that this control method improves the adaptive control precision of the system.

  14. Lyapunov-based Analysis of Weight Convergence in the Backpropagation Neural Network Algorithm%基于李雅普诺夫函数的BP神经网络算法的收敛性分析

    Institute of Scientific and Technical Information of China (English)

    张茂元; 卢正鼎

    2004-01-01

    For the self-learning mechanism of feedforward neural networks under time-varying inputs, a Lyapunov function is used to analyze the convergence of the weights, revealing the intrinsic mechanism by which the BP algorithm adjusts the weights toward minimum error. Building on the convergence analysis of the single-parameter BP algorithm, a discrete BP neural network algorithm with a single-parameter variable adjustment rule is proposed.
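
    The Lyapunov view of BP convergence can be illustrated concretely: treat the squared error E(w) as a Lyapunov candidate and check that each gradient step decreases it when the learning rate is small enough. The single-layer linear model, data, and step size below are illustrative choices, not the paper's setup:

    ```python
    import numpy as np

    # Lyapunov candidate: E(w) = 0.5 * mean((Xw - y)^2).
    # For a sufficiently small learning rate, each BP/gradient step decreases E.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5])

    w = np.zeros(3)
    lr = 0.01
    losses = []
    for _ in range(100):
        err = X @ w - y
        losses.append(0.5 * np.mean(err ** 2))
        w -= lr * X.T @ err / len(y)   # weight update toward minimum error
    ```

    The monotone decrease of the recorded losses is exactly the Lyapunov-function argument in discrete form: E is bounded below and decreases along trajectories, so the weights converge.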

  15. Recurrent vulvovaginitis.

    Science.gov (United States)

    Powell, Anna M; Nyirjesy, Paul

    2014-10-01

    Vulvovaginitis (VV) is one of the problems most commonly encountered by gynecologists. Many women frequently self-treat with over-the-counter medications and may present to their health-care provider after a treatment failure. Vulvovaginal candidiasis, bacterial vaginosis, and trichomoniasis may occur as discrete or recurrent episodes and have been associated with significant treatment cost and morbidity. We present an update on diagnostic capabilities and treatment modalities that address recurrent and refractory episodes of VV.

  16. 基于递归模糊神经网络的移动机器人滑模控制%Sliding mode control of mobile robots based on recurrent fuzzy-neural network

    Institute of Scientific and Technical Information of China (English)

    李艳东; 王宗义; 朱玲; 刘涛

    2011-01-01

    A control structure is proposed for trajectory tracking control of nonholonomic mobile robots. It integrates a backstepping kinematic controller and a sliding mode controller with an adaptive dynamic recurrent fuzzy neural network (ADRFNN). A genetic algorithm is used to optimize the parameters of the kinematic controller, effectively suppressing the excessive initial speed and output torque caused by a large initial posture error. The ADRFNN provides on-line estimation of the uncertain nonlinear part of the dynamics, greatly reducing the uncertainty estimation errors. By combining the ADRFNN with an adaptive robust controller, the method not only handles both parametric and non-parametric uncertainties of mobile robots, but also eliminates the input chattering of sliding mode control. The stability and convergence of the control system are proved by the Lyapunov method. Simulation results demonstrate the effectiveness of the proposed approach.

  17. Dynamic neural network-based robust observers for uncertain nonlinear systems.

    Science.gov (United States)

    Dinh, H T; Kamalapurkar, R; Bhasin, S; Dixon, W E

    2014-12-01

    A dynamic neural network (DNN) based robust observer for uncertain nonlinear systems is developed. The observer structure consists of a DNN to estimate the system dynamics on-line, a dynamic filter to estimate the unmeasurable states, and a sliding mode feedback term to account for modeling errors and exogenous disturbances. The observed states are proven to converge asymptotically to the states of high-order uncertain nonlinear systems through Lyapunov-based analysis. Simulations and experiments on a two-link robot manipulator show the effectiveness of the proposed method in comparison with several other state estimation methods.

  18. Neural Network Based Modeling and Analysis of LP Control Surface Allocation

    Science.gov (United States)

    Langari, Reza; Krishnakumar, Kalmanje; Gundy-Burlet, Karen

    2003-01-01

    This paper presents an approach to interpretive modeling of LP-based control allocation in intelligent flight control. The emphasis is placed on a nonlinear interpretation of the LP allocation process as a static map to support analytical study of the resulting closed-loop system, albeit in approximate form. The approach makes use of a bi-layer neural network to capture the essential functioning of the LP allocation process. It is further shown via Lyapunov-based analysis that, under certain relatively mild conditions, the resulting closed-loop system is stable. Some preliminary conclusions from a study at Ames are stated, and directions for further research are given at the conclusion of the paper.

  19. The current state of intermittent intraoperative neural monitoring for prevention of recurrent laryngeal nerve injury during thyroidectomy: a PRISMA-compliant systematic review of overlapping meta-analyses.

    Science.gov (United States)

    Henry, Brandon Michael; Graves, Matthew J; Vikse, Jens; Sanna, Beatrice; Pękala, Przemysław A; Walocha, Jerzy A; Barczyński, Marcin; Tomaszewski, Krzysztof A

    2017-06-01

    Recurrent laryngeal nerve (RLN) injury is one of the most common and detrimental complications following thyroidectomy. Intermittent intraoperative nerve monitoring (I-IONM) has been proposed to reduce the prevalence of RLN injury following thyroidectomy and has gained increasing acceptance in recent years. A comprehensive database search was performed, and data from eligible meta-analyses meeting the inclusion criteria were extracted. Transient, permanent, and overall RLN injuries were the primary outcome measures. Quality assessment via AMSTAR, heterogeneity appraisal, and selection of the best evidence were performed via a Jadad algorithm. Eight meta-analyses met the inclusion criteria, each including between 6 and 23 original studies. Application of the Jadad algorithm selected Pisanu et al. (Surg Res 188:152-161, 2014) as the best evidence. Five of the eight meta-analyses demonstrated non-significant (p > 0.05) RLN injury reduction with the use of I-IONM versus nerve visualization alone. To date, I-IONM has not achieved a significant level of RLN injury reduction, as shown by the meta-analysis conducted by Pisanu et al. However, recent developments in IONM technology, including continuous vagal IONM and the concept of staged thyroidectomy in case of loss of signal on the first side to prevent bilateral RLN injury, may provide additional benefits that were beyond the scope of this study and need to be assessed in further prospective multicenter trials.

  20. 油压机械臂及手系统递归神经网络建模与位置控制%Recurrent Neural Networks Modeling and Position Control of Hydraulic Manipulator and Hand

    Institute of Scientific and Technical Information of China (English)

    邵辉; 野波健藏

    2012-01-01

    The model and inverse model of the angular velocity of a hydraulic manipulator and a hydraulic robotic hand are established using recurrent neural networks, providing an effective solution to the modeling and control of dynamic hydraulic systems. The inverse models are used as position controllers of the manipulator and the hand. Experiments show that the models closely match the system dynamics, with position control accuracy satisfying the requirements.

  1. Recurrent fevers.

    Science.gov (United States)

    Isaacs, David; Kesson, Alison; Lester-Smith, David; Chaitow, Jeffrey

    2013-03-01

    An 11-year-old girl had four episodes of fever in a year, lasting 7-10 days and associated with headache and neck stiffness. She had a long history of recurrent urticaria, usually preceding the fevers. There was also a history of vague pains in her knees and in the small joints of her hands. Her serum C-reactive protein was moderately raised at 41 mg/L (normal <8). Her rheumatologist felt the association of recurrent fevers lasting 7 or more days with headaches, arthralgia, and recurrent urticaria suggested one of the periodic fever syndromes. Genetic testing confirmed she had a gene mutation consistent with tumour necrosis factor receptor-associated periodic syndrome.

  2. Modular, Hierarchical Learning By Artificial Neural Networks

    Science.gov (United States)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks more structured than those in which all neurons are fully interconnected. These networks use a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  3. Spatio-temporal prediction of leaf area index of rubber plantation using HJ-1A/1B CCD images and recurrent neural network

    Science.gov (United States)

    Chen, Bangqian; Wu, Zhixiang; Wang, Jikun; Dong, Jinwei; Guan, Liming; Chen, Junming; Yang, Kai; Xie, Guishui

    2015-04-01

    Rubber (Hevea brasiliensis) plantations are among the most important economic forests in tropical areas. Retrieving leaf area index (LAI) and its dynamics by remote sensing is of great significance for ecological study and production management, such as yield prediction and post-hurricane damage evaluation. Thirteen HJ-1A/1B CCD images, which combine the spatial advantage of Landsat TM/ETM+ with the 2-day temporal resolution of MODIS, were used to predict the spatio-temporal LAI of rubber plantations on Hainan Island with a Nonlinear AutoRegressive network with eXogenous inputs (NARX) model. Monthly LAIs measured at 30 stands by LAI-2000 between 2012 and 2013 were used to explore the LAI dynamics and their relationship with spectral bands and seven vegetation indices, and to develop and validate the model. The NARX model, built on the input variables day of year (DOY), four spectral bands, and the weighted difference vegetation index (WDVI), achieved good accuracies during model building for the training (N = 202, R2 = 0.98, RMSE = 0.13), validation (N = 43, R2 = 0.93, RMSE = 0.24), and testing (N = 43, R2 = 0.87, RMSE = 0.31) data sets, respectively. The model performed well during field validation (N = 24, R2 = 0.88, RMSE = 0.24), and most of its mapping results showed better agreement (R2 = 0.54-0.58, RMSE = 0.47-0.71) with the field data than the results of corresponding stepwise regression models (R2 = 0.43-0.51, RMSE = 0.52-0.82). Besides, the LAI statistics from the spatio-temporal LAI maps and their dynamics, which increased dramatically from late March (2.36 ± 0.59) to early May (3.22 ± 0.64) and then slowed down gradually until reaching the maximum value in early October (4.21 ± 0.87), were quite consistent with the statistics of the field data. The study demonstrates the feasibility and reliability of retrieving the spatio-temporal LAI of rubber plantations by an artificial neural network (ANN) approach, and provides some insight on the

  4. Türkiye’de Enflasyonun İleri ve Geri Beslemeli Yapay Sinir Ağlarının Melez Yaklaşımı ile Öngörüsü = Forecasting of Turkey Inflation with Hybrid of Feed Forward and Recurrent Artifical Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Rezan USLU

    2010-01-01

    Full Text Available Obtaining accurate inflation forecasts is an important problem, since better forecasts lead to more accurate decisions. Various time series techniques have been used in the literature for inflation prediction. Recently, artificial neural networks (ANN) have been preferred for time series prediction problems due to their flexible modeling capacity. An artificial neural network can be applied easily to any time series since it does not require prior conditions such as a specific linear or nonlinear model form, stationarity, or a normal distribution. In this study, predictions for the Consumer Price Index (CPI) were obtained using feedforward and recurrent artificial neural networks. A new combined forecast based on ANN is proposed, in which the predictions of the individual ANN models are used as input data.

  5. 基于RNN汉语语言模型自适应算法研究%Research on a Self-Adaption Algorithm of Recurrent Neural Network Based Chinese Language Model

    Institute of Scientific and Technical Information of China (English)

    王龙; 杨俊安; 刘辉; 陈雷; 林伟

    2016-01-01

    Deep learning is used ever more widely in natural language processing. Compared with the conventional n-gram statistical language model, recurrent neural network (RNN) modeling shows great superiority for language modeling and is gradually being applied in speech recognition, machine translation, and other fields. However, most RNN language models are currently trained off-line; for different speech recognition tasks, linguistic differences between the training corpus and the recognition task degrade the recognition rate of the speech recognition system. While adopting RNN modeling to train a Chinese language model, this paper proposes an online RNN model self-adaptation algorithm: the preliminary recognition results of the speech signal are treated as additional training corpus to retrain the model, so that the adapted RNN model matches the recognition task as closely as possible. Experimental results show that the adapted model effectively reduces the linguistic differences between the language model and the recognition task; after rescoring Chinese word confusion networks, the system recognition rate is further improved, which was verified in a practical Chinese speech recognition system.

  6. A multimodal interactive control method based on recurrent neural network with parametric bias model for companion robots%基于PB递归神经网络的陪护机器人多模式交互控制方法

    Institute of Scientific and Technical Information of China (English)

    徐敏; 陶永; 魏洪兴

    2011-01-01

    A multimodal interactive control method based on the recurrent neural network with parametric bias (RNNPB) model is proposed for companion robots. First, a multimodal interaction framework composed of a multimodal interaction agent, an interaction recognition agent, and an interaction decision agent is proposed. A PB-based learning algorithm is then applied to the interactive process of companion robots to form the multimodal interaction control method based on the RNNPB model. This method handles interactive state recognition and decision analysis, and generates behavior patterns as the output of the interaction process, realizing complex task planning as well as learning and adaptation during the interaction of companion robots. Experimental results show the effectiveness of the control method.

  7. N-best Rescoring Algorithm Based on Recurrent Neural Network Language Model%基于循环神经网络语言模型的N-best重打分算法

    Institute of Scientific and Technical Information of China (English)

    张剑; 屈丹; 李真

    2016-01-01

    The recurrent neural network language model (RNNLM) is an important language modeling method because it overcomes the data sparseness problem of statistical language models and captures longer-distance constraints. During speech decoding, however, the model is hard to use because it expands the lattice too many times, blowing up the search space. This paper proposes an N-best rescoring algorithm based on an RNNLM: RNNLM probability scores are introduced via the N-best list to rerank the recognition results, and a cache model is introduced to optimize the decoding process and obtain the best recognition result. Experimental results show that the proposed method effectively reduces the word error rate of the speech recognition system.
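
    The rescoring step itself is simple to sketch: interpolate the first-pass decoder score of each hypothesis with a language-model log-probability and rerank. The toy LM table, sentences, scores, and interpolation weight below are hypothetical stand-ins for a real RNNLM and a tuned weight:

    ```python
    def rescore_nbest(nbest, lm_logprob, lam=0.5):
        """Rerank N-best hypotheses by interpolating the first-pass decoder score
        with a language-model log-probability (lm_logprob is any callable)."""
        scored = [(lam * s + (1.0 - lam) * lm_logprob(hyp), hyp) for hyp, s in nbest]
        scored.sort(reverse=True)
        return [hyp for _, hyp in scored]

    # A tiny dict stands in for the RNNLM score function (all values made up).
    toy_lm = {"the cat sat": -2.0, "a cat sad": -8.0, "the cat sad": -6.0}
    nbest = [("a cat sad", -1.0), ("the cat sat", -1.5), ("the cat sad", -1.2)]
    best = rescore_nbest(nbest, toy_lm.get)[0]
    ```

    The first-pass winner ("a cat sad") loses to the hypothesis the LM prefers once the scores are combined; a cache model, as in the record, would additionally adapt lm_logprob to recent history.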

  8. Recurrent correlation associative memories.

    Science.gov (United States)

    Chiueh, T D; Goodman, R M

    1991-01-01

    A model for a class of high-capacity associative memories is presented. Since they are based on two-layer recurrent neural networks and their operations depend on the correlation measure, these associative memories are called recurrent correlation associative memories (RCAMs). The RCAMs are shown to be asymptotically stable in both synchronous and asynchronous (sequential) update modes as long as their weighting functions are continuous and monotone nondecreasing. In particular, a high-capacity RCAM named the exponential correlation associative memory (ECAM) is proposed. The asymptotic storage capacity of the ECAM scales exponentially with the length of the memory patterns, and it meets the ultimate upper bound for the capacity of associative memories. The asymptotic storage capacity of an ECAM with limited dynamic range in its exponentiation nodes is found to be proportional to that dynamic range. Design and fabrication of a 3-mm CMOS ECAM chip is reported. The prototype chip can store 32 24-bit memory patterns, and its speed is higher than one associative recall operation every 3 μs. An application of the ECAM chip to vector quantization is also described.
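
    The ECAM recall rule can be sketched in a few lines: weight every stored bipolar pattern by a raised to its correlation with the probe, then threshold the weighted superposition. The base a = 2, the pattern count, and the noise level are illustrative choices (the chip described above stores 32 patterns of 24 bits):

    ```python
    import numpy as np

    def ecam_recall(probe, patterns, a=2.0):
        """One synchronous ECAM update: weight each stored bipolar pattern by
        a**<u_k, x> and threshold the weighted superposition."""
        corr = patterns @ probe
        w = np.power(a, corr - corr.max())   # shift exponents for numerical stability
        s = w @ patterns
        return np.where(s >= 0, 1, -1)

    rng = np.random.default_rng(2)
    patterns = np.where(rng.random((8, 24)) < 0.5, -1, 1)  # 8 stored 24-bit patterns
    probe = patterns[3].copy()
    probe[:3] *= -1                                        # corrupt 3 bits
    recalled = ecam_recall(probe, patterns)
    ```

    Because the exponential weighting makes the best-matching pattern dominate the sum, a single update typically restores the corrupted probe to the stored pattern.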

  9. Building a Chaotic Proved Neural Network

    CERN Document Server

    Bahi, Jacques M; Salomon, Michel

    2011-01-01

    Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit chaotic behavior.

  10. Recurrent corticocortical interactions in neural disease

    NARCIS (Netherlands)

    Lamme, V.A.F.

    2003-01-01

    The cerebral cortex consists of many areas, each subserving a more or less distinct function. This view has its roots in the early work of Penfield and today is reflected in the functional magnetic resonance imaging literature describing the regions of the brain that are activated during particular tasks.

  11. Smith Prediction Control for the PAM Manipulator Using a Recurrent Neural Network%气动人工肌肉手臂的神经网络Smith预估控制

    Institute of Scientific and Technical Information of China (English)

    王冬青; 王钰; 佟河亭; 韩平畴

    2012-01-01

    A three-layer recurrent neural network (RNN) is adopted as a nonlinear Smith predictor to model a one-joint pneumatic artificial muscle (PAM) manipulator and to predict the manipulator's output d steps ahead. The difference between the desired output and the feedback variable, which is the d-step-ahead predicted output, is taken as the input of a PID controller, realizing Smith-prediction PID control. At every sampling step, the weights of the RNN are adjusted using the squared difference between the current output of the RNN model and the current actual output of the PAM manipulator as the criterion, so as to handle the uncertainty and time-varying behavior of the PAM manipulator. Through a serial port and two Advantech ADAM modules, a Matlab program controls the physical PAM manipulator. The operating results show that the proposed method clearly outperforms conventional PID control, demonstrating that it is effective and feasible.

  12. Contemporary deep recurrent learning for recognition

    Science.gov (United States)

    Iftekharuddin, K. M.; Alam, M.; Vidyaratne, L.

    2017-05-01

    Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can become large and computationally intensive as task complexity increases. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN) based hierarchical neural network for object detection. CSRNs have been shown to be more effective than generic feed-forward networks at solving complex tasks such as maze traversal and image processing. While deep neural networks (DNNs) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent from neural networks with recurrency. Further, our work introduces deep hierarchy into the SRN for object recognition. The simultaneous recurrency unfolds the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper presents experiments on face, facial expression, and character recognition tasks using the novel deep recurrent model and compares its recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating the proposed deep SRN based recognition framework into a humanoid robotic platform called NAO.

  13. Neural dynamics for mobile robot adaptive control

    OpenAIRE

    Oubbati, Mohamed

    2006-01-01

    In this thesis, we investigate how dynamics in recurrent neural networks can be used to solve specific mobile robot problems. We have designed a motion control approach based on a novel recurrent neural network. The advantage of this approach is that no knowledge about the dynamic model is required, and no synaptic weight changes are needed in the presence of time-varying parameters. Furthermore, this approach allows a single fixed-weight network to act as a dynamic controller for several d...

  14. Training recurrent networks by Evolino.

    Science.gov (United States)

    Schmidhuber, Jürgen; Wierstra, Daan; Gagliolo, Matteo; Gomez, Faustino

    2007-03-01

    In recent years, gradient-based LSTM recurrent neural networks (RNNs) solved many previously RNN-unlearnable tasks. Sometimes, however, gradient information is of little use for training RNNs, due to numerous local minima. For such cases, we present a novel method: EVOlution of systems with LINear Outputs (Evolino). Evolino evolves weights to the nonlinear, hidden nodes of RNNs while computing optimal linear mappings from hidden state to output, using methods such as pseudo-inverse-based linear regression. If we instead use quadratic programming to maximize the margin, we obtain the first evolutionary recurrent support vector machines. We show that Evolino-based LSTM can solve tasks that Echo State nets (Jaeger, 2004a) cannot and achieves higher accuracy in certain continuous function generation tasks than conventional gradient descent RNNs, including gradient-based LSTM.
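
    The core of Evolino, where evolution searches only the recurrent weights while the linear output mapping is computed in closed form by least squares (the pseudo-inverse), can be sketched for a single, non-evolved candidate network. The reservoir size, spectral scaling, and phase-shift task below are illustrative assumptions, not the paper's setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, T, washout = 30, 200, 20
    Wrec = rng.normal(0.0, 1.0, (n, n))
    Wrec *= 0.9 / np.abs(np.linalg.eigvals(Wrec)).max()  # keep the dynamics stable
    Win = rng.normal(0.0, 0.5, n)

    u = np.sin(0.2 * np.arange(T))               # input signal
    target = np.sin(0.2 * np.arange(T) + 0.5)    # task: phase-shifted copy of the input

    # Roll the recurrent network forward and collect its hidden states.
    h = np.zeros(n)
    H = np.empty((T, n))
    for t in range(T):
        h = np.tanh(Wrec @ h + Win * u[t])
        H[t] = h

    # Closed-form linear readout via least squares (the pseudo-inverse step).
    # Evolino would evolve Wrec/Win and keep only this readout computation.
    w_out, *_ = np.linalg.lstsq(H[washout:], target[washout:], rcond=None)
    mse = np.mean((H[washout:] @ w_out - target[washout:]) ** 2)
    ```

    In the full method, the fitness of each evolved candidate is the residual of exactly this least-squares fit; gradient descent never touches the recurrent weights.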

  15. Use of recurrent neural networks for determination of 7-epiclusianone acidity constants in ethanol-water mixtures; Uso de redes neurais recorrentes na determinacao das constantes de acidez para a 7-epiclusianona em misturas etanol-agua

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Ederson D' Martin; Lemes, Nelson Henrique Teixeira, E-mail: nelson.lemes@unifal-mg.edu.br [Instituto de Ciencias Exatas, Universidade Federal de Alfenas, Alfenas, MG (Brazil); Santos, Marcelo Henrique dos [Instituto de Ciencias Farmaceuticas, Universidade Federal de Alfenas, Alfenas, MG (Brazil); Braga, Joao Pedro [Departamento de Quimica, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil)

    2012-07-01

    This work proposes a recurrent neural network to solve an inverse equilibrium problem. The acidity constants of 7-epiclusianone in ethanol-water binary mixtures were determined from multiwavelength spectrophotometric data. A linear relationship between the acidity constants and the % w/v of ethanol in the solvent mixture was observed. The efficiency of the proposed method is compared with that of the Simplex method, commonly used in nonlinear optimization. The neural network method is simple, numerically stable, and has a broad range of applicability. (author)

  16. Animal models of recurrent or bipolar depression.

    Science.gov (United States)

    Kato, T; Kasahara, T; Kubota-Sakashita, M; Kato, T M; Nakajima, K

    2016-05-03

    Animal models of mental disorders should ideally have construct, face, and predictive validity, but current animal models do not always satisfy these validity criteria. Additionally, animal models of depression rely mainly on stress-induced behavioral changes. These stress-induced models have limited validity, because stress is not a risk factor specific to depression, and the models do not recapitulate the recurrent and spontaneous nature of depressive episodes. Although animal models exhibiting recurrent depressive episodes or bipolar depression have not yet been established, several researchers are trying to generate such animals by modeling clinical risk factors as well as by manipulating a specific neural circuit using emerging techniques.

  17. A dual neural network for convex quadratic programming subject to linear equality and inequality constraints

    Science.gov (United States)

    Zhang, Yunong; Wang, Jun

    2002-06-01

    A recurrent neural network called the dual neural network is proposed in this Letter for solving the strictly convex quadratic programming problems. Compared to other recurrent neural networks, the proposed dual network with fewer neurons can solve quadratic programming problems subject to equality, inequality, and bound constraints. The dual neural network is shown to be globally exponentially convergent to optimal solutions of quadratic programming problems. In addition, compared to neural networks containing high-order nonlinear terms, the dynamic equation of the proposed dual neural network is piecewise linear, and the network architecture is thus much simpler. The global convergence behavior of the dual neural network is demonstrated by an illustrative numerical example.
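
    A minimal sketch of the idea follows, using a projection-type recurrent network (a close relative of the dual network in the Letter, which works with dual variables) for a box-constrained strictly convex QP. The problem data, gain, and Euler step are made-up illustrations; the piecewise-linear character comes from the clipping projection:

    ```python
    import numpy as np

    # min 0.5 x'Qx + c'x  subject to  l <= x <= u, with Q symmetric positive definite.
    # Recurrent dynamics: dx/dt = P(x - a*(Qx + c)) - x, where P clips onto the box;
    # equilibria of these piecewise-linear dynamics are exactly the KKT points.
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    c = np.array([-8.0, -6.0])
    l = np.zeros(2)
    u = np.array([1.0, 5.0])

    P = lambda z: np.clip(z, l, u)   # projection onto the feasible box
    a, dt = 0.1, 0.2                 # gain and Euler step (illustrative values)
    x = np.zeros(2)
    for _ in range(1000):
        x = x + dt * (P(x - a * (Q @ x + c)) - x)
    ```

    For this instance the unconstrained minimizer violates the bound x1 <= 1, so the network settles on the constrained optimum x = (1, 5/3), where the first component sits on the box boundary.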

  18. Fuzzy recurrence plots

    Science.gov (United States)

    Pham, T. D.

    2016-12-01

    Recurrence plots display binary texture of time series from dynamical systems with single dots and line structures. Using fuzzy recurrence plots, recurrences of the phase-space states can be visualized as grayscale texture, which is more informative for pattern analysis. The proposed method replaces the crucial similarity threshold required by symmetrical recurrence plots with the number of cluster centers, where the estimate of the latter parameter is less critical than the estimate of the former.
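
    A sketch of the construction, assuming fuzzy c-means memberships and a max-min composition to produce the grayscale texture; the cluster count, embedding dimension, and test signal are illustrative choices, not prescriptions from the paper:

    ```python
    import numpy as np

    def fuzzy_cmeans(X, K, m=2.0, iters=50, seed=0):
        """Plain fuzzy c-means: returns the N x K membership matrix."""
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), K, replace=False)]
        for _ in range(iters):
            d = np.linalg.norm(X[:, None] - C[None], axis=2) + 1e-12
            U = 1.0 / (d ** (2.0 / (m - 1.0)))
            U /= U.sum(axis=1, keepdims=True)
            C = (U.T ** m @ X) / (U.T ** m).sum(axis=1, keepdims=True)
        return U

    def fuzzy_recurrence_plot(x, dim=2, K=3):
        """Grayscale plot: pairwise fuzzy similarity of embedded states via
        max-min composition of cluster memberships, R[i,j] = max_k min(U[i,k], U[j,k]).
        The number of clusters K replaces the hard similarity threshold."""
        X = np.stack([x[i:len(x) - dim + 1 + i] for i in range(dim)], axis=1)
        U = fuzzy_cmeans(X, K)
        return np.max(np.minimum(U[:, None, :], U[None, :, :]), axis=2)

    x = np.sin(0.3 * np.arange(80))   # toy periodic series
    R = fuzzy_recurrence_plot(x)      # symmetric, values in [0, 1]
    ```

    Instead of the binary dots of a thresholded recurrence plot, R holds graded similarities in [0, 1], which is the grayscale texture the record describes.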

  19. ON THE STABILITY OF THE CELLULAR NEURAL NETWORKS WITH TIME LAGS

    OpenAIRE

    Vladimir RASVAN; Daniela DANCIU

    2004-01-01

    Cellular neural networks (CNNs) are recurrent artificial neural networks. Due to their cyclic connections and to the neurons' nonlinear activation functions, recurrent neural networks are nonlinear dynamic systems, which display stable and unstable fixed points, limit cycles, and chaotic behavior. Since the field of neural networks is still a young one, improving the stability conditions for such systems is an obvious and quasi-permanent task. This paper focuses on CNNs affected by time delays.

  20. Recurrent zosteriform herpes simplex

    Directory of Open Access Journals (Sweden)

    Inamadar Arun

    1992-01-01

    Full Text Available A 25-year-old man had recurrent zosteriform herpes simplex for the past 6 years. The attacks were precipitated by prolonged exposure to sunlight. Pain was mild, and the lesions subsided each time in about 7 days. Clinical features that help differentiate recurrent herpes simplex from recurrent herpes zoster are summarized.

  1. Neural adaptive control for vibration suppression in composite fin-tip of aircraft.

    Science.gov (United States)

    Suresh, S; Kannan, N; Sundararajan, N; Saratchandran, P

    2008-06-01

    In this paper, we present a neural adaptive control scheme for active vibration suppression of a composite aircraft fin tip. The mathematical model of the composite fin tip is derived using the finite element approach and updated experimentally to reflect the natural frequencies and mode shapes accurately. Piezoelectric actuators and sensors are placed at optimal locations such that vibration suppression is maximized. A model-reference direct adaptive neural network control scheme is proposed to force the vibration level within the minimum acceptable limit. In this scheme, a Gaussian neural network with linear filters is used to approximate the inverse dynamics of the system, and the parameters of the neural controller are estimated using a Lyapunov-based update law. To reduce the computational burden, which is critical for real-time applications, the number of hidden neurons is also estimated in the proposed scheme. The global asymptotic stability of the overall system is ensured using the principles of the Lyapunov approach. Simulation studies are carried out using sinusoidal force functions of varying frequency. Experimental results show that the proposed neural adaptive control scheme provides significant vibration suppression in the multiple bending modes of interest, with performance better than that of the H(infinity) control scheme.

  2. Neural network-based distributed attitude coordination control for spacecraft formation flying with input saturation.

    Science.gov (United States)

    Zou, An-Min; Kumar, Krishna Dev

    2012-07-01

    This brief considers the attitude coordination control problem for spacecraft formation flying when only a subset of the group members has access to the common reference attitude. A quaternion-based distributed attitude coordination control scheme is proposed with consideration of the input saturation and with the aid of the sliding-mode observer, separation principle theorem, Chebyshev neural networks, smooth projection algorithm, and robust control technique. Using graph theory and a Lyapunov-based approach, it is shown that the distributed controller can guarantee the attitude of all spacecraft to converge to a common time-varying reference attitude when the reference attitude is available only to a portion of the group of spacecraft. Numerical simulations are presented to demonstrate the performance of the proposed distributed controller.

  3. Cooperative tracking control of nonlinear multiagent systems using self-structuring neural networks.

    Science.gov (United States)

    Chen, Gang; Song, Yong-Duan

    2014-08-01

    This paper considers a cooperative tracking problem for a group of nonlinear multiagent systems under a directed graph that characterizes the interaction between the leader and the followers. All the networked systems can have different dynamics and all the dynamics are unknown. A neural network (NN) with flexible structure is used to approximate the unknown dynamics at each node. Considering that the leader is a neighbor of only a subset of the followers and the followers have only local interactions, we introduce a cooperative dynamic observer at each node to overcome the deficiency of the traditional tracking control strategies. An observer-based cooperative controller design framework is proposed with the aid of graph tools, Lyapunov-based design method, self-structuring NN, and separation principle. It is proved that each agent can follow the active leader only if the communication graph contains a spanning tree. Simulation results on networked robots are provided to show the effectiveness of the proposed control algorithms.
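
    The convergence condition quoted above (the communication graph must contain a spanning tree rooted at the leader) is straightforward to check computationally. The following hypothetical helper, unrelated to the authors' code, verifies leader-to-follower reachability by breadth-first search:

```python
from collections import deque

# A directed communication graph contains a spanning tree rooted at the
# leader iff every follower is reachable from the leader.

def has_spanning_tree(n_agents, edges, leader=0):
    """edges are directed pairs (u, v): information flows from u to v."""
    adj = {i: [] for i in range(n_agents)}
    for u, v in edges:
        adj[u].append(v)
    seen, queue = {leader}, deque([leader])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n_agents

# Leader 0 reaches every follower here, but not once edge (1, 3) is removed.
print(has_spanning_tree(4, [(0, 1), (1, 2), (1, 3)]))   # True
print(has_spanning_tree(4, [(0, 1), (1, 2)]))           # False
```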

  4. Dynamic Object Identification with SOM-based neural networks

    Directory of Open Access Journals (Sweden)

    Aleksey Averkin

    2014-03-01

    Full Text Available In this article a number of neural networks based on self-organizing maps, which can be successfully used for dynamic object identification, are described. Unique SOM-based modular neural networks with vector quantized associative memory and recurrent self-organizing maps as modules are presented. The structured algorithms for learning and operation of such SOM-based neural networks are described in detail, and some experimental results and comparisons with other neural networks are given.

  5. Recurrent Syncope due to Esophageal Squamous Cell Carcinoma

    OpenAIRE

    2011-01-01

    Syncope is caused by a wide variety of disorders. Recurrent syncope as a complication of malignancy is uncommon and may be difficult to diagnose and to treat. Primary neck carcinoma or metastases spreading in parapharyngeal and carotid spaces can involve the internal carotid artery and cause neurally mediated syncope with a clinical presentation like carotid sinus syndrome. We report the case of a 76-year-old man who suffered from recurrent syncope due to invasion of the right carotid sinus b...

  6. Late recurrence of medulloblastoma.

    Science.gov (United States)

    Stevens, Brittney; Razzaqi, Faisal; Yu, Lolie; Craver, Randall

    2008-01-01

    We present a child with a cerebellar medulloblastoma, diagnosed at age three, treated with near total surgical resection, radiotherapy, and chemotherapy, that recurred 13 years after the initial diagnosis. This late recurrence exceeds the typical 10-year survival statistics that are in common use, and exceeds the Collins rule. Continued follow-up of these children is justified to increase the likelihood of detecting these late recurrences early and to learn more about these late recurrences.

  7. Recurrent intracerebral hemorrhage

    Institute of Scientific and Technical Information of China (English)

    Shen jinsong; Lu jianhong

    2000-01-01

    Objective: To study the clinical manifestations and risk factors of recurrent intracerebral hemorrhage (ICH). Methods: We analysed 256 patients admitted to our hospital for intracerebral hemorrhage between 1995 and 1997. Fifteen (5.86%) patients had a recurrent ICH. There were 9 men and 6 women, and the mean age of the patients was 63.5 ± 6.4 years at the first bleeding episode and 67.8 ± 8.5 years at the second. The mean interval between the two bleeding episodes was 44.6 ± 12.5 months. 73.3% of the patients were hypertensive. The site of the first hemorrhage was ganglionic in 8 patients, lobar in six patients and brainstem in one. The recurrent hemorrhage occurred at a different location from the previous ICH. The most common patterns of recurrence were "ganglionic-ganglionic" (7 patients), lobar-ganglionic (3 patients) and lobar-lobar (three patients), and recurrence was always observed in hypertensive patients. The outcome after the recurrent hemorrhage was usually poor. These patients were compared with 24 patients with isolated ICH without recurrence, followed up for an average of 47.5 ± 18.7 months. Only lobar hematoma and a younger age were risk factors for recurrence, whereas sex and previous hypertension were not. The mechanisms of recurrence of ICH were multiple (hypertension, cerebral amyloid angiopathy). Control of blood pressure and good living habits after the first hemorrhage may prevent ICH recurrence.

  8. Optimal Recurrence Grammars

    CERN Document Server

    Graben, Peter beim; Fröhlich, Flavio

    2015-01-01

    We optimally estimate the recurrence structure of a multivariate time series by Markov chains obtained from recurrence grammars. The goodness of fit is assessed with a utility function derived from the stochastic Markov transition matrix. It assumes a local maximum for the distance threshold of the optimal recurrence grammar. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. Finally, we apply our optimization procedure to the segmentation of neurophysiological time series obtained from anesthetized animals. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
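
    A rough sketch of the recurrence-grammar idea (our reading of the abstract, not the authors' implementation): each time index is rewritten to the earliest index it is recurrent with, and the resulting symbolic sequence yields a stochastic Markov transition matrix describing the recurrence structure.

```python
import numpy as np

# Each time index i is rewritten to the symbol of the earliest index j it
# is recurrent with (|x_i - x_j| < eps); the symbol sequence then defines
# a Markov chain over recurrence domains.

def recurrence_symbols(x, eps):
    x = np.asarray(x, dtype=float)
    n = len(x)
    sym = np.arange(n)
    for i in range(n):
        for j in range(i):
            if abs(x[i] - x[j]) < eps:   # recurrence: rewrite i -> j's symbol
                sym[i] = sym[j]
                break
    _, labels = np.unique(sym, return_inverse=True)  # consecutive relabeling
    return labels

def transition_matrix(labels):
    k = labels.max() + 1
    T = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)

# A noisy period-2 signal collapses onto two recurrence domains.
labels = recurrence_symbols([0.0, 1.0, 0.05, 0.95, 0.02, 1.03], eps=0.2)
P = transition_matrix(labels)
print(labels)   # [0 1 0 1 0 1]
print(P)
```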

  9. Recurrent Takotsubo Cardiomyopathy Related to Recurrent Thyrotoxicosis.

    Science.gov (United States)

    Patel, Keval; Griffing, George T; Hauptman, Paul J; Stolker, Joshua M

    2016-04-01

    Takotsubo cardiomyopathy, or transient left ventricular apical ballooning syndrome, is characterized by acute left ventricular dysfunction caused by transient wall-motion abnormalities of the left ventricular apex and mid ventricle in the absence of obstructive coronary artery disease. Recurrent episodes are rare but have been reported, and several cases of takotsubo cardiomyopathy have been described in the presence of hyperthyroidism. We report the case of a 55-year-old woman who had recurrent takotsubo cardiomyopathy, documented by repeat coronary angiography and evaluations of left ventricular function, in the presence of recurrent hyperthyroidism related to Graves disease. After both episodes, the patient's left ventricular function returned to normal when her thyroid function normalized. These findings suggest a possible role of thyroid-hormone excess in the pathophysiology of some patients who have takotsubo cardiomyopathy.

  10. On Application of Improved Recurrent Neural Network in Network Security Situation Monitoring

    Institute of Scientific and Technical Information of China (English)

    李静

    2014-01-01

    With the expansion and diversification of networks, network topology has become more complex and data traffic has risen rapidly, increasing network load and exposing networks to attacks, faults and other sudden, severe security events. Exploiting the strengths of neural networks in handling nonlinear, complex problems, this paper predicts the network security situation with an improved recurrent neural network. Experimental results show that the method runs efficiently and that its outputs, compared with actual values, have low error and high accuracy.

  11. Recurrent Abdominal Pain

    Science.gov (United States)

    Banez, Gerard A.; Gallagher, Heather M.

    2006-01-01

    The purpose of this article is to provide an empirically informed but clinically oriented overview of behavioral treatment of recurrent abdominal pain. The epidemiology and scope of recurrent abdominal pain are presented. Referral process and procedures are discussed, and standardized approaches to assessment are summarized. Treatment protocols…

  12. The eternal recurrence today

    CERN Document Server

    Neves, J C S

    2015-01-01

    In this work we develop a connection between the nonsingular scientific cosmologies (those without the initial singularity, the big bang), especially the cyclic models, and Nietzsche's thought of the eternal recurrence. Moreover, we point out reasons for Nietzsche's search for scientific proof of the eternal recurrence in the 1880s.

  13. Recurrent aphthous stomatitis

    Directory of Open Access Journals (Sweden)

    L Preeti

    2011-01-01

    Full Text Available Recurrent aphthous ulcers are common painful mucosal conditions affecting the oral cavity. Despite their high prevalence, their etiopathogenesis remains unclear. This review article summarizes the clinical presentation, diagnostic criteria, and recent trends in the management of recurrent aphthous stomatitis.

  14. Acute recurrent polyhydramnios

    DEFF Research Database (Denmark)

    Rode, Line; Bundgaard, Anne; Skibsted, Lillian

    2007-01-01

    Acute recurrent polyhydramnios is a rare occurrence characterized by a poor fetal outcome. This is a case report describing a 34-year-old woman presenting with acute recurrent polyhydramnios. Treatment with non-steroidal anti-inflammatory drugs (NSAID) and therapeutic amniocenteses was initiated ...

  15. Recurrent kernel machines: computing with infinite echo state networks.

    Science.gov (United States)

    Hermans, Michiel; Schrauwen, Benjamin

    2012-01-01

    Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
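
    A minimal finite-sized ESN illustrates the setup that the letter extends to the infinite limit: a fixed random reservoir with spectral radius below one, and a linear readout trained by ridge regression. All sizes and constants here are illustrative assumptions, not values from the letter.

```python
import numpy as np

# Minimal echo state network: random fixed reservoir, linear readout
# learned by ridge regression, here used as a one-step-ahead sine predictor.

rng = np.random.default_rng(0)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rough echo-state scaling

def run_reservoir(u):
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(W_in @ np.atleast_1d(ut) + W @ x)
        states.append(x.copy())
    return np.array(states)

t = np.arange(600)
u = np.sin(0.1 * t)
X = run_reservoir(u)[100:-1]          # drop washout, align with targets
y = u[101:]                           # one-step-ahead target
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
mse = np.mean((X @ w_out - y) ** 2)
print(f"one-step prediction MSE: {mse:.2e}")
```

    The untrained recurrent weights supply the broad set of spatiotemporal features on which, as the abstract notes, simple linear regression suffices.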

  16. Predictors of Recurrent AKI.

    Science.gov (United States)

    Siew, Edward D; Parr, Sharidan K; Abdel-Kader, Khaled; Eden, Svetlana K; Peterson, Josh F; Bansal, Nisha; Hung, Adriana M; Fly, James; Speroff, Ted; Ikizler, T Alp; Matheny, Michael E

    2016-04-01

    Recurrent AKI is common among patients after hospitalized AKI and is associated with progressive CKD. In this study, we identified clinical risk factors for recurrent AKI present during index AKI hospitalizations that occurred between 2003 and 2010 using a regional Veterans Administration database in the United States. AKI was defined as a 0.3 mg/dl or 50% increase from a baseline creatinine measure. The primary outcome was hospitalization with recurrent AKI within 12 months of discharge from the index hospitalization. Time to recurrent AKI was examined using Cox regression analysis, and sensitivity analyses were performed using a competing risk approach. Among 11,683 qualifying AKI hospitalizations, 2954 patients (25%) were hospitalized with recurrent AKI within 12 months of discharge. Median time to recurrent AKI within 12 months was 64 (interquartile range 19-167) days. In addition to known demographic and comorbid risk factors for AKI, patients with longer AKI duration and those whose discharge diagnosis at index AKI hospitalization included congestive heart failure (primary diagnosis), decompensated advanced liver disease, cancer with or without chemotherapy, acute coronary syndrome, or volume depletion, were at highest risk for being hospitalized with recurrent AKI. Risk factors identified were similar when a competing risk model for death was applied. In conclusion, several inpatient conditions associated with AKI may increase the risk for recurrent AKI. These findings should facilitate risk stratification, guide appropriate patient referral after AKI, and help generate potential risk reduction strategies. Efforts to identify modifiable factors to prevent recurrent AKI in these patients are warranted. Copyright © 2016 by the American Society of Nephrology.

  17. Recurrent Intracerebral Hemorrhage

    DEFF Research Database (Denmark)

    Schmidt, Linnea Boegeskov; Goertz, Sanne; Wohlfahrt, Jan;

    2016-01-01

    the use of any of the investigated medicines with antithrombotic effect (ATT, SSRI's, NSAID's) and recurrent ICH. CONCLUSIONS: The substantial short-and long-term recurrence risks warrant aggressive management of hypertension following a primary ICH, particularly in patients treated surgically...... treatment and renal insufficiency were associated with increased recurrence risks (RR 1.64, 95% CI 1.39-1.93 and RR 1.72, 95% CI 1.34-2.17, respectively), whereas anti-hypertensive treatment was associated with a reduced risk (RR 0.82, 95% CI 0.74-0.91). We observed non-significant associations between...

  18. RECURRENT CROUP IN CHILDREN

    Directory of Open Access Journals (Sweden)

    S. L. Piskunova

    2014-01-01

    Full Text Available The article presents the results of an examination of 1849 children admitted to the children's infectious hospital of Vladivostok with the clinical picture of croup of viral etiology. The clinical features of primary and recurrent croup are described. The frequency of recurrent croup in Vladivostok is 8%. Children with recurrent croup had a burdened premorbid background as well as persistent herpetic infections (cytomegalovirus infection in 42.9% of cases, or cytomegalovirus infection in combination with herpes simplex virus-1). The frequency of croup rose substantially during influenza epidemics.

  19. Speech Recognition Method Based on Multilayer Chaotic Neural Network

    Institute of Scientific and Technical Information of China (English)

    REN Xiaolin; HU Guangrui

    2001-01-01

    In this paper, speech recognition using neural networks is investigated. In particular, chaotic dynamics is introduced to the neurons, and a multilayer chaotic neural network (MLCNN) architecture is built. A learning algorithm is also derived to train the weights of the network. We apply the MLCNN to speech recognition and compare the performance of the network with those of a recurrent neural network (RNN) and a time-delay neural network (TDNN). Experimental results show that the MLCNN method outperforms the other neural network methods with respect to average recognition rate.

  20. A New Lyapunov Based Robust Control for Uncertain Mechanical Systems

    Institute of Scientific and Technical Information of China (English)

    ZHEN Sheng-Chao; ZHAO Han; CHEN Ye-Hwa; HUANG Kang

    2014-01-01

    We design a new robust controller for uncertain mechanical systems. The inertia matrix's singularity and upper-bound property are first analyzed. It is shown that the inertia matrix may be only positive semi-definite due to an over-simplified model. Furthermore, the assumption that the inertia matrix is uniformly bounded above is also restrictive. A robust controller is proposed to suppress the effect of uncertainty in mechanical systems under the assumptions of uniform positive definiteness and an upper bound on the inertia matrix. We theoretically prove that the robust control renders uniform boundedness and uniform ultimate boundedness. The size of the ultimate boundedness ball can be made arbitrarily small by the designer. Simulation results are presented and discussed.

  1. Lyapunov-Based Control for Switched Power Converters

    Science.gov (United States)

    1990-06-01

    ... up-down converter of Figure 2, which has a state-space averaged model of the form ... Many stabilizing control schemes can be obtained ... it is straightforward to specify a globally stabilizing control law for the model (6) ... by performing the described measurement process, it is possible ...

  2. Lyapunov-based control designs for flexible-link manipulators

    Science.gov (United States)

    Juang, Jer-Nan; Huang, Jen-Kuang; Yang, Li-Farn

    1989-01-01

    A feedback controller for the stabilization of closed-loop systems is proposed which is based on the Liapunov stability criterion. A feedback control law is first generated for the linear portion of the system equation using linear control theory. A feedback control is then designed for the nonlinear portion of the system equation by making negative the time derivative of a positive definite Liapunov function.

  4. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  5. Recurrent Breast Cancer

    Science.gov (United States)

    ... that can help you cope with distress include: art therapy, dance or movement therapy, exercise, meditation, music ... mayoclinic.org/diseases-conditions/recurrent-breast-cancer/basics/definition/CON-20032432

  6. Multifocal recurrent periostitis

    Energy Technology Data Exchange (ETDEWEB)

    Kozlowski, K.; Anderson, R.; Tink, A.

    1981-11-01

    Two case reports of recurrent multifocal periostitis in two girls aged 15 and 16 are added to the eight cases already reported in the literature. The disease is characterised clinically by recurrent mesomelic swelling of the extremities and radiologically by periosteal thickening and sclerosis of underlying bone. Hyperglobulinaemia is the most constant biochemical finding. The bone biopsy shows no typical features. The possibility of a viral etiology is discussed.

  7. Recurrences of strange attractors

    Indian Academy of Sciences (India)

    E J Ngamga; A Nandi; R Ramaswamy; M C Romano; M Thiel; J Kurths

    2008-06-01

    The transitions from or to strange nonchaotic attractors are investigated by recurrence plot-based methods. The techniques used here take into account the recurrence times and the fact that trajectories on strange nonchaotic attractors (SNAs) synchronize. The performance of these techniques is shown for the Heagy-Hammel transition to SNAs and for the fractalization transition to SNAs for which other usual nonlinear analysis tools are not successful.

  8. LSTM Neural Reordering Feature for Statistical Machine Translation

    OpenAIRE

    Cui, Yiming; Wang, Shijin; Li, Jianfeng

    2015-01-01

    Artificial neural networks are powerful models that have been widely applied to many aspects of machine translation, such as language modeling and translation modeling. Though notable improvements have been made in these areas, the reordering problem still remains a challenge in statistical machine translation. In this paper, we present a novel neural reordering model that directly models word pairs and alignment. By utilizing LSTM recurrent neural networks, much longer context could be ...

  9. A C-LSTM Neural Network for Text Classification

    OpenAIRE

    Zhou, Chunting; Sun, Chonglin; Liu, Zhiyuan; Lau, Francis C. M.

    2015-01-01

    Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-...

  10. Tutorial on neural network applications in high energy physics: A 1992 perspective

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B.

    1992-04-01

    Feed forward and recurrent neural networks are introduced and related to standard data analysis tools. Tips are given on applications of neural nets to various areas of high energy physics. A review of applications within high energy physics and a summary of neural net hardware status are given.

  11. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  12. Neural Networks for Rapid Design and Analysis

    Science.gov (United States)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays, as inputs to the networks, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.
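
    The tapped-delay input structure described above can be sketched as follows. For brevity, a linear least-squares readout stands in for the trained feedforward network; the feature construction (delayed inputs plus fed-back past outputs) is the point being illustrated, and the example dynamic component is an assumption:

```python
import numpy as np

# A dynamic component is approximated by a static map over delayed inputs
# and fed-back past outputs (the recurrent/time-delay structure described
# in the abstract). A least-squares fit replaces NN training for brevity.

def make_features(u, y, n_delays=2):
    rows = []
    for t in range(n_delays, len(u)):
        rows.append(np.concatenate([u[t - n_delays:t + 1], y[t - n_delays:t]]))
    return np.array(rows)

# Example "true" dynamic component: first-order lag driven by the input.
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + 0.2 * u[t - 1]

X = make_features(u, y)
target = y[2:]
w, *_ = np.linalg.lstsq(X, target, rcond=None)
mse = np.mean((X @ w - target) ** 2)
print(f"fit MSE: {mse:.2e}")
```

    In a simulation, the fitted surrogate replaces the expensive component; its own past outputs are fed back as the delayed-output features at each step.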

  13. Measure of the electroencephalographic effects of sevoflurane using recurrence dynamics.

    Science.gov (United States)

    Li, Xiaoli; Sleigh, Jamie W; Voss, Logan J; Ouyang, Gaoxiang

    2007-08-31

    This paper proposes a novel method to interpret the effect of an anesthetic agent (sevoflurane) on neural activity, using recurrence quantification analysis of EEG data. First, we reduce the artefacts in the scalp EEG using a novel filter that combines wavelet transforms and empirical mode decomposition. Then, the determinism of the recurrence plot is calculated. It is found that the determinism increases gradually with increasing concentration of sevoflurane. Finally, a pharmacokinetic and pharmacodynamic (PKPD) model is built to describe the relationship between the concentration of sevoflurane and the processed EEG measure (the 'determinism' of the recurrence plot). A test sample of nine patients shows that recurrence in EEG data may track the effect of sevoflurane on the brain.
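
    The 'determinism' (DET) measure from recurrence quantification analysis, used above as the EEG index, is the fraction of recurrence points lying on diagonal lines of at least a minimum length. A small illustrative implementation (not the authors' code, and applied to synthetic signals rather than EEG):

```python
import numpy as np

# DET = (recurrence points on diagonal lines of length >= lmin) / (all
# recurrence points), excluding the line of identity. Deterministic signals
# score near 1; noise scores much lower.

def recurrence_matrix(x, eps):
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def determinism(R, lmin=2):
    n = R.shape[0]
    total = on_lines = 0
    for k in range(-(n - 1), n):        # every diagonal of the matrix
        if k == 0:
            continue                    # exclude the line of identity
        diag = np.diagonal(R, offset=k)
        total += diag.sum()
        run = 0                         # count points in runs >= lmin
        for v in list(diag) + [0]:
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    return on_lines / total if total else 0.0

periodic = np.sin(0.05 * np.arange(400))
noise = np.random.default_rng(2).uniform(-1, 1, 400)
dp = determinism(recurrence_matrix(periodic, 0.3))
dn = determinism(recurrence_matrix(noise, 0.3))
print(f"DET periodic: {dp:.2f}, DET noise: {dn:.2f}")
```

    The gap between the two DET values is the property the paper exploits: deepening anesthesia makes the EEG more regular, so DET rises with sevoflurane concentration.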

  14. Recurrent Fever in Children.

    Science.gov (United States)

    Torreggiani, Sofia; Filocamo, Giovanni; Esposito, Susanna

    2016-03-25

    Children presenting with recurrent fever may represent a diagnostic challenge. After excluding the most common etiologies, which include the consecutive occurrence of independent uncomplicated infections, a wide range of possible causes are considered. This article summarizes infectious and noninfectious causes of recurrent fever in pediatric patients. We highlight that, when investigating recurrent fever, it is important to consider age at onset, family history, duration of febrile episodes, length of interval between episodes, associated symptoms and response to treatment. Additionally, information regarding travel history and exposure to animals is helpful, especially with regard to infections. With the exclusion of repeated independent uncomplicated infections, many infective causes of recurrent fever are relatively rare in Western countries; therefore, clinicians should be attuned to suggestive case history data. It is important to rule out the possibility of an infectious process or a malignancy, in particular, if steroid therapy is being considered. After excluding an infectious or neoplastic etiology, immune-mediated and autoinflammatory diseases should be taken into consideration. Together with case history data, a careful physical exam during and between febrile episodes may give useful clues and guide laboratory investigations. However, despite a thorough evaluation, a recurrent fever may remain unexplained. A watchful follow-up is thus mandatory because new signs and symptoms may appear over time.

  15. Recurrent Fever in Children

    Directory of Open Access Journals (Sweden)

    Sofia Torreggiani

    2016-03-01

    Full Text Available Children presenting with recurrent fever may represent a diagnostic challenge. After excluding the most common etiologies, which include the consecutive occurrence of independent uncomplicated infections, a wide range of possible causes are considered. This article summarizes infectious and noninfectious causes of recurrent fever in pediatric patients. We highlight that, when investigating recurrent fever, it is important to consider age at onset, family history, duration of febrile episodes, length of interval between episodes, associated symptoms and response to treatment. Additionally, information regarding travel history and exposure to animals is helpful, especially with regard to infections. With the exclusion of repeated independent uncomplicated infections, many infective causes of recurrent fever are relatively rare in Western countries; therefore, clinicians should be attuned to suggestive case history data. It is important to rule out the possibility of an infectious process or a malignancy, in particular, if steroid therapy is being considered. After excluding an infectious or neoplastic etiology, immune-mediated and autoinflammatory diseases should be taken into consideration. Together with case history data, a careful physical exam during and between febrile episodes may give useful clues and guide laboratory investigations. However, despite a thorough evaluation, a recurrent fever may remain unexplained. A watchful follow-up is thus mandatory because new signs and symptoms may appear over time.

  16. Recurrent wheezing in children

    Science.gov (United States)

    Piazza, Michele; Piacentini, Giorgio

    2016-01-01

    Recurrent wheezing carries significant morbidity, and it is estimated that about one third of school-age children manifest the symptom during the first 5 years of life. Proper identification of children at risk of developing asthma at school age may predict long-term outcomes and improve treatment and preventive approaches, but the possibility of identifying these children at preschool age remains limited. For many years authors have focused their studies on the early identification of children with recurrent wheezing who are at risk of developing asthma at school age. Different phenotypes have been proposed for a more precise characterization and a personalized plan of treatment. The main criticism concerns the inability to define stable phenotypes, with the risk of overestimating or underestimating the characteristics of symptoms in these children. The aim of this review is to report the recent developments in the diagnosis and treatment of recurrent paediatric wheezing. PMID:26835404

  17. Recurrence of angular cheilitis.

    Science.gov (United States)

    Ohman, S C; Jontell, M; Dahlen, G

    1988-08-01

    The incidence of recurrence of angular cheilitis following successful antimicrobial treatment was studied in 48 patients. Clinical assessments including a microbial examination were carried out 8 months and 5 yr after termination of treatment. Eighty percent of the patients reported recurrence of their angular cheilitis on one or more occasions during the observation period. Patients with cutaneous disorders associated with dry skin or intraoral leukoplakia had an increased incidence of recrudescence. Neither the presence of denture stomatitis nor the type of microorganisms isolated from the original lesions of angular cheilitis, i.e. Candida albicans and/or Staphylococcus aureus, was associated with the number of recurrences. The present observations indicate that treatment of the majority of patients with angular cheilitis should be considered in a longer perspective than previously supposed, owing to the short-lasting therapeutic effects of the antimicrobial therapy.

  18. Recurrent Novae - A Review

    CERN Document Server

    Mukai, Koji

    2014-01-01

    In recent years, recurrent nova eruptions have often been observed intensively over a wide range of wavelengths, from radio to optical to X-rays. Here I present selected highlights from recent multi-wavelength observations. The enigma of T Pyx is at the heart of this paper. While our current understanding of CV and symbiotic star evolution can explain why a certain subset of recurrent novae have high accretion rates, that of T Pyx must be greatly elevated compared to the evolutionary mean. At the same time, we have extensive data with which to estimate how the nova envelope was ejected in T Pyx, and it turns out to be a rather complex tale. One suspects that envelope ejection in recurrent and classical novae in general is more complicated than the textbook descriptions. At the end of the review, I speculate that these two puzzles may be connected.

  19. Neural correlates of consciousness.

    Science.gov (United States)

    Negrao, B L; Viljoen, M

    2009-11-01

    A basic understanding of consciousness and its neural correlates is of major importance for all clinicians, especially those involved with patients with altered states of consciousness. In this paper it is shown that consciousness is dependent on the brainstem and thalamus for arousal; that basic cognition is supported by recurrent electrical activity between the cortex and the thalamus at gamma-band frequencies; and that some kind of working memory must, at least fleetingly, be present for awareness to occur. The problem of cognitive binding and the role of attention are briefly addressed, and it is shown that consciousness depends on a multitude of subconscious processes. Although these processes do not represent consciousness, consciousness cannot exist without them.

  20. Recurrent parotitis in children

    Directory of Open Access Journals (Sweden)

    Bhattarai M

    2006-01-01

    Full Text Available Recurrent parotitis is an uncommon condition in children. Its etiological factors have not been proved to date, although genetic inheritance, local autoimmune manifestation, allergy, viral infection and immunodeficiency have been suggested as causes. The exact management of this disorder is not yet standardized, but a conservative approach is preferred, and all affected children should be screened for Sjögren's syndrome and immune deficiency, including human immunodeficiency virus. We report a 12-year-old girl who presented with 12 episodes of non-painful recurrent swelling of the bilateral parotid glands over the past 3 years.

  1. Tackling a recurrent pinealoblastoma.

    Science.gov (United States)

    Palled, Siddanna; Kalavagunta, Sruthi; Beerappa Gowda, Jaipal; Umesh, Kavita; Aal, Mahalaxmi; Abdul Razack, Tanvir Pasha Chitraduraga; Gowda, Veerabhadre; Viswanath, Lokesh

    2014-01-01

    Pineoblastomas are rare, malignant, pineal region lesions that account for <0.1% of all intracranial tumors and can metastasize along the neuroaxis. Pineoblastomas are more common in children than in adults, and adults account for <10% of patients. The management of pineoblastoma is a multimodality approach: surgery followed by radiation and chemotherapy. In view of its aggressive nature, a few centres use high-dose chemotherapy with autologous stem cell transplant in newly diagnosed cases, but in the recurrent setting the literature is very sparse. The present case describes the management of pineoblastoma in the recurrent setting with reirradiation and adjuvant carmustine chemotherapy, where management guidelines are not definitive.

  2. Real-Time Inverse Optimal Neural Control for Image Based Visual Servoing with Nonholonomic Mobile Robots

    Directory of Open Access Journals (Sweden)

    Carlos López-Franco

    2015-01-01

    Full Text Available We present an inverse optimal neural controller for a nonholonomic mobile robot with parameter uncertainties and unknown external disturbances. The neural controller is based on a discrete-time recurrent high order neural network (RHONN trained with an extended Kalman filter. The reference velocities for the neural controller are obtained with a visual sensor. The effectiveness of the proposed approach is tested by simulations and real-time experiments.

  3. Lung Cancer Indicators Recurrence

    Science.gov (United States)

    This study describes prognostic factors for lung cancer spread and recurrence, as well as subsequent risk of death from the disease. The investigators observed that regardless of cancer stage, grade, or type of lung cancer, patients in the study were more

  4. Chronic recurrent multifocal osteomyelitis

    NARCIS (Netherlands)

    Wedman, Jan; van Weissenbruch, Ranny

    2005-01-01

    We report what is, to our best knowledge, the first case of chronic recurrent multifocal osteomyelitis (CRMO) in which the frontal and sphenoid bones were involved. Characterized by a prolonged and fluctuating course of osteomyelitis at different sites, CRMO is self-limited, although sequelae can occur.

  5. Recurrent Gliosarcoma in Pregnancy

    OpenAIRE

    2014-01-01

    Gliosarcoma is a rare tumor of the central nervous system and it constitutes about 1 to 8% of all malignant gliomas. In this report we are presenting a recurrent gliosarcoma case during a pregnancy in a 30-year-old woman. This is the first report presenting gliosarcoma in the pregnancy.

  6. Recurrent gallstone ileus.

    Science.gov (United States)

    Hayes, Nicolas; Saha, Sanjoy

    2012-11-01

    Mechanical small bowel obstructions caused by gallstones account for 1% to 3% of cases. In these patients, 80% to 90% of residual gallstones will pass through a remaining fistula without consequence. Recurrent gallstone ileus has been reported in 5% of patients. We report the case of a woman, aged 72 years, who presented with mechanical small bowel obstruction caused by gallstone ileus. After successful surgical therapy for gallstone ileus, the patient's symptoms recurred, and she was diagnosed with recurrent gallstone ileus requiring a repeat operation. While management of gallstone ileus can be achieved through a single-stage operation including enterolithotomy and cholecystectomy with repair of the biliary-enteric fistula, or by enterolithotomy alone, the literature supports enterolithotomy alone as the treatment of choice due to decreased mortality and morbidity. However, the latter approach does not obviate potential recurrence. We present this case of recurrent gallstone ileus to elucidate and review the pathogenesis, presentation, diagnosis, and consensus recommendations regarding management of this disorder.

  7. Acute Recurrent Pancreatitis

    Directory of Open Access Journals (Sweden)

    Glen A Lehman

    2003-01-01

    Full Text Available History, physical examination, simple laboratory and radiological tests, and endoscopic retrograde cholangiopancreatography (ERCP) are able to establish the cause of recurrent acute pancreatitis in 70% to 90% of patients. Dysfunction of the biliary and/or pancreatic sphincter, as identified by sphincter of Oddi manometry, accounts for the majority of the remaining cases. The diagnosis may be missed if the pancreatic sphincter is not evaluated. Pancreas divisum is a prevalent congenital abnormality that is usually innocuous but can lead to recurrent attacks of acute pancreatitis or abdominal pain. In select cases, endoscopic sphincterotomy of the minor papilla can provide relief of symptoms and prevent further attacks. A small proportion of patients with idiopathic pancreatitis have tiny stones in the common bile duct (microlithiasis). Crystals can be visualized during microscopic analysis of bile that is aspirated at the time of ERCP. Neoplasia is a rare cause of pancreatitis, and the diagnosis can usually be established by computerized tomography or ERCP. A wide variety of medications can also cause recurrent pancreatitis. ERCP, sphincter of Oddi manometry, and microscopy of aspirated bile should be undertaken in patients with recurrent pancreatitis in whom the diagnosis is not obvious.

  8. Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays.

    Science.gov (United States)

    Rakkiyappan, R; Cao, Jinde; Velmurugan, G

    2015-01-01

    This paper deals with the existence and uniform stability of fractional-order complex-valued neural networks with constant time delays. Complex-valued recurrent neural networks are an extension of real-valued recurrent neural networks in which the states, connection weights, or activation functions are complex-valued. The paper establishes sufficient conditions for the existence and uniform stability of such networks. Three numerical simulations are delineated to substantiate the effectiveness of the theoretical results.
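
    The complex-valued state update described above can be sketched in a few lines. This is an illustration only, not the paper's fractional-order delayed model: the network size, the weights, and the "split" tanh activation are all assumed for demonstration.

    ```python
    import math

    # Illustrative sketch: one synchronous update of a small complex-valued
    # recurrent network. The split activation (tanh applied separately to the
    # real and imaginary parts) and all weights are assumed values.

    def split_tanh(z: complex) -> complex:
        """Apply tanh to the real and imaginary parts separately."""
        return complex(math.tanh(z.real), math.tanh(z.imag))

    def step(states, weights, biases):
        """One update: z_i <- f(sum_j W_ij * z_j + b_i)."""
        n = len(states)
        return [split_tanh(biases[i] + sum(weights[i][j] * states[j] for j in range(n)))
                for i in range(n)]

    states = [1.0 + 1.0j, 0.5 - 0.2j]
    weights = [[0.1 + 0.0j, -0.3 + 0.1j],
               [0.2 - 0.1j, 0.05 + 0.0j]]
    biases = [0j, 0j]
    states = step(states, weights, biases)
    print(states)
    ```

    A split activation keeps both components bounded in (-1, 1); this is one common way to sidestep Liouville's theorem, since a bounded activation that is complex-differentiable everywhere would have to be constant.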

  9. Modeling Broadband Microwave Structures by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Otevrel

    2004-06-01

    Full Text Available The paper describes the exploitation of feed-forward neural networksand recurrent neural networks for replacing full-wave numerical modelsof microwave structures in complex microwave design tools. Building aneural model, attention is turned to the modeling accuracy and to theefficiency of building a model. Dealing with the accuracy, we describea method of increasing it by successive completing a training set.Neural models are mutually compared in order to highlight theiradvantages and disadvantages. As a reference model for comparisons,approximations based on standard cubic splines are used. Neural modelsare used to replace both the time-domain numeric models and thefrequency-domain ones.

  10. Stability analysis of discrete-time BAM neural networks based on standard neural network models

    Institute of Scientific and Technical Information of China (English)

    ZHANG Sen-lin; LIU Mei-qin

    2005-01-01

    To facilitate stability analysis of discrete-time bidirectional associative memory (BAM) neural networks, they were converted into novel neural network models, termed standard neural network models (SNNMs), which interconnect linear dynamic systems and bounded static nonlinear operators. By combining a number of different Lyapunov functionals with S-procedure, some useful criteria of global asymptotic stability and global exponential stability of the equilibrium points of SNNMs were derived. These stability conditions were formulated as linear matrix inequalities (LMIs). So global stability of the discrete-time BAM neural networks could be analyzed by using the stability results of the SNNMs. Compared to the existing stability analysis methods, the proposed approach is easy to implement, less conservative, and is applicable to other recurrent neural networks.
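
    The LMI conditions themselves require a semidefinite-programming solver, but the flavor of such stability arguments can be illustrated with a cruder sufficient condition in the same spirit: if the linear interconnection has induced norm below 1 and the static nonlinearity is 1-Lipschitz, the discrete-time iteration is a contraction, so any two trajectories converge to each other. The matrix and sizes below are assumptions for illustration, not from the paper.

    ```python
    import math

    # Hedged sketch: a crude sufficient condition for global stability of a
    # discrete-time system x <- tanh(W x). If the induced infinity-norm of W
    # is below 1 and tanh is 1-Lipschitz, the map is a contraction.

    W = [[0.4, -0.3],
         [0.2, 0.3]]

    def inf_norm(M):
        """Induced infinity-norm: maximum absolute row sum."""
        return max(sum(abs(v) for v in row) for row in M)

    def step(x):
        return [math.tanh(sum(W[i][j] * x[j] for j in range(2))) for i in range(2)]

    assert inf_norm(W) < 1  # the sufficient condition holds for this W

    x, y = [5.0, -3.0], [-2.0, 4.0]   # two different initial states
    for _ in range(50):
        x, y = step(x), step(y)
    gap = max(abs(a - b) for a, b in zip(x, y))
    print(gap)  # the two trajectories have (nearly) merged
    ```

    The LMI approach of the paper is far less conservative than this norm bound, which is exactly its selling point; the sketch only conveys why "linear dynamics + bounded static nonlinearity" decompositions make stability analysis tractable.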

  11. Coping with Fear of Recurrence

    Science.gov (United States)


  12. LMI-based approach for global asymptotic stability analysis of continuous BAM neural networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Sen-lin; LIU Mei-qin

    2005-01-01

    Studies on the stability of the equilibrium points of continuous bidirectional associative memory (BAM) neural network have yielded many useful results. A novel neural network model called standard neural network model (SNNM) is advanced. By using state affine transformation, the BAM neural networks were converted to SNNMs. Some sufficient conditions for the global asymptotic stability of continuous BAM neural networks were derived from studies on the SNNMs' stability. These conditions were formulated as easily verifiable linear matrix inequalities (LMIs), whose conservativeness is relatively low. The approach proposed extends the known stability results, and can also be applied to other forms of recurrent neural networks (RNNs).

  13. A Dual Neural Network as an Identifier for a Robot Arm

    Directory of Open Access Journals (Sweden)

    Sergio Alvarez Rodríguez

    2015-04-01

    Full Text Available A novel dual recurrent neural network is presented and used to identify the dynamics of a robot arm with three degrees of freedom (DoF), trained with a filtered-error algorithm. The dual neural network has a structure of two recurrent neural networks working simultaneously, competing with each other to obtain the best identification values; the criterion for selecting the best values is the standard deviation of the identification error. The neural identifier provides important information to a nonlinear block control transformation acting as a control law to solve the trajectory tracking problem for the robotic plant.

  14. The study of fuzzy chaotic neural network based on chaotic method

    Institute of Scientific and Technical Information of China (English)

    WANG Ke-jun; TANG Mo; ZHANG Yan

    2006-01-01

    This paper proposes a type of Fuzzy Chaotic Neural Network (FCNN). First, the model of a recurrent fuzzy neural network (RFNN) is considered, which adds feedback in the second layer to realize a dynamic map. Then the logistic map is introduced into the recurrent fuzzy neural network, so as to build a Fuzzy Chaotic Neural Network (FCNN). Its chaotic character is analyzed, and the training algorithm and associative memory ability are then studied. A chaotic system is then approximated using the FCNN; the simulation results indicate that the FCNN approximates the dynamic system well, and owing to the introduction of the chaotic map, the chaotic recall capacity of the FCNN is increased.
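
    The logistic map injected into the RFNN is the standard textbook source of chaos. A minimal, self-contained illustration of its sensitive dependence on initial conditions (the parameter r = 4, in the chaotic regime, and the starting points are assumed values):

    ```python
    # The logistic map x_{t+1} = r * x_t * (1 - x_t). At r = 4 two nearby
    # initial conditions diverge exponentially: a perturbation of 1e-9
    # grows to macroscopic size within a few dozen iterations.

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.2, 0.2 + 1e-9
    max_gap = 0.0
    for _ in range(40):
        a, b = logistic(a), logistic(b)
        max_gap = max(max_gap, abs(a - b))
    print(max_gap)
    ```

    This exponential divergence (positive Lyapunov exponent) is the property the FCNN inherits from the map; the trajectory itself stays bounded in [0, 1].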

  15. TIME SERIES FORECASTING USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2013-05-01

    Full Text Available Recent studies have shown the classification and prediction power of neural networks. It has been demonstrated that an NN can approximate any continuous function. Neural networks have been successfully used for forecasting financial data series. Classical methods used for time series prediction, like Box-Jenkins or ARIMA, assume that there is a linear relationship between inputs and outputs. Neural networks have the advantage that they can approximate nonlinear functions. In this paper we compared the performance of different feed-forward and recurrent neural networks and training algorithms for predicting the EUR/RON and USD/RON exchange rates. We used data series with daily exchange rates from 2005 until 2013.
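
    As a hedged sketch of the simplest forecaster in such a comparison, the following trains a single linear neuron by stochastic gradient descent to predict the next value of a series from three lagged values. The sine data, window length, and learning rate are assumptions for illustration; the paper itself uses daily EUR/RON and USD/RON rates.

    ```python
    import math, random

    # Hedged sketch with synthetic data (a sine wave), not exchange rates:
    # a single linear "neuron" trained by per-sample gradient descent to
    # predict the next value from a window of 3 lags.

    random.seed(0)
    series = [math.sin(0.3 * t) for t in range(200)]

    w = [random.uniform(-0.1, 0.1) for _ in range(3)]  # lag weights (assumed init)
    b = 0.0
    lr = 0.05                                          # assumed learning rate

    def predict(x):
        return b + sum(wi * xi for wi, xi in zip(w, x))

    for _ in range(300):                               # training epochs
        for t in range(3, len(series)):
            x = series[t - 3:t]
            err = predict(x) - series[t]
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]

    # mean squared one-step-ahead error after training
    mse = sum((predict(series[t - 3:t]) - series[t]) ** 2
              for t in range(3, len(series))) / (len(series) - 3)
    print(mse)
    ```

    A pure sinusoid is exactly representable by a linear autoregression, so the error drives toward zero; real exchange-rate series are noisy and nonlinear, which is where the multi-layer and recurrent models compared in the paper earn their keep.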

  16. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking neural networks, the latest generation of artificial neural networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and can therefore be massively implemented in field-programmable gate arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
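
    The probabilistic-firing idea can be sketched in software: a neuron spikes on each tick with probability sigma(u), and averaging the spike train recovers the analog activation value. This is an illustrative model with assumed parameters, not the paper's FPGA circuit.

    ```python
    import math, random

    # Sketch of stochastic spiking: on each clock tick the neuron emits a
    # spike (1) with probability sigmoid(u). The empirical firing rate over
    # many ticks approximates the analog activation sigmoid(u).

    random.seed(1)

    def sigmoid(u):
        return 1.0 / (1.0 + math.exp(-u))

    def spike_rate(u, ticks=20000):
        """Fraction of ticks on which the neuron fires."""
        return sum(random.random() < sigmoid(u) for _ in range(ticks)) / ticks

    rate = spike_rate(0.8)
    print(rate, sigmoid(0.8))  # empirical rate vs. target activation
    ```

    In stochastic computing, an AND gate applied to two independent bit-streams of this kind multiplies the encoded probabilities, which is one source of the "special arithmetic capabilities" mentioned above.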

  17. Immunomodulators to treat recurrent miscarriage

    NARCIS (Netherlands)

    Prins, Jelmer R.; Kieffer, Tom E.C.; Scherjon, Sicco A.

    2014-01-01

    Recurrent miscarriage is a reproductive disorder affecting many couples. Although several factors are associated with recurrent miscarriage, in more than 50% of the cases the cause is unknown. Maladaptation of the maternal immune system is associated with recurrent miscarriage and could explain part

  18. Imaging of recurrent prostate cancer

    NARCIS (Netherlands)

    Futterer, J.J.

    2012-01-01

    Approximately 30% of patients who underwent radical prostatectomy or radiation therapy will develop biochemical recurrent disease. Biochemical recurrent disease is defined as an increase in the serum value of prostate-specific antigen (PSA) after reaching the nadir. Prostate recurrence can present

  20. Training Recurrent Networks

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    Training recurrent networks is generally believed to be a difficult task. Excessive training times and lack of convergence to an acceptable solution are frequently reported. In this paper we seek to explain the reason for this from a numerical point of view and show how to avoid problems when training. In particular we investigate ill-conditioning, the need for and effect of regularization, and illustrate the superiority of second-order methods for training.

  1. Spatial recurrence plots.

    Science.gov (United States)

    Vasconcelos, D B; Lopes, S R; Viana, R L; Kurths, J

    2006-05-01

    We propose an extension of the recurrence plot concept to perform quantitative analyses of the roughness and disorder of spatial patterns at a fixed time. We introduce spatial recurrence plots (SRPs) as a graphical representation of the pointwise correlation matrix, in terms of a two-dimensional spatial return plot. This technique is applied to the study of complex patterns generated by coupled map lattices, which are characterized by measures of complexity based on SRPs. We show that the complexity measures we propose for SRPs provide a systematic way of investigating the distribution of spatially coherent structures, such as synchronization domains, in lattice profiles. This approach has potential for many more applications, e.g., in surface roughness analyses.
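
    For reference, the ordinary (temporal) recurrence plot that SRPs generalize is easy to compute: mark entry (i, j) whenever states i and j lie within a tolerance eps of each other. The series and the tolerance below are assumptions for illustration.

    ```python
    import math

    # Sketch of an ordinary recurrence plot for a scalar series:
    # R[i][j] = 1 when |x_i - x_j| < eps, else 0. For a periodic signal
    # the plot shows diagonal stripes at the period.

    series = [math.sin(0.5 * t) for t in range(100)]
    eps = 0.1
    n = len(series)
    R = [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
         for i in range(n)]

    # recurrence rate: fraction of pairs marked as recurrent
    rr = sum(map(sum, R)) / (n * n)
    print(rr)
    ```

    The spatial version of the paper replaces the time index with a spatial one and builds the matrix from pointwise correlations of a pattern at a fixed time; the quantification machinery (densities of points, diagonal and vertical structures) carries over.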

  2. Modularity promotes epidemic recurrence

    CERN Document Server

    Jesan, T; Sinha, Sitabhra

    2016-01-01

    The long-term evolution of epidemic processes depends crucially on the structure of contact networks. As empirical evidence indicates that human populations exhibit strong community organization, we investigate here how such mesoscopic configurations affect the likelihood of epidemic recurrence. Through numerical simulations on real social networks and theoretical arguments using spectral methods, we demonstrate that highly contagious diseases that would have otherwise died out rapidly can persist indefinitely for an optimal range of modularity in contact networks.

  4. Recurrent Syncope due to Esophageal Squamous Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    A. Casini

    2011-09-01

    Full Text Available Syncope is caused by a wide variety of disorders. Recurrent syncope as a complication of malignancy is uncommon and may be difficult to diagnose and to treat. Primary neck carcinoma or metastases spreading in the parapharyngeal and carotid spaces can involve the internal carotid artery and cause neurally mediated syncope with a clinical presentation resembling carotid sinus syndrome. We report the case of a 76-year-old man who suffered from recurrent syncope due to invasion of the right carotid sinus by metastases of a carcinoma of the esophagus, successfully treated by radiotherapy. In such cases, surgery, chemotherapy or radiotherapy can be performed. Because syncope may be an early sign of neck or cervical cancer, the diagnostic approach to syncope in patients with a past history of cancer should include the possibility of neck tumor recurrence or metastasis, and an oncologic workup should be considered.

  5. Recurrent neural network modeling of nearshore sandbar behavior

    NARCIS (Netherlands)

    Pape, Leo; Ruessink, B.G.; Wiering, Marco A.; Turner, Ian L.

    2007-01-01

    The temporal evolution of nearshore sandbars (alongshore ridges of sand fringing coasts in water depths less than 10 m and of paramount importance for coastal safety) is commonly predicted using process-based models. These models are autoregressive and require offshore wave characteristics as input,

  6. Recurrent cortico-cortical Interactions in neural disease

    NARCIS (Netherlands)

    Lamme, V.A.F.; Pletson, J.E.

    2005-01-01

    The cerebral cortex consists of a large number of areas, each subserving a more or less distinct function. This view has its roots in the early work of Penfield, and today is reflected in the body of functional MRI literature describing the regions of the brain that are activated during particular t

  7. Active Control of Complex Systems via Dynamic (Recurrent) Neural Networks

    Science.gov (United States)

    1992-05-30

    (Scanned-report abstract; only fragments survive extraction. The recoverable text mentions controlling for individual variation (genetics, same diet, etc.) and invokes the Kolmogorov-Gabor (KG) multinomial, z = theta_0 + sum_i theta_i x_i + sum_{i,j} theta_ij x_i x_j + sum_{i,j,k} theta_ijk x_i x_j x_k + ..., noting that it has been shown that the KG multinomial can model such systems; the remaining figure and table residue is omitted.)

  9. Normalizing tweets with edit scripts and recurrent neural embeddings

    NARCIS (Netherlands)

    Chrupala, Grzegorz; Toutanova, Kristina; Wu, Hua

    2014-01-01

    Tweets often contain a large proportion of abbreviations, alternative spellings, novel words and other non-canonical language. These features are problematic for standard language analysis tools and it can be desirable to convert them to canonical form. We propose a novel text normalization model ba

  11. Neural circuits as computational dynamical systems.

    Science.gov (United States)

    Sussillo, David

    2014-04-01

    Many recent studies of neurons recorded from cortex reveal complex temporal dynamics. How such dynamics embody the computations that ultimately lead to behavior remains a mystery. Approaching this issue requires developing plausible hypotheses couched in terms of neural dynamics. A tool ideally suited to aid in this question is the recurrent neural network (RNN). RNNs straddle the fields of nonlinear dynamical systems and machine learning and have recently seen great advances in both theory and application. I summarize recent theoretical and technological advances and highlight an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data in the prefrontal cortex.
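
    Treating an RNN as a dynamical system simply means iterating its update map and studying the resulting trajectories. A minimal sketch with an assumed random weight matrix (the gain 1.5 is an assumed value in the regime where such networks typically show rich autonomous activity):

    ```python
    import math, random

    # Minimal sketch of an RNN as an autonomous dynamical system:
    # iterate h <- tanh(W h) from a random start and record the trajectory.
    # W, the size n, and the gain are illustrative assumptions.

    random.seed(42)
    n = 10
    W = [[random.gauss(0.0, 1.5 / math.sqrt(n)) for _ in range(n)] for _ in range(n)]
    h = [random.uniform(-1, 1) for _ in range(n)]

    trajectory = []
    for _ in range(100):
        h = [math.tanh(sum(W[i][j] * h[j] for j in range(n))) for i in range(n)]
        trajectory.append(list(h))

    print(trajectory[-1])
    ```

    The tanh nonlinearity confines the state to the hypercube [-1, 1]^n, so the interesting questions become those of dynamical systems theory: fixed points, limit cycles, and chaotic attractors within that bounded region, which is the perspective the review advocates.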

  12. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and its use as a data classifier. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  13. Multiscale recurrence quantification analysis of order recurrence plots

    Science.gov (United States)

    Xu, Mengjia; Shang, Pengjian; Lin, Aijing

    2017-03-01

    In this paper, we propose a new method of multiscale recurrence quantification analysis (MSRQA) to analyze the structure of order recurrence plots. The MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), the MSRQA reveals richer and more recognizable information on the local characteristics of diverse systems, successfully describing their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ markedly from those at a single time scale, and some systems present more accurate recurrence patterns at large time scales. This demonstrates that the new approach is effective for distinguishing three similar stock market systems and revealing some of their inherent differences.
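
    A hedged sketch of the multiscale idea: coarse-grain the series by non-overlapping window means, then evaluate a recurrence measure at each scale. The MSRQA of the paper quantifies order patterns; here only a plain recurrence rate is used, and the series and tolerance are assumptions for illustration.

    ```python
    import math

    # Sketch of scale-dependent recurrence analysis: coarse-grain by window
    # means, then compute the recurrence rate of the coarse-grained series.

    def coarse_grain(x, scale):
        """Non-overlapping window means, as in multiscale entropy."""
        return [sum(x[i:i + scale]) / scale
                for i in range(0, len(x) - scale + 1, scale)]

    def recurrence_rate(x, eps=0.1):
        n = len(x)
        hits = sum(1 for i in range(n) for j in range(n) if abs(x[i] - x[j]) < eps)
        return hits / (n * n)

    series = [math.sin(0.11 * t) for t in range(300)]
    for scale in (1, 2, 4, 8):
        print(scale, recurrence_rate(coarse_grain(series, scale)))
    ```

    Running the loop shows the recurrence measure shifting with scale, which is the basic phenomenon the paper exploits to separate systems that look alike at a single time scale.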

  14. Study on Adaptive Control with Neural Network Compensation

    Institute of Scientific and Technical Information of China (English)

    单剑锋; 黄忠华; 崔占忠

    2004-01-01

    A scheme of adaptive control based on a recurrent neural network with neural network compensation is presented for a class of nonlinear systems with a nonlinear prefix. The recurrent neural network is used to identify the unknown nonlinear part and to compensate the difference between the real output and the identified model output. The identified model of the controlled object consists of a linear model and the neural network. The generalized minimum variance control method is used to identify parameters, and the scheme can handle adaptive control of systems with an unknown nonlinear part that cannot be controlled by traditional methods. Simulation results show that this algorithm has higher precision and faster convergence speed.

  15. Multivalued associative memories based on recurrent networks.

    Science.gov (United States)

    Chiueh, T D; Tsai, H K

    1993-01-01

    A multivalued neural associative memory model based on a recurrent network structure is proposed. This model adopts the same principle proposed in the authors' previous work, the exponential correlation associative memories (ECAM). The model also has a very high storage capacity and strong error-correction capability. The major components of the new model include a weighted average process and some similarity-measure computation. As in ECAM, in order to enhance the differences among the weights and make the largest weights more overwhelming, the new model incorporates a nonlinear function in the calculation of weights. Several possible similarity measures suitable for this model are suggested. Simulation results of the performance of the new model with different measures show that, loaded with 500 64-component patterns, the model can sustain noise with power about one fifth to three fifths of the average signal power.
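
    The exponential-correlation recall rule this model builds on can be sketched for bipolar patterns (the paper's patterns are multivalued; bipolar is used here for brevity): each stored pattern votes with weight a^similarity, so after exponentiation the best-matching pattern dominates the weighted average. The sizes, the base a, and the stored patterns are all assumed.

    ```python
    import random

    # Sketch of exponential-correlation recall (the ECAM principle the
    # paper generalizes): weight each stored pattern by a**overlap, then
    # threshold the weighted sum componentwise. Bipolar stand-in for the
    # multivalued case; all parameters are assumptions.

    random.seed(3)
    N = 24
    patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(4)]

    def recall(x, a=2.0):
        # each pattern's vote grows exponentially with its overlap with x
        weights = [a ** sum(xi * pi for xi, pi in zip(x, p)) for p in patterns]
        return [1 if sum(w * p[i] for w, p in zip(weights, patterns)) >= 0 else -1
                for i in range(N)]

    probe = list(patterns[0])
    probe[0], probe[5] = -probe[0], -probe[5]   # corrupt two components
    print(recall(probe) == patterns[0])         # noisy probe recalls pattern 0
    ```

    The exponentiation is exactly the "nonlinear function in the calculation of weights" described in the abstract: it widens the gap between the best match and the rest, which is what gives the model its error-correction capability.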

  16. Recurrent Pregnancy Loss

    Directory of Open Access Journals (Sweden)

    Véronique Piroux

    1997-01-01

    Full Text Available Antiphospholipid antibodies (APA are associated with thrombosis, thrombocytopenia and fetal loss but they occur in a variety of diseases. Despite many efforts, a correlation between the specificity of particular subgroups of APA and particular clinical situations remains to be established. The antigens at the origin of APA remain to be identified. We discuss here the possible links between cell apoptosis or necrosis, leading to plasma membrane alterations, and the occurrence of APA in response to sustained stimulation. The pathogenic potential of APA is also considered with respect to recurrent pregnancy loss.

  17. Recurrent Miller Fisher syndrome.

    Science.gov (United States)

    Madhavan, S; Geetha; Bhargavan, P V

    2004-07-01

    Miller Fisher syndrome (MFS) is a variant of Guillain-Barré syndrome characterized by the triad of ophthalmoplegia, ataxia and areflexia. Recurrences are exceptional with Miller Fisher syndrome. We report a case with two episodes of MFS within two years. Initially the patient presented with partial ophthalmoplegia and ataxia. The second episode was a full-blown presentation with ataxia, areflexia and ophthalmoplegia. CSF analysis was typical during both episodes. The nerve conduction velocity study was within normal limits, as was MRI of the brain. He responded to symptomatic measures initially, then to steroids in the second episode. We report the case due to its rarity.

  18. Recurrence Relations and Determinants

    CERN Document Server

    Janjic, Milan

    2011-01-01

    We examine relationships between two minors of order n of some matrices of n rows and n+r columns. This is done through a class of determinants, here called $n$-determinants, the investigation of which is our objective. We prove that 1-determinants are the upper Hessenberg determinants. In particular, we state several 1-determinants each of which equals a Fibonacci number. We also derive relationships among terms of sequences defined by the same recurrence equation independently of the initial conditions. A result generalizing the formula for the product of two determinants is obtained. Finally, we prove that the Schur functions may be expressed as $n$-determinants.
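
    One classical instance behind the Fibonacci remark: the n-by-n tridiagonal (hence upper Hessenberg) matrix with 1 on the main diagonal and superdiagonal and -1 on the subdiagonal has determinant F(n+1), the (n+1)-th Fibonacci number. The specific matrix is a standard textbook example, not necessarily one of the paper's; a naive cofactor-expansion check:

    ```python
    # Determinant by cofactor expansion along the first row (fine for small n),
    # applied to the tridiagonal Hessenberg matrix whose determinants satisfy
    # d_n = d_{n-1} + d_{n-2}, the Fibonacci recurrence.

    def det(M):
        n = len(M)
        if n == 1:
            return M[0][0]
        total = 0
        for j in range(n):
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * M[0][j] * det(minor)
        return total

    def fib_matrix(n):
        """1 on diagonal and superdiagonal, -1 on subdiagonal, 0 elsewhere."""
        return [[1 if j == i or j == i + 1 else (-1 if j == i - 1 else 0)
                 for j in range(n)] for i in range(n)]

    print([det(fib_matrix(n)) for n in range(1, 9)])
    ```

    Expanding along the last row gives d_n = d_{n-1} - (1)(-1) d_{n-2} = d_{n-1} + d_{n-2} with d_1 = 1 and d_2 = 2, so the values run 1, 2, 3, 5, 8, ... exactly as the Fibonacci sequence shifted by one.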

  19. The universal fuzzy Logical framework of neural circuits and its application in modeling primary visual cortex

    Institute of Scientific and Technical Information of China (English)

    HU Hong; LI Su; WANG YunJiu; QI XiangLin; SHI ZhongZhi

    2008-01-01

    Analytical study of large-scale nonlinear neural circuits is a difficult task. Here we analyze the function of neural systems by probing the fuzzy logical framework of the neural cells' dynamical equations. Although there is a close relation between the theories of fuzzy logical systems and neural systems and many papers investigate this subject, most investigations focus on finding new functions of neural systems by hybridizing fuzzy logical and neural systems. In this paper, the fuzzy logical framework of neural cells is used to understand the nonlinear dynamic attributes of a common neural system by abstracting the fuzzy logical framework of a neural cell. Our analysis enables the educated design of network models for classes of computation. As an example, a recurrent network model of the primary visual cortex has been built and tested using this approach.
