WorldWideScience

Sample records for lyapunov-based recurrent neural

  1. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network that combines a diagonal recurrent neural network with chaos, and we design its structure and learning algorithm. A multilayer feedforward neural network, a diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks. (interdisciplinary physics and related areas of science and technology)
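    For orientation, "diagonal" here means the recurrent weight matrix is restricted to its diagonal, so each hidden unit feeds back only onto itself. A minimal sketch with illustrative weights and sizes (not taken from the paper):

```python
import math

def diagonal_rnn_step(x, h_prev, W_in, w_diag, w_out):
    """One step of a diagonal recurrent network: each hidden unit i
    has a single self-recurrent weight w_diag[i] instead of a full
    hidden-to-hidden matrix."""
    h = [math.tanh(sum(W_in[i][j] * x[j] for j in range(len(x)))
                   + w_diag[i] * h_prev[i])
         for i in range(len(h_prev))]
    y = sum(w_out[i] * h[i] for i in range(len(h)))  # linear readout
    return h, y

# Tiny usage example with illustrative weights.
h, y = diagonal_rnn_step(x=[1.0], h_prev=[0.0, 0.0],
                         W_in=[[0.5], [-0.3]], w_diag=[0.9, 0.2],
                         w_out=[1.0, 1.0])
```

    The diagonal restriction cuts the number of recurrent parameters from quadratic to linear in the hidden-layer size, which is what makes the architecture attractive for training.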

  2. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    This work introduces the Deep Simple Gated Unit (DSGU) and the Simple Gated Unit (SGU), which are structures for learning long-term dependencies. Compared to the traditional Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), both structures require fewer parameters and less computation time in sequence classification tasks. Unlike GRU and LSTM... (Cited references include Gers, Schmidhuber, and Cummins, "Learning to forget: Continual prediction with LSTM," Neural Computation, 12(10):2451–2471, 2000, and Alex Graves, "Generating sequences...")
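    The gating idea these units share is easiest to see in the standard GRU equations; a scalar sketch with illustrative weights (the DSGU/SGU variants themselves differ in details not reproduced here):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One scalar GRU step (standard equations; weights illustrative).
    z gates how much the state is updated; r gates how much of the
    past state enters the candidate."""
    z = sigmoid(Wz * x + Uz * h)               # update gate
    r = sigmoid(Wr * x + Ur * h)               # reset gate
    h_cand = math.tanh(Wh * x + Uh * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_cand

h = 0.0
for x in [1.0, 0.5, -1.0]:
    h = gru_step(x, h, Wz=0.8, Uz=0.1, Wr=0.5, Ur=0.2, Wh=1.0, Uh=0.7)
```

    The convex combination in the last line is what lets the gradient flow over long time spans: when z is near 0, the old state passes through almost unchanged.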

  3. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    ... merchant vessel routing, nearshore construction, etc., more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the Rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  4. Interpretation of Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Pedersen, Morten With; Larsen, Jan

    1997-01-01

    This paper addresses techniques for interpretation and characterization of trained recurrent nets for time series problems. In particular, we focus on assessment of effective memory and suggest an operational definition of memory. Further we discuss the evaluation of learning curves. Various nume...

  5. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on a matrix-valued differential equation, a generalization of dynamic equations proposed earlier for nonsingular matrix inversion, Moore-Penrose inversion, and Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural networks are presented. Illustrative examples present the results of numerical simulations.
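    As background, the simplest member of this family of dynamic equations is the classic gradient neural network dX/dt = -gamma * A^T (A X - I), whose equilibrium solves A^T A X = A^T and hence yields the inverse (or pseudoinverse) of A. A hedged Euler-integrated sketch, not the authors' exact model:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def gradient_inverse(A, gamma_dt=0.01, steps=5000):
    """Euler integration of dX/dt = -gamma * A^T (A X - I), a classic
    gradient neural dynamic; X converges to the inverse of A under
    zero initial state, provided gamma_dt is small enough."""
    n = len(A)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    X = [[0.0] * n for _ in range(n)]  # zero initial state
    At = transpose(A)
    for _ in range(steps):
        E = matmul(A, X)
        R = [[E[i][j] - I[i][j] for j in range(n)] for i in range(n)]
        G = matmul(At, R)
        X = [[X[i][j] - gamma_dt * G[i][j] for j in range(n)]
             for i in range(n)]
    return X

X = gradient_inverse([[2.0, 0.0], [1.0, 4.0]])
```

    The matrix-valued ODE in the paper generalizes this scheme to outer inverses with prescribed range and null space.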

  6. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.
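    The linearization argument can be illustrated on the smallest possible case, a scalar rate model; the example below is a toy stand-in, not the paper's mean field theory:

```python
import math

def fixed_point_stability(w, x0=0.5, steps=2000, dt=0.01):
    """For the scalar rate model dx/dt = -x + tanh(w*x): integrate to
    a fixed point x*, then linearize. Near x* the dynamics reduce to
    d(delta)/dt = lam * delta with lam = -1 + w*(1 - tanh(w*x*)**2);
    lam < 0 means the fixed point is stable."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + math.tanh(w * x))   # Euler integration
    lam = -1.0 + w * (1.0 - math.tanh(w * x) ** 2)
    return x, lam

x_weak, lam_weak = fixed_point_stability(w=0.5)      # weak feedback: x* = 0
x_strong, lam_strong = fixed_point_stability(w=2.0)  # strong feedback: x* > 0
```

    Assessing the sign of the linearized rate is exactly the kind of stability check that, in the paper, predicts training success or failure.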

  7. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  8. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However, their role in large-scale sequence labelling systems has so far been auxiliary. The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...
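    The connectionist temporal classification layer mentioned above rests on a collapsing map from frame-level paths to label sequences (merge repeated symbols, then drop blanks); a minimal sketch:

```python
def ctc_collapse(path, blank="-"):
    """Map a frame-level CTC path to a label sequence: merge runs of
    repeated symbols, then delete blanks. E.g. 'aa-ab-' -> 'aab'."""
    out = []
    prev = None
    for s in path:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return "".join(out)
```

    Because many paths collapse to the same labelling, CTC training sums their probabilities, which is what removes the need for pre-segmented targets.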

  9. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:366–381, 2016 (ACML 2016). Collaborative Recurrent Neural Networks for Dynamic Recommender Systems. Young... at an unprecedented scale. Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating... Keywords: Recurrent Neural Network, Recommender System, Neural Language Model, Collaborative Filtering. 1. Introduction: As ever larger parts of the population...

  10. Analysis of Recurrent Analog Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    1998-06-01

    Full Text Available In this paper, an original rigorous analysis of recurrent analog neural networks, which are built from opamp neurons, is presented. The analysis, which starts from an approximate model of the operational amplifier, reveals causes of possible non-stable states and makes it possible to determine the convergence properties of the network. Results of the analysis are discussed in order to enable the development of original robust and fast analog networks. In the analysis, special attention is paid to examining the influence of real circuit elements and of the statistical parameters of the processed signals on the parameters of the network.

  11. Adaptive Filtering Using Recurrent Neural Networks

    Science.gov (United States)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
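    For contrast with the neural-network filter, the standard scalar Kalman recursion referred to above looks like this (a textbook sketch, not the article's implementation):

```python
def kalman_step(x, P, z, a, q, r):
    """One predict/update cycle of a scalar Kalman filter for the
    linear model x_k = a*x_{k-1} + w (process var q), z_k = x_k + v
    (measurement var r). x is the state estimate, P its variance."""
    # Predict
    x_pred = a * x
    P_pred = a * a * P + q
    # Update
    K = P_pred / (P_pred + r)         # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Track a constant state (a=1, q=0) from noisy measurements.
x, P = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, z, a=1.0, q=0.0, r=0.1)
```

    The linearity and Gaussian-noise assumptions baked into these two update lines are precisely what the recurrent-network approach in the article dispenses with.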

  12. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  13. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  14. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    Deep learning approaches have been widely applied in sequence modeling problems. In automatic speech recognition (ASR), performance has been improved significantly by larger speech corpora and deeper neural networks. In particular, recurrent neural networks and deep convolutional neural networks have been applied successfully in ASR. Given the arising problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  15. Precipitation Nowcast using Deep Recurrent Neural Network

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2016-12-01

    An accurate precipitation nowcast (0-6 hours) with a fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most of the popular approaches used for forecasting precipitation fall into two groups. One type of precipitation forecast relies on numerical modeling of the physical dynamics of the atmosphere, and the other is based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful deep recurrent neural network, the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. Then, a 0-6 hour precipitation nowcast is produced using the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, in which the CTBT nowcast is used as the PERSIANN algorithm's raw input. Two case studies over the continental U.S. have been conducted that demonstrate the improvement of the proposed approach over a classical feed-forward neural network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, handling of vanishing gradients during model learning, and working with sparse data. The studies show that the LSTM model performs better than the other methods and is able to learn the temporal evolution of precipitation events through over 1000 time lags. The uniqueness of the PERSIANN algorithm enables an alternative precipitation nowcast approach as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.
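    The LSTM building block used in the study follows the standard gating equations; a scalar sketch with illustrative weights (not the study's trained model):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def lstm_step(x, h, c, W):
    """One scalar LSTM step. W maps each gate name to a tuple of
    (input weight, recurrent weight, bias); values are illustrative."""
    f = sigmoid(W["f"][0] * x + W["f"][1] * h + W["f"][2])    # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h + W["i"][2])    # input gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h + W["g"][2])  # candidate
    c_new = f * c + i * g          # cell state: gated long-term memory
    h_new = o * math.tanh(c_new)   # hidden state
    return h_new, c_new

W = {k: (0.5, 0.3, 0.0) for k in "fiog"}
h = c = 0.0
for x in [1.0, -0.5, 0.25]:
    h, c = lstm_step(x, h, c, W)
```

    The additive cell-state update `f*c + i*g` is what mitigates the vanishing-gradient problem the abstract refers to, allowing the model to retain information across many time lags.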

  16. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    A hybrid of the two, called the Elman-Jordan (or multi-recurrent) neural network, is also used. In this study, we evaluated the performance of these neural networks on three established benchmark time series prediction problems. Results from the experiments showed that the Jordan neural network performed significantly ...
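    The two architectures compared here differ only in where the context units take their values: an Elman network copies the previous hidden state, a Jordan network feeds back the previous output. A schematic scalar sketch (weights illustrative):

```python
import math

def step(x, context, w_x=0.6, w_c=0.4, w_out=0.5):
    """Shared core: one hidden unit, one output, scalar weights."""
    h = math.tanh(w_x * x + w_c * context)
    y = w_out * h
    return h, y

def run_elman(xs):
    h = y = 0.0
    for x in xs:
        h, y = step(x, context=h)   # Elman: context = previous hidden state
    return y

def run_jordan(xs):
    h = y = 0.0
    for x in xs:
        h, y = step(x, context=y)   # Jordan: context = previous output
    return y
```

    A multi-recurrent (Elman-Jordan) hybrid would feed both signals back as context at once.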

  17. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.

  18. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence and provides the solution to word prediction in a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, training an RNN-LM is an ill-posed problem because of the many parameters arising from a large dictionary and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
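    The regularized objective described above is cross-entropy plus a quadratic weight penalty arising from the Gaussian prior; a sketch (alpha is an illustrative stand-in for the prior hyperparameter):

```python
import math

def softmax(scores):
    m = max(scores)                       # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def map_objective(scores, target, weights, alpha):
    """Regularized cross-entropy: -log p(target) + (alpha/2)*||w||^2.
    A zero-mean Gaussian prior on the weights yields the quadratic
    penalty; minimizing this is the maximum a posteriori criterion."""
    probs = softmax(scores)
    ce = -math.log(probs[target])
    penalty = 0.5 * alpha * sum(w * w for w in weights)
    return ce + penalty

loss = map_objective(scores=[2.0, 0.5, -1.0], target=0,
                     weights=[0.3, -0.2, 0.1], alpha=0.1)
```

    In the paper, alpha itself is not fixed by hand but estimated by maximizing the marginal likelihood.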

  19. Character recognition from trajectory by recurrent spiking neural networks.

    Science.gov (United States)

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks is still a problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed, in which varying time ranges of input streams are used in different recurrent layers. This is able to improve the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from University of Edinburgh. The results show that our method can achieve a higher average recognition accuracy than existing methods.
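    As context, the typical building block of such networks is a leaky integrate-and-fire neuron; a minimal discrete-time sketch (parameters illustrative):

```python
def lif_run(currents, tau=10.0, threshold=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential v leaks
    toward 0 while integrating input current; crossing the threshold
    emits a spike and resets v. Returns a 0/1 spike train."""
    v = 0.0
    spikes = []
    for i_in in currents:
        v += dt / tau * (-v + i_in)   # leaky integration (Euler step)
        if v >= threshold:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

spikes = lif_run([2.0] * 30)   # constant suprathreshold drive: regular spiking
```

    Information is carried by spike timing rather than continuous activations, which is what makes such networks power-efficient on neuromorphic hardware.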

  20. Representation of linguistic form and function in recurrent neural networks

    NARCIS (Netherlands)

    Kadar, Akos; Chrupala, Grzegorz; Alishahi, Afra

    2017-01-01

    We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture

  1. Noise-enhanced categorization in a recurrently reconnected neural network

    International Nuclear Information System (INIS)

    Monterola, Christopher; Zapotocky, Martin

    2005-01-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.
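    The noise-aided categorization described above has the same flavor as threshold-crossing stochastic resonance, which even a bare threshold unit exhibits; a seeded toy sketch (not the paper's perceptron):

```python
import random

def detection_rate(stimulus, noise_sigma, threshold=1.0,
                   trials=2000, seed=0):
    """Fraction of trials in which stimulus + Gaussian noise crosses
    the threshold. With noise_sigma = 0 a subthreshold stimulus is
    never detected; moderate noise makes detection possible."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if stimulus + rng.gauss(0.0, noise_sigma) >= threshold)
    return hits / trials

silent = detection_rate(stimulus=0.7, noise_sigma=0.0)   # subthreshold, no noise
noisy = detection_rate(stimulus=0.7, noise_sigma=0.5)    # noise enables crossings
```

    In the paper, recurrence additionally homogenizes this effect across the network's operating range, which the toy unit above cannot show.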

  2. Noise-enhanced categorization in a recurrently reconnected neural network

    Science.gov (United States)

    Monterola, Christopher; Zapotocky, Martin

    2005-03-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.

  3. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to prediction of discrete time series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical... of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal...

  4. Energy Complexity of Recurrent Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří

    2014-01-01

    Vol. 26, No. 5 (2014), pp. 953-973. ISSN 0899-7667. R&D Projects: GA ČR GAP202/10/1333. Institutional support: RVO:67985807. Keywords: neural network * finite automaton * energy complexity * optimal size. Subject RIV: IN - Informatics, Computer Science. Impact factor: 2.207, year: 2014

  5. Neural Machine Translation with Recurrent Attention Modeling

    OpenAIRE

    Yang, Zichao; Hu, Zhiting; Deng, Yuntian; Dyer, Chris; Smola, Alex

    2016-01-01

    Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relat...

  6. Bach in 2014: Music Composition with Recurrent Neural Network

    OpenAIRE

    Liu, I-Ting; Ramakrishnan, Bhiksha

    2014-01-01

    We propose a framework for computer music composition that uses resilient propagation (RProp) and long short term memory (LSTM) recurrent neural network. In this paper, we show that LSTM network learns the structure and characteristics of music pieces properly by demonstrating its ability to recreate music. We also show that predicting existing music using RProp outperforms Back propagation through time (BPTT).
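    Resilient propagation (RProp), used here alongside the LSTM, adapts a per-weight step size from the sign of successive gradients only, never their magnitude; a one-dimensional sketch on a quadratic (a simplified variant with illustrative hyperparameters):

```python
def rprop_minimize(grad, w=5.0, step=0.1, eta_plus=1.2, eta_minus=0.5,
                   step_max=1.0, step_min=1e-6, iters=100):
    """RProp sketch: grow the step while the gradient sign is stable,
    shrink it when the sign flips, and move against the gradient sign.
    Only the sign of the gradient is used, which makes the method
    robust to badly scaled gradients."""
    prev_g = 0.0
    for _ in range(iters):
        g = grad(w)
        if g * prev_g > 0:
            step = min(step * eta_plus, step_max)    # same sign: accelerate
        elif g * prev_g < 0:
            step = max(step * eta_minus, step_min)   # sign flip: back off
        if g > 0:
            w -= step
        elif g < 0:
            w += step
        prev_g = g
    return w

# Minimize f(w) = (w - 2)^2, gradient 2*(w - 2); minimum at w = 2.
w_opt = rprop_minimize(lambda w: 2.0 * (w - 2.0))
```

    This sign-only update is why the authors can contrast RProp directly against backpropagation through time (BPTT) on the same prediction task.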

  7. Probing the basins of attraction of a recurrent neural network

    NARCIS (Netherlands)

    Heerema, M.; van Leeuwen, W.A.

    2000-01-01

    Analytical expressions for the weights $w_{ij}(b)$ of the connections of a recurrent neural network are found by taking explicitly into account basins of attraction, the size of which is characterized by a basin parameter $b$. It is shown that a network with $b$ ...

  8. Bayesian model ensembling using meta-trained recurrent neural networks

    NARCIS (Netherlands)

    Ambrogioni, L.; Berezutskaya, Y.; Güçlü, U.; Borne, E.W.P. van den; Güçlütürk, Y.; Gerven, M.A.J. van; Maris, E.G.G.

    2017-01-01

    In this paper we demonstrate that a recurrent neural network meta-trained on an ensemble of arbitrary classification tasks can be used as an approximation of the Bayes optimal classifier. This result is obtained by relying on the framework of ε-free approximate Bayesian inference, where the Bayesian

  9. Railway track circuit fault diagnosis using recurrent neural networks

    NARCIS (Netherlands)

    de Bruin, T.D.; Verbert, K.A.J.; Babuska, R.

    2017-01-01

    Timely detection and identification of faults in railway track circuits are crucial for the safety and availability of railway networks. In this paper, the use of the long-short-term memory (LSTM) recurrent neural network is proposed to accomplish these tasks based on the commonly available

  10. A recurrent neural network with ever changing synapses

    NARCIS (Netherlands)

    Heerema, M.; van Leeuwen, W.A.

    2000-01-01

    A recurrent neural network with noisy input is studied analytically, on the basis of a Discrete Time Master Equation. The latter is derived from a biologically realizable learning rule for the weights of the connections. In a numerical study it is found that the fixed points of the dynamics of the

  11. Active Control of Sound based on Diagonal Recurrent Neural Network

    NARCIS (Netherlands)

    Jayawardhana, Bayu; Xie, Lihua; Yuan, Shuqing

    2002-01-01

    Recurrent neural networks are known for their dynamic mapping and are better suited to nonlinear dynamical systems. A nonlinear controller may be needed in cases where the actuators exhibit nonlinear characteristics, or where the structure to be controlled exhibits nonlinear behavior. The

  12. Convolutional over Recurrent Encoder for Neural Machine Translation

    Directory of Open Access Journals (Sweden)

    Dakwale Praveen

    2017-06-01

    Full Text Available Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English-to-German translation demonstrate that our approach achieves significant improvements over a standard RNN-based baseline.
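    The convolutional layers added on top of the recurrent encoder are ordinary 1-D convolutions over the sequence of encoder outputs; a scalar sketch (window size and weights illustrative):

```python
def conv1d(seq, kernel, bias=0.0):
    """Valid-mode 1-D convolution over a sequence of scalars: each
    output position mixes a window of neighboring encoder outputs,
    widening the context each position sees."""
    k = len(kernel)
    return [sum(kernel[j] * seq[i + j] for j in range(k)) + bias
            for i in range(len(seq) - k + 1)]

out = conv1d([1.0, 2.0, 3.0, 4.0], kernel=[0.5, 0.5])  # 2-wide moving average
```

    Stacking such layers grows the receptive field linearly with depth, which is the "wider context" the abstract refers to.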

  13. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
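    The recasting step can be sketched as wrapping a vector field in an explicit-Euler recurrent update x ← x + dt·f(x); here a known vector field stands in for the trained feedforward network:

```python
def recurrent_rollout(f, x0, dt=0.01, steps=1000):
    """Recast a vector field f as a recurrent system: the next state
    is the current state plus dt times f evaluated there (explicit
    Euler). In the paper, f would be the trained feedforward net."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

# Example vector field: dx/dt = -x has the exact solution x0 * e^{-t}.
x_end = recurrent_rollout(lambda x: -x, x0=1.0)   # t = 10 in total
```

    The continuous-time character of both the original system and the recurrent network is preserved because dt can be made arbitrarily small.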

  14. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  15. SORN: a self-organizing recurrent neural network

    Directory of Open Access Journals (Sweden)

    Andreea Lazar

    2009-10-01

    Full Text Available Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.

  16. Relation Classification via Recurrent Neural Network

    OpenAIRE

    Zhang, Dongxu; Wang, Dong

    2015-01-01

    Deep learning has gained much success in sentence-level relation classification. For example, convolutional neural networks (CNN) have delivered competitive performance without much effort on feature engineering as the conventional pattern-based methods. Thus a lot of works have been produced based on CNN structures. However, a key issue that has not been well addressed by the CNN-based method is the lack of capability to learn temporal features, especially long-distance dependency between no...

  17. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It therefore naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce fractional calculus as the mathematical method for implementing FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order-stability and fractional-order-sensitivity characteristics.
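    For reference, the integer-order Hopfield baseline being generalized stores patterns with a Hebbian rule and recalls them by thresholding local fields; a discrete-time sketch (the paper's FHNN replaces these first-order dynamics with fractional-order ones):

```python
def hopfield_weights(patterns):
    """Hebbian storage: W[i][j] = sum over patterns of p[i]*p[j],
    with zero diagonal. Patterns are lists of +1/-1."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def hopfield_recall(W, state, sweeps=5):
    """Synchronous recall: repeatedly threshold the local fields;
    the state settles into a stored attractor."""
    s = list(state)
    for _ in range(sweeps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

pattern = [1, -1, 1, -1, 1, -1]
W = hopfield_weights([pattern])
noisy = [-1, -1, 1, -1, 1, -1]   # one bit flipped
recalled = hopfield_recall(W, noisy)
```

    In the FHNN, the location of such attractors additionally depends on the neuron's fractional order, which is the property the paper exploits.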

  18. Analysis of surface ozone using a recurrent neural network.

    Science.gov (United States)

    Biancofiore, Fabio; Verdecchia, Marco; Di Carlo, Piero; Tomassetti, Barbara; Aruffo, Eleonora; Busilacchio, Marcella; Bianco, Sebastiano; Di Tommaso, Sinibaldo; Colangeli, Carlo

    2015-05-01

    Hourly concentrations of ozone (O₃) and nitrogen dioxide (NO₂) have been measured for 16 years, from 1998 to 2013, in a seaside town in central Italy. The seasonal trends of O₃ and NO₂ recorded in this period have been studied. Furthermore, we used the data collected during one year (2005) to define the characteristics of a multiple linear regression model and a neural network model. Both models are used to model the hourly O₃ concentration under two scenarios: 1) in the first, only meteorological parameters are used as inputs, and 2) in the second, photochemical parameters are added to the inputs of the first scenario. To evaluate the performance of the models, four statistical criteria are used: correlation coefficient, fractional bias, normalized mean squared error and factor of two. All the criteria show that the neural network gives better results than the regression model in all scenarios. Predictions of O₃ have been carried out by many authors using a feed-forward neural architecture. In this paper we show that a recurrent architecture significantly improves the performance of neural predictors. Using only the meteorological parameters as input, the recurrent architecture performs better than the multiple linear regression model that uses both meteorological and photochemical data as input, making the recurrent neural network model a more useful tool in areas where only weather measurements are available. Finally, we used the neural network model to forecast the O₃ hourly concentrations 1, 3, 6, 12, 24 and 48 h ahead. The performance of the model in predicting O₃ levels is discussed. Emphasis is given to the possibility of using the neural network model operationally in areas where only meteorological data are available, in order to predict O₃ at sites where it has not yet been measured. Copyright © 2015 Elsevier B.V. All rights reserved.
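
    The four evaluation criteria named in the abstract have standard definitions in the air-quality literature. A minimal sketch of three of them (the sample observations and model values below are invented for illustration):

```python
def mean(xs):
    return sum(xs) / len(xs)

def fractional_bias(obs, mod):
    # FB = 2 * (mean(obs) - mean(mod)) / (mean(obs) + mean(mod))
    return 2.0 * (mean(obs) - mean(mod)) / (mean(obs) + mean(mod))

def nmse(obs, mod):
    # NMSE = mean((obs - mod)^2) / (mean(obs) * mean(mod))
    return mean([(o - m) ** 2 for o, m in zip(obs, mod)]) / (mean(obs) * mean(mod))

def fac2(obs, mod):
    # FAC2 = fraction of pairs with 0.5 <= mod/obs <= 2
    ok = sum(1 for o, m in zip(obs, mod) if 0.5 <= m / o <= 2.0)
    return ok / len(obs)

obs = [40.0, 55.0, 62.0, 48.0]   # invented hourly O3 observations
mod = [42.0, 50.0, 70.0, 20.0]   # invented model predictions
print(fac2(obs, mod))  # 3 of 4 pairs fall within a factor of two
```

    A perfect model gives FB = 0, NMSE = 0 and FAC2 = 1, which is why these criteria are convenient for ranking the regression and neural models side by side.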

  19. Iterative free-energy optimization for recurrent neural networks (INFERNO)

    Science.gov (United States)

    2017-01-01

    The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generation of long neuronal chains. Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory as a model of the basal ganglia to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle. PMID:28282439
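
    The perturb-and-evaluate search described above, where an input vector is kept when a scalar reinforcement improves and re-sampled otherwise, can be sketched as a simple stochastic hill-climber. The reward function, dimension, and step size below are illustrative assumptions, not the paper's spiking-network setup:

```python
import random

def reinforce_input(reward, dim=5, iters=5000, sigma=0.05, seed=0):
    # Hill-climb an input vector using only a scalar reinforcement signal:
    # keep a Gaussian perturbation when the reward improves, else discard it.
    rng = random.Random(seed)
    x = [0.0] * dim
    best = reward(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        r = reward(cand)
        if r > best:
            x, best = cand, r
    return x

# Invented target: reward is the negative squared distance to it.
target = [0.2, -0.4, 0.1, 0.5, -0.3]
reward = lambda v: -sum((vi - ti) ** 2 for vi, ti in zip(v, target))
x = reinforce_input(reward)
```

    In the paper the "reward" is a dopaminergic evaluation of the network's activity rather than a closed-form function, but the accept/re-sample control flow is the same.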

  20. A recurrent neural network for solving bilevel linear programming problem.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.
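
    The penalty-function idea behind such networks can be illustrated with a gradient flow integrated by Euler steps. This is a generic sketch, not the paper's differential-inclusion model (which treats the nonsmooth terms exactly); the toy problem, penalty weight, and step size are invented:

```python
def penalty_lp_flow(c, A, b, mu=100.0, dt=0.002, steps=5000):
    # Recurrent-network-style gradient flow x' = -grad E(x) for the penalized
    # cost E(x) = c.x + (mu/2) * (|A.x - b|^2 + |min(x, 0)|^2),
    # with a single equality-constraint row A and nonnegativity on x.
    n = len(c)
    x = [1.0 / n] * n
    for _ in range(steps):
        s = sum(A[j] * x[j] for j in range(n)) - b            # equality residual
        g = [c[j] + mu * A[j] * s + mu * min(x[j], 0.0) for j in range(n)]
        x = [x[j] - dt * g[j] for j in range(n)]
    return x

# min 2*x1 + x2  s.t.  x1 + x2 = 1, x >= 0  (optimum near x = (0, 1))
x = penalty_lp_flow([2.0, 1.0], [1.0, 1.0], 1.0)
```

    The equilibrium of the flow sits within O(1/mu) of the true optimum; increasing mu tightens the approximation, which mirrors the "approximately converge under certain conditions" statement in the abstract.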

  1. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models-also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.
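
    For reference, the two forms involved in the embedding can be written as follows (standard notation for quasi-monomial and Lotka-Volterra systems; the symbols and indices are generic, not copied from the paper):

```latex
% Quasi-monomial form: the vector field is a linear combination of
% products of powers of the variables.
\dot{x}_i = x_i \sum_{j=1}^{m} A_{ij} \prod_{k=1}^{n} x_k^{B_{jk}},
\qquad i = 1,\dots,n
% Lotka-Volterra form, after introducing one new variable y_j per monomial:
\dot{y}_j = y_j \Big( \lambda_j + \sum_{l=1}^{m} M_{jl}\, y_l \Big),
\qquad j = 1,\dots,m
```

    Each y_j is a product of powers of the original x_k, so the Lotka-Volterra system has one variable per monomial, matching the abstract's remark that the resulting system is of higher dimension while its first variables reproduce the original network's behavior.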

  2. Global robust stability of delayed recurrent neural networks

    International Nuclear Information System (INIS)

    Cao Jinde; Huang Deshuang; Qu Yuzhong

    2005-01-01

    This paper is concerned with the global robust stability of a class of delayed interval recurrent neural networks which contain time-invariant uncertain parameters whose values are unknown but bounded in given compact sets. A new sufficient condition is presented for the existence, uniqueness, and global robust stability of equilibria for interval neural networks with time delays by constructing a Lyapunov functional and using a matrix-norm inequality. An error in an earlier publication is corrected, and an example is given to show the effectiveness of the obtained results

  3. Nonlinear Lyapunov-based boundary control of distributed heat transfer mechanisms in membrane distillation plant

    KAUST Repository

    Eleiwi, Fadi; Laleg-Kirati, Taous-Meriem

    2015-01-01

    This paper presents a nonlinear Lyapunov-based boundary control for the temperature difference of a membrane distillation boundary layers. The heat transfer mechanisms inside the process are modeled with a 2D advection-diffusion equation. The model

  4. Lyapunov-based control of limit cycle oscillations in uncertain aircraft systems

    Science.gov (United States)

    Bialy, Brendan

    Store-induced limit cycle oscillations (LCO) affect several fighter aircraft and are expected to remain an issue for next-generation fighters. LCO arises from the interaction of aerodynamic and structural forces; however, the primary contributor to the phenomenon is still unclear. The practical concerns regarding this phenomenon include whether or not ordnance can be safely released and the ability of the aircrew to perform mission-related tasks while in an LCO condition. The focus of this dissertation is the development of control strategies to suppress LCO in aircraft systems. The first contribution of this work (Chapter 2) is the development of a controller consisting of a continuous Robust Integral of the Sign of the Error (RISE) feedback term with a neural network (NN) feedforward term to suppress LCO behavior in an uncertain airfoil system. The second contribution of this work (Chapter 3) is the extension of the development in Chapter 2 to include actuator saturation. Suppression of LCO behavior is achieved through the implementation of an auxiliary error system that features hyperbolic functions and a saturated RISE feedback control structure. Due to the lack of clarity regarding the driving mechanism behind LCO, common practice in the literature and in Chapters 2 and 3 is to replicate the symptoms of LCO by including nonlinearities in the wing structure, typically a nonlinear torsional stiffness. To improve the accuracy of the system model, a partial differential equation (PDE) model of a flexible wing is derived (see Appendix F) using Hamilton's principle. Chapters 4 and 5 are focused on developing boundary control strategies for regulating the bending and twisting deformations of the derived model. The contribution of Chapter 4 is the construction of a backstepping-based boundary control strategy for a linear PDE model of an aircraft wing. The backstepping-based strategy transforms the original system into an exponentially stable system. A Lyapunov-based stability

  5. Predicting local field potentials with recurrent neural networks.

    Science.gov (United States)

    Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter

    2016-08-01

    We present a recurrent neural network using LSTM (Long Short-Term Memory) units that is capable of modeling and predicting local field potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs 1, 10, and 100 milliseconds forward in time. Our results show that prediction using LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.

  6. Web server's reliability improvements using recurrent neural networks

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan

    2012-01-01

    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity for the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (t...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...

  7. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  8. Prediction of Bladder Cancer Recurrences Using Artificial Neural Networks

    Science.gov (United States)

    Zulueta Guerrero, Ekaitz; Garay, Naiara Telleria; Lopez-Guede, Jose Manuel; Vilches, Borja Ayerdi; Iragorri, Eider Egilegor; Castaños, David Lecumberri; de La Hoz Rastrollo, Ana Belén; Peña, Carlos Pertusa

    Even though considerable advances have been made in the field of early diagnosis, there is no simple, cheap and non-invasive method that can be applied to the clinical monitoring of bladder cancer patients. Moreover, bladder cancer recurrences, i.e. the reappearance of the tumour after its surgical resection, cannot be predicted in the current clinical setting. In this study, Artificial Neural Networks (ANN) were used to assess how different combinations of classical clinical parameters (stage-grade and age) and two urinary markers (a growth factor and a pro-inflammatory mediator) could predict post-surgical recurrences in bladder cancer patients. Different ANN methods, input parameter combinations and recurrence-related output variables were used, and the resulting positive and negative prediction rates were compared. The MultiLayer Perceptron (MLP) was selected as the most predictive model, and the urinary markers showed the highest sensitivity, correctly predicting 50% of the patients that would recur in a 2-year follow-up period.

  9. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.
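
    The paper's RNN has dynamics tailored to the Drazin inverse; as a simpler point of comparison, the classical gradient-flow network for the ordinary inverse of a nonsingular matrix can be simulated directly with Euler steps. Everything below (the matrix, gain, and step size) is an illustrative assumption, not the paper's Drazin-specific architecture:

```python
def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def inverse_flow(A, gamma=1.0, dt=0.05, steps=2000):
    # Gradient-flow network X' = -gamma * A^T (A X - I): a standard
    # continuous-time scheme whose equilibrium is the ordinary inverse.
    n = len(A)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    X = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        R = [[e - i_ for e, i_ in zip(er, ir)]
             for er, ir in zip(mat_mul(A, X), I)]   # residual A X - I
        G = mat_mul(At, R)
        X = [[X[i][j] - dt * gamma * G[i][j] for j in range(n)]
             for i in range(n)]
    return X

X = inverse_flow([[2.0, 0.0], [0.0, 4.0]])  # converges toward diag(0.5, 0.25)
```

    As in the paper's network, the state matrix has one entry (one "neuron") per element of the output, and the subnetworks (here, columns of X) evolve independently and could run concurrently.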

  10. A Recurrent Neural Network for Nonlinear Fractional Programming

    Directory of Open Access Journals (Sweden)

    Quan-Ju Zhang

    2012-01-01

    This paper presents a novel recurrent continuous-time neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to further demonstrate the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.

  11. Ideomotor feedback control in a recurrent neural network.

    Science.gov (United States)

    Galtier, Mathieu

    2015-06-01

    The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should predict reliably its incoming stimuli; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show a promising performance of the approach, which can be turned into a local and better "biologically plausible" algorithm.

  12. A novel word spotting method based on recurrent neural networks.

    Science.gov (United States)

    Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst

    2012-02-01

    Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.

  13. Convolutional neural networks for prostate cancer recurrence prediction

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Arora, Ashish; Kumar, Abhay; Gupta, Sanchit; Sethi, Amit; Gann, Peter H.

    2017-03-01

    Accurate prediction of the treatment outcome is important for cancer treatment planning. We present an approach to predict prostate cancer (PCa) recurrence after radical prostatectomy using tissue images. We used a cohort whose case vs. control (recurrent vs. non-recurrent) status had been determined using post-treatment follow-up. Further, to aid the development of novel biomarkers of PCa recurrence, cases and controls were paired based on matching of other predictive clinical variables such as Gleason grade, stage, age, and race. For this cohort, a tissue resection microarray with up to four cores per patient was available. The proposed approach is based on deep learning, and its novelty lies in the use of two separate convolutional neural networks (CNNs) - one to detect individual nuclei even in crowded areas, and the other to classify them. To detect nuclear centers in an image, the first CNN predicts the distance transform of the underlying (but unknown) multi-nuclear map from the input H&E image. The second CNN classifies the patches centered at nuclear centers into those belonging to cases or controls. Voting across patches extracted from image(s) of a patient yields the probability of recurrence for the patient. The proposed approach gave 0.81 AUC for a sample of 30 recurrent cases and 30 non-recurrent controls, after being trained on an independent set of 80 case-control pairs. If validated further, such an approach might help in choosing among treatment options such as active surveillance, radical prostatectomy, radiation, and hormone therapy. It can also generalize to the prediction of treatment outcomes in other cancers.

  14. Sensitivity analysis of linear programming problem through a recurrent neural network

    Science.gov (United States)

    Das, Raja

    2017-11-01

    In this paper we study the recurrent neural network for solving linear programming problems. To achieve optimality in accuracy and also in computational effort, an algorithm is presented. We investigate the sensitivity analysis of linear programming problem through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.

  15. Fine-tuning and the stability of recurrent neural networks.

    Directory of Open Access Journals (Sweden)

    David MacNeil

    A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.

  16. Estimating Ads’ Click through Rate with Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Chen Qiao-Hong

    2016-01-01

    With the development of the Internet, online advertising has spread to every corner of the world, and the estimation of the ads' click-through rate (CTR) is an important method to improve online advertising revenue. Compared with linear models, nonlinear models can learn much more complex relationships between a large number of nonlinear features, so as to improve the accuracy of the CTR estimate. The recurrent neural network (RNN) based on Long Short-Term Memory (LSTM) is an improved model of the feedback neural network with a ring structure. The model overcomes the vanishing-gradient problem of the general RNN. Experiments show that the RNN based on LSTM outperforms the linear models and can effectively improve the estimation of the ads' click-through rate.
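
    The LSTM gating that mitigates the vanishing-gradient problem is compact enough to write out. A scalar, untrained cell (all weights are invented for illustration, and the final sigmoid squashing into a click probability is an assumption about how such a model would be read out):

```python
from math import exp, tanh

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    # One step of a scalar LSTM cell. W maps each gate name to (w_x, w_h, b).
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])   # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])   # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])   # output gate
    g = tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])      # candidate
    c = f * c_prev + i * g   # additive cell-state update eases gradient flow
    h = o * tanh(c)          # hidden state fed to the next step
    return h, c

# Illustrative fixed weights: run a short feature sequence, then squash the
# final hidden state into a click probability.
W = {k: (0.5, 0.25, 0.0) for k in ("i", "f", "o", "g")}
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.8]:
    h, c = lstm_step(x, h, c, W)
ctr = sigmoid(h)
```

    The additive update of c (rather than repeated multiplication through a squashing nonlinearity) is what lets gradients survive over long sequences, which is the property the abstract credits for the improvement over plain RNNs.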

  17. Delay-slope-dependent stability results of recurrent neural networks.

    Science.gov (United States)

    Li, Tao; Zheng, Wei Xing; Lin, Chong

    2011-12-01

    By using the fact that the neuron activation functions are sector bounded and nondecreasing, this brief presents a new method, named the delay-slope-dependent method, for stability analysis of a class of recurrent neural networks with time-varying delays. This method includes more information on the slope of neuron activation functions and fewer matrix variables in the constructed Lyapunov-Krasovskii functional. Then some improved delay-dependent stability criteria with less computational burden and conservatism are obtained. Numerical examples are given to illustrate the effectiveness and the benefits of the proposed method.

  18. Very deep recurrent convolutional neural network for object recognition

    Science.gov (United States)

    Brahimi, Sourour; Ben Aoun, Najib; Ben Amar, Chokri

    2017-03-01

    In recent years, computer vision has become a very active field. This field includes methods for processing, analyzing, and understanding images. Among the most challenging problems in computer vision are image classification and object recognition. This paper presents a new approach for the object recognition task. The approach exploits the success of the Very Deep Convolutional Neural Network for object recognition; in fact, it improves the convolutional layers by adding recurrent connections. The proposed approach was evaluated on two object recognition benchmarks: Pascal VOC 2007 and CIFAR-10. The experimental results prove the efficiency of our method in comparison with state-of-the-art methods.

  19. Optimizing Markovian modeling of chaotic systems with recurrent neural networks

    International Nuclear Information System (INIS)

    Cechin, Adelmo L.; Pechmann, Denise R.; Oliveira, Luiz P.L. de

    2008-01-01

    In this paper, we propose a methodology for optimizing the modeling of a one-dimensional chaotic time series with a Markov chain. The model is extracted from a recurrent neural network trained on the attractor reconstructed from the data set. Each state of the obtained Markov chain is a region of the reconstructed state space where the dynamics is approximated by a specific piecewise linear map, obtained from the network. The Markov chain represents the dynamics of the time series in its statistical essence. An application to a time series resulting from the Lorenz system is included

  20. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module is incorporated in the middle takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  1. Classification of conductance traces with recurrent neural networks

    Science.gov (United States)

    Lauritzen, Kasper P.; Magyarkuti, András; Balogh, Zoltán; Halbritter, András; Solomon, Gemma C.

    2018-02-01

    We present a new automated method for structural classification of the traces obtained in break junction experiments. Using recurrent neural networks trained on the traces of minimal cross-sectional area in molecular dynamics simulations, we successfully separate the traces into two classes: point contact or nanowire. This is done without any assumptions about the expected features of each class. The trained neural network is applied to experimental break junction conductance traces, and it separates the classes as well as the previously used experimental methods. The effect of using partial conductance traces is explored, and we show that the method performs equally well using full or partial traces (as long as the trace just prior to breaking is included). When only the initial part of the trace is included, the results are still better than random chance. Finally, we show that the neural network classification method can be used to classify experimental conductance traces without using simulated results for training, but instead training the network on a few representative experimental traces. This offers a tool to recognize some characteristic motifs of the traces, which can be hard to find by simple data selection algorithms.

  2. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam

    2013-10-01

    Artificial neural networks have the ability to learn by example and are capable of solving problems that are hard to solve using ordinary rule-based programming. They have many design parameters that affect their performance, such as the number and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design-of-experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that the best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate the experiment configuration multiple times to overcome the randomness in the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that are on the Pareto optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.
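
    Locating configurations on the Pareto optimal frontier of (label error, training time) reduces to a standard dominance check. A sketch with invented measurements (the error/time pairs below are made up for illustration):

```python
def pareto_frontier(points):
    # A configuration (error, time) is Pareto-optimal if no other distinct
    # configuration is at least as good on both objectives (lower is better).
    # Assumes the points are distinct.
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1]
                        for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# Invented (label error %, training time) measurements for four network sizes.
configs = [(26.2, 10.0), (19.8, 14.0), (21.0, 12.0), (25.0, 16.0)]
front = pareto_frontier(configs)
print(front)  # → [(19.8, 14.0), (21.0, 12.0), (26.2, 10.0)]
```

    Here (25.0, 16.0) is dominated by (21.0, 12.0), which is both more accurate and faster, so it is excluded; the remaining points each trade accuracy against time.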

  3. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.
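
    The Gödelization step, encoding symbol sequences as points of a vector space so that symbolic operations become maps on that space, can be illustrated with the classic one-dimensional shift map (a textbook construction, not the paper's versatile shift):

```python
def godelize(symbols, base):
    # Gödel encoding of a symbol sequence into [0, 1):
    # x = sum_k s_k / base^(k+1), with each symbol s_k in {0, ..., base-1}.
    return sum(s / base ** (k + 1) for k, s in enumerate(symbols))

def shift(x, base):
    # On encodings, the full shift acts as x -> frac(base * x),
    # i.e. it discards the leading symbol of the sequence.
    y = base * x
    return y - int(y)

x = godelize([1, 0, 2], 3)   # encode the ternary sequence 1, 0, 2
y = shift(x, 3)              # should encode the shifted sequence 0, 2
```

    In the paper's construction the encoding is vectorial and the dynamics piecewise-affine, but the principle is the same: a symbolic operation (here, deleting the first symbol) becomes an arithmetic map on the encoded state.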

  4. Region stability analysis and tracking control of memristive recurrent neural network.

    Science.gov (United States)

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural network with HP memristors. Firstly, simulations show that the memristive recurrent neural network has more compound dynamics than the traditional recurrent neural network. Then it is derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. Finally, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.
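
    The HP "linear drift" memristor model that underlies such networks is easy to simulate: the memristance interpolates between an on and an off resistance as an internal state variable drifts with the current. The parameter values and drive signal below are illustrative assumptions, not taken from the paper:

```python
from math import sin, pi

def simulate_hp_memristor(steps=10000, dt=1e-4):
    # HP linear drift model: M(w) = Ron*w + Roff*(1 - w), with the internal
    # state w in [0, 1] driven by the current, dw/dt = k * i(t).
    Ron, Roff, k = 100.0, 16e3, 1e4   # illustrative values (ohms, ohms, 1/C)
    w = 0.1
    ms = []
    for n in range(steps):
        v = sin(2 * pi * n * dt)          # 1 Hz sinusoidal drive voltage
        M = Ron * w + Roff * (1.0 - w)    # current memristance
        i = v / M
        w = min(1.0, max(0.0, w + dt * k * i))  # state drift, clamped to [0,1]
        ms.append(M)
    return ms

ms = simulate_hp_memristor()
```

    Because the resistance at any instant depends on the accumulated charge history, the device behaves like a history-dependent synaptic weight, which is the property the paper exploits to build a recurrent network.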

  5. A novel recurrent neural network with finite-time convergence for linear programming.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
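
A much simpler cousin of the idea can be sketched numerically (illustrative only; the letter's network uses a different, finite-time, Lyapunov-certified dynamics): a projected-gradient dynamical system for the LP "minimize c^T x subject to Ax = b, x >= 0", with a quadratic penalty on the equality constraint, integrated by Euler steps.

```python
import numpy as np

# Toy LP: minimize x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0.  Optimum: x* = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
rho, dt = 100.0, 1e-3        # penalty weight and Euler step (hypothetical values)

x = np.array([0.5, 0.5])
for _ in range(20000):
    grad = c + rho * A.T @ (A @ x - b)     # gradient of penalized objective
    x = np.maximum(x - dt * grad, 0.0)     # Euler step + projection onto x >= 0

# The penalty solution lands near (1 - 1/rho, 0), close to the true optimum.
```

The penalty method only reaches the exact optimum as rho grows; the letter's contribution is precisely that its dynamics reach the exact solution in finite time.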

  6. Lyapunov based control of hybrid energy storage system in electric vehicles

    DEFF Research Database (Denmark)

    El Fadil, H.; Giri, F.; Guerrero, Josep M.

    2012-01-01

    This paper deals with a Lyapunov-based control principle for a hybrid energy storage system in an electric vehicle. The storage system consists of a fuel cell (FC) as the main power source and a supercapacitor (SC) as an auxiliary power source. The power stage of energy conversion consists of a boost...

  7. Lyapunov-Based Control Scheme for Single-Phase Grid-Connected PV Central Inverters

    NARCIS (Netherlands)

    Meza, C.; Biel, D.; Jeltsema, D.; Scherpen, J. M. A.

    A Lyapunov-based control scheme for single-phase single-stage grid-connected photovoltaic central inverters is presented. Besides rendering the closed-loop system globally stable, the designed controller is able to deal with the system uncertainty that depends on the solar irradiance. A laboratory

  8. Recurrent Neural Network Approach Based on the Integral Representation of the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Živković, Ivan S; Wei, Yimin

    2015-10-01

    In this letter, we present the dynamical equation and the corresponding artificial recurrent neural network for computing the Drazin inverse of an arbitrary square real matrix, without any restriction on its eigenvalues. Conditions that ensure the stability of the defined recurrent neural network, as well as its convergence toward the Drazin inverse, are considered. Several illustrative examples present the results of computer simulations.

  9. A recurrent neural network for adaptive beamforming and array correction.

    Science.gov (United States)

    Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen

    2016-08-01

    In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is formulated as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values within the feasible region, which is derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to find exact solutions under large-scale constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Global robust exponential stability analysis for interval recurrent neural networks

    International Nuclear Information System (INIS)

    Xu Shengyuan; Lam, James; Ho, Daniel W.C.; Zou Yun

    2004-01-01

    This Letter investigates the problem of robust global exponential stability analysis for interval recurrent neural networks (RNNs) via the linear matrix inequality (LMI) approach. The values of the time-invariant uncertain parameters are assumed to be bounded within given compact sets. An improved condition for the existence of a unique equilibrium point, and for its global exponential stability, is proposed for RNNs with known parameters. Based on this, a sufficient condition for the global robust exponential stability of interval RNNs is obtained. Both conditions are expressed in terms of LMIs, which can be checked easily by various recently developed convex optimization algorithms. Examples are provided to demonstrate the reduced conservatism of the proposed exponential stability condition.

  11. Cascaded bidirectional recurrent neural networks for protein secondary structure prediction.

    Science.gov (United States)

    Chen, Jinmiao; Chaudhari, Narendra

    2007-01-01

    Protein secondary structure (PSS) prediction is an important topic in bioinformatics. Our study on a large set of non-homologous proteins shows that long-range interactions commonly exist and negatively affect PSS prediction. We also reveal strong correlations between secondary structure (SS) elements. In order to take into account the long-range interactions and SS-SS correlations, we propose a novel prediction system based on a cascaded bidirectional recurrent neural network (BRNN). We compare the cascaded BRNN against two other BRNN architectures, namely the original BRNN architecture used for speech recognition and Pollastri's BRNN, which was proposed for PSS prediction. Our cascaded BRNN achieves an overall three-state accuracy Q3 of 74.38% and reaches a high segment overlap (SOV) score of 66.0455. It outperforms the original BRNN and Pollastri's BRNN in both Q3 and SOV; specifically, it improves the SOV score by 4-6%.

  12. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    Directory of Open Access Journals (Sweden)

    Mehrshad Salmasi

    2012-07-01

    Active noise control is based on destructive interference between the primary noise and the noise generated by a secondary source: an antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks in active cancellation of sound noise is evaluated. To this end, feedforward and recurrent neural networks are designed and trained; after training, the noise-attenuation performance of the feedforward and recurrent networks is compared. We use an Elman network as the recurrent neural network. For the simulations, noise signals from the SPIB database are used. In order to compare the networks appropriately, an equal number of layers and neurons is used for both, and the training and test samples are the same. Simulation results show that both feedforward and recurrent neural networks perform well in noise cancellation, with the recurrent network attenuating noise better than the feedforward one.
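
The destructive-interference principle the abstract describes can be sketched in a few lines (illustrative signal and error values, not the paper's setup): a perfect antiphase copy cancels the noise exactly, while small amplitude and phase errors leave a residual whose attenuation can be measured in dB.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
noise = np.sin(2 * np.pi * 50 * t)                       # primary 50 Hz tone

perfect = noise + (-noise)                               # exact antiphase copy
imperfect = noise + -0.9 * np.sin(2 * np.pi * 50 * t + 0.1)  # 10% amp, 0.1 rad err

def power(x):
    return np.mean(x ** 2)

# Attenuation achieved by the imperfect antinoise, in decibels.
atten_db = 10 * np.log10(power(noise) / power(imperfect))
```

A neural controller's job, in this framing, is to keep the generated antinoise's amplitude and phase errors small enough that the residual power stays low.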

  13. Recurrent Neural Networks for Multivariate Time Series with Missing Values.

    Science.gov (United States)

    Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan

    2018-04-17

    Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
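
The decay mechanism at the heart of GRU-D can be sketched as follows (a minimal reading with hypothetical parameter values, not the authors' trained weights): when a variable is missing, its imputed value decays from the last observation toward the empirical mean, governed by the time elapsed since it was last seen.

```python
import numpy as np

def decay_impute(x_last, x_mean, delta, w, b):
    """GRU-D-style input decay: gamma in (0, 1] shrinks toward 0 as the gap
    delta since the last observation grows."""
    gamma = np.exp(-np.maximum(0.0, w * delta + b))
    return gamma * x_last + (1.0 - gamma) * x_mean

x_last, x_mean = 5.0, 2.0
near = decay_impute(x_last, x_mean, delta=0.0, w=0.5, b=0.0)   # just observed
far = decay_impute(x_last, x_mean, delta=50.0, w=0.5, b=0.0)   # long gap
```

With no gap the imputation stays at the last value (5.0); after a long gap it falls back to the mean (2.0). In GRU-D the decay rates w, b are learned jointly with the recurrent weights, and an analogous decay is applied to the hidden state.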

  14. Deep Recurrent Neural Networks for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Abdulmajid Murad

    2017-11-01

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  15. Recurrent Neural Network Applications for Astronomical Time Series

    Science.gov (United States)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize to irregular time series. In this talk, I will describe two recurrent neural network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations, using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to set the hyperparameters correctly for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of it.
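
A minimal echo state network can be sketched in a few lines (illustrative hyperparameters, not the talk's tuned configuration): a fixed random reservoir scaled to spectral radius below 1, plus a ridge-regression readout trained for one-step-ahead prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 100
Win = rng.uniform(-0.5, 0.5, (n_res, 1))            # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))          # fixed reservoir weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # echo-state heuristic: radius 0.9

u = np.sin(0.2 * np.arange(1000))[:, None]          # toy (regular) series
x = np.zeros(n_res)
states = []
for t in range(len(u) - 1):
    x = np.tanh(Win @ u[t] + W @ x)                 # reservoir update
    states.append(x.copy())

X = np.array(states[100:])                          # discard washout transient
y = u[101:, 0]                                      # one-step-ahead targets
Wout = np.linalg.solve(X.T @ X + 1e-8 * np.eye(n_res), X.T @ y)  # ridge readout
rmse = np.sqrt(np.mean((X @ Wout - y) ** 2))
```

Only `Wout` is trained; the spectral radius, input scaling, and reservoir size are exactly the sensitive hyperparameters the talk proposes to set by Bayesian optimization rather than by hand.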

  16. Drawing and Recognizing Chinese Characters with Recurrent Neural Network.

    Science.gov (United States)

    Zhang, Xu-Yao; Yin, Fei; Zhang, Yan-Ming; Liu, Cheng-Lin; Bengio, Yoshua

    2018-04-01

    Recent deep learning based approaches have achieved great success in handwriting recognition. Chinese characters are among the most widely adopted writing systems in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect of understanding a language; another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework that uses the recurrent neural network (RNN) both as a discriminative model for recognizing Chinese characters and as a generative model for drawing (generating) Chinese characters. To recognize Chinese characters, previous methods usually adopt convolutional neural network (CNN) models, which require transforming the online handwriting trajectory into image-like representations. Instead, our RNN-based approach is an end-to-end system that directly deals with the sequential structure and does not require any domain-specific knowledge. With the RNN system (combining an LSTM and GRU), state-of-the-art performance can be achieved on the ICDAR-2013 competition database. Furthermore, under the RNN framework, a conditional generative model with character embedding is proposed for automatically drawing recognizable Chinese characters. The generated characters (in vector format) are human-readable and can also be recognized by the discriminative RNN model with high accuracy. Experimental results verify the effectiveness of using RNNs as both generative and discriminative models for the tasks of drawing and recognizing Chinese characters.

  17. Recurrent Neural Networks to Correct Satellite Image Classification Maps

    Science.gov (United States)

    Maggiori, Emmanuel; Charpiat, Guillaume; Tarabalka, Yuliya; Alliez, Pierre

    2017-09-01

    While initially devised for image categorization, convolutional neural networks (CNNs) are being increasingly used for the pixelwise semantic labeling of images. However, the very nature of the most common CNN architectures makes them good at recognizing objects but poor at localizing them precisely. This problem is magnified in the context of aerial and satellite image labeling, where spatially precise object outlining is of paramount importance. Different iterative enhancement algorithms have been presented in the literature to progressively improve the coarse CNN outputs, seeking to sharpen object boundaries around real image edges. However, one must carefully design, choose and tune such algorithms. Instead, our goal is to directly learn the iterative process itself. To this end, we formulate a generic iterative enhancement process inspired by partial differential equations, and observe that it can be expressed as a recurrent neural network (RNN). Consequently, we train such a network from manually labeled data for our enhancement task. In a series of experiments we show that our RNN effectively learns an iterative process that significantly improves the quality of satellite image classification maps.

  18. Deep Recurrent Neural Networks for Human Activity Recognition.

    Science.gov (United States)

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  19. Global dissipativity of continuous-time recurrent neural networks with time delay

    International Nuclear Information System (INIS)

    Liao Xiaoxin; Wang Jun

    2003-01-01

    This paper addresses the global dissipativity of a general class of continuous-time recurrent neural networks. First, the concepts of global dissipation and global exponential dissipation are defined and elaborated. Next, the sets of global dissipativity and global exponential dissipativity are characterized using the parameters of recurrent neural network models. In particular, it is shown that the Hopfield network and cellular neural networks, with or without time delays, are dissipative systems.

  20. Recurrent Neural Network Based Boolean Factor Analysis and its Application to Word Clustering

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2009-01-01

    Roč. 20, č. 7 (2009), s. 1073-1086 ISSN 1045-9227 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.889, year: 2009

  1. A recurrent neural network based on projection operator for extended general variational inequalities.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde

    2010-06-01

    Based on the projection operator, a recurrent neural network is proposed for solving extended general variational inequalities (EGVIs). Sufficient conditions are provided to ensure the global convergence of the proposed neural network based on Lyapunov methods. Compared with the existing neural networks for variational inequalities, the proposed neural network is a modified version of the general projection neural network existing in the literature and capable of solving the EGVI problems. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed neural network.
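
The projection-type dynamics behind such networks can be sketched as follows (an illustrative instance, not the paper's extended general formulation): for a variational inequality VI(F, Ω), the continuous dynamics dx/dt = -x + P_Ω(x - αF(x)) are Euler-discretized, and the equilibrium is the VI solution.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Projection onto the box constraint set Omega = [lo, hi]^n."""
    return np.clip(x, lo, hi)

# Illustrative monotone operator F(x) = x - b on Omega = [0, 1]^3.
# For this choice the VI solution is simply the projection of b onto the box.
b = np.array([1.5, -0.3, 0.4])
F = lambda x: x - b

x = np.zeros(3)
alpha, dt = 0.5, 0.1
for _ in range(2000):
    x = x + dt * (-x + proj_box(x - alpha * F(x)))   # Euler step of the dynamics

# Converges to P_Omega(b) = [1.0, 0.0, 0.4].
```

Global convergence of such dynamics is exactly what the cited Lyapunov analysis establishes, under monotonicity-type conditions on F.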

  2. Application of recurrent neural networks for drought projections in California

    Science.gov (United States)

    Le, J. A.; El-Askary, H. M.; Allali, M.; Struppa, D. C.

    2017-05-01

    We use recurrent neural networks (RNNs) to investigate the complex interactions between the long-term trend in dryness and a projected, short but intense, period of wetness due to the 2015-2016 El Niño. Although it was forecasted that this El Niño season would bring significant rainfall to the region, our long-term projections of the Palmer Z Index (PZI) showed a continuing drought trend, contrasting with the 1998-1999 El Niño event. RNN training considered PZI data during 1896-2006 that was validated against the 2006-2015 period to evaluate the potential of extreme precipitation forecast. We achieved a statistically significant correlation of 0.610 between forecasted and observed PZI on the validation set for a lead time of 1 month. This gives strong confidence to the forecasted precipitation indicator. The 2015-2016 El Niño season proved to be relatively weak as compared with the 1997-1998, with a peak PZI anomaly of 0.242 standard deviations below historical averages, continuing drought conditions.

  3. Recurrent Neural Network Model for Constructive Peptide Design.

    Science.gov (United States)

    Müller, Alex T; Hiss, Jan A; Schneider, Gisbert

    2018-02-26

    We present a generative long short-term memory (LSTM) recurrent neural network (RNN) for combinatorial de novo peptide design. RNN models capture patterns in sequential data and generate new data instances from the learned context. Amino acid sequences represent a suitable input for these machine-learning models. Generative models trained on peptide sequences could therefore facilitate the design of bespoke peptide libraries. We trained RNNs with LSTM units on pattern recognition of helical antimicrobial peptides and used the resulting model for de novo sequence generation. Of these sequences, 82% were predicted to be active antimicrobial peptides compared to 65% of randomly sampled sequences with the same amino acid distribution as the training set. The generated sequences also lie closer to the training data than manually designed amphipathic helices. The results of this study showcase the ability of LSTM RNNs to construct new amino acid sequences within the applicability domain of the model and motivate their prospective application to peptide and protein design without the need for the exhaustive enumeration of sequence libraries.

  4. Multiplex visibility graphs to investigate recurrent neural network dynamics

    Science.gov (United States)

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-03-01

    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
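
The horizontal visibility graph itself is simple to construct: two time points i < j are linked iff every intermediate value lies strictly below both endpoints. A reference O(n²) sketch (clarity over speed):

```python
def hvg_edges(series):
    """Return the edge set of the horizontal visibility graph of a series."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # i and j "see" each other horizontally if nothing in between
            # reaches the height of the lower endpoint.
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

# For [3, 1, 2]: neighbors always see each other, and 0 sees 2 because the
# middle value 1 lies below min(3, 2).
edges = hvg_edges([3, 1, 2])
```

In the paper's pipeline one such graph is built per reservoir neuron, and the per-neuron graphs become the layers of the multiplex whose topology guides hyperparameter tuning.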

  5. Spatial Clockwork Recurrent Neural Network for Muscle Perimysium Segmentation.

    Science.gov (United States)

    Xie, Yuanpu; Zhang, Zizhao; Sapkota, Manish; Yang, Lin

    2016-10-01

    Accurate segmentation of the perimysium plays an important role in the early diagnosis of many muscle diseases, because many of these diseases involve perimysium inflammation. However, it remains a challenging task due to the complex appearance of perimysium morphology and its ambiguity with respect to the background area. The muscle perimysium also exhibits strong structure spanning the entire tissue, which makes it difficult for current local patch-based methods to capture this long-range context information. In this paper, we propose a novel spatial clockwork recurrent neural network (spatial CW-RNN) to address these issues. Specifically, we split the entire image into a set of non-overlapping image patches, and the semantic dependencies among them are modeled by the proposed spatial CW-RNN. Our method directly takes the 2D structure of the image into consideration and is capable of encoding the context information of the entire image into the local representation of each patch. Meanwhile, we leverage structured regression to assign a prediction mask, rather than a single class label, to each local patch, which enables both efficient training and testing. We extensively test our method for perimysium segmentation using digitized muscle microscopy images. Experimental results demonstrate the superiority of the novel spatial CW-RNN over the existing state of the art.

  6. Fast computation with spikes in a recurrent neural network

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.; Seung, H. Sebastian

    2002-01-01

    Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self-excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: as soon as the winner spikes once, the computation is completed, since no other neuron will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner.
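
The winner-take-all effect can be sketched with a much-simplified simulation (illustrative parameters, not the paper's analytical model): leaky integrate-and-fire neurons charge toward their constant inputs, and the first to reach threshold spikes; instantaneous all-to-all inhibition then suppresses every competitor, so that single spike completes the computation.

```python
import numpy as np

def winner_take_all(inputs, theta=1.0, dt=0.01, t_max=100.0):
    """From a common rest state, return the index of the first neuron to
    reach threshold; strong inhibition would silence the rest."""
    v = np.zeros(len(inputs))
    t = 0.0
    while t < t_max:
        v += dt * (-v + inputs)          # leaky integration toward the input
        if np.any(v >= theta):
            return int(np.argmax(v))     # first spiker wins the competition
        t += dt
    return None                          # no input strong enough to spike

# From identical initial states, the neuron with the largest input wins.
```

Since v_i(t) = I_i(1 - e^{-t}) from rest, the largest input crosses threshold first; the paper's point is that with suitable excitation/inhibition this selection remains correct for all inputs and initial states.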

  7. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable at the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed algorithm achieves good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
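
The Kalman-filtering stage can be sketched on a synthetic AR(1) "speech" signal (illustrative parameters; the paper's contribution is estimating the AR coefficients with its recurrent network, which we simply assume known here):

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r, n = 0.95, 0.1, 1.0, 2000        # AR coeff, process var, obs var
s = np.zeros(n)
for t in range(1, n):
    s[t] = a * s[t - 1] + rng.normal(0.0, np.sqrt(q))   # clean AR(1) signal
y = s + rng.normal(0.0, np.sqrt(r), n)                  # noisy observations

x_hat, p = 0.0, 1.0
est = np.empty(n)
for t in range(n):
    x_pred, p_pred = a * x_hat, a * a * p + q           # predict step
    k = p_pred / (p_pred + r)                           # Kalman gain
    x_hat = x_pred + k * (y[t] - x_pred)                # update step
    p = (1.0 - k) * p_pred
    est[t] = x_hat

mse_obs = np.mean((y - s) ** 2)
mse_kf = np.mean((est - s) ** 2)    # filtered error beats the raw observations
```

The enhancement quality hinges on how well a, q, r match the true signal, which is why robust (noise-constrained) parameter estimation matters under non-Gaussian noise.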

  8. Solving differential equations with unknown constitutive relations as recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.; Tartakovsky, Alexandre M.

    2017-12-08

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from the recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
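
The core idea of embedding the discretized ODE in the fit can be sketched with plain least squares instead of an RNN (a deliberately simplified stand-in: one state variable, a linear sink r(c) = -k·c, fully observed data):

```python
import numpy as np

# Generate a "measured" trajectory from the forward-Euler discretization
#   c[t+1] = c[t] + dt * r(c[t]),  with the unknown sink r(c) = -k_true * c.
k_true, dt, n = 0.7, 0.01, 500
c = np.empty(n)
c[0] = 1.0
for t in range(n - 1):
    c[t + 1] = c[t] + dt * (-k_true * c[t])

# The same discretized ODE, read backwards, gives (c[t+1]-c[t])/dt = -k*c[t];
# least squares over all steps recovers the sink parameter.
dcdt = (c[1:] - c[:-1]) / dt
k_est = -np.sum(dcdt * c[:-1]) / np.sum(c[:-1] ** 2)
```

In the paper the unknown sink is a neural network rather than a single coefficient, and the Euler recurrence is unrolled inside the training graph so gradients flow through thousands of time steps.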

  9. A novel joint-processing adaptive nonlinear equalizer using a modular recurrent neural network for chaotic communication systems.

    Science.gov (United States)

    Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui

    2011-01-01

    To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    OpenAIRE

    Mehrshad Salmasi; Homayoun Mahdavi-Nasab

    2012-01-01

    Active noise control is based on destructive interference between the primary noise and the noise generated by a secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, performance of the neural networks is evaluated in active cancellation of sound noise. For this reason, feedforward and recurrent neural networks are designed and trained. After training, performance of the feedforward and recurrent networks in n...

  11. Global exponential stability of reaction-diffusion recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde

    2003-01-01

    Employing general Halanay inequality, we analyze the global exponential stability of a class of reaction-diffusion recurrent neural networks with time-varying delays. Several new sufficient conditions are obtained to ensure existence, uniqueness and global exponential stability of the equilibrium point of delayed reaction-diffusion recurrent neural networks. The results extend and improve the earlier publications. In addition, an example is given to show the effectiveness of the obtained result

  12. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.

    Science.gov (United States)

    Goudar, Vishwa; Buonomano, Dean V

    2018-03-14

    Much of the information the brain processes and stores is temporal in nature: a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds. We show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.

  13. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  14. A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.

    Science.gov (United States)

    Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen

    2018-03-01

    In this paper, based on calculus and the penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization problems. It is proved that, for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and finally converges to an optimal solution of the constrained complex-variable convex optimization problem. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has lower model complexity and better convergence. Some numerical examples and an application are presented to substantiate the effectiveness of the proposed neural network.
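    The penalty-based dynamics described above can be illustrated with a real-valued analogue (the paper treats complex variables; the toy objective and constraint below are invented for illustration, not taken from the authors' network):

```python
# Sketch: a one-layer recurrent network realized as penalty-based gradient dynamics.
# Toy problem: minimize f(x) = (x1-2)^2 + (x2-2)^2  subject to  x1 + x2 <= 2.
# State equation: dx/dt = -grad f(x) - sigma * subgrad P(x),
# where P(x) = max(0, x1 + x2 - 2) is an exact penalty for the constraint.

def step(x, dt=1e-3, sigma=10.0):
    g = [2.0 * (x[0] - 2.0), 2.0 * (x[1] - 2.0)]   # gradient of f
    viol = x[0] + x[1] - 2.0
    p = [1.0, 1.0] if viol > 0 else [0.0, 0.0]     # subgradient of the penalty
    return [x[i] - dt * (g[i] + sigma * p[i]) for i in range(2)]

x = [0.0, 0.0]            # start inside the feasible region
for _ in range(20000):    # Euler integration of the network dynamics
    x = step(x)
# The constrained optimum is (1, 1): the projection of (2, 2) onto x1 + x2 = 2.
```

    With sigma chosen larger than the gradient norm of f on the constraint boundary, the state slides along the boundary and settles (up to a small chatter of order dt·sigma) at the constrained optimum, mirroring the finite-time feasibility claim of the abstract.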

  15. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series. It estimates the proper parameters of phase-space reconstruction and optimizes the structure of the recurrent neural network by a co-evolutionary strategy. The search space is separated into two subspaces, and the individuals are trained in a parallel computational procedure. The method dynamically combines the embedding method with the capability of the recurrent neural network to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series, and a real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.

  16. From Imitation to Prediction, Data Compression vs Recurrent Neural Networks for Natural Language Processing

    Directory of Open Access Journals (Sweden)

    Juan Andres Laura

    2018-03-01

    In recent studies, recurrent neural networks were used for generative processes, and their surprising performance can be explained by their ability to make good predictions. Data compression is likewise based on prediction. The question, then, is whether a data compressor could perform as well as recurrent neural networks on the natural language processing tasks of sentiment analysis and automatic text generation, and if so, whether a compression algorithm could even surpass a neural network at such tasks. Along the way, we discovered a fundamental difference between data compression algorithms and recurrent neural networks.

  17. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results provide new foundational elements for the understanding of the complexity of real brain circuits. PMID:24727866

  18. Entity recognition from clinical texts via recurrent neural network.

    Science.gov (United States)

    Liu, Zengjian; Yang, Ming; Wang, Xiaolong; Chen, Qingcai; Tang, Buzhou; Wang, Zhe; Xu, Hua

    2017-07-05

    Entity recognition is one of the most fundamental steps in text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), are widespread in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machines and conditional random fields, have been deployed to recognize entities from clinical texts in the past few years. In recent years, the recurrent neural network (RNN), a deep learning method that has shown great potential on many problems including named entity recognition, has also been gradually applied to entity recognition in clinical texts. In this paper, we comprehensively investigate the performance of LSTM (long short-term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: an input layer, which generates the representation of each word of a sentence; an LSTM layer, which outputs another word-representation sequence that captures the context information of each word in the sentence; and an inference layer, which makes tagging decisions according to the output of the LSTM layer, i.e., outputs a label sequence. Experiments conducted on corpora of the 2010, 2012, and 2014 i2b2 NLP challenges show that LSTM achieves the highest micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which is highly competitive with other state-of-the-art systems. LSTM, which requires no hand-crafted features, has great potential for entity recognition from clinical texts. It outperforms traditional machine learning methods that suffer from fussy feature engineering. A possible future direction is how to integrate knowledge
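    The three-layer pipeline described in this abstract (word representation → LSTM layer → tagging inference) can be sketched minimally. Everything below is a hedged illustration: the weights are random, biases are omitted, and the dimensions and tag set are invented; a real system would learn these parameters and use trained word embeddings:

```python
import math, random

random.seed(0)
IN, HID, TAGS = 4, 3, 2      # toy sizes: input dim, hidden dim, tag count

def rand_mat(r, c):
    return [[random.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]

def affine(W, v):
    return [sum(W[i][j] * v[j] for j in range(len(v))) for i in range(len(W))]

sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# One weight matrix per gate, acting on the concatenation [h ; x].
Wi, Wf, Wo, Wg = (rand_mat(HID, HID + IN) for _ in range(4))
W_tag = rand_mat(TAGS, HID)  # inference layer: hidden state -> tag scores

def lstm_tag(seq):
    h, c, tags = [0.0] * HID, [0.0] * HID, []
    for x in seq:
        hx = h + x                                    # concatenate [h ; x]
        i = [sig(z) for z in affine(Wi, hx)]          # input gate
        f = [sig(z) for z in affine(Wf, hx)]          # forget gate
        o = [sig(z) for z in affine(Wo, hx)]          # output gate
        g = [math.tanh(z) for z in affine(Wg, hx)]    # candidate cell state
        c = [f[k] * c[k] + i[k] * g[k] for k in range(HID)]
        h = [o[k] * math.tanh(c[k]) for k in range(HID)]
        scores = affine(W_tag, h)                     # inference layer
        tags.append(max(range(TAGS), key=lambda t: scores[t]))
    return tags

sentence = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]]  # three one-hot "words"
labels = lstm_tag(sentence)
```

    The inference layer here is a plain per-token argmax; tagging systems like the one in the abstract typically replace it with a structured decoder (e.g., a CRF layer) so that label transitions are scored jointly.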

  19. LYAPUNOV-Based Sensor Failure Detection and Recovery for the Reverse Water Gas Shift Process

    Science.gov (United States)

    Haralambous, Michael G.

    2002-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, the models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.

  20. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional program is pseudoconvex on the feasible region. This leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solution for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  1. Predicting recurrent aphthous ulceration using genetic algorithms-optimized neural networks

    Directory of Open Access Journals (Sweden)

    Najla S Dar-Odeh

    2010-05-01

    Najla S Dar-Odeh1, Othman M Alsmadi2, Faris Bakri3, Zaer Abu-Hammour2, Asem A Shehabi3, Mahmoud K Al-Omiri1, Shatha M K Abu-Hammad4, Hamzeh Al-Mashni4, Mohammad B Saeed4, Wael Muqbil4, Osama A Abu-Hammad1. 1Faculty of Dentistry, 2Faculty of Engineering and Technology, 3Faculty of Medicine, University of Jordan, Amman, Jordan; 4Dental Department, University of Jordan Hospital, Amman, Jordan. Objective: To construct and optimize a neural network capable of predicting the occurrence of recurrent aphthous ulceration (RAU) based on a set of appropriate input data. Participants and methods: Artificial neural network (ANN) software employing genetic algorithms to optimize the architecture of the neural networks was used. Input and output data of 86 participants (predisposing factors and status of the participants with regard to recurrent aphthous ulceration) were used to construct and train the neural networks. The optimized neural networks were then tested using untrained data from a further 10 participants. Results: The optimized neural network that produced the most accurate predictions of the presence or absence of recurrent aphthous ulceration was found to employ: gender, hematological (with or without ferritin) and mycological data of the participants, frequency of tooth brushing, and consumption of vegetables and fruits. Conclusions: Factors appearing to be related to recurrent aphthous ulceration and appropriate for use as input data to construct ANNs that predict recurrent aphthous ulceration were found to include the following: gender, hemoglobin, serum vitamin B12, serum ferritin, red cell folate, salivary candidal colony count, frequency of tooth brushing, and the number of fruits or vegetables consumed daily. Keywords: artificial neural networks, recurrent, aphthous ulceration, ulcer

  2. Ads' click-through rates predicting based on gated recurrent unit neural networks

    Science.gov (United States)

    Chen, Qiaohong; Guo, Zixuan; Dong, Wen; Jin, Lingzi

    2018-05-01

    In order to improve the effect of online advertising and increase advertising revenue, a gated recurrent unit (GRU) neural network model is used to predict ads' click-through rates (CTR). Exploiting the characteristics of the gated-unit structure and the temporal ordering inherent in the data, the model is trained with the BPTT algorithm. Furthermore, by optimizing the step-length algorithm of the gated recurrent unit neural network, the model reaches the optimal point better and faster in fewer iterative rounds. The experimental results show that the model based on gated recurrent unit neural networks, together with its optimized step-length algorithm, predicts ads' CTR more effectively, which helps advertisers, media, and audience achieve a win-win and mutually beneficial situation in the three-sided game.
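    A single GRU step feeding a sigmoid click-probability read-out can be sketched as follows. This is only a hedged illustration of the standard GRU equations: the weights are random, the feature vectors are invented, and nothing here reproduces the paper's trained model or its step-length optimization:

```python
import math, random

random.seed(1)
IN, HID = 5, 4                       # toy sizes: ad-feature dim, hidden dim
rand_mat = lambda r, c: [[random.uniform(-0.5, 0.5) for _ in range(c)]
                         for _ in range(r)]
affine = lambda W, v: [sum(W[i][j] * v[j] for j in range(len(v)))
                       for i in range(len(W))]
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

Wz, Wr, Wh = (rand_mat(HID, HID + IN) for _ in range(3))  # gate weights
w_out = [random.uniform(-0.5, 0.5) for _ in range(HID)]   # CTR read-out

def gru_ctr(impressions):
    """Run a GRU over an impression sequence; return P(click) at the end."""
    h = [0.0] * HID
    for x in impressions:
        z = [sig(v) for v in affine(Wz, h + x)]           # update gate
        r = [sig(v) for v in affine(Wr, h + x)]           # reset gate
        hx = [r[k] * h[k] for k in range(HID)] + x        # gated [r*h ; x]
        cand = [math.tanh(v) for v in affine(Wh, hx)]     # candidate state
        h = [(1 - z[k]) * h[k] + z[k] * cand[k] for k in range(HID)]
    return sig(sum(w_out[k] * h[k] for k in range(HID)))  # click probability

p_click = gru_ctr([[1, 0, 0, 1, 0], [0, 1, 1, 0, 0]])
```

    Training such a model with BPTT, as the abstract describes, amounts to unrolling this loop over the sequence and backpropagating the log-loss of `p_click` through every gate.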

  3. Multistability and instability analysis of recurrent neural networks with time-varying delays.

    Science.gov (United States)

    Zhang, Fanghai; Zeng, Zhigang

    2018-01-01

    This paper provides new theoretical results on the multistability and instability analysis of recurrent neural networks with time-varying delays. It is shown that such n-neuronal recurrent neural networks have exactly [Formula: see text] equilibria, [Formula: see text] of which are locally exponentially stable and the others unstable, where k0 is a nonnegative integer such that k0 ≤ n. By using the combination method of two different divisions, recurrent neural networks can possess more dynamic properties. This method improves and extends the existing results in the literature. Finally, a numerical example is provided to show the superiority and effectiveness of the presented results. Copyright © 2017 Elsevier Ltd. All rights reserved.
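    The flavor of such multistability results can be seen in the smallest possible case, a single neuron x' = -x + w·tanh(x), an invented toy with no delay (so it only illustrates coexisting equilibria, not the paper's delayed analysis): for w > 1 it has exactly three equilibria, of which two are locally stable and one (the origin) is unstable.

```python
import math

def run(x0, w=2.0, dt=0.01, steps=5000):
    """Euler-integrate the scalar network x' = -x + w*tanh(x) from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * math.tanh(x))
    return x

# Trajectories starting on either side of the unstable equilibrium x = 0
# settle onto one of the two stable equilibria x = ±x*, where x* = 2 tanh(x*).
finals = [run(x0) for x0 in (-3.0, -0.5, 0.5, 3.0)]
x_star = 1.91501    # positive root of x = 2 tanh(x), to five decimals
```

    Initial conditions select the attractor: negative starts converge to -x* and positive starts to +x*, the scalar analogue of the stable/unstable equilibrium counts in the abstract.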

  4. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    Directory of Open Access Journals (Sweden)

    Jie Wang

    2016-01-01

    (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, empirical research is performed to test the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices.

  5. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking...... dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural...... mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online...

  6. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic...... part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...... error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  7. A novel nonlinear adaptive filter using a pipelined second-order Volterra recurrent neural network.

    Science.gov (United States)

    Zhao, Haiquan; Zhang, Jiashu

    2009-12-01

    To enhance the performance and overcome the heavy computational complexity of recurrent neural networks (RNN), a novel nonlinear adaptive filter based on a pipelined second-order Volterra recurrent neural network (PSOVRNN) is proposed in this paper. A modified real-time recurrent learning (RTRL) algorithm for the proposed filter is derived in detail. The PSOVRNN comprises a number of simple, small-scale second-order Volterra recurrent neural network (SOVRNN) modules. In contrast to the standard RNN, the modules of a PSOVRNN can be executed simultaneously in a pipelined parallel fashion, which leads to a significant improvement in total computational efficiency. Moreover, since each module of the PSOVRNN is a SOVRNN in which nonlinearity is introduced by the recursive second-order Volterra (RSOV) expansion, its performance can be further improved. Computer simulations have demonstrated that the PSOVRNN performs better than the pipelined recurrent neural network (PRNN) and the RNN for nonlinear colored signal prediction and nonlinear channel equalization. However, the superiority of the PSOVRNN over the PRNN comes at the cost of increased computational complexity due to the nonlinear expansion introduced in each module.

  8. Identification and prediction of dynamic systems using an interactively recurrent self-evolving fuzzy neural network.

    Science.gov (United States)

    Lin, Yang-Yin; Chang, Jyh-Yeong; Lin, Chin-Teng

    2013-02-01

    This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for the prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed by external loops and internal feedback, feeding the rule firing strength of each rule to the other rules and to itself. The consequent part of the IRSFNN is of a Takagi-Sugeno-Kang (TSK) or functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of its fuzzy rules to promote the mapping ability. Unlike in a TSK-type fuzzy neural network, the FLNN in the consequent part is a nonlinear function of the input variables. An IRSFNN's learning starts with an empty rule base, and all of the rules are generated and learned online through simultaneous structure and parameter learning. An online clustering algorithm is effective in generating fuzzy rules. The consequent update parameters are derived by a variable-dimensional Kalman filter algorithm. The premise and recurrent parameters are learned through a gradient descent algorithm. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.

  9. Boundedness and stability for recurrent neural networks with variable coefficients and time-varying delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde

    2003-01-01

    In this Letter, the problems of boundedness and stability for a general class of non-autonomous recurrent neural networks with variable coefficients and time-varying delays are analyzed by employing the Young inequality technique and the Lyapunov method. Some simple sufficient conditions are given for boundedness and stability of the solutions of the recurrent neural networks. These results generalize and improve previous works, and they are easy to check and apply in practice. Two illustrative examples and their numerical simulations are also given to demonstrate the effectiveness of the proposed results.

  10. Lyapunov-based Stability of Feedback Interconnections of Negative Imaginary Systems

    KAUST Repository

    Ghallab, Ahmed G.

    2017-10-19

    Feedback control systems using sensors and actuators such as piezoelectric sensors and actuators, micro-electro-mechanical systems (MEMS) sensors and opto-mechanical sensors, are allowing new advances in designing such high precision technologies. The negative imaginary control systems framework allows for robust control design for such high precision systems in the face of uncertainties due to unmodelled dynamics. The stability of the feedback interconnection of negative imaginary systems has been well established in the literature. However, the proofs of stability of the feedback interconnection used in some previous papers have a shortcoming due to a matrix invertibility issue. In this paper, we provide a new and correct Lyapunov-based proof of one such result and show that the result is still true.

  11. Lyapunov-based Stability of Feedback Interconnections of Negative Imaginary Systems

    KAUST Repository

    Ghallab, Ahmed G.; Mabrok, Mohamed; Petersen, Ian R.

    2017-01-01

    Feedback control systems using sensors and actuators such as piezoelectric sensors and actuators, micro-electro-mechanical systems (MEMS) sensors and opto-mechanical sensors, are allowing new advances in designing such high precision technologies. The negative imaginary control systems framework allows for robust control design for such high precision systems in the face of uncertainties due to unmodelled dynamics. The stability of the feedback interconnection of negative imaginary systems has been well established in the literature. However, the proofs of stability of the feedback interconnection used in some previous papers have a shortcoming due to a matrix invertibility issue. In this paper, we provide a new and correct Lyapunov-based proof of one such result and show that the result is still true.

  12. Nonlinear Lyapunov-based boundary control of distributed heat transfer mechanisms in membrane distillation plant

    KAUST Repository

    Eleiwi, Fadi

    2015-07-01

    This paper presents a nonlinear Lyapunov-based boundary control for the temperature difference across the boundary layers of a membrane distillation process. The heat transfer mechanisms inside the process are modeled with a 2D advection-diffusion equation. The model is semi-discretized in space, and a nonlinear state-space representation is provided. The control is designed to force the temperature difference along the membrane sides to track a desired reference asymptotically, and hence a desired flux is generated. Certain constraints are put on the control-law inputs to keep them within an economic range of energy supplies. The effect of the controller gain is discussed. Simulations with real process parameters for the model and the controller are provided. © 2015 American Automatic Control Council.

  13. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    International Nuclear Information System (INIS)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine; Minca, Eugenia; Filip, Florin

    2009-01-01

    In this paper, we present a training technique for a recurrent radial basis function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is thus boosted by the Fuzzy Min-Max initialization.
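    The second stage described here, k-means refinement of the RBF centers, is standard Lloyd iteration and easy to sketch. The crude range-based initialization below merely stands in for the paper's Fuzzy Min-Max step, and the scalar data set is invented for illustration:

```python
def kmeans(points, centers, iters=20):
    """Lloyd's k-means on scalar data: refine RBF center positions."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:                       # assignment step
            j = min(range(len(centers)), key=lambda k: (p - centers[k]) ** 2)
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[j]   # update step
                   for j, c in enumerate(clusters)]
    return centers

data = [0.1, -0.2, 0.0, 0.2, -0.1, 4.9, 5.1, 5.0, 5.2, 4.8]
# Stand-in initialization: spread the k centers over the data range
# (the paper instead derives initial centers from Fuzzy Min-Max hyperboxes).
init = [min(data), max(data)]
centers = sorted(kmeans(data, init))
```

    With a sensible initialization the centers converge to the cluster means (here 0.0 and 5.0), which is exactly the mean-square-error-minimizing placement the abstract refers to; a poor initialization is what the Fuzzy Min-Max stage is meant to avoid.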

  14. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    OpenAIRE

    Francisco Javier Ordóñez; Daniel Roggen

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we pro...

  15. Bi-directional LSTM Recurrent Neural Network for Chinese Word Segmentation

    OpenAIRE

    Yao, Yushi; Huang, Zheng

    2016-01-01

    Recurrent neural networks (RNN) have been broadly applied to natural language processing (NLP) problems. This kind of neural network is designed for modeling sequential data and has been shown to be quite efficient in sequential tagging tasks. In this paper, we propose to use a bi-directional RNN with long short-term memory (LSTM) units for Chinese word segmentation, which is a crucial preprocessing task for modeling Chinese sentences and articles. Classical methods focus on designing and combining...

  16. Synchronization of chaotic recurrent neural networks with time-varying delays using nonlinear feedback control

    International Nuclear Information System (INIS)

    Cui Baotong; Lou Xuyang

    2009-01-01

    In this paper, a new method to synchronize two identical chaotic recurrent neural networks is proposed. Using the drive-response concept, a nonlinear feedback control law is derived to achieve the state synchronization of the two identical chaotic neural networks. Furthermore, based on the Lyapunov method, a delay-independent sufficient synchronization condition in terms of a linear matrix inequality (LMI) is obtained. A numerical example with graphical illustrations is given to illuminate the presented synchronization scheme.
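    The drive-response idea is easy to simulate with a toy two-neuron network. Note the hedges: the weights below are invented, and a two-dimensional continuous system cannot actually be chaotic, so this only illustrates the coupling scheme, not the paper's LMI condition. The response state y receives the feedback control u = -K(y - x) and locks onto the drive state x:

```python
import math

W = [[2.0, -1.0], [1.0, 1.5]]                  # invented recurrent weights

def f(s):
    # Recurrent network dynamics: s' = -s + W tanh(s)
    t = [math.tanh(v) for v in s]
    return [-s[i] + sum(W[i][j] * t[j] for j in range(2)) for i in range(2)]

def simulate(K=10.0, dt=0.01, steps=2000):
    x, y = [0.5, -0.3], [-1.0, 1.0]            # drive and response states
    for _ in range(steps):
        fx, fy = f(x), f(y)
        u = [-K * (y[i] - x[i]) for i in range(2)]   # feedback control law
        x = [x[i] + dt * fx[i] for i in range(2)]
        y = [y[i] + dt * (fy[i] + u[i]) for i in range(2)]
    return math.hypot(y[0] - x[0], y[1] - x[1])      # synchronization error

err = simulate()
```

    Because tanh is globally Lipschitz, choosing the gain K larger than the Lipschitz bound induced by W makes the error dynamics contracting, so the synchronization error decays exponentially; this is the intuition behind the Lyapunov argument in the abstract.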

  17. Global exponential stability for reaction-diffusion recurrent neural networks with multiple time varying delays

    International Nuclear Information System (INIS)

    Lou, X.; Cui, B.

    2008-01-01

    In this paper we consider the problem of exponential stability for recurrent neural networks with multiple time-varying delays and reaction-diffusion terms. The activation functions are assumed to be bounded and globally Lipschitz continuous. By means of a Lyapunov functional, sufficient conditions are derived which guarantee global exponential stability of the delayed neural network. Finally, a numerical example is given to show the correctness of our analysis. (author)

  18. ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation

    OpenAIRE

    Visin, Francesco; Ciccone, Marco; Romero, Adriana; Kastner, Kyle; Cho, Kyunghyun; Bengio, Yoshua; Matteucci, Matteo; Courville, Aaron

    2015-01-01

    We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally ...

  19. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high-fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced-order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.

  20. Stimulus-dependent suppression of chaos in recurrent neural networks

    International Nuclear Information System (INIS)

    Rajan, Kanaka; Abbott, L. F.; Sompolinsky, Haim

    2010-01-01

    Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, but they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a nonmonotonic function of stimulus frequency, revealing a 'resonant' frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.
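    The suppression effect can be probed numerically. This is a hedged sketch: the network size, gain, and input amplitude below are invented, and the test only checks the qualitative signature of suppressed chaos, namely that under a strong common input two copies of the network started from nearby states contract toward each other instead of diverging:

```python
import math, random

random.seed(2)
N, g, dt, steps = 30, 1.5, 0.02, 2000
# Random coupling matrix with gain g > 1, the classic chaotic regime.
J = [[random.gauss(0.0, g / math.sqrt(N)) for _ in range(N)] for _ in range(N)]

def step(x, drive):
    # Euler step of x' = -x + J tanh(x) + drive (same drive to every neuron)
    t = [math.tanh(v) for v in x]
    return [x[i] + dt * (-x[i] + sum(J[i][j] * t[j] for j in range(N)) + drive)
            for i in range(N)]

x1 = [random.uniform(-1, 1) for _ in range(N)]
x2 = [v + 1e-3 for v in x1]                  # nearby copy of the same network
d0 = math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))
for k in range(steps):
    drive = 5.0 * math.cos(dt * k)           # strong common sinusoidal input
    x1, x2 = step(x1, drive), step(x2, drive)
d_final = math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))
```

    The mechanism matches the abstract's account: a strong input pushes the units into the saturated range of tanh, where the effective recurrent gain collapses and the dynamics become contracting, extinguishing the chaotic divergence of nearby trajectories.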

  1. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam; Abandah, Gheith; Jamour, Fuad Tarek

    2013-01-01

    and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design of experiments

  2. Recurrent Artificial Neural Networks and Finite State Natural Language Processing.

    Science.gov (United States)

    Moisl, Hermann

    It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for natural language processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…

  3. Homeostatic scaling of excitability in recurrent neural networks.

    NARCIS (Netherlands)

    Remme, M.W.H.; Wadman, W.J.

    2012-01-01

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which

  4. Individual Identification Using Functional Brain Fingerprint Detected by Recurrent Neural Network.

    Science.gov (United States)

    Chen, Shiyang; Hu, Xiaoping P

    2018-03-20

    Individual identification based on brain function has gained traction in the literature. Investigating individual differences in brain function can provide additional insights into the brain. In this work, we introduce a recurrent neural network based model for identifying individuals from only a short segment of resting-state functional MRI data. In addition, we demonstrate how the global signal and differences in atlases affect individual identifiability. Furthermore, we investigate neural network features that exhibit the uniqueness of each individual. The results indicate that our model is able to identify individuals based on neural features and provides additional information regarding brain dynamics.

  5. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints.

    Science.gov (United States)

    Qin, Sitian; Yang, Xiudong; Xue, Xiaoping; Song, Jiahui

    2017-10-01

    Pseudoconvex optimization, an important class of nonconvex optimization, plays a significant role in scientific and engineering applications. In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems with equality and inequality constraints. It is proved that from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural network converges to an optimal solution of the related problem. Compared with related existing recurrent neural networks for pseudoconvex optimization problems, the proposed neural network does not need penalty parameters and has better convergence properties. The proposed neural network is also used to solve three nonsmooth optimization problems, with detailed comparisons to known related results. Finally, numerical examples are provided to illustrate the effectiveness of the proposed neural network.

  6. Folk music style modelling by recurrent neural networks with long short term memory units

    OpenAIRE

    Sturm, Bob; Santos, João Felipe; Korshunova, Iryna

    2015-01-01

    We demonstrate two generative models created by training a recurrent neural network (RNN) with three hidden layers of long short-term memory (LSTM) units. This extends past work in numerous directions, including training deeper models on nearly 24,000 high-level transcriptions of folk tunes. We discuss our ongoing work.
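    As an illustrative sketch of how such a generative model produces sequences, the loop below samples tokens from a toy character-level recurrent network. A plain tanh RNN with random, untrained weights stands in for the paper's 3-layer LSTM; the vocabulary and all parameters are placeholder assumptions, not the authors' model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = list("abcd ")          # toy alphabet standing in for folk-tune tokens
    V, H = len(vocab), 16

    # random, untrained placeholder weights
    Wxh = rng.normal(0, 0.1, (H, V))
    Whh = rng.normal(0, 0.1, (H, H))
    Why = rng.normal(0, 0.1, (V, H))

    def one_hot(i):
        x = np.zeros(V)
        x[i] = 1.0
        return x

    def sample(seed_idx, n, temperature=1.0):
        """Generate n tokens by repeatedly sampling from the softmax output."""
        h = np.zeros(H)
        idx, out = seed_idx, []
        for _ in range(n):
            h = np.tanh(Wxh @ one_hot(idx) + Whh @ h)   # recurrent state update
            p = np.exp(Why @ h / temperature)
            p /= p.sum()
            idx = rng.choice(V, p=p)                    # stochastic token choice
            out.append(vocab[idx])
        return "".join(out)

    tune = sample(0, 20)
    ```

    With trained weights, the same loop generates tune-like token streams; the temperature parameter trades off diversity against fidelity.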

  7. Recurrent Neural Network For Forecasting Time Series With Long Memory Pattern

    Science.gov (United States)

    Walid; Alamsyah

    2017-04-01

    Recurrent neural networks (RNNs), as hybrid models, are often used to predict and estimate electricity-related quantities, and can be used to describe the growth of the electrical load experienced by PLN, the Indonesian state electricity company. This research develops RNN forecasting procedures for time series with long-memory patterns, applied to the national electrical load, which naturally has a different trend from the electrical load of other countries. It produces a forecasting algorithm for long-memory time series based on an Elman RNN (E-RNN), referred to here as the fractionally integrated recurrent neural network (FIRNN). The prediction results show that the model with the data differences scaled to the range [-1, 1] and the FIRNN architecture (24,6,1) gives the smallest MSE value, 0.00149684.
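    The scaling to [-1, 1] and the MSE criterion mentioned above can be sketched as follows; the load series here is a synthetic placeholder, not PLN data.

    ```python
    import numpy as np

    def scale_to_pm1(x):
        """Min-max scale a series into [-1, 1]."""
        lo, hi = x.min(), x.max()
        return 2.0 * (x - lo) / (hi - lo) - 1.0

    def mse(y_true, y_pred):
        """Mean squared error between observations and forecasts."""
        return float(np.mean((y_true - y_pred) ** 2))

    load = np.array([120.0, 135.0, 150.0, 143.0, 160.0])   # synthetic load values
    scaled = scale_to_pm1(load)
    ```

    Scaling inputs into the activation range of the network is a common preprocessing step for RNN forecasting; the inverse transform recovers forecasts in the original units.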

  8. Encoding of phonology in a recurrent neural model of grounded speech

    NARCIS (Netherlands)

    Alishahi, Afra; Barking, Marie; Chrupala, Grzegorz; Levy, Roger; Specia, Lucia

    2017-01-01

    We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how

  9. Direction-of-change forecasting using a volatility-based recurrent neural network

    NARCIS (Netherlands)

    Bekiros, S.D.; Georgoutsos, D.A.

    2008-01-01

    This paper investigates the profitability of a trading strategy, based on recurrent neural networks, that attempts to predict the direction-of-change of the market in the case of the NASDAQ composite index. The sample extends over the period 8 February 1971 to 7 April 1998, while the sub-period 8

  10. Global stability of discrete-time recurrent neural networks with impulse effects

    International Nuclear Information System (INIS)

    Zhou, L; Li, C; Wan, J

    2008-01-01

    This paper formulates and studies a class of discrete-time recurrent neural networks with impulse effects. A stability criterion, which characterizes the effects of impulse and stability property of the corresponding impulse-free networks on the stability of the impulsive networks in an aggregate form, is established. Two simplified and numerically tractable criteria are also provided

  11. A one-layer recurrent neural network for constrained nonsmooth optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-10-01

    This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.
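    A minimal sketch of the general idea behind such optimization-oriented recurrent networks: the neuron state follows a (sub)gradient flow whose equilibria are optima. The example below is not the paper's differential-inclusion model; it forward-Euler integrates dx/dt in -∂f(x) for a toy nonsmooth convex objective.

    ```python
    import numpy as np

    # Toy nonsmooth convex objective: f(x) = |x - 3| + 0.5 * (x + 1)**2,
    # whose unique minimizer is x* = 0 (for x < 3 the subgradient is x).
    def subgrad(x):
        return np.sign(x - 3.0) + (x + 1.0)

    x, dt = 10.0, 0.01             # initial state and Euler step size
    for _ in range(5000):
        x -= dt * subgrad(x)       # state follows the negative subgradient flow
    ```

    The state variable plays the role of the single neuron layer: no penalty parameter appears, and convergence to the minimizer follows from the descent property of the flow.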

  12. A one-layer recurrent neural network for constrained nonconvex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2015-01-01

    In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any neural state of the proposed neural network is convergent to its equilibrium point set which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performances of the proposed neural network.

  13. A one-layer recurrent neural network for constrained nonsmooth invex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2014-02-01

    Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Statistical downscaling of precipitation using long short-term memory recurrent neural networks

    Science.gov (United States)

    Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra

    2017-11-01

    Hydrological impacts of global climate change on regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation, temperature, etc. In this study, we propose a new statistical downscaling model based on Recurrent Neural Network with Long Short-Term Memory which captures the spatio-temporal dependencies in local rainfall. The previous studies have used several other methods such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets—one on precipitation in Mahanadi basin in India and the second on precipitation in Campbell River basin in Canada. Our autoencoder coupled long short-term memory recurrent neural network model performs the best compared to other existing methods on both the datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.

  15. Hysteretic recurrent neural networks: a tool for modeling hysteretic materials and systems

    International Nuclear Information System (INIS)

    Veeramani, Arun S; Crews, John H; Buckner, Gregory D

    2009-01-01

    This paper introduces a novel recurrent neural network, the hysteretic recurrent neural network (HRNN), that is ideally suited to modeling hysteretic materials and systems. This network incorporates a hysteretic neuron consisting of conjoined sigmoid activation functions. Although similar hysteretic neurons have been explored previously, the HRNN is unique in its utilization of simple recurrence to 'self-select' relevant activation functions. Furthermore, training is facilitated by placing the network weights on the output side, allowing standard backpropagation of error training algorithms to be used. We present two- and three-phase versions of the HRNN for modeling hysteretic materials with distinct phases. These models are experimentally validated using data collected from shape memory alloys and ferromagnetic materials. The results demonstrate the HRNN's ability to accurately generalize hysteretic behavior with a relatively small number of neurons. Additional benefits lie in the network's ability to identify statistical information concerning the macroscopic material by analyzing the weights of the individual neurons
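    A hedged sketch of a hysteretic neuron in this spirit: two shifted sigmoid branches, with the previous input (simple recurrence) selecting the active branch. The shift of 0.5 and the branch-selection rule are illustrative assumptions, not the HRNN's exact construction.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def hysteretic_response(inputs, shift=0.5):
        """Output follows one sigmoid branch while the input rises and a
        shifted branch while it falls; prev_u is the simple recurrence."""
        out, prev_u = [], -np.inf
        for u in inputs:
            rising = u >= prev_u
            out.append(sigmoid(u - shift) if rising else sigmoid(u + shift))
            prev_u = u
        return np.array(out)

    up = np.linspace(-4.0, 4.0, 50)
    loop_up = hysteretic_response(up)          # ascending sweep
    loop_down = hysteretic_response(up[::-1])  # descending sweep
    ```

    Sweeping the input up and then down traces two distinct branches at the same input value, which is the hysteresis loop such neurons are built to reproduce.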

  16. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
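    The reservoir computing framework mentioned above can be sketched with rate units instead of spiking neurons: a random recurrent pool is driven by the input and only a linear readout is trained, here by ridge regression on a toy next-step prediction task. All sizes and scalings are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 100, 500

    # random recurrent pool, scaled toward a spectral radius below 1
    W = 0.9 * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    Win = rng.normal(0.0, 1.0, N)

    u = np.sin(np.linspace(0, 8 * np.pi, T))   # input signal
    target = np.roll(u, -1)                    # task: predict the next value

    # drive the reservoir and record its states
    X = np.zeros((T, N))
    x = np.zeros(N)
    for t in range(T):
        x = np.tanh(W @ x + Win * u[t])
        X[t] = x

    # only the linear readout is trained (ridge regression)
    lam = 1e-6
    Wout = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
    err = np.mean((X @ Wout - target)[:-1] ** 2)   # drop the wrapped last sample
    ```

    The recurrent weights stay fixed; the network's spatiotemporal dynamics supply a rich basis, and training reduces to a linear least-squares problem.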

  17. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-06

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  18. Delay-Dependent Stability Criteria of Uncertain Periodic Switched Recurrent Neural Networks with Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Xing Yin

    2011-01-01

    This paper studies uncertain periodic switched recurrent neural networks with time-varying delays. When an uncertain discrete-time recurrent neural network is a periodic system, it can be expressed as a switched neural network over a finite set of switching states. Based on the switched quadratic Lyapunov functional (SQLF) approach and the free-weighting matrix (FWM) approach, some linear matrix inequality criteria are found to guarantee the delay-dependent asymptotic stability of these systems. Two examples illustrate the exactness of the proposed criteria.

  19. Using a multi-state recurrent neural network to optimize loading patterns in BWRs

    International Nuclear Information System (INIS)

    Ortiz, Juan Jose; Requena, Ignacio

    2004-01-01

    A multi-state recurrent neural network is used to optimize loading patterns (LPs) in BWRs. We propose an energy function that depends on fuel assembly positions and their nuclear cross sections to carry out the optimization. The multi-state recurrent neural network creates LPs that satisfy the radial power peaking factor limit and maximize the effective multiplication factor at the beginning of the cycle, and that satisfy the minimum critical power ratio and maximum linear heat generation rate limits at the end of the cycle, again maximizing the effective multiplication factor. To evaluate the LPs, we used a trained back-propagation neural network to predict the parameter values instead of a reactor core simulator, which saved considerable computation time in the search process. We applied this method to find optimal LPs for five cycles of the Laguna Verde Nuclear Power Plant (LVNPP) in Mexico

  20. Natural Language Video Description using Deep Recurrent Neural Networks

    Science.gov (United States)

    2015-11-23

    h_t = f(W_xh x_t + W_hh h_{t-1}); z_t = g(W_zh h_t), where f and g are element-wise non-linear functions such as a sigmoid or hyperbolic tangent, and x_t is the input at time t.
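    The recurrent update above can be transcribed directly into code; f = tanh and g = softmax are illustrative choices, and the weights are random placeholders rather than trained parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    D, H, V = 8, 16, 10            # input, hidden, and output dimensions

    # random placeholder weights
    Wxh = rng.normal(0, 0.1, (H, D))
    Whh = rng.normal(0, 0.1, (H, H))
    Wzh = rng.normal(0, 0.1, (V, H))

    def step(x_t, h_prev):
        # h_t = f(W_xh x_t + W_hh h_{t-1}) with f = tanh
        h_t = np.tanh(Wxh @ x_t + Whh @ h_prev)
        # z_t = g(W_zh h_t) with g = softmax
        e = np.exp(Wzh @ h_t)
        return h_t, e / e.sum()

    h = np.zeros(H)
    for _ in range(5):
        h, z = step(rng.normal(size=D), h)
    ```

    The hidden state h carries information across time steps, which is what lets such a model condition each output on the whole input history.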

  1. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses

    Directory of Open Access Journals (Sweden)

    Mattia Rigotti

    2010-10-01

    Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.

  2. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely play significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on it are further developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  3. Exponentially convergent state estimation for delayed switched recurrent neural networks.

    Science.gov (United States)

    Ahn, Choon Ki

    2011-11-01

    This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.

  4. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.
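    As a sketch of the L1-norm minimization application mentioned at the end of the abstract, the snippet below solves min 0.5*||Ax - b||^2 + lam*||x||_1 with a proximal-gradient (soft-thresholding) iteration rather than the paper's two-layer network dynamics; the problem data are synthetic.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(3)
    A = rng.normal(size=(20, 10))
    x_true = np.zeros(10)
    x_true[[2, 7]] = [1.5, -2.0]             # sparse ground truth
    b = A @ x_true

    lam = 0.1
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(10)
    for _ in range(2000):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    ```

    The iteration recovers the sparse support of x_true up to a small shrinkage bias proportional to lam; continuous-time neural dynamics of the kind described above can be viewed as the limit of such discrete iterations.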

  5. Land Cover Classification via Multitemporal Spatial Data by Deep Recurrent Neural Networks

    Science.gov (United States)

    Ienco, Dino; Gaetano, Raffaele; Dupaquier, Claire; Maurel, Pierre

    2017-10-01

    Nowadays, modern earth observation programs produce huge volumes of satellite image time series (SITS) that can be used to monitor geographical areas through time. How to efficiently analyze such information is still an open question in the remote sensing field. Recently, deep learning methods have proved suitable for remote sensing data, mainly for scene classification (i.e., Convolutional Neural Networks (CNNs) on single images), while only very few studies involve temporal deep learning approaches (i.e., Recurrent Neural Networks (RNNs)) for remote sensing time series. In this letter we evaluate the ability of Recurrent Neural Networks, in particular the Long Short-Term Memory (LSTM) model, to perform land cover classification considering multi-temporal spatial data derived from a time series of satellite images. We carried out experiments on two different datasets considering both pixel-based and object-based classification. The obtained results show that Recurrent Neural Networks are competitive compared to state-of-the-art classifiers, and may outperform classical approaches in the presence of low represented and/or highly mixed classes. We also show that using the alternative feature representation generated by the LSTM can improve the performance of standard classifiers.
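    To make the LSTM update referenced throughout these records concrete, here is a minimal single-cell step in NumPy; the weights are untrained placeholders and biases are omitted for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    D, H = 6, 8                    # input and hidden sizes

    # untrained placeholder weights, one matrix per gate
    Wf, Wi, Wo, Wc = (rng.normal(0, 0.1, (H, D + H)) for _ in range(4))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h, c):
        z = np.concatenate([x, h])
        f = sigmoid(Wf @ z)                 # forget gate
        i = sigmoid(Wi @ z)                 # input gate
        o = sigmoid(Wo @ z)                 # output gate
        c = f * c + i * np.tanh(Wc @ z)     # new cell state
        h = o * np.tanh(c)                  # new hidden state
        return h, c

    h, c = np.zeros(H), np.zeros(H)
    for _ in range(10):
        h, c = lstm_step(rng.normal(size=D), h, c)
    ```

    The additive cell-state update is what lets gradients flow over long time spans, which is why LSTMs handle long multi-temporal sequences like SITS pixels better than plain RNNs.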

  6. Simultaneous multichannel signal transfers via chaos in a recurrent neural network.

    Science.gov (United States)

    Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Furumai, Noriyuki; Nara, Shigetoshi

    2015-05-01

    We propose a neural network model that demonstrates the phenomenon of signal transfer between separated neuron groups via other chaotic neurons that show no apparent correlations with the input signal. The model is a recurrent neural network in which synchronous behavior between small groups of input and output neurons is assumed to have been learned as fragments of high-dimensional memory patterns, and depletion of neural connections results in chaotic wandering dynamics. Computer experiments show that when a strong oscillatory signal is applied to an input group in the chaotic regime, the signal is successfully transferred to the corresponding output group, although no correlation is observed between the input signal and the intermediary neurons. Signal transfer is also observed when multiple signals are applied simultaneously to separate input groups belonging to different memory attractors. In this sense, simultaneous multichannel communications are realized, with the chaotic neural dynamics acting as a signal transfer medium in which the signal appears to be hidden.

  7. A non-penalty recurrent neural network for solving a class of constrained optimization problems.

    Science.gov (United States)

    Hosseini, Alireza

    2016-01-01

    In this paper, we present a methodology for analyzing the convergence of differential inclusion-based neural networks for solving nonsmooth optimization problems. For a general differential inclusion, we show that if its right-hand-side set-valued map satisfies certain conditions, then the solution trajectory of the differential inclusion converges to the optimal solution set of the corresponding optimization problem. Based on this methodology, we introduce a new recurrent neural network for solving nonsmooth optimization problems. The objective function need not be convex on R(n), nor does the new neural network model require any penalty parameter. We compare our new method with some penalty-based and non-penalty-based models. Moreover, for differentiable cases, we present a circuit diagram of the new neural network. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. A Novel Recurrent Neural Network for Manipulator Control With Improved Noise Tolerance.

    Science.gov (United States)

    Li, Shuai; Wang, Huanqing; Rafique, Muhammad Usman

    2017-04-12

    In this paper, we propose a novel recurrent neural network to resolve the redundancy of manipulators for efficient kinematic control in the presence of polynomial-type noises. Leveraging the high-order derivative properties of polynomial noises, a deliberately devised neural network is proposed to eliminate the impact of noise and recover accurate tracking of desired trajectories in the workspace. Rigorous analysis shows that the proposed neural law stabilizes the system dynamics and that the position tracking error converges to zero in the presence of noise. Extensive simulations verify the theoretical results. Numerical comparisons show that existing dual neural solutions lose stability when exposed to large constant or time-varying noises. In contrast, the proposed approach works well, with a tracking error comparable to noise-free situations.

  9. Multi-stability and almost periodic solutions of a class of recurrent neural networks

    International Nuclear Information System (INIS)

    Liu Yiguang; You Zhisheng

    2007-01-01

    This paper studies multi-stability and the existence of almost periodic solutions for a class of recurrent neural networks with bounded activation functions. After introducing a sufficient condition ensuring multi-stability, many criteria guaranteeing the existence of almost periodic solutions are derived using Mawhin's coincidence degree theory. All the criteria are constructed without assuming that the activation functions are smooth, monotonic or Lipschitz continuous, or that the networks contain periodic variables (such as periodic coefficients, periodic inputs or periodic activation functions), so all criteria can easily be extended to fit many concrete forms of neural networks, such as Hopfield neural networks or cellular neural networks. Finally, various simulations are employed to illustrate the criteria

  10. Multistability of delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions

    International Nuclear Information System (INIS)

    Huang Yu-Jiao; Hu Hai-Gen

    2015-01-01

    In this paper, the multistability issue is discussed for delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions. Based on a fixed point theorem and a stability definition, sufficient criteria are established for the existence and stability of multiple equilibria of complex-valued recurrent neural networks. The number of stable equilibria is larger than that of real-valued recurrent neural networks, which can be used to achieve high-capacity associative memories. One numerical example is provided to show the effectiveness and superiority of the presented results.

  11. Stability results for stochastic delayed recurrent neural networks with discrete and distributed delays

    Science.gov (United States)

    Chen, Guiling; Li, Dingshi; Shi, Lin; van Gaans, Onno; Verduyn Lunel, Sjoerd

    2018-03-01

    We present new conditions for asymptotic stability and exponential stability of a class of stochastic recurrent neural networks with discrete and distributed time-varying delays. Our approach is based on fixed point theory and does not resort to any Lyapunov function or Lyapunov functional. Our results require neither the boundedness, monotonicity, or differentiability of the activation functions nor the differentiability of the time-varying delays. In particular, a class of neural networks without stochastic perturbations is also considered. Examples are given to illustrate our main results.

  12. Stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Wang Linshan; Zhang Zhe; Wang Yangfan

    2008-01-01

    Some criteria for the global stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters are presented. The jumping parameters considered here are generated from a continuous-time discrete-state homogeneous Markov process, which are governed by a Markov process with discrete and finite state space. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish some easy-to-test criteria of global exponential stability in the mean square for the stochastic neural networks. The criteria are computationally efficient, since they are in the forms of some linear matrix inequalities

  13. Robust sliding mode control for uncertain servo system using friction observer and recurrent fuzzy neural networks

    International Nuclear Information System (INIS)

    Han, Seong Ik; Jeong, Chan Se; Yang, Soon Yong

    2012-01-01

    A robust positioning control scheme has been developed using a friction parameter observer and recurrent fuzzy neural networks based on sliding mode control. The LuGre model is adopted as the dynamic friction model for friction compensation, because it is known to sufficiently capture the properties of nonlinear dynamic friction. The developed friction parameter observer has a simple structure and accurately estimates the friction parameters of the LuGre model. In addition, an approximation method for the system uncertainty is developed using recurrent fuzzy neural network technology to improve the positioning precision. Simulations and experiments verify the performance of the proposed robust control scheme

  14. CloudScan - A Configuration-Free Invoice Analysis System Using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Palm, Rasmus Berg; Winther, Ole; Laws, Florian

    2017-01-01

    We present CloudScan; an invoice analysis system that requires zero configuration or upfront annotation. In contrast to previous work, CloudScan does not rely on templates of invoice layout, instead it learns a single global model of invoices that naturally generalizes to unseen invoice layouts. The model is trained using data automatically extracted from end-user provided feedback. This automatic training data extraction removes the requirement for users to annotate the data precisely. We describe a recurrent neural network model that can capture long range context and compare it to a baseline logistic regression model corresponding to the current CloudScan production system. We train and evaluate the system on 8 important fields using a dataset of 326,471 invoices. The recurrent neural network and baseline model achieve 0.891 and 0.887 average F1 scores respectively on seen invoice layouts

  15. Robust sliding mode control for uncertain servo system using friction observer and recurrent fuzzy neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seong Ik [Pusan National University, Busan (Korea, Republic of); Jeong, Chan Se; Yang, Soon Yong [University of Ulsan, Ulsan (Korea, Republic of)

    2012-04-15

    A robust positioning control scheme has been developed using a friction parameter observer and recurrent fuzzy neural networks based on sliding mode control. The LuGre model is adopted as the dynamic friction model for friction compensation, because it is known to sufficiently capture the properties of nonlinear dynamic friction. The developed friction parameter observer has a simple structure and accurately estimates the friction parameters of the LuGre model. In addition, an approximation method for the system uncertainty is developed using recurrent fuzzy neural network technology to improve the positioning precision. Simulations and experiments verify the performance of the proposed robust control scheme.

  16. Online Signature Verification using Recurrent Neural Network and Length-normalized Path Signature

    OpenAIRE

    Lai, Songxuan; Jin, Lianwen; Yang, Weixin

    2017-01-01

    Inspired by the great success of recurrent neural networks (RNNs) in sequential modeling, we introduce a novel RNN system to improve the performance of online signature verification. The training objective is to directly minimize intra-class variations and to push the distances between skilled forgeries and genuine samples above a given threshold. By back-propagating the training signals, our RNN produces discriminative features with the desired metrics. Additionally, we propose a novel d...

  17. Complex Dynamical Network Control for Trajectory Tracking Using Delayed Recurrent Neural Networks

    Directory of Open Access Journals (Sweden)

    Jose P. Perez

    2014-01-01

    Full Text Available In this paper, the problem of trajectory tracking is studied. Based on V-stability and Lyapunov theory, a control law is obtained that achieves global asymptotic stability of the tracking error between a delayed recurrent neural network and a complex dynamical network. To illustrate the analytic results, we present a tracking simulation of a dynamical network whose nodes are one Lorenz dynamical system and three identical Chen dynamical systems.

  18. Online Sequence Training of Recurrent Neural Networks with Connectionist Temporal Classification

    OpenAIRE

    Hwang, Kyuyeon; Sung, Wonyong

    2015-01-01

    Connectionist temporal classification (CTC) based supervised sequence training of recurrent neural networks (RNNs) has shown great success in many machine learning areas, including end-to-end speech and handwritten character recognition. For CTC training, however, the RNN must be unrolled (or unfolded) by the length of the input sequence. This unrolling requires a lot of memory and hinders a small-footprint implementation of online learning or adaptation. Furthermore, the length of tr...
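
    The memory issue stems from the CTC forward recursion itself: the alpha table spans every frame of the unrolled sequence. A minimal NumPy sketch of the forward pass (standard CTC, not the online variant proposed above; the blank index is assumed to be 0) makes this concrete:

```python
import numpy as np

BLANK = 0  # blank symbol index (assumed 0 in this sketch)

def ctc_forward(probs, labels):
    """CTC forward (alpha) recursion over per-frame softmax outputs.

    probs has shape (T, K); the (T, S) alpha table is why standard CTC training
    must unroll the RNN over the entire input sequence before computing the loss.
    Returns P(labels | probs) for the blank-extended label sequence.
    """
    T = probs.shape[0]
    ext = [BLANK]                          # blanks around and between the labels
    for lab in labels:
        ext += [lab, BLANK]
    S = len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, BLANK]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            if s > 1 and ext[s] != BLANK and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]   # skip the blank between distinct labels
            alpha[t, s] = a * probs[t, ext[s]]
    return alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)

# Two uniform frames over {blank, 'a'}: the alignments aa, a-, -a each have
# probability 0.25, so P("a") = 0.75
probs = np.array([[0.5, 0.5], [0.5, 0.5]])
p = ctc_forward(probs, [1])
```

    The alpha array grows linearly with T, which is exactly the footprint the online training scheme above tries to avoid.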

  19. Simulating the dynamics of the neutron flux in a nuclear reactor by locally recurrent neural networks

    International Nuclear Information System (INIS)

    Cadini, F.; Zio, E.; Pedroni, N.

    2007-01-01

    In this paper, a locally recurrent neural network (LRNN) is employed for approximating the temporal evolution of a nonlinear dynamic system model of a simplified nuclear reactor. To this aim, an infinite impulse response multi-layer perceptron (IIR-MLP) is trained according to a recursive back-propagation (RBP) algorithm. The network nodes contain internal feedback paths and their connections are realized by means of IIR synaptic filters, which provide the LRNN with the necessary system state memory
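
    In an IIR-MLP the connections themselves are infinite impulse response filters, so each synapse carries its own state. A minimal sketch of one such synaptic filter (illustrative; the paper fits the filter taps with the RBP algorithm, which is not reproduced here):

```python
import numpy as np

class IIRSynapse:
    """Synaptic IIR filter: y[n] = sum_k b[k]*x[n-k] + sum_j a[j]*y[n-1-j].
    The feedback taps a[] give every connection its own internal state memory."""
    def __init__(self, b, a):
        self.b = np.asarray(b, dtype=float)
        self.a = np.asarray(a, dtype=float)
        self.x_hist = np.zeros(len(self.b))   # recent inputs, newest first
        self.y_hist = np.zeros(len(self.a))   # recent outputs, newest first

    def step(self, x):
        self.x_hist = np.roll(self.x_hist, 1)
        self.x_hist[0] = x
        y = self.b @ self.x_hist + self.a @ self.y_hist
        self.y_hist = np.roll(self.y_hist, 1)
        self.y_hist[0] = y
        return y

# Unit-step response of a first-order synapse: y converges to 1/(1 - 0.5) = 2,
# a lingering memory that a feedforward (FIR) tap of the same order cannot provide
syn = IIRSynapse(b=[1.0], a=[0.5])
for _ in range(60):
    y = syn.step(1.0)
```

    A node output is then a nonlinearity applied to the sum of its synapse outputs plus a bias, e.g. np.tanh(sum_of_synapse_outputs + bias), giving the LRNN the system state memory described above.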

  20. Some new results for recurrent neural networks with varying-time coefficients and delays

    International Nuclear Information System (INIS)

    Jiang Haijun; Teng Zhidong

    2005-01-01

    In this Letter, we consider the recurrent neural networks with varying-time coefficients and delays. By constructing new Lyapunov functional, introducing ingeniously many real parameters and applying the technique of Young inequality, we establish a series of criteria on the boundedness, global exponential stability and the existence of periodic solutions. In these criteria, we do not require that the response functions are differentiable, bounded and monotone nondecreasing. Some previous works are improved and extended
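
    The criteria in this line of work are typically obtained by differentiating a Lyapunov-Krasovskii functional along the network trajectories. A representative construction (illustrative only, not the Letter's exact functional) is:

```latex
% Delayed recurrent network (generic form):
%   \dot{x}_i(t) = -c_i(t)\,x_i(t) + \sum_j w_{ij}(t)\, f_j\big(x_j(t-\tau_{ij})\big) + I_i(t)
% A representative Lyapunov--Krasovskii functional:
V(t) \;=\; \sum_i |x_i(t)| \;+\; \sum_{i,j} \sup_t |w_{ij}(t)| \int_{t-\tau_{ij}}^{t} \big|f_j(x_j(s))\big|\, ds
% Bounding \dot{V} with Young's inequality  ab \le \tfrac{a^p}{p} + \tfrac{b^q}{q}
% (\tfrac{1}{p} + \tfrac{1}{q} = 1) yields \dot{V}(t) \le -\alpha V(t) for some \alpha > 0,
% i.e., global exponential stability, without requiring f_j to be bounded,
% differentiable, or monotone.
```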

  1. Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition

    OpenAIRE

    Li, Xiangang; Wu, Xihong

    2014-01-01

    Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, this research investigates deep extensions of LSTM, considering that deep hierarchical models have turned out to be more efficient than shallow ones. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed an...

  2. DeepProbe: Information Directed Sequence Understanding and Chatbot Design via Recurrent Neural Networks

    OpenAIRE

    Yin, Zi; Chang, Keng-hao; Zhang, Ruofei

    2017-01-01

    Information extraction and user intention identification are central topics in modern query understanding and recommendation systems. In this paper, we propose DeepProbe, a generic information-directed interaction framework which is built around an attention-based sequence to sequence (seq2seq) recurrent neural network. DeepProbe can rephrase, evaluate, and even actively ask questions, leveraging the generative ability and likelihood estimation made possible by seq2seq models. DeepProbe makes...

  3. A Heuristic Approach to Intra-Brain Communications Using Chaos in a Recurrent Neural Network Model

    Science.gov (United States)

    Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Nara, Shigetoshi

    2011-09-01

    To approach the functional roles of chaos in the brain, a heuristic model to consider mechanisms of intra-brain communications is proposed. The key idea is to use chaos in the firing pattern dynamics of a recurrent neural network consisting of binary state neurons as a propagation medium for pulse signals. Computer experiments and numerical methods are introduced to evaluate signal transport characteristics by calculating correlation functions between sending and receiving neurons of pulse signals.

  4. Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach

    International Nuclear Information System (INIS)

    Dixon, Warren

    2003-01-01

    The objective of this project is to enable current and future EM robots with an increased ability to perceive and interact with unstructured and unknown environments through the use of camera-based visual servo controllers. The scientific goals of this research are to develop a new visual servo control methodology that: (1) adapts for the unknown camera calibration parameters (e.g., focal length, scaling factors, camera position, and orientation) and the physical parameters of the robotic system (e.g., mass, inertia, friction), (2) compensates for unknown depth information (extract 3D information from the 2D image), and (3) enables multi-uncalibrated cameras to be used as a means to provide a larger field-of-view. Nonlinear Lyapunov-based techniques in conjunction with results from projective geometry are being used to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current visual servo controlled robotic systems. The potential relevance of this control methodology will be a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature extraction and recognition, to enable current EM robotic systems with the capabilities of increased accuracy, autonomy, and robustness, with a larger field of view (and hence a larger workspace). These capabilities will enable EM robots to significantly accelerate D and D operations by providing for improved robot autonomy and increased worker productivity, while also reducing the associated costs, removing the human operator from the hazardous environments, and reducing the burden and skill of the human operators

  5. Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach

    International Nuclear Information System (INIS)

    Dixon, Warren

    2002-01-01

    The objective of this project is to enable current and future EM robots with an increased ability to perceive and interact with unstructured and unknown environments through the use of camera-based visual servo controlled robots. The scientific goals of this research are to develop a new visual servo control methodology that: (1) adapts for the unknown camera calibration parameters (e.g., focal length, scaling factors, camera position and orientation) and the physical parameters of the robotic system (e.g., mass, inertia, friction), (2) compensates for unknown depth information (extract 3D information from the 2D image), and (3) enables multi-uncalibrated cameras to be used as a means to provide a larger field-of-view. Nonlinear Lyapunov-based techniques are being used to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current visual servo controlled robotic systems. The potential relevance of this control methodology will be a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature extraction and recognition, to enable current EM robotic systems with the capabilities of increased accuracy, autonomy, and robustness, with a larger field of view (and hence a larger workspace). These capabilities will enable EM robots to significantly accelerate D and D operations by providing for improved robot autonomy and increased worker productivity, while also reducing the associated costs, removing the human operator from the hazardous environments, and reducing the burden and skill of the human operators

  6. Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach

    International Nuclear Information System (INIS)

    Dixon, Warren

    2004-01-01

    There is significant motivation to provide robotic systems with improved autonomy as a means to significantly accelerate deactivation and decommissioning (DandD) operations while also reducing the associated costs, removing human operators from hazardous environments, and reducing the required burden and skill of human operators. To achieve improved autonomy, this project focused on the basic science challenges leading to the development of visual servo controllers. The challenge in developing these controllers is that a camera provides 2-dimensional image information about the 3-dimensional Euclidean-space through a perspective (range dependent) projection that can be corrupted by uncertainty in the camera calibration matrix and by disturbances such as nonlinear radial distortion. Disturbances in this relationship (i.e., corruption in the sensor information) propagate erroneous information to the feedback controller of the robot, leading to potentially unpredictable task execution. This research project focused on the development of a visual servo control methodology that targets compensating for disturbances in the camera model (i.e., camera calibration and the recovery of range information) as a means to achieve predictable response by the robotic system operating in unstructured environments. The fundamental idea is to use nonlinear Lyapunov-based techniques along with photogrammetry methods to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current robotic applications. The outcome of this control methodology is a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature recognition and extraction to enable robotic systems with the capabilities of increased accuracy, autonomy, and robustness, with a larger field of view (and hence a larger workspace). The developed methodology has been reported in numerous peer-reviewed publications and the

  7. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
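
    The model class in this study, a rate RNN with separate excitatory and inhibitory populations, can be sketched as follows (a toy Dale's-law network with invented sizes and weights, not the trained networks from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N_E, N_I = 80, 20                       # excitatory / inhibitory counts (invented)
N = N_E + N_I
# Dale's law: every outgoing weight of an E unit is >= 0, of an I unit <= 0
W = np.abs(rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)))
W[:, N_E:] *= -4.0                      # inhibitory columns, scaled to balance excitation

def f(x):
    """Rectified-tanh firing-rate nonlinearity (rates stay non-negative)."""
    return np.tanh(np.maximum(x, 0.0))

def simulate(steps=500, dt=0.1, tau=1.0, drive=0.5):
    r = np.zeros(N)
    traj = []
    for _ in range(steps):              # Euler integration of tau*dr/dt = -r + f(W r + drive)
        r = r + (dt / tau) * (-r + f(W @ r + drive))
        traj.append(r.copy())
    return np.array(traj)

traj = simulate()
```

    In the paper's setting it is the training of such E and I weights, and the resulting moment-to-moment shift in each unit's net excitatory versus inhibitory input, that embeds the feedforward activity sequences.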

  8. Bifurcation analysis on a generalized recurrent neural network with two interconnected three-neuron components

    International Nuclear Information System (INIS)

    Hajihosseini, Amirhossein; Maleki, Farzaneh; Rokni Lamooki, Gholam Reza

    2011-01-01

    Highlights: → We construct a recurrent neural network by generalizing a specific n-neuron network. → Several codimension 1 and 2 bifurcations take place in the newly constructed network. → The newly constructed network has higher capabilities to learn periodic signals. → The normal form theorem is applied to investigate dynamics of the network. → A series of bifurcation diagrams is given to support theoretical results. - Abstract: A class of recurrent neural networks is constructed by generalizing a specific class of n-neuron networks. It is shown that the newly constructed network experiences generic pitchfork and Hopf codimension one bifurcations. It is also proved that the emergence of generic Bogdanov-Takens, pitchfork-Hopf and Hopf-Hopf codimension two, and the degenerate Bogdanov-Takens bifurcation points in the parameter space is possible due to the intersections of codimension one bifurcation curves. The occurrence of bifurcations of higher codimensions significantly increases the capability of the newly constructed recurrent neural network to learn broader families of periodic signals.

  9. Model for a flexible motor memory based on a self-active recurrent neural network.

    Science.gov (United States)

    Boström, Kim Joris; Wagner, Heiko; Prieske, Markus; de Lussanet, Marc

    2013-10-01

    Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model focused on the aspects of a flexible motor memory: the storage of elementary movement patterns in the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it can continuously modulate the stored patterns. The modulation consists of an approximately linear inter- and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", which amounts to the permanent shifting of computational load from higher- to lower-level neural structures and might help to explain why humans are able to execute learned skills in a fluent and flexible manner without attending to the details of the movement. Copyright © 2013 Elsevier B.V. All rights reserved.
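
    The reservoir computing approach referred to above trains only a linear readout on top of a fixed random recurrent network. A minimal echo-state sketch (sizes, signals, and targets are invented for illustration; the paper's network is driven by measured muscular activations instead):

```python
import numpy as np

rng = np.random.default_rng(2)
N_RES = 200                                      # reservoir size (invented)
W = rng.normal(0.0, 1.0, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9: echo-state property
w_in = rng.uniform(-0.5, 0.5, N_RES)

def run_reservoir(u):
    """Drive the fixed random reservoir with the scalar input sequence u."""
    x = np.zeros(N_RES)
    states = []
    for ut in u:
        x = np.tanh(W @ x + w_in * ut)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained (ridge regression); the reservoir stays fixed
t = np.arange(300)
u = np.sin(0.1 * t)                              # stand-in command signal
y_target = np.sin(0.1 * t + 0.5)                 # stand-in "movement pattern" to reproduce
X = run_reservoir(u)
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_RES), X.T @ y_target)
y_pred = X @ w_out
```

    Because stored patterns live only in the linear readout weights, blending readouts blends outputs approximately linearly, which is the inter-/extrapolation property the model exploits.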

  10. Diagonal recurrent neural network based adaptive control of nonlinear dynamical systems using Lyapunov stability criterion.

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P

    2017-03-01

    In this paper, adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as a controller. Also, in example 4, the FCRNN is investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, on which the proposed controller is applied. The results show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
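
    The diagonal structure replaces the full recurrent weight matrix of an FCRNN with a single self-feedback weight per hidden unit. A minimal forward-pass sketch (invented weights; the Lyapunov-derived update rules from the paper are not reproduced):

```python
import numpy as np

class DiagonalRNN:
    """Diagonal recurrent network: each hidden unit feeds back only to itself,
    so the full recurrent matrix of an FCRNN collapses to one vector wd."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.wi = rng.normal(0.0, 0.5, (n_hidden, n_in))   # input weights
        self.wd = rng.uniform(-0.9, 0.9, n_hidden)         # diagonal self-feedback
        self.wo = rng.normal(0.0, 0.5, n_hidden)           # output weights
        self.s = np.zeros(n_hidden)                        # hidden state

    def step(self, x):
        # O(n) recurrence instead of the O(n^2) of a fully connected RNN
        self.s = np.tanh(self.wi @ x + self.wd * self.s)
        return self.wo @ self.s

net = DiagonalRNN(n_in=2, n_hidden=16)
ys = [net.step(np.array([np.sin(0.1 * k), 1.0])) for k in range(50)]
```

    Keeping every |wd_i| < 1 makes each self-loop a contraction, which is the kind of bound Lyapunov-based update rules are designed to preserve while adapting the weights.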

  11. Medical Concept Normalization in Social Media Posts with Recurrent Neural Networks.

    Science.gov (United States)

    Tutubalina, Elena; Miftahutdinov, Zulfat; Nikolenko, Sergey; Malykh, Valentin

    2018-06-12

    Text mining of scientific libraries and social media has already proven itself as a reliable tool for drug repurposing and hypothesis generation. The task of mapping a disease mention to a concept in a controlled vocabulary, typically to the standard thesaurus in the Unified Medical Language System (UMLS), is known as medical concept normalization. This task is challenging due to the differences in the use of medical terminology between health care professionals and social media texts coming from the lay public. To bridge this gap, we use sequence learning with recurrent neural networks and semantic representation of one- or multi-word expressions: we develop end-to-end architectures directly tailored to the task, including bidirectional Long Short-Term Memory, Gated Recurrent Units with an attention mechanism, and additional semantic similarity features based on UMLS. Our evaluation against a standard benchmark shows that recurrent neural networks improve results over an effective baseline for classification based on convolutional neural networks. A qualitative examination of mentions discovered in a dataset of user reviews collected from popular online health information platforms as well as a quantitative evaluation both show improvements in the semantic representation of health-related expressions in social media. Copyright © 2018. Published by Elsevier Inc.

  12. Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks.

    Science.gov (United States)

    Babaei, Sepideh; Geranmayeh, Amir; Seyyedsalehi, Seyyed Ali

    2010-12-01

    The supervised learning of recurrent neural networks well-suited for prediction of protein secondary structures from the underlying amino acids sequence is studied. Modular reciprocal recurrent neural networks (MRR-NN) are proposed to model the strong correlations between adjacent secondary structure elements. Besides, a multilayer bidirectional recurrent neural network (MBR-NN) is introduced to capture the long-range intramolecular interactions between amino acids in formation of the secondary structure. The final modular prediction system is devised based on the interactive integration of the MRR-NN and the MBR-NN structures to arbitrarily engage the neighboring effects of the secondary structure types concurrent with memorizing the sequential dependencies of amino acids along the protein chain. The advanced combined network augments the percentage accuracy (Q₃) to 79.36% and boosts the segment overlap (SOV) up to 70.09% when tested on the PSIPRED dataset in three-fold cross-validation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High‐Resolution Spectral Features

    Directory of Open Access Journals (Sweden)

    Hyoung‐Gook Kim

    2017-12-01

    Full Text Available Recently, deep recurrent neural networks have achieved great success in various machine learning tasks, and have also been applied for sound event detection. The detection of temporally overlapping sound events in realistic environments is much more challenging than in monophonic detection problems. In this paper, we present an approach to improve the accuracy of polyphonic sound event detection in multichannel audio based on gated recurrent neural networks in combination with auditory spectral features. In the proposed method, human hearing perception‐based spatial and spectral‐domain noise‐reduced harmonic features are extracted from multichannel audio and used as high‐resolution spectral inputs to train gated recurrent neural networks. This provides a fast and stable convergence rate compared to long short‐term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms the conventional approaches.
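
    A gated recurrent unit combines a reset gate and an update gate in a single hidden state, three weight blocks against an LSTM's four, which is the parameter saving behind the faster and more stable convergence reported above. A one-step sketch in NumPy (random stand-in weights, not a trained detector):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """One step of a gated recurrent unit: update gate z, reset gate r,
    candidate state. Three weight blocks versus an LSTM's four."""
    def __init__(self, n_in, n_h, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(n_h)
        def block():
            return (rng.normal(0, scale, (n_h, n_in)),   # input weights W
                    rng.normal(0, scale, (n_h, n_h)),    # recurrent weights U
                    np.zeros(n_h))                       # bias b
        self.Wz, self.Uz, self.bz = block()
        self.Wr, self.Ur, self.br = block()
        self.Wh, self.Uh, self.bh = block()

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)        # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)        # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)
        return (1.0 - z) * h + z * h_cand                       # leaky state update

cell = GRUCell(n_in=4, n_h=8)
h = np.zeros(8)
for _ in range(20):                  # feed a constant feature frame 20 times
    h = cell.step(np.ones(4), h)
```

    In the detection setting, x would be the per-frame spectral feature vector and a sigmoid readout on h would give per-class event activity.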

  14. Detection of nonstationary transition to synchronized states of a neural network using recurrence analyses

    Science.gov (United States)

    Budzinski, R. C.; Boaretto, B. R. R.; Prado, T. L.; Lopes, S. R.

    2017-07-01

    We study the stability of asymptotic states displayed by a complex neural network. We focus on the loss of stability of a stationary state of networks using recurrence quantifiers as tools to diagnose local and global stabilities as well as the multistability of a coupled neural network. Numerical simulations of a neural network composed of 1024 neurons in a small-world connection scheme are performed using the model of Braun et al. [Int. J. Bifurcation Chaos 08, 881 (1998), 10.1142/S0218127498000681], which is a modification of the Hodgkin-Huxley model [J. Phys. 117, 500 (1952)]. To validate the analyses, the results are compared with those produced by Kuramoto's order parameter [Chemical Oscillations, Waves, and Turbulence (Springer-Verlag, Berlin Heidelberg, 1984)]. We show that recurrence tools making use of just the integrated signals provided by the networks, such as local field potential (LFP) signals or mean-field values, bring new results on the understanding of neural behavior occurring before the synchronization states. In particular, we show the occurrence of different stationary and nonstationary asymptotic states.
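
    Kuramoto's order parameter, used above to validate the recurrence-based diagnostics, reduces a population of N phases to a single synchronization score r = |(1/N) Σ_j e^{iθ_j}|. A sketch for a 1024-unit population (random phases stand in for the actual neuron model's dynamics):

```python
import numpy as np

def kuramoto_order(phases):
    """Order parameter r = |mean_j exp(i*theta_j)|: r ~ 1 when the population
    is phase-synchronized, r ~ 0 when the phases are incoherent."""
    return np.abs(np.exp(1j * np.asarray(phases)).mean())

rng = np.random.default_rng(1)
r_sync = kuramoto_order(np.full(1024, 0.3))                  # all 1024 units in phase
r_async = kuramoto_order(rng.uniform(0.0, 2 * np.pi, 1024))  # incoherent phases
```

    For incoherent phases r scales like 1/sqrt(N), so the two regimes are cleanly separated even in a finite network.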

  15. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits

    Science.gov (United States)

    2018-01-01

    Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. PMID:29408930

  16. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2018-02-01

    Full Text Available Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures-recurrent connections, shared feed-forward projections, and shared gain fluctuations-on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.

  17. Study of hourly and daily solar irradiation forecast using diagonal recurrent wavelet neural networks

    International Nuclear Information System (INIS)

    Cao Jiacong; Lin Xingchun

    2008-01-01

    An accurate forecast of solar irradiation is required for various solar energy applications and environmental impact analyses in recent years. Comparatively, various irradiation forecast models based on artificial neural networks (ANN) perform much better in accuracy than many conventional prediction models. However, the forecast precision of most existing ANN based forecast models has not been satisfactory to researchers and engineers so far, and the generalization capability of these networks needs further improving. Combining the prominent dynamic properties of a recurrent neural network (RNN) with the enhanced ability of a wavelet neural network (WNN) in mapping nonlinear functions, a diagonal recurrent wavelet neural network (DRWNN) is newly established in this paper to perform fine forecasting of hourly and daily global solar irradiance. Some additional steps, e.g. applying historical information of cloud cover to sample data sets and the cloud cover from the weather forecast to network input, are adopted to help enhance the forecast precision. Besides, a specially scheduled two phase training algorithm is adopted. As examples, both hourly and daily irradiance forecasts are completed using sample data sets in Shanghai and Macau, and comparisons between irradiation models show that the DRWNN models are definitely more accurate

  18. Reward-based training of recurrent neural networks for cognitive and value-based tasks.

    Science.gov (United States)

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2017-01-13

    Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal's internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task.

  19. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with the fault correction process assumed to be a delayed process. On the other hand, the artificial neural network model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are carried out on a real data set

  20. Distributed Recurrent Neural Forward Models with Neural Control for Complex Locomotion in Walking Robots

    DEFF Research Database (Denmark)

    Dasgupta, Sakyasingha; Goldschmidt, Dennis; Wörgötter, Florentin

    2015-01-01

    Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain… here, an artificial bio-inspired walking system is presented which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated…

  1. A Lyapunov based approach to energy maximization in renewable energy technologies

    Science.gov (United States)

    Iyasere, Erhun

    This dissertation describes the design and implementation of Lyapunov-based control strategies for the maximization of the power captured by renewable energy harnessing technologies such as (i) a variable speed, variable pitch wind turbine, (ii) a variable speed wind turbine coupled to a doubly fed induction generator, and (iii) a solar power generating system charging a constant voltage battery. First, a torque control strategy is presented to maximize wind energy captured in variable speed, variable pitch wind turbines at low to medium wind speeds. The proposed strategy applies control torque to the wind turbine pitch and rotor subsystems to simultaneously control the blade pitch and tip speed ratio, via the rotor angular speed, to an optimum point at which the capture efficiency is maximum. The control method allows for aerodynamic rotor power maximization without exact knowledge of the wind turbine model. A series of numerical results show that the wind turbine can be controlled to achieve maximum energy capture. Next, a control strategy is proposed to maximize the wind energy captured in a variable speed wind turbine, with an internal induction generator, at low to medium wind speeds. The proposed strategy controls the tip speed ratio, via the rotor angular speed, to an optimum point at which the efficiency constant (or power coefficient) is maximal for a particular blade pitch angle and wind speed by using the generator rotor voltage as a control input. This control method allows for aerodynamic rotor power maximization without exact wind turbine model knowledge. Representative numerical results demonstrate that the wind turbine can be controlled to achieve near maximum energy capture. Finally, a power system consisting of a photovoltaic (PV) array panel, dc-to-dc switching converter, charging a battery is considered wherein the environmental conditions are time-varying. 
A backstepping PWM controller is developed to maximize the power of the solar generating system.

  2. A one-layer recurrent neural network for non-smooth convex optimization subject to linear inequality constraints

    International Nuclear Information System (INIS)

    Liu, Xiaolan; Zhou, Mi

    2016-01-01

    In this paper, a one-layer recurrent network is proposed for solving a non-smooth convex optimization problem subject to linear inequality constraints. Compared with existing neural networks for optimization, the proposed network can solve a more general class of convex optimization problems with linear inequality constraints. Convergence of the state variables of the proposed neural network to an optimal solution is guaranteed as long as the design parameters in the model exceed the derived lower bounds.
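The recurrent dynamics behind such optimization networks can be sketched as a projected subgradient flow integrated by a forward-Euler step. The toy objective, feasible box, and step size below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def project(x, upper):
    """Projection onto the feasible box {x : x <= upper} (componentwise)."""
    return np.minimum(x, upper)

def subgrad(x, target):
    """Subgradient of the non-smooth objective f(x) = sum_i |x_i - target_i|."""
    return np.sign(x - target)

# Minimize |x1 - 3| + |x2 + 1| subject to x <= (2, 5) componentwise.
upper = np.array([2.0, 5.0])
target = np.array([3.0, -1.0])
x = np.zeros(2)
dt = 0.05
for _ in range(2000):
    # Euler discretization of the recurrent dynamics dx/dt = P(x - g(x)) - x
    x = x + dt * (project(x - subgrad(x, target), upper) - x)
```

The state settles near (2, -1): the active upper bound for the first coordinate and the unconstrained minimizer for the second.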

  3. Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.

    Science.gov (United States)

    Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng

    2016-02-01

    This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.

  4. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    Directory of Open Access Journals (Sweden)

    Erik Marchi

    2017-01-01

    Full Text Available In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No studies so far have compared previous efforts to automatically recognize novel events from audio signals or given a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% average F-measure over the three databases.

  5. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some of the previously reported results by up to 9%. They also show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  6. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection.

    Science.gov (United States)

    Marchi, Erik; Vesperini, Fabio; Squartini, Stefano; Schuller, Björn

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No studies so far have compared previous efforts to automatically recognize novel events from audio signals or given a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% average F-measure over the three databases.
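The core detection principle, flagging frames whose reconstruction error exceeds a threshold learned on normal data, can be sketched with a linear autoencoder standing in for the paper's LSTM denoising autoencoder. The synthetic data and threshold rule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training frames lie near a 1-D subspace; novel frames do not.
normal = rng.normal(0, 0.05, (200, 8)) + np.outer(rng.normal(0, 1, 200), np.ones(8))

# Fit a linear autoencoder (top principal component) as a stand-in
# for the paper's LSTM denoising autoencoder.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
w = vt[0]  # shared encoder/decoder weight vector

def reconstruction_error(x):
    z = (x - mean) @ w          # encode to a 1-D latent
    x_hat = mean + z * w        # decode back to feature space
    return float(np.linalg.norm(x - x_hat))

# Threshold chosen from training errors; frames above it are flagged as novel.
threshold = max(reconstruction_error(x) for x in normal) * 1.1
novel_frame = rng.normal(0, 1, 8)  # no subspace structure: a "novel" event
```

A normal frame reconstructs almost perfectly, while the novel frame's error clears the threshold, which is exactly the activation-signal logic of the abstract.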

  7. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ordóñez

    2016-01-01

    Full Text Available Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some of the previously reported results by up to 9%. They also show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation.

  8. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some of the previously reported results by up to 9%. They also show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.
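The standard input pipeline for such wearable models segments the multichannel sensor stream into fixed-length overlapping windows before any convolutional or LSTM layer sees it. The window length and overlap below are illustrative, not the paper's settings.

```python
import numpy as np

def sliding_windows(signal, width, step):
    """Segment a (time, channels) sensor stream into overlapping windows,
    the usual input representation for ConvLSTM-style HAR models."""
    n = (signal.shape[0] - width) // step + 1
    return np.stack([signal[i * step : i * step + width] for i in range(n)])

# Example: a 3-channel accelerometer stream cut into windows at 50% overlap.
stream = np.zeros((500, 3))             # 500 samples, 3 channels
windows = sliding_windows(stream, width=100, step=50)
```

Each window then becomes one training example with a single activity label, which is how the challenge datasets mentioned above are typically prepared.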

  9. ProLanGO: Protein Function Prediction Using Neural Machine Translation Based on a Recurrent Neural Network.

    Science.gov (United States)

    Cao, Renzhi; Freitas, Colton; Chan, Leong; Sun, Miao; Jiang, Haiqing; Chen, Zhangxin

    2017-10-17

    With the development of next-generation sequencing techniques, determining protein sequences is fast and cheap, but extracting useful information from them remains relatively slow and expensive because of the limitations of traditional biological experimental techniques. Protein function prediction has been a long-standing challenge to fill the gap between the huge number of known protein sequences and their known functions. In this paper, we propose a novel method that converts the protein function prediction problem into a language translation problem, from a newly proposed protein sequence language "ProLan" to a protein function language "GOLan", and build a neural machine translation model based on recurrent neural networks to translate the "ProLan" language into the "GOLan" language. We blindly tested our method by participating in the third Critical Assessment of Function Annotation (CAFA 3) in 2016, and also evaluated its performance on selected proteins whose functions were released after the CAFA competition. The good performance on the training and testing datasets demonstrates that our proposed method is a promising direction for protein function prediction. In summary, we propose, for the first time, a method that converts the protein function prediction problem into a language translation problem and applies a neural machine translation model to it.
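Turning a protein sequence into a "sentence" that a translation model can consume can be done with overlapping k-mers as words. This is only a simplified stand-in for the paper's ProLan encoding; the sequence and the choice k = 3 are made up for illustration.

```python
def to_kmer_sentence(seq, k=3):
    """Cut a protein sequence into overlapping k-mers, producing a sentence
    of k-mer "words" for a translation-style model (simplified ProLan-like
    encoding; the real paper's vocabulary construction differs)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

words = to_kmer_sentence("MKVLAT", k=3)
```

The resulting word list would be fed to a sequence-to-sequence RNN whose target vocabulary is built from Gene Ontology terms.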

  10. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    Directory of Open Access Journals (Sweden)

    Eduard eGrinke

    2015-10-01

    Full Text Available Walking animals, like insects, with little neural computing can effectively perform complex behaviors. They can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations as well as escaping from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.
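The hysteresis effect exploited here for short-term turning memory can be reproduced with even a single recurrent neuron with strong self-excitation. This is a deliberate simplification of the paper's two-neuron network; the self-weight and the input sweep are illustrative assumptions.

```python
import numpy as np

def settle(w_self, inp, o, steps=200):
    """Iterate a recurrent neuron o <- tanh(w_self * o + inp) to its attractor."""
    for _ in range(steps):
        o = np.tanh(w_self * o + inp)
    return o

w = 2.0                          # self-weight > 1 makes the neuron bistable
inputs = np.linspace(-1, 1, 21)

o, sweep_up = -1.0, []
for u in inputs:                 # sweep the input upwards
    o = settle(w, u, o)
    sweep_up.append(o)

o, sweep_down = 1.0, []
for u in inputs[::-1]:           # sweep the input back downwards
    o = settle(w, u, o)
    sweep_down.append(o)
sweep_down = sweep_down[::-1]
```

At zero input the output depends on history: the upward sweep stays on the negative branch and the downward sweep on the positive one, forming the hysteresis loop that serves as short-term memory for the turning angle.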

  11. Forecasting energy market indices with recurrent neural networks: Case study of crude oil price fluctuations

    International Nuclear Information System (INIS)

    Wang, Jie; Wang, Jun

    2016-01-01

    In an attempt to improve the forecasting accuracy of crude oil price fluctuations, a new neural network architecture is established in this work which combines a multilayer perceptron and ERNN (Elman recurrent neural network) with a stochastic time-effective function. The ERNN is a time-varying predictive control system developed with the ability to keep a memory of recent events in order to predict future output. The stochastic time-effective function reflects the fact that recent information has a stronger effect on investors than old information. With the established model, the empirical research performs well in testing the predictive effects on four different time series indices. Compared to other models, the present model can evaluate data from the 1990s to today with high accuracy and speed. CID (complexity invariant distance) analysis and multiscale CID analysis are applied as new useful measures confirming the better predictive ability of the proposed model over traditional models. - Highlights: • A new forecasting model is developed by a random Elman recurrent neural network. • The forecasting accuracy of crude oil price fluctuations is improved by the model. • The forecasting results of the proposed model are more accurate than those of compared models. • Two new distance analysis methods are applied to confirm the predicting results.
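The memory mechanism of an Elman network, feeding the hidden state back as context input at the next time step, can be sketched as a forward pass with fixed random weights. The stochastic time-effective weighting of the paper is omitted, and the layer sizes and toy series are illustrative.

```python
import numpy as np

class ElmanRNN:
    """Minimal Elman recurrent network: the hidden state is fed back as
    context input at the next time step (forward pass only, fixed weights)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(0, 0.5, (n_hidden, n_in))
        self.w_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))
        self.w_out = rng.normal(0, 0.5, (n_out, n_hidden))

    def forward(self, sequence):
        h = np.zeros(self.w_ctx.shape[0])
        outputs = []
        for x in sequence:
            h = np.tanh(self.w_in @ x + self.w_ctx @ h)  # context feedback
            outputs.append(self.w_out @ h)
        return np.array(outputs)

net = ElmanRNN(n_in=1, n_hidden=4, n_out=1)
prices = [np.array([0.1 * t]) for t in range(10)]  # toy price series
preds = net.forward(prices)
```

Because the context loop carries past information forward, changing an early input changes later outputs, which is exactly the "memory of recent events" the abstract refers to.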

  12. Indirect adaptive fuzzy wavelet neural network with self- recurrent consequent part for AC servo system.

    Science.gov (United States)

    Hou, Runmin; Wang, Li; Gao, Qiang; Hou, Yuanglong; Wang, Chao

    2017-09-01

    This paper proposes a novel indirect adaptive fuzzy wavelet neural network (IAFWNN) to handle the nonlinearity, wide load variations, time variation, and uncertain disturbances of the AC servo system. In the proposed approach, the self-recurrent wavelet neural network (SRWNN) is employed to construct an adaptive self-recurrent consequent part for each fuzzy rule of the TSK fuzzy model. For the IAFWNN controller, the online learning algorithm is based on the back-propagation (BP) algorithm. Moreover, an improved particle swarm optimization (IPSO) is used to adapt the learning rate. The aid of an adaptive SRWNN identifier offers real-time gradient information to the adaptive fuzzy wavelet neural controller to effectively overcome the impact of parameter variations, load disturbances, and other uncertainties, and yields good dynamic performance. The asymptotic stability of the system is guaranteed by using the Lyapunov method. Simulation results and a prototype test prove that the proposed approach is effective and suitable.

  13. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    Science.gov (United States)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. For this purpose, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, DRNN is constructed by the stacks of the recurrent hidden layer to automatically extract the features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that the proposed method is more effective than traditional intelligent fault diagnosis methods.
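The input representation described above, frequency spectrum sequences computed from raw vibration frames, can be sketched with an FFT per frame. The sampling rate, frame length, and the 60 Hz tone standing in for a bearing fault component are illustrative assumptions.

```python
import numpy as np

def spectrum_sequence(vibration, frame_len):
    """Split a raw vibration signal into frames and take each frame's
    magnitude spectrum, the compact DRNN input used in place of raw samples."""
    n_frames = len(vibration) // frame_len
    frames = vibration[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 1024
t = np.arange(4 * fs) / fs
signal = np.sin(2 * np.pi * 60 * t)        # toy 60 Hz fault component
spectra = spectrum_sequence(signal, frame_len=fs)
```

Each row is one spectrum in the sequence; with a 1 Hz bin resolution the toy fault component shows up as a peak at bin 60, and the sequence of such spectra is what the recurrent layers consume.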

  14. Identification of Jets Containing b-Hadrons with Recurrent Neural Networks at the ATLAS Experiment

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    A novel b-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS Experiment. This talk presents the expected performance of the RNN-based b-tagging in simulated $t \bar t$ events. The RNN-based b-tagging processes properties of tracks associated to jets, represented as sequences. In contrast to traditional impact-parameter-based b-tagging algorithms, which assume the tracks of jets are independent of each other, RNN-based b-tagging can exploit the spatial and kinematic correlations of tracks originating from the same b-hadron. The neural network nature of the tagging algorithm also allows the flexibility of extending the input features to include more track properties than can be effectively used in traditional algorithms.

  15. Adaptive complementary fuzzy self-recurrent wavelet neural network controller for the electric load simulator system

    Directory of Open Access Journals (Sweden)

    Wang Chao

    2016-03-01

    Full Text Available Due to the complexities existing in the electric load simulator, this article develops a high-performance nonlinear adaptive controller to improve the torque tracking performance of the electric load simulator, which mainly consists of an adaptive fuzzy self-recurrent wavelet neural network controller with variable structure (VSFSWC) and a complementary controller. The VSFSWC is straightforward to apply in real-time systems and greatly improves the convergence rate and control precision. The complementary controller is designed to eliminate the effect of the approximation error between the proposed neural network controller and the ideal feedback controller without chattering phenomena. Moreover, adaptive learning laws are derived to guarantee the system stability in the sense of the Lyapunov theory. Finally, hardware-in-the-loop simulations are carried out to verify the feasibility and effectiveness of the proposed algorithms under different working conditions.

  16. New results on global exponential stability of recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Xu Shengyuan; Chu Yuming; Lu Junwei

    2006-01-01

    This Letter provides new sufficient conditions for the existence, uniqueness and global exponential stability of the equilibrium point of recurrent neural networks with time-varying delays by employing Lyapunov functions and using the Halanay inequality. The time-varying delays are not necessarily differentiable. Both Lipschitz continuous activation functions and monotone nondecreasing activation functions are considered. The derived stability criteria are expressed in terms of linear matrix inequalities (LMIs), which can be checked easily by resorting to recently developed algorithms solving LMIs. Furthermore, the proposed stability results are less conservative than some previous ones in the literature, which is demonstrated via some numerical examples
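A much coarser, norm-based sufficient check for delayed recurrent dynamics x' = -Cx + A f(x(t)) + B f(x(t - τ)) with Lipschitz activations can be coded in a few lines. This is only a sketch and is far more conservative than the LMI criteria derived in the Letter; the matrices below are made up for illustration.

```python
import numpy as np

def norm_stability_test(C, A, B, lip):
    """Sufficient (very conservative) test for global exponential stability:
    the self-feedback rates dominate the Lipschitz gain through both the
    instantaneous and the delayed connection weights."""
    return bool(np.min(np.diag(C)) >
                lip * (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)))

C = np.diag([3.0, 3.0])
A = np.array([[0.5, -0.2], [0.1, 0.4]])
B = np.array([[0.2, 0.1], [-0.1, 0.3]])
stable = norm_stability_test(C, A, B, lip=1.0)   # tanh-type activations
```

An LMI-based criterion like the Letter's would certify stability for many systems this norm test rejects, which is what "less conservative" means in the abstract.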

  17. Identification of serial number on bank card using recurrent neural network

    Science.gov (United States)

    Liu, Li; Huang, Linlin; Xue, Jian

    2018-04-01

    Identification of the serial number on a bank card has many applications. Due to different number printing modes, complex backgrounds, shape distortions, etc., it is quite challenging to achieve high identification accuracy. In this paper, we propose a method using the Normalization-Cooperated Gradient Feature (NCGF) and a Recurrent Neural Network (RNN) based on Long Short-Term Memory (LSTM) for serial number identification. The NCGF maps the gradient direction elements of the original image to direction planes such that the RNN, with direction planes as input, can recognize numbers more accurately. Taking the advantages of NCGF and RNN, we obtain 90% digit string recognition accuracy.
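The idea behind gradient direction features, mapping gradient direction elements onto separate direction planes, can be sketched as below. This is a simplified stand-in for NCGF (which additionally cooperates with shape normalization); the plane count and toy edge image are illustrative.

```python
import numpy as np

def direction_planes(image, n_dirs=8):
    """Decompose image gradients into n_dirs direction planes: each pixel's
    gradient magnitude is written onto the plane matching its direction."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    planes = np.zeros((n_dirs,) + image.shape)
    idx = (ang / (2 * np.pi / n_dirs)).astype(int) % n_dirs
    for d in range(n_dirs):
        planes[d][idx == d] = mag[idx == d]
    return planes

img = np.zeros((16, 16))
img[:, 8:] = 1.0                      # toy vertical edge
planes = direction_planes(img)
```

The stack of planes, rather than the raw grey image, is what gets fed column by column to the LSTM recognizer.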

  18. New results on global exponential stability of recurrent neural networks with time-varying delays

    Energy Technology Data Exchange (ETDEWEB)

    Xu Shengyuan [Department of Automation, Nanjing University of Science and Technology, Nanjing 210094 (China)]. E-mail: syxu02@yahoo.com.cn; Chu Yuming [Department of Mathematics, Huzhou Teacher' s College, Huzhou, Zhejiang 313000 (China); Lu Junwei [School of Electrical and Automation Engineering, Nanjing Normal University, 78 Bancang Street, Nanjing, 210042 (China)

    2006-04-03

    This Letter provides new sufficient conditions for the existence, uniqueness and global exponential stability of the equilibrium point of recurrent neural networks with time-varying delays by employing Lyapunov functions and using the Halanay inequality. The time-varying delays are not necessarily differentiable. Both Lipschitz continuous activation functions and monotone nondecreasing activation functions are considered. The derived stability criteria are expressed in terms of linear matrix inequalities (LMIs), which can be checked easily by resorting to recently developed algorithms solving LMIs. Furthermore, the proposed stability results are less conservative than some previous ones in the literature, which is demonstrated via some numerical examples.

  19. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    Science.gov (United States)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for the automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for the detection of a vehicle passing through a checkpoint. As input to the classifier we use multidimensional signals of various sensors that are installed on the checkpoint. The obtained results demonstrate that the previous approach to handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by an RNN automatically trained on appropriately labelled data.

  20. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    Science.gov (United States)

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

    Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods have achieved better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can exploit convolutional feature maps directly as input to preserve the spatial structure of the convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes, and simultaneously maintains the semantic similarity and the balanceable property of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH achieves superior performance over other state-of-the-art hashing methods.
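The retrieval side of any such hashing scheme, binarizing embeddings and ranking by Hamming distance, can be sketched as follows. The random "deep features" stand in for the network's learned outputs and are purely illustrative.

```python
import numpy as np

def binarize(embeddings):
    """Sign-threshold continuous embeddings into binary hash codes."""
    return (np.asarray(embeddings) > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists, kind="stable"), dists

rng = np.random.default_rng(1)
db_embed = rng.normal(size=(100, 32))   # stand-in for learned deep embeddings
db_codes = binarize(db_embed)
query = db_codes[7].copy()              # query sharing item 7's code
order, dists = hamming_rank(query, db_codes)
```

The quantization error mentioned in the loss function is exactly the gap between the continuous embedding and its sign-thresholded code produced by `binarize`.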

  1. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    Science.gov (United States)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. Recently, however, recognition systems have dealt with text blocks and their compound text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is thus avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach substantially improves performance.
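A sliding-window baseline estimate of the kind described can be sketched on a binarized text-line image: within each horizontal window, take a robust statistic of the lowest ink position per column. The window size, step, and the row-median rule are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def baseline_positions(binary_line, window, step):
    """Sliding-window baseline estimate for a binarized text-line image
    (rows x cols, ink = 1): in each window, the baseline height is taken
    as the median of the lowest ink row per column."""
    rows, cols = binary_line.shape
    lowest = np.array([
        np.max(np.nonzero(binary_line[:, c])[0]) if binary_line[:, c].any() else -1
        for c in range(cols)
    ], dtype=float)
    lowest[lowest < 0] = np.nan           # columns without ink are ignored
    centres, estimates = [], []
    for start in range(0, cols - window + 1, step):
        seg = lowest[start : start + window]
        if not np.all(np.isnan(seg)):
            centres.append(start + window // 2)
            estimates.append(np.nanmedian(seg))
    return np.array(centres), np.array(estimates)

# Synthetic skewed line: the ink descends one row every 20 columns.
img = np.zeros((40, 200), dtype=np.uint8)
for c in range(200):
    img[10 + c // 20, c] = 1
centres, est = baseline_positions(img, window=40, step=20)
```

The per-window estimates track the descending baseline, and shifting each column vertically by the interpolated estimate would deskew the line without ever segmenting it into subparts.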

  2. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.

  3. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the super-Turing computational power of static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  4. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    Science.gov (United States)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot with many degrees of freedom (DOFs) is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles.

  5. Global exponential stability and periodicity of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions

    International Nuclear Information System (INIS)

    Lu Junguo

    2008-01-01

    In this paper, the global exponential stability and periodicity for a class of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions are addressed by constructing suitable Lyapunov functionals and utilizing some inequality techniques. We first prove the global exponential convergence to zero of the difference between any two solutions of the original reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions; the existence and uniqueness of the equilibrium are direct results of this procedure. This approach differs from the usual one, in which the existence and uniqueness of the equilibrium and its stability are proved in two separate steps. Furthermore, we prove the periodicity of the reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions. Sufficient conditions ensuring the global exponential stability and the existence of periodic oscillatory solutions are given. These conditions are easy to check and have important significance in the design and application of reaction-diffusion recurrent neural networks with delays. Finally, two numerical examples are given to show the effectiveness of the obtained results.

  6. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    Science.gov (United States)

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to converge in finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., a lower upper bound), and thus the accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
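
    The general recipe behind such networks can be sketched as follows: define the error e(t) = A(t)x(t) - b(t), impose the design dynamics de/dt = -gamma * Phi(e) with a nonlinear activation Phi, and integrate the implied state equation. The sketch below uses the vector case, a sign-bi-power activation, and illustrative coefficient matrices; it is not the paper's exact model.

```python
import numpy as np

def sbp(e, r=0.5):
    """Sign-bi-power activation, a common choice for finite-time convergence."""
    return np.sign(e) * (np.abs(e) ** r + np.abs(e) ** (1.0 / r))

def znn_solve(A, dA, b, db, x0, gamma=10.0, dt=1e-3, T=2.0):
    """Euler-discretized recurrent (zeroing-type) network for A(t) x(t) = b(t)."""
    x, t = x0.astype(float), 0.0
    while t < T:
        err = A(t) @ x - b(t)
        # implied dynamics: A x' = b' - A' x - gamma * Phi(err)
        xdot = np.linalg.solve(A(t), db(t) - dA(t) @ x - gamma * sbp(err))
        x = x + dt * xdot
        t += dt
    return x

# Illustrative time-varying system (diagonal, so always invertible)
A = lambda t: np.array([[2.0 + np.sin(t), 0.0], [0.0, 2.0 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
b = lambda t: np.array([np.sin(t), np.cos(t)])
db = lambda t: np.array([np.cos(t), -np.sin(t)])

x = znn_solve(A, dA, b, db, x0=np.zeros(2))
residual = np.linalg.norm(A(2.0) @ x - b(2.0))
```

    The residual at the final time stays near zero because the design dynamics continually drive the time-varying error toward the origin.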

  7. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    Science.gov (United States)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

    A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost-effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which, owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information provides fast and robust training. 
By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.
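
    Under generalisation conditions a NARX model runs closed-loop: past predicted outputs are fed back as regressors together with the exogenous inputs. The sketch below illustrates only this input/feedback structure; a fixed linear map stands in for the trained network (the real system would use the RAGD-trained multilayer net).

```python
import numpy as np

def narx_predict(f, y_init, u, na=2, nb=2):
    """Closed-loop NARX simulation: past *predicted* outputs are fed back,
    which is what makes the operation (and training) fully recurrent."""
    y = list(y_init)                                    # seed with na known outputs
    for t in range(na, len(u)):
        x = np.concatenate([y[t - na:t], u[t - nb:t]])  # regressor vector
        y.append(f(x))
    return np.array(y)

# Hypothetical "trained" mapping: a fixed linear layer standing in for the network
w = np.array([0.5, 0.2, 1.0, 0.0])
f = lambda x: float(w @ x)

u = np.ones(6)                                          # exogenous input (e.g. crank kinematics)
y = narx_predict(f, y_init=[0.0, 0.0], u=u)
```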

  8. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    Science.gov (United States)

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.

  9. Analysis of recurrent neural networks for short-term energy load forecasting

    Science.gov (United States)

    Di Persio, Luca; Honchar, Oleksandr

    2017-11-01

    Short-term forecasts have recently gained increasing attention because of the rise of competitive electricity markets. In fact, short-term forecasts of possible future loads turn out to be fundamental for building efficient energy management strategies as well as for avoiding energy wastage. Such challenges are difficult to tackle from both a theoretical and an applied point of view. The latter tasks require sophisticated methods to manage multidimensional time series related to stochastic phenomena which are often highly interconnected. In the present work we first review novel approaches to energy load forecasting based on recurrent neural networks, focusing our attention on long short-term memory (LSTM) architectures. Such artificial neural networks have been widely applied to problems dealing with sequential data, e.g., in socio-economic settings, for text recognition purposes, and concerning video signals, consistently showing their effectiveness in modeling complex temporal data. Moreover, we consider different novel variations of basic LSTMs, such as the sequence-to-sequence approach and bidirectional LSTMs, aiming at providing effective models for energy load data. Last but not least, we test all the described algorithms on real energy load data, showing not only that deep recurrent networks can be successfully applied to energy load forecasting, but also that this approach can be extended to other problems based on time series prediction.

  10. An automatic microseismic or acoustic emission arrival identification scheme with deep recurrent neural networks

    Science.gov (United States)

    Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi

    2018-02-01

    Conventional arrival pick-up algorithms cannot avoid manual modification of their parameters for the simultaneous identification of multiple events under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, this study proposes an algorithm that picks the arrivals of microseismic or acoustic emission events with deep recurrent neural networks. The arrival identification is performed in two steps: a training phase and a testing phase. The training process is modelled by deep recurrent neural networks with a Long Short-Term Memory architecture. During the testing phase, the learned weights are utilized to identify the arrivals in the microseismic/acoustic emission data sets. The data sets were obtained from rock physics experiments on acoustic emission; to obtain data sets under different SNRs, random noise was added to the raw experimental data. The results show that the proposed method attains a hit rate above 80 per cent at an SNR of 0 dB, and approximately 70 per cent at an SNR of -5 dB, within an absolute error of 10 sampling points. These results indicate that the proposed method has high picking precision and robustness.

  11. Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network.

    Science.gov (United States)

    Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Yu, Junzhi; Tan, Min

    2016-02-01

    The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by the shape theory. This theory can generate a set of feasible formations that share the same relative relation among robots. An optimal formation is one from the feasible formation set that has the minimum distance to the initial formation of the multirobot system. The formation problem is thus transformed into an optimization problem. In addition, the orientation, scale, and admissible range of the formation can also be considered as constraints in the optimization problem. Furthermore, if all robots are identical, their positions in the system are exchangeable, and each robot does not necessarily move to one specific position in the formation. In this case, the optimal formation problem becomes a combinatorial optimization problem, whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinatorial optimization problem can be approximately transformed into a convex optimization problem. Due to the involvement of the Euclidean norm in the distance, the objective functions of these optimization problems are nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, some simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.

  12. Using deep recurrent neural network for direct beam solar irradiance cloud screening

    Science.gov (United States)

    Chen, Maosi; Davis, John M.; Liu, Chaoshun; Sun, Zhibin; Zempila, Melina Maria; Gao, Wei

    2017-09-01

    Cloud screening is an essential procedure for in-situ calibration and atmospheric property retrieval with the (UV-)MultiFilter Rotating Shadowband Radiometer [(UV-)MFRSR]. A previous study explored a cloud screening algorithm for direct-beam (UV-)MFRSR voltage measurements based on a stability assumption over a long time period (typically a half day or a whole day). Designing such an algorithm requires in-depth understanding of radiative transfer and delicate data manipulation. Recent rapid developments in deep neural networks and computation hardware have opened a window for modeling complicated end-to-end systems with a standardized strategy. In this study, a multi-layer dynamic bidirectional recurrent neural network is built for determining the cloudiness at each time point, trained with a 17-year dataset and tested with another 1-year dataset. The dataset consists of the daily 3-minute cosine-corrected voltages, airmasses, and the corresponding cloud/clear-sky labels at two stations of the USDA UV-B Monitoring and Research Program. The results show that the optimized neural network model (3 layers, 250 hidden units, and 80 epochs of training) has an overall test accuracy of 97.87% (97.56% for the Oklahoma site and 98.16% for the Hawaii site). Generally, the neural network model grasps the key concept of the original model, using data from the entire day rather than short nearby measurements to perform cloud screening. A scrutiny of the logits layer suggests that the neural network model automatically learns to calculate a quantity similar to total optical depth and finds an appropriate threshold for cloud screening.

  13. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    Science.gov (United States)

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

    The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase in which horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate its efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task, contour integration, demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve tracing, where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.

  14. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of its synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neurons' synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the
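
    The three-threshold update itself is compact. A minimal single-neuron sketch, with illustrative threshold and learning-rate values rather than those derived in the paper:

```python
import numpy as np

def three_threshold_update(w, x, h, lr=0.1, th_low=-1.0, th_mid=0.0, th_high=1.0):
    """Update the weights of one neuron given input pattern x and local field h.
    Only synapses with active inputs (x == 1) are modified."""
    if h <= th_low or h >= th_high:
        return w                          # field outside the plastic window: no change
    sign = 1.0 if h > th_mid else -1.0    # potentiate above, depress below th_mid
    return w + lr * sign * x

w = np.zeros(4)
x = np.array([1.0, 0.0, 1.0, 0.0])

w1 = three_threshold_update(w, x, h=0.5)    # inside window, above th_mid: potentiation
w2 = three_threshold_update(w1, x, h=-0.5)  # inside window, below th_mid: depression
w3 = three_threshold_update(w2, x, h=2.0)   # above th_high: no plasticity
```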

  15. Improving protein disorder prediction by deep bidirectional long short-term memory recurrent neural networks.

    Science.gov (United States)

    Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-03-01

    Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenging problem in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have implemented deep bidirectional LSTM recurrent neural networks for the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, has steadily improved over a similar method using a traditional, window-based neural network (SPINE-D) in all datasets tested without separate training on short and long disordered regions. Independent tests on four other datasets, including the datasets from critical assessment of structure prediction (CASP) techniques and >10 000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTM with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-Disorder is available as a web server and as a standalone program at: http://sparks-lab.org/server/SPOT-disorder/index.php . j.hanson@griffith.edu.au or yuedong.yang@griffith.edu.au or yaoqi.zhou@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  16. The synaptic properties of cells define the hallmarks of interval timing in a recurrent neural network.

    Science.gov (United States)

    Pérez, Oswaldo; Merchant, Hugo

    2018-04-03

    Extensive research has described two key features of interval timing. The bias property is associated with accuracy and implies that time is overestimated for short intervals and underestimated for long intervals. The scalar property is linked to precision and states that the variability of interval estimates increases as a function of interval duration. The neural mechanisms behind these properties are not well understood. Here we implemented a recurrent neural network that mimics a cortical ensemble and includes cells that show paired-pulse facilitation and slow inhibitory synaptic currents. The network produces interval-selective responses and reproduces both the bias and scalar properties when a Bayesian decoder reads its activity. Notably, the interval selectivity, timing accuracy, and precision of the network showed complex changes as a function of the decay time constants of the modeled synaptic properties and the level of background activity of the cells. These findings suggest that physiological values of the time constants for paired-pulse facilitation and GABAb, as well as the internal state of the network, determine the bias and scalar properties of interval timing. Significance Statement: Timing is a fundamental element of complex behavior, including music and language. Temporal processing in a wide variety of contexts shows two primary features: time estimates exhibit a shift towards the mean (the bias property) and are more variable for longer intervals (the scalar property). We implemented a recurrent neural network that includes long-lasting synaptic currents, which can not only produce interval-selective responses but also follow the bias and scalar properties. Interestingly, only physiological values of the time constants for paired-pulse facilitation and GABAb, as well as intermediate background activity within the network, can reproduce the two key features of interval timing. Copyright © 2018 the authors.
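
    Paired-pulse facilitation of the kind modeled above is commonly written as a facilitation variable that decays to baseline and jumps at each presynaptic spike. A textbook-style sketch (illustrative parameter values, not the paper's fitted constants):

```python
def facilitation(spike_times, U=0.2, tau_f=0.5, t_end=1.0, dt=1e-3):
    """Tsodyks-Markram-style facilitation variable u: decays toward the baseline U
    with time constant tau_f and is incremented at each presynaptic spike."""
    u, trace = U, []
    spikes = set(round(s / dt) for s in spike_times)
    for n in range(int(t_end / dt)):
        if n in spikes:
            u += U * (1.0 - u)        # paired-pulse facilitation: later spikes see larger u
        u += dt * (U - u) / tau_f     # exponential decay back to baseline
        trace.append(u)
    return trace

# Two spikes 50 ms apart: the second response is facilitated relative to the first
trace = facilitation([0.1, 0.15])
```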

  17. Tracking Control Based on Recurrent Neural Networks for Nonlinear Systems with Multiple Inputs and Unknown Deadzone

    Directory of Open Access Journals (Sweden)

    J. Humberto Pérez-Cruz

    2012-01-01

    This paper deals with the problem of trajectory tracking for a broad class of uncertain nonlinear systems with multiple inputs, each one subject to an unknown symmetric deadzone. On the basis of a model of the deadzone as a combination of a linear term and a disturbance-like term, a continuous-time recurrent neural network is directly employed in order to identify the uncertain dynamics. By using a Lyapunov analysis, the exponential convergence of the identification error to a bounded zone is demonstrated. Subsequently, by a proper control law, the state of the neural network is compelled to follow a bounded reference trajectory. This control law is designed in such a way that the singularity problem is conveniently avoided and the exponential convergence to a bounded zone of the difference between the state of the neural identifier and the reference trajectory can be proven. Thus, the exponential convergence of the tracking error to a bounded zone and the boundedness of all closed-loop signals can be guaranteed. One of the main advantages of the proposed strategy is that the controller can work satisfactorily without any specific knowledge of an upper bound for the unmodeled dynamics and/or the disturbance term.
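
    The deadzone decomposition mentioned above, D(u) = m*u + d(u) with the disturbance-like term d(u) bounded by m*b, can be checked numerically; the slope and break-point values below are illustrative:

```python
def deadzone(u, m=2.0, b=0.5):
    """Symmetric deadzone with slope m and break-point b."""
    if u > b:
        return m * (u - b)
    if u < -b:
        return m * (u + b)
    return 0.0

def disturbance(u, m=2.0, b=0.5):
    """d(u) = D(u) - m*u; bounded by m*b, which is what the Lyapunov analysis exploits."""
    return deadzone(u, m, b) - m * u

# Sample the disturbance term inside and outside the deadzone band
vals = [disturbance(u) for u in (-2.0, -0.3, 0.0, 0.4, 3.0)]
```

    Outside the band the disturbance saturates at exactly -m*b (or +m*b), so it is bounded everywhere, which is what lets the neural identifier absorb it.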

  18. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks.

    Science.gov (United States)

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2017-10-25

    A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by all kinds of factors. In order to overcome the negative effects of variability in signals, the proposed model employs the deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed to time-frequency frames as the input to the model. The limb movement is estimated by the model that is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The model of deep architecture is promising in EMG decoding and optimization of network structures can increase the accuracy and robustness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  19. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    Science.gov (United States)

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
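
    For the projection neural networks that this model generalizes, the dynamics take the form x' = -x + P(x - alpha * grad f(x)), where P projects onto the constraint set. An Euler-integrated sketch on a toy box-constrained problem (not the paper's pseudoconvex formulation):

```python
import numpy as np

def projection_nn(grad, lo, hi, x0, alpha=0.5, dt=0.1, steps=500):
    """Projection neural network dynamics x' = -x + P(x - alpha*grad(x)),
    where P clips to the box [lo, hi]; equilibria are the constrained optima."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + np.clip(x - alpha * grad(x), lo, hi))
    return x

# minimize (x0 - 2)^2 + (x1 + 1)^2 subject to 0 <= x <= 1
grad = lambda x: 2.0 * (x - np.array([2.0, -1.0]))
x_star = projection_nn(grad, lo=0.0, hi=1.0, x0=np.zeros(2))
```

    The unconstrained minimizer (2, -1) lies outside the box, so the network state converges to the projected optimum (1, 0).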

  20. A novel recurrent neural network with one neuron and finite-time convergence for k-winners-take-all operation.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Cao, Jinde

    2010-07-01

    In this paper, based on a one-neuron recurrent neural network, a novel k-winners-take-all (k-WTA) network is proposed. Finite-time convergence of the proposed neural network is proved using the Lyapunov method. The k-WTA operation is first converted equivalently into a linear programming problem. Then, a one-neuron recurrent neural network is proposed to find the kth or (k+1)th largest input of the k-WTA problem. Furthermore, a k-WTA network is designed based on the proposed neural network to perform the k-WTA operation. Compared with the existing k-WTA networks, the proposed network has a simple structure and finite-time convergence. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed k-WTA network.
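
    The idea of a one-neuron k-WTA network can be sketched as a single state variable y that settles between the kth and (k+1)th largest inputs, after which thresholding the inputs at y reads out the winners. The Euler simulation below is an illustration of this principle, not the paper's exact finite-time model:

```python
import numpy as np

def k_wta(x, k, dt=0.01, steps=2000):
    """One-neuron recurrent dynamics: y rises or falls until exactly k inputs
    exceed it; the winners are then read out by thresholding at y."""
    y = float(np.min(x))
    for _ in range(steps):
        y += dt * (np.sum(x > y) - k)   # Euler step of the single state variable
    return (x > y).astype(int)

x = np.array([0.3, 0.9, 0.5, 0.1])
winners = k_wta(x, k=2)
```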

  1. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network.

    Science.gov (United States)

    Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang

    2017-01-01

    By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and the robot dynamics into account, is presented and investigated for fault-tolerant motion planning of a redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network.

  2. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    OpenAIRE

    Naikwad, S. N.; Dudul, S. V.

    2009-01-01

    A focused time lagged recurrent neural network (FTLR NN) with a gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear operation where the reaction is exothermic. A literature review shows that process control of a CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes tempora...
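
    The gamma memory filter itself is a cascade of identical leaky integrators, g_k(n) = (1 - mu) g_k(n-1) + mu g_{k-1}(n-1), with g_0 holding the current input; mu trades memory depth against resolution. A small sketch of its impulse response:

```python
import numpy as np

def gamma_memory(x, taps=3, mu=0.5):
    """Gamma memory filter: each tap is a leaky integrator fed by the previous
    tap, giving a progressively delayed and smeared copy of the input."""
    g = np.zeros(taps + 1)             # g[0] holds the current input
    out = []
    for sample in x:
        prev = g.copy()
        g[0] = sample
        for k in range(1, taps + 1):
            g[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        out.append(g.copy())
    return np.array(out)

# Impulse response: tap k peaks later and broader as k grows
states = gamma_memory([1.0, 0.0, 0.0, 0.0], taps=2, mu=0.5)
```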

  3. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.

    Directory of Open Access Journals (Sweden)

    H Francis Song

    2016-02-01

    The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, "trained" networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale's principle), which is essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. 
Our results demonstrate the wide range of neural

  4. Short-Term Electricity Consumption Forecasting with Double Seasonal ARIMA and Elman-Recurrent Neural Network [PERAMALAN KONSUMSI LISTRIK JANGKA PENDEK DENGAN ARIMA MUSIMAN GANDA DAN ELMAN-RECURRENT NEURAL NETWORK]

    Directory of Open Access Journals (Sweden)

    Suhartono Suhartono

    2009-07-01

    Full Text Available Neural networks (NN) are one of many methods used to predict hourly electricity consumption in many countries. The NN methods used in many previous studies are the Feed-Forward Neural Network (FFNN) or Autoregressive Neural Network (AR-NN). The AR-NN model is not able to capture and explain the effect of a moving average (MA) order on a time series. This research was conducted with the purpose of reviewing the application of another type of NN, namely the Elman-Recurrent Neural Network (Elman-RNN), which can explain the MA order effect, and of comparing its prediction accuracy with multiple seasonal ARIMA (Autoregressive Integrated Moving Average) models. As a case study, we used hourly electricity consumption data from Mengare, Gresik. The analysis showed that the best double seasonal ARIMA model for short-term forecasting of the case study data is ARIMA([1,2,3,4,6,7,9,10,14,21,33],1,8)(0,1,1)^24(1,1,0)^168. This model produces white-noise residuals, but they are not normally distributed, owing to suspected outliers. Iterative outlier detection produced 14 innovation outliers. Four Elman-RNN input configurations were examined and tested for forecasting the data: inputs matching the ARIMA lags; the ARIMA lags plus 14 outlier dummies; lags at multiples of 24 up to lag 480; and lag 1 together with lags at multiples of 24 plus 1. All four networks use one hidden layer with a tangent-sigmoid activation function and one linear output. A comparison of forecast accuracy based on out-of-sample MAPE showed that the fourth network, Elman-RNN(22, 3, 1), is the best model for short-term forecasting of hourly electricity consumption in Mengare, Gresik.
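
    As a rough illustration of the architecture class compared in this study, the NumPy sketch below implements an Elman network forward pass: one hidden layer with context (feedback) units, a tangent-sigmoid activation, and a linear output, as described in the abstract (e.g., the (22, 3, 1) network: 22 lagged inputs, 3 hidden units, 1 output). The weights are random and untrained, and all names are illustrative, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(1)

class ElmanRNN:
    """Minimal Elman network: one hidden layer whose previous state is fed
    back as context, tanh (tangent-sigmoid) activation, linear output."""
    def __init__(self, n_in, n_hid, n_out):
        s = 0.1
        self.W_in = rng.normal(scale=s, size=(n_hid, n_in))
        self.W_rec = rng.normal(scale=s, size=(n_hid, n_hid))  # context feedback
        self.W_out = rng.normal(scale=s, size=(n_out, n_hid))
        self.h = np.zeros(n_hid)

    def step(self, x):
        # New hidden state depends on the current input and the context units.
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.W_out @ self.h  # linear output unit

net = ElmanRNN(n_in=22, n_hid=3, n_out=1)
series = rng.normal(size=(48, 22))  # e.g. 48 hourly vectors of 22 lagged features
preds = np.array([net.step(x) for x in series])
print(preds.shape)  # (48, 1)
```

In practice the 22 inputs would be the selected lags of the consumption series (here random placeholders), and the weights would be fit by backpropagation through time.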

  5. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator.

    Science.gov (United States)

    Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy

    2013-01-01

    The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.

  6. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network.

    Science.gov (United States)

    Del Papa, Bruno; Priesemann, Viola; Triesch, Jochen

    2017-01-01

    Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions - matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model's performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN's spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
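
    A standard way to obtain the avalanche-size statistics referred to above is to bin network activity in time and define an avalanche as a maximal run of active bins bounded by silent bins, its size being the total spike count in the run. The following is a minimal sketch of that bookkeeping (not the SORN model's code):

```python
import numpy as np

def avalanche_sizes(activity):
    """Split a binned spike-count trace into avalanches: maximal runs of
    non-silent bins separated by silent (zero-count) bins. The size of an
    avalanche is the total number of spikes it contains."""
    sizes, current = [], 0
    for count in activity:
        if count > 0:
            current += count
        elif current > 0:          # a silent bin ends the current avalanche
            sizes.append(current)
            current = 0
    if current > 0:                # trace may end mid-avalanche
        sizes.append(current)
    return sizes

# Toy trace: two avalanches, of sizes 3 + 1 = 4 and 2.
trace = [0, 3, 1, 0, 0, 2, 0]
print(avalanche_sizes(trace))  # [4, 2]
```

Criticality analyses then test whether the resulting size distribution is consistent with a power law, e.g. via maximum-likelihood exponent fits.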

  7. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework

    Science.gov (United States)

    Wang, Xiao-Jing

    2016-01-01

    The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, “trained” networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale’s principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity
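
    The sign constraint described above (Dale's principle) can be imposed during gradient training by parameterizing the recurrent weights as a rectified free matrix multiplied by a fixed diagonal sign matrix, so each unit's outgoing connections are all excitatory or all inhibitory. The NumPy sketch below illustrates that idea; it is a minimal stand-in for, not a reproduction of, the paper's Theano implementation, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

N, n_exc = 10, 8  # e.g. 80% excitatory, 20% inhibitory units
# Fixed sign vector: +1 for excitatory units, -1 for inhibitory units.
signs = np.array([1.0] * n_exc + [-1.0] * (N - n_exc))
D = np.diag(signs)

# Free (unconstrained) parameters, updated by gradient descent as usual.
W_free = rng.normal(size=(N, N))

def effective_weights(W_free, D):
    """Dale's principle: rectify the free weights, then apply the fixed
    column signs, so unit j's outgoing weights all share the sign of D[j, j]."""
    return np.maximum(W_free, 0.0) @ D

W = effective_weights(W_free, D)
# Columns 0..n_exc-1 are non-negative (excitatory), the rest non-positive.
assert (W[:, :n_exc] >= 0).all() and (W[:, n_exc:] <= 0).all()
```

Because the rectification and the sign matrix are differentiable almost everywhere, gradients flow to `W_free` while the constraint holds exactly at every training step.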

  8. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Directory of Open Access Journals (Sweden)

    Daniel Durstewitz

    2017-06-01

    Full Text Available The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover
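
    For intuition about the latent dynamics being estimated, a piecewise-linear RNN can be simulated directly. One common form combines a linear term with a ReLU nonlinearity, so the map is an ordinary linear dynamical system within each orthant of the state space. The NumPy sketch below shows the generative side only, with illustrative parameter names; it is not the paper's estimation code.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_plrnn(A, W, h, z0, T, noise_sd=0.0):
    """Latent dynamics of a piecewise-linear RNN (one common form):
    z_t = A z_{t-1} + W max(0, z_{t-1}) + h + eps_t.
    The ReLU makes the map piecewise linear: within each orthant, the
    system reduces to a fixed linear dynamical system."""
    zs = [np.asarray(z0, float)]
    for _ in range(T):
        z = zs[-1]
        eps = noise_sd * rng.normal(size=z.shape)
        zs.append(A @ z + W @ np.maximum(z, 0.0) + h + eps)
    return np.stack(zs)

M = 3
A = 0.5 * np.eye(M)               # stable diagonal linear part
W = 0.1 * rng.normal(size=(M, M)) # weak piecewise-linear coupling
h = np.zeros(M)
traj = simulate_plrnn(A, W, h, z0=rng.normal(size=M), T=200)
print(traj.shape)  # (201, 3)
```

In the state space setting of the paper, such latent trajectories are additionally passed through a noisy observation model, and both processes are inferred jointly via Expectation-Maximization.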

  9. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by the physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although they may benefit related recognition tasks, it is not easy to accurately detect such temporal segments. An automatic temporal segment detection framework using bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, which synthesizes the local and global temporal-spatial information more efficiently, is presented. The framework is evaluated in detail over the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for solving the problem of temporal segment detection.
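
    The bilateral (bidirectional) scheme can be sketched independently of the LSTM cell itself: run one recurrent pass left-to-right and another right-to-left, then concatenate the per-frame hidden states, so each frame's feature vector sees both past and future context, which is what makes onset/apex/offset boundaries easier to localize. The sketch below uses a plain tanh cell as a stand-in for the LSTM; all names and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def rnn_pass(xs, W_in, W_rec):
    """One directional pass of a simple tanh recurrent cell
    (a stand-in here for the LSTM cell used in BLSTM-RNNs)."""
    h = np.zeros(W_rec.shape[0])
    out = []
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
        out.append(h)
    return np.stack(out)

def bidirectional_features(xs, params_f, params_b):
    """Process the sequence forward and backward, then concatenate the
    two hidden states per frame (the bidirectional feature)."""
    fwd = rnn_pass(xs, *params_f)
    bwd = rnn_pass(xs[::-1], *params_b)[::-1]  # reverse back to frame order
    return np.concatenate([fwd, bwd], axis=1)

T, d_in, d_hid = 20, 8, 16
xs = rng.normal(size=(T, d_in))  # e.g. 20 frames of spatial features
pf = (rng.normal(scale=0.1, size=(d_hid, d_in)), rng.normal(scale=0.1, size=(d_hid, d_hid)))
pb = (rng.normal(scale=0.1, size=(d_hid, d_in)), rng.normal(scale=0.1, size=(d_hid, d_hid)))
feats = bidirectional_features(xs, pf, pb)
print(feats.shape)  # (20, 32)
```

A per-frame classifier over these concatenated features then labels each frame as neutral, onset, apex, or offset.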

  10. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    Science.gov (United States)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

    The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such phenomenon currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that large learning periods are important in order to improve the network learning capacity and discuss this ability in the presence of distinct inhibitory currents.

  11. Distributed representations of action sequences in anterior cingulate cortex: A recurrent neural network approach.

    Science.gov (United States)

    Shahnazian, Danesh; Holroyd, Clay B

    2018-02-01

    Anterior cingulate cortex (ACC) has been the subject of intense debate over the past 2 decades, but its specific computational function remains controversial. Here we present a simple computational model of ACC that incorporates distributed representations across a network of interconnected processing units. Based on the proposal that ACC is concerned with the execution of extended, goal-directed action sequences, we trained a recurrent neural network to predict each successive step of several sequences associated with multiple tasks. In keeping with neurophysiological observations from nonhuman animals, the network yields distributed patterns of activity across ACC neurons that track the progression of each sequence, and in keeping with human neuroimaging data, the network produces discrepancy signals when any step of the sequence deviates from the predicted step. These simulations illustrate a novel approach for investigating ACC function.

  12. Identification of Jets Containing $b$-Hadrons with Recurrent Neural Networks at the ATLAS Experiment

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    A novel $b$-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS experiment at the CERN Large Hadron Collider. The RNN based $b$-tagging algorithm processes charged particle tracks associated to jets without reliance on secondary vertex finding, and can augment existing secondary-vertex based taggers. In contrast to traditional impact-parameter-based $b$-tagging algorithms which assume that tracks associated to jets are independent from each other, the RNN based $b$-tagging algorithm can exploit the spatial and kinematic correlations between tracks which are initiated from the same $b$-hadrons. This new approach also accommodates an extended set of input variables. This note presents the expected performance of the RNN based $b$-tagging algorithm in simulated $t \\bar t$ events at $\\sqrt{s}=13$ TeV.

  13. Recurrent fuzzy neural network backstepping control for the prescribed output tracking performance of nonlinear dynamic systems.

    Science.gov (United States)

    Han, Seong-Ik; Lee, Jang-Myung

    2014-01-01

    This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement on the approximation performances in order to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system was confirmed based on the Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  14. An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An incremental time-delay neural network based on synapse growth, which is suitable for dynamic control and learning of autonomous robots, is proposed to improve the learning and retrieving performance of a dynamical recurrent associative memory architecture. The model allows steady and continuous establishment of associative memory for spatio-temporal regularities and time series in discrete sequences of inputs. The inserted hidden units can be taken as long-term memories that expand the capacity of the network and may fade away under certain conditions. Preliminary experiments have shown that this incremental network may be a promising approach to endow autonomous robots with the ability to adapt to new data without destroying learned patterns. The system also benefits from its potential for chaotic dynamics, which can support emergent behavior.

  15. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays

    Science.gov (United States)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2016-07-01

    This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design a robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.

  16. A statistical framework for evaluating neural networks to predict recurrent events in breast cancer

    Science.gov (United States)

    Gorunescu, Florin; Gorunescu, Marina; El-Darzi, Elia; Gorunescu, Smaranda

    2010-07-01

    Breast cancer is the second leading cause of cancer deaths in women today. Sometimes, breast cancer can return after primary treatment. A medical diagnosis of recurrent cancer is often a more challenging task than the initial one. In this paper, we investigate the potential contribution of neural networks (NNs) to support health professionals in diagnosing such events. The NN algorithms are tested and applied to two different datasets. An extensive statistical analysis has been performed to verify our experiments. The results show that a simple network structure for both the multi-layer perceptron and radial basis function can produce equally good results, not all attributes are needed to train these algorithms and, finally, the classification performances of all algorithms are statistically robust. Moreover, we have shown that the best performing algorithm will strongly depend on the features of the datasets, and hence, there is not necessarily a single best classifier.

  17. Precision position control of servo systems using adaptive back-stepping and recurrent fuzzy neural networks

    International Nuclear Information System (INIS)

    Kim, Han Me; Kim, Jong Shik; Han, Seong Ik

    2009-01-01

    To improve the position tracking performance of servo systems, a position tracking control using an adaptive back-stepping control (ABSC) scheme and recurrent fuzzy neural networks (RFNN) is proposed. An adaptive rule of the ABSC based on the system dynamics and a dynamic friction model is also suggested to compensate for nonlinear dynamic friction characteristics. However, it is difficult to reduce the position tracking error of servo systems by using only the ABSC scheme, because of system uncertainties that cannot be exactly identified during the modeling of servo systems. Therefore, in order to overcome system uncertainties and thereby improve the position tracking performance of servo systems, the RFNN technique is additionally applied to the servo system. The feasibility of the proposed control scheme for a servo system is validated through experiments. Experimental results show that the servo system with the ABSC controller based on the dual friction observer and an RFNN including a reconstruction error estimator can achieve the desired tracking performance and robustness.

  18. Discrete-time recurrent neural networks with time-varying delays: Exponential stability analysis

    International Nuclear Information System (INIS)

    Liu, Yurong; Wang, Zidong; Serrano, Alan; Liu, Xiaohui

    2007-01-01

    This Letter is concerned with the analysis problem of exponential stability for a class of discrete-time recurrent neural networks (DRNNs) with time delays. The delay is of the time-varying nature, and the activation functions are assumed to be neither differentiable nor strict monotonic. Furthermore, the description of the activation functions is more general than the recently commonly used Lipschitz conditions. Under such mild conditions, we first prove the existence of the equilibrium point. Then, by employing a Lyapunov-Krasovskii functional, a unified linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the DRNNs to be globally exponentially stable. It is shown that the delayed DRNNs are globally exponentially stable if a certain LMI is solvable, where the feasibility of such an LMI can be easily checked by using the numerically efficient Matlab LMI Toolbox. A simulation example is presented to show the usefulness of the derived LMI-based stability condition
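
    As background for the Lyapunov-Krasovskii approach used in the abstract, a commonly used template for such a functional in discrete time is sketched below. This is a generic form for a system with time-varying delay τ_m ≤ τ(k) ≤ τ_M, not necessarily the exact functional constructed in the Letter.

```latex
% Generic discrete-time Lyapunov-Krasovskii functional for
% x(k+1) = f(x(k), x(k - \tau(k))), with P \succ 0, Q \succ 0
% (a template, not this Letter's exact construction):
V(k) = x^{T}(k) P x(k)
     + \sum_{i = k - \tau(k)}^{k-1} x^{T}(i) Q x(i)
     + \sum_{j = -\tau_M + 1}^{-\tau_m} \; \sum_{i = k + j}^{k-1} x^{T}(i) Q x(i)
```

Global exponential stability then follows if the increment ΔV(k) = V(k+1) − V(k) is negative definite along trajectories, which is the condition the linear matrix inequality encodes and which can be checked numerically with an LMI solver, as the abstract notes.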

  19. Evaluation of the cranial base in amnion rupture sequence involving the anterior neural tube: implications regarding recurrence risk.

    Science.gov (United States)

    Jones, Kenneth Lyons; Robinson, Luther K; Benirschke, Kurt

    2006-09-01

    Amniotic bands can cause disruption of the cranial end of the developing fetus, leading in some cases to a neural tube closure defect. Although the recurrence risk for unaffected parents of an affected child is negligible when the neural tube closed normally but was subsequently disrupted by amniotic bands, the recurrence risk for a primary defect in closure of the neural tube to which amnion has subsequently adhered is 1.7%. Because primary defects of neural tube closure are characterized by typical abnormalities of the base of the skull, evaluation of the cranial base in such fetuses provides an approach for distinguishing between these 2 mechanisms. This distinction has implications regarding recurrence risk. The skull bases of 2 fetuses with amnion rupture sequence involving the cranial end of the neural tube were compared to those of 1 fetus with anencephaly and 1 structurally normal fetus. The skulls were cleaned, fixed in 10% formalin, recleaned, and then exposed to 10% KOH solution. After washing and recleaning, the skulls were exposed to hydrogen peroxide for bleaching and photography. Despite involvement of the anterior neural tube in both fetuses with amnion rupture sequence, in Case 3 the cranial base was normal, while in Case 4 the cranial base was similar to that seen in anencephaly. This technique provides a method for determining the developmental pathogenesis of anterior neural tube defects in cases of amnion rupture sequence. As such, it provides information that can be used to counsel parents of affected children with respect to recurrence risk.

  20. Using recurrent neural network models for early detection of heart failure onset.

    Science.gov (United States)

    Choi, Edward; Schuetz, Andy; Stewart, Walter F; Sun, Jimeng

    2017-03-01

    We explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality. Data were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (e.g., disease diagnoses, medication orders, procedure orders) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches. Using a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP). Deep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
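
    The gated recurrent unit mentioned above follows standard equations: an update gate, a reset gate, and a candidate state combined into the new hidden state. The NumPy sketch below shows one step under one common convention for the state mixture, with random, untrained weights; it is purely illustrative, not the study's model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One step of a gated recurrent unit: update gate z, reset gate r,
    candidate state h_tilde, and a convex combination of old and candidate
    states (one common sign convention; some texts swap z and 1 - z)."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(4)
d_in, d_hid = 5, 7
p = {}
for g in "zrh":
    p[f"W{g}"] = rng.normal(scale=0.1, size=(d_hid, d_in))
    p[f"U{g}"] = rng.normal(scale=0.1, size=(d_hid, d_hid))
    p[f"b{g}"] = np.zeros(d_hid)

h = np.zeros(d_hid)
for x in rng.normal(size=(30, d_in)):  # e.g. 30 time-stamped event vectors
    h = gru_step(x, h, p)
print(h.shape)  # (7,)
```

In the HF-detection setting, each input vector would encode the events observed in one time step of the observation window, and the final hidden state would feed a classifier for incident heart failure.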

  1. Design of a heart rate controller for treadmill exercise using a recurrent fuzzy neural network.

    Science.gov (United States)

    Lu, Chun-Hao; Wang, Wei-Cheng; Tai, Cheng-Chi; Chen, Tien-Chi

    2016-05-01

    In this study, we developed a computer-controlled treadmill system using a recurrent fuzzy neural network heart rate controller (RFNNHRC). Treadmill speeds and inclines were controlled by corresponding control servo motors. The RFNNHRC was used to generate the control signals to automatically adjust treadmill speed and incline so as to minimize the user's heart rate deviation from a preset profile. The RFNNHRC combines a fuzzy reasoning capability to accommodate uncertain information with an artificial recurrent neural network learning process that corrects for treadmill system nonlinearities and uncertainties. Treadmill speeds and inclines are controlled by the RFNNHRC to achieve minimal heart rate deviation from a preset profile using adjustable parameters and an on-line learning algorithm that provides robust performance against parameter variations. The on-line learning algorithm of the RFNNHRC was developed and implemented using a dsPIC 30F4011 DSP. Application of the proposed control scheme to the heart rate responses of runners resulted in smaller fluctuations than those produced by proportional-integral control, and treadmill speeds and inclines were smoother. The present experiments demonstrate improved heart rate tracking performance with the proposed control scheme. The RFNNHRC scheme with adjustable parameters and an on-line learning algorithm was applied to a computer-controlled treadmill system with heart rate control during treadmill exercise. A novel RFNNHRC structure and controller stability analyses were introduced. The RFNNHRC was tuned using a Lyapunov function to ensure system stability. Superior heart rate control with the proposed RFNNHRC scheme was demonstrated with various preset heart rates. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Neural processing of short-term recurrence in songbird vocal communication.

    Directory of Open Access Journals (Sweden)

    Gabriël J L Beckers

    Full Text Available BACKGROUND: Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication. METHODOLOGY/PRINCIPAL FINDINGS: We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented to anesthetized birds sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges leads to widespread and significant modulation in the strength of neural responses. Such modulation is highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area. CONCLUSIONS/SIGNIFICANCE: Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.

  3. Fault diagnosis of rolling bearings with recurrent neural network-based autoencoders.

    Science.gov (United States)

    Liu, Han; Zhou, Jianzhong; Zheng, Yang; Jiang, Wei; Zhang, Yuncheng

    2018-04-19

    As rolling bearings are key parts of rotary machines, their health condition is important for safe operation. Fault diagnosis of rolling bearings has been a research focus, with the aim of improving economic efficiency and guaranteeing operational security. However, the signals collected during the operation of rotary machines are mixed with ambient noise, which poses a great challenge to exact diagnosis. Using signals collected from multiple sensors can avoid the loss of local information and extract more helpful characteristics. Recurrent Neural Networks (RNNs) are a type of artificial neural network that can handle multiple time sequences, and their capacity for capturing temporal dependence in sequence data has proven outstanding. This paper proposes a novel method for bearing fault diagnosis with an RNN in the form of an autoencoder. In this approach, the vibration values of the rolling bearings for the next period are predicted from the previous period by means of Gated Recurrent Unit (GRU)-based denoising autoencoders. These GRU-based non-linear predictive denoising autoencoders (GRU-NP-DAEs) are trained, with strong generalization ability, for each different fault pattern. Then, for the given input data, the reconstruction errors between the next-period data and the outputs generated by the different GRU-NP-DAEs are used to detect anomalous conditions and classify the fault type. Classic rotating machinery datasets have been employed to verify the effectiveness of the proposed diagnosis method and its advantages over some state-of-the-art methods. The experiment results indicate that the proposed method achieves satisfactory performance with strong robustness and high classification accuracy. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
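
    The classification rule described above, one predictive model per fault class and assignment by smallest reconstruction error, can be sketched with a linear least-squares predictor standing in for the GRU-based denoising autoencoder. All data and names below are synthetic and illustrative, not the paper's method or datasets.

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_predictor(prev, nxt):
    """Per-fault-class predictor (a linear least-squares stand-in for a
    GRU-based denoising autoencoder): map previous-period vibration
    features to next-period features."""
    W, *_ = np.linalg.lstsq(prev, nxt, rcond=None)
    return W

def classify(prev, nxt, predictors):
    """Assign the class whose predictor reconstructs the next period best,
    i.e. with the smallest mean squared reconstruction error."""
    errs = {c: np.mean((prev @ W - nxt) ** 2) for c, W in predictors.items()}
    return min(errs, key=errs.get)

# Toy data: two fault patterns with different linear dynamics.
d, n = 4, 200
A0 = np.diag([0.9, 0.5, 0.3, 0.1])
A1 = np.diag([0.1, 0.3, 0.5, 0.9])
X = rng.normal(size=(n, d))
predictors = {0: fit_predictor(X, X @ A0), 1: fit_predictor(X, X @ A1)}

x_test = rng.normal(size=(1, d))
print(classify(x_test, x_test @ A1, predictors))  # 1
```

A held-out sample whose next-period signal follows pattern 1 is reconstructed far better by predictor 1, so the error comparison recovers the fault class; large errors under every predictor would instead flag an anomalous, unseen condition.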

  4. From phonemes to images : levels of representation in a recurrent neural model of visually-grounded language learning

    NARCIS (Netherlands)

    Gelderloos, L.J.; Chrupala, Grzegorz

    2016-01-01

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover

  5. Sequence-specific bias correction for RNA-seq data using recurrent neural networks.

    Science.gov (United States)

    Zhang, Yao-Zhong; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2017-01-25

    The recent success of deep learning techniques in machine learning and artificial intelligence has stimulated a great deal of interest among bioinformaticians, who now wish to bring the power of deep learning to bear on a host of bioinformatics problems. Deep learning is ideally suited for biological problems that require automatic or hierarchical feature representation when prior knowledge is limited. In this work, we address the sequence-specific bias correction problem for RNA-seq data using Recurrent Neural Networks (RNNs) to model nucleotide sequences without pre-determining sequence structures. The sequence-specific bias of a read is then calculated based on the sequence probabilities estimated by RNNs, and used in the estimation of gene abundance. We explore the application of two popular RNN recurrent units for this task and demonstrate that RNN-based approaches provide a flexible way to model nucleotide sequences without knowledge of predetermined sequence structures. Our experiments show that training an RNN-based nucleotide sequence model is efficient, and RNN-based bias correction methods compare well with the state-of-the-art sequence-specific bias correction method on the commonly used MAQC-III data set. RNNs provide an alternative and flexible way to calculate sequence-specific bias without explicitly pre-determining sequence structures.
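One plausible instantiation of the bias calculation (an assumption on our part, since the abstract does not give the exact formula): score each read-start sequence by its chain-rule probability under the learned nucleotide model relative to a background model. A hand-made conditional table stands in here for the trained RNN.

```python
import math

def seq_log_prob(seq, cond_prob):
    """Chain-rule log probability: sum_t log P(s_t | s_<t)."""
    return sum(math.log(cond_prob(seq[:t], seq[t])) for t in range(len(seq)))

def gc_rich_model(prefix, base):
    # Stand-in for the trained RNN: favors G/C regardless of the prefix.
    return {"G": 0.35, "C": 0.35, "A": 0.15, "T": 0.15}[base]

def uniform_background(prefix, base):
    return 0.25

def bias_weight(seq):
    """Ratio of read-start probability to background probability."""
    return math.exp(seq_log_prob(seq, gc_rich_model)
                    - seq_log_prob(seq, uniform_background))

# GC-rich read starts look over-represented (weight > 1), AT-rich under.
print(bias_weight("GCGC") > 1.0 > bias_weight("ATAT"))  # True
```

An RNN replaces the lookup table by conditioning on the full prefix, which is exactly what frees the method from a pre-determined sequence structure.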

  6. Marginally Stable Triangular Recurrent Neural Network Architecture for Time Series Prediction.

    Science.gov (United States)

    Sivakumar, Seshadri; Sivakumar, Shyamala

    2017-09-25

    This paper introduces a discrete-time recurrent neural network architecture using triangular feedback weight matrices that allows a simplified approach to ensuring network and training stability. The triangular structure of the weight matrices is exploited to readily ensure that the eigenvalues of the feedback weight matrix represented by the block diagonal elements lie on the unit circle in the complex z-plane by updating these weights based on the differential of the angular error variable. Such placement of the eigenvalues together with the extended close interaction between state variables facilitated by the nondiagonal triangular elements, enhances the learning ability of the proposed architecture. Simulation results show that the proposed architecture is highly effective in time-series prediction tasks associated with nonlinear and chaotic dynamic systems with underlying oscillatory modes. This modular architecture with dual upper and lower triangular feedback weight matrices mimics fully recurrent network architectures, while maintaining learning stability with a simplified training process. While training, the block-diagonal weights (hence the eigenvalues) of the dual triangular matrices are constrained to the same values during weight updates aimed at minimizing the possibility of overfitting. The dual triangular architecture also exploits the benefit of parsing the input and selectively applying the parsed inputs to the two subnetworks to facilitate enhanced learning performance.
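The property exploited here, that the eigenvalues of a triangular matrix are exactly its diagonal entries, means that pinning each diagonal weight to the unit circle pins every eigenvalue there regardless of the off-diagonal weights. A minimal sketch with a 2x2 upper-triangular block (the angles and the off-diagonal weight are arbitrary illustrative values):

```python
import cmath

def eig_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the quadratic formula."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

theta1, theta2 = 0.3, 1.1            # learnable angles
w = [[cmath.exp(1j * theta1), 0.7],  # off-diagonal weight is free
     [0.0,                    cmath.exp(1j * theta2)]]

lam1, lam2 = eig_2x2(w[0][0], w[0][1], w[1][0], w[1][1])
print(abs(lam1), abs(lam2))  # both ~1: eigenvalues lie on the unit circle
```

Updating only the angles theta during training, as the paper does for the block-diagonal weights, therefore preserves marginal stability by construction.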

  7. Nonlinear dynamic systems identification using recurrent interval type-2 TSK fuzzy neural network - A novel structure.

    Science.gov (United States)

    El-Nagar, Ahmad M

    2018-01-01

    In this study, a novel structure of a recurrent interval type-2 Takagi-Sugeno-Kang (TSK) fuzzy neural network (FNN) is introduced for the identification of nonlinear dynamic and time-varying systems. It combines type-2 fuzzy sets (T2FSs) and a recurrent FNN to handle data uncertainties. The fuzzy firing strengths in the proposed structure are fed back to the network input as internal variables. Interval type-2 fuzzy sets (IT2FSs) are used to describe the antecedent part of each rule, while the consequent part is TSK-type: a linear function of the internal variables and the external inputs with interval weights. All the type-2 fuzzy rules of the proposed RIT2TSKFNN are learned on-line through structure and parameter learning, which are performed using type-2 fuzzy clustering. The antecedent and consequent parameters of the proposed RIT2TSKFNN are updated based on a Lyapunov function to achieve network stability. The obtained results indicate that the proposed network achieves a small root mean square error (RMSE) and a small integral of square error (ISE) with a small number of rules and a small computation time compared with other type-2 FNNs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  8. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    Science.gov (United States)

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.

  9. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems

    Energy Technology Data Exchange (ETDEWEB)

    Talathi, S. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-06-05

    Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in a broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data in order to evaluate our method and demonstrate very promising evaluation results with overall accuracy close to 100%. We also systematically investigate the application of our method for early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration.
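To make the GRU recurrence concrete, here is a minimal single-unit GRU cell in pure Python; the weights are random stand-ins rather than anything trained on EEG data.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update for scalar input x and scalar hidden state h."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])   # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])   # reset gate
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_tilde                 # gated blend

random.seed(0)
params = {k: random.uniform(-0.5, 0.5)
          for k in ("wz", "uz", "bz", "wr", "ur", "br", "wh", "uh", "bh")}

h = 0.0
for x in [0.1, 0.9, -0.4, 0.7]:   # toy feature sequence (not real EEG)
    h = gru_step(x, h, params)
print(-1.0 < h < 1.0)  # hidden state stays in a bounded range: True
```

Because each update is a convex combination of the old state and a tanh output, the hidden state is bounded, one reason GRUs train stably on long recordings.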

  10. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

    Science.gov (United States)

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3 s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3 s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made those former results hardly reproducible. Further, we extend those previous experiments by modeling unseen languages (out-of-set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25 s to 0.1 s), showing that with as little as 0.5 s an accuracy of over 50% can be achieved.

  11. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    Science.gov (United States)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods have been rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric in conducting that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on a large dataset of natural images, ImageNet. The features are obtained from various levels of the network, to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time-point (yielding an AUC = 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).

  12. Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations

    Science.gov (United States)

    ElSaid, AbdElRahman Ahmed

    This thesis examines building viable Recurrent Neural Networks (RNNs) using Long Short Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline, containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction than analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time series data which can further improve predictions of future vibration values. LSTM RNNs were used over traditional RNNs, as the latter suffer from vanishing/exploding gradients when trained with back propagation. The study managed to predict vibration values for 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51% and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems, so that suitable actions can be taken before the occurrence of excess vibration to avoid unfavorable situations during flight.

  13. Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning.

    Science.gov (United States)

    Eser, Jürgen; Zheng, Pengsheng; Triesch, Jochen

    2014-01-01

    Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during its self-organization, developing into a regime where only few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
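The perturbation-based characterization used above, tracking how a small perturbation grows under the dynamics and averaging the log growth rate, can be sketched on a stand-in system. The SORN itself is not reproduced here; the fully chaotic logistic map, whose maximal Lyapunov exponent is known to be ln 2, plays the role of the network update.

```python
import math

def step(x):
    return 4.0 * x * (1.0 - x)   # fully chaotic logistic map, r = 4

def max_lyapunov(x0, n_steps=20000, eps=1e-9):
    """Estimate the maximal Lyapunov exponent by renormalizing a
    small perturbation after every step and averaging log growth."""
    x, y = x0, x0 + eps
    total = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = abs(y - x)
        if d == 0.0:                 # degenerate; re-seed perturbation
            y = x + eps
            continue
        total += math.log(d / eps)
        y = x + eps * (y - x) / d    # renormalize to size eps
    return total / n_steps

lam = max_lyapunov(0.2)
print(lam)  # close to ln 2 ~ 0.69 for this chaotic map
```

A positive average log growth rate indicates chaos; the paper's finding is that this quantity decreases as the SORN self-organizes.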

  14. Recurrent fuzzy neural network by using feedback error learning approaches for LFC in interconnected power system

    International Nuclear Information System (INIS)

    Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari

    2009-01-01

    In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. proportional, integral and derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system; and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, when the INFC learns the inverse of controlled system, the tracking of reference signal is done properly. Generally, the CFC is designed at nominal operating conditions of the system and, therefore, fails to provide the best control performance as well as global stability over a wide range of changes in the operating conditions of the system. So, in this study a supervised controller (SC), a lookup table based controller, is addressed for tuning of the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to these operating conditions. Moreover, for improving the performance of overall system, a recurrent fuzzy neural network (RFNN) is adopted in INFC instead of the conventional neural network, which was used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through some performance indices

  15. Exponential stability of delayed recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Wang Zidong; Liu Yurong; Yu Li; Liu Xiaohui

    2006-01-01

    In this Letter, the global exponential stability analysis problem is considered for a class of recurrent neural networks (RNNs) with time delays and Markovian jumping parameters. The jumping parameters considered here are generated by a continuous-time homogeneous Markov process with a discrete and finite state space. The purpose of the problem addressed is to derive some easy-to-test conditions such that the dynamics of the neural network is stochastically exponentially stable in the mean square, independent of the time delay. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish the desired sufficient conditions; therefore the global exponential stability in the mean square for the delayed RNNs can be easily checked by utilizing the numerically efficient Matlab LMI toolbox, and no tuning of parameters is required. A numerical example is exploited to show the usefulness of the derived LMI-based stability conditions.
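For orientation, a generic textbook form of a mode-dependent Lyapunov-Krasovskii functional for a delayed system with Markovian jumping mode $r_t = i$ is shown below; the paper's actual functional may differ in its exact terms:

```latex
V(x_t, r_t = i) \;=\; x^{\top}(t)\, P_i\, x(t)
  \;+\; \int_{t-\tau}^{t} x^{\top}(s)\, Q\, x(s)\, ds,
\qquad P_i \succ 0,\; Q \succ 0.
```

Mean-square exponential stability follows when the weak infinitesimal generator satisfies $\mathcal{L}V \le -\alpha \lVert x(t)\rVert^2$ for some $\alpha > 0$, a condition that can be rewritten as a linear matrix inequality in $P_i$ and $Q$ and checked numerically, e.g. with the Matlab LMI toolbox mentioned above.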

  16. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    Science.gov (United States)

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  17. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.

    Science.gov (United States)

    Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang

    2011-05-01

    A recurrent neural network is proposed for solving the non-smooth convex optimization problem with the convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, the Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network is convergent to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show the satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.
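The network dynamics described above, driven by Clarke generalized gradients, can be sketched in discrete time as projected subgradient dynamics on a small illustrative problem. The problem instance, step size, and iteration count are our own choices, not from the paper: minimize x1^2 + |x2| subject to x1 + x2 = 1, whose optimum is (0.5, 0.5).

```python
def subgrad(x1, x2):
    """A subgradient of x1^2 + |x2| (sign serves as d|.|)."""
    s = 0.0 if x2 == 0 else (1.0 if x2 > 0 else -1.0)
    return 2.0 * x1, s

def project(x1, x2):
    """Orthogonal projection onto the affine set x1 + x2 = 1."""
    shift = (x1 + x2 - 1.0) / 2.0
    return x1 - shift, x2 - shift

x1, x2 = project(3.0, -2.0)          # feasible starting point
eta = 0.01                           # Euler step size
for _ in range(5000):
    g1, g2 = subgrad(x1, x2)
    x1, x2 = project(x1 - eta * g1, x2 - eta * g2)

print(round(x1, 2), round(x2, 2))  # settles near the optimum (0.5, 0.5)
```

The continuous-time neural network in the paper handles general convex inequality constraints as well; this Euler discretization of the equality-constrained case only illustrates how a non-smooth objective is handled through subgradients.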

  18. Global exponential stability and periodicity of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions

    International Nuclear Information System (INIS)

    Lu Junguo; Lu Linji

    2009-01-01

    In this paper, global exponential stability and periodicity of a class of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions are studied by constructing suitable Lyapunov functionals and utilizing some inequality techniques. We first prove global exponential convergence to 0 of the difference between any two solutions of the original neural networks; the existence and uniqueness of the equilibrium are direct results of this procedure. This approach differs from the usual one, in which the existence and uniqueness of the equilibrium and its stability are proved in two separate steps. Secondly, we prove periodicity. Sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium and periodic solution are given. These conditions are easy to verify, and our results play an important role in the design and application of globally exponentially stable neural circuits and periodic oscillatory neural circuits.

  19. Large-Signal Lyapunov-Based Stability Analysis of DC/AC Inverters and Inverter-Based Microgrids

    Science.gov (United States)

    Kabalan, Mahmoud

    Microgrid stability studies have been largely based on small-signal linearization techniques. However, the validity and magnitude of the linearization domain is limited to small perturbations. Thus, there is a need to examine microgrids with large-signal nonlinear techniques to fully understand and examine their stability. Large-signal stability analysis can be accomplished by Lyapunov-based mathematical methods. These Lyapunov methods estimate the domain of asymptotic stability of the studied system. A survey of Lyapunov-based large-signal stability studies showed that few large-signal studies have been completed on either individual systems (dc/ac inverters, dc/dc rectifiers, etc.) or microgrids. The research presented in this thesis addresses the large-signal stability of droop-controlled dc/ac inverters and inverter-based microgrids. Dc/ac power electronic inverters allow microgrids to be technically feasible. Thus, as a prelude to examining the stability of microgrids, the research presented in Chapter 3 analyzes the stability of inverters. First, the 13th-order large-signal nonlinear model of a droop-controlled dc/ac inverter connected to an infinite bus is presented. The singular perturbation method is used to decompose the nonlinear model into 11th, 9th, 7th, 5th, 3rd and 1st order models. Each model ignores certain control or structural components of the full order model. The aim of the study is to understand the accuracy and validity of the reduced order models in replicating the performance of the full order nonlinear model. The performance of each model is studied in three different areas: time domain simulations, Lyapunov's indirect method and domain of attraction estimation. The work aims to present the best model to use in each of the three domains of study. 
Results show that certain reduced order models are capable of accurately reproducing the performance of the full order model while others can be used to gain insights into those three areas of

  20. Recurrent neural network based hybrid model for reconstructing gene regulatory network.

    Science.gov (United States)

    Raza, Khalid; Alam, Mansaf

    2016-10-01

    One of the exciting problems in systems biology research is to decipher how the genome controls the development of complex biological systems. Gene regulatory networks (GRNs) help in the identification of regulatory interactions between genes and offer fruitful information about the functional role of individual genes in a cellular system. Discovering GRNs leads to a wide range of applications, including identification of disease-related pathways, provision of novel tentative drug targets, prediction of disease response, and assistance in diagnosing various diseases including cancer. Reconstruction of GRNs from available biological data is still an open problem. This paper proposes a recurrent neural network (RNN) based model of GRNs, hybridized with a generalized extended Kalman filter for weight update in the backpropagation-through-time training algorithm. The RNN is a complex neural network that offers a good trade-off between biological closeness and mathematical flexibility for modeling GRNs, and is able to capture complex, non-linear and dynamic relationships among variables. Gene expression data are inherently noisy, and the Kalman filter performs well for estimation problems even with noisy data. Hence, we applied a non-linear version of the Kalman filter, known as the generalized extended Kalman filter, for weight update during RNN training. The developed model has been tested on four benchmark networks: the DNA SOS repair network, the IRMA network, and two synthetic networks from the DREAM Challenge. We performed a comparison of our results with other state-of-the-art techniques, which shows the superiority of our proposed model. Further, 5% Gaussian noise was induced in the dataset, and the results of the proposed model show a negligible effect of noise, demonstrating the noise tolerance capability of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Monitoring tool usage in surgery videos using boosted convolutional and recurrent neural networks.

    Science.gov (United States)

    Al Hajj, Hassan; Lamard, Mathieu; Conze, Pierre-Henri; Cochener, Béatrice; Quellec, Gwenolé

    2018-05-09

    This paper investigates the automatic monitoring of tool usage during a surgery, with potential applications in report generation, surgical training and real-time decision support. Two surgeries are considered: cataract surgery, the most common surgical procedure, and cholecystectomy, one of the most common digestive surgeries. Tool usage is monitored in videos recorded either through a microscope (cataract surgery) or an endoscope (cholecystectomy). Following state-of-the-art video analysis solutions, each frame of the video is analyzed by convolutional neural networks (CNNs) whose outputs are fed to recurrent neural networks (RNNs) in order to take temporal relationships between events into account. Novelty lies in the way those CNNs and RNNs are trained. Computational complexity prevents the end-to-end training of "CNN+RNN" systems. Therefore, CNNs are usually trained first, independently from the RNNs. This approach is clearly suboptimal for surgical tool analysis: many tools are very similar to one another, but they can generally be differentiated based on past events. CNNs should be trained to extract the most useful visual features in combination with the temporal context. A novel boosting strategy is proposed to achieve this goal: the CNN and RNN parts of the system are simultaneously enriched by progressively adding weak classifiers (either CNNs or RNNs) trained to improve the overall classification accuracy. Experiments were performed in a dataset of 50 cataract surgery videos, where the usage of 21 surgical tools was manually annotated, and a dataset of 80 cholecystectomy videos, where the usage of 7 tools was manually annotated. 
    Very good classification performance is achieved in both datasets: tool usage could be labeled with an average area under the ROC curve of Az = 0.9961 and Az = 0.9939, respectively, in offline mode (using past, present and future information), and Az = 0.9957 and Az = 0.9936, respectively, in online mode (using past and present

  2. Modeling long-term human activeness using recurrent neural networks for biometric data.

    Science.gov (United States)

    Kim, Zae Myung; Oh, Hyungrai; Kim, Han-Gyu; Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin

    2017-05-18

    With the invention of fitness trackers, it has become possible to continuously monitor a user's biometric data such as heart rate, number of footsteps taken, and amount of calories burned. This paper names the time series of these three types of biometric data the user's "activeness", and investigates the feasibility of modeling and predicting the long-term activeness of the user. The dataset used in this study consisted of several months of biometric time-series data gathered by seven users independently. Four recurrent neural network (RNN) architectures, as well as a deep neural network and a simple regression model, were proposed to investigate the performance on predicting the activeness of the user under various length-related hyper-parameter settings. In addition, the learned model was tested to predict the time period when the user's activeness falls below a certain threshold. A preliminary experimental result shows that each type of activeness data exhibited a short-term autocorrelation; and among the three types of data, the consumed calories and the number of footsteps were positively correlated, while the heart rate data showed almost no correlation with either of them. It is probably due to this characteristic of the dataset that, although the RNN models produced the best results on modeling the user's activeness, the difference was marginal; and other baseline models, especially the linear regression model, performed quite admirably as well. Further experimental results show that it is feasible to predict a user's future activeness with precision; for example, a trained RNN model could predict, with a precision of 84%, when the user would be less active within the next hour given the latest 15 min of his activeness data. This paper defines and investigates the notion of a user's "activeness", and shows that forecasting the long-term activeness of the user is indeed possible. 
Such information can be utilized by a health-related application to proactively

  3. Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling.

    Science.gov (United States)

    Lu, Wenlian; Zheng, Ren; Chen, Tianping

    2016-03-01

    In this paper, we discuss outer-synchronization of the asymmetrically connected recurrent time-varying neural networks. By using both centralized and decentralized discretization data sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference of any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples in centralized and decentralized principles are proved to be positive, which guarantees exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Protein Solvent-Accessibility Prediction by a Stacked Deep Bidirectional Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Buzhong Zhang

    2018-05-01

    Residue solvent accessibility is closely related to the spatial arrangement and packing of residues. Predicting the solvent accessibility of a protein is an important step to understand its structure and function. In this work, we present a deep learning method to predict residue solvent accessibility, which is based on a stacked deep bidirectional recurrent neural network applied to sequence profiles. To capture more long-range sequence information, a merging operator was proposed when bidirectional information from hidden nodes was merged for outputs. Three types of merging operators were used in our improved model, with a long short-term memory network performing as a hidden computing node. The trained database was constructed from 7361 proteins extracted from the PISCES server using a cut-off of 25% sequence identity. Sequence-derived features including position-specific scoring matrix, physical properties, physicochemical characteristics, conservation score and protein coding were used to represent a residue. Using this method, predictive values of continuous relative solvent-accessible area were obtained, and then, these values were transformed into binary states with predefined thresholds. Our experimental results showed that our deep learning method improved prediction quality relative to current methods, with mean absolute error and Pearson’s correlation coefficient values of 8.8% and 74.8%, respectively, on the CB502 dataset and 8.2% and 78%, respectively, on the Manesh215 dataset.

  5. Protein Solvent-Accessibility Prediction by a Stacked Deep Bidirectional Recurrent Neural Network.

    Science.gov (United States)

    Zhang, Buzhong; Li, Linqing; Lü, Qiang

    2018-05-25

    Residue solvent accessibility is closely related to the spatial arrangement and packing of residues. Predicting the solvent accessibility of a protein is an important step to understanding its structure and function. In this work, we present a deep learning method to predict residue solvent accessibility, which is based on a stacked deep bidirectional recurrent neural network applied to sequence profiles. To capture more long-range sequence information, a merging operator was proposed for merging bidirectional information from hidden nodes into outputs. Three types of merging operators were used in our improved model, with a long short-term memory network serving as the hidden computing node. The training database was constructed from 7361 proteins extracted from the PISCES server using a cut-off of 25% sequence identity. Sequence-derived features including the position-specific scoring matrix, physical properties, physicochemical characteristics, conservation score and protein coding were used to represent a residue. Using this method, predictive values of continuous relative solvent-accessible area were obtained, and these values were then transformed into binary states with predefined thresholds. Our experimental results showed that our deep learning method improved prediction quality relative to current methods, with mean absolute error and Pearson's correlation coefficient values of 8.8% and 74.8%, respectively, on the CB502 dataset and 8.2% and 78%, respectively, on the Manesh215 dataset.

  6. Adaptive Sliding Mode Control of Dynamic Systems Using Double Loop Recurrent Neural Network Structure.

    Science.gov (United States)

    Fei, Juntao; Lu, Cheng

    2018-04-01

    In this paper, an adaptive sliding mode control system using a double loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops, where the firing weights and the output signal calculated in the last step are stored and used as the feedback signals in each feedback loop. Since the new structure combines the advantages of internal-feedback NNs and external-feedback NNs, it can acquire internal state information while the output signal is also captured; thus, the newly designed DLRNN can achieve better approximation performance than regular NNs without feedback loops or regular RNNs with a single feedback loop. The proposed DLRNN structure is employed in an equivalent controller to approximate the unknown nonlinear system dynamics, and the parameters of the DLRNN are updated online by adaptive laws to obtain favorable approximation performance. To investigate the effectiveness of the proposed controller, the designed adaptive sliding mode controller with the DLRNN is applied to a -axis microelectromechanical system gyroscope to control the vibrating dynamics of the proof mass. Simulation results demonstrate that the proposed methodology achieves good tracking, and comparisons of the approximation performance between a radial basis function NN, an RNN, and the DLRNN show that the DLRNN can accurately estimate the unknown dynamics quickly while its internal states remain more stable.

  7. A recurrent neural network for classification of unevenly sampled variable stars

    Science.gov (United States)

    Naul, Brett; Bloom, Joshua S.; Pérez, Fernando; van der Walt, Stéfan

    2018-02-01

    Astronomical surveys of celestial sources produce streams of noisy time series measuring flux versus time (`light curves'). Unlike in many other physical domains, however, large (and source-specific) temporal gaps in data arise naturally due to intranight cadence choices as well as diurnal and seasonal constraints1-5. With nightly observations of millions of variable stars and transients from upcoming surveys4,6, efficient and accurate discovery and classification techniques on noisy, irregularly sampled data must be employed with minimal human-in-the-loop involvement. Machine learning for inference tasks on such data traditionally requires the laborious hand-coding of domain-specific numerical summaries of raw data (`features')7. Here, we present a novel unsupervised autoencoding recurrent neural network8 that makes explicit use of sampling times and known heteroskedastic noise properties. When trained on optical variable star catalogues, this network produces supervised classification models that rival other best-in-class approaches. We find that autoencoded features learned in one time-domain survey perform nearly as well when applied to another survey. These networks can continue to learn from new unlabelled observations and may be used in other unsupervised tasks, such as forecasting and anomaly detection.
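
This record does not detail the input encoding; one common way to make a recurrent network aware of irregular sampling and heteroskedastic noise is to feed, at each step, the time gap since the previous observation alongside the measurement and its per-point uncertainty. A hedged numpy sketch of such an encoding (all values and names illustrative):

```python
import numpy as np

# Toy light curve: irregularly sampled flux with per-point errors.
times = np.array([0.0, 1.3, 1.9, 4.2])      # observation times (days)
flux  = np.array([10.1, 9.8, 10.4, 10.0])   # measured flux
sigma = np.array([0.2, 0.3, 0.2, 0.5])      # heteroskedastic uncertainties

# Time gap to the previous sample (first gap set to 0).
dt = np.diff(times, prepend=times[0])

# Stack (dt, flux, sigma) per step into a (timesteps, features) RNN input,
# so the network sees both the sampling pattern and the noise level.
X = np.stack([dt, flux, sigma], axis=-1)
print(X.shape)  # (4, 3)
```

The actual architecture in the paper is an autoencoding RNN; this sketch only illustrates how sampling times and noise properties could enter the input representation.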

  8. Recurrent neural network-based modeling of gene regulatory network using elephant swarm water search algorithm.

    Science.gov (United States)

    Mandal, Sudip; Saha, Goutam; Pal, Rajat Kumar

    2017-08-01

    Correct inference of genetic regulations inside a cell from biological databases such as time-series microarray data is one of the greatest challenges of the post-genomic era for biologists and researchers. The Recurrent Neural Network (RNN) is one of the most popular and simple approaches to model the dynamics as well as to infer correct dependencies among genes. Inspired by the behavior of social elephants, we propose a new metaheuristic, namely the Elephant Swarm Water Search Algorithm (ESWSA), to infer Gene Regulatory Networks (GRNs). This algorithm is mainly based on the water search strategy of intelligent and social elephants during drought, utilizing different types of communication techniques. Initially, the algorithm is tested against benchmark small- and medium-scale artificial genetic networks, both without and with different noise levels, and its efficiency is assessed in terms of parametric error, minimum fitness value, execution time, accuracy of prediction of true regulations, etc. Next, the proposed algorithm is tested against real time-series gene expression data of the Escherichia coli SOS network, and the results are compared with other state-of-the-art optimization methods. The experimental results suggest that ESWSA is very efficient for the GRN inference problem and performs better than other methods in many ways.
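
The RNN formalism for GRNs referred to here is conventionally a discrete-time sigmoidal update in which a weight w_ij encodes how gene j regulates gene i; a metaheuristic like ESWSA would search for those weights. A minimal sketch of one update step (toy weights, not inferred values):

```python
import numpy as np

def grn_step(e, W, beta):
    """One discrete-time step of the standard RNN formalism for GRNs:
    e_i(t+1) = sigmoid(sum_j W[i, j] * e_j(t) + beta[i]).
    W[i, j] > 0: gene j activates gene i; W[i, j] < 0: repression;
    W[i, j] ~ 0: no regulation."""
    return 1.0 / (1.0 + np.exp(-(W @ e + beta)))

# Toy 3-gene network (weights are illustrative, not fitted values).
W = np.array([[ 0.0, 2.0, 0.0],
              [-1.5, 0.0, 0.0],
              [ 0.0, 0.0, 0.5]])
beta = np.zeros(3)
e = np.array([0.5, 0.5, 0.5])     # current expression levels in [0, 1]

e_next = grn_step(e, W, beta)
print(e_next)
```

Fitting such a model to time-series microarray data then amounts to minimizing the error between simulated and measured expression trajectories over W and beta, which is the role the record assigns to ESWSA.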

  9. Applying long short-term memory recurrent neural networks to intrusion detection

    Directory of Open Access Journals (Sweden)

    Ralf C. Staudemeyer

    2015-07-01

    Full Text Available We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks with the training data provided by the DARPA/KDD Cup ’99 challenge. To identify suitable LSTM-RNN network parameters and structure we experimented with various network topologies. We found that networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We applied forget gates and shortcut connections, respectively. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets, respectively. We evaluated different feature sets for the detection of all attacks within one network, and also trained networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results of strong static classifiers. With 93.82% accuracy and 22.13 cost, LSTM outperforms the winning entries of the KDD Cup ’99 challenge by far. This is due to the fact that LSTM learns to look back in time and correlate consecutive connection records. For the first time, we have demonstrated the usefulness of LSTM networks for intrusion detection.
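
The "memory blocks" and "forget gates" above refer to the standard LSTM cell update, in which the forget gate controls how much of the previous cell state survives — the mechanism that lets the classifier "look back in time" over connection records. A bias-free, single-step numpy sketch (toy dimensions, random illustrative weights, not the paper's configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, Wf, Wi, Wo, Wg):
    """One LSTM step with biases omitted for brevity; z = [h; x]."""
    z = np.concatenate([h, x])
    f = sigmoid(Wf @ z)        # forget gate: how much of old cell state c to keep
    i = sigmoid(Wi @ z)        # input gate: how much new information to write
    o = sigmoid(Wo @ z)        # output gate: how much of the cell state to expose
    g = np.tanh(Wg @ z)        # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy sizes: 2 hidden units, 3 input features (weights random, illustrative).
rng = np.random.default_rng(0)
Wf, Wi, Wo, Wg = (rng.standard_normal((2, 5)) for _ in range(4))
h, c = np.zeros(2), np.zeros(2)
h, c = lstm_step(np.array([1.0, 0.0, 0.5]), h, c, Wf, Wi, Wo, Wg)
print(h.shape, c.shape)
```

In the paper's setting, each connection record would supply one input vector x per step, and the final hidden state would feed the attack classifier.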

  10. Intelligent Noise Removal from EMG Signal Using Focused Time-Lagged Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    S. N. Kale

    2009-01-01

    Full Text Available Electromyography (EMG) signals can be used for clinical/biomedical applications and modern human-computer interaction. EMG signals acquire noise while traveling through tissue, from inherent noise in electronic equipment, from ambient noise, and so forth. An ANN approach is studied for the reduction of noise in EMG signals. In this paper, it is shown that a Focused Time-Lagged Recurrent Neural Network (FTLRNN) can elegantly reduce the noise in an EMG signal. After rigorous computer simulations, the authors developed an optimal FTLRNN model which removes the noise from the EMG signal. Results show that the proposed optimal FTLRNN model has an MSE (Mean Square Error) as low as 0.000067 and 0.000048, and a correlation coefficient as high as 0.99950 and 0.99939, for the noise signal and the EMG signal, respectively, when validated on the test dataset. It is also noticed that the output of the estimated FTLRNN model closely follows the real one. The network is indeed robust, as the EMG signal tolerates noise variance from 0.1 to 0.4 for uniform noise and up to 0.30 for Gaussian noise. It is clear that the training of the network is independent of the specific partitioning of the dataset. The performance of the proposed FTLRNN model clearly outperforms the best Multilayer Perceptron (MLP) and Radial Basis Function NN (RBF) models. A simple NN model such as the FTLRNN with a single hidden layer can be employed to remove noise from the EMG signal.

  11. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered good understandings of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we are not yet able to understand the behavioural and mechanistic characteristics of natural language processing, or how mechanisms in the brain allow language to be acquired and processed. Bridging insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain, and propose a neurocognitively plausible model for embodied language acquisition from the real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network in which different parts have different leakage characteristics and thus operate on multiple timescales for every modality, with the higher-level nodes of all modalities associated into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.

  12. Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.

    Science.gov (United States)

    Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam

    2016-01-01

    We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to accurately extract the underlying dynamics present in the time-series expression data. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology was first applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we implemented our proposed framework on experimental (in vivo) datasets. Finally, we investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver to understand how the proposed algorithm scales up with network size. Additionally, we implemented our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with a deterioration of just over 15% in the worst case.

  13. Using LSTM recurrent neural networks for monitoring the LHC superconducting magnets

    Science.gov (United States)

    Wielgosz, Maciej; Skoczeń, Andrzej; Mertik, Matej

    2017-09-01

    The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyzes voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when a misbehavior of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system, and are updated and monitored by human operators. This paper proposes a novel approach to the monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art deep learning algorithms. Consequently, the authors decided to examine the performance of LSTM recurrent neural networks for modeling the voltage time series of the magnets. To address this challenging task, different network architectures and hyper-parameters were used to achieve the best possible performance of the solution. The regression results were measured in terms of RMSE for different numbers of future steps and history lengths taken into account for the prediction. The best result of RMSE = 0.00104 was obtained for a network of 128 LSTM cells in the internal layer and a 16-step history buffer.
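
The "16-step history buffer" corresponds to the standard sliding-window preparation of a time series before it is fed to an LSTM regressor: each training example pairs the last 16 samples with a value some number of future steps ahead. A hedged sketch with a synthetic stand-in series (function name and sizes illustrative):

```python
import numpy as np

def make_windows(series, history, horizon):
    """Build (history window, future value) pairs from a 1-D series."""
    X, y = [], []
    for t in range(history, len(series) - horizon + 1):
        X.append(series[t - history:t])     # past `history` samples
        y.append(series[t + horizon - 1])   # value `horizon` steps ahead
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 10, 100))    # stand-in for a magnet voltage trace
X, y = make_windows(series, history=16, horizon=1)
print(X.shape, y.shape)  # (84, 16) (84,)
```

Varying `horizon` reproduces the paper's evaluation over different numbers of future steps, with RMSE computed between predicted and actual `y`.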

  14. Confused or not Confused?: Disentangling Brain Activity from EEG Data Using Bidirectional LSTM Recurrent Neural Networks.

    Science.gov (United States)

    Ni, Zhaoheng; Yuksel, Ahmet Cem; Ni, Xiuyan; Mandel, Michael I; Xie, Lei

    2017-08-01

    Brain fog, also known as confusion, is one of the main causes of low performance in learning or in any daily task that requires thinking. Detecting confusion in a human mind in real time is a challenging and important task with applications in online education, driver fatigue detection, and so on. In this paper, we apply Bidirectional LSTM Recurrent Neural Networks to classify students' confusion while watching online course videos from EEG data. The results show that the Bidirectional LSTM model achieves state-of-the-art performance compared with other machine learning approaches, and shows strong robustness as evaluated by cross-validation. We can predict whether or not a student is confused with an accuracy of 73.3%. Furthermore, we find that the most important feature for detecting brain confusion is the gamma 1 wave of the EEG signal. Our results suggest that machine learning is a potentially powerful tool to model and understand brain activity.

  15. Application of Recurrent Neural Networks on El Nino Impact on California Climate

    Science.gov (United States)

    Le, J.; El-Askary, H. M.; Allai, M.

    2017-12-01

    Following our successful paper on the application to the 2015-2016 El Niño season over Southern California, we use recurrent neural networks (RNNs) to investigate the complex interactions between the long-term trend in dryness and a projected short but intense period of wetness due to the 2015-2016 El Niño. Although it was forecasted that this El Niño season would bring significant rainfall to the region, our long-term projections of the Palmer Z Index (PZI) showed a continuing drought trend. We achieved a statistically significant correlation of 0.610 between forecasted and observed PZI on the validation set for a lead time of 1 month. This gives strong confidence in the forecasted precipitation indicator. These predictions were borne out in the resulting data. This paper details the expansion of our system to the climate of California as a whole, dealing with inter-relationships and spatial variations within the state.

  16. Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.

    Science.gov (United States)

    Okamoto, Hiroshi

    2016-08-01

    Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
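
The analogy to pattern restoration can be illustrated with a Hopfield-style network, a minimal stand-in for the attractor dynamics the record describes (the pattern and cue below are toy values): a stored pattern, playing the role of the "true" community, is recalled from a cue that has both a missing true member and a spurious false member.

```python
import numpy as np

# Stored pattern: +1 marks "true" community members, -1 non-members.
pattern = np.array([1, 1, 1, -1, -1, -1])

# Hebbian weights storing the pattern (zero self-connections).
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Imperfect source set: node 2 (true member) missing, node 5 spurious.
cue = np.array([1, 1, -1, -1, -1, 1])

# Attractor dynamics: iterate the sign update until the state settles.
state = cue.astype(float)
for _ in range(5):
    state = np.sign(W @ state)

print(state)  # converges to the stored pattern
```

Here the deteriorated cue falls inside the stored pattern's basin of attraction, so the dynamics restore the original membership vector, which is the cue-triggered-recall analogy the abstract draws.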

  17. Feature Set Evaluation for Offline Handwriting Recognition Systems: Application to the Recurrent Neural Network Model.

    Science.gov (United States)

    Chherawala, Youssouf; Roy, Partha Pratim; Cheriet, Mohamed

    2016-12-01

    The performance of handwriting recognition systems is dependent on the features extracted from the word image. A large body of features exists in the literature, but no method has yet been proposed to identify the most promising of these, other than a straightforward comparison based on the recognition rate. In this paper, we propose a framework for feature set evaluation based on a collaborative setting. We use a weighted vote combination of recurrent neural network (RNN) classifiers, each trained with a particular feature set. This combination is modeled in a probabilistic framework as a mixture model and two methods for weight estimation are described. The main contribution of this paper is to quantify the importance of feature sets through the combination weights, which reflect their strength and complementarity. We chose the RNN classifier because of its state-of-the-art performance. Also, we provide the first feature set benchmark for this classifier. We evaluated several feature sets on the IFN/ENIT and RIMES databases of Arabic and Latin script, respectively. The resulting combination model is competitive with state-of-the-art systems.
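
The weighted-vote combination described above can be sketched as a convex mixture of per-feature-set classifier posteriors, P(class|x) = sum_k w_k * P_k(class|x), with non-negative weights summing to one; the learned weights then quantify each feature set's contribution. A numpy sketch (posteriors and weights below are toy values, not the estimated ones):

```python
import numpy as np

# Posterior class probabilities from three RNN classifiers,
# each trained on a different feature set (rows sum to 1).
posteriors = np.array([
    [0.7, 0.2, 0.1],   # classifier trained on feature set A
    [0.5, 0.4, 0.1],   # feature set B
    [0.2, 0.6, 0.2],   # feature set C
])

# Estimated combination weights (convex: non-negative, sum to 1);
# larger weight = stronger / more complementary feature set.
weights = np.array([0.5, 0.3, 0.2])

combined = weights @ posteriors        # mixture posterior over classes
prediction = int(np.argmax(combined))
print(combined, prediction)
```

Because the weights form a probability simplex, the combined vector is itself a valid posterior, which is what lets the mixture-model view assign an interpretable importance to each feature set.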

  18. Emergence of unstable itinerant orbits in a recurrent neural network model

    International Nuclear Information System (INIS)

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2005-01-01

    A recurrent neural network model with time delay is investigated by numerical methods. The model functions as a conventional associative memory and also enables us to embed a new kind of memory attractor that cannot be realized in models without time delay, for example chain-ring attractors. This is attributed to the fact that the time delay extends the available state-space dimension. The difference between the basin structures of chain-ring attractors and of isolated cycle attractors is investigated with respect to two attractor pattern sets: random memory patterns and designed memory patterns with intended structures. Compared to isolated attractors with random memory patterns, the basins of chain-ring attractors are reduced considerably. Computer experiments confirm that the basin volume of each embedded chain-ring attractor shrinks, and the emergence of unstable itinerant orbits in the state space outside the memory attractor basins is discovered. The instability of such itinerant orbits is investigated. Results show that a 1-bit difference in initial conditions does not grow beyond 10% of the total dimension within 100 updating steps.

  19. A Recurrent Neural Network Approach to Rear Vehicle Detection Which Considered State Dependency

    Directory of Open Access Journals (Sweden)

    Kayichirou Inagaki

    2003-08-01

    Full Text Available Vision-based vehicle detection often fails when the acquired image quality is degraded by changing optical environments. In addition, the shape of vehicles in images taken from vision sensors changes as vehicles approach. Vehicle detection methods are required to perform successfully under these conditions; however, conventional methods do not account for rapidly varying brightness conditions. We suggest a new method for monocular vision-based vehicle detection that compensates for those conditions. The suggested method employs a Recurrent Neural Network (RNN), which has been applied to spatiotemporal processing. The RNN is able to respond to consecutive scenes involving the target vehicle and can track the movements of the target through the effect of past network states. The suggested method is particularly beneficial in environments with sudden, extreme variations such as bright sunlight and shade. Finally, we demonstrate the effect of the state dependency of the RNN-based method by comparing its detection results with those of a Multi-Layered Perceptron (MLP).

  20. Novel criteria for global exponential periodicity and stability of recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Song Qiankun

    2008-01-01

    In this paper, the global exponential periodicity and stability of recurrent neural networks with time-varying delays are investigated by applying the idea of a vector Lyapunov function, M-matrix theory, and inequality techniques. We assume neither global Lipschitz conditions on the activation functions nor differentiability of the time-varying delays, which were required in other papers. Several novel criteria are found to ascertain the existence, uniqueness and global exponential stability of the periodic solution for recurrent neural networks with time-varying delays. Moreover, the exponential convergence rate index is estimated, which depends on the system parameters. Some previous results are improved and generalized, and an example is given to show the effectiveness of our method.

  1. Improved delay-dependent globally asymptotic stability of delayed uncertain recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Yan, Ji; Bao-Tong, Cui

    2010-01-01

    In this paper, we have improved delay-dependent stability criteria for recurrent neural networks with a delay varying over a range and Markovian jumping parameters. The criteria improve over some previous ones in that they have fewer matrix variables yet less conservatism. In addition, a numerical example is provided to illustrate the applicability of the result using the linear matrix inequality toolbox in MATLAB. (general)

  2. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges of the postgenomic era. The Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology is proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is implemented with a Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators, while the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time in both cases due to the hybrid optimization process.

  3. Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Panda, Priyadarshini; Roy, Kaushik

    2017-01-01

    Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism to train a recurrent spiking neural model to generate sequences. We show that including an adaptive decay of synaptic weights alongside standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme substantially suppresses the chaotic activity in the recurrent model, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.

  4. A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay.

    Science.gov (United States)

    Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang

    In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.

  5. Identifying time-delayed gene regulatory networks via an evolvable hierarchical recurrent neural network.

    Science.gov (United States)

    Kordmahalleh, Mina Moradi; Sefidmazgi, Mohammad Gorji; Harrison, Scott H; Homaifar, Abdollah

    2017-01-01

    The modeling of genetic interactions within a cell is crucial for a basic understanding of physiology and for applied areas such as drug design. Interactions in gene regulatory networks (GRNs) include effects of transcription factors, repressors, small metabolites, and microRNA species. In addition, the effects of regulatory interactions are not always simultaneous, but can occur after a finite time delay, or as a combined outcome of simultaneous and time delayed interactions. Powerful biotechnologies have been rapidly and successfully measuring levels of genetic expression to illuminate different states of biological systems. This has led to an ensuing challenge to improve the identification of specific regulatory mechanisms through regulatory network reconstructions. Solutions to this challenge will ultimately help to spur forward efforts based on the usage of regulatory network reconstructions in systems biology applications. We have developed a hierarchical recurrent neural network (HRNN) that identifies time-delayed gene interactions using time-course data. A customized genetic algorithm (GA) was used to optimize hierarchical connectivity of regulatory genes and a target gene. The proposed design provides a non-fully connected network with the flexibility of using recurrent connections inside the network. These features and the non-linearity of the HRNN facilitate the process of identifying temporal patterns of a GRN. Our HRNN method was implemented with the Python language. It was first evaluated on simulated data representing linear and nonlinear time-delayed gene-gene interaction models across a range of network sizes and variances of noise. We then further demonstrated the capability of our method in reconstructing GRNs of the Saccharomyces cerevisiae synthetic network for in vivo benchmarking of reverse-engineering and modeling approaches (IRMA). 
We compared the performance of our method to TD-ARACNE, HCC-CLINDE, TSNI and ebdbNet across different network

  6. Physiological modules for generating discrete and rhythmic movements: action identification by a dynamic recurrent neural network.

    Science.gov (United States)

    Bengoetxea, Ana; Leurs, Françoise; Hoellinger, Thomas; Cebolla, Ana M; Dan, Bernard; McIntyre, Joseph; Cheron, Guy

    2014-01-01

    In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns, and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by PCA in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three different fundamental modules: one regulating the co-activation of all muscles over the time span of the movement, and two others eliciting patterns of reciprocal activation operating in orthogonal directions.

  7. Recurrent neural networks with specialized word embeddings for health-domain named-entity recognition.

    Science.gov (United States)

    Jauregi Unanue, Iñigo; Zare Borzeshi, Ehsan; Piccardi, Massimo

    2017-12-01

    Previous state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text "feature engineering" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word "embeddings". (i) To create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering. (ii) To create richer, more specialized word embeddings by using health domain datasets such as MIMIC-III. (iii) To evaluate our systems over three contemporary datasets. Two deep learning methods, namely the Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models. We have obtained the best results with the Bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset. We present a state-of-the-art system for DNR and CCE. Automated word embeddings have allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained on datasets appropriate to the domain in order to adequately cover the domain-specific vocabulary. Copyright © 2017 Elsevier Inc. All rights reserved.
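
    Systems like these are scored with entity-level F1 over BIO-tagged spans. A minimal sketch of that evaluation, with invented toy tag sequences (the entity types DRUG/DISEASE here are only illustrative):

```python
def bio_spans(tags):
    """Collect (start, end, type) entity spans from BIO tags (end exclusive)."""
    spans, start, etype = set(), None, None
    for i, tag in enumerate(list(tags) + ["O"]):   # sentinel flushes the last span
        if start is not None and not (tag.startswith("I-") and tag[2:] == etype):
            spans.add((start, i, etype))
            start = etype = None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

def span_f1(gold_tags, pred_tags):
    """Exact-match entity-level F1, the usual NER evaluation."""
    gold, pred = bio_spans(gold_tags), bio_spans(pred_tags)
    tp = len(gold & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = ["B-DRUG", "I-DRUG", "O", "B-DISEASE"]
pred = ["B-DRUG", "I-DRUG", "O", "O"]
print(round(span_f1(gold, pred), 3))   # one of two gold spans found -> 0.667
```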

  8. Learning a Transferable Change Rule from a Recurrent Neural Network for Land Cover Change Detection

    Directory of Open Access Journals (Sweden)

    Haobo Lyu

    2016-06-01

    Full Text Available When exploited in remote sensing analysis, a reliable change rule with transfer ability can detect changes accurately and be applied widely. However, in practice, the complexity of land cover changes makes it difficult to use only one change rule or change feature learned from a given multi-temporal dataset to detect any other new target images without applying other learning processes. In this study, we consider the design of an efficient change rule having transferability to detect both binary and multi-class changes. The proposed method relies on an improved Long Short-Term Memory (LSTM) model to acquire and record the change information of long-term sequence remote sensing data. In particular, a core memory cell is utilized to learn the change rule from the information concerning binary changes or multi-class changes. Three gates are utilized to control the input, output and update of the LSTM model for optimization. In addition, the learned rule can be applied to detect changes and transfer the change rule from one learned image to another new target multi-temporal image. In this study, binary experiments, transfer experiments and multi-class change experiments are exploited to demonstrate the superiority of our method. Three contributions of this work can be summarized as follows: (1) the proposed method can learn an effective change rule to provide reliable change information for multi-temporal images; (2) the learned change rule has good transferability for detecting changes in new target images without any extra learning process, and the new target images should have a multi-spectral distribution similar to that of the training images; and (3) to the authors’ best knowledge, this is the first time that deep learning in recurrent neural networks is exploited for change detection. In addition, under the framework of the proposed method, changes can be detected under both binary detection and multi-class change detection.
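
    The three-gate memory-cell update at the heart of the LSTM described above can be sketched in a scalar toy form. The weights below are arbitrary illustrative values, not the paper's trained parameters:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, p):
    """One LSTM step with input (i), forget (f) and output (o) gates
    controlling the memory cell c (scalar toy version; weights in dict p)."""
    i = sigmoid(p["wi"] * x + p["ui"] * h + p["bi"])   # input gate
    f = sigmoid(p["wf"] * x + p["uf"] * h + p["bf"])   # forget gate
    o = sigmoid(p["wo"] * x + p["uo"] * h + p["bo"])   # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h + p["bg"]) # candidate update
    c_new = f * c + i * g          # gated update of the memory cell
    h_new = o * math.tanh(c_new)   # gated output
    return h_new, c_new

p = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h = c = 0.0
for x in [0.1, 0.9, -0.4, 0.7]:    # a toy multi-temporal pixel sequence
    h, c = lstm_step(x, h, c, p)
print(round(h, 4))
```

    The memory cell c is what lets the model "record" change information across the image sequence: the forget gate decides how much history to keep, the input gate how much of the current observation to write in.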

  9. Deep learning for pharmacovigilance: recurrent neural network architectures for labeling adverse drug reactions in Twitter posts.

    Science.gov (United States)

    Cocos, Anne; Fiks, Alexander G; Masino, Aaron J

    2017-07-01

    Social media is an important pharmacovigilance data source for adverse drug reaction (ADR) identification. Human review of social media data is infeasible due to data quantity, thus natural language processing techniques are necessary. Social media includes informal vocabulary and irregular grammar, which challenge natural language processing methods. Our objective is to develop a scalable, deep-learning approach that exceeds state-of-the-art ADR detection performance in social media. We developed a recurrent neural network (RNN) model that labels words in an input sequence with ADR membership tags. The only input features are word-embedding vectors, which can be formed through task-independent pretraining or during ADR detection training. Our best-performing RNN model used pretrained word embeddings created from a large, non-domain-specific Twitter dataset. It achieved an approximate match F-measure of 0.755 for ADR identification on the dataset, compared to 0.631 for a baseline lexicon system and 0.65 for the state-of-the-art conditional random field model. Feature analysis indicated that semantic information in pretrained word embeddings boosted sensitivity and, combined with contextual awareness captured in the RNN, precision. Our model required no task-specific feature engineering, suggesting generalizability to additional sequence-labeling tasks. Learning curve analysis showed that our model reached optimal performance with fewer training examples than the other models. ADR detection performance in social media is significantly improved by using a contextually aware model and word embeddings formed from large, unlabeled datasets. The approach reduces manual data-labeling requirements and is scalable to large social media datasets. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
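
    The approximate-match F-measure reported above credits a predicted mention that merely overlaps a gold annotation. A minimal sketch with invented token-offset spans:

```python
def overlaps(a, b):
    """Two (start, end) spans overlap if each starts before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

def approximate_f1(gold, pred):
    """Approximate-match F-measure: a span counts as correct if it
    overlaps any span on the other side (a common relaxed NER metric)."""
    tp_pred = sum(any(overlaps(p, g) for g in gold) for p in pred)
    tp_gold = sum(any(overlaps(g, p) for p in pred) for g in gold)
    prec = tp_pred / len(pred) if pred else 0.0
    rec = tp_gold / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = [(2, 5), (8, 10)]   # token offsets of annotated ADR mentions
pred = [(3, 6)]            # a prediction that partially overlaps one of them
print(round(approximate_f1(gold, pred), 3))   # precision 1.0, recall 0.5 -> 0.667
```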

  10. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2016-07-01

    Full Text Available To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as the cyclic structure, and waiting for a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  11. Neural correlates of working memory in first episode and recurrent depression: An fMRI study.

    Science.gov (United States)

    Yüksel, Dilara; Dietsche, Bruno; Konrad, Carsten; Dannlowski, Udo; Kircher, Tilo; Krug, Axel

    2018-06-08

    Patients suffering from major depressive disorder (MDD) show deficits in working memory (WM) performance accompanied by bilateral fronto-parietal BOLD signal changes. It is unclear whether patients with a first depressive episode (FDE) exhibit the same signal changes as patients with recurrent depressive episodes (RDE). We investigated 74 MDD inpatients (48 RDE, 26 FDE) and 74 healthy control (HC) subjects performing an n-back WM task (0-back, 2-back, 3-back condition) using 3T fMRI. fMRI analyses revealed deviating BOLD signal in MDD in the thalamus (0-back vs. 2-back), the angular gyrus (0-back vs. 3-back), and the superior frontal gyrus (2-back vs. 3-back). Further effects were observed between RDE and FDE. Thus, RDE displayed differing neural activation in the middle frontal gyrus (2-back vs. 3-back), the inferior frontal gyrus, and the precentral gyrus (0-back vs. 2-back). In addition, both HC and FDE showed a linear activation trend depending on task complexity. Although we failed to find behavioral differences between the groups, results suggest differing BOLD signal in fronto-parietal brain regions in MDD vs. HC, and in RDE vs. FDE. Moreover, both HC and FDE show similar trends in activation shapes. This indicates a link between levels of complexity-dependent activation in fronto-parietal brain regions and the stage of MDD. We therefore assume that load-dependent BOLD signal during WM is impaired in MDD, and that it is particularly affected in RDE. We also suspect neurobiological compensatory mechanisms of the reported brain regions in (working) memory functioning. Copyright © 2018 Elsevier Inc. All rights reserved.
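
    The n-back task used here has a simple formal structure: a trial is a "target" when the current stimulus matches the one presented n positions earlier. A minimal sketch with an invented stimulus sequence:

```python
def n_back_targets(stimuli, n):
    """Mark positions whose stimulus repeats the one shown n steps earlier,
    i.e. the 'target' trials of an n-back working-memory task."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

seq = list("ABABCACC")
print(n_back_targets(seq, 2))
# [False, False, True, True, False, False, True, False]
```

    Raising n from 2 to 3 increases the load on working memory, which is what drives the complexity-dependent activation differences reported in the study.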

  12. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as the cyclic structure, and waiting for a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
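
    The fixed-point and branching attractor structures described above can be illustrated with the smallest possible recurrent system, a single tanh unit h <- tanh(w*h + b). The parameters below are invented for illustration, not the paper's network:

```python
import math

def settle(w, b, h0, steps=200):
    """Iterate the one-unit recurrent map h <- tanh(w*h + b)."""
    h = h0
    for _ in range(steps):
        h = math.tanh(w * h + b)
    return h

# |w| < 1: the map is a contraction, so every start converges to the
# same fixed point -- a 'waiting' state in the paper's terminology
ends = {round(settle(0.5, 0.3, h0), 6) for h0 in (-0.9, 0.0, 0.9)}

# w > 1, b = 0: two symmetric stable fixed points appear, so the initial
# condition selects the outcome -- the essence of a branching structure
lo, hi = settle(2.0, 0.0, -0.1), settle(2.0, 0.0, 0.1)
print(sorted(ends), round(lo, 3), round(hi, 3))
```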

  13. De-identification of clinical notes via recurrent neural network and conditional random field.

    Science.gov (United States)

    Liu, Zengjian; Tang, Buzhou; Wang, Xiaolong; Chen, Qingcai

    2017-11-01

    De-identification, the process of identifying and removing protected health information (PHI) from clinical data, is a critical step in enabling data to be shared or published. The 2016 Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-scale and RDOC Individualized Domains (N-GRID) clinical natural language processing (NLP) challenge includes a de-identification track (track 1) on de-identifying electronic medical records (EMRs). The challenge organizers provide 1000 annotated mental health records for this track, 600 of which are used as a training set and 400 as a test set. We develop a hybrid system for the de-identification task on the training set. Firstly, four individual subsystems, that is, a subsystem based on bidirectional LSTM (long short-term memory, a variant of recurrent neural network), a subsystem based on bidirectional LSTM with features, a subsystem based on conditional random field (CRF) and a rule-based subsystem, are used to identify PHI instances. Then, an ensemble learning-based classifier is deployed to combine all PHI instances predicted by the three machine learning-based subsystems. Finally, the results of the ensemble learning-based classifier and the rule-based subsystem are merged together. Experiments conducted on the official test set show that our system achieves the highest micro F1-scores of 93.07%, 91.43% and 95.23% under the "token", "strict" and "binary token" criteria respectively, ranking first in the 2016 CEGS N-GRID NLP challenge. In addition, on the dataset of the 2014 i2b2 NLP challenge, our system achieves the highest micro F1-scores of 96.98%, 95.11% and 98.28% under the "token", "strict" and "binary token" criteria respectively, outperforming other state-of-the-art systems. All these experiments prove the effectiveness of our proposed method. Copyright © 2017. Published by Elsevier Inc.
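
    A minimal sketch of the rule-based subsystem and the span-merging step. The regular expressions and the stand-in machine-learning predictions below are toy examples invented for illustration, not the challenge system's actual rules:

```python
import re

# toy PHI patterns (illustrative only)
RULES = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def rule_phi(text):
    """Return (start, end, label) character spans matched by the rules."""
    return {(m.start(), m.end(), label)
            for label, pat in RULES.items() for m in pat.finditer(text)}

def merge(*prediction_sets, min_votes=1):
    """Ensemble step: keep spans predicted by at least `min_votes` subsystems
    (min_votes=1 is a plain union, higher values a majority-style vote)."""
    votes = {}
    for preds in prediction_sets:
        for span in preds:
            votes[span] = votes.get(span, 0) + 1
    return {s for s, v in votes.items() if v >= min_votes}

note = "Seen on 03/14/2016, callback 555-867-5309."
rules = rule_phi(note)
ml = {(8, 18, "DATE")}        # invented stand-in for an LSTM/CRF subsystem
print(sorted(merge(rules, ml)))
```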

  14. A novel prosodic-information synthesizer based on recurrent fuzzy neural network for the Chinese TTS system.

    Science.gov (United States)

    Lin, Chin-Teng; Wu, Rui-Cheng; Chang, Jyh-Yeong; Liang, Sheng-Fu

    2004-02-01

    In this paper, a new technique for the Chinese text-to-speech (TTS) system is proposed. Our major effort focuses on prosodic information generation. New methodologies for constructing fuzzy rules in a prosodic model simulating human pronunciation rules are developed. The proposed Recurrent Fuzzy Neural Network (RFNN) is a multilayer recurrent neural network (RNN) which integrates a Self-cOnstructing Neural Fuzzy Inference Network (SONFIN) into a recurrent connectionist structure. The RFNN can be functionally divided into two parts. The first part adopts the SONFIN as a prosodic model to explore the relationship between high-level linguistic features and prosodic information based on fuzzy inference rules. As compared to conventional neural networks, the SONFIN can always construct itself with an economical network size at high learning speed. The second part employs a five-layer network to generate all prosodic parameters by directly using the prosodic fuzzy rules inferred from the first part as well as other important features of syllables. The TTS system combined with the proposed method can handle not only sandhi rules but also the other prosodic phenomena found in traditional TTS systems. Moreover, the proposed scheme can even discover new rules about prosodic phrase structure. The performance of the proposed RFNN-based prosodic model is verified by embedding it into a Chinese TTS system with a Chinese monosyllable database based on the time-domain pitch synchronous overlap add (TD-PSOLA) method. Our experimental results show that the proposed RFNN can generate proper prosodic parameters including pitch means, pitch shapes, maximum energy levels, syllable durations, and pause durations. Some synthetic sounds are available online for demonstration.

  15. Distributed Recurrent Neural Forward Models with Synaptic Adaptation and CPG-based control for Complex Behaviors of Walking Robots

    Directory of Open Access Journals (Sweden)

    Sakyasingha eDasgupta

    2015-09-01

    Full Text Available Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles, etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator-based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network-based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps as well as climbing over high obstacles. Furthermore we demonstrate that the newly developed recurrent network based approach to sensorimotor prediction outperforms the previous state of the art adaptive neuron
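
    The central pattern generator component can be sketched as a classic two-neuron SO(2)-type oscillator, a standard CPG building block (not the paper's exact network); the gain alpha and phase phi below are illustrative values:

```python
import math

def cpg(steps, alpha=1.1, phi=0.3):
    """Two-neuron SO(2)-type CPG: a rotation matrix scaled slightly above 1,
    squashed by tanh, yields a stable self-sustained rhythmic oscillation."""
    w = [[alpha * math.cos(phi), alpha * math.sin(phi)],
         [-alpha * math.sin(phi), alpha * math.cos(phi)]]
    o = [0.2, 0.0]              # small nonzero start to kick off the rhythm
    out = []
    for _ in range(steps):
        o = [math.tanh(w[0][0] * o[0] + w[0][1] * o[1]),
             math.tanh(w[1][0] * o[0] + w[1][1] * o[1])]
        out.append(o[0])
    return out

sig = cpg(300)   # a rhythmic signal usable as a basic leg-joint drive
```

    Changing phi changes the stepping frequency, which is one reason this oscillator family is popular for gait control: a single parameter modulates the rhythm without retraining anything.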

  16. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    Directory of Open Access Journals (Sweden)

    S. N. Naikwad

    2009-01-01

    Full Text Available A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear dynamics, as the reaction is exothermic. A review of the literature shows that process control of the CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes temporal relationships in the input-output mappings, a time lagged recurrent neural network is particularly suited for the identification task. The standard backpropagation algorithm with a momentum term is used to train this model. The various parameters, such as the number of processing elements, number of hidden layers, training and testing percentage, learning rule, and transfer function in the hidden and output layers, are investigated on the basis of performance measures such as MSE, NMSE, and the correlation coefficient on the testing data set. Finally, the effects of different norms are tested along with variation in the gamma memory filter. It is demonstrated that the dynamic NN model has a remarkable system identification capability for the problems considered in this paper. Thus the FTLR NN with gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is a major contribution of this paper.
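
    The gamma memory filter that gives the FTLR NN its short-term memory is a cascade of leaky integrators: tap k holds a progressively older, smoothed copy of the input. A minimal sketch with an illustrative memory depth and gamma parameter mu:

```python
def gamma_memory(signal, order=3, mu=0.5):
    """Gamma memory filter: g_0[t] = x[t] and, for k >= 1,
    g_k[t] = (1 - mu) * g_k[t-1] + mu * g_{k-1}[t-1].
    Larger k taps hold older, more smeared-out history of the input."""
    taps = [0.0] * (order + 1)
    history = []
    for x in signal:
        prev = taps[:]                 # tap values from time t-1
        taps[0] = x
        for k in range(1, order + 1):
            taps[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        history.append(taps[:])
    return history

impulse = [1.0] + [0.0] * 9
hist = gamma_memory(impulse)
print([round(h[1], 4) for h in hist])  # tap 1 decays geometrically after the impulse
```

    The parameter mu trades memory depth against resolution, which is why the abstract reports testing variations of the gamma filter alongside the other hyperparameters.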

  17. Novel recurrent neural network for modelling biological networks: oscillatory p53 interaction dynamics.

    Science.gov (United States)

    Ling, Hong; Samarasinghe, Sandhya; Kulasiri, Don

    2013-12-01

    Understanding the control of cellular networks consisting of gene and protein interactions and their emergent properties is a central activity of Systems Biology research. For this, continuous, discrete, hybrid, and stochastic methods have been proposed. Currently, the most common approach to modelling accurate temporal dynamics of networks is ordinary differential equations (ODE). However, critical limitations of ODE models are the difficulty of kinetic parameter estimation and the numerical solution of a large number of equations, making them better suited to smaller systems. In this article, we introduce a novel recurrent artificial neural network (RNN) that addresses the above limitations and produces a continuous model that easily estimates parameters from data, can handle a large number of molecular interactions and quantifies temporal dynamics and emergent systems properties. This RNN is based on a system of ODEs representing molecular interactions in a signalling network. Each neuron represents the concentration change of one molecule, represented by an ODE. Weights of the RNN correspond to kinetic parameters in the system and can be adjusted incrementally during network training. The method is applied to the p53-Mdm2 oscillation system, a crucial component of the DNA damage response pathways activated by a damage signal. Simulation results indicate that the proposed RNN can successfully represent the behaviour of the p53-Mdm2 oscillation system and solve the parameter estimation problem with high accuracy. Furthermore, we present a modified form of the RNN that estimates parameters and captures systems dynamics from sparse data collected over relatively large time steps. We also investigate the robustness of the p53-Mdm2 system using the trained RNN under various levels of parameter perturbation to gain a greater understanding of the control of the p53-Mdm2 system. Its outcomes on robustness are consistent with the current biological knowledge of this system. As more
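
    The idea of an RNN whose neurons are ODE state variables can be sketched with a toy two-species feedback loop integrated by forward Euler. The weight matrix and initial concentrations below are invented for illustration, not the p53-Mdm2 parameters:

```python
import math

def simulate(weights, x0, dt=0.01, steps=2000):
    """Continuous-time RNN dx_i/dt = -x_i + sigma(sum_j w_ij x_j):
    each neuron is one molecular concentration, and the weights play
    the role of kinetic parameters. Integrated with forward Euler."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    x = list(x0)
    traj = [x[:]]
    for _ in range(steps):
        dx = [-x[i] + sig(sum(weights[i][j] * x[j] for j in range(len(x))))
              for i in range(len(x))]
        x = [x[i] + dt * dx[i] for i in range(len(x))]
        traj.append(x[:])
    return traj

# a toy two-species loop: species 1 activates species 2, which represses species 1
W = [[0.0, -6.0],
     [6.0,  0.0]]
traj = simulate(W, [0.2, 0.8])
```

    Training such a network means adjusting W until the simulated trajectories match measured concentration time courses, which is exactly the parameter-estimation-by-adaptation idea in the abstract.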

  18. A system of recurrent neural networks for modularising, parameterising and dynamic analysis of cell signalling networks.

    Science.gov (United States)

    Samarasinghe, S; Ling, H

    In this paper, we show how to extend our previously proposed novel continuous-time Recurrent Neural Networks (RNN) approach, which retains the advantage of continuous dynamics offered by Ordinary Differential Equations (ODE) while enabling parameter estimation through adaptation, to larger signalling networks using a modular approach. Specifically, the signalling network is decomposed into several sub-models based on important temporal events in the network. Each sub-model is represented by the proposed RNN and trained using data generated from the corresponding ODE model. Trained sub-models are assembled into a whole-system RNN which is then subjected to systems dynamics and sensitivity analyses. The concept is illustrated by application to the G1/S transition in the cell cycle using the Iwamoto et al. (2008) ODE model. We decomposed the G1/S network into 3 sub-models: (i) E2F transcription factor release; (ii) E2F and CycE positive feedback loop for elevating cyclin levels; and (iii) E2F and CycA negative feedback to degrade E2F. The trained sub-models accurately represented system dynamics and parameters were in good agreement with the ODE model. The whole-system RNN, however, revealed a couple of parameters contributing to compounding errors due to feedback and required refinement of sub-model 2. These related to the reversible reaction between CycE/CDK2 and p27, its inhibitor. The revised whole-system RNN model very accurately matched the dynamics of the ODE system. Local sensitivity analysis of the whole-system model further revealed the most dominant influence of the above two parameters in perturbing the G1/S transition, giving support to a recent hypothesis that the release of the inhibitor p27 from the Cyc/CDK complex triggers cell cycle stage transition. To make the model useful in a practical setting, we modified each RNN sub-model with a time relay switch to facilitate larger-interval input data (≈20 min) (the original model used data at intervals of 30 s or less) and retrained them that produced

  19. Synchronization of chaotic systems and identification of nonlinear systems by using recurrent hierarchical type-2 fuzzy neural networks.

    Science.gov (United States)

    Mohammadzadeh, Ardashir; Ghaemi, Sehraneh

    2015-09-01

    This paper proposes a novel approach for training the proposed recurrent hierarchical interval type-2 fuzzy neural networks (RHT2FNN) based on square-root cubature Kalman filters (SCKF). The SCKF algorithm is used to adjust the premise part of the type-2 FNN, the defuzzification weights, and the feedback weights. The recurrence in the proposed network is the feedback of each membership function's output to itself. The proposed RHT2FNN is employed in a sliding mode control scheme for the synchronization of chaotic systems. Unknown functions in the sliding mode control approach are estimated by the RHT2FNN. Another application of the proposed RHT2FNN is the identification of dynamic nonlinear systems. The effectiveness of the proposed network and its learning algorithm is verified by several simulation examples. Furthermore, the universal approximation property of RHT2FNNs is also shown. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Recurrent neural network approach to quantum signal: coherent state restoration for continuous-variable quantum key distribution

    Science.gov (United States)

    Lu, Weizhao; Huang, Chunhui; Hou, Kun; Shi, Liting; Zhao, Huihui; Li, Zhengmei; Qiu, Jianfeng

    2018-05-01

    In continuous-variable quantum key distribution (CV-QKD), a weak signal carrying information transmits from Alice to Bob; during this process it is easily influenced by unknown noise, which reduces the signal-to-noise ratio and strongly impacts the reliability and stability of the communication. The recurrent quantum neural network (RQNN) is an artificial neural network model which can perform stochastic filtering without any prior knowledge of the signal and noise. In this paper, a modified RQNN algorithm with an expectation maximization algorithm is proposed to process the signal in CV-QKD, which follows the basic rules of quantum mechanics. After RQNN processing, noise power decreases by about 15 dB, the coherent signal recognition rate of the RQNN is 96%, and the quantum bit error rate (QBER) drops to 4%, which is 6.9 percentage points lower than the original QBER, and the channel capacity is notably enlarged.

  1. Protein-Protein Interaction Article Classification Using a Convolutional Recurrent Neural Network with Pre-trained Word Embeddings.

    Science.gov (United States)

    Matos, Sérgio; Antunes, Rui

    2017-12-13

    Curation of protein interactions from scientific articles is an important task, since interaction networks are essential for the understanding of biological processes associated with disease or pharmacological action, for example. However, the increase in the number of publications that potentially contain relevant information turns this into a very challenging and expensive task. In this work we used a convolutional recurrent neural network for identifying relevant articles for extracting information regarding protein interactions. Using the BioCreative III Article Classification Task dataset, we achieved an area under the precision-recall curve of 0.715 and a Matthews correlation coefficient of 0.600, which represents an improvement over previous works.
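
    The Matthews correlation coefficient reported above is computed from the 2x2 confusion matrix; the counts below are invented for illustration, not the paper's results:

```python
def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from a 2x2 confusion matrix.
    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect)."""
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / den if den else 0.0

# invented counts for a class-imbalanced article-triage scenario
print(round(mcc(tp=90, fp=10, fn=20, tn=880), 3))   # 0.842
```

    Unlike plain accuracy, MCC stays informative under heavy class imbalance, which is why it is a common companion to the precision-recall curve in triage tasks like this one.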

  2. Robust stability analysis of Takagi—Sugeno uncertain stochastic fuzzy recurrent neural networks with mixed time-varying delays

    International Nuclear Information System (INIS)

    Ali, M. Syed

    2011-01-01

    In this paper, the global stability of Takagi—Sugeno (TS) uncertain stochastic fuzzy recurrent neural networks with discrete and distributed time-varying delays (TSUSFRNNs) is considered. A novel LMI-based stability criterion is obtained by using Lyapunov functional theory to guarantee the asymptotic stability of TSUSFRNNs. The proposed stability conditions are demonstrated through numerical examples. Furthermore, the supplementary requirement that the time derivative of time-varying delays must be smaller than one is removed. Comparison results demonstrate that the proposed method guarantees a wider stability region than the other methods available in the existing literature. (general)
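
    Checking the LMI criteria themselves requires a semidefinite-programming solver, but the simplest classical sufficient condition they refine can be checked directly: for a delay-free recurrent network x' = -x + W f(x) with an L-Lipschitz activation, L * ||W||_2 < 1 guarantees global asymptotic stability. A sketch with an invented weight matrix (this is a far weaker test than the delay-dependent LMI criteria above):

```python
def spectral_norm(W, iters=100):
    """Estimate ||W||_2 by power iteration on W^T W."""
    n = len(W[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
        v = [sum(W[i][j] * u[i] for i in range(len(W))) for j in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
    return sum(x * x for x in u) ** 0.5

W = [[0.3, -0.2],
     [0.1, 0.25]]     # invented connection weights
L = 1.0               # tanh is 1-Lipschitz
stable = L * spectral_norm(W) < 1
print(stable)
```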

  3. Novel delay-distribution-dependent stability analysis for continuous-time recurrent neural networks with stochastic delay

    International Nuclear Information System (INIS)

    Wang Shen-Quan; Feng Jian; Zhao Qing

    2012-01-01

    In this paper, the problem of delay-distribution-dependent stability is investigated for continuous-time recurrent neural networks (CRNNs) with stochastic delay. Different from the common assumptions on time delays, it is assumed that the probability distribution of the delay taking values in some intervals is known a priori. By making full use of the information concerning the probability distribution of the delay and by using a tighter bounding technique (the reciprocally convex combination method), less conservative asymptotic mean-square stable sufficient conditions are derived in terms of linear matrix inequalities (LMIs). Two numerical examples show that our results are better than the existing ones. (general)

  4. Indirect intelligent sliding mode control of a shape memory alloy actuated flexible beam using hysteretic recurrent neural networks

    International Nuclear Information System (INIS)

    Hannen, Jennifer C; Buckner, Gregory D; Crews, John H

    2012-01-01

    This paper introduces an indirect intelligent sliding mode controller (IISMC) for shape memory alloy (SMA) actuators, specifically a flexible beam deflected by a single offset SMA tendon. The controller manipulates applied voltage, which alters SMA tendon temperature to track reference bending angles. A hysteretic recurrent neural network (HRNN) captures the nonlinear, hysteretic relationship between SMA temperature and bending angle. The variable structure control strategy provides robustness to model uncertainties and parameter variations, while effectively compensating for system nonlinearities, achieving superior tracking compared to an optimized PI controller. (paper)
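
    The hysteresis the HRNN must capture can be illustrated with the elementary "play" (backlash) operator, a standard building block of Prandtl-Ishlinskii hysteresis models (a simplification, not the paper's HRNN); the input signal and backlash width are invented:

```python
import math

def play_operator(input_signal, width):
    """Backlash ('play') hysteresis: the output follows the input only
    after slack of half-width r is taken up, so it lags on reversals."""
    r = width / 2.0
    y = 0.0
    out = []
    for u in input_signal:
        y = max(u - r, min(u + r, y))   # clamp y into the band [u - r, u + r]
        out.append(y)
    return out

u = [math.sin(0.1 * t) for t in range(100)]
y = play_operator(u, width=0.4)
```

    Plotting y against u traces the characteristic hysteresis loop: the output differs depending on whether the input is rising or falling, which is exactly the memory effect that makes SMA temperature-to-angle maps hard for memoryless models.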

  5. Artificial neural network and falls in community-dwellers: a new approach to identify the risk of recurrent falling?

    Science.gov (United States)

    Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Annweiler, Cédric; Fantino, Bruno; Beauchet, Olivier

    2015-04-01

    Identification of the risk of recurrent falls is complex in older adults. The aim of this study was to examine the efficiency of 3 artificial neural networks (ANNs: multilayer perceptron [MLP], modified MLP, and neuroevolution of augmenting topologies [NEAT]) for the classification of recurrent fallers and nonrecurrent fallers using a set of clinical characteristics corresponding to risk factors of falls measured among community-dwelling older adults. Based on a cross-sectional design, 3289 community-dwelling volunteers aged 65 and older were recruited. Age, gender, body mass index (BMI), number of drugs daily taken, use of psychoactive drugs, diphosphonate, calcium, vitamin D supplements and walking aid, fear of falling, distance vision score, Timed Up and Go (TUG) score, lower-limb proprioception, handgrip strength, depressive symptoms, cognitive disorders, and history of falls were recorded. Participants were separated into 2 groups based on the number of falls that occurred over the past year: 0 or 1 fall and 2 or more falls. In addition, the total population was separated into training and testing subgroups for ANN analysis. Among 3289 participants, 18.9% (n = 622) were recurrent fallers. NEAT, using 15 clinical characteristics (i.e., use of walking aid, fear of falling, use of calcium, depression, use of vitamin D supplements, female, cognitive disorders, BMI 4, vision score 9 seconds, handgrip strength score ≤29 (N), and age ≥75 years), showed the best efficiency for identification of recurrent fallers: sensitivity (80.42%), specificity (92.54%), positive predictive value (84.38%), negative predictive value (90.34%), accuracy (88.39%), and Cohen κ (0.74), compared with MLP and modified MLP. NEAT, using a set of 15 clinical characteristics, was an efficient ANN for the identification of recurrent fallers in older community-dwellers. Copyright © 2015 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
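
    The evaluation metrics reported above all derive from one 2x2 confusion matrix. A sketch with invented counts (not the study's data):

```python
def classifier_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV, NPV, accuracy and Cohen's kappa
    from a 2x2 confusion matrix."""
    n = tp + fn + fp + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / n
    # chance agreement expected from the marginal frequencies
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    pe = p_yes + p_no
    kappa = (acc - pe) / (1 - pe)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                accuracy=acc, kappa=kappa)

m = classifier_metrics(tp=80, fn=20, fp=15, tn=185)   # invented counts
print({k: round(v, 3) for k, v in m.items()})
```

    Cohen's kappa discounts the agreement expected by chance, which is why it is reported alongside raw accuracy when one class (here, recurrent fallers) is a minority.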

  6. An adaptive PID like controller using mix locally recurrent neural network for robotic manipulator with variable payload.

    Science.gov (United States)

    Sharma, Richa; Kumar, Vikas; Gaur, Prerna; Mittal, A P

    2016-05-01

    Being a complex, non-linear and coupled system, the robotic manipulator cannot be effectively controlled using a classical proportional-integral-derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, the gains of the PID controller should be conservatively tuned and should adapt to process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes, which act as proportional, integral and derivative nodes. The gains of the mix locally recurrent neural network based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than being assumed randomly. A sequential learning based least square algorithm is then investigated for the on-line adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against plant parameter uncertainties and external disturbances for both links of the two-link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using the Lyapunov stability criterion. A performance comparison is carried out among the MLRNNPID controller, the CSA-optimized NNPID (OPTNNPID) controller and the CSA-optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
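
    To make the PID-node idea concrete, here is a minimal sketch (not the paper's MLRNN; all gain values are illustrative placeholders) of a discrete PID controller expressed as three nodes, where the integral and derivative nodes carry recurrent state:

```python
# Illustrative sketch: a discrete PID controller viewed as three parallel
# nodes -- proportional, integral (a recurrent accumulator), and derivative
# (a recurrent one-step memory). Gains are placeholders, not paper values.
class PIDNode:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0    # state of the integral node
        self.prev_error = 0.0  # state of the derivative node

    def step(self, error):
        self.integral += error * self.dt                  # integral node
        derivative = (error - self.prev_error) / self.dt  # derivative node
        self.prev_error = error
        # Output layer: weighted sum of the three node activations.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDNode(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.step(1.0)  # control output for a unit error on the first step
```

    In the paper's scheme the three gains would additionally be adapted on-line; here they are fixed for clarity.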

  7. A Study of Recurrent and Convolutional Neural Networks in the Native Language Identification Task

    KAUST Repository

    Werfelmann, Robert

    2018-01-01

    The training and testing data was obtained from a large corpus of publicly available essays written by authors of several countries around the world. The neural network models consisted of Long Short-Term Memory and Convolutional networks using the sentences of each document as the input. Additional statistical features were generated from the text to complement the predictions of the neural networks.

  8. Wind Turbine Driving a PM Synchronous Generator Using Novel Recurrent Chebyshev Neural Network Control with the Ideal Learning Rate

    Directory of Open Access Journals (Sweden)

    Chih-Hong Lin

    2016-06-01

    Full Text Available A permanent magnet (PM) synchronous generator system driven by a wind turbine (WT), connected with a smart grid via an AC-DC converter and a DC-AC converter, is controlled by the novel recurrent Chebyshev neural network (NN) and amended particle swarm optimization (PSO) to regulate the output power and output voltage of the two power converters in this study. Because a PM synchronous generator system driven by a WT is an unknown, non-linear and time-varying dynamic system, an on-line trained novel recurrent Chebyshev NN control system is developed to regulate the DC voltage of the AC-DC converter and the AC voltage of the DC-AC converter connected with the smart grid. Furthermore, the variable learning rate of the novel recurrent Chebyshev NN is regulated according to a discrete-type Lyapunov function for improving the control performance and enhancing the convergence speed. Finally, some experimental results are shown to verify the effectiveness of the proposed control method for a WT driving a PM synchronous generator system in a smart grid.
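
    A Chebyshev NN expands its input through the Chebyshev polynomial basis before a trainable weighted layer. A minimal sketch of that basis expansion via the standard recurrence (the paper's network structure and learning-rate law are not reproduced here):

```python
def chebyshev_basis(x, n):
    """First n Chebyshev polynomials of the first kind at x, via the
    recurrence T0 = 1, T1 = x, Tk = 2*x*T(k-1) - T(k-2)."""
    T = [1.0, float(x)]
    for _ in range(2, n):
        T.append(2.0 * x * T[-1] - T[-2])
    return T[:n]

# A Chebyshev NN would feed such a basis vector into a weighted output layer.
basis = chebyshev_basis(0.5, 4)  # [1.0, 0.5, -0.5, -1.0]
```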

  9. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
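
    As a toy illustration of the neurodynamic idea (deliberately not the paper's one-layer network), here is a discretized subgradient flow for the scalar least absolute deviation problem min_x Σ|x − b_i|, whose optimum is the median of the data:

```python
def lad_flow(b, x0=0.0, gain=0.05, steps=2000):
    """Discretized subgradient flow dx/dt = -gain * sum_i sign(x - b_i) for
    min_x sum_i |x - b_i|; the minimizer is the median of b. A scalar
    caricature of a neurodynamic optimization circuit, not the paper's model."""
    sign = lambda v: (v > 0) - (v < 0)
    x = x0
    for _ in range(steps):
        x -= gain * sum(sign(x - bi) for bi in b)
    return x

x_star = lad_flow([1.0, 2.0, 10.0])  # settles into a small neighborhood of 2.0
```

    The paper's network replaces the fixed step size with a gain parameter chosen to guarantee finite-time convergence, and handles the constrained, multivariate case.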

  10. Modeling the dynamics of the lead bismuth eutectic experimental accelerator driven system by an infinite impulse response locally recurrent neural network

    International Nuclear Information System (INIS)

    Zio, Enrico; Pedroni, Nicola; Broggi, Matteo; Golea, Lucia Roxana

    2009-01-01

    In this paper, an infinite impulse response locally recurrent neural network (IIR-LRNN) is employed for modelling the dynamics of the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS). The network is trained by recursive back-propagation (RBP) and its ability to estimate transients is tested under various conditions. The results demonstrate the robustness of the locally recurrent scheme in the reconstruction of complex nonlinear dynamic relationships.

  11. RM-SORN: a reward-modulated self-organizing recurrent neural network.

    Science.gov (United States)

    Aswolinskiy, Witali; Pipa, Gordon

    2015-01-01

    Neural plasticity plays an important role in learning and memory. Reward-modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of Intrinsic Plasticity (IP) and reward-modulated Spike-Timing-Dependent Plasticity (STDP). IP enables the network to explore possible output sequences, and STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules and rewarding strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning, and raise the question of whether reward-modulated self-organization can explain the amazing capabilities of the brain.
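
    A hypothetical minimal sketch of reward-modulated STDP (window shape, constants, and function names are illustrative, not the RM-SORN implementation): a spike-pair eligibility term is gated by a scalar reward:

```python
import math

def stdp_eligibility(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Exponential STDP window: potentiating when the presynaptic spike
    precedes the postsynaptic spike (dt > 0), depressing otherwise."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def weight_update(w, dt, reward, lr=0.1):
    """Reward-modulated update: dw = lr * reward * eligibility(dt)."""
    return w + lr * reward * stdp_eligibility(dt)

w_up = weight_update(0.5, dt=10.0, reward=1.0)    # causal pair + reward: w grows
w_down = weight_update(0.5, dt=10.0, reward=-1.0) # same pair, punishment: w shrinks
```

    The sign flip with reward is the key point: the same spike timing strengthens or weakens a synapse depending on the outcome it produced.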

  12. Mining e-cigarette adverse events in social media using Bi-LSTM recurrent neural network with word embedding representation.

    Science.gov (United States)

    Xie, Jiaheng; Liu, Xiao; Dajun Zeng, Daniel

    2018-01-01

    Recent years have seen increased worldwide popularity of e-cigarette use. However, the risks of e-cigarettes are underexamined. Most e-cigarette adverse event studies have achieved low detection rates due to limited subject sample sizes in the experiments and surveys. Social media provides a large data repository of consumers' e-cigarette feedback and experiences, which are useful for e-cigarette safety surveillance. However, it is difficult to automatically interpret the informal and nontechnical consumer vocabulary about e-cigarettes in social media. This issue hinders the use of social media content for e-cigarette safety surveillance. Recent developments in deep neural network methods have shown promise for named entity extraction from noisy text. Motivated by these observations, we aimed to design a deep neural network approach to extract e-cigarette safety information in social media. Our deep neural language model utilizes word embedding as the representation of text input and recognizes named entity types with the state-of-the-art Bidirectional Long Short-Term Memory (Bi-LSTM) Recurrent Neural Network. Our Bi-LSTM model achieved the best performance compared to 3 baseline models, with a precision of 94.10%, a recall of 91.80%, and an F-measure of 92.94%. We identified 1591 unique adverse events and 9930 unique e-cigarette components (ie, chemicals, flavors, and devices) from our research testbed. Although the conditional random field baseline model had slightly better precision than our approach, our Bi-LSTM model achieved much higher recall, resulting in the best F-measure. Our method can be generalized to extract medical concepts from social media for other medical applications. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
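
    Downstream of a Bi-LSTM tagger, entity extraction typically reduces to decoding BIO label sequences into spans. A small illustrative decoder (the entity label ADE and the example sentence are hypothetical, not taken from the paper's schema):

```python
def bio_spans(tokens, tags):
    """Decode BIO tags (e.g. the argmax output of a Bi-LSTM tagger) into
    (entity_type, text) spans."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(tok)
        else:  # "O", or an I- tag that does not continue the open span
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:
        spans.append((ctype, " ".join(current)))
    return spans

spans = bio_spans("vaping gave me a sore throat".split(),
                  ["O", "O", "O", "O", "B-ADE", "I-ADE"])
# spans == [("ADE", "sore throat")]
```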

  13. Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks.

    Science.gov (United States)

    Yang, Shuai; Yu, Juan; Hu, Cheng; Jiang, Haijun

    2018-08-01

    In this paper, without separating the complex-valued neural networks into two real-valued systems, the quasi-projective synchronization of fractional-order complex-valued neural networks is investigated. First, two new fractional-order inequalities are established by using the theory of complex functions, Laplace transform and Mittag-Leffler functions, which generalize traditional inequalities with the first-order derivative in the real domain. Additionally, different from hybrid control schemes given in the previous work concerning the projective synchronization, a simple and linear control strategy is designed in this paper and several criteria are derived to ensure quasi-projective synchronization of the complex-valued neural networks with fractional-order based on the established fractional-order inequalities and the theory of complex functions. Moreover, the error bounds of quasi-projective synchronization are estimated. Especially, some conditions are also presented for the Mittag-Leffler synchronization of the addressed neural networks. Finally, some numerical examples with simulations are provided to show the effectiveness of the derived theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Combination of Deep Recurrent Neural Networks and Conditional Random Fields for Extracting Adverse Drug Reactions from User Reviews.

    Science.gov (United States)

    Tutubalina, Elena; Nikolenko, Sergey

    2017-01-01

    Adverse drug reactions (ADRs) are an essential part of the analysis of drug use, measuring drug use benefits, and making policy decisions. Traditional channels for identifying ADRs are reliable but very slow and only produce a small amount of data. Text reviews, either on specialized web sites or in general-purpose social networks, may lead to a data source of unprecedented size, but identifying ADRs in free-form text is a challenging natural language processing problem. In this work, we propose a novel model for this problem, uniting recurrent neural architectures and conditional random fields. We evaluate our model with a comprehensive experimental study, showing improvements over state-of-the-art methods of ADR extraction.

  15. Combination of Deep Recurrent Neural Networks and Conditional Random Fields for Extracting Adverse Drug Reactions from User Reviews

    Directory of Open Access Journals (Sweden)

    Elena Tutubalina

    2017-01-01

    Full Text Available Adverse drug reactions (ADRs) are an essential part of the analysis of drug use, measuring drug use benefits, and making policy decisions. Traditional channels for identifying ADRs are reliable but very slow and only produce a small amount of data. Text reviews, either on specialized web sites or in general-purpose social networks, may lead to a data source of unprecedented size, but identifying ADRs in free-form text is a challenging natural language processing problem. In this work, we propose a novel model for this problem, uniting recurrent neural architectures and conditional random fields. We evaluate our model with a comprehensive experimental study, showing improvements over state-of-the-art methods of ADR extraction.

  16. On the Nature of the Intrinsic Connectivity of the Cat Motor Cortex: Evidence for a Recurrent Neural Network Topology

    DEFF Research Database (Denmark)

    Capaday, Charles; Ethier, C; Brizzi, L

    2009-01-01

    Capaday C, Ethier C, Brizzi L, Sik A, van Vreeswijk C, Gingras D. On the nature of the intrinsic connectivity of the cat motor cortex: evidence for a recurrent neural network topology. J Neurophysiol 102: 2131-2141, 2009. First published July 22, 2009; doi: 10.1152/jn.91319.2008. The details and functional significance of the intrinsic horizontal connections between neurons in the motor cortex (MCx) remain to be clarified. To further elucidate the nature of this intracortical connectivity pattern, experiments were done on the MCx of three cats. The anterograde tracer biocytin was ejected iontophoretically in layers II, III, and V. Some 30-50 neurons within a radius of ~250 μm were thus stained. The functional output of the motor cortical point at which biocytin was injected, and of the surrounding points, was identified by microstimulation and electromyographic recordings. The axonal ...

  17. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    Science.gov (United States)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition and handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. Not only do we explore low-level combination (feature space combination), but we also explore high-level combination (decoding combination) and mid-level combination (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.

  18. Auto-Associative Recurrent Neural Networks and Long Term Dependencies in Novelty Detection for Audio Surveillance Applications

    Science.gov (United States)

    Rossi, A.; Montefoschi, F.; Rizzo, A.; Diligenti, M.; Festucci, C.

    2017-10-01

    Machine learning applied to automatic audio surveillance has been attracting increasing attention in recent years. In spite of several investigations based on a large number of different approaches, little attention has been paid to the temporal evolution of the environmental input signal. In this work, we propose an exploration in this direction, comparing the temporal correlations extracted at the feature level with those learned by a representational structure. To this aim, we analysed the prediction performance of a Recurrent Neural Network architecture while varying the length of the processed input sequence and the size of the time window used in the feature extraction. The results corroborated the hypothesis that sequential models work better when dealing with data characterized by temporal order. However, the optimization of the temporal dimension so far remains an open issue.

  19. A Study of Recurrent and Convolutional Neural Networks in the Native Language Identification Task

    KAUST Repository

    Werfelmann, Robert

    2018-05-24

    Native Language Identification (NLI) is the task of predicting the native language of an author from their text written in a second language. The idea is to find writing habits that transfer from an author’s native language to their second language. Many approaches to this task have been studied, from simple word frequency analysis, to analyzing grammatical and spelling mistakes to find patterns and traits that are common between different authors of the same native language. This can be a very complex task, depending on the native language and the proficiency of the author’s second language. The most common approach that has seen very good results is based on the usage of n-gram features of words and characters. In this thesis, we attempt to extract lexical, grammatical, and semantic features from the sentences of non-native English essays using neural networks. The training and testing data was obtained from a large corpus of publicly available essays written by authors of several countries around the world. The neural network models consisted of Long Short-Term Memory and Convolutional networks using the sentences of each document as the input. Additional statistical features were generated from the text to complement the predictions of the neural networks, which were then used as feature inputs to a Support Vector Machine, making the final prediction. Results show that a Long Short-Term Memory neural network can improve performance over a naive bag-of-words approach, but with a much smaller feature set. With more fine-tuning of neural network hyperparameters, these results will likely improve significantly.
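
    The character n-gram features mentioned above can be sketched as a simple extractor (illustrative only, not the thesis code):

```python
from collections import Counter

def char_ngrams(text, n):
    """All overlapping character n-grams of a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# Bag-of-n-grams feature counts for a (hypothetical) essay fragment.
features = Counter(char_ngrams("the the", 3))
# features["the"] == 2
```

    In an NLI pipeline, such counts over a whole essay would form the sparse feature vector fed to a classifier such as an SVM.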

  20. Deep Recurrent Neural Networks for Product Attribute Extraction in eCommerce

    OpenAIRE

    Majumder, Bodhisattwa Prasad; Subramanian, Aditya; Krishnan, Abhinandan; Gandhi, Shreyansh; More, Ajinkya

    2018-01-01

    Extracting accurate attribute qualities from product titles is a vital component in providing eCommerce customers with a rewarding online shopping experience via enriched faceted search. We demonstrate the potential of deep recurrent networks in this domain, primarily models such as bidirectional LSTMs and bidirectional LSTM-CRFs with or without an attention mechanism. These have improved overall F1 scores, as compared to the previous benchmarks (More et al.), by at least 0.0391, showcasing...

  1. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences.

    Science.gov (United States)

    Quang, Daniel; Xie, Xiaohui

    2016-06-20

    Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
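
    Models like DanQ consume one-hot encoded sequence as input to the convolution layer; a minimal sketch of that encoding (the A/C/G/T column order and the all-zero handling of ambiguous bases are conventional assumptions, not taken from the DanQ source):

```python
def one_hot_dna(seq):
    """One-hot encode a DNA string, one 4-vector per base in A,C,G,T order;
    bases outside A/C/G/T (e.g. N) map to the all-zero vector."""
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return [table.get(base, [0, 0, 0, 0]) for base in seq.upper()]

x = one_hot_dna("ACGN")
# x == [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
```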

  2. The attractor recurrent neural network based on fuzzy functions: An effective model for the classification of lung abnormalities.

    Science.gov (United States)

    Khodabakhshi, Mohammad Bagher; Moradi, Mohammad Hassan

    2017-05-01

    The dynamics of the respiratory system are of high significance when it comes to the detection of lung abnormalities, which highlights the importance of presenting a reliable model for them. In this paper, we introduce a novel dynamic modelling method for the characterization of lung sounds (LS), based on the attractor recurrent neural network (ARNN). The ARNN structure allows the development of an effective LS model. Additionally, it has the capability to reproduce the distinctive features of the lung sounds using its formed attractors. Furthermore, a novel ARNN topology based on fuzzy functions (FFs-ARNN) is developed. Given the utility of recurrence quantification analysis (RQA) as a tool to assess the nature of complex systems, it was used to evaluate the performance of both the ARNN and the FFs-ARNN models. The experimental results demonstrate the effectiveness of the proposed approaches for multichannel LS analysis. In particular, a classification accuracy of 91% was achieved using FFs-ARNN with sequences of RQA features. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Bioelectric signal classification using a recurrent probabilistic neural network with time-series discriminant component analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shima, Keisuke; Shibanoki, Taro; Kurita, Yuichi; Tsuji, Toshio

    2013-01-01

    This paper outlines a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower-dimensional space using a set of orthogonal transformations, and the calculation of posterior probabilities based on a continuous-density hidden Markov model that incorporates a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into a neural network so that the parameters can be obtained appropriately as network coefficients according to a backpropagation-through-time-based training algorithm. The network is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. In the experiments conducted during the study, the validity of the proposed network was demonstrated for EEG signals.

  4. Stability switches, oscillatory multistability, and spatio-temporal patterns of nonlinear oscillations in recurrently delay coupled neural networks.

    Science.gov (United States)

    Song, Yongli; Makarov, Valeri A; Velarde, Manuel G

    2009-08-01

    A model of time-delay recurrently coupled, spatially segregated neural assemblies is proposed here. We show that it operates like some of the hierarchical architectures of the brain. Each assembly is a neural network with no delay in the local couplings between the units. The delay appears in the long-range feedforward and feedback inter-assembly communications. Bifurcation analysis of a simple four-unit system in the autonomous case shows the richness of the dynamical behaviors in a biophysically plausible parameter region. We find oscillatory multistability, hysteresis, and stability switches of the rest state provoked by the time delay. We then investigate the spatio-temporal patterns of bifurcating periodic solutions by using the symmetric local Hopf bifurcation theory of delay differential equations, and derive the equation describing the flow on the center manifold, which enables us to determine the direction of Hopf bifurcations and the stability of the bifurcating periodic orbits. We also discuss computational properties of the system due to the delay when an external drive of the network mimics external sensory input.

  5. Application of a Self-recurrent Wavelet Neural Network in the Modeling and Control of an AC Servo System

    Directory of Open Access Journals (Sweden)

    Run Min HOU

    2014-05-01

    Full Text Available To handle the nonlinearity, widespread load variations and time-varying characteristics of the high-power AC servo system, modeling and control techniques are studied here. A self-recurrent wavelet neural network (SRWNN) modeling scheme is proposed, which successfully addresses the issue of the traditional wavelet neural network easily falling into local optima, and significantly improves the network's approximation capability and convergence rate. A control scheme for the SRWNN based on fuzzy compensation is presented. Gradient information is provided in real time to the controller by a SRWNN identifier, so as to ensure that the learning and adjusting functions of the SRWNN controller operate well, and fuzzy compensation control is applied to improve the rapidity and accuracy of the entire system. The Lyapunov function is then utilized to judge the stability of the system. Experimental analysis and comparisons with other modeling and control methods clearly show that the proposed modeling and control schemes are effective.
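
    Wavelet neural networks such as the SRWNN use a wavelet as the hidden-unit activation; a common choice is a Morlet-type mother wavelet (shown here as an illustrative sketch — the abstract does not specify which wavelet the authors use):

```python
import math

def morlet(x):
    """Morlet-type mother wavelet, a typical hidden-unit activation in
    wavelet neural networks: a cosine oscillation under a Gaussian envelope."""
    return math.cos(1.75 * x) * math.exp(-x * x / 2.0)

y0 = morlet(0.0)  # 1.0 at the center; the response decays away from it
```

    In a (self-recurrent) wavelet network, each hidden unit applies such a function to a shifted and scaled input, with the shifts, scales, and output weights being the trainable parameters.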

  6. Use of Recurrent Neural Networks for Strategic Data Mining of Sales

    OpenAIRE

    Vadhavkar, Sanjeev; Shanmugasundaram, Jayavel; Gupta, Amar; Prasad, M.V. Nagendra

    2002-01-01

    An increasing number of organizations are involved in the development of strategic information systems for effective linkages with their suppliers, customers, and other channel partners involved in transportation, distribution, warehousing and maintenance activities. An efficient inter-organizational inventory management system based on data mining techniques is a significant step in this direction. This paper discusses the use of neural network based data mining and knowledge discovery techn...

  7. Deep Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction

    OpenAIRE

    Cui, Zhiyong; Ke, Ruimin; Wang, Yinhai

    2018-01-01

    Short-term traffic forecasting based on deep learning methods, especially long short-term memory (LSTM) neural networks, has received much attention in recent years. However, the potential of deep learning methods in traffic forecasting has not yet fully been exploited in terms of the depth of the model architecture, the spatial scale of the prediction area, and the predictive power of spatial-temporal data. In this paper, a deep stacked bidirectional and unidirectional LSTM (SBU-LSTM) neural ...

  8. C-RNN-GAN: Continuous recurrent neural networks with adversarial training

    OpenAIRE

    Mogren, Olof

    2016-01-01

    Generative adversarial networks have been proposed as a way of efficiently training deep generative neural networks. We propose a generative adversarial model that works on continuous sequential data, and apply it by training it on a collection of classical music. We conclude that it generates music that sounds better and better as the model is trained, report statistics on generated music, and let the reader judge the quality by downloading the generated songs.

  9. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model

    Science.gov (United States)

    Panda, Priyadarshini; Srinivasa, Narayan

    2018-01-01

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir-based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature actions/movements, enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962

  10. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
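
    For orientation, the standard Gibbs sampler that the abstract contrasts against can be sketched for binary units (toy weights; this is the baseline scheme, not the paper's non-reversible, spiking-compatible chain):

```python
import math
import random

def gibbs_sample(W, b, steps, seed=0):
    """Gibbs sampling over binary units s_i with
    p(s_i = 1 | rest) = sigmoid(b_i + sum_j W[i][j] * s_j)."""
    rng = random.Random(seed)
    n = len(b)
    s = [0] * n
    samples = []
    for _ in range(steps):
        for i in range(n):  # sweep: resample each unit from its conditional
            field = b[i] + sum(W[i][j] * s[j] for j in range(n) if j != i)
            p = 1.0 / (1.0 + math.exp(-field))
            s[i] = 1 if rng.random() < p else 0
        samples.append(tuple(s))
    return samples

# Two units with strong positive coupling spend most steps in agreeing states.
samples = gibbs_sample([[0.0, 4.0], [4.0, 0.0]], [0.0, 0.0], steps=2000)
```

    The paper's contribution is to show that suitably defined spiking dynamics can implement MCMC sampling of this kind despite being incompatible with the reversible Gibbs updates above.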

  11. Recurrent-neural-network-based Boolean factor analysis and its application to word clustering.

    Science.gov (United States)

    Frolov, Alexander A; Husek, Dusan; Polyakov, Pavel Yu

    2009-07-01

    The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov, 2007). It is shown that this extended algorithm supports even the more complex model of signals that are supposed to be related to textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for the following purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classification of signals, to determine whether they contain the factor. Since it is assumed that every word may possibly contribute to several topics, the proposed method might be related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words are at a good level of agreement despite the fact that identical topics in Russian and English conferences contain different sets of keywords.

  12. Predictions of SEP events by means of a linear filter and layer-recurrent neural network

    Czech Academy of Sciences Publication Activity Database

    Valach, F.; Revallo, M.; Hejda, Pavel; Bochníček, Josef

    2011-01-01

    Roč. 69, č. 9-10 (2011), s. 758-766 ISSN 0094-5765 R&D Projects: GA AV ČR(CZ) IAA300120608; GA MŠk OC09070 Grant - others:VEGA(SK) 2/0015/11; VEGA(SK) 2/0022/11 Institutional research plan: CEZ:AV0Z30120515 Keywords : coronal mass ejection * X-ray flare * solar energetic particles * artificial neural network Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.614, year: 2011

  13. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.

    Science.gov (United States)

    Gilra, Aditya; Gerstner, Wulfram

    2017-11-27

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
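The FOLLOW rule is local: each weight change is the product of presynaptic activity and the error signal projected onto the postsynaptic neuron. Stripped of the spiking machinery, a minimal rate-based sketch (with made-up gains and a linear readout, where the update reduces to a delta rule) looks like:

```python
import random

random.seed(0)

# Target linear mapping to be learned (stands in for the body dynamics).
w_true = [0.5, -0.3]

# Learned weights; error is fed back with a fixed gain k (made-up values).
w = [0.0, 0.0]
k, eta = 1.0, 0.05

for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]  # presynaptic activity
    y_target = sum(wt * xi for wt, xi in zip(w_true, x))
    y = sum(wi * xi for wi, xi in zip(w, x))
    err = k * (y_target - y)      # output error, fed back with negative gain on y
    # Local update: presynaptic activity times the projected error signal.
    w = [wi + eta * err * xi for wi, xi in zip(w, x)]

print([round(wi, 2) for wi in w])  # ≈ [0.5, -0.3]
```

In the full scheme the fed-back error also drives the recurrent network state toward the desired dynamics; only the weight-update locality is illustrated here.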

  14. Learning in fully recurrent neural networks by approaching tangent planes to constraint surfaces.

    Science.gov (United States)

    May, P; Zhou, E; Lee, C W

    2012-10-01

    In this paper we present a new variant of the online real time recurrent learning algorithm proposed by Williams and Zipser (1989). Whilst the original algorithm utilises gradient information to guide the search towards the minimum training error, it is very slow in most applications and often gets stuck in local minima of the search space. It is also sensitive to the choice of learning rate and requires careful tuning. The new variant adjusts weights by moving to the tangent planes to constraint surfaces. It is simple to implement and requires no parameters to be set manually. Experimental results show that this new algorithm gives significantly faster convergence whilst avoiding problems like local minima. Copyright © 2012 Elsevier Ltd. All rights reserved.
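The idea of moving to a tangent plane of a constraint surface can be seen for a single linear unit: each training pair (x, d) defines a hyperplane of weight vectors satisfying w·x = d, and the update below orthogonally projects the current weights onto that plane. This is a sketch of the geometric idea only, not the paper's full recurrent formulation; the data values are made up.

```python
def project_to_constraint(w, x, d):
    """Move w to the nearest point on the hyperplane {w : w.x = d}."""
    err = d - sum(wi * xi for wi, xi in zip(w, x))
    nrm2 = sum(xi * xi for xi in x)
    return [wi + err * xi / nrm2 for wi, xi in zip(w, x)]

w = [0.0, 0.0]
x, d = [3.0, 4.0], 10.0           # one training constraint (made up)
w = project_to_constraint(w, x, d)
print(w)                          # ≈ [1.2, 1.6], which satisfies w.x = d
```

Note that the step size falls out of the projection itself, which is why no learning rate needs to be set manually.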

  15. Slowly evolving connectivity in recurrent neural networks: I. The extreme dilution regime

    International Nuclear Information System (INIS)

    Wemmenhove, B; Skantzos, N S; Coolen, A C C

    2004-01-01

    We study extremely diluted spin models of neural networks in which the connectivity evolves in time, although adiabatically slowly compared to the neurons, according to stochastic equations which on average aim to reduce frustration. The (fast) neurons and (slow) connectivity variables equilibrate separately, but at different temperatures. Our model is exactly solvable in equilibrium. We obtain phase diagrams upon making the condensed ansatz (i.e. recall of one pattern). These show that, as the connectivity temperature is lowered, the volume of the retrieval phase diverges and the fraction of mis-aligned spins is reduced. Still one always retains a region in the retrieval phase where recall states other than the one corresponding to the 'condensed' pattern are locally stable, so the associative memory character of our model is preserved

  16. Schema generation in recurrent neural nets for intercepting a moving target.

    Science.gov (United States)

    Fleischer, Andreas G

    2010-06-01

    The grasping of a moving object requires the development of a motor strategy to anticipate the trajectory of the target and to compute an optimal course of interception. During the performance of perception-action cycles, a preprogrammed prototypical movement trajectory, a motor schema, may highly reduce the control load. Subjects were asked to hit a target that was moving along a circular path by means of a cursor. Randomized initial target positions and velocities were detected in the periphery of the eyes, resulting in a saccade toward the target. Even when the target disappeared, the eyes followed the target's anticipated course. The Gestalt of the trajectories was dependent on target velocity. The prediction capability of the motor schema was investigated by varying the visibility range of cursor and target. Motor schemata were determined to be of limited precision, and therefore visual feedback was continuously required to intercept the moving target. To intercept a target, the motor schema caused the hand to aim ahead and to adapt to the target trajectory. The control of cursor velocity determined the point of interception. From a modeling point of view, a neural network was developed that allowed the implementation of a motor schema interacting with feedback control in an iterative manner. The neural net of the Wilson type consists of an excitation-diffusion layer allowing the generation of a moving bubble. This activation bubble runs down an eye-centered motor schema and causes a planar arm model to move toward the target. A bubble provides local integration and straightening of the trajectory during repetitive moves. The schema adapts to task demands by learning and serves as forward controller. On the basis of these model considerations the principal problem of embedding motor schemata in generalized control strategies is discussed.

  17. Classification of epileptic seizures using wavelet packet log energy and norm entropies with recurrent Elman neural network classifier.

    Science.gov (United States)

    Raghu, S; Sriraam, N; Kumar, G Pradeep

    2017-02-01

    The electroencephalogram (EEG) is considered fundamental for assessing neural activity in the brain. In the cognitive neuroscience domain, EEG-based assessment is found to be superior due to its non-invasive ability to probe deep brain structures while exhibiting good spatial resolution. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information required by the neurologist. The proposed study makes use of wavelet packet based log energy and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions, namely normal, pre-ictal, and epileptic EEG recordings, were considered. An adaptive Wiener filter was initially applied to remove the 50 Hz power line noise from the raw EEG recordings. Raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. Then a five-level wavelet packet decomposition using the Haar wavelet was introduced, and two entropies, log energy and norm, were estimated and applied to the REN classifier to perform binary classification. The non-parametric Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. It was found from the simulation results that the wavelet packet log energy entropy with the REN classifier yielded classification accuracies of 99.70 % for normal vs. pre-ictal, 99.70 % for normal vs. epileptic, and 99.85 % for pre-ictal vs. epileptic.
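The two features can be sketched with the usual definitions, log energy entropy E = sum_i log(s_i^2) and norm entropy E = sum_i |s_i|^p with 1 <= p < 2, applied to wavelet packet coefficients. The decomposition step itself (Haar, five levels) is omitted here and the coefficient values are made up.

```python
import math

def log_energy_entropy(coeffs, eps=1e-12):
    # E = sum_i log(s_i^2); eps guards against log(0)
    return sum(math.log(c * c + eps) for c in coeffs)

def norm_entropy(coeffs, p=1.5):
    # E = sum_i |s_i|^p, with 1 <= p < 2
    return sum(abs(c) ** p for c in coeffs)

coeffs = [0.5, -1.0, 0.25, 2.0]   # made-up wavelet packet coefficients
print(round(log_energy_entropy(coeffs), 3))   # ≈ -2.773
print(round(norm_entropy(coeffs), 3))         # ≈ 4.307
```

In the study, such scalar entropies computed per subband form the feature vector fed to the REN classifier.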

  18. Faulty node detection in wireless sensor networks using a recurrent neural network

    Science.gov (United States)

    Atiga, Jamila; Mbarki, Nour Elhouda; Ejbali, Ridha; Zaied, Mourad

    2018-04-01

    Wireless sensor networks (WSNs) consist of sets of sensors increasingly used in large-scale surveillance applications in areas such as the military, the environment, and health care. Despite miniaturization and reduced manufacturing costs, the sensors often operate in places that are difficult to access, without the possibility of recharging their batteries, and they generally have limited resources in terms of transmission power, processing capacity, data storage, and energy. They may also be deployed in hostile environments, for example on a battlefield or in the presence of fires, floods, or earthquakes. In such environments sensors can fail even under normal operation, and undetected sensor faults can degrade the quality of surveillance. It is therefore necessary to develop fault-tolerance and node-fault-detection algorithms for wireless sensor networks. The values measured by the sensors are used to estimate the state of the monitored area. We used the Nonlinear AutoRegressive with eXogenous inputs (NARX) recurrent neural network architecture to predict the state of a sensor node from previous values described as time series. The experimental results verified that state prediction is enhanced by our proposed model.
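A NARX predictor estimates the next value of a series from lagged outputs and lagged exogenous inputs, y(t) = f(y(t-1), ..., u(t-1), ...). A toy sketch with a hypothetical, already-fitted linear f (in the paper f is a trained neural network):

```python
def narx_predict(f, y_hist, u_hist):
    """One-step NARX prediction: y(t) = f(past outputs, past inputs)."""
    return f(y_hist, u_hist)

# Toy node model (stands in for the trained network): the next reading
# depends on the previous reading and the previous exogenous input.
f = lambda y, u: 0.5 * y[-1] + 0.2 * u[-1]

y, u = [1.0], [0.5, 0.5, 0.5]
for t in range(1, 3):
    y.append(narx_predict(f, y, u[:t]))
print(y)  # ≈ [1.0, 0.6, 0.4]
```

A node whose measurements drift far from such predictions can then be flagged as faulty.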

  19. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  20. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    Science.gov (United States)

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedman, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  1. Recurrent-Neural-Network-Based Multivariable Adaptive Control for a Class of Nonlinear Dynamic Systems With Time-Varying Delay.

    Science.gov (United States)

    Hwang, Chih-Lyang; Jan, Chau

    2016-02-01

    At the beginning, an approximate nonlinear autoregressive moving average (NARMA) model is employed to represent a class of multivariable nonlinear dynamic systems with time-varying delay. It is known that the disadvantages of robust control for the NARMA model are as follows: 1) suitable control parameters for larger time delay are more sensitive to achieving desirable performance; 2) it only deals with bounded uncertainty; and 3) the nominal NARMA model must be learned in advance. Due to the dynamic feature of the NARMA model, a recurrent neural network (RNN) is applied online to learn it. However, the system performance deteriorates due to the poor learning of larger variations of the system vector functions. In this situation, a simple network is employed to compensate for the upper bound of the residue caused by the linear parameterization of the approximation error of the RNN. An e-modification learning law with a projection for the weight matrix is applied to guarantee its boundedness without persistent excitation. Under suitable conditions, semiglobally ultimately bounded tracking with boundedness of the estimated weight matrix is obtained by the proposed RNN-based multivariable adaptive control. Finally, simulations are presented to verify the effectiveness and robustness of the proposed control.

  2. An intelligent nuclear reactor core controller for load following operations, using recurrent neural networks and fuzzy systems

    International Nuclear Information System (INIS)

    Boroushaki, M.; Ghofrani, M.B.; Lucas, C.; Yazdanpanah, M.J.

    2003-01-01

    In the last decade, the intelligent control community has paid great attention to intelligent control systems for nuclear plants (core, steam generator, etc.). Papers have mostly used approximate and simple mathematical SISO (single-input, single-output) models of nuclear plants for testing and/or tuning the control systems, and have then tried to generalize these models to real MIMO (multi-input, multi-output) plants. Nuclear plants, however, are typically of a complex, nonlinear, and multivariable nature with strong interactions between their state variables, and therefore many of the proposed intelligent control systems are not appropriate for real cases. In this paper, we designed an online intelligent core controller for load following operations, based on a heuristic control algorithm, using a valid and updatable recurrent neural network (RNN). We used an accurate 3-dimensional core calculation code to represent the real plant and to train the RNN. The simulation results show that this intelligent controller can control the reactor core during load following operations, using an optimum control rod group manoeuvre and a variable overlapping strategy. This methodology represents a simple and reliable procedure for controlling other complex nonlinear MIMO plants, and may improve the responses compared with other control systems.

  3. Long Short-Term Memory Projection Recurrent Neural Network Architectures for Piano’s Continuous Note Recognition

    Directory of Open Access Journals (Sweden)

    YuKang Jia

    2017-01-01

    Full Text Available Long Short-Term Memory (LSTM) is a kind of Recurrent Neural Network (RNN) suited to time series, which has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM that further optimizes the speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper we combine them with Connectionist Temporal Classification (CTC) to study continuous note recognition for piano-playing robotics. Based on the Beijing Forestry University music library, we conduct experiments to show the recognition rates and numbers of iterations of LSTM with a single layer, LSTMP with a single layer, and Deep LSTM (DLSTM, LSTM with multiple layers). As a result, the single-layer LSTMP proves to perform much better than the single-layer LSTM in both time and recognition rate: LSTMP has fewer parameters, which reduces the training time, and, benefiting from the projection layer, it achieves a better recognition rate as well. The best recognition rate of LSTMP is 99.8%. As for DLSTM, the recognition rate can reach 100% because of the effectiveness of the deep structure, but compared with the single-layer LSTMP, DLSTM needs more training time.
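The parameter saving from the projection layer is easy to see by counting weights. Assuming the standard formulas (four gates, each with input, recurrent, and bias weights; in LSTMP the recurrence runs through a projection of size r), a hypothetical configuration gives:

```python
def lstm_params(n_in, n_cell):
    # 4 gates, each with input, recurrent, and bias weights
    return 4 * n_cell * (n_in + n_cell + 1)

def lstmp_params(n_in, n_cell, n_proj):
    # recurrence runs through the projection layer of size n_proj
    return 4 * n_cell * (n_in + n_proj + 1) + n_cell * n_proj

full = lstm_params(128, 512)
projected = lstmp_params(128, 512, 128)
print(full, projected)
print(projected < full)   # the projection layer cuts the parameter count
```

The 128/512/128 sizes are made up for illustration; the gap widens as the cell count grows, since the recurrent term is quadratic in the cell count for plain LSTM.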

  4. Word embeddings and recurrent neural networks based on Long-Short Term Memory nodes in supervised biomedical word sense disambiguation.

    Science.gov (United States)

    Jimeno Yepes, Antonio

    2017-09-01

    Word sense disambiguation helps identify the proper sense of ambiguous words in text. With large terminologies such as the UMLS Metathesaurus, ambiguities appear and highly effective disambiguation methods are required. Supervised learning algorithms are one of the approaches used to perform disambiguation. Features extracted from the context of an ambiguous word are used to identify its proper sense. The type of features has an impact on machine learning methods and thus affects disambiguation performance. In this work, we have evaluated several types of features derived from the context of the ambiguous word, and we have also explored more global features derived from MEDLINE using word embeddings. Results show that word embeddings improve the performance of more traditional features and also allow the use of recurrent neural network classifiers based on Long Short-Term Memory (LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets a new state-of-the-art performance with a macro accuracy of 95.97 on the MSH WSD data set. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Recurrent-neural-network-based identification of a cascade hydraulic actuator for closed-loop automotive power transmission control

    International Nuclear Information System (INIS)

    You, Seung Han; Hahn, Jin Oh

    2012-01-01

    By virtue of their ease of operation compared with conventional manual transmissions, automatic transmissions are commonly used as the automotive power transmission control system in today's passenger cars. In accordance with this trend, research efforts on closed-loop automatic transmission control have been extensively carried out to improve ride quality and fuel economy. State-of-the-art power transmission control algorithms may have limitations in performance because they rely on the steady-state characteristics of the hydraulic actuator rather than fully exploiting its dynamic characteristics. Since the ultimate viability of closed-loop power transmission control is dominated by precise pressure control at the level of the hydraulic actuator, closed-loop control can potentially attain superior efficacy if the hydraulic actuator can be easily incorporated into model-based observer/controller design. In this paper, we propose to use a recurrent neural network (RNN) to establish a nonlinear empirical model of a cascade hydraulic actuator in a passenger car automatic transmission, which has the potential to be easily incorporated in designing observers and controllers. Experimental analysis is performed to grasp key system characteristics, based on which a nonlinear system identification procedure is carried out. Extensive experimental validation of the established model suggests that it has superb one-step-ahead prediction capability over the appropriate frequency range, making it an attractive approach for model-based observer/controller design applications in automotive systems.

  6. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate joint-angle drift and prevent the occurrence of high joint velocities, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and repetitive motion), is proposed and investigated for coordinated path tracking by dual robot manipulators. Specifically, to realize the coordinated path tracking, two subschemes are first presented for the left and right robot manipulators. After that, these two subschemes are reformulated as two general quadratic programs (QPs), which can be combined into one unified QP. A recurrent neural network (RNN) is then presented to solve the unified QP problem effectively. Finally, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.
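An RNN of the kind used here behaves as a dynamical system whose state flows toward the QP optimum. A minimal sketch of that idea on a tiny made-up QP, min 0.5 x'Qx + c'x, using projected gradient flow with box constraints standing in for joint limits (the paper's unified QP and dual network are more elaborate):

```python
# Minimal gradient-flow "neural network" for min 0.5 x'Qx + c'x
Q = [[2.0, 0.0], [0.0, 2.0]]
c = [-2.0, -4.0]

def clip(v, lo=-5.0, hi=5.0):    # simple box projection (joint limits)
    return max(lo, min(hi, v))

x, dt = [0.0, 0.0], 0.05
for _ in range(500):
    grad = [sum(Q[i][j] * x[j] for j in range(2)) + c[i] for i in range(2)]
    x = [clip(xi - dt * g) for xi, g in zip(x, grad)]
print([round(v, 3) for v in x])  # ≈ [1.0, 2.0], the QP minimizer
```

In hardware, such dynamics can be integrated in real time, which is why RNN solvers suit online redundancy resolution.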

  7. A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Song, Qiankun; Wang, Zidong

    2007-01-01

    In this Letter, the analysis problem for the existence and stability of periodic solutions is investigated for a class of general discrete-time recurrent neural networks with time-varying delays. For the neural networks under study, a generalized activation function is considered, and the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. By employing the latest free-weighting matrix method, an appropriate Lyapunov-Krasovskii functional is constructed and several sufficient conditions are established to ensure the existence, uniqueness, and globally exponential stability of the periodic solution for the addressed neural network. The conditions are dependent on both the lower bound and upper bound of the time-varying time delays. Furthermore, the conditions are expressed in terms of the linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Two simulation examples are given to show the effectiveness and less conservatism of the proposed criteria

  8. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    Science.gov (United States)

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
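A generic Hebbian rule with passive forgetting balances synaptic decay against coactivation, dw = eps * (-lam * w + pre * post). A scalar sketch with made-up constants shows the weight settling where forgetting cancels potentiation:

```python
# Hebbian rule with passive forgetting (single-weight sketch):
#   dw = eps * (-lam * w + pre * post)
eps, lam = 0.1, 0.5
w = 0.0
pre, post = 1.0, 0.8    # steady coactivation of the two neurons
for _ in range(200):
    w += eps * (-lam * w + pre * post)
print(round(w, 3))      # settles at pre * post / lam
```

The fixed point pre*post/lam illustrates how the forgetting rate bounds synaptic weights, one ingredient in the drive from chaos toward a steady state analyzed in the paper.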

  9. Use of a Deep Recurrent Neural Network to Reduce Wind Noise: Effects on Judged Speech Intelligibility and Sound Quality

    Science.gov (United States)

    Keshavarzi, Mahmoud; Goehring, Tobias; Zakis, Justin; Turner, Richard E.; Moore, Brian C. J.

    2018-01-01

    Despite great advances in hearing-aid technology, users still experience problems with noise in windy environments. The potential benefits of using a deep recurrent neural network (RNN) for reducing wind noise were assessed. The RNN was trained using recordings of the output of the two microphones of a behind-the-ear hearing aid in response to male and female speech at various azimuths in the presence of noise produced by wind from various azimuths with a velocity of 3 m/s, using the “clean” speech as a reference. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective intelligibility and for sound quality or comfort. The conditions were unprocessed noisy speech, noisy speech processed using the RNN, and noisy speech that was high-pass filtered (which also reduced wind noise). Eighteen native English-speaking participants were tested, nine with normal hearing and nine with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. Processing using the RNN was significantly preferred over no processing by both subject groups for both subjective intelligibility and sound quality, although the magnitude of the preferences was small. High-pass filtering (HPF) was not significantly preferred over no processing. Although RNN was significantly preferred over HPF only for sound quality for the hearing-impaired participants, for the results as a whole, there was a preference for RNN over HPF. Overall, the results suggest that reduction of wind noise using an RNN is possible and might have beneficial effects when used in hearing aids. PMID:29708061


  11. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm have been used to determine the depth of anesthesia during the maintenance stage and to estimate the amount of anesthetic medicine to be applied at that moment. Feedforward neural networks are also used for comparison. The conjugate gradient algorithm is compared with back propagation (BP) for training the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data have been recorded with a Nihon Kohden 9200 brand 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in assembling the recording electrodes. EEG data have been sampled once every 2 milliseconds. The artificial neural network has been designed so as to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The network inputs are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the current EEG segment in that range to the total PSD power of an EEG segment taken prior to the anesthesia.
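An Elman network feeds the previous hidden activations back through context units at each step. A minimal forward-pass sketch with toy weights (the study's 60-30-1 architecture and EEG-derived inputs are replaced here by a made-up 2-2-1 network):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def elman_step(x, context, W_in, W_ctx, W_out):
    # hidden = sigmoid(W_in @ x + W_ctx @ context); context := hidden
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row_in, x)) +
                      sum(wc * ci for wc, ci in zip(row_ctx, context)))
              for row_in, row_ctx in zip(W_in, W_ctx)]
    out = sum(wo * h for wo, h in zip(W_out, hidden))
    return out, hidden

# Toy weights: 2 inputs, 2 hidden/context units, 1 linear output.
W_in = [[0.5, -0.2], [0.1, 0.4]]
W_ctx = [[0.3, 0.0], [0.0, 0.3]]
W_out = [1.0, -1.0]

context = [0.0, 0.0]
for x in ([1.0, 0.0], [0.0, 1.0]):
    y, context = elman_step(x, context, W_in, W_ctx, W_out)
print(round(y, 3))   # second output depends on the first input via context
```

The context loop is what lets the classifier use the recent history of PSD features rather than a single snapshot.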

  12. An Improved Recurrent Neural Network for Complex-Valued Systems of Linear Equation and Its Application to Robotic Motion Tracking.

    Science.gov (United States)

    Ding, Lei; Xiao, Lin; Liao, Bolin; Lu, Rongbo; Peng, Hua

    2017-01-01

    To obtain the online solution of complex-valued systems of linear equations in the complex domain with higher precision and a higher convergence rate, a new neural network based on the Zhang neural network (ZNN) is investigated in this paper. First, this new neural network for complex-valued systems of linear equations in the complex domain is proposed and theoretically proved to be convergent within finite time. Then, the illustrative results show that the new neural network model has higher precision and a higher convergence rate, as compared with the gradient neural network (GNN) model and the ZNN model. Finally, the application of the proposed method to controlling a robot through complex-valued systems of linear equations is realized, and the simulation results verify the effectiveness and superiority of the new neural network for complex-valued systems of linear equations.
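For intuition, the baseline gradient neural network (GNN) that the paper compares against can be discretized as x <- x - gamma*dt * A^H (A x - b), which works directly with complex values. A small made-up example follows; the paper's improved network adds a finite-time activation function not shown here.

```python
# GNN-style iteration for the complex linear system A x = b.
A = [[2 + 1j, 0j], [0j, 1 - 1j]]
b = [3 + 1j, 2 + 0j]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def herm(M):  # conjugate transpose A^H
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M))]

x, gamma, dt = [0j, 0j], 1.0, 0.1
for _ in range(500):
    err = [ri - bi for ri, bi in zip(matvec(A, x), b)]
    grad = matvec(herm(A), err)
    x = [xi - dt * gamma * gi for xi, gi in zip(x, grad)]
print([complex(round(v.real, 3), round(v.imag, 3)) for v in x])  # ≈ [(1.4-0.2j), (1+1j)]
```

The linear activation gives only exponential convergence; replacing it with a sign-power activation is what yields the finite-time guarantee claimed for the ZNN-type designs.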

  13. A Discrete-Time Recurrent Neural Network for Solving Rank-Deficient Matrix Equations With an Application to Output Regulation of Linear Systems.

    Science.gov (United States)

    Liu, Tao; Huang, Jie

    2017-04-17

    This paper presents a discrete-time recurrent neural network approach to solving systems of linear equations with two features. First, the system of linear equations may not have a unique solution. Second, the system matrix is not known precisely, but a sequence of matrices that converges to the unknown system matrix exponentially is known. The problem is motivated from solving the output regulation problem for linear systems. Thus, an application of our main result leads to an online solution to the output regulation problem for linear systems.
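The setting can be sketched with a gradient-descent iteration that only ever sees the current estimate A_k of the unknown system matrix; with A_k converging exponentially and the system rank-deficient, the iterate still settles on one of the infinitely many solutions. This is a toy sketch of the problem setup, not the paper's specific network.

```python
# Sketch: solve A x = b where only a sequence A_k -> A is available.
A_true = [[1.0, 1.0], [2.0, 2.0]]   # rank-deficient system matrix
b = [2.0, 4.0]                      # consistent: any x1 + x2 = 2 solves it

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

x, alpha = [0.0, 0.0], 0.05
for k in range(400):
    # A_k converges exponentially to the unknown system matrix
    A_k = [[a + 0.5 ** k for a in row] for row in A_true]
    err = [ri - bi for ri, bi in zip(matvec(A_k, x), b)]
    x = [xi - alpha * gi for xi, gi in zip(x, matvec(transpose(A_k), err))]
print([round(v, 3) for v in x])  # ≈ [1.0, 1.0], one of the solutions
```

Starting from zero, the gradient iteration picks out the minimum-norm solution, which is one natural selection among the non-unique solutions the paper must handle.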

  14. Recurrent networks for wave forecasting

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper presents an application of the Artificial Neural Network, namely Backpropagation Recurrent Neural Network (BRNN) with rprop update algorithm for wave forecasting...

  15. A Pilot Feasibility Study of Oral 5-Fluorocytosine and Genetically-Modified Neural Stem Cells Expressing E.Coli Cytosine Deaminase for Treatment of Recurrent High Grade Gliomas

    Science.gov (United States)

    2017-11-07

    Adult Anaplastic Astrocytoma; Recurrent Grade III Glioma; Recurrent Grade IV Glioma; Adult Anaplastic Oligodendroglioma; Adult Brain Tumor; Adult Giant Cell Glioblastoma; Adult Glioblastoma; Adult Gliosarcoma; Adult Mixed Glioma; Recurrent Adult Brain Tumor; Adult Anaplastic Oligoastrocytoma; Recurrent High Grade Glioma

  16. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    Science.gov (United States)

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from the position at t to t + 1 by a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to simple motions of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.
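    The embedded-attractor idea can be illustrated with a toy Hopfield-style network, an assumption-laden stand-in for the paper's model (the chaotic regime and the maze itself are omitted): prototype firing patterns code movement directions, and a motion function reads the current state.

```python
import numpy as np

# Toy stand-in for the embedded-attractor scheme: four prototype firing
# patterns (one per movement direction) stored Hebbian-style; the motion
# function maps the network state to the move of the best-matching prototype.
rng = np.random.default_rng(0)
N = 64
prototypes = rng.choice([-1, 1], size=(4, N))
W = (prototypes.T @ prototypes) / N
np.fill_diagonal(W, 0.0)
moves = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}  # illustrative codes

def motion(state):
    overlaps = prototypes @ state / N
    return moves[int(np.argmax(overlaps))]

# A noisy version of prototype 2 relaxes toward its attractor and therefore
# yields prototype 2's move.
state = prototypes[2].astype(float).copy()
state[:10] *= -1                      # corrupt 10 of 64 units
for _ in range(5):
    state = np.sign(W @ state)
print(motion(state))
```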

  17. Single Layer Recurrent Neural Network for detection of swarm-like earthquakes in W-Bohemia/Vogtland - the method

    Czech Academy of Sciences Publication Activity Database

    Doubravová, Jana; Wiszniowski, J.; Horálek, Josef

    2016-01-01

    Roč. 93, August (2016), s. 138-149 ISSN 0098-3004 R&D Projects: GA ČR GAP210/12/2336; GA MŠk LM2010008 Institutional support: RVO:67985530 Keywords: event detection * artificial neural network * West Bohemia/Vogtland Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 2.533, year: 2016

  18. Integrated built-in-test false and missed alarms reduction based on forward infinite impulse response & recurrent finite impulse response dynamic neural networks

    Science.gov (United States)

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2017-11-01

    Built-in tests (BITs) are widely used in mechanical systems to perform state identification, but BIT false and missed alarms make it difficult for operators to reach correct judgments. Artificial neural networks (ANNs), with features such as self-organization and self-learning, have previously been used to identify false and missed alarms. However, these ANN models generally do not incorporate the temporal effect of the bottom-level threshold-comparison outputs, and historical temporal features are not fully considered. To improve the situation, this paper proposes a new integrated BIT design methodology by incorporating a novel type of dynamic neural network (DNN) model. The new DNN model is termed the Forward IIR & Recurrent FIR DNN (FIRF-DNN); its component neurons, network structures, and input/output relationships are discussed. The implementation scheme for condition-monitoring false and missed alarms reduction based on the FIRF-DNN model is also illustrated, comprising three stages: model training, false and missed alarms detection, and false and missed alarms suppression. Finally, the proposed methodology is demonstrated in an application study and the experimental results are analyzed.
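    The two neuron flavors named in the title differ in where the delay taps sit. A minimal illustration (coefficients are arbitrary; this is not the FIRF-DNN itself):

```python
import numpy as np

# FIR neuron: output depends on a finite tapped delay line of inputs.
def fir_neuron(x, b):
    # y[t] = sum_k b[k] * x[t-k]
    return np.convolve(x, b, mode="full")[: len(x)]

# IIR neuron: output also feeds back its own past values, so an impulse
# produces an infinitely long (here geometric) response.
def iir_neuron(x, b, a):
    y = np.zeros(len(x))
    for t in range(len(x)):
        acc = sum(bk * x[t - k] for k, bk in enumerate(b) if t - k >= 0)
        acc += sum(am * y[t - m - 1] for m, am in enumerate(a) if t - m - 1 >= 0)
        y[t] = acc
    return y

impulse = np.array([1.0, 0.0, 0.0, 0.0])
print(fir_neuron(impulse, [0.5, 0.3]))    # 0.5, 0.3, 0, 0 (finite response)
print(iir_neuron(impulse, [1.0], [0.5]))  # 1, 0.5, 0.25, 0.125 (decaying tail)
```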

  19. Recurrent myocardial infarction: Mechanisms of free-floating adaptation and autonomic derangement in networked cardiac neural control

    Science.gov (United States)

    Ardell, Jeffrey L.; Shivkumar, Kalyanam; Armour, J. Andrew

    2017-01-01

    The cardiac nervous system continuously controls cardiac function whether or not pathology is present. While myocardial infarction typically has a major and catastrophic impact, population studies have shown that longer-term risk for recurrent myocardial infarction and the related potential for sudden cardiac death depends mainly upon standard atherosclerotic variables and autonomic nervous system maladaptations. Investigative neurocardiology has demonstrated that autonomic control of cardiac function includes local circuit neurons for networked control within the peripheral nervous system. The structural and adaptive characteristics of such networked interactions define the dynamics and a new normal for cardiac control that results in the aftermath of recurrent myocardial infarction and/or unstable angina that may or may not precipitate autonomic derangement. These features are explored here via a mathematical model of cardiac regulation. A main observation is that the control environment during pathology is an extrapolation to a setting outside prior experience. Although global bounds guarantee stability, the resulting closed-loop dynamics exhibited while the network adapts during pathology are aptly described as 'free-floating' in order to emphasize their dependence upon details of the network structure. The totality of the results provides a mechanistic reasoning that validates the clinical practice of reducing sympathetic efferent neuronal tone while aggressively targeting autonomic derangement in the treatment of ischemic heart disease. PMID:28692680

  20. A visual sense of number emerges from the dynamics of a recurrent on-center off-surround neural network.

    Science.gov (United States)

    Sengupta, Rakesh; Surampudi, Bapi Raju; Melcher, David

    2014-09-25

    It has been proposed that the ability of humans to quickly perceive numerosity involves a visual sense of number. Different paradigms of enumeration and numerosity comparison have produced a gamut of behavioral and neuroimaging data, but there has been no unified conceptual framework that can explain results across the entire range of numerosity. The current work tries to address the ongoing debate concerning whether the same mechanism operates for enumeration of small and large numbers, through a computational approach. We describe the workings of a single-layered, fully connected network characterized by self-excitation and recurrent inhibition that operates at both subitizing and estimation ranges. We show that such a network can account for classic numerical cognition effects (the distance effect, Fechner's law, the Weber fraction for numerosity comparison) through the network's steady-state activation response across different recurrent inhibition values. The model also accounts for fMRI data previously reported for different enumeration-related tasks, and it allows us to generate an estimate of the pattern of reaction times in enumeration tasks. Overall, these findings suggest that a single network architecture can account for both small and large number processing. Copyright © 2014. Published by Elsevier B.V.
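    A minimal numerical sketch of the architecture described above (self-excitation plus uniform recurrent inhibition); the parameter values are assumptions for illustration, not those fitted in the paper:

```python
import numpy as np

# One fully connected layer: each active unit excites itself (w_e) and
# inhibits every other unit (w_i). Steady-state total activity grows
# compressively with the number of presented items.
def steady_state_sum(n_items, w_e=0.5, w_i=0.3, dt=0.01, steps=2000):
    a = np.zeros(n_items)
    inp = np.ones(n_items)              # one input-driven unit per item
    for _ in range(steps):
        net = w_e * a - w_i * (a.sum() - a) + inp
        a += dt * (-a + np.maximum(net, 0.0))   # leaky rectified units
    return a.sum()

for n in (2, 4, 8):
    print(n, steady_state_sum(n))
```

    For these parameters the symmetric fixed point is a = 1 / (1 - w_e + w_i (n - 1)) per unit, so total activity rises with n but with shrinking increments, a Fechner-like compression.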

  1. Recurrent myocardial infarction: Mechanisms of free-floating adaptation and autonomic derangement in networked cardiac neural control.

    Science.gov (United States)

    Kember, Guy; Ardell, Jeffrey L; Shivkumar, Kalyanam; Armour, J Andrew

    2017-01-01

    The cardiac nervous system continuously controls cardiac function whether or not pathology is present. While myocardial infarction typically has a major and catastrophic impact, population studies have shown that longer-term risk for recurrent myocardial infarction and the related potential for sudden cardiac death depends mainly upon standard atherosclerotic variables and autonomic nervous system maladaptations. Investigative neurocardiology has demonstrated that autonomic control of cardiac function includes local circuit neurons for networked control within the peripheral nervous system. The structural and adaptive characteristics of such networked interactions define the dynamics and a new normal for cardiac control that results in the aftermath of recurrent myocardial infarction and/or unstable angina that may or may not precipitate autonomic derangement. These features are explored here via a mathematical model of cardiac regulation. A main observation is that the control environment during pathology is an extrapolation to a setting outside prior experience. Although global bounds guarantee stability, the resulting closed-loop dynamics exhibited while the network adapts during pathology are aptly described as 'free-floating' in order to emphasize their dependence upon details of the network structure. The totality of the results provides a mechanistic reasoning that validates the clinical practice of reducing sympathetic efferent neuronal tone while aggressively targeting autonomic derangement in the treatment of ischemic heart disease.

  2. Recurrent myocardial infarction: Mechanisms of free-floating adaptation and autonomic derangement in networked cardiac neural control.

    Directory of Open Access Journals (Sweden)

    Guy Kember

    Full Text Available The cardiac nervous system continuously controls cardiac function whether or not pathology is present. While myocardial infarction typically has a major and catastrophic impact, population studies have shown that longer-term risk for recurrent myocardial infarction and the related potential for sudden cardiac death depends mainly upon standard atherosclerotic variables and autonomic nervous system maladaptations. Investigative neurocardiology has demonstrated that autonomic control of cardiac function includes local circuit neurons for networked control within the peripheral nervous system. The structural and adaptive characteristics of such networked interactions define the dynamics and a new normal for cardiac control that results in the aftermath of recurrent myocardial infarction and/or unstable angina that may or may not precipitate autonomic derangement. These features are explored here via a mathematical model of cardiac regulation. A main observation is that the control environment during pathology is an extrapolation to a setting outside prior experience. Although global bounds guarantee stability, the resulting closed-loop dynamics exhibited while the network adapts during pathology are aptly described as 'free-floating' in order to emphasize their dependence upon details of the network structure. The totality of the results provides a mechanistic reasoning that validates the clinical practice of reducing sympathetic efferent neuronal tone while aggressively targeting autonomic derangement in the treatment of ischemic heart disease.

  3. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  4. A recurrent neural network approach to quantitatively studying solar wind effects on TEC derived from GPS; preliminary results

    Directory of Open Access Journals (Sweden)

    J. B. Habarulema

    2009-05-01

    Full Text Available This paper attempts to describe the search for the parameter(s) to represent solar wind effects in Global Positioning System total electron content (GPS TEC) modelling using the technique of neural networks (NNs). A study is carried out by including solar wind velocity (Vsw), proton number density (Np) and the Bz component of the interplanetary magnetic field (IMF Bz) obtained from the Advanced Composition Explorer (ACE) satellite as separate inputs to the NN, each along with day number of the year (DN), hour (HR), a 4-month running mean of the daily sunspot number (R4) and the running mean of the previous eight 3-hourly magnetic A index values (A8). Hourly GPS TEC values derived from a dual frequency receiver located at Sutherland (32.38° S, 20.81° E), South Africa for 8 years (2000–2007) have been used to train the Elman neural network (ENN), and the result has been used to predict TEC variations for a GPS station located at Cape Town (33.95° S, 18.47° E). Quantitative results indicate that each of the parameters considered may have some degree of influence on GPS TEC at certain periods, although a decrease in prediction accuracy is also observed for some parameters for different days and seasons. It is also evident that there is still a difficulty in predicting TEC values during disturbed conditions. The improvements and degradation in prediction accuracies are both close to the benchmark values, which lends weight to the belief that diurnal, seasonal, solar and magnetic variabilities may be the major determinants of TEC variability.
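    The Elman architecture referred to above keeps a copy of the previous hidden state in context units. A minimal forward-pass sketch (the four-input choice loosely mirrors the DN/HR/R4/A8 setup; sizes and weights are random placeholders, untrained):

```python
import numpy as np

# Elman network forward pass: the hidden layer sees the current input plus
# the context units, which hold a copy of the previous hidden state.
rng = np.random.default_rng(1)
n_in, n_hid, n_out = 4, 8, 1          # e.g. (DN, HR, R4, A8) -> TEC
W_xh = rng.normal(scale=0.3, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.3, size=(n_hid, n_hid))   # context weights
W_hy = rng.normal(scale=0.3, size=(n_out, n_hid))

def elman_forward(inputs):
    h = np.zeros(n_hid)               # context starts empty
    outputs = []
    for x in inputs:
        h = np.tanh(W_xh @ x + W_hh @ h)
        outputs.append(W_hy @ h)
    return np.array(outputs)

day = rng.normal(size=(24, n_in))     # 24 hourly input vectors
print(elman_forward(day).shape)       # one estimate per hour
```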

  5. Coupled Heuristic Prediction of Long Lead-Time Accumulated Total Inflow of a Reservoir during Typhoons Using Deterministic Recurrent and Fuzzy Inference-Based Neural Network

    Directory of Open Access Journals (Sweden)

    Chien-Lin Huang

    2015-11-01

    Full Text Available This study applies a Real-Time Recurrent Learning Neural Network (RTRLNN) and an Adaptive Network-based Fuzzy Inference System (ANFIS) with novel heuristic techniques to develop an advanced prediction model of the accumulated total inflow of a reservoir, in order to cope with the highly varied uncertainty of long lead times during typhoon attacks in real-time forecasting. To improve the temporal-spatial forecast precision, the following original specialized heuristic inputs were coupled: observed-predicted inflow increase/decrease (OPIID) rate, total precipitation, and duration from the current time to the time of maximum precipitation and direct runoff ending (DRE). This study also investigated the temporal-spatial forecast error features to assess the feasibility of the developed models, and analyzed the output sensitivity of both single and combined heuristic inputs to determine whether the heuristic model is susceptible to the impact of future forecast uncertainty/errors. Validation results showed that the long lead-time predicted accuracy and stability of the RTRLNN-based accumulated total inflow model are better than those of the ANFIS-based model because of the real-time recurrent deterministic routing mechanism of the RTRLNN. Simulations show that the RTRLNN-based model with coupled heuristic inputs (RTRLNN-CHI; average error percentage (AEP)/average forecast lead-time (AFLT): 6.3%/49 h) achieves better prediction than the models with non-heuristic inputs (AEP of RTRLNN-NHI and ANFIS-NHI: 15.2%/31.8%) because of its full consideration of real-time hydrological initial/boundary conditions. Moreover, the RTRLNN-CHI model can extend the forecast lead-time beyond 49 h with less than 10% AEP, overcoming the previous limits of 6-h AFLT with 20%–40% AEP.

  6. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    Science.gov (United States)

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic the input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires lower computational resources.
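    The cellular-automaton flavor of such neurons can be caricatured as a tiny finite-state machine driven by asynchronous events (a toy illustration, not the paper's ACAN):

```python
# Toy cellular-automaton-style neuron: an integer membrane state advanced by
# asynchronous (time-stamped) stimulus events; crossing the threshold emits a
# spike and resets the state, like a transition back to state 0.
def ca_neuron(events, threshold=4):
    v, spikes = 0, []
    for t, inc in events:
        v += inc
        if v >= threshold:
            spikes.append(t)
            v = 0
    return spikes

print(ca_neuron([(1, 2), (3, 1), (4, 2), (9, 3), (10, 2)]))  # [4, 10]
```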

  7. Design of a decoupled AP1000 reactor core control system using digital proportional–integral–derivative (PID) control based on a quasi-diagonal recurrent neural network (QDRNN)

    International Nuclear Information System (INIS)

    Wei, Xinyu; Wang, Pengfei; Zhao, Fuyu

    2016-01-01

    Highlights: • We establish a disperse dynamic model for AP1000 reactor core. • A digital PID control based on QDRNN is used to design a decoupling control system. • The decoupling performance is verified and discussed. • The decoupling control system is simulated under the load following operation. - Abstract: The control system of the AP1000 reactor core uses the mechanical shim (MSHIM) strategy, which includes a power control subsystem and an axial power distribution control subsystem. To address the strong coupling between the two subsystems, an interlock between the two subsystems is used, which can only alleviate but not eliminate the coupling. Therefore, sometimes the axial offset (AO) cannot be controlled tightly, and the flexibility of load-following operation is limited. Thus, the decoupling of the original AP1000 reactor core control system is the focus of this paper. First, a two-node disperse dynamic model is established for the AP1000 reactor core to use PID control. Then, a digital PID control system based on a quasi-diagonal recurrent neural network (QDRNN) is designed to decouple the original system. Finally, the decoupling of the control system is verified by the step signal and load-following condition. The results show that the designed control system can decouple the original system as expected and the AO can be controlled much more tightly. Moreover, the flexibility of the load following is increased.
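    The digital PID half of the design reduces, per channel, to the textbook discrete update below (gains and the first-order toy plant are arbitrary illustration values; the QDRNN decoupler itself is not sketched here):

```python
# Discrete PID step: proportional, accumulated-integral, and finite-difference
# derivative terms. Gains are arbitrary illustration values.
def pid_step(state, setpoint, measurement, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    err = setpoint - measurement
    state["integral"] += err * dt
    deriv = (err - state["prev_err"]) / dt
    state["prev_err"] = err
    return kp * err + ki * state["integral"] + kd * deriv

# Drive a toy first-order plant y' = u - y toward a setpoint of 1.0.
state = {"integral": 0.0, "prev_err": 0.0}
y = 0.0
for _ in range(1000):
    u = pid_step(state, 1.0, y)
    y += 0.1 * (u - y)
print(round(y, 3))  # settles at the setpoint
```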

  8. Reactive Power Control of Single-Stage Three-Phase Photovoltaic System during Grid Faults Using Recurrent Fuzzy Cerebellar Model Articulation Neural Network

    Directory of Open Access Journals (Sweden)

    Faa-Jeng Lin

    2014-01-01

    Full Text Available This study presents a new active and reactive power control scheme for a single-stage three-phase grid-connected photovoltaic (PV) system during grid faults. The presented PV system utilizes a single-stage three-phase current-controlled voltage-source inverter to achieve maximum power point tracking (MPPT) control of the PV panel with the function of low voltage ride through (LVRT). Moreover, a formula based on positive sequence voltage for evaluating the percentage of voltage sag is derived to determine the ratio of the injected reactive current to satisfy the LVRT regulations. To reduce the risk of overcurrent during LVRT operation, a current limit is predefined for the injection of reactive current. Furthermore, the control of active and reactive power is designed using a two-dimensional recurrent fuzzy cerebellar model articulation neural network (2D-RFCMANN). In addition, the online learning laws of the 2D-RFCMANN are derived according to the gradient descent method with varied learning-rate coefficients for network parameters to assure the convergence of the tracking error. Finally, some experimental tests are realized to validate the effectiveness of the proposed control scheme.

  9. arXiv The prototype of the HL-LHC magnets monitoring system based on Recurrent Neural Networks and adaptive quantization

    CERN Document Server

    Wielgosz, Maciej; Skoczeń, Andrzej

    This paper focuses on an examination of an applicability of Recurrent Neural Network models for detecting anomalous behavior of the CERN superconducting magnets. In order to conduct the experiments, the authors designed and implemented an adaptive signal quantization algorithm and a custom GRU-based detector and developed a method for the detector parameters selection. Three different datasets were used for testing the detector. Two artificially generated datasets were used to assess the raw performance of the system whereas the 231 MB dataset composed of the signals acquired from HiLumi magnets was intended for real-life experiments and model training. Several different setups of the developed anomaly detection system were evaluated and compared with a state-of-the-art OC-SVM reference model operating on the same data. The OC-SVM model was equipped with a rich set of feature extractors accounting for a range of the input signal properties. It was determined in the course of the experiments that the detector, a...
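    The adaptive-quantization ingredient can be illustrated with a rolling-quantile scheme (an assumption for illustration; the paper's algorithm is its own design):

```python
import numpy as np

# Map a signal to a small symbol alphabet whose bin edges adapt to a rolling
# window, so the quantizer tracks slow drifts in the signal's range.
def adaptive_quantize(signal, n_levels=4, window=64):
    symbols = []
    for i, x in enumerate(signal):
        ref = signal[max(0, i - window):i + 1]
        edges = np.quantile(ref, np.linspace(0, 1, n_levels + 1)[1:-1])
        symbols.append(int(np.searchsorted(edges, x)))
    return symbols

rng = np.random.default_rng(0)
sig = np.sin(np.linspace(0.0, 6.28, 100)) + 0.1 * rng.normal(size=100)
syms = adaptive_quantize(sig)
print(len(syms), min(syms), max(syms))
```

    Symbol streams like this are what a GRU-based detector would consume; anomalies then show up as improbable symbol sequences.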

  10. Design of a decoupled AP1000 reactor core control system using digital proportional–integral–derivative (PID) control based on a quasi-diagonal recurrent neural network (QDRNN)

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Xinyu, E-mail: xyuwei@mail.xjtu.edu.cn; Wang, Pengfei, E-mail: pengfeixiaoli@yahoo.cn; Zhao, Fuyu, E-mail: fuyuzhao_xj@163.com

    2016-08-01

    Highlights: • We establish a disperse dynamic model for AP1000 reactor core. • A digital PID control based on QDRNN is used to design a decoupling control system. • The decoupling performance is verified and discussed. • The decoupling control system is simulated under the load following operation. - Abstract: The control system of the AP1000 reactor core uses the mechanical shim (MSHIM) strategy, which includes a power control subsystem and an axial power distribution control subsystem. To address the strong coupling between the two subsystems, an interlock between the two subsystems is used, which can only alleviate but not eliminate the coupling. Therefore, sometimes the axial offset (AO) cannot be controlled tightly, and the flexibility of load-following operation is limited. Thus, the decoupling of the original AP1000 reactor core control system is the focus of this paper. First, a two-node disperse dynamic model is established for the AP1000 reactor core to use PID control. Then, a digital PID control system based on a quasi-diagonal recurrent neural network (QDRNN) is designed to decouple the original system. Finally, the decoupling of the control system is verified by the step signal and load-following condition. The results show that the designed control system can decouple the original system as expected and the AO can be controlled much more tightly. Moreover, the flexibility of the load following is increased.

  11. An adaptive recurrent neural-network controller using a stabilization matrix and predictive inputs to solve a tracking problem under disturbances.

    Science.gov (United States)

    Fairbank, Michael; Li, Shuhui; Fu, Xingang; Alonso, Eduardo; Wunsch, Donald

    2014-01-01

    We present a recurrent neural-network (RNN) controller designed to solve the tracking problem for control systems. We demonstrate that a major difficulty in training any RNN is the problem of exploding gradients, and we propose a solution to this in the case of tracking problems, by introducing a stabilization matrix and by using carefully constrained context units. This solution allows us to achieve consistently lower training errors, and hence allows us to more easily introduce adaptive capabilities. The resulting RNN is one that has been trained off-line to be rapidly adaptive to changing plant conditions and changing tracking targets. The case study we use is a renewable-energy generator application; that of producing an efficient controller for a three-phase grid-connected converter. The controller we produce can cope with the random variation of system parameters and fluctuating grid voltages. It produces tracking control with almost instantaneous response to changing reference states, and virtually zero oscillation. This compares very favorably to the classical proportional-integral (PI) controllers, which we show produce a much slower response and settling time. In addition, the RNN we propose exhibits better learning stability and convergence properties, and can exhibit faster adaptation, than has been achieved with adaptive critic designs. Copyright © 2013 Elsevier Ltd. All rights reserved.
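    One way to see the exploding-gradient issue the authors address: backpropagated factors multiply the recurrent Jacobian repeatedly, so constraining the recurrent matrix's spectral radius below one keeps those products bounded. The sketch below shows that idea only (the paper's stabilization matrix is a different construction):

```python
import numpy as np

# Rescale a random recurrent matrix so its spectral radius is below 1; the
# product of many such Jacobians then decays instead of exploding.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))

def spectral_rescale(W, target=0.9):
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (target / radius)

W_s = spectral_rescale(W)
print(np.linalg.norm(np.linalg.matrix_power(W, 20)))    # blows up
print(np.linalg.norm(np.linalg.matrix_power(W_s, 100))) # decays toward zero
```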

  12. Toward a new task assignment and path evolution (TAPE) for missile defense system (MDS) using intelligent adaptive SOM with recurrent neural networks (RNNs).

    Science.gov (United States)

    Wang, Chi-Hsu; Chen, Chun-Yao; Hung, Kun-Neng

    2015-06-01

    In this paper, a new adaptive self-organizing map (SOM) with a recurrent neural network (RNN) controller is proposed for task assignment and path evolution in a missile defense system (MDS). We address the problem of N agents (defending missiles) and D targets (incoming missiles) in an MDS. A new RNN controller is designed to force an agent (or defending missile) toward a target (or incoming missile), and a monitoring controller is also designed to reduce the error between the RNN controller and the ideal controller. A new SOM with an RNN controller is then designed to dispatch agents to their corresponding targets by minimizing the total damage cost. This is actually an important application of multiagent systems. The SOM with the RNN controller is the main controller. After task assignment, the weighting factors of our new SOM with the RNN controller are activated to dispatch the agents toward their corresponding targets. Using Lyapunov constraints, the weighting factors for the proposed SOM with the RNN controller are updated to guarantee the stability of the path evolution (or planning) system. Excellent simulations are obtained using this new approach for the MDS, which show that our RNN has the lowest average miss distance among the several techniques compared.
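    The SOM-style dispatch step alone can be sketched as follows (a hedged toy: positions and the learning rate are illustrative, and the paper's RNN controller and Lyapunov-based updates are omitted):

```python
import numpy as np

# Winner-take-all dispatch: each target repeatedly selects the nearest agent
# weight, which is then pulled toward that target until every agent has
# locked onto a distinct target.
agents = np.array([[0.0, 0.0], [10.0, 10.0], [10.0, 0.0]])   # defenders
targets = np.array([[1.0, 1.0], [9.0, 9.0], [9.0, 1.0]])     # incoming

weights = agents.copy()
for _ in range(50):
    for t in targets:
        winner = int(np.argmin(np.linalg.norm(weights - t, axis=1)))
        weights[winner] += 0.2 * (t - weights[winner])

print(np.round(weights, 3))  # each weight has converged onto one target
```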

  13. Prediction of beta-turns and beta-turn types by a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN).

    Science.gov (United States)

    Kirschner, Andreas; Frishman, Dmitrij

    2008-10-01

    Prediction of beta-turns from amino acid sequences has long been recognized as an important problem in structural bioinformatics due to their frequent occurrence as well as their structural and functional significance. Because various structural features of proteins are intercorrelated, secondary structure information has often been employed as an additional input for machine learning algorithms while predicting beta-turns. Here we present a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN) capable of predicting multiple mutually dependent structural motifs and demonstrate its efficiency in recognizing three aspects of protein structure: beta-turns, beta-turn types, and secondary structure. The advantage of our method compared to other predictors is that it does not require any external input except for sequence profiles, because interdependencies between different structural features are taken into account implicitly during the learning process. In a sevenfold cross-validation experiment on a standard test dataset our method exhibits a total prediction accuracy of 77.9% and a Matthews correlation coefficient of 0.45, the highest performance reported so far. It also outperforms other known methods in delineating individual turn types. We demonstrate how simultaneous prediction of multiple targets influences prediction performance on single targets. The MOLEBRNN presented here is a generic method applicable in a variety of research fields where multiple mutually dependent target classes need to be predicted. http://webclu.bio.wzw.tum.de/predator-web/.
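    The bidirectional, multiple-output idea can be sketched as follows (untrained random weights and illustrative sizes; the real MOLEBRNN's training procedure and profile inputs are omitted):

```python
import numpy as np

# Bidirectional Elman sketch with two output layers sharing one hidden
# representation: e.g. beta-turn/no-turn and a 3-class secondary structure.
rng = np.random.default_rng(3)
n_in, n_hid = 20, 6                   # per-residue profile size (illustrative)
Wf_x = rng.normal(size=(n_hid, n_in)); Wf_h = rng.normal(size=(n_hid, n_hid))
Wb_x = rng.normal(size=(n_hid, n_in)); Wb_h = rng.normal(size=(n_hid, n_hid))
W_turn = rng.normal(size=(2, 2 * n_hid))   # output layer 1: turn / not turn
W_ss = rng.normal(size=(3, 2 * n_hid))     # output layer 2: H / E / C

def molebrnn_sketch(seq):
    T = len(seq)
    hf, hb = np.zeros((T, n_hid)), np.zeros((T, n_hid))
    h = np.zeros(n_hid)
    for t in range(T):                     # left-to-right pass
        h = np.tanh(Wf_x @ seq[t] + Wf_h @ h)
        hf[t] = h
    h = np.zeros(n_hid)
    for t in reversed(range(T)):           # right-to-left pass
        h = np.tanh(Wb_x @ seq[t] + Wb_h @ h)
        hb[t] = h
    ctx = np.hstack([hf, hb])              # both contexts per residue
    return ctx @ W_turn.T, ctx @ W_ss.T    # two mutually dependent outputs

turn_logits, ss_logits = molebrnn_sketch(rng.normal(size=(30, n_in)))
print(turn_logits.shape, ss_logits.shape)  # one pair of outputs per residue
```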

  14. Reversal of rocuronium-induced neuromuscular blockade by sugammadex allows for optimization of neural monitoring of the recurrent laryngeal nerve.

    Science.gov (United States)

    Lu, I-Cheng; Wu, Che-Wei; Chang, Pi-Ying; Chen, Hsiu-Ya; Tseng, Kuang-Yi; Randolph, Gregory W; Cheng, Kuang-I; Chiang, Feng-Yu

    2016-04-01

    The use of neuromuscular blocking agents may affect intraoperative neuromonitoring (IONM) during thyroid surgery. An enhanced neuromuscular-blockade (NMB) recovery protocol was investigated in a porcine model and subsequently clinically applied during human thyroid neural monitoring surgery. Prospective animal and retrospective clinical study. In the animal experiment, 12 piglets were injected with rocuronium 0.6 mg/kg and randomly allocated to receive normal saline, sugammadex 2 mg/kg, or sugammadex 4 mg/kg to compare the recovery of laryngeal electromyography (EMG). In a subsequent clinical application study, 50 patients who underwent thyroidectomy with IONM followed an enhanced NMB recovery protocol: rocuronium 0.6 mg/kg at anesthesia induction and sugammadex 2 mg/kg at the operation start. The train-of-four (TOF) ratio was used for continuous quantitative monitoring of neuromuscular transmission. In our porcine model, it took 49 ± 15, 13.2 ± 5.6, and 4.2 ± 1.5 minutes for the 80% recovery of laryngeal EMG after injection of saline, sugammadex 2 mg/kg, and sugammadex 4 mg/kg, respectively. In subsequent clinical human application, the TOF ratio recovered from 0 to >0.9 within 5 minutes after administration of sugammadex 2 mg/kg at the operation start. All patients had positive and high EMG amplitude at the early stage of the operation, and intubation was without difficulty in 96% of patients. Both porcine modeling and clinical human application demonstrated that sugammadex 2 mg/kg allows effective and rapid restoration of neuromuscular function suppressed by rocuronium. Implementation of this enhanced NMB recovery protocol assures optimal conditions for tracheal intubation as well as IONM in thyroid surgery. NA. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  15. Modeling of biologically motivated self-learning equivalent-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for image fragments clustering and recognition

    Science.gov (United States)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2018-03-01

    The biologically motivated self-learning equivalence-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for image fragment clustering and recognition are discussed. We consider these neural structures and their spatial-invariant equivalent models (SIEMs), based on proposed equivalent two-dimensional functions of image similarity and the corresponding matrix-matrix (or tensor) procedures that use basic operations of continuous logic and nonlinear processing. These SIEMs can simply describe signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. The clustering efficiency in such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the applied types of nonlinear processing and on the function used for image comparison or for adaptive-equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-study and self-identification of features and signs of fragment similarity, and the ability to cluster and recognize image fragments with high efficiency and strong mutual correlation. The proposed combined learning-recognition clustering method for fragments, with regard to their structural features, is suitable not only for binary but also for color images, and combines self-learning with the formation of weight-clustered matrix patterns. Its model is constructed and designed on the basis of recursive continuous-logic and nonlinear processing algorithms and on the k-average method or the winner-takes-all (WTA) method. The experimental results confirmed that fragments with a large number of elements may be clustered. For the first time the possibility of generalizing these models to the space-invariant case is shown. The experiment for images of different dimensions (a reference

  16. Recurrent varicocele

    Directory of Open Access Journals (Sweden)

    Katherine Rotker

    2016-01-01

    Full Text Available Varicocele recurrence is one of the most common complications associated with varicocele repair. A systematic review was performed to evaluate varicocele recurrence rates, anatomic causes of recurrence, and methods of management of recurrent varicoceles. The PubMed database was searched using the keywords "recurrent" and "varicocele" as well as the MeSH criteria "recurrent" and "varicocele." Articles were excluded if they were not in English, represented single case reports, focused solely on subclinical varicocele, or focused solely on a pediatric population (age <18). Rates of recurrence vary with the technique of varicocele repair from 0% to 35%. The anatomy of recurrence can be defined by venography. Management of varicocele recurrence can be surgical or via embolization.

  17. Exponential stability for stochastic delayed recurrent neural networks with mixed time-varying delays and impulses: the continuous-time case

    International Nuclear Information System (INIS)

    Karthik Raja, U; Leelamani, A; Raja, R; Samidurai, R

    2013-01-01

    In this paper, the exponential stability for a class of stochastic neural networks with time-varying delays and impulsive effects is considered. By constructing suitable Lyapunov functionals and by using the linear matrix inequality optimization approach, we obtain sufficient delay-dependent criteria to ensure the exponential stability of stochastic neural networks with time-varying delays and impulses. Two numerical examples with simulation results are provided to illustrate the effectiveness of the obtained results over those already existing in the literature. (paper)
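    In the simplest delay-free, deterministic linear case, the Lyapunov-functional machinery behind such criteria reduces to the classical test: ẋ = Ax is exponentially stable iff AᵀP + PA = −Q admits a symmetric positive definite solution P for Q ≻ 0. A minimal NumPy sketch (the example matrices are hypothetical, and none of the paper's delay, impulse, or stochastic terms are modeled):

```python
import numpy as np

def lyapunov_P(A, Q=None):
    """Solve A^T P + P A = -Q for P by vectorizing with Kronecker products.

    Assumes the equation is solvable, i.e. no two eigenvalues of A sum to zero.
    """
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)   # row-major vec of A^T P + P A
    P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
    return 0.5 * (P + P.T)                  # symmetrize against round-off

def is_exponentially_stable(A):
    """A is Hurwitz iff the Lyapunov solution P is positive definite."""
    return bool(np.all(np.linalg.eigvalsh(lyapunov_P(A)) > 0))
```

The LMI conditions in the paper generalize exactly this feasibility check to the delayed, impulsive, stochastic setting.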

  18. Recurrent Meningitis.

    Science.gov (United States)

    Rosenberg, Jon; Galen, Benjamin T

    2017-07-01

    Recurrent meningitis is a rare clinical scenario that can be self-limiting or life threatening depending on the underlying etiology. This review describes the causes, risk factors, treatment, and prognosis for recurrent meningitis. As a general overview of a broad topic, the aim of this review is to provide clinicians with a comprehensive differential diagnosis to aid in the evaluation and management of a patient with recurrent meningitis. New developments related to understanding the pathophysiology of recurrent meningitis are as scarce as studies evaluating the treatment and prevention of this rare disorder. A trial evaluating oral valacyclovir suppression after HSV-2 meningitis did not demonstrate a benefit in preventing recurrences. The data on prophylactic antibiotics after basilar skull fractures do not support their use. Intrathecal trastuzumab has shown promise in treating leptomeningeal carcinomatosis from HER-2 positive breast cancer. Monoclonal antibodies used to treat cancer and autoimmune diseases are new potential causes of drug-induced aseptic meningitis. Despite their potential for causing recurrent meningitis, the clinical entities reviewed herein are not frequently discussed together given that they are a heterogeneous collection of unrelated, rare diseases. Epidemiologic data on recurrent meningitis are lacking. The syndrome of recurrent benign lymphocytic meningitis described by Mollaret in 1944 was later found to be closely related to HSV-2 reactivation, but HSV-2 is by no means the only etiology of recurrent aseptic meningitis. While the mainstay of treatment for recurrent meningitis is supportive care, it is paramount to ensure that reversible and treatable causes have been addressed for further prevention.

  19. Equivalence of Equilibrium Propagation and Recurrent Backpropagation

    OpenAIRE

    Scellier, Benjamin; Bengio, Yoshua

    2017-01-01

    Recurrent Backpropagation and Equilibrium Propagation are algorithms for fixed point recurrent neural networks which differ in their second phase. In the first phase, both algorithms converge to a fixed point which corresponds to the configuration where the prediction is made. In the second phase, Recurrent Backpropagation computes error derivatives whereas Equilibrium Propagation relaxes to another nearby fixed point. In this work we establish a close connection between these two algorithms....
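    The first ("free") phase shared by both algorithms relaxes the network state to a fixed point of its dynamics. A minimal sketch of that relaxation for a small symmetric-weight tanh network (the symmetric weights guarantee an underlying energy function; the specific dynamics and sizes are illustrative):

```python
import numpy as np

def relax_to_fixed_point(W, b, s0, dt=0.1, tol=1e-8, max_steps=10000):
    """Free-phase relaxation: iterate s' = -s + tanh(W s + b) until a
    fixed point is reached.  W is assumed symmetric so that an energy
    function exists, as both algorithms require."""
    s = s0.copy()
    for _ in range(max_steps):
        ds = -s + np.tanh(W @ s + b)
        s += dt * ds
        if np.linalg.norm(ds) < tol:
            break
    return s
```

Recurrent Backpropagation then backpropagates through this fixed point, while Equilibrium Propagation instead nudges the output units and relaxes again.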

  20. Multiple data fusion for rainfall estimation using a NARX-based recurrent neural network – the development of the REIINN model

    International Nuclear Information System (INIS)

    Ang, M R C O; Gonzalez, R M; Castro, P P M

    2014-01-01

    Rainfall, one of the important elements of the hydrologic cycle, is also the most difficult to model. Accurate rainfall estimation is therefore necessary, especially in localized catchment areas where the variability of rainfall is extremely high. Moreover, early warning of severe rainfall through timely and accurate estimation and forecasting could help prevent disasters from flooding. This paper presents the development of two rainfall estimation models that utilize a NARX-based neural network architecture, namely REIINN 1 and REIINN 2. These REIINN models, or Rainfall Estimation by Information Integration using Neural Networks, were trained using MTSAT cloud-top temperature (CTT) images and rainfall rates from the combined rain gauge and TMPA 3B40RT datasets. Model performance was assessed using two metrics: root mean square error (RMSE) and correlation coefficient (R). REIINN 1 yielded an RMSE of 8.1423 mm/3h and an overall R of 0.74652, while REIINN 2 yielded an RMSE of 5.2303 mm/3h and an overall R of 0.90373. The results, especially those of REIINN 2, are very promising for satellite-based rainfall estimation at a catchment scale. It is believed that model performance and accuracy will greatly improve with denser and more spatially distributed in situ rainfall measurements with which to calibrate the model. The models proved the viability of using remote sensing images, with their good spatial coverage, near-real-time availability, and relatively low acquisition cost, as an alternative source of rainfall estimates to complement existing ground-based measurements.
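    A NARX model predicts the current output from lagged outputs and lagged exogenous inputs. A toy sketch of the idea, with a one-hidden-layer tanh network trained by full-batch gradient descent (the architecture, lags, and data below are illustrative and unrelated to REIINN's MTSAT/TMPA inputs):

```python
import numpy as np

def train_narx(y, x, p=2, q=2, hidden=8, lr=0.1, epochs=2000, seed=0):
    """Fit y[t] ~ f(y[t-1..t-p], x[t-1..t-q]) with a one-hidden-layer
    tanh network trained by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    lag = max(p, q)
    # lagged regression matrix: most recent values first
    rows = [np.concatenate([y[t - p:t][::-1], x[t - q:t][::-1]])
            for t in range(lag, len(y))]
    X, T = np.array(rows), y[lag:]
    W1 = 0.5 * rng.standard_normal((X.shape[1], hidden))
    b1, b2 = np.zeros(hidden), 0.0
    w2 = 0.5 * rng.standard_normal(hidden)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        err = H @ w2 + b2 - T                    # prediction error
        gH = np.outer(err, w2) * (1 - H ** 2)    # backprop through tanh
        W1 -= lr * X.T @ gH / len(T)
        b1 -= lr * gH.mean(axis=0)
        w2 -= lr * H.T @ err / len(T)
        b2 -= lr * err.mean()
    H = np.tanh(X @ W1 + b1)
    rmse = float(np.sqrt(np.mean((H @ w2 + b2 - T) ** 2)))
    return (W1, b1, w2, b2), rmse
```

On a series that really is driven by the exogenous input, the fitted network's RMSE drops well below the series' own standard deviation.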

  1. Estimating Time Series Soil Moisture by Applying Recurrent Nonlinear Autoregressive Neural Networks to Passive Microwave Data over the Heihe River Basin, China

    Directory of Open Access Journals (Sweden)

    Zheng Lu

    2017-06-01

    Full Text Available A method using a nonlinear auto-regressive neural network with exogenous input (NARXnn to retrieve time series soil moisture (SM that is spatially and temporally continuous and high quality over the Heihe River Basin (HRB in China was investigated in this study. The input training data consisted of the X-band dual polarization brightness temperature (TB and the Ka-band V polarization TB from the Advanced Microwave Scanning Radiometer II (AMSR2, Global Land Satellite product (GLASS Leaf Area Index (LAI, precipitation from the Tropical Rainfall Measuring Mission (TRMM and the Global Precipitation Measurement (GPM, and a global 30 arc-second elevation (GTOPO-30. The output training data were generated from fused SM products of the Japan Aerospace Exploration Agency (JAXA and the Land Surface Parameter Model (LPRM. The reprocessed fused SM from two years (2013 and 2014 was inputted into the NARXnn for training; subsequently, SM during a third year (2015 was estimated. Direct and indirect validations were then performed during the period 2015 by comparing with in situ measurements, SM from JAXA, LPRM and the Global Land Data Assimilation System (GLDAS, as well as precipitation data from TRMM and GPM. The results showed that the SM predictions from NARXnn performed best, as indicated by their higher correlation coefficients (R ≥ 0.85 for the whole year of 2015, lower Bias values (absolute value of Bias ≤ 0.02 and root mean square error values (RMSE ≤ 0.06, and their improved response to precipitation. This method is being used to produce the NARXnn SM product over the HRB in China.

  2. Recurrent vulvovaginitis.

    Science.gov (United States)

    Powell, Anna M; Nyirjesy, Paul

    2014-10-01

    Vulvovaginitis (VV) is one of the problems most commonly encountered by gynecologists. Many women frequently self-treat with over-the-counter medications and may present to their health-care provider after a treatment failure. Vulvovaginal candidiasis, bacterial vaginosis, and trichomoniasis may occur as discrete or recurrent episodes and have been associated with significant treatment cost and morbidity. We present an update on diagnostic capabilities and treatment modalities that address recurrent and refractory episodes of VV. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks, based on gradient descent, has the significant drawback of slow convergence. A Gauss-Newton-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure for neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS-type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.
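    For a linear-in-parameters model y = φᵀw, the RLS recursion at the core of such algorithms takes the familiar gain/error/covariance form; a sketch (the paper couples this with dynamic error backpropagation through the recurrent network, which is not reproduced here):

```python
import numpy as np

class RLS:
    """Recursive least-squares estimator for y = phi . w + noise."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)
        self.P = delta * np.eye(n)   # large initial covariance = weak prior
        self.lam = lam               # forgetting factor (1.0 = no forgetting)

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)      # gain vector
        err = y - phi @ self.w                  # a priori prediction error
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return err
```

Each update costs O(n²) instead of re-solving the full least-squares problem, which is what makes RLS attractive for online training.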

  4. Türkiye’de Enflasyonun İleri ve Geri Beslemeli Yapay Sinir Ağlarının Melez Yaklaşımı ile Öngörüsü = Forecasting of Turkey Inflation with Hybrid of Feed Forward and Recurrent Artifical Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Rezan USLU

    2010-01-01

    Full Text Available Obtaining accurate inflation forecasts is an important problem, since more accurate forecasts lead to better decisions. Various time series techniques have been used in the literature for inflation prediction. Recently, the Artificial Neural Network (ANN) has been preferred for time series prediction problems due to its flexible modeling capacity. An artificial neural network can be applied easily to any time series, since it does not require prior conditions such as a specific linear or nonlinear model form, stationarity, or a normal distribution. In this study, predictions of the Consumer Price Index (CPI) were obtained using feed-forward and recurrent artificial neural networks. A new ANN-based combined forecast is proposed, in which the predictions of the ANN models employed in the analysis were used as input data.

  5. Recurrent Spatial Transformer Networks

    DEFF Research Database (Denmark)

    Sønderby, Søren Kaae; Sønderby, Casper Kaae; Maaløe, Lars

    2015-01-01

    We integrate the recently proposed spatial transformer network (SPN) [Jaderberg et al. 2015] into a recurrent neural network (RNN) to form an RNN-SPN model. We use the RNN-SPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single digit error of 1.5% compared to 2.9% for a convolutional network and 2.0% for convolutional networks with SPN layers. The SPN outputs a zoomed, rotated and skewed version of the input image. We investigate different down-sampling factors (ratio of pixels in input and output) for the SPN and show that the RNN-SPN model is able to down-sample the input...

  6. Automated Item Generation with Recurrent Neural Networks.

    Science.gov (United States)

    von Davier, Matthias

    2018-03-12

    Utilizing technology for automated item generation is not a new idea. However, test items used in commercial testing programs or in research are still predominantly written by humans, in most cases by content experts or professional item writers. Human experts are a limited resource, and testing agencies incur high costs in the process of continuously renewing item banks to sustain testing programs. Using algorithms instead holds the promise of providing unlimited resources for this crucial part of assessment development. The approach presented here deviates in several ways from previous attempts to solve this problem. In the past, automatic item generation relied either on generating clones of narrowly defined item types such as those found in language-free intelligence tests (e.g., Raven's progressive matrices) or on an extensive analysis of task components and derivation of schemata to produce items with pre-specified variability that are hoped to have predictable levels of difficulty. It is somewhat unlikely that researchers utilizing these previous approaches would look at the proposed approach with favor; however, recent applications of machine learning show success in solving tasks that seemed impossible for machines not too long ago. The proposed approach uses deep learning to implement probabilistic language models, not unlike what Google Brain and Amazon Alexa use for language processing and generation.

  7. Capturing non-local interactions by long short-term memory bidirectional recurrent neural networks for improving prediction of protein secondary structure, backbone angles, contact numbers and solvent accessibility.

    Science.gov (United States)

    Heffernan, Rhys; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-09-15

    The accuracy of predicting protein local and global structural properties such as secondary structure and solvent accessible surface area has been stagnant for many years because of the challenge of accounting for non-local interactions between amino acid residues that are close in three-dimensional structural space but far from each other in their sequence positions. All existing machine-learning techniques relied on a sliding window of 10-20 amino acid residues to capture some 'short to intermediate' non-local interactions. Here, we employed Long Short-Term Memory (LSTM) Bidirectional Recurrent Neural Networks (BRNNs), which are capable of capturing long-range interactions without using a window. We showed that the application of LSTM-BRNNs to the prediction of protein structural properties makes the most significant improvement for residues with the most long-range contacts (|i-j| > 19) over a previous window-based, deep-learning method, SPIDER2. Capturing long-range interactions allows the accuracy of three-state secondary structure prediction to reach 84% and the correlation coefficient between predicted and actual solvent accessible surface areas to reach 0.80, plus a reduction of 5%, 10%, 5% and 10% in the mean absolute error for backbone ϕ, ψ, θ and τ angles, respectively, from SPIDER2. More significantly, 27% of 182724 40-residue models directly constructed from predicted Cα atom-based θ and τ have similar structures to their corresponding native structures (6 Å RMSD or less), which is 3% better than models built from ϕ and ψ angles. We expect the method to be useful for assisting protein structure and function prediction. The method is available as a SPIDER3 server and standalone package at http://sparks-lab.org . yaoqi.zhou@griffith.edu.au or yuedong.yang@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email
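    The key structural idea, a forward and a backward recurrent pass whose states are concatenated at each position, can be sketched with plain tanh cells (the paper uses LSTM cells; the random weights here are purely illustrative):

```python
import numpy as np

def bidirectional_rnn(X, Wf, Uf, Wb, Ub):
    """Run a forward and a backward tanh RNN over sequence X (T x d)
    and concatenate their hidden states at each position."""
    T, h = len(X), Wf.shape[1]
    Hf, Hb = np.zeros((T, h)), np.zeros((T, h))
    state = np.zeros(h)
    for t in range(T):                       # left-to-right pass
        state = np.tanh(X[t] @ Wf + state @ Uf)
        Hf[t] = state
    state = np.zeros(h)
    for t in reversed(range(T)):             # right-to-left pass
        state = np.tanh(X[t] @ Wb + state @ Ub)
        Hb[t] = state
    return np.concatenate([Hf, Hb], axis=1)  # (T, 2h) representation
```

Because the backward pass has consumed the whole tail of the sequence, the representation at position 0 changes when the last input changes, which is exactly the windowless non-locality the abstract describes.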

  8. Contemporary deep recurrent learning for recognition

    Science.gov (United States)

    Iftekharuddin, K. M.; Alam, M.; Vidyaratne, L.

    2017-05-01

    Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can become large and computationally intensive as task complexity increases. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN) based hierarchical neural network for object detection. CSRNs have been shown to be more effective at solving complex tasks such as maze traversal and image processing when compared to generic feed-forward networks. While deep neural networks (DNNs) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent in neural networks with recurrency. Further, our work introduces deep hierarchy in the SRN for object recognition. The simultaneous recurrency results in an unfolding effect of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper presents experiments on face, facial expression and character recognition tasks using the novel deep recurrent model and compares recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN based recognition framework into a humanoid robotic platform called NAO.

  9. Pacemaker Therapy in Patients With Neurally Mediated Syncope and Documented Asystole Third International Study on Syncope of Uncertain Etiology (ISSUE-3) A Randomized Trial

    NARCIS (Netherlands)

    Brignole, Michele; Menozzi, Carlo; Moya, Angel; Andresen, Dietrich; Blanc, Jean Jacques; Krahn, Andrew D.; Wieling, Wouter; Beiras, Xulio; Deharo, Jean Claude; Russo, Vitantonio; Tomaino, Marco; Sutton, Richard; Tomaino, M.; Pescoller, F.; Donateo, P.; Oddone, D.; Russo, V.; Pierri, F.; Matino, M. G.; Vitale, E.; Massa, R.; Piccinni, G.; Melissano, D.; Menozzi, C.; Lolli, G.; Gulizia, M.; Francese, M.; Iorfida, M.; Golzio, P.; Gaggioli, G.; Laffi, M.; Rabjoli, F.; Cecchinato, C.; Ungar, A.; Rafanelli, M.; Chisciotti, V.; Morrione, A.; del Rosso, A.; Guernaccia, V.; Palella, M.; D'Agostino, C.; Campana, A.; Brigante, M.; Miracapillo, G.; Addonisio, L.; Proclemer, A.; Facchin, D.; Vado, A.; Knops, R. E.; Dekker, L. R. C.

    2012-01-01

    Background-The efficacy of cardiac pacing for prevention of syncopal recurrences in patients with neurally mediated syncope is controversial. We wanted to determine whether pacing therapy reduces syncopal recurrences in patients with severe asystolic neurally mediated syncope. Methods and

  10. Use of recurrent neural networks for determination of 7-epiclusianone acidity constants in ethanol-water mixtures; Uso de redes neurais recorrentes na determinacao das constantes de acidez para a 7-epiclusianona em misturas etanol-agua

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Ederson D' Martin; Lemes, Nelson Henrique Teixeira, E-mail: nelson.lemes@unifal-mg.edu.br [Instituto de Ciencias Exatas, Universidade Federal de Alfenas, Alfenas, MG (Brazil); Santos, Marcelo Henrique dos [Instituto de Ciencias Farmaceuticas, Universidade Federal de Alfenas, Alfenas, MG (Brazil); Braga, Joao Pedro [Departamento de Quimica, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil)

    2012-07-01

    This work proposes a recurrent neural network to solve the inverse equilibrium problem. The acidity constants of 7-epiclusianone in ethanol-water binary mixtures were determined from multiwavelength spectrophotometric data. A linear relationship between the acidity constants and the % w/v of ethanol in the solvent mixture was observed. The efficiency of the proposed method is compared with that of the Simplex method, commonly used in nonlinear optimization. The neural network method is simple, numerically stable and has a broad range of applicability. (author)

  11. Lyapunov Based Estimation of Flight Stability Boundary under Icing Conditions

    Directory of Open Access Journals (Sweden)

    Binbin Pei

    2017-01-01

    Full Text Available The current flight boundary used for envelope protection in icing conditions is usually defined by critical values of the state parameters; however, such a method takes neither the interrelationship of the parameters nor the effect of external disturbances into consideration. This paper proposes constructing the stability boundary of an aircraft in icing conditions by analyzing the region of attraction (ROA) around the equilibrium point. A nonlinear icing effect model is proposed according to existing wind tunnel test results. On this basis, the iced polynomial short-period model can be derived to obtain the stability boundary under icing conditions using ROA analysis. Simulation results for a range of icing severities demonstrate that, regardless of the icing severity, the boundary of the calculated ROA can be treated as an estimate of the stability boundary around an equilibrium point. The proposed methodology is believed to be a promising way to perform ROA analysis and stability boundary construction for aircraft in icing conditions, and it will provide theoretical support for multiple boundary protection of icing-tolerant flight.
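    For a system with a known Lyapunov candidate V, a crude sampling estimate of the ROA finds the largest level set {V ≤ c} on which V̇ < 0. A one-dimensional toy sketch (the aircraft short-period model and icing terms are not reproduced):

```python
import numpy as np

def roa_level(V, Vdot, samples, tol=1e-9):
    """Largest c such that every sampled state x with 0 < V(x) <= c
    has Vdot(x) < 0, so that {V <= c} is an ROA estimate."""
    v = np.array([V(x) for x in samples])
    vd = np.array([Vdot(x) for x in samples])
    bad = v[(vd >= 0) & (v > tol)]     # samples where V fails to decrease
    return float(bad.min()) if len(bad) else float(v.max())

# toy system x' = -x + x^3: the origin is stable and the true ROA is |x| < 1
f = lambda x: -x + x ** 3
V = lambda x: x ** 2                   # quadratic Lyapunov candidate
Vdot = lambda x: 2 * x * f(x)
c = roa_level(V, Vdot, np.linspace(-2, 2, 4001))
```

Here the recovered level c ≈ 1 reproduces the known ROA boundary |x| = 1; sum-of-squares or simulation-based methods refine the same idea in higher dimensions.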

  12. Neural network-based distributed attitude coordination control for spacecraft formation flying with input saturation.

    Science.gov (United States)

    Zou, An-Min; Kumar, Krishna Dev

    2012-07-01

    This brief considers the attitude coordination control problem for spacecraft formation flying when only a subset of the group members has access to the common reference attitude. A quaternion-based distributed attitude coordination control scheme is proposed with consideration of the input saturation and with the aid of the sliding-mode observer, separation principle theorem, Chebyshev neural networks, smooth projection algorithm, and robust control technique. Using graph theory and a Lyapunov-based approach, it is shown that the distributed controller can guarantee the attitude of all spacecraft to converge to a common time-varying reference attitude when the reference attitude is available only to a portion of the group of spacecraft. Numerical simulations are presented to demonstrate the performance of the proposed distributed controller.

  13. Neural adaptive control for vibration suppression in composite fin-tip of aircraft.

    Science.gov (United States)

    Suresh, S; Kannan, N; Sundararajan, N; Saratchandran, P

    2008-06-01

    In this paper, we present a neural adaptive control scheme for active vibration suppression of a composite aircraft fin tip. The mathematical model of the composite fin tip is derived using the finite element approach. The finite element model is updated experimentally to reflect the natural frequencies and mode shapes very accurately. Piezoelectric actuators and sensors are placed at optimal locations such that vibration suppression is maximized. A model-reference direct adaptive neural network control scheme is proposed to keep the vibration level within the minimum acceptable limit. In this scheme, a Gaussian neural network with linear filters is used to approximate the inverse dynamics of the system, and the parameters of the neural controller are estimated using a Lyapunov-based update law. In order to reduce the computational burden, which is critical for real-time applications, the number of hidden neurons is also estimated in the proposed scheme. The global asymptotic stability of the overall system is ensured using the principles of the Lyapunov approach. Simulation studies are carried out using sinusoidal force functions of varying frequency. Experimental results show that the proposed neural adaptive control scheme is capable of providing significant vibration suppression in the multiple bending modes of interest. The performance of the proposed scheme is better than that of the H∞ control scheme.
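    A Lyapunov-based update law of this kind has the classic adaptive-control form ẇ = γ e φ(x). A scalar toy sketch with a Gaussian (RBF) basis (the plant, gains, and basis below are hypothetical, not the fin-tip model):

```python
import numpy as np

def simulate(T=40.0, dt=0.001, k=3.0, gamma=5.0):
    """Regulate x' = u + d(x) with unknown d(x), using u = -k*x - w.phi(x)
    and the Lyapunov-derived update w' = gamma * x * phi(x), which makes
    V = x**2/2 + |w - w*|**2/(2*gamma) decrease along trajectories."""
    centers = np.linspace(-2.0, 2.0, 9)
    phi = lambda x: np.exp(-((x - centers) ** 2) / 0.5)  # Gaussian basis
    d = lambda x: np.sin(2.0 * x) + 0.5 * x              # "unknown" nonlinearity
    x, w = 1.5, np.zeros_like(centers)
    for _ in range(int(T / dt)):
        u = -k * x - w @ phi(x)             # control with neural cancellation
        x += dt * (u + d(x))                # Euler step of the plant
        w += dt * gamma * x * phi(x)        # Lyapunov-based adaptive law
    return x, w
```

The state settles to a small neighborhood of the origin whose size is set by the network's approximation error, which is the standard guarantee such Lyapunov analyses provide.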

  14. Recurrent Intracerebral Hemorrhage

    DEFF Research Database (Denmark)

    Schmidt, Linnea Boegeskov; Goertz, Sanne; Wohlfahrt, Jan

    2016-01-01

    BACKGROUND: Intracerebral hemorrhage (ICH) is a disease with high mortality and a substantial risk of recurrence. However, the recurrence risk is poorly documented and the knowledge of potential predictors for recurrence among co-morbidities and medicine with antithrombotic effect is limited....... OBJECTIVES: 1) To estimate the short- and long-term cumulative risks of recurrent intracerebral hemorrhage (ICH). 2) To investigate associations between typical comorbid diseases, surgical treatment, use of medicine with antithrombotic effects, including antithrombotic treatment (ATT), selective serotonin...

  15. International Neural Network Society Annual Meeting (1994) Held in San Diego, California on 5-9 June 1994. Volume 3.

    Science.gov (United States)

    1994-06-09

    PROBLEM BASED ON LEARNING IN THE RECURRENT RANDOM NEURAL NETWORK Jose AGUILAR EHEI. UFR de Mathematiques et d’Informatique. Universiti Rene Descartes 45...parallelisme optimal". PHD thesis. Rene Descartes University, Paris, France, 1992. 9. GELENBE, E. "Learning in the recurrent Random Neural Network", Neural

  16. Training trajectories by continuous recurrent multilayer networks.

    Science.gov (United States)

    Leistritz, L; Galicki, M; Witte, H; Kochs, E

    2002-01-01

    This paper addresses the problem of training trajectories by means of continuous recurrent neural networks whose feedforward parts are multilayer perceptrons. Such networks can approximate a general nonlinear dynamic system with arbitrary accuracy. The learning process is transformed into an optimal control framework where the weights are the controls to be determined. A training algorithm based upon a variational formulation of Pontryagin's maximum principle is proposed for such networks. Computer examples demonstrating the efficiency of the given approach are also presented.

  17. BRITS: Bidirectional Recurrent Imputation for Time Series

    OpenAIRE

    Cao, Wei; Wang, Dong; Li, Jian; Zhou, Hao; Li, Lei; Li, Yitan

    2018-01-01

    Time series are widely used as signals in many classification/regression tasks, and missing values in them are ubiquitous. Given multiple correlated time series, how can we fill in the missing values and predict the class labels? Existing imputation methods often impose strong assumptions on the underlying data-generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing va...
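    The bidirectional idea can be shown in miniature: run a one-step recurrent predictor forward and backward over the series and average the two fills. The AR(1) predictor below is a linear stand-in for the learned RNN in BRITS:

```python
import numpy as np

def fit_ar1(series, mask):
    """Least-squares AR(1) coefficient from observed adjacent pairs."""
    prev, nxt = series[:-1], series[1:]
    ok = mask[:-1] & mask[1:]
    return float(prev[ok] @ nxt[ok] / (prev[ok] @ prev[ok]))

def impute_bidirectional(series, mask):
    """BRITS-style recipe in miniature: fill missing entries (mask==False)
    with a forward pass and a backward pass of a one-step predictor and
    average the two.  Assumes the fitted coefficient is nonzero."""
    a = fit_ar1(series, mask)
    fwd = series.copy()
    for t in range(1, len(fwd)):
        if not mask[t]:
            fwd[t] = a * fwd[t - 1]         # predict from the past
    bwd = series.copy()
    for t in range(len(bwd) - 2, -1, -1):
        if not mask[t]:
            bwd[t] = bwd[t + 1] / a         # predict from the future
    out = series.copy()
    out[~mask] = 0.5 * (fwd[~mask] + bwd[~mask])
    return out
```

BRITS replaces the fixed AR(1) cell with a learned nonlinear RNN and trains imputation and classification jointly, but the two-direction consistency idea is the same.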

  18. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  19. Attention-based Memory Selection Recurrent Network for Language Modeling

    OpenAIRE

    Liu, Da-Rong; Chuang, Shun-Po; Lee, Hung-yi

    2016-01-01

    Recurrent neural networks (RNNs) have achieved great success in language modeling. However, since RNNs have a fixed memory size, their memory cannot store all the information about the words they have seen before in the sentence, and thus useful long-term information may be ignored when predicting the next words. In this paper, we propose the Attention-based Memory Selection Recurrent Network (AMSRN), in which the model can review the information stored in the memory at each previous time ...
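    The memory-selection mechanism is standard content-based attention: score each stored hidden state against the current state, normalize with a softmax, and read out a weighted sum. A minimal sketch (dot-product scoring is an assumption here; AMSRN's exact scoring function may differ):

```python
import numpy as np

def attention_read(query, memory):
    """Soft selection from memory (T x d) given a query (d,):
    dot-product scores -> softmax weights -> weighted sum."""
    scores = memory @ query
    scores = scores - scores.max()            # shift for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ memory, weights          # context vector, attention weights
```

The context vector lets the predictor reach past the fixed-size recurrent state back to whichever stored step is most relevant.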

  20. Recurrent Syncope due to Esophageal Squamous Cell Carcinoma

    OpenAIRE

    Casini, Alessandro; Tschanz, Elisabeth; Dietrich, Pierre-Yves; Nendaz, Mathieu

    2011-01-01

    Syncope is caused by a wide variety of disorders. Recurrent syncope as a complication of malignancy is uncommon and may be difficult to diagnose and to treat. Primary neck carcinoma or metastases spreading in parapharyngeal and carotid spaces can involve the internal carotid artery and cause neurally mediated syncope with a clinical presentation like carotid sinus syndrome. We report the case of a 76-year-old man who suffered from recurrent syncope due to invasion of the right carotid sinus b...

  1. Recurrence in affective disorder

    DEFF Research Database (Denmark)

    Kessing, L V; Olsen, E W; Andersen, P K

    1999-01-01

    The risk of recurrence in affective disorder is influenced by the number of prior episodes and by a person's tendency toward recurrence. Newly developed frailty models were used to estimate the effect of the number of episodes on the rate of recurrence, taking into account individual frailty toward...... recurrence. The study base was the Danish psychiatric case register of all hospital admissions for primary affective disorder in Denmark during 1971-1993. A total of 20,350 first-admission patients were discharged with a diagnosis of major affective disorder. For women with unipolar disorder and for all...... kinds of patients with bipolar disorder, the rate of recurrence was affected by the number of prior episodes even when the effect was adjusted for individual frailty toward recurrence. No effect of episodes but a large effect of the frailty parameter was found for unipolar men. The authors concluded...

  2. Learning State Space Dynamics in Recurrent Networks

    Science.gov (United States)

    Simard, Patrice Yvon

    Fully recurrent (asymmetrical) networks can be used to learn temporal trajectories. The network is unfolded in time, and backpropagation is used to train the weights. The presence of recurrent connections creates internal states in the system which vary as a function of time. The resulting dynamics can provide interesting additional computing power, but learning is made more difficult by the existence of internal memories. This study first exhibits the properties of recurrent networks in terms of convergence when the internal states of the system are unknown. A new energy functional is provided to change the weights of the units in order to control the stability of the fixed points of the network's dynamics. The power of the resulting algorithm is illustrated with the simulation of a content addressable memory. Next, the more general case of time trajectories on a recurrent network is studied. An application is proposed in which trajectories are generated to draw letters as a function of an input. In another application of recurrent systems, a neural network models certain temporal properties observed in human callosally sectioned brains. Finally the proposed algorithm for stabilizing dynamics around fixed points is extended to one for stabilizing dynamics around time trajectories. Its effects are illustrated on a network which generates Lissajous curves.
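    Unfolding in time and backpropagating can be made concrete on a scalar recurrent map. The sketch below accumulates the weight gradient backward through the unrolled steps (a toy illustration of backpropagation through time, not the dissertation's algorithm):

```python
import numpy as np

def unroll(w, x0, T):
    """Unfold the scalar recurrent map x_{t+1} = tanh(w * x_t) for T steps."""
    xs = [x0]
    for _ in range(T):
        xs.append(np.tanh(w * xs[-1]))
    return xs

def bptt_grad(w, x0, T, target):
    """d/dw of L = (x_T - target)^2 / 2, accumulated backward in time."""
    xs = unroll(w, x0, T)
    delta = xs[-1] - target              # dL/dx_T
    grad = 0.0
    for t in range(T - 1, -1, -1):       # walk the unrolled graph backwards
        dtanh = 1 - np.tanh(w * xs[t]) ** 2
        grad += delta * dtanh * xs[t]    # this step's direct use of w
        delta = delta * dtanh * w        # propagate error to x_t
    return grad
```

The accumulated gradient matches a finite-difference check, which is the usual sanity test for an unfolded-in-time implementation.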

  3. Recurrent hamburger thyrotoxicosis

    Science.gov (United States)

    Parmar, Malvinder S.; Sturge, Cecil

    2003-01-01

    Recurrent episodes of spontaneously resolving hyperthyroidism may be caused by release of preformed hormone from the thyroid gland after it has been damaged by inflammation (recurrent silent thyroiditis) or by exogenous administration of thyroid hormone, which might be intentional or surreptitious (thyrotoxicosis factitia). Community-wide outbreaks of “hamburger thyrotoxicosis” resulting from inadvertent consumption of beef contaminated with bovine thyroid gland have been previously reported. Here we describe a single patient who experienced recurrent episodes of this phenomenon over an 11-year period and present an approach to systematically evaluating patients with recurrent hyperthyroidism. PMID:12952802

  4. Recurrent laughter-induced syncope.

    Science.gov (United States)

    Gaitatzis, Athanasios; Petzold, Axel

    2012-07-01

    Syncope is a common presenting complaint in Neurology clinics or Emergency departments, but its causes are sometimes difficult to diagnose. Apart from vasovagal attacks, other benign, neurally mediated syncopes include "situational" syncopes, which occur after urination, coughing, swallowing, or defecation. A healthy 42-year-old male patient presented to the neurology clinic with a long history of faints triggered by spontaneous laughter, especially after funny jokes. Physical and neurological examination, and electroencephalography and magnetic resonance imaging were unremarkable. There was no evidence to suggest cardiogenic causes, epilepsy, or cataplexy and a diagnosis of laughing syncope was made. Laughter-induced syncope is usually a single event in the majority of cases, but may present as recurrent attacks as in our case. Some cases occur in association with underlying neurological conditions. Prognosis is good in the case of neurally mediated attacks. Laughter may not be recognized by physicians as a cause of syncope, which may lead to unnecessary investigations or misdiagnosis, and affect patients' quality of life.

  5. Recurrent Takotsubo Cardiomyopathy Related to Recurrent Thyrotoxicosis.

    Science.gov (United States)

    Patel, Keval; Griffing, George T; Hauptman, Paul J; Stolker, Joshua M

    2016-04-01

    Takotsubo cardiomyopathy, or transient left ventricular apical ballooning syndrome, is characterized by acute left ventricular dysfunction caused by transient wall-motion abnormalities of the left ventricular apex and mid ventricle in the absence of obstructive coronary artery disease. Recurrent episodes are rare but have been reported, and several cases of takotsubo cardiomyopathy have been described in the presence of hyperthyroidism. We report the case of a 55-year-old woman who had recurrent takotsubo cardiomyopathy, documented by repeat coronary angiography and evaluations of left ventricular function, in the presence of recurrent hyperthyroidism related to Graves disease. After both episodes, the patient's left ventricular function returned to normal when her thyroid function normalized. These findings suggest a possible role of thyroid-hormone excess in the pathophysiology of some patients who have takotsubo cardiomyopathy.

  6. An interpretable LSTM neural network for autoregressive exogenous model

    OpenAIRE

    Guo, Tian; Lin, Tao; Lu, Yao

    2018-01-01

    In this paper, we propose an interpretable LSTM recurrent neural network, i.e., a multi-variable LSTM for time series with exogenous variables. Currently, widely used attention mechanisms in recurrent neural networks mostly focus on the temporal aspect of the data and fall short of characterizing variable importance. To this end, our multi-variable LSTM equipped with tensorized hidden states is developed to learn variable specific representations, which give rise to both temporal and variable lev...
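
    The multi-variable LSTM described above builds on the standard LSTM recurrence. As a point of reference only (not the authors' tensorized variant), here is a minimal NumPy sketch of one standard LSTM cell update; all weight values and dimensions are arbitrary and illustrative:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell (gates stacked as [i, f, o, g])."""
    z = W @ x + U @ h + b                 # pre-activations, shape (4H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))     # output gate
    g = np.tanh(z[3*H:])                  # candidate cell state
    c_new = f * c + i * g                 # cell state recurrence
    h_new = o * np.tanh(c_new)            # hidden state output
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 4                               # input and hidden sizes (arbitrary)
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for x in rng.normal(size=(5, D)):         # run a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The gating structure (forget gate scaling the previous cell state, input gate scaling the candidate) is what gives the LSTM its long-term memory; attention and tensorization, as in the paper, are layered on top of this recurrence.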

  7. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  8. Persistent and recurrent hyperparathyroidism.

    Science.gov (United States)

    Guerin, Carole; Paladino, Nunzia Cinzia; Lowery, Aoife; Castinetti, Fréderic; Taieb, David; Sebag, Fréderic

    2017-06-01

    Despite remarkable progress in imaging modalities and surgical management, persistence or recurrence of primary hyperparathyroidism (PHPT) still occurs in 2.5-5% of cases of PHPT. The aim of this review is to describe the management of persistent and recurrent hyperparathyroidism. A literature search was performed on MEDLINE using the search terms "recurrent" or "persistent" and "hyperparathyroidism" within the past 10 years. We also searched the reference lists of articles identified by this search strategy and selected those we judged relevant. Before considering reoperation, the surgeon must confirm the diagnosis of PHPT. Then, the patient must be evaluated with new imaging modalities. A single adenoma is found in 68% of cases, multiglandular disease in 28%, and parathyroid carcinoma in 3%. Other causes (<1%) include parathyromatosis and graft recurrence. The surgeon must balance the benefits against the risks of a reoperation (permanent hypocalcemia and recurrent laryngeal nerve palsy). If surgery is necessary, a focused approach can be considered in cases of significant imaging foci, but in the case of multiglandular disease, a bilateral neck exploration could be necessary. Patients with multiple endocrine neoplasia syndromes are at high risk of recurrence and should be managed with regard to their hereditary pathology. The cure rate of persistent-PHPT or recurrent-PHPT in expert centers is estimated at 93 to 97%. After confirming the diagnosis of PHPT, patients with persistent-PHPT and recurrent-PHPT should be managed in an expert center with all dedicated competencies.

  9. RECURRENT CROUP IN CHILDREN

    Directory of Open Access Journals (Sweden)

    S. L. Piskunova

    2014-01-01

    Full Text Available The article presents the results of examination of 1849 children admitted to the children's infectious diseases hospital of Vladivostok with the clinical picture of croup of viral etiology. The clinical features of primary and recurrent croup are described. The frequency of recurrent croup in Vladivostok is 8%. Children with recurrent croup had a burdened premorbid background as well as persistent herpetic infections (cytomegalovirus infection in 42.9% of cases, or cytomegalovirus infection in combination with herpes simplex virus 1). The frequency of croup rose substantially during periods of influenza epidemics.

  10. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  11. Classification of behavior using unsupervised temporal neural networks

    International Nuclear Information System (INIS)

    Adair, K.L.

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering sequential patterns as the sequences are generated. The applicability of the architecture is illustrated through a facility monitoring problem

  12. Tutorial on neural network applications in high energy physics: A 1992 perspective

    International Nuclear Information System (INIS)

    Denby, B.

    1992-04-01

    Feed forward and recurrent neural networks are introduced and related to standard data analysis tools. Tips are given on applications of neural nets to various areas of high energy physics. A review of applications within high energy physics and a summary of neural net hardware status are given

  13. Fast convergence of spike sequences to periodic patterns in recurrent networks

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.

    2002-01-01

    The dynamical attractors are thought to underlie many biological functions of recurrent neural networks. Here we show that stable periodic spike sequences with precise timings are the attractors of the spiking dynamics of recurrent neural networks with global inhibition. Almost all spike sequences converge within a finite number of transient spikes to these attractors. The convergence is fast, especially when the global inhibition is strong. These results support the possibility that precise spatiotemporal sequences of spikes are useful for information encoding and processing in biological neural networks
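
    A drastically simplified illustration of the convergence result above: in a deterministic network where global inhibition is so strong that only one neuron fires per step (a winner-take-all caricature, with hypothetical random weights, not the paper's spiking model), every spike sequence must enter a periodic cycle after a finite transient, because the state space is finite:

```python
import numpy as np

def simulate_wta(W, start, steps=100):
    """Winner-take-all dynamics: at each step, the neuron receiving the
    largest input from the previous spiker fires; global inhibition
    silences all others. Returns the sequence of firing neurons."""
    seq = [start]
    for _ in range(steps):
        seq.append(int(np.argmax(W[:, seq[-1]])))
    return seq

def find_cycle(seq):
    """The first repeated state marks entry into the periodic attractor."""
    seen = {}
    for t, s in enumerate(seq):
        if s in seen:
            return seen[s], t - seen[s]   # (transient length, period)
        seen[s] = t
    return None

rng = np.random.default_rng(1)
W = rng.normal(size=(6, 6))               # random recurrent weights (assumed)
transient, period = find_cycle(simulate_wta(W, start=0))
print(transient, period)
```

With 6 neurons the sequence must repeat within 7 spikes, so the transient is short; the paper's stronger claim is that in the continuous-time spiking model the attracting sequences have precise spike timings as well.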

  14. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  15. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to incorporate neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem™ (SPW™), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  16. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  17. Hyperhomocysteinemia in Recurrent Miscarriage

    International Nuclear Information System (INIS)

    Gaber, Kh.R.; Farag, M.K.; Soliman, S.Et.; Abd Al-Kaderm, M.A.

    2008-01-01

    Objective: An elevated total plasma homocysteine level has been suggested as a possible risk factor in women suffering from recurrent pregnancy loss. The current study was undertaken to assess the association between homocysteine, folate, cobalamin (vitamin B12) and the risk of recurrent pregnancy loss. Design: Case-control study. Materials and Methods: The study included 57 non-pregnant Egyptian women. They were classified according to their obstetric history into 2 groups: 32 cases with at least two consecutive miscarriages (Study group), and 25 cases with normal obstetric history (Control group). All cases were tested for plasma total homocysteine, serum folate and cobalamin (vitamin B12). Results: The fasting total homocysteine was significantly higher in the study group as compared to the control group, while the median concentrations of the vitamins studied were significantly lower in women of the study group as compared to the controls. Elevated homocysteine and reduced vitamin B12 can be considered risk factors for recurrent miscarriage with odds ratios (OR) and 95% confidence intervals (95% CI) of 1.839 (1.286, 2.63) and 1.993 (1.346, 2.951) respectively in the group of recurrent miscarriages. The OR (95% CI) in the study population for low serum folate concentrations was 1.23 (0.776, 2.256). Conclusion: Elevated homocysteine and reduced serum vitamin B12 are risk factors for recurrent miscarriage. Low serum folate did not appear to be a risk factor for recurrent miscarriage. Testing for homocysteine levels in women suffering from unexplained recurrent miscarriage and pre-conceptional supplementation with vitamin B12 might be beneficial to improve pregnancy outcome

  18. Hyperhomocysteinemia in Recurrent Miscarriage

    Energy Technology Data Exchange (ETDEWEB)

    Gaber, Kh R; Farag, M K [Prenatal Diagnosis and Fetal Medicine Department, National Research Centre, Dokki, Giza (Egypt); Soliman, S Et [Radioisotope Department, Nuclear Research Centre, Atomic Energy Authority, Cairo (Egypt); Abd Al-Kaderm, M A [Obstetrics and Gynecology Department, Faculty of Medicine, Cairo University, Cairo (Egypt)

    2008-07-01

    Objective: An elevated total plasma homocysteine level has been suggested as a possible risk factor in women suffering from recurrent pregnancy loss. The current study was undertaken to assess the association between homocysteine, folate, cobalamin (vitamin B12) and the risk of recurrent pregnancy loss. Design: Case-control study. Materials and Methods: The study included 57 non-pregnant Egyptian women. They were classified according to their obstetric history into 2 groups: 32 cases with at least two consecutive miscarriages (Study group), and 25 cases with normal obstetric history (Control group). All cases were tested for plasma total homocysteine, serum folate and cobalamin (vitamin B12). Results: The fasting total homocysteine was significantly higher in the study group as compared to the control group, while the median concentrations of the vitamins studied were significantly lower in women of the study group as compared to the controls. Elevated homocysteine and reduced vitamin B12 can be considered risk factors for recurrent miscarriage with odds ratios (OR) and 95% confidence intervals (95% CI) of 1.839 (1.286, 2.63) and 1.993 (1.346, 2.951) respectively in the group of recurrent miscarriages. The OR (95% CI) in the study population for low serum folate concentrations was 1.23 (0.776, 2.256). Conclusion: Elevated homocysteine and reduced serum vitamin B12 are risk factors for recurrent miscarriage. Low serum folate did not appear to be a risk factor for recurrent miscarriage. Testing for homocysteine levels in women suffering from unexplained recurrent miscarriage and pre-conceptional supplementation with vitamin B12 might be beneficial to improve pregnancy outcome.

  19. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Ideally, there are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle produced a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word "artificial" in that definition refers to computer programs that perform calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  20. Evolvable synthetic neural system

    Science.gov (United States)

    Curtis, Steven A. (Inventor)

    2009-01-01

    An evolvable synthetic neural system includes an evolvable neural interface operably coupled to at least one neural basis function. Each neural basis function includes an evolvable neural interface operably coupled to a heuristic neural system to perform high-level functions and an autonomic neural system to perform low-level functions. In some embodiments, the evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy.

  1. Recurrences of Bell's palsy.

    Science.gov (United States)

    Cirpaciu, D; Goanta, C M; Cirpaciu, M D

    2014-01-01

    Bell's palsy is known as the most common cause of facial paralysis, determined by the acute onset of lower motor neuron weakness of the facial nerve with no detectable cause. With a lifetime risk of 1 in 60 and an annual incidence of 11-40/100,000 population, the condition resolves completely in around 71% of the untreated cases. Clinical trials performed for Bell's palsy have reported some recurrences, ipsilateral or contralateral to the side affected in the primary episode of facial palsy. Only limited data are found in the literature. Melkersson-Rosenthal is a rare neuromucocutaneous syndrome characterized by recurrent facial paralysis, fissured tongue (lingua plicata), and orofacial edema. We attempted to analyze some clinical and epidemiologic aspects of recurrent idiopathic palsy, and to develop relevant correlations between the existing data in the literature and those obtained in this study. This is a retrospective study carried out over a 10-year period for adults and a five-year period for children. A total of 185 patients aged between 4 and 70 years were analyzed; 136 of them were adults and 49 were children. 22 of the 185 patients with Bell's palsy (12%) had a recurrent partial or complete facial paralysis with one to six episodes of palsy. From this group of 22 cases, 5 patients were diagnosed with Melkersson-Rosenthal syndrome. The patients' ages ranged between 4 and 70 years, with a mean age of 27.6 years. In the group studied, fifteen patients (68%) were women and seven were men. The majority of patients in our group with more than two facial palsy episodes had at least one episode on the contralateral side. Our study found a significant incidence of recurrences of idiopathic facial palsy. Recurrent idiopathic facial palsy and Melkersson-Rosenthal syndrome are diagnosed more often in young females. Recurrence is more likely to occur in the first two years from the onset, which leads to the conclusion that we should have a follow-up of patients

  2. Recurrent parotitis in children

    Directory of Open Access Journals (Sweden)

    Bhattarai M

    2006-01-01

    Full Text Available Recurrent parotitis is an uncommon condition in children. Its etiological factors have not been established to date, although genetic inheritance, local autoimmune manifestation, allergy, viral infection and immunodeficiency have been suggested as causes. The exact management of this disorder is not yet standardized, but a conservative approach is preferred, and all affected children should be screened for Sjögren's syndrome and immune deficiency, including human immunodeficiency virus. We report a 12-year-old girl who presented with 12 episodes of painless recurrent swelling of both parotid glands over the past 3 years.

  3. Recurrent atrial myxoma.

    Science.gov (United States)

    Macarie, C; Stoica, E; Chioncel, O; Carp, A; Gherghiceanu, D; Stiru, O; Zarma, L; Herlea, V

    2004-01-01

    We have chosen this case of sporadic atrial myxoma for our presentation because it had a particular evolution, with recurrence at 8 years after surgical excision (echocardiography was performed every year), and a particular mode of diagnosis: the recurrence was found at echocardiographic follow-up while the patient was asymptomatic. This presentation, together with the review of the literature included in the article, emphasizes the importance of careful postoperative follow-up of these patients and the existence of some particular aspects of the evolution and symptomatology of recurrent atrial myxoma.

  4. Real-Time Inverse Optimal Neural Control for Image Based Visual Servoing with Nonholonomic Mobile Robots

    Directory of Open Access Journals (Sweden)

    Carlos López-Franco

    2015-01-01

    Full Text Available We present an inverse optimal neural controller for a nonholonomic mobile robot with parameter uncertainties and unknown external disturbances. The neural controller is based on a discrete-time recurrent high order neural network (RHONN trained with an extended Kalman filter. The reference velocities for the neural controller are obtained with a visual sensor. The effectiveness of the proposed approach is tested by simulations and real-time experiments.

  5. Lung Cancer Indicators Recurrence

    Science.gov (United States)

    This study describes prognostic factors for lung cancer spread and recurrence, as well as subsequent risk of death from the disease. The investigators observed that regardless of cancer stage, grade, or type of lung cancer, patients in the study were more

  6. Recurrent diabetic ketoacidosis

    DEFF Research Database (Denmark)

    Skinner, T. Chas

    2002-01-01

    Longitudinal studies indicate that 20% of paediatric patients account for 80% of all admissions for diabetic ketoacidosis (DKA). The frequency of DKA peaks during adolescence and, although individuals generally go into remission, they may continue to have bouts of recurrent DKA in adulthood. The ...

  7. Recurrent infantile digital fibromatosis

    African Journals Online (AJOL)

    We present a case of an 8-year-old-boy with recurrent infantile digital fibromatosis (IDF) who presented with new ... Keywords: fibrous tumors, inclusion body fibromatosis, infantile digital fibromatosis, spindle cells, Reye tumor .... watch-and-wait strategy for patients with histologically confirmed IDF nodules that do not cause ...

  8. On Solving Linear Recurrences

    Science.gov (United States)

    Dobbs, David E.

    2013-01-01

    A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n to infinity." This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
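
    The note above concerns first-order linear recurrences with constant coefficients, x_{k+1} = a·x_k + b. The closed-form solution and its limiting value as n → ∞ (the fixed point b/(1 − a) when |a| < 1) can be checked directly with a few lines of Python:

```python
def solve_linear_recurrence(a, b, x0, n):
    """Closed form of x_{k+1} = a*x_k + b:
    x_n = a**n * x0 + b*(1 - a**n)/(1 - a)  for a != 1, else x0 + n*b."""
    if a == 1:
        return x0 + n * b
    return a**n * x0 + b * (1 - a**n) / (1 - a)

# Check the closed form against direct iteration; for |a| < 1 both
# approach the fixed point b/(1 - a).
a, b, x0 = 0.5, 3.0, 10.0
x = x0
for _ in range(40):
    x = a * x + b
print(round(x, 6), round(solve_linear_recurrence(a, b, x0, 40), 6))
# prints: 6.0 6.0  (the fixed point is b/(1-a) = 3.0/0.5 = 6.0)
```

The a == 1 branch covers the degenerate arithmetic-progression case, where no fixed point exists unless b = 0.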

  9. Modeling Broadband Microwave Structures by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Otevrel

    2004-06-01

    Full Text Available The paper describes the exploitation of feed-forward neural networks and recurrent neural networks for replacing full-wave numerical models of microwave structures in complex microwave design tools. Building a neural model, attention is turned to the modeling accuracy and to the efficiency of building a model. Dealing with the accuracy, we describe a method of increasing it by successively completing a training set. Neural models are mutually compared in order to highlight their advantages and disadvantages. As a reference model for comparisons, approximations based on standard cubic splines are used. Neural models are used to replace both the time-domain numeric models and the frequency-domain ones.

  10. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    are examined. The models are separated into three groups representing input/output descriptions as well as state space descriptions: - Models, where all in- and outputs are measurable (static networks). - Models, where some inputs are non-measurable (recurrent networks). - Models, where some in- and some...... outputs are non-measurable (recurrent networks with incomplete state information). The three groups are ordered in increasing complexity, and for each group it is shown how to solve the problems concerning training and application of the specific model type. Of particular interest are the model types...... Kalmann filter) representing state space description. The potentials of neural networks for control of non-linear processes are also examined, focusing on three different groups of control concepts, all considered as generalizations of known linear control concepts to handle also non-linear processes...

  11. Coping with Fear of Recurrence

    Science.gov (United States)

    Learn ways to manage the fear of recurrence while adjusting to life after finishing treatment and preparing for the future.

  12. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that takes this probabilistic nature into account. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
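
    A software sketch of the probabilistic-firing idea (not the paper's FPGA design): the weighted input sets a firing probability, which a digital implementation would realize by comparing against a pseudo-random number each clock cycle. All parameter values here are arbitrary and illustrative:

```python
import math
import random

def stochastic_neuron(inputs, weights, threshold=0.0, trials=10000, seed=42):
    """Probabilistic firing: the weighted drive sets a firing probability,
    realized by comparison with a pseudo-random number (as an LFSR would do
    in a digital implementation). Returns the empirical firing rate."""
    rng = random.Random(seed)
    drive = sum(w * x for w, x in zip(weights, inputs)) - threshold
    p = 1 / (1 + math.exp(-drive))        # logistic activation probability
    fires = sum(rng.random() < p for _ in range(trials))
    return fires / trials

# drive = 0.8*1 - 0.5*0 + 0.4*1 - 0.2 = 1.0, so the firing rate should
# be close to sigmoid(1.0) ≈ 0.73
rate = stochastic_neuron([1, 0, 1], [0.8, -0.5, 0.4], threshold=0.2)
print(round(rate, 2))
```

Averaging spikes over many cycles recovers an analog-valued activation from purely digital one-bit events, which is what makes a massively parallel FPGA implementation attractive.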

  13. Immunomodulators to treat recurrent miscarriage

    NARCIS (Netherlands)

    Prins, Jelmer R.; Kieffer, Tom E.C.; Scherjon, Sicco A.

    2014-01-01

    Recurrent miscarriage is a reproductive disorder affecting many couples. Although several factors are associated with recurrent miscarriage, in more than 50% of the cases the cause is unknown. Maladaptation of the maternal immune system is associated with recurrent miscarriage and could explain part

  14. Recurrent giant juvenile fibroadenoma

    Directory of Open Access Journals (Sweden)

    Kathryn S. King

    2017-11-01

    Full Text Available Breast masses in children, though rare, present a difficult clinical challenge as they can represent a wide variety of entities from benign fibroadenomas to phyllodes tumors. Rapidly growing or recurrent masses can be particularly concerning to patients, families and physicians alike. Clinical examination and conventional imaging modalities are not efficacious in distinguishing between different tumor types and surgical excision is often recommended for both final diagnosis and for treatment of large or rapidly growing masses. While surgical excision can result in significant long-term deformity of the breast there are some surgical techniques that can be used to limit deformity and/or aid in future reconstruction. Here we present a case of recurrent giant juvenile fibroadenoma with a review of the clinical presentation, diagnostic tools and treatment options.

  15. Recurrent Partial Words

    Directory of Open Access Journals (Sweden)

    Francine Blanchet-Sadri

    2011-08-01

    Full Text Available Partial words are sequences over a finite alphabet that may contain wildcard symbols, called holes, which match or are compatible with all letters; partial words without holes are said to be full words (or simply words). Given an infinite partial word w, the number of distinct full words over the alphabet that are compatible with factors of w of length n, called subwords of w, gives a measure of complexity of infinite partial words, the so-called subword complexity. This measure is of particular interest because we can construct partial words with subword complexities not achievable by full words. In this paper, we consider the notion of recurrence over infinite partial words, that is, we study whether all of the finite subwords of a given infinite partial word appear infinitely often, and we establish connections between subword complexity and recurrence in this more general framework.
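
    The subword notion can be made concrete in a few lines of Python: a hole (written '?' here, an arbitrary choice of symbol) is compatible with every letter, so each factor of a partial word expands to a set of full words:

```python
from itertools import product

def subwords(w, n, alphabet="ab"):
    """Full words of length n compatible with some factor of the partial
    word w, where '?' is a hole matching every letter of the alphabet."""
    result = set()
    for i in range(len(w) - n + 1):
        factor = w[i:i + n]
        # expand each hole to every letter of the alphabet
        choices = [alphabet if c == "?" else c for c in factor]
        result.update("".join(t) for t in product(*choices))
    return result

w = "ab?a"                      # a finite partial word with one hole
print(sorted(subwords(w, 2)))   # prints: ['aa', 'ab', 'ba', 'bb']
```

Note that this four-letter partial word already has all 4 subwords of length 2, illustrating how holes can push subword complexity beyond what a full word of the same length achieves.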

  16. Recurrent epileptic Wernicke aphasia.

    Science.gov (United States)

    Sahaya, Kinshuk; Dhand, Upinder K; Goyal, Munish K; Soni, Chetan R; Sahota, Pradeep K

    2010-04-15

    We report a patient with recurrent epileptic Wernicke aphasia who prior to this presentation, had been misdiagnosed as transient ischemic attacks for several years. This case report emphasizes the consideration of epileptic nature of aphasia when a clear alternate etiology is unavailable, even when EEG fails to show a clear ictal pattern. We also present a brief discussion of previously reported ictal aphasias. Copyright 2010 Elsevier B.V. All rights reserved.

  17. Training Recurrent Networks

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    Training recurrent networks is generally believed to be a difficult task. Excessive training times and lack of convergence to an acceptable solution are frequently reported. In this paper we seek to explain the reason for this from a numerical point of view and show how to avoid problems when training. In particular we investigate ill-conditioning, the need for and effect of regularization, and illustrate the superiority of second-order methods for training...

  18. Recurrent Tricuspid Insufficiency

    Science.gov (United States)

    Kara, Ibrahim; Koksal, Cengiz; Cakalagaoglu, Canturk; Sahin, Muslum; Yanartas, Mehmet; Ay, Yasin; Demir, Serdar

    2013-01-01

    This study compares the medium-term results of De Vega, modified De Vega, and ring annuloplasty techniques for the correction of tricuspid insufficiency and investigates the risk factors for recurrent grades 3 and 4 tricuspid insufficiency after repair. In our clinic, 93 patients with functional tricuspid insufficiency underwent surgical tricuspid repair from May 2007 through October 2010. The study was retrospective, and all the data pertaining to the patients were retrieved from hospital records. Functional capacity, recurrent tricuspid insufficiency, and risk factors aggravating the insufficiency were analyzed for each patient. In the medium term (25.4 ± 10.3 mo), the rates of grades 3 and 4 tricuspid insufficiency in the De Vega, modified De Vega, and ring annuloplasty groups were 31%, 23.1%, and 6.1%, respectively. Logistic regression analysis revealed that chronic obstructive pulmonary disease and left ventricular dysfunction (reduced ejection fraction) were risk factors for recurrent tricuspid insufficiency. Medium-term survival was 90.6% for the De Vega group, 96.3% for the modified De Vega group, and 97.1% for the ring annuloplasty group. Ring annuloplasty provided the best relief from recurrent tricuspid insufficiency when compared with De Vega annuloplasty. Modified De Vega annuloplasty might be a suitable alternative to ring annuloplasty when rings are not available. PMID:23466680

  19. Multiplex Recurrence Networks

    Science.gov (United States)

    Eroglu, Deniz; Marwan, Norbert

    2017-04-01

    The complex nature of a variety of phenomena in physical, biological, or earth sciences is driven by a large number of degrees of freedom which are strongly interconnected. Although the evolution of such systems is described by multivariate time series (MTS), so far research has mostly focused on analyzing these components one by one. Recurrence-based analyses are powerful methods for understanding the underlying dynamics of a dynamical system and have been used for many successful applications, including examples from earth science, economics, or chemical reactions. The backbone of these techniques is creating the phase space of the system. However, increasing the dimension of a system requires increasing the length of the time series in order to get significant and reliable results. This requirement is one of the challenges in many disciplines, in particular in palaeoclimate; thus, it is not easy to create a phase space from measured MTS due to the limited number of available observations (samples). To overcome this problem, we suggest creating recurrence networks from each component of the system and combining them into a multiplex network structure, the multiplex recurrence network (MRN). We test the MRN by using prototypical mathematical models and demonstrate its use by studying high-dimensional palaeoclimate dynamics derived from pollen data from Bear Lake (Utah, US). By using the MRN, we can distinguish typical climate transition events, e.g., those between Marine Isotope Stages.

  20. Recurrent Syncope due to Esophageal Squamous Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    A. Casini

    2011-09-01

    Full Text Available Syncope is caused by a wide variety of disorders. Recurrent syncope as a complication of malignancy is uncommon and may be difficult to diagnose and to treat. A primary neck carcinoma or metastases spreading in the parapharyngeal and carotid spaces can involve the internal carotid artery and cause neurally mediated syncope with a clinical presentation like that of carotid sinus syndrome. We report the case of a 76-year-old man who suffered from recurrent syncope due to invasion of the right carotid sinus by metastases of a carcinoma of the esophagus, successfully treated by radiotherapy. In such cases, surgery, chemotherapy or radiotherapy can be performed. Because syncope may be an early sign of neck or cervical cancer, the diagnostic approach to syncope in patients with a past history of cancer should include the possibility of neck tumor recurrence or metastasis, and an oncologic workup should be considered.

  1. Normalizing tweets with edit scripts and recurrent neural embeddings

    NARCIS (Netherlands)

    Chrupala, Grzegorz; Toutanova, Kristina; Wu, Hua

    2014-01-01

    Tweets often contain a large proportion of abbreviations, alternative spellings, novel words and other non-canonical language. These features are problematic for standard language analysis tools and it can be desirable to convert them to canonical form. We propose a novel text normalization model

  2. Recurrent cortico-cortical Interactions in neural disease

    NARCIS (Netherlands)

    Lamme, V.A.F.; Pletson, J.E.

    2005-01-01

    The cerebral cortex consists of a large number of areas, each subserving a more or less distinct function. This view has its roots in the early work of Penfield, and today is reflected in the body of functional MRI literature describing the regions of the brain that are activated during particular

  3. "FORCE" learning in recurrent neural networks as data assimilation

    Science.gov (United States)

    Duane, Gregory S.

    2017-12-01

    It is shown that the "FORCE" algorithm for learning in arbitrarily connected networks of simple neuronal units can be cast as a Kalman Filter, with a particular state-dependent form for the background error covariances. The resulting interpretation has implications for initialization of the learning algorithm, leads to an extension to include interactions between the weight updates for different neurons, and can represent relationships within groups of multiple target output signals.
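The Kalman-filter reading described in the abstract is easiest to see in the recursive least-squares (RLS) update at the heart of FORCE, where the matrix P plays the role of the state-dependent background error covariance. A minimal sketch of the readout-weight update for a single output (variable names and sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                       # reservoir size (illustrative)
alpha = 1.0                  # regularization; P(0) = I / alpha
P = np.eye(N) / alpha        # inverse correlation matrix (covariance analogue)
w = np.zeros(N)              # linear readout weights

def force_step(r, target):
    """One RLS/FORCE update of the readout weights given reservoir state r."""
    global w, P
    z = w @ r                # current readout before the update
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)  # Kalman-like gain vector
    P -= np.outer(k, Pr)     # rank-1 downdate of the covariance analogue
    e = z - target           # error before the update
    w -= e * k               # weight correction proportional to gain and error
    return z
```

With noiseless targets that are a fixed linear function of the reservoir state, the weights converge to that function after a modest number of updates, which is the rapid error suppression FORCE relies on.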

  4. Active Control of Complex Systems via Dynamic (Recurrent) Neural Networks

    Science.gov (United States)

    1992-05-30

    deviations is made available for use by GARS. [OCR-garbled excerpt: a GARS search flowchart (Begin GARS Search; Choose New Point; Divide Acceleration Factor) and the missile equations of motion f1 through f6 (Eqs. 2:1-2:6), written in terms of the velocity V, flight-path angle γ, mass m, dynamic pressure q, and the drag and lift coefficients C_D and C_L, with state vector X = [X1, ..., X6] and f having the components f1, ..., f6 from Eqs. 2:1-2:6. The C0 term (outside the integrand) introduces the final kinetic energy of the ...]

  5. Using recurrent neural networks to predict colorectal cancer among patients

    NARCIS (Netherlands)

    Amirkhan, Ryan; Hoogendoorn, Mark; Numans, Mattijs E.; Moons, Leon

    2018-01-01

    Development of predictive models from Electronic Medical Records (EMRs) is a far from trivial task. Especially the temporal nature of health records is an aspect that is often ignored yet of utmost importance. Additionally, data is extremely sparse. Previous research has shown that the

  6. Low-power Appliance Recognition using Recurrent Neural Networks

    NARCIS (Netherlands)

    Rizky Pratama, Azkario; Simanjuntak, Frans Juanda; Lazovik, Aliaksandr; Aiello, Marco

    2018-01-01

    Indoor energy consumption can be understood by breaking overall power consumption down into individual components and appliance activations. The clas- sification of components of energy usage is known as load disaggregation or ap- pliance recognition. Most of the previous efforts address the

  7. Recurrent Neural Network Modeling of Nearshore Sandbar Behavior

    NARCIS (Netherlands)

    Pape, L.; Ruessink, B.G.; Wiering, M.A.; Turner, I.L.

    2007-01-01

    The temporal evolution of nearshore sandbars (alongshore ridges of sand fringing coasts in water depths less than 10 m and of paramount importance for coastal safety) is commonly predicted using process-based models. These models are autoregressive and require offshore wave characteristics as

  8. Recurrent neural network modeling of nearshore sandbar behavior

    NARCIS (Netherlands)

    Pape, Leo; Ruessink, B.G.; Wiering, Marco A.; Turner, Ian L.

    2007-01-01

    The temporal evolution of nearshore sandbars (alongshore ridges of sand fringing coasts in water depths less than 10 m and of paramount importance for coastal safety) is commonly predicted using process-based models. These models are autoregressive and require offshore wave characteristics as input,

  9. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction to neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  10. Predictors of recurrence in pheochromocytoma.

    Science.gov (United States)

    Press, Danielle; Akyuz, Muhammet; Dural, Cem; Aliyev, Shamil; Monteiro, Rosebel; Mino, Jeff; Mitchell, Jamie; Hamrahian, Amir; Siperstein, Allan; Berber, Eren

    2014-12-01

    The recurrence rate of pheochromocytoma after adrenalectomy is 6.5-16.5%. This study aims to identify predictors of recurrence and optimal biochemical testing and imaging for detecting the recurrence of pheochromocytoma. In this retrospective study we reviewed all patients who underwent adrenalectomy for pheochromocytoma during a 14-year period at a single institution. One hundred thirty-five patients had adrenalectomy for pheochromocytoma. Eight patients (6%) developed recurrent disease. The median time from initial operation to diagnosis of recurrence was 35 months. On multivariate analysis, tumor size >5 cm was an independent predictor of recurrence. One patient with recurrence died, 4 had stable disease, 2 had progression of disease, and 1 was cured. Recurrence was diagnosed by increases in plasma and/or urinary metanephrines and positive imaging in 6 patients (75%), and by positive imaging and normal biochemical levels in 2 patients (25%). Patients with large tumors (>5 cm) should be followed vigilantly for recurrence. Because 25% of patients with recurrence had normal biochemical levels, we recommend routine imaging and testing of plasma or urinary metanephrines for prompt diagnosis of recurrence. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. A new neural observer for an anaerobic bioreactor.

    Science.gov (United States)

    Belmonte-Izquierdo, R; Carlos-Hernandez, S; Sanchez, E N

    2010-02-01

    In this paper, a recurrent high order neural observer (RHONO) for anaerobic processes is proposed. The main objective is to estimate variables of methanogenesis: biomass, substrate and inorganic carbon in a completely stirred tank reactor (CSTR). The recurrent high order neural network (RHONN) structure is based on the hyperbolic tangent as activation function. The learning algorithm is based on an extended Kalman filter (EKF). The applicability of the proposed scheme is illustrated via simulation. A validation using real data from a lab scale process is included. Thus, this observer can be successfully implemented for control purposes.

  12. Recurrent cerebral thrombosis

    International Nuclear Information System (INIS)

    Iwamoto, Toshihiko; Abe, Shin-e; Kubo, Hideki; Hanyu, Haruo; Takasaki, Masaru

    1992-01-01

    Neuroradiological techniques were used to elucidate the pathophysiology of recurrent cerebral thrombosis. Twenty-two patients with cerebral thrombosis who suffered a second attack under stable conditions more than 22 days after the initial stroke were studied. Hypertension, diabetes mellitus, and hypercholesterolemia were also seen in 20, 8, and 12 patients, respectively. The patients were divided into three groups according to their symptoms: (I) symptoms differed between the first and second strokes (n=12); (II) initial symptoms suddenly deteriorated (n=6); and (III) symptoms occurring in groups I and II were seen (n=4). In group I, contralateral hemiparesis or suprabulbar palsy was often associated with the initial hemiparesis. The time of recurrent stroke varied from 4 months to 9 years. CT and MRI showed not only lacunae in both hemispheres, but also deep white-matter ischemia of the centrum semiovale. In group II, hemiparesis or a visual field defect deteriorated early after the initial stroke. In addition, neuroimaging revealed that infarction in the posterior cerebral artery territory had progressed on the contralateral side, or that a white-matter lesion in the middle cerebral artery territory had enlarged despite only a small lesion in the left cerebral hemisphere. All patients in group III had deterioration of right hemiparesis associated with aphasia. CT, MRI, SPECT, and angiography indicated deep white-matter ischemia caused by main-trunk lesions in the left hemisphere. Group III seemed to be equivalent to group II, except for laterality of the lesion. Neuroradiological assessment of the initial stroke may help to predict the mode of recurrence, although the pathophysiology of cerebral thrombosis is complicated and varies from patient to patient. (N.K.)

  13. Stress and recurrent miscarriage.

    Science.gov (United States)

    Craig, M

    2001-09-01

    Our current understanding of the role of stress in unexplained recurrent miscarriages comes from two different research strategies. The majority of research has examined the role of psychological support within this patient population. This support has been provided in a number of ways, ranging from weekly interviews with a psychiatrist or gynaecologist and/or visual reassurance in the form of ultrasound scans. A comparison of psychological support with an absence of such intervention has found differences in successful pregnancy outcome as great as 84% versus 26%, respectively. It has been assumed that psychological support reduces the miscarriage rate by reducing "stress" within this patient population. In addition it provides indirect support for a role of stress in the aetiology of unexplained recurrent miscarriage. Other studies have attempted to directly assess the effect of personality characteristics on miscarriage rate; these studies have yielded conflicting results. The mechanism by which stress may be causal in the aetiology of unexplained recurrent miscarriage has not been examined in humans. Animal studies, however, have found that psychological distress can alter immune parameters that may be intricately involved with implantation. These parameters include an elevation of the "abortive" cytokine TNF-α and a reduction in the "anti-abortive" cytokine TGF-β2. Cells that are involved in the release of TNF-α at the feto-maternal interface include T cells, macrophages and mast cells. Mechanisms through which stress may act on these cells are explored and an integrated model is postulated.

  14. Recurrent Bilateral Focal Myositis.

    Science.gov (United States)

    Nagafuchi, Hiroko; Nakano, Hiromasa; Ooka, Seido; Takakuwa, Yukiko; Yamada, Hidehiro; Tadokoro, Mamoru; Shimojo, Sadatomo; Ozaki, Shoichi

    This report describes a rare case of recurrent bilateral focal myositis and its successful treatment with methotrexate. A 38-year-old man presented with myalgia of the right gastrocnemius in May 2005. Magnetic resonance imaging showed very high signal intensity in the right gastrocnemius on short-tau inversion recovery images. A muscle biopsy revealed inflammatory CD4+ cell-dominant myogenic change. Focal myositis was diagnosed. The first steroid treatment was effective. Tapering of prednisolone, however, repeatedly induced myositis relapse, which progressed to multiple muscle lesions of both lower limbs. Initiation of methotrexate finally allowed successful tapering of prednisolone, with no relapse in the past 4 years.

  15. Recurrent Pregnancy Loss

    Directory of Open Access Journals (Sweden)

    Véronique Piroux

    1997-01-01

    Full Text Available Antiphospholipid antibodies (APA) are associated with thrombosis, thrombocytopenia and fetal loss, but they occur in a variety of diseases. Despite many efforts, a correlation between the specificity of particular subgroups of APA and particular clinical situations remains to be established. The antigens at the origin of APA remain to be identified. We discuss here the possible links between cell apoptosis or necrosis, leading to plasma membrane alterations, and the occurrence of APA in response to sustained stimulation. The pathogenic potential of APA is also considered with respect to recurrent pregnancy loss.

  16. Equine recurrent airway obstruction

    Directory of Open Access Journals (Sweden)

    Artur Niedźwiedź

    2014-10-01

    Full Text Available Equine Recurrent Airway Obstruction (RAO), also known as heaves or broken wind, is one of the most common diseases in middle-aged horses. Inflammation of the airway is induced by organic dust exposure. The disease is characterized by neutrophilic inflammation, bronchospasm, excessive mucus production and pathologic changes in the bronchiolar walls. Clinical signs resolve within 3-4 weeks after environmental changes. Horses suffering from RAO are susceptible to allergens throughout their lives; therefore they should be properly managed. In therapy the most important thing is to eliminate dust exposure, administer corticosteroids, and use bronchodilators to improve pulmonary function.

  17. Implicitly Defined Neural Networks for Sequence Labeling

    Science.gov (United States)

    2017-07-31

    ularity has soared for the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and variants such as the Gated Recurrent Unit (GRU) (Cho et al.). [OCR-garbled excerpt; bibliography fragments omitted.] ... network are coupled together, in order to improve performance on complex, long-range dependencies in either direction of a sequence. We contrast our

  18. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  19. Recurrent oral angioleiomyoma

    Directory of Open Access Journals (Sweden)

    V G Mahima

    2011-01-01

    Full Text Available Angioleiomyomas are a vascular variant of leiomyomas, which are benign tumors of smooth muscle. They are exceedingly rare in the oral cavity. Malignant transformation of these tumors has also been reported occasionally, which warrants knowledge of this soft tissue tumor. A 57-year-old male patient presented with a 15-day history of an asymptomatic growth that had started insidiously in his lower left back tooth region. Clinical examination revealed a solitary, oval, sessile growth in the mandibular left retromolar region. Excisional biopsy was suggestive of angioleiomyoma. A recurrence was noted two months later, which was also histopathologically reported as angioleiomyoma and confirmed using special stains. This case reports an unusual presentation of angioleiomyoma with regard to both recurrence and rapid growth. It is important to be well aware of this uncommon entity as these tumors can often mimic or transform into malignancy. Precise clinicopathological examination is therefore invaluable in establishing an accurate diagnosis and delivering suitable treatment.

  20. Memory replay in balanced recurrent networks.

    Directory of Open Access Journals (Sweden)

    Nikolay Chenkov

    2017-01-01

    Full Text Available Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously from activity fluctuations. Global (potentially neuromodulatory) alterations of neuronal excitability can switch between network states that favor retrieval and consolidation.

  1. How adaptation shapes spike rate oscillations in recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Moritz eAugustin

    2013-02-01

    Full Text Available Neural mass signals from in-vivo recordings often show oscillations with frequencies ranging from <1 Hz to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance based synapses with heterogeneous strengths and delays we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation and oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.

  2. Biologically Inspired Modular Neural Control for a Leg-Wheel Hybrid Robot

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Wörgötter, Florentin; Laksanacharoen, Pudit

    2014-01-01

    In this article we present modular neural control for a leg-wheel hybrid robot consisting of three legs with omnidirectional wheels. This neural control has four main modules having their functional origin in biological neural systems. A minimal recurrent control (MRC) module is for sensory signal...... processing and state memorization. Its outputs drive two front wheels while the rear wheel is controlled through a velocity regulating network (VRN) module. In parallel, a neural oscillator network module serves as a central pattern generator (CPG) that controls leg movements for sidestepping. Stepping directions...... or they can serve as useful modules for other module-based neural control applications.

  3. Recurrent pregnancy loss

    DEFF Research Database (Denmark)

    Egerup, P; Kolte, A M; Larsen, E C

    2016-01-01

    STUDY QUESTION: Is there a different prognostic impact for consecutive and non-consecutive early pregnancy losses in women with secondary recurrent pregnancy loss (RPL)? SUMMARY ANSWER: Only consecutive early pregnancy losses after the last birth have a statistically significant negative prognostic...... impact in women with secondary RPL. WHAT IS KNOWN ALREADY: The risk of a new pregnancy loss increases with the number of previous pregnancy losses in patients with RPL. Second trimester losses seem to exhibit a stronger negative impact than early losses. It is unknown whether the sequence of pregnancy...... losses plays a role for the prognosis in patients with a prior birth. STUDY DESIGN, SIZE, DURATION: This retrospective cohort study of pregnancy outcome in patients with unexplained secondary RPL included in three previously published, Danish double-blinded placebo-controlled trials of intravenous...

  4. Recurrent Aggressive Angiomyxoma*

    Directory of Open Access Journals (Sweden)

    Suelene Suassuna Silvestre de Alencar

    2013-10-01

    Full Text Available Introduction: aggressive angiomyxoma is a highly aggressive, rare neoplasm of the mesenchymal tissue with a high recurrence rate. It represents an important differential diagnosis of pelvic tumors in women of reproductive age. This study aims to describe a case of aggressive angiomyxoma. Case report: a 37-year-old woman complained about a bulge on the right perianal region, with anal itching and burning, bleeding, tenesmus and incontinence. The proctologic examination confirmed the perianal bulge and extrinsic compression of the posterior wall of the rectum. Computed tomography (CT) of the pelvis showed a well-defined pelvic mass extending to the right rectal area. Exploratory laparotomy showed a mass of fibroelastic consistency adjacent to the pelvic organs and closely attached to the distal rectum, and resection of the pelvic tumor was performed. Anatomopathological analysis revealed an aggressive angiomyxoma. Magnetic resonance imaging (MRI) of the pelvis showed signs of recurrence in the pelvic cavity on the right side of the rectum. A surgical procedure was performed to resect the lesion. After an asymptomatic period, the MRI showed solid growths located in the right ischiorectal fossa. A new surgical procedure identified only retention cysts in the pelvis and right ischiorectal fossa, and only lysis of adhesions was performed. The patient is currently undergoing follow-up without disease recurrence.

  5. Analysis of Brain Recurrence

    Science.gov (United States)

    Frilot, Clifton; Kim, Paul Y.; Carrubba, Simona; McCarty, David E.; Chesson, Andrew L.; Marino, Andrew A.

    Analysis of Brain Recurrence (ABR) is a method for extracting physiologically significant information from the electroencephalogram (EEG), a non-stationary electrical output of the brain, the ultimate complex dynamical system. ABR permits quantification of temporal patterns in the EEG produced by the non-autonomous differential laws that govern brain metabolism. In the context of appropriate experimental and statistical designs, ABR is ideally suited to the task of interpreting the EEG. Present applications of ABR include discovery of a human magnetic sense, increased mechanistic understanding of neuronal membrane processes, diagnosis of degenerative neurological disease, detection of changes in brain metabolism caused by weak environmental electromagnetic fields, objective characterization of the quality of human sleep, and evaluation of sleep disorders. ABR has important beneficial implications for the development of clinical and experimental neuroscience.

  6. Nonlinear adaptive inverse control via the unified model neural network

    Science.gov (United States)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control via a unified model neural network. In order to overcome nonsystematic design and long training times in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for feedforward/recurrent neural networks. It turns out that the proposed method requires less training time to obtain an inverse model. Finally, we apply this proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
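The Chebyshev-polynomial basis underlying the CPBUM idea can be sketched as a functional expansion followed by a linear readout. This is an illustrative functional-link approximation fitted by least squares, not the paper's exact CPBUM architecture or training procedure:

```python
import numpy as np

def chebyshev_features(x, order):
    """Chebyshev polynomials T_0(x)..T_order(x) for x in [-1, 1],
    built with the recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)."""
    feats = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        feats.append(2 * x * feats[-1] - feats[-2])
    return np.stack(feats[:order + 1], axis=-1)

# Fit a linear readout on the expanded features to approximate a target map,
# here the (illustrative) nonlinearity sin(pi * x).
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(np.pi * x)
Phi = chebyshev_features(x, order=8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
```

Because the basis is fixed, training reduces to a linear problem, which is one way to understand the shorter training times the abstract reports for polynomial-based unified models.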

  7. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
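As a concrete (non-neural) illustration of the task such a decoder learns, here is the syndrome-to-correction map for the classical 3-bit repetition code, the simplest stabilizer-style example; a recurrent decoder would have to infer this map, and its fault-tolerant generalization, from repeated stabilizer measurements (the code below is a hypothetical toy, not the authors' architecture):

```python
# Map from the two parity checks (b0 xor b1, b1 xor b2) to the bit to flip.
SYNDROME_TO_FLIP = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip bit 0
    (1, 1): 1,     # flip bit 1
    (0, 1): 2,     # flip bit 2
}

def decode(bits):
    """Correct a single bit-flip on a 3-bit repetition codeword."""
    syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    flip = SYNDROME_TO_FLIP[syndrome]
    corrected = list(bits)
    if flip is not None:
        corrected[flip] ^= 1
    return corrected
```

For larger codes, such as the quantum LDPC codes the abstract mentions, no efficient closed-form map like this lookup table is known, which is exactly where a learned decoder becomes interesting.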

  8. Beneficial role of noise in artificial neural networks

    International Nuclear Information System (INIS)

    Monterola, Christopher; Saloma, Caesar; Zapotocky, Martin

    2008-01-01

    We demonstrate enhancement of the efficacy of neural networks to recognize frequency-encoded signals and/or to categorize spatial patterns of neural activity as a result of noise addition. For temporal information recovery, noise directly added to the receiving neurons allows instantaneous improvement of the signal-to-noise ratio [Monterola and Saloma, Phys. Rev. Lett. 2002]. For spatial patterns, however, recurrence is necessary to extend and homogenize the operating range of a feed-forward neural network [Monterola and Zapotocky, Phys. Rev. E 2005]. Finally, using the size of the basin of attraction of the networks' learned patterns (dynamical fixed points), a procedure for estimating the optimal noise is demonstrated

  9. Isolation of herpes simplex virus from the genital tract during symptomatic recurrence on the buttocks.

    Science.gov (United States)

    Kerkering, Katrina; Gardella, Carolyn; Selke, Stacy; Krantz, Elizabeth; Corey, Lawrence; Wald, Anna

    2006-10-01

    To estimate the frequency of isolation of herpes simplex virus (HSV) from the genital tract when recurrent herpes lesions were present on the buttocks. Data were extracted from a prospectively observed cohort attending a research clinic for genital herpes infections between 1975 and 2001. All patients with a documented herpes lesion on the buttocks, upper thigh or gluteal cleft ("buttock recurrence") and concomitant viral cultures from genital sites including the perianal region were eligible. We reviewed records of 237 subjects, 151 women and 86 men, with a total of 572 buttock recurrences. Of the 1,592 days with genital culture information during a buttock recurrence, participants had concurrent genital lesions on 311 (20%, 95% confidence interval [CI] 14-27%) of these days. Overall, HSV was isolated from the genital region on 12% (95% CI 8-17%) of days during a buttock recurrence. In the absence of genital lesions, HSV was isolated from the genital area on 7% (95% CI 4%-11%) of days during a buttock recurrence and, among women, from the vulvar or cervical sites on 1% of days. Viral shedding of herpes simplex virus from the genital area is a relatively common occurrence during a buttock recurrence of genital herpes, even without concurrent genital lesions, reflecting perhaps reactivation from concomitant regions of the sacral neural ganglia. Patients with buttock herpes recurrences should be instructed about the risk of genital shedding during such recurrences. II-2.

  10. Investigation of Prognostic Factors and Survival without Recurrence in Patients with Breast Cancer

    Directory of Open Access Journals (Sweden)

    Ahmad Abdollahi

    2017-01-01

    Background: One of the major consequences of breast cancer is recurrence of the disease. The objective of the present study was to estimate 7-year survival without recurrence as well as the prognostic factors affecting recurrence. Materials and Methods: This historical cohort survival analysis was conducted on 1329 patients diagnosed with breast cancer at the Motahari Breast Clinic, Shiraz, Iran between 2004 and 2011. We estimated the rate of survival without recurrence with the Kaplan–Meier method, and the difference between survival curves was investigated using the log-rank test. Furthermore, a Cox regression model was used to model the factors affecting local recurrence as well as metastasis. Results: The mean age of the patients was 54.8 ± 11.4 years. Estrogen receptor positive, progesterone receptor positive, and human epidermal growth factor receptor-2 positive status were observed in 70.6%, 66.6%, and 34.4% of the cases, respectively. The mean follow-up period was 3.7 ± 1.8 years. The Kaplan–Meier analysis revealed 1-, 3-, 5-, and 7-year rates of survival without recurrence of 96.4%, 78.4%, 66.3%, and 54.8%, respectively. There was a significant relationship between survival without recurrence and histology grade (hazard ratio [HR] = 1.66, P = 0.009), neural invasion (HR = 1.74, P = 0.006), and progesterone receptors (HR = 0.69, P = 0.031). Conclusion: In this study, the rate of 7-year survival without recurrence in breast cancer was 54.8%. Among the factors, histology grade and neural involvement at the time of diagnosis increased the chance of recurrence, while progesterone receptor positivity was associated with a longer interval between diagnosis and recurrence.
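    The Kaplan–Meier product-limit estimator used above can be sketched in a few lines (hypothetical follow-up data, not the study's):

```python
import numpy as np

# Minimal Kaplan-Meier sketch: `time` is years to recurrence or
# censoring, `event` is 1 for an observed recurrence, 0 for censoring.
time = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 5.0, 5.0, 7.0])
event = np.array([1, 1, 0, 1, 0, 1, 1, 0])

def kaplan_meier(time, event):
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, out = 1.0, {}
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)          # still under follow-up at t
        deaths = np.sum((time == t) & (event == 1))
        surv *= 1.0 - deaths / at_risk       # product-limit step
        out[float(t)] = surv
    return out

curve = kaplan_meier(time, event)
print(curve)
```

    Each event time multiplies the running survival by the fraction of at-risk subjects who did not have an event, which is exactly how the 1-, 3-, 5-, and 7-year rates above are obtained from the cohort.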

  11. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  12. Cognitive Processes Underlying Nonnative Speech Production: The Significance of Recurrent Sequences.

    Science.gov (United States)

    Oppenheim, Nancy

    This study was designed to identify whether advanced nonnative speakers of English rely on recurrent sequences to produce fluent speech, in conformance with neural network theories and symbolic network theories. Participants were 6 advanced speaking and listening university students, aged 18-37 years (their native countries being Korea, Japan,…

  13. Modelling the phonotactic structure of natural language words with simple recurrent networks

    NARCIS (Netherlands)

    Stoianov, [No Value; Nerbonne, J; Bouma, H; Coppen, PA; vanHalteren, H; Teunissen, L

    1998-01-01

    Simple Recurrent Networks (SRN) are Neural Network (connectionist) models able to process natural language. Phonotactics concerns the order of symbols in words. We continued an earlier unsuccessful trial to model the phonotactics of Dutch words with SRNs. In order to overcome the previously reported

  14. Issues in the use of neural networks in information retrieval

    CERN Document Server

    Iatan, Iuliana F

    2017-01-01

    This book highlights the ability of neural networks (NNs) to be excellent pattern matchers and their importance in information retrieval (IR), which is based on index term matching. The book defines a new NN-based method for learning image similarity and describes how to use fuzzy Gaussian neural networks to predict personality. It introduces the fuzzy Clifford Gaussian network, and two concurrent neural models: (1) concurrent fuzzy nonlinear perceptron modules, and (2) concurrent fuzzy Gaussian neural network modules. Furthermore, it explains the design of a new model of fuzzy nonlinear perceptron based on alpha level sets and describes a recurrent fuzzy neural network model with a learning algorithm based on the improved particle swarm optimization method.

  15. Handbook on neural information processing

    CERN Document Server

    Maggini, Marco; Jain, Lakhmi

    2013-01-01

    This handbook presents some of the most recent topics in neural information processing, covering both theoretical concepts and practical applications. The contributions include: deep architectures; recurrent, recursive, and graph neural networks; cellular neural networks; Bayesian networks; approximation capabilities of neural networks; semi-supervised learning; statistical relational learning; kernel methods for structured data; multiple classifier systems; self-organisation and modal learning; and applications to ...

  16. Recurrent vulvovaginal candidiasis.

    Science.gov (United States)

    Blostein, Freida; Levin-Sparenberg, Elizabeth; Wagner, Julian; Foxman, Betsy

    2017-09-01

    Recurrent vulvovaginal candidiasis (RVVC), multiple episodes of vulvovaginal candidiasis (VVC; vaginal yeast infection) within a 12-month period, adversely affects quality of life, mental health, and sexual activity. Diagnosis is not straightforward, as VVC is defined by the combination of often nonspecific vaginal symptoms and the presence of yeast-which is a common vaginal commensal. Estimating the incidence and prevalence is challenging: most VVC is diagnosed and treated empirically, the availability for purchase of effective therapies over the counter enables self-diagnosis and treatment, and the duration of the relatively benign VVC symptoms is short, introducing errors into any estimates relying on medical records or patient recall. We evaluate current estimates of VVC and RVVC and provide new prevalence estimates using data from a 2011 seven-country (n = 7345) internet panel survey on VVC conducted by Ipsos Health (https://www.ipsos.com/en). We also evaluate information on VVC-associated visits using the National Ambulatory Medical Care Survey. The estimated probability of VVC by age 50 varied widely by country (from 23% to 49%, mean 39%), as did the estimated probability of RVVC after VVC (from 14% to 28%, mean 23%). However estimated, the probability of RVVC was high suggesting RVVC is a common condition. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Recurrent Respiratory Papillomatosis or Laryngeal Papillomatosis

    Science.gov (United States)

    Recurrent respiratory papillomatosis (RRP) is a disease ...

  18. Mondini dysplasia with recurrent meningitis.

    Science.gov (United States)

    Lu, M Y; Lee, P I; Lee, C Y; Hsu, C J

    1996-01-01

    Mondini dysplasia is a congenital malformation of the inner ear, commonly associated with hearing impairment, cerebrospinal fluid otorrhea/rhinorrhea and recurrent meningitis. Two such cases are described, with hearing impairment, cerebrospinal fluid rhinorrhea, and several episodes of meningitis. Diagnosis was confirmed by high-resolution computed tomography. After surgical correction of the malformation, there was no recurrent episode of meningitis at subsequent follow-up. To avoid the suffering and the sequelae of recurrent meningitis, an early diagnosis and prompt surgical intervention are crucial for such patients.

  19. A recurrent dynamic model for correspondence-based face recognition.

    Science.gov (United States)

    Wolfrum, Philipp; Wolff, Christian; Lücke, Jörg; von der Malsburg, Christoph

    2008-12-29

    Our aim here is to create a fully neural, functionally competitive, and correspondence-based model for invariant face recognition. By recurrently integrating information about feature similarities, spatial feature relations, and facial structure stored in memory, the system evaluates face identity ("what"-information) and face position ("where"-information) using explicit representations for both. The network consists of three functional layers of processing, (1) an input layer for image representation, (2) a middle layer for recurrent information integration, and (3) a gallery layer for memory storage. Each layer consists of cortical columns as functional building blocks that are modeled in accordance with recent experimental findings. In numerical simulations we apply the system to standard benchmark databases for face recognition. We find that recognition rates of our biologically inspired approach lie in the same range as recognition rates of recent and purely functionally motivated systems.

  20. Tuning Neural Phase Entrainment to Speech.

    Science.gov (United States)

    Falk, Simone; Lanzilotti, Cosima; Schön, Daniele

    2017-08-01

    Musical rhythm positively impacts on subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates neural response as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies during speech processing compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
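    Intertrial coherence, one of the measures used above, can be sketched as the magnitude of the mean unit phase vector across trials at a target frequency (toy sinusoidal trials, not EEG data; names are ours):

```python
import numpy as np

# Intertrial coherence (ITC): phase-locked trials give ITC near 1,
# trials with random phase give ITC near 0. Target frequency 4 Hz.
fs, dur, n_trials, f0 = 250, 2.0, 40, 4.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)

def itc(trials, f0, fs):
    spectra = np.fft.rfft(trials, axis=1)
    k = int(round(f0 * trials.shape[1] / fs))        # FFT bin of f0
    phases = spectra[:, k] / np.abs(spectra[:, k])   # unit phase vectors
    return np.abs(phases.mean())

locked = np.array([np.sin(2 * np.pi * f0 * t) for _ in range(n_trials)])
jittered = np.array([np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(n_trials)])
print(itc(locked, f0, fs), itc(jittered, f0, fs))
```

    High ITC at the syllable or accent rate is the signature of sustained phase entrainment carried over from the musical cue.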

  1. Opioids and breast cancer recurrence

    DEFF Research Database (Denmark)

    Cronin-Fenton, Deirdre P; Heide-Jørgensen, Uffe; Ahern, Thomas P

    2015-01-01

    BACKGROUND: Opioids may alter immune function, thereby potentially affecting cancer recurrence. The authors investigated the association between postdiagnosis opioid use and breast cancer recurrence. METHODS: Patients with incident, early stage breast cancer who were diagnosed during 1996 through 2008 in Denmark were identified from the Danish Breast Cancer Cooperative Group Registry. Opioid prescriptions were ascertained from the Danish National Prescription Registry. Follow-up began on the date of primary surgery for breast cancer and continued until breast cancer recurrence, death, emigration, 10 years, or July 31, 2013, whichever occurred first. Cox regression models were used to compute hazard ratios and 95% confidence intervals associating breast cancer recurrence with opioid prescription use overall and by opioid type and strength, immunosuppressive effect, chronic use (≥6 months…

  2. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks.

    Science.gov (United States)

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-06-26

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.
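    The grid-representation step described above can be sketched as follows (shapes and the link-to-cell mapping are hypothetical, not the authors' code): each link's speed is written into the grid cells it occupies, and consecutive grids are stacked like video frames for the downstream DCNN+LSTM (SRCN) model.

```python
import numpy as np

grid_h, grid_w, T = 8, 8, 4
# hypothetical mapping: link id -> list of (row, col) cells it covers
link_cells = {0: [(1, 1), (1, 2)], 1: [(5, 3)], 2: [(6, 6), (7, 6)]}

def speeds_to_grid(speeds):
    grid = np.zeros((grid_h, grid_w))
    for link, cells in link_cells.items():
        for r, c in cells:
            grid[r, c] = speeds[link]   # paint link speed onto its cells
    return grid

# T time steps of network-wide speeds (km/h), stacked as frames
frames = np.stack([speeds_to_grid({0: 60 - 5 * t, 1: 30, 2: 45})
                   for t in range(T)])
print(frames.shape)   # (4, 8, 8): one spatiotemporal input "clip"
```

    The spatial convolutions then operate within each frame while the LSTM consumes the frame sequence.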

  3. Deep recurrent conditional random field network for protein secondary prediction

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Sønderby, Søren Kaae; Sønderby, Casper Kaae

    2017-01-01

    Deep learning has become the state-of-the-art method for predicting protein secondary structure from only its amino acid residues and sequence profile. Building upon these results, we propose to combine a bi-directional recurrent neural network (biRNN) with a conditional random field (CRF), which … of the labels for all time-steps. We condition the CRF on the output of the biRNN, which learns a distributed representation based on the entire sequence. The biRNN-CRF is therefore close to ideally suited for the secondary structure task because a high degree of cross-talk between neighboring elements can…

  4. Criticality predicts maximum irregularity in recurrent networks of excitatory nodes.

    Directory of Open Access Journals (Sweden)

    Yahya Karimipanah

    A rigorous understanding of brain dynamics and function requires a conceptual bridge between multiple levels of organization, including neural spiking and network-level population activity. Mounting evidence suggests that neural networks of cerebral cortex operate at a critical regime, which is defined as a transition point between two phases of short-lasting and chaotic activity. However, despite the fact that criticality brings about certain functional advantages for information processing, its supporting evidence is still far from conclusive, as it has been mostly based on power-law scaling of the sizes and durations of cascades of activity. Moreover, to what degree such a hypothesis could explain some fundamental features of neural activity is still largely unknown. One of the most prevalent features of cortical activity in vivo is the irregularity of spike trains, which is measured in terms of a coefficient of variation (CV) larger than one. Here, using a minimal computational model of excitatory nodes, we show that irregular spiking (CV > 1) naturally emerges in a recurrent network operating at criticality. More importantly, we show that even in the presence of other sources of spike irregularity, being at criticality maximizes the mean coefficient of variation of neurons, thereby maximizing their spike irregularity. Furthermore, we also show that such maximized irregularity results in maximum correlation between neuronal firing rates and their corresponding spike irregularity (measured in terms of CV). On the one hand, using a model in the universality class of directed percolation, we propose new hallmarks of criticality at the single-unit level, which could be applicable to any network of excitable nodes. On the other hand, given the controversy of the neural criticality hypothesis, we discuss the limitations of this approach and to what degree the results support the criticality hypothesis in real neural networks.
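    The CV measure used above is simply the ratio of the standard deviation to the mean of the interspike intervals; a sketch on toy interval data (ours, not the model's):

```python
import numpy as np

# Coefficient of variation (CV) of interspike intervals: CV = 1 for a
# Poisson process, < 1 for clock-like firing, > 1 for bursty firing.
def cv(isis):
    isis = np.asarray(isis, dtype=float)
    return isis.std() / isis.mean()

regular = [10.0, 10.0, 10.0, 10.0]   # clock-like spiking
bursty = [1.0, 1.0, 1.0, 37.0]       # burst followed by long silence
print(cv(regular), cv(bursty))
```

    The regular train gives CV = 0, while the bursty train gives CV well above 1, the regime the paper associates with criticality.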

  5. Recurrent Laughter-induced Syncope

    NARCIS (Netherlands)

    Gaitatzis, A.; Petzold, A.F.S.

    2012-01-01

    Introduction: Syncope is a common presenting complaint in Neurology clinics or Emergency departments, but its causes are sometimes difficult to diagnose. Apart from vasovagal attacks, other benign, neurally mediated syncopes include "situational" syncopes, which occur after urination, coughing,

  6. Recurrence of anxiety disorders and its predictors

    NARCIS (Netherlands)

    Scholten, Willemijn D.; Batelaan, Neeltje M.; van Balkom, Anton J. L. M.; Penninx, Brenda; Smit, Johannes H.; van Oppen, Patricia

    Background: The chronic course of anxiety disorders and its high burden of disease are partly due to the recurrence of anxiety disorders after remission. However, knowledge about recurrence rates and predictors of recurrence is scarce. This article reports on recurrence rates of anxiety disorders

  7. Recurrence of anxiety disorders and its predictors

    NARCIS (Netherlands)

    Scholten, W.D.; Batelaan, N.M.; van Balkom, A.J.L.M.; Penninx, B.W.J.H.; Smit, J.H.; van Oppen, P.

    2013-01-01

    Background: The chronic course of anxiety disorders and its high burden of disease are partly due to the recurrence of anxiety disorders after remission. However, knowledge about recurrence rates and predictors of recurrence is scarce. This article reports on recurrence rates of anxiety disorders

  8. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron, or in computing the next layer of a neural network, involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide the network with nonlinearity. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
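    The swap of sum-of-products for max-of-sums can be shown in a few lines (a minimal illustration; the variable names are ours):

```python
import numpy as np

# Classical neuron: dot product sum_j(w_j * x_j).
# Morphological neuron: max_j(w_j + x_j) -- nonlinear even before
# any thresholding is applied.
def linear_neuron(x, w):
    return float(np.dot(w, x))      # sum of products

def morph_neuron(x, w):
    return float(np.max(w + x))     # max of sums

x = np.array([1.0, 4.0, 2.0])
w = np.array([0.5, -1.0, 0.0])
print(linear_neuron(x, w), morph_neuron(x, w))   # -3.5, 3.0
```

    The minimum-of-sums variant is obtained by replacing `np.max` with `np.min`.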

  9. Neural Tube Defects

    Science.gov (United States)

    Neural tube defects are birth defects of the brain, spine, or spinal cord. ... The two most common neural tube defects are spina bifida and anencephaly. ...

  10. Recurrent pregnancy loss: current perspectives

    Directory of Open Access Journals (Sweden)

    El Hachem H

    2017-05-01

    Hady El Hachem,1,2 Vincent Crepaux,3 Pascale May-Panloup,4 Philippe Descamps,3 Guillaume Legendre,3 Pierre-Emmanuel Bouet3 1Department of Reproductive Medicine, Ovo Clinic, Montréal, QC, Canada; 2Department of Obstetrics and Gynecology, University of Montreal, Montréal, QC, Canada; 3Department of Obstetrics and Gynecology, Angers University Hospital, Angers, France; 4Department of Reproductive Biology, Angers University Hospital, Angers, France Abstract: Recurrent pregnancy loss is an important reproductive health issue, affecting 2%–5% of couples. Common established causes include uterine anomalies, antiphospholipid syndrome, hormonal and metabolic disorders, and cytogenetic abnormalities. Other etiologies have been proposed but are still considered controversial, such as chronic endometritis, inherited thrombophilias, luteal phase deficiency, and high sperm DNA fragmentation levels. Over the years, evidence-based treatments such as surgical correction of uterine anomalies or aspirin and heparin for antiphospholipid syndrome have improved the outcomes for couples with recurrent pregnancy loss. However, almost half of the cases remain unexplained and are empirically treated using progesterone supplementation, anticoagulation, and/or immunomodulatory treatments. Regardless of the cause, the long-term prognosis of couples with recurrent pregnancy loss is good, and most eventually achieve a healthy live birth. However, multiple pregnancy losses can have a significant psychological toll on affected couples, and many efforts are being made to improve treatments and decrease the time needed to achieve a successful pregnancy. This article reviews the established and controversial etiologies, and the recommended therapeutic strategies, with a special focus on unexplained recurrent pregnancy losses and the empiric treatments used nowadays. It also discusses the current role of preimplantation genetic testing in the management of recurrent pregnancy

  11. Neural tissue-spheres

    DEFF Research Database (Denmark)

    Andersen, Rikke K; Johansen, Mathias; Blaabjerg, Morten

    2007-01-01

    By combining new and established protocols we have developed a procedure for isolation and propagation of neural precursor cells from the forebrain subventricular zone (SVZ) of newborn rats. Small tissue blocks of the SVZ were dissected and propagated en bloc as free-floating neural tissue… content, thus allowing experimental studies of neural precursor cells and their niche…

  12. RECURRENT NOVAE IN M31

    Energy Technology Data Exchange (ETDEWEB)

    Shafter, A. W. [Department of Astronomy, San Diego State University, San Diego, CA 92182 (United States); Henze, M. [European Space Astronomy Centre, P.O. Box 78, E-28692 Villanueva de la Cañada, Madrid (Spain); Rector, T. A. [Department of Physics and Astronomy, University of Alaska Anchorage, 3211 Providence Dr., Anchorage, AK 99508 (United States); Schweizer, F. [Carnegie Observatories, 813 Santa Barbara St., Pasadena, CA 91101 (United States); Hornoch, K. [Astronomical Institute, Academy of Sciences, CZ-251 65 Ondřejov (Czech Republic); Orio, M. [Astronomical Observatory of Padova (INAF), I-35122 Padova (Italy); Pietsch, W. [Max Planck Institute for Extraterrestrial Physics, P.O. Box 1312, Giessenbachstr., D-85741, Garching (Germany); Darnley, M. J.; Williams, S. C.; Bode, M. F. [Astrophysics Research Institute, Liverpool John Moores University, Liverpool L3 5RF (United Kingdom); Bryan, J., E-mail: aws@nova.sdsu.edu [McDonald Observatory, Austin, TX 78712 (United States)

    2015-02-01

    The reported positions of 964 suspected nova eruptions in M31 recorded through the end of calendar year 2013 have been compared in order to identify recurrent nova (RN) candidates. To pass the initial screen and qualify as a RN candidate, two or more eruptions were required to be coincident within 0.′1, although this criterion was relaxed to 0.′15 for novae discovered on early photographic patrols. A total of 118 eruptions from 51 potential RN systems satisfied the screening criterion. To determine what fraction of these novae are indeed recurrent, the original plates and published images of the relevant eruptions have been carefully compared. This procedure has resulted in the elimination of 27 of the 51 progenitor candidates (61 eruptions) from further consideration as RNe, with another 8 systems (17 eruptions) deemed unlikely to be recurrent. Of the remaining 16 systems, 12 candidates (32 eruptions) were judged to be RNe, with an additional 4 systems (8 eruptions) being possibly recurrent. It is estimated that ∼4% of the nova eruptions seen in M31 over the past century are associated with RNe. A Monte Carlo analysis shows that the discovery efficiency for RNe may be as low as 10% that for novae in general, suggesting that as many as one in three nova eruptions observed in M31 arise from progenitor systems having recurrence times ≲100 yr. For plausible system parameters, it appears unlikely that RNe can provide a significant channel for the production of Type Ia supernovae.
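    The initial screening step above (two or more eruptions coincident within 0.′1) amounts to a pairwise position match. A toy sketch with hypothetical coordinates, using a flat-sky distance that ignores the cos(Dec) factor for brevity:

```python
import numpy as np

ARCMIN = 1.0 / 60.0                     # degrees
eruptions = np.array([                  # (RA, Dec) in degrees, toy values
    [10.6847, 41.2690],
    [10.6847 + 0.0005, 41.2690],        # ~0.03' away: candidate match
    [10.7100, 41.3000],                 # isolated eruption
])

def candidate_pairs(pos, radius_deg=0.1 * ARCMIN):
    """Flag eruption pairs whose positions coincide within the radius."""
    pairs = []
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            if np.hypot(*(pos[i] - pos[j])) < radius_deg:
                pairs.append((i, j))
    return pairs

print(candidate_pairs(eruptions))       # [(0, 1)]
```

    In the survey itself this screen only nominates candidates; the original plates must then be compared, which is how 27 of the 51 candidate systems were eliminated.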

  13. Recurrent Novae — A Review

    Directory of Open Access Journals (Sweden)

    K. Mukai

    2015-02-01

    In recent years, recurrent nova eruptions have often been observed intensively across a wide range of wavelengths, from radio to optical to X-rays. Here I present selected highlights from recent multi-wavelength observations. The enigma of T Pyx is at the heart of this paper. While our current understanding of CV and symbiotic-star evolution can explain why a certain subset of recurrent novae have high accretion rates, that of T Pyx must be greatly elevated compared to the evolutionary mean. At the same time, we have extensive data with which to estimate how the nova envelope was ejected in T Pyx, and it turns out to be a rather complex tale. One suspects that envelope ejection in recurrent and classical novae in general is more complicated than the textbook descriptions. At the end of the review, I speculate that these two puzzles may be connected.

  14. A canonical neural mechanism for behavioral variability

    Science.gov (United States)

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-05-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these 'universal' statistics.

  15. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  16. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural systems depends in part on the emergence of correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement to build a fully functional circuit. However, a number of recent experimental results show that neuronal electrical activity also plays an important role in establishing the initial interneuronal connections. Nevertheless, these processes are difficult to study experimentally, owing to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks, and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.
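    The closed-loop idea (activity shapes outgrowth, outgrowth shapes activity) can be caricatured in a toy simulation. This is our illustration, not the paper's model: each cell's neurite radius grows while its activity is below a target and retracts above it, and two cells connect once their radii overlap, which in turn raises their activity.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
pos = rng.uniform(0, 10, (n, 2))        # cell positions in a 10x10 dish
radius = np.full(n, 0.5)                # neurite outgrowth radii
target, peak = 1.0, 0

def activity(adj):
    return 0.2 + 0.5 * adj.sum(axis=1)  # baseline + synaptic drive

for _ in range(200):
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    adj = (d < radius[:, None] + radius[None, :]) & ~np.eye(n, dtype=bool)
    a = activity(adj)
    peak = max(peak, int(adj.sum()))    # track connectivity over the run
    radius = np.clip(radius + 0.05 * (target - a), 0.1, 10.0)

print(peak, radius.round(2))
```

    Quiet cells keep growing until connections form; once a cell is driven above target it retracts, so connectivity self-limits rather than saturating.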

  17. A study of reactor monitoring method with neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nabeshima, Kunihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The purpose of this study is to investigate a methodology for nuclear power plant (NPP) monitoring with neural networks, which create plant models by learning past normal-operation patterns. The concept is to detect the symptoms of small anomalies by monitoring the deviations between the process signals measured from the actual plant and the corresponding output signals of the neural network model, which will differ when abnormal operational patterns are presented to the input of the network. An auto-associative network, whose outputs are trained to equal its inputs, can detect any kind of anomaly condition using normal operation data only. Monitoring tests of a feedforward neural network with adaptive learning were performed using a PWR plant simulator, with which many kinds of anomaly conditions can easily be simulated. The adaptively trained feedforward network could follow the actual plant dynamics and changes of plant condition, and then found most anomalies much earlier than the conventional alarm system during steady-state and transient operations. Off-line and on-line test results during one year of operation at an actual NPP (PWR) showed that the neural network could detect several small anomalies which the operators and the conventional alarm system did not notice. Furthermore, a sensitivity analysis suggests that the plant models built by the neural networks are appropriate. Finally, simulation results show that a recurrent neural network with feedback connections could successfully model the slow behavior of the reactor dynamics without adaptive learning. Therefore, a recurrent neural network with adaptive learning would be the best choice for an actual reactor monitoring system. (author)
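    The residual-monitoring idea can be sketched compactly if we stand in for the auto-associative neural network with a PCA reconstruction (an assumed simplification, not the report's model): fit on normal-operation signals only, then flag any measurement whose reconstruction error is large.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "normal operation" data: 6 correlated sensor channels
# driven by 2 underlying plant variables, plus small sensor noise.
latent = rng.normal(size=(500, 2))
mix = rng.normal(size=(2, 6))
normal = latent @ mix + 0.01 * rng.normal(size=(500, 6))

mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:2]                                     # top-2 components

def residual(x):
    z = (x - mean) @ basis.T                       # encode
    return np.linalg.norm(x - (mean + z @ basis))  # decode + compare

ok = normal[0]
fault = ok.copy()
fault[3] += 5.0                                    # one drifted sensor
print(residual(ok), residual(fault))
```

    A normal sample reconstructs almost perfectly, while the drifted sensor leaves a large residual, which is the deviation signal such a monitor would alarm on.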

  18. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.
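    The higher-order associative memory described above admits a compact sketch: memories enter the update rule through the derivative of a rectified polynomial F(z) = max(z, 0)^n, so that n = 2 recovers the classic quadratic Hopfield network while larger n sharpens retrieval toward prototypes. The network size, number of stored patterns, and synchronous update schedule below are illustrative choices, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, n = 64, 10, 3      # neurons, stored patterns, interaction order (n = 2: classic Hopfield)

patterns = rng.choice([-1.0, 1.0], size=(K, N))

def f_prime(z):
    # Derivative of the rectified polynomial F(z) = max(z, 0)**n.
    return n * np.maximum(z, 0.0) ** (n - 1)

def recall(cue, steps=20):
    # Synchronous retrieval dynamics: each memory contributes to the local
    # field in proportion to F'(overlap), so well-matched memories dominate.
    s = cue.copy()
    for _ in range(steps):
        overlaps = patterns @ s                    # xi^mu . sigma for each memory mu
        field = patterns.T @ f_prime(overlaps)     # higher-order local field
        s = np.where(field >= 0.0, 1.0, -1.0)
    return s

# Corrupt 10% of the bits of a stored pattern and retrieve it.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1.0
print((recall(cue) == patterns[0]).mean())         # fraction of bits recovered
```

    In the dual feedforward view, `overlaps` plays the role of the hidden-layer pre-activations and `f_prime` of the rectified-polynomial activation transmitting the visible activities to the hidden layer.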

  19. Lyapunov-Based Controller for a Class of Stochastic Chaotic Systems

    Directory of Open Access Journals (Sweden)

    Hossein Shokouhi-Nejad

    2014-01-01

    Full Text Available This study presents a general control law, based on Lyapunov's direct method, for a group of well-known stochastic chaotic systems. Since real chaotic systems exhibit undesired random-like behavior that is further deteriorated by environmental noise, the chaotic systems are modeled by exciting a deterministic chaotic system with white noise obtained from the derivative of a Wiener process, which yields an Itô differential equation. The proposed controller not only asymptotically stabilizes these systems in the mean-square sense despite their undesired intrinsic properties, but also exhibits a good transient response. Simulation results highlight the effectiveness and feasibility of the proposed controller in taming stochastic chaotic systems.
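    The approach can be illustrated on a noisy Lorenz system: with V(x) = ||x||^2 / 2, a sufficiently strong linear feedback u = -kx makes the Itô generator of V negative outside a small noise-dominated ball, which is the mean-square stabilization argument in sketch form. The Lorenz drift, the gain k, the noise level, and the Euler-Maruyama discretization below are illustrative assumptions, not the specific systems or controller of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Lorenz drift with an additive control input on each state (illustrative).
sigma_l, rho, beta = 10.0, 28.0, 8.0 / 3.0

def drift(x, u):
    return np.array([
        sigma_l * (x[1] - x[0]) + u[0],
        x[0] * (rho - x[2]) - x[1] + u[1],
        x[0] * x[1] - beta * x[2] + u[2],
    ])

def control(x, k=60.0):
    # Linear feedback u = -k x: for V = 0.5 * ||x||^2 and large enough k the
    # Ito generator satisfies LV <= -c ||x||^2 + noise term, giving
    # mean-square stabilization (sketch of the Lyapunov argument).
    return -k * x

def euler_maruyama(x0, T=5.0, dt=1e-3, noise=0.1, controlled=True):
    # Euler-Maruyama integration of dx = f(x, u) dt + noise dW.
    x = x0.copy()
    for _ in range(int(T / dt)):
        u = control(x) if controlled else np.zeros(3)
        x = x + drift(x, u) * dt + noise * np.sqrt(dt) * rng.normal(size=3)
    return x

x0 = np.array([5.0, 5.0, 5.0])
print(np.linalg.norm(euler_maruyama(x0, controlled=False)))  # stays on the attractor
print(np.linalg.norm(euler_maruyama(x0, controlled=True)))   # driven near the origin
```

    Without control the trajectory wanders over the chaotic attractor; with the feedback on, the state settles into a small neighborhood of the origin whose size is set by the noise intensity and the gain.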

  20. Lyapunov-based distributed control of the safety-factor profile in a tokamak plasma

    International Nuclear Information System (INIS)

    Bribiesca Argomedo, Federico; Witrant, Emmanuel; Prieur, Christophe; Brémond, Sylvain; Nouailletas, Rémy; Artaud, Jean-François

    2013-01-01

    A real-time model-based controller is developed for the tracking of the distributed safety-factor profile in a tokamak plasma. Using relevant physical models and simplifying assumptions, theoretical stability and robustness guarantees are obtained using a Lyapunov function. This approach accounts for the couplings between the poloidal flux diffusion equation, the time-varying temperature profiles and an independent total plasma current control. The actuator chosen for the safety-factor profile tracking is the lower hybrid current drive, although the results presented can be easily extended to any non-inductive current source. The performance and robustness of the proposed control law are evaluated with a physics-oriented simulation code on Tore Supra experimental test cases. (paper)
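    The Lyapunov mechanism behind this kind of profile controller can be sketched on a much simpler 1D diffusion equation standing in for the poloidal flux dynamics: with tracking error e = psi - psi_ref and V(e) = (1/2) ∫ e^2 dx, a distributed feedback that cancels the reference's diffusion term gives dV/dt = -eta ∫ e_x^2 dx - gamma ∫ e^2 dx <= 0. The diffusivity, gain, grid, and target profile below are hypothetical choices, not the paper's tokamak model, couplings, or LHCD actuation constraints.

```python
import numpy as np

M = 50
x = np.linspace(0.0, 1.0, M)
dx = x[1] - x[0]
dt = 1e-4                               # satisfies the explicit-scheme CFL bound
eta, gamma = 1.0, 5.0                   # hypothetical diffusivity and feedback gain

psi_ref = np.sin(np.pi * x)             # hypothetical target profile (zero at edges)
psi = np.zeros(M)

def laplacian(f):
    out = np.zeros_like(f)
    out[1:-1] = (f[:-2] - 2.0 * f[1:-1] + f[2:]) / dx ** 2
    return out

lap_ref = laplacian(psi_ref)

def lyapunov(e):
    # V(e) = (1/2) * integral of e(x)^2 dx for the tracking error e.
    return 0.5 * np.sum(e ** 2) * dx

V_hist = []
for _ in range(20000):
    e = psi - psi_ref
    # Distributed control: cancel the reference's diffusion term and add
    # proportional feedback, so that e_t = eta * e_xx - gamma * e.
    u = -eta * lap_ref - gamma * e
    psi = psi + dt * (eta * laplacian(psi) + u)
    psi[0] = psi[-1] = 0.0              # Dirichlet boundary conditions
    V_hist.append(lyapunov(psi - psi_ref))

print(V_hist[0], V_hist[-1])            # V decays along the closed-loop trajectory
```

    The actual controller in the paper must additionally respect actuator shape constraints and time-varying transport coefficients, which is what the robustness part of the Lyapunov analysis addresses.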