WorldWideScience

Sample records for recurrent neurally mediated

  1. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and we design its structure and learning algorithm. A multilayer feedforward neural network, a diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetric map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks. (interdisciplinary physics and related areas of science and technology)
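    As a rough illustration of the diagonal-recurrence idea (each hidden unit carries a single self-feedback weight), the sketch below fits a cubic, odd-symmetric chaotic map with one-step truncated gradients in numpy; the map, sizes, and training loop are illustrative assumptions rather than the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

# A cubic, odd-symmetric chaotic map on [-1, 1], standing in for the map in the record.
def cubic_map(x):
    return 3.0 * x - 4.0 * x ** 3

# One-step training pairs (x_t -> x_{t+1}) from a single trajectory.
T = 2000
xs = np.empty(T)
xs[0] = 0.1
for t in range(T - 1):
    # Clip guards against floating-point drift outside the invariant interval [-1, 1].
    xs[t + 1] = np.clip(cubic_map(xs[t]), -1.0, 1.0)
X, Y = xs[:-1], xs[1:]

# Diagonal recurrent network: every hidden unit has one self-recurrent weight (w_d).
H, lr = 16, 0.01
w_in = rng.normal(scale=0.5, size=H)   # input weights
w_d = rng.normal(scale=0.1, size=H)    # diagonal (self) recurrent weights
w_out = np.zeros(H)                    # linear readout

for epoch in range(30):
    h = np.zeros(H)
    for x, y in zip(X, Y):
        h_prev = h
        h = np.tanh(w_in * x + w_d * h_prev)
        err = w_out @ h - y
        g = err * w_out * (1.0 - h ** 2)   # gradient at the hidden pre-activations
        # One-step (truncated) gradient descent: h_prev is treated as a constant.
        w_out -= lr * err * h
        w_in -= lr * g * x
        w_d -= lr * g * h_prev

# Mean squared one-step prediction error after training.
h, sse = np.zeros(H), 0.0
for x, y in zip(X, Y):
    h = np.tanh(w_in * x + w_d * h)
    sse += (w_out @ h - y) ** 2
print("mean squared one-step error:", sse / len(X))
```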

  2. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    The Deep Simple Gated Unit (DSGU) and the Simple Gated Unit (SGU) are proposed as structures for learning long-term dependencies. Compared to the traditional Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), both structures require fewer parameters and less computation time in sequence classification tasks. Unlike GRU and LSTM...
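    Since the comparison hinges on the gating structure and parameter count of units such as the GRU and LSTM, a minimal numpy forward pass of a standard GRU cell may help fix the notation; the proposed SGU/DSGU variants are not reproduced here, and all sizes are arbitrary.

```python
import numpy as np

def gru_cell(x, h_prev, params):
    """One step of a standard GRU cell (one common formulation)."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)                # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)                # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)    # candidate state
    return (1.0 - z) * h_prev + z * h_tilde               # new hidden state

d_in, d_h = 8, 16
rng = np.random.default_rng(1)
# Three groups of (input matrix, recurrent matrix, bias); an LSTM needs four such groups.
params = [rng.normal(scale=0.1, size=s) for s in
          [(d_h, d_in), (d_h, d_h), (d_h,)] * 3]
print("GRU parameters:", sum(p.size for p in params))

h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):    # run a short input sequence
    h = gru_cell(x, h, params)
print("final hidden state shape:", h.shape)
```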

  3. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    ... merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the Rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  4. Noise-enhanced categorization in a recurrently reconnected neural network

    International Nuclear Information System (INIS)

    Monterola, Christopher; Zapotocky, Martin

    2005-01-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.
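    The core effect, correct categorization of subthreshold stimuli only once noise of a suitable magnitude is added, can be illustrated with a toy threshold detector; the sketch below is a loose stand-in for the reconnection procedure, with all numbers invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

threshold = 1.0     # firing threshold of a single toy unit
stimulus = 0.6      # subthreshold stimulus strength (below threshold)
n_trials = 5000     # categorization trials per noise level
repeats = 50        # noisy presentations pooled within each trial

def detection_rate(noise_sigma):
    """Fraction of trials where the unit fires more often with the stimulus than without."""
    hits = 0
    for _ in range(n_trials):
        with_stim = rng.normal(stimulus, noise_sigma, repeats) > threshold
        without = rng.normal(0.0, noise_sigma, repeats) > threshold
        hits += with_stim.sum() > without.sum()
    return hits / n_trials

# Detection fails with no noise, peaks at an intermediate noise level, and degrades again.
for sigma in [0.0, 0.2, 0.5, 1.0, 2.0, 4.0]:
    print(f"noise sigma = {sigma:4.1f}   detection rate = {detection_rate(sigma):.3f}")
```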

  5. Noise-enhanced categorization in a recurrently reconnected neural network

    Science.gov (United States)

    Monterola, Christopher; Zapotocky, Martin

    2005-03-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.

  6. Interpretation of Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Pedersen, Morten With; Larsen, Jan

    1997-01-01

    This paper addresses techniques for interpretation and characterization of trained recurrent nets for time series problems. In particular, we focus on assessment of effective memory and suggest an operational definition of memory. Further we discuss the evaluation of learning curves. Various nume...

  7. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on a matrix-valued differential equation, a generalization of dynamic equations proposed earlier for nonsingular matrix inversion, the Moore-Penrose inversion, and the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
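    One standard matrix-valued dynamic equation of this general kind is the gradient flow dX/dt = -gamma * A^T (A X - I), which converges to the Moore-Penrose inverse when A has full column rank; the numpy sketch below integrates it with Euler steps from a zero initial state. The matrix, gain, and step size are assumptions, and this is not the authors' exact pair of networks.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))   # full column rank, so pinv(A) = (A^T A)^{-1} A^T

gamma = 1.0                   # gain of the dynamical system
dt = 0.01                     # Euler integration step
X = np.zeros((3, 5))          # zero initial state, as in the record

for _ in range(20000):
    # Gradient flow on (1/2) * ||A X - I||_F^2:  dX/dt = -gamma * A^T (A X - I)
    X -= dt * gamma * A.T @ (A @ X - np.eye(5))

print("max |X - pinv(A)| =", np.abs(X - np.linalg.pinv(A)).max())
```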

  8. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  9. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  10. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools, robust to input noise and distortion and able to exploit long-range contextual information, and would seem ideally suited to such problems. However, their role in large-scale sequence labelling systems has so far been auxiliary. The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...
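    A connectionist temporal classification (CTC) output layer, the first innovation mentioned, can be wired to a recurrent network in a few lines; the PyTorch sketch below assumes torch is available, and all sizes and data are toy placeholders rather than anything from the book.

```python
import torch
import torch.nn as nn

# Toy sizes: feature dim, hidden dim, number of labels (index 0 reserved for the CTC blank).
n_feats, n_hidden, n_labels = 13, 32, 6

rnn = nn.LSTM(input_size=n_feats, hidden_size=n_hidden)   # (T, N, n_feats) -> (T, N, n_hidden)
readout = nn.Linear(n_hidden, n_labels)
ctc = nn.CTCLoss(blank=0)

T, N = 50, 4                                   # input length, batch size
x = torch.randn(T, N, n_feats)                 # unsegmented input sequences
targets = torch.randint(1, n_labels, (N, 10))  # target label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)
for step in range(100):
    h, _ = rnn(x)
    log_probs = readout(h).log_softmax(dim=-1)   # (T, N, n_labels)
    loss = ctc(log_probs, targets, input_lengths, target_lengths)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final CTC loss:", float(loss))
```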

  11. Prediction of Bladder Cancer Recurrences Using Artificial Neural Networks

    Science.gov (United States)

    Zulueta Guerrero, Ekaitz; Garay, Naiara Telleria; Lopez-Guede, Jose Manuel; Vilches, Borja Ayerdi; Iragorri, Eider Egilegor; Castaños, David Lecumberri; de La Hoz Rastrollo, Ana Belén; Peña, Carlos Pertusa

    Even though considerable advances have been made in the field of early diagnosis, there is no simple, cheap and non-invasive method that can be applied to the clinical monitoring of bladder cancer patients. Moreover, bladder cancer recurrences, the reappearance of the tumour after its surgical resection, cannot be predicted in the current clinical setting. In this study, Artificial Neural Networks (ANN) were used to assess how different combinations of classical clinical parameters (stage-grade and age) and two urinary markers (growth factor and pro-inflammatory mediator) could predict post-surgical recurrences in bladder cancer patients. Different ANN methods, input parameter combinations and recurrence-related output variables were used, and the resulting positive and negative prediction rates were compared. The MultiLayer Perceptron (MLP) was selected as the most predictive model, and the urinary markers showed the highest sensitivity, correctly predicting 50% of the patients who would recur in a 2-year follow-up period.
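    A small scikit-learn sketch of this kind of experiment, an MLP fed clinical variables and urinary markers and scored by sensitivity, is given below; the synthetic data, feature coding, and network size are invented placeholders and do not reflect the study's cohort.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study's inputs: stage-grade, age, and two urinary markers.
n = 400
X = np.column_stack([
    rng.integers(1, 4, n),          # stage-grade category (placeholder coding)
    rng.normal(65, 10, n),          # age
    rng.lognormal(0.0, 1.0, n),     # growth factor level
    rng.lognormal(0.0, 1.0, n),     # pro-inflammatory mediator level
])
# Synthetic outcome: recurrence within 2 years, loosely tied to the marker levels.
p = 1 / (1 + np.exp(-(0.8 * np.log(X[:, 2]) + 0.6 * np.log(X[:, 3]) - 0.5)))
y = (rng.random(n) < p).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
clf.fit(X_train, y_train)

# Sensitivity: fraction of patients who recur that the model flags correctly.
print("sensitivity:", recall_score(y_test, clf.predict(X_test)))
```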

  12. Pacemaker Therapy in Patients With Neurally Mediated Syncope and Documented Asystole: Third International Study on Syncope of Uncertain Etiology (ISSUE-3): A Randomized Trial

    NARCIS (Netherlands)

    Brignole, Michele; Menozzi, Carlo; Moya, Angel; Andresen, Dietrich; Blanc, Jean Jacques; Krahn, Andrew D.; Wieling, Wouter; Beiras, Xulio; Deharo, Jean Claude; Russo, Vitantonio; Tomaino, Marco; Sutton, Richard; Tomaino, M.; Pescoller, F.; Donateo, P.; Oddone, D.; Russo, V.; Pierri, F.; Matino, M. G.; Vitale, E.; Massa, R.; Piccinni, G.; Melissano, D.; Menozzi, C.; Lolli, G.; Gulizia, M.; Francese, M.; Iorfida, M.; Golzio, P.; Gaggioli, G.; Laffi, M.; Rabjoli, F.; Cecchinato, C.; Ungar, A.; Rafanelli, M.; Chisciotti, V.; Morrione, A.; del Rosso, A.; Guernaccia, V.; Palella, M.; D'Agostino, C.; Campana, A.; Brigante, M.; Miracapillo, G.; Addonisio, L.; Proclemer, A.; Facchin, D.; Vado, A.; Knops, R. E.; Dekker, L. R. C.

    2012-01-01

    Background: The efficacy of cardiac pacing for prevention of syncopal recurrences in patients with neurally mediated syncope is controversial. We wanted to determine whether pacing therapy reduces syncopal recurrences in patients with severe asystolic neurally mediated syncope. Methods and...

  13. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:366–381, 2016 (ACML 2016). Collaborative Recurrent Neural Networks for Dynamic Recommender Systems. ...an unprecedented scale. Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating... Keywords: Recurrent Neural Network, Recommender System, Neural Language Model, Collaborative Filtering. 1. Introduction: As ever larger parts of the population

  14. Analysis of Recurrent Analog Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    1998-06-01

    Full Text Available In this paper, an original rigorous analysis of recurrent analog neural networks, which are built from opamp neurons, is presented. The analysis, which is based on an approximate model of the operational amplifier, reveals the causes of possible non-stable states and makes it possible to determine the convergence properties of the network. The results of the analysis are discussed with a view to enabling the development of original robust and fast analog networks. In the analysis, special attention is paid to examining the influence of real circuit elements and of the statistical parameters of the processed signals on the parameters of the network.

  15. Adaptive Filtering Using Recurrent Neural Networks

    Science.gov (United States)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
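    For contrast with the neural filter described above, a minimal scalar Kalman filter in numpy makes the linear-Gaussian assumptions of the standard baseline explicit; the process and noise parameters below are arbitrary toy values, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-Gaussian model the standard Kalman filter assumes:
#   x_t = x_{t-1} + w_t,  w_t ~ N(0, Q)   (process model)
#   y_t = x_t + v_t,      v_t ~ N(0, R)   (measurement model)
Q, R, T = 0.01, 0.5, 200
x_true = np.cumsum(rng.normal(0, np.sqrt(Q), T))
y = x_true + rng.normal(0, np.sqrt(R), T)

x_hat, P = 0.0, 1.0           # state estimate and its variance
estimates = []
for yt in y:
    # Predict step.
    P = P + Q
    # Update step.
    K = P / (P + R)           # Kalman gain
    x_hat = x_hat + K * (yt - x_hat)
    P = (1 - K) * P
    estimates.append(x_hat)

mse_filter = np.mean((np.array(estimates) - x_true) ** 2)
mse_raw = np.mean((y - x_true) ** 2)
print(f"measurement MSE: {mse_raw:.3f}   Kalman-filtered MSE: {mse_filter:.3f}")
```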

  16. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  17. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  18. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    A deep learning approach has been widely applied in sequence modeling problems. In terms of automatic speech recognition (ASR), its performance has been significantly improved by larger speech corpora and deeper neural networks. In particular, recurrent neural networks and deep convolutional neural networks have been applied successfully in ASR. Given the arising problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  19. Precipitation Nowcast using Deep Recurrent Neural Network

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2016-12-01

    An accurate precipitation nowcast (0-6 hours) with a fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most of the popular approaches used for forecasting precipitation can be categorized into two groups. One type of precipitation forecast relies on numerical modeling of the physical dynamics of the atmosphere and the other is based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful Deep Recurrent Neural Network, termed the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. Then, a 0-6 hour precipitation nowcast is produced using the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, in which the CTBT nowcast is used as the PERSIANN algorithm's raw input. Two case studies over the continental U.S. have been conducted that demonstrate the improvement of the proposed approach as compared to a classical Feed Forward Neural Network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, handling of vanishing gradients during model learning, and working with sparse data. The studies show that the LSTM model performs better than the other methods, and it is able to learn the temporal evolution of the precipitation events through over 1000 time lags. The uniqueness of the PERSIANN algorithm enables an alternative precipitation nowcast approach as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.

  20. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    A hybrid of the two called the Elman-Jordan (or multi-recurrent) neural network is also being used. In this study, we evaluated the performance of these neural networks on three established benchmark time series prediction problems. Results from the experiments showed that the Jordan neural network performed significantly ...

  1. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve (AUC) of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, an AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and an AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernova type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.

  2. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) calculates the probability of a word sequence and provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
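    The Gaussian prior over the RNN-LM parameters turns the training objective into an L2-penalized (regularized) cross-entropy; the PyTorch sketch below shows that maximum a posteriori objective on a toy word sequence. Torch is assumed available, the sizes are arbitrary, and the Hessian-based hyperparameter estimation from the record is not reproduced.

```python
import torch
import torch.nn as nn

vocab, emb, hidden, alpha = 100, 16, 32, 1e-4   # alpha: precision of the Gaussian prior

embed = nn.Embedding(vocab, emb)
rnn = nn.RNN(emb, hidden)
readout = nn.Linear(hidden, vocab)
params = list(embed.parameters()) + list(rnn.parameters()) + list(readout.parameters())
opt = torch.optim.SGD(params, lr=0.1)

# Toy word-id sequence; predict each next word from the history.
seq = torch.randint(0, vocab, (30, 1))          # (time, batch=1)
inputs, targets = seq[:-1], seq[1:].squeeze(1)

for step in range(50):
    h, _ = rnn(embed(inputs))                   # (T-1, 1, hidden)
    logits = readout(h).squeeze(1)              # (T-1, vocab)
    nll = nn.functional.cross_entropy(logits, targets)
    # Gaussian prior on the weights -> L2 penalty: the regularized cross-entropy of the record.
    l2 = sum((p ** 2).sum() for p in params)
    loss = nll + 0.5 * alpha * l2
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final regularized objective:", float(loss))
```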

  3. Character recognition from trajectory by recurrent spiking neural networks.

    Science.gov (United States)

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks is still a problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed, in which varying time ranges of input streams are used in different recurrent layers. This is able to improve the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from University of Edinburgh. The results show that our method can achieve a higher average recognition accuracy than existing methods.

  4. Representation of linguistic form and function in recurrent neural networks

    NARCIS (Netherlands)

    Kadar, Akos; Chrupala, Grzegorz; Alishahi, Afra

    2017-01-01

    We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture

  5. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, one layer of nonlinear hidden units and a linear output unit applied to prediction of discrete time...... series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical...... of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal...

  6. Energy Complexity of Recurrent Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří

    2014-01-01

    Vol. 26, No. 5 (2014), pp. 953-973. ISSN 0899-7667. R&D Projects: GA ČR GAP202/10/1333. Institutional support: RVO:67985807. Keywords: neural network; finite automaton; energy complexity; optimal size. Subject RIV: IN - Informatics, Computer Science. Impact factor: 2.207, year: 2014

  7. Neural Machine Translation with Recurrent Attention Modeling

    OpenAIRE

    Yang, Zichao; Hu, Zhiting; Deng, Yuntian; Dyer, Chris; Smola, Alex

    2016-01-01

    Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relat...

  8. Bach in 2014: Music Composition with Recurrent Neural Network

    OpenAIRE

    Liu, I-Ting; Ramakrishnan, Bhiksha

    2014-01-01

    We propose a framework for computer music composition that uses resilient propagation (RProp) and a long short-term memory (LSTM) recurrent neural network. In this paper, we show that the LSTM network properly learns the structure and characteristics of music pieces by demonstrating its ability to recreate music. We also show that predicting existing music using RProp outperforms backpropagation through time (BPTT).

  9. Probing the basins of attraction of a recurrent neural network

    NARCIS (Netherlands)

    Heerema, M.; van Leeuwen, W.A.

    2000-01-01

    Analytical expressions for the weights $w_{ij}(b)$ of the connections of a recurrent neural network are found by taking explicitly into account basins of attraction, the size of which is characterized by a basin parameter $b$. It is shown that a network with $b \

  10. Bayesian model ensembling using meta-trained recurrent neural networks

    NARCIS (Netherlands)

    Ambrogioni, L.; Berezutskaya, Y.; Güçlü, U.; Borne, E.W.P. van den; Güçlütürk, Y.; Gerven, M.A.J. van; Maris, E.G.G.

    2017-01-01

    In this paper we demonstrate that a recurrent neural network meta-trained on an ensemble of arbitrary classification tasks can be used as an approximation of the Bayes optimal classifier. This result is obtained by relying on the framework of ε-free approximate Bayesian inference, where the Bayesian

  11. Railway track circuit fault diagnosis using recurrent neural networks

    NARCIS (Netherlands)

    de Bruin, T.D.; Verbert, K.A.J.; Babuska, R.

    2017-01-01

    Timely detection and identification of faults in railway track circuits are crucial for the safety and availability of railway networks. In this paper, the use of the long short-term memory (LSTM) recurrent neural network is proposed to accomplish these tasks based on the commonly available

  12. A recurrent neural network with ever changing synapses

    NARCIS (Netherlands)

    Heerema, M.; van Leeuwen, W.A.

    2000-01-01

    A recurrent neural network with noisy input is studied analytically, on the basis of a Discrete Time Master Equation. The latter is derived from a biologically realizable learning rule for the weights of the connections. In a numerical study it is found that the fixed points of the dynamics of the

  13. Active Control of Sound based on Diagonal Recurrent Neural Network

    NARCIS (Netherlands)

    Jayawardhana, Bayu; Xie, Lihua; Yuan, Shuqing

    2002-01-01

    Recurrent neural networks are known for their dynamic mapping and are better suited to nonlinear dynamical systems. A nonlinear controller may be needed in cases where the actuators exhibit nonlinear characteristics, or in cases where the structure to be controlled exhibits nonlinear behavior. The

  14. Convolutional over Recurrent Encoder for Neural Machine Translation

    Directory of Open Access Journals (Sweden)

    Dakwale Praveen

    2017-06-01

    Full Text Available Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English to German translation demonstrate that our approach can achieve significant improvements over a standard RNN-based baseline.

  15. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
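    The sketch below follows the spirit of the described algorithm on a harmonic oscillator: a feedforward approximation of the vector field is fitted and then iterated in closed loop as a discrete-time recurrent simulator. Random-feature least squares stands in for backpropagation training, and the system, sampling region, and step size are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target dynamical system: harmonic oscillator, dx/dt = f(x) with x = (position, velocity).
def f(x):
    return np.array([x[1], -x[0]])

# Sample the vector field on a region of state space.
X = rng.uniform(-2, 2, size=(2000, 2))
F = np.array([f(x) for x in X])

# Feedforward approximation of f: random tanh features plus a linear least-squares readout.
H = 200
W, b = rng.normal(size=(H, 2)), rng.normal(size=H)
phi = lambda Z: np.tanh(Z @ W.T + b)
W_out, *_ = np.linalg.lstsq(phi(X), F, rcond=None)

# Recast as a recurrent (closed-loop) simulator: x_{t+1} = x_t + dt * net(x_t).
dt, steps = 0.01, 2000
x_net = np.array([1.0, 0.0])
x_ref = np.array([1.0, 0.0])
for _ in range(steps):
    x_net = x_net + dt * (phi(x_net[None, :]) @ W_out)[0]
    x_ref = x_ref + dt * f(x_ref)       # Euler reference on the true system

print("state after", steps, "steps  net:", np.round(x_net, 3), " reference:", np.round(x_ref, 3))
```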

  16. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  17. SORN: a self-organizing recurrent neural network

    Directory of Open Access Journals (Sweden)

    Andreea Lazar

    2009-10-01

    Full Text Available Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.

  18. Relation Classification via Recurrent Neural Network

    OpenAIRE

    Zhang, Dongxu; Wang, Dong

    2015-01-01

    Deep learning has gained much success in sentence-level relation classification. For example, convolutional neural networks (CNN) have delivered competitive performance without the heavy feature engineering required by conventional pattern-based methods. Thus, many works based on CNN structures have been produced. However, a key issue that has not been well addressed by CNN-based methods is the lack of capability to learn temporal features, especially long-distance dependency between no...

  19. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. This naturally makes one ponder how to generalize the first-order Hopfield neural networks to the fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  20. Analysis of surface ozone using a recurrent neural network.

    Science.gov (United States)

    Biancofiore, Fabio; Verdecchia, Marco; Di Carlo, Piero; Tomassetti, Barbara; Aruffo, Eleonora; Busilacchio, Marcella; Bianco, Sebastiano; Di Tommaso, Sinibaldo; Colangeli, Carlo

    2015-05-01

    Hourly concentrations of ozone (O₃) and nitrogen dioxide (NO₂) have been measured for 16 years, from 1998 to 2013, in a seaside town in central Italy. The seasonal trends of O₃ and NO₂ recorded in this period have been studied. Furthermore, we used the data collected during one year (2005) to define the characteristics of a multiple linear regression model and a neural network model. Both models are used to model the hourly O₃ concentration under two scenarios: 1) using only meteorological parameters as inputs, and 2) adding photochemical parameters to those of the first scenario. In order to evaluate the performance of the models, four statistical criteria are used: correlation coefficient, fractional bias, normalized mean squared error and factor of two. All the criteria show that the neural network gives better results, compared to the regression model, in all the model scenarios. Predictions of O₃ have been carried out by many authors using a feed-forward neural architecture. In this paper we show that a recurrent architecture significantly improves the performance of neural predictors. Using only the meteorological parameters as input, the recurrent architecture shows performance better than that of the multiple linear regression model that uses meteorological and photochemical data as input, making the neural network model with recurrent architecture a more useful tool in areas where only weather measurements are available. Finally, we used the neural network model to forecast the O₃ hourly concentrations 1, 3, 6, 12, 24 and 48 h ahead. The performance of the model in predicting O₃ levels is discussed. Emphasis is given to the possibility of using the neural network model in operational ways in areas where only meteorological data are available, in order to predict O₃ also at sites where it has not yet been measured. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Iterative free-energy optimization for recurrent neural networks (INFERNO)

    Science.gov (United States)

    2017-01-01

    The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generation of long neuronal chains. Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory as a model of the basal ganglia to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle. PMID:28282439

  2. A recurrent neural network for solving bilevel linear programming problem.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for the BLPP, the model has the fewest state variables and a simple structure. Using nonsmooth analysis, the theory of differential inclusions, and a Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of the BLPP under certain conditions. Finally, numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.

  3. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models-also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.

  4. Global robust stability of delayed recurrent neural networks

    International Nuclear Information System (INIS)

    Cao Jinde; Huang Deshuang; Qu Yuzhong

    2005-01-01

    This paper is concerned with the global robust stability of a class of delayed interval recurrent neural networks which contain time-invariant uncertain parameters whose values are unknown but bounded in given compact sets. A new sufficient condition is presented for the existence, uniqueness, and global robust stability of equilibria for interval neural networks with time delays by constructing Lyapunov functional and using matrix-norm inequality. An error is corrected in an earlier publication, and an example is given to show the effectiveness of the obtained results

  5. Predicting local field potentials with recurrent neural networks.

    Science.gov (United States)

    Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter

    2016-08-01

    We present a Recurrent Neural Network using LSTM (Long Short-Term Memory) that is capable of modeling and predicting Local Field Potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs for 1, 10, and 100 milliseconds forward in time. Our results show that prediction using LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.

  6. Web server's reliability improvements using recurrent neural networks

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan

    2012-01-01

    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity for the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (t...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...

  7. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  8. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.

  9. A Recurrent Neural Network for Nonlinear Fractional Programming

    Directory of Open Access Journals (Sweden)

    Quan-Ju Zhang

    2012-01-01

    Full Text Available This paper presents a novel recurrent continuous-time neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to further demonstrate the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.

  10. Ideomotor feedback control in a recurrent neural network.

    Science.gov (United States)

    Galtier, Mathieu

    2015-06-01

    The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should predict reliably its incoming stimuli; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show a promising performance of the approach, which can be turned into a local and better "biologically plausible" algorithm.

  11. A novel word spotting method based on recurrent neural networks.

    Science.gov (United States)

    Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst

    2012-02-01

    Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.

  12. Convolutional neural networks for prostate cancer recurrence prediction

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Arora, Ashish; Kumar, Abhay; Gupta, Sanchit; Sethi, Amit; Gann, Peter H.

    2017-03-01

    Accurate prediction of the treatment outcome is important for cancer treatment planning. We present an approach to predict prostate cancer (PCa) recurrence after radical prostatectomy using tissue images. We used a cohort whose case vs. control (recurrent vs. non-recurrent) status had been determined using post-treatment follow-up. Further, to aid the development of novel biomarkers of PCa recurrence, cases and controls were paired based on matching of other predictive clinical variables such as Gleason grade, stage, age, and race. For this cohort, a tissue resection microarray with up to four cores per patient was available. The proposed approach is based on deep learning, and its novelty lies in the use of two separate convolutional neural networks (CNNs): one to detect individual nuclei even in crowded areas, and the other to classify them. To detect nuclear centers in an image, the first CNN predicts the distance transform of the underlying (but unknown) multi-nuclear map from the input H&E image. The second CNN classifies the patches centered at nuclear centers into those belonging to cases or controls. Voting across patches extracted from image(s) of a patient yields the probability of recurrence for the patient. The proposed approach gave an AUC of 0.81 for a sample of 30 recurrent cases and 30 non-recurrent controls, after being trained on an independent set of 80 case-control pairs. If validated further, such an approach might help in choosing between a combination of treatment options such as active surveillance, radical prostatectomy, radiation, and hormone therapy. It can also generalize to the prediction of treatment outcomes in other cancers.

  13. Sensitivity analysis of linear programming problem through a recurrent neural network

    Science.gov (United States)

    Das, Raja

    2017-11-01

    In this paper we study the recurrent neural network for solving linear programming problems. To achieve optimality in accuracy and also in computational effort, an algorithm is presented. We investigate the sensitivity analysis of linear programming problem through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.

  14. Fine-tuning and the stability of recurrent neural networks.

    Directory of Open Access Journals (Sweden)

    David MacNeil

    Full Text Available A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely-tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.

  15. Estimating Ads’ Click through Rate with Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Chen Qiao-Hong

    2016-01-01

    Full Text Available With the development of the Internet, online advertising spreads across every corner of the world, and the ads' click-through rate (CTR) estimation is an important method for improving online advertising revenue. Compared with linear models, nonlinear models can learn much more complex relationships between a large number of nonlinear characteristics, so as to improve the accuracy of the estimation of the ads' CTR. The recurrent neural network (RNN) based on Long-Short Term Memory (LSTM) is an improved model of the feedback neural network with a ring structure. The model overcomes the vanishing-gradient problem of the general RNN. Experiments show that the RNN based on LSTM outperforms the linear models, and it can effectively improve the estimation of the ads' click-through rate.

  16. Delay-slope-dependent stability results of recurrent neural networks.

    Science.gov (United States)

    Li, Tao; Zheng, Wei Xing; Lin, Chong

    2011-12-01

    By using the fact that the neuron activation functions are sector bounded and nondecreasing, this brief presents a new method, named the delay-slope-dependent method, for stability analysis of a class of recurrent neural networks with time-varying delays. This method includes more information on the slope of neuron activation functions and fewer matrix variables in the constructed Lyapunov-Krasovskii functional. Then some improved delay-dependent stability criteria with less computational burden and conservatism are obtained. Numerical examples are given to illustrate the effectiveness and the benefits of the proposed method.

  17. Very deep recurrent convolutional neural network for object recognition

    Science.gov (United States)

    Brahimi, Sourour; Ben Aoun, Najib; Ben Amar, Chokri

    2017-03-01

    In recent years, computer vision has become a very active field. This field includes methods for processing, analyzing, and understanding images. The most challenging problems in computer vision are image classification and object recognition. This paper presents a new approach for the object recognition task. This approach exploits the success of the Very Deep Convolutional Neural Network for object recognition. In fact, it improves the convolutional layers by adding recurrent connections. The proposed approach was evaluated on two object recognition benchmarks: Pascal VOC 2007 and CIFAR-10. The experimental results prove the efficiency of our method in comparison with the state-of-the-art methods.

  18. Optimizing Markovian modeling of chaotic systems with recurrent neural networks

    International Nuclear Information System (INIS)

    Cechin, Adelmo L.; Pechmann, Denise R.; Oliveira, Luiz P.L. de

    2008-01-01

    In this paper, we propose a methodology for optimizing the modeling of a one-dimensional chaotic time series with a Markov chain. The model is extracted from a recurrent neural network trained on the attractor reconstructed from the data set. Each state of the obtained Markov chain is a region of the reconstructed state space in which the dynamics is approximated by a specific piecewise linear map obtained from the network. The Markov chain represents the dynamics of the time series in its statistical essence. An application to a time series generated by the Lorenz system is included

  19. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle stage, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  20. Classification of conductance traces with recurrent neural networks

    Science.gov (United States)

    Lauritzen, Kasper P.; Magyarkuti, András; Balogh, Zoltán; Halbritter, András; Solomon, Gemma C.

    2018-02-01

    We present a new automated method for structural classification of the traces obtained in break junction experiments. Using recurrent neural networks trained on the traces of minimal cross-sectional area in molecular dynamics simulations, we successfully separate the traces into two classes: point contact or nanowire. This is done without any assumptions about the expected features of each class. The trained neural network is applied to experimental break junction conductance traces, and it separates the classes as well as the previously used experimental methods. The effect of using partial conductance traces is explored, and we show that the method performs equally well using full or partial traces (as long as the trace just prior to breaking is included). When only the initial part of the trace is included, the results are still better than random chance. Finally, we show that the neural network classification method can be used to classify experimental conductance traces without using simulated results for training, but instead training the network on a few representative experimental traces. This offers a tool to recognize some characteristic motifs of the traces, which can be hard to find by simple data selection algorithms.

  1. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam

    2013-10-01

    Artificial neural networks have the ability to learn by example and are capable of solving problems that are hard to solve using ordinary rule-based programming. They have many design parameters that affect their performance, such as the number and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design-of-experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that the best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate the experiment configuration multiple times to overcome the randomness in the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that are on the Pareto optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.

  2. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Region stability analysis and tracking control of memristive recurrent neural network.

    Science.gov (United States)

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to emulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Simulations first show that the memristive recurrent neural network has richer dynamics than the traditional recurrent neural network. It is then derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub neural network. Finally, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A novel recurrent neural network with finite-time convergence for linear programming.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
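
    As a loose illustration of solving a linear program with a continuous-time gradient dynamic, the following NumPy sketch integrates a simple projected primal-dual flow for min c'x subject to Ax = b, x >= 0. It is not the finite-time scheme proved in the letter; the augmentation weight, step size, and the toy problem itself are arbitrary choices made for the example.

    import numpy as np

    # Toy LP: minimize c'x subject to Ax = b, x >= 0 (optimum at the vertex [1, 0]).
    c = np.array([1.0, 2.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])

    x = np.array([0.5, 0.5])       # primal state (neuron activations)
    y = np.zeros(1)                # dual state
    eta, rho = 0.05, 1.0           # Euler step size and augmentation weight

    for _ in range(4000):          # discretized (Euler) integration of the network dynamics
        grad_x = c + A.T @ y + rho * A.T @ (A @ x - b)
        x = np.maximum(0.0, x - eta * grad_x)    # projection keeps x >= 0
        y = y + eta * (A @ x - b)

    print(x)                       # approaches the optimal vertex [1, 0]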

  5. Recurrent Neural Network Approach Based on the Integral Representation of the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Živković, Ivan S; Wei, Yimin

    2015-10-01

    In this letter, we present the dynamical equation and corresponding artificial recurrent neural network for computing the Drazin inverse for arbitrary square real matrix, without any restriction on its eigenvalues. Conditions that ensure the stability of the defined recurrent neural network as well as its convergence toward the Drazin inverse are considered. Several illustrative examples present the results of computer simulations.

  6. A recurrent neural network for adaptive beamforming and array correction.

    Science.gov (United States)

    Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen

    2016-08-01

    In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is described as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values within a feasible region derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to search for exact solutions under large-scale constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Global robust exponential stability analysis for interval recurrent neural networks

    International Nuclear Information System (INIS)

    Xu Shengyuan; Lam, James; Ho, Daniel W.C.; Zou Yun

    2004-01-01

    This Letter investigates the problem of robust global exponential stability analysis for interval recurrent neural networks (RNNs) via the linear matrix inequality (LMI) approach. The values of the time-invariant uncertain parameters are assumed to be bounded within given compact sets. An improved condition for the existence of a unique equilibrium point and its global exponential stability of RNNs with known parameters is proposed. Based on this, a sufficient condition for the global robust exponential stability for interval RNNs is obtained. Both of the conditions are expressed in terms of LMIs, which can be checked easily by various recently developed convex optimization algorithms. Examples are provided to demonstrate the reduced conservatism of the proposed exponential stability condition

  8. Cascaded bidirectional recurrent neural networks for protein secondary structure prediction.

    Science.gov (United States)

    Chen, Jinmiao; Chaudhari, Narendra

    2007-01-01

    Protein secondary structure (PSS) prediction is an important topic in bioinformatics. Our study on a large set of non-homologous proteins shows that long-range interactions commonly exist and negatively affect PSS prediction. Besides, we also reveal strong correlations between secondary structure (SS) elements. In order to take into account the long-range interactions and SS-SS correlations, we propose a novel prediction system based on a cascaded bidirectional recurrent neural network (BRNN). We compare the cascaded BRNN against two other BRNN architectures, namely the original BRNN architecture used for speech recognition and Pollastri's BRNN that was proposed for PSS prediction. Our cascaded BRNN achieves an overall three-state accuracy Q3 of 74.38%, and reaches a high Segment OVerlap (SOV) of 66.0455. It outperforms the original BRNN and Pollastri's BRNN in both Q3 and SOV. Specifically, it improves the SOV score by 4-6%.

  9. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    Directory of Open Access Journals (Sweden)

    Mehrshad Salmasi

    2012-07-01

    Full Text Available Active noise control is based on the destructive interference between the primary noise and the noise generated from the secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks is evaluated in active cancellation of sound noise. For this purpose, feedforward and recurrent neural networks are designed and trained. After training, the performance of the feedforward and recurrent networks in noise attenuation is compared. We use the Elman network as the recurrent neural network. For simulations, noise signals from the SPIB database are used. In order to compare the networks appropriately, an equal number of layers and neurons is considered for the networks. Moreover, training and test samples are similar. Simulation results show that both feedforward and recurrent neural networks present good performance in noise cancellation, and that the recurrent neural network attenuates noise better than the feedforward network.

  10. Recurrent Neural Networks for Multivariate Time Series with Missing Values.

    Science.gov (United States)

    Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan

    2018-04-17

    Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
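
    As a rough illustration of the masking and time-interval representations described above, the following PyTorch sketch implements a much-simplified, GRU-D-style cell: a missing feature is decayed from its last observed value toward the empirical mean according to the time elapsed since it was observed, and the mask is concatenated to the imputed input. This is an assumption-laden sketch (the actual GRU-D also decays the hidden state and uses its own parameterization), intended only to convey the mechanism.

    import torch
    import torch.nn as nn

    class SimpleGRUDCell(nn.Module):
        """Simplified GRU-D-style cell: decay the last observation toward the empirical mean
        according to the time since it was observed, then feed [imputed value, mask] to a GRU cell."""
        def __init__(self, n_feat, n_hidden, feat_mean):
            super().__init__()
            self.gamma = nn.Linear(n_feat, n_feat)          # input-decay parameters
            self.gru = nn.GRUCell(2 * n_feat, n_hidden)
            self.register_buffer("feat_mean", feat_mean)    # empirical mean of each feature

        def forward(self, x, mask, delta, x_last, h):
            # x: current values (placeholders where missing), mask: 1 if observed,
            # delta: time since last observation, x_last: last observed values, h: hidden state
            decay = torch.exp(-torch.relu(self.gamma(delta)))
            x_hat = mask * x + (1 - mask) * (decay * x_last + (1 - decay) * self.feat_mean)
            h = self.gru(torch.cat([x_hat, mask], dim=-1), h)
            x_last = mask * x + (1 - mask) * x_last         # carry forward last observed value
            return h, x_last

    # Hypothetical single step: 4 features, batch of 2, hidden size 16.
    cell = SimpleGRUDCell(4, 16, feat_mean=torch.zeros(4))
    h, x_last = cell(torch.randn(2, 4), torch.ones(2, 4), torch.rand(2, 4),
                     torch.zeros(2, 4), torch.zeros(2, 16))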

  11. Deep Recurrent Neural Networks for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Abdulmajid Murad

    2017-11-01

    Full Text Available Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machines (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  12. Recurrent Neural Network Applications for Astronomical Time Series

    Science.gov (United States)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations, using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.
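
    To make the echo state network idea concrete, the following NumPy sketch builds a random reservoir, drives it with a toy series, and fits a ridge-regression readout for one-step-ahead prediction. The spectral radius, input scaling, and regularization values are arbitrary assumptions, and the Bayesian optimization of hyperparameters described in the talk is not shown.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_reservoir(n_in, n_res, spectral_radius=0.9, input_scale=0.5):
        # Random reservoir, rescaled so its largest eigenvalue magnitude equals spectral_radius.
        W = rng.standard_normal((n_res, n_res))
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        W_in = input_scale * rng.standard_normal((n_res, n_in))
        return W_in, W

    def run_reservoir(u, W_in, W):
        # u: (T, n_in) input sequence; returns the (T, n_res) sequence of reservoir states.
        states, x = np.zeros((len(u), W.shape[0])), np.zeros(W.shape[0])
        for t, ut in enumerate(u):
            x = np.tanh(W_in @ ut + W @ x)
            states[t] = x
        return states

    # One-step-ahead prediction of a toy series with a ridge-regression readout.
    series = np.sin(0.2 * np.arange(1000))[:, None]
    W_in, W = make_reservoir(1, 200)
    X, Y = run_reservoir(series[:-1], W_in, W), series[1:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ Y)
    pred = X @ W_out            # pred[t] approximates series[t + 1]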

  13. Drawing and Recognizing Chinese Characters with Recurrent Neural Network.

    Science.gov (United States)

    Zhang, Xu-Yao; Yin, Fei; Zhang, Yan-Ming; Liu, Cheng-Lin; Bengio, Yoshua

    2018-04-01

    Recent deep learning based approaches have achieved great success on handwriting recognition. Chinese characters are among the most widely adopted writing systems in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect for understanding a language, another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework by using the recurrent neural network (RNN) as both a discriminative model for recognizing Chinese characters and a generative model for drawing (generating) Chinese characters. To recognize Chinese characters, previous methods usually adopt the convolutional neural network (CNN) models which require transforming the online handwriting trajectory into image-like representations. Instead, our RNN based approach is an end-to-end system which directly deals with the sequential structure and does not require any domain-specific knowledge. With the RNN system (combining an LSTM and GRU), state-of-the-art performance can be achieved on the ICDAR-2013 competition database. Furthermore, under the RNN framework, a conditional generative model with character embedding is proposed for automatically drawing recognizable Chinese characters. The generated characters (in vector format) are human-readable and also can be recognized by the discriminative RNN model with high accuracy. Experimental results verify the effectiveness of using RNNs as both generative and discriminative models for the tasks of drawing and recognizing Chinese characters.

  14. Recurrent Neural Networks to Correct Satellite Image Classification Maps

    Science.gov (United States)

    Maggiori, Emmanuel; Charpiat, Guillaume; Tarabalka, Yuliya; Alliez, Pierre

    2017-09-01

    While initially devised for image categorization, convolutional neural networks (CNNs) are being increasingly used for the pixelwise semantic labeling of images. However, the very nature of the most common CNN architectures makes them good at recognizing but poor at localizing objects precisely. This problem is magnified in the context of aerial and satellite image labeling, where a spatially fine object outlining is of paramount importance. Different iterative enhancement algorithms have been presented in the literature to progressively improve the coarse CNN outputs, seeking to sharpen object boundaries around real image edges. However, one must carefully design, choose and tune such algorithms. Instead, our goal is to directly learn the iterative process itself. For this, we formulate a generic iterative enhancement process inspired by partial differential equations, and observe that it can be expressed as a recurrent neural network (RNN). Consequently, we train such a network from manually labeled data for our enhancement task. In a series of experiments we show that our RNN effectively learns an iterative process that significantly improves the quality of satellite image classification maps.

  15. Deep Recurrent Neural Networks for Human Activity Recognition.

    Science.gov (United States)

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  16. Global dissipativity of continuous-time recurrent neural networks with time delay

    International Nuclear Information System (INIS)

    Liao Xiaoxin; Wang Jun

    2003-01-01

    This paper addresses the global dissipativity of a general class of continuous-time recurrent neural networks. First, the concepts of global dissipativity and global exponential dissipativity are defined and elaborated. Next, the sets of global dissipativity and global exponential dissipativity are characterized using the parameters of recurrent neural network models. In particular, it is shown that the Hopfield network and cellular neural networks with or without time delays are dissipative systems

  17. Neural mechanisms mediating degrees of strategic uncertainty.

    Science.gov (United States)

    Nagel, Rosemarie; Brovelli, Andrea; Heinemann, Frank; Coricelli, Giorgio

    2018-01-01

    In social interactions, strategic uncertainty arises when the outcome of one's choice depends on the choices of others. An important question is whether strategic uncertainty can be resolved by assessing subjective probabilities to the counterparts' behavior, as if playing against nature, and thus transforming the strategic interaction into a risky (individual) situation. By means of functional magnetic resonance imaging with human participants we tested the hypothesis that choices under strategic uncertainty are supported by the neural circuits mediating choices under individual risk and deliberation in social settings (i.e. strategic thinking). Participants were confronted with risky lotteries and two types of coordination games requiring different degrees of strategic thinking of the kind 'I think that you think that I think etc.' We found that the brain network mediating risk during lotteries (anterior insula, dorsomedial prefrontal cortex and parietal cortex) is also engaged in the processing of strategic uncertainty in games. In social settings, activity in this network is modulated by the level of strategic thinking that is reflected in the activity of the dorsomedial and dorsolateral prefrontal cortex. These results suggest that strategic uncertainty is resolved by the interplay between the neural circuits mediating risk and higher order beliefs (i.e. beliefs about others' beliefs). © The Author(s) (2017). Published by Oxford University Press.

  18. Neurally mediated syncope in electroconvulsive therapy maintenance.

    Science.gov (United States)

    Arbaizar, Beatriz; Llorca, Javier

    2012-03-01

    Electroconvulsive therapy (ECT) is particularly valuable for reversing some types of depressive illness; nevertheless, it has some widely recognized adverse effects, such as short-term memory loss. Moreover, some articles have reported its potential association with falls; this literature is, however, scanty and consists mainly of case reports. We present the case of a man who was diagnosed with neurally mediated syncope at the age of 79 years during maintenance ECT. The patient showed a significant increase in syncope frequency during the period he was treated with ECT, followed by a dramatic decrease when ECT was discontinued.

  19. Recurrent Neural Network Based Boolean Factor Analysis and its Application to Word Clustering

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2009-01-01

    Roč. 20, č. 7 (2009), s. 1073-1086 ISSN 1045-9227 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.889, year: 2009

  20. A recurrent neural network based on projection operator for extended general variational inequalities.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde

    2010-06-01

    Based on the projection operator, a recurrent neural network is proposed for solving extended general variational inequalities (EGVIs). Sufficient conditions are provided to ensure the global convergence of the proposed neural network based on Lyapunov methods. Compared with the existing neural networks for variational inequalities, the proposed neural network is a modified version of the general projection neural network existing in the literature and capable of solving the EGVI problems. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed neural network.

  1. Application of recurrent neural networks for drought projections in California

    Science.gov (United States)

    Le, J. A.; El-Askary, H. M.; Allali, M.; Struppa, D. C.

    2017-05-01

    We use recurrent neural networks (RNNs) to investigate the complex interactions between the long-term trend in dryness and a projected, short but intense, period of wetness due to the 2015-2016 El Niño. Although it was forecasted that this El Niño season would bring significant rainfall to the region, our long-term projections of the Palmer Z Index (PZI) showed a continuing drought trend, contrasting with the 1998-1999 El Niño event. RNN training considered PZI data during 1896-2006 that was validated against the 2006-2015 period to evaluate the potential of extreme precipitation forecast. We achieved a statistically significant correlation of 0.610 between forecasted and observed PZI on the validation set for a lead time of 1 month. This gives strong confidence to the forecasted precipitation indicator. The 2015-2016 El Niño season proved to be relatively weak as compared with the 1997-1998, with a peak PZI anomaly of 0.242 standard deviations below historical averages, continuing drought conditions.

  2. Recurrent Neural Network Model for Constructive Peptide Design.

    Science.gov (United States)

    Müller, Alex T; Hiss, Jan A; Schneider, Gisbert

    2018-02-26

    We present a generative long short-term memory (LSTM) recurrent neural network (RNN) for combinatorial de novo peptide design. RNN models capture patterns in sequential data and generate new data instances from the learned context. Amino acid sequences represent a suitable input for these machine-learning models. Generative models trained on peptide sequences could therefore facilitate the design of bespoke peptide libraries. We trained RNNs with LSTM units on pattern recognition of helical antimicrobial peptides and used the resulting model for de novo sequence generation. Of these sequences, 82% were predicted to be active antimicrobial peptides compared to 65% of randomly sampled sequences with the same amino acid distribution as the training set. The generated sequences also lie closer to the training data than manually designed amphipathic helices. The results of this study showcase the ability of LSTM RNNs to construct new amino acid sequences within the applicability domain of the model and motivate their prospective application to peptide and protein design without the need for the exhaustive enumeration of sequence libraries.
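
    The generative use of an LSTM described above, sampling one residue at a time and feeding it back in, can be sketched as follows in PyTorch. The alphabet, start token, temperature, and network sizes are assumptions for illustration, and the training loop on known antimicrobial peptides is omitted, so an untrained model will only emit random-looking sequences.

    import torch
    import torch.nn as nn

    AA = "ACDEFGHIKLMNPQRSTVWY$"               # 20 amino acids plus an end-of-sequence token

    class SeqLSTM(nn.Module):
        def __init__(self, n_tokens=len(AA), emb=32, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(n_tokens, emb)
            self.lstm = nn.LSTM(emb, hidden, num_layers=2, batch_first=True)
            self.out = nn.Linear(hidden, n_tokens)

        def forward(self, idx, state=None):
            h, state = self.lstm(self.emb(idx), state)
            return self.out(h), state

    def sample(model, max_len=40, temperature=1.0):
        # Autoregressive sampling: draw a residue, feed it back in, stop at the end token.
        idx, state, seq = torch.zeros(1, 1, dtype=torch.long), None, []   # token 0 ('A') as an arbitrary start
        for _ in range(max_len):
            logits, state = model(idx, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = int(torch.multinomial(probs, 1))
            if AA[nxt] == "$":
                break
            seq.append(AA[nxt])
            idx = torch.tensor([[nxt]])
        return "".join(seq)

    print(sample(SeqLSTM()))                   # training on peptide sequences is omitted here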

  3. Multiplex visibility graphs to investigate recurrent neural network dynamics

    Science.gov (United States)

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-03-01

    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Successively, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
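
    For readers unfamiliar with horizontal visibility graphs, the following NumPy sketch shows the basic construction on one activation series: two samples are linked whenever every sample strictly between them lies below both. The multiplex assembly across neurons and the topological descriptors used in the paper are not shown, and the quadratic implementation is chosen for clarity rather than speed.

    import numpy as np

    def horizontal_visibility_graph(x):
        """Adjacency matrix of the horizontal visibility graph of a scalar series x:
        nodes i and j are linked iff every sample strictly between them is lower than both."""
        n = len(x)
        adj = np.zeros((n, n), dtype=bool)
        for i in range(n - 1):
            for j in range(i + 1, n):
                if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                    adj[i, j] = adj[j, i] = True
        return adj

    # Example: degree sequence of the HVG built from one neuron's activation time series.
    activations = np.random.default_rng(1).standard_normal(200)
    degrees = horizontal_visibility_graph(activations).sum(axis=1)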

  4. Spatial Clockwork Recurrent Neural Network for Muscle Perimysium Segmentation.

    Science.gov (United States)

    Xie, Yuanpu; Zhang, Zizhao; Sapkota, Manish; Yang, Lin

    2016-10-01

    Accurate segmentation of the perimysium plays an important role in the early diagnosis of many muscle diseases, because many such diseases involve inflammation of the perimysium. However, it remains a challenging task due to the complex appearance of the perimysium morphology and its ambiguity with respect to the background area. The muscle perimysium also exhibits long-range structure spanning the entire tissue, which makes it difficult for current local patch-based methods to capture this long-range context information. In this paper, we propose a novel spatial clockwork recurrent neural network (spatial CW-RNN) to address those issues. Specifically, we split the entire image into a set of non-overlapping image patches, and the semantic dependencies among them are modeled by the proposed spatial CW-RNN. Our method directly takes the 2D structure of the image into consideration and is capable of encoding the context information of the entire image into the local representation of each patch. Meanwhile, we leverage structured regression to assign one prediction mask rather than a single class label to each local patch, which enables both efficient training and testing. We extensively test our method for perimysium segmentation using digitized muscle microscopy images. Experimental results demonstrate the superiority of the novel spatial CW-RNN over other existing state-of-the-art methods.

  5. Fast computation with spikes in a recurrent neural network

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.; Seung, H. Sebastian

    2002-01-01

    Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: As soon as the winner spikes once, the computation is completed since no other neurons will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner
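
    The winner-take-all behavior described above can be illustrated with a toy discrete-time simulation of leaky integrate-and-fire neurons with self-excitation and instantaneous all-to-all inhibition. The parameter values below are arbitrary illustrative choices, not the regime analyzed in the paper, and the Euler discretization is only a rough stand-in for the instantaneous coupling assumed there.

    import numpy as np

    rng = np.random.default_rng(0)
    N, dt, steps = 10, 0.01, 20000
    threshold, alpha, beta = 1.0, 0.3, 2.0      # spike threshold, self-excitation, inhibition
    I = rng.uniform(0.8, 1.2, N)                # constant external drive per neuron
    V = rng.uniform(0.0, 0.5, N)                # random initial membrane potentials
    spike_counts = np.zeros(N, dtype=int)

    for _ in range(steps):
        V += dt * (-V + I)                      # leaky integration toward the external input
        for i in np.flatnonzero(V >= threshold):
            spike_counts[i] += 1
            V -= beta                           # instantaneous all-to-all inhibition
            V[i] = alpha                        # reset the spiking neuron with a self-excitation kick
        V = np.maximum(V, -5.0)                 # crude lower bound to keep the potentials finite

    # With strong inhibition only one neuron keeps spiking: the winner takes all.
    print(spike_counts)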

  6. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable with respect to the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
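
    The second stage of such a pipeline, Kalman filtering of a noisy speech frame given autoregressive (AR) coefficients, can be sketched as follows. The RNN-based estimation of the AR parameters is not reproduced here; the coefficients, noise variances, and toy AR(2) signal are assumed values chosen only to make the example self-contained.

    import numpy as np

    def kalman_denoise(y, a, q, r):
        """y: noisy samples, a: AR coefficients (a1..ap), q: process noise var, r: observation noise var."""
        p = len(a)
        F = np.zeros((p, p)); F[0, :] = a; F[1:, :-1] = np.eye(p - 1)   # companion-form state transition
        H = np.zeros((1, p)); H[0, 0] = 1.0                             # only the newest sample is observed
        Q = np.zeros((p, p)); Q[0, 0] = q
        x, P, out = np.zeros(p), np.eye(p), np.empty(len(y))
        for t, yt in enumerate(y):
            x, P = F @ x, F @ P @ F.T + Q                               # predict
            S = H @ P @ H.T + r
            K = (P @ H.T) / S                                           # Kalman gain, shape (p, 1)
            x = x + K[:, 0] * (yt - x[0])                               # update with the scalar innovation
            P = (np.eye(p) - K @ H) @ P
            out[t] = x[0]
        return out

    rng = np.random.default_rng(0)
    a = np.array([1.5, -0.7])                       # stable AR(2) coefficients (assumed known here)
    clean = np.zeros(500)
    for t in range(2, 500):
        clean[t] = a[0] * clean[t - 1] + a[1] * clean[t - 2] + 0.1 * rng.standard_normal()
    noisy = clean + 0.5 * rng.standard_normal(500)
    enhanced = kalman_denoise(noisy, a, q=0.01, r=0.25)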

  7. Solving differential equations with unknown constitutive relations as recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.; Tartakovsky, Alexandre M.

    2017-12-08

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from these data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from the recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose and require modified training strategies.
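
    The central idea, embedding a discretized ODE with an unknown term into an unrolled, RNN-like computation graph, can be sketched in a few lines. The PyTorch code below is not the TensorFlow solver described in the record: the toy dynamics dx/dt = u(t) - f(x) with true sink f(x) = x^2, the Euler discretization, and all sizes are assumptions made for illustration.

    import torch
    import torch.nn as nn

    dt, T = 0.05, 200
    u = torch.ones(T)                                  # constant forcing (feed) term
    with torch.no_grad():                              # synthetic "measurements" from the true sink f(x) = x**2
        x_true = [torch.tensor(0.1)]
        for t in range(T - 1):
            x_true.append(x_true[-1] + dt * (u[t] - x_true[-1] ** 2))
        x_true = torch.stack(x_true)

    f_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))   # learned sink term
    opt = torch.optim.Adam(f_net.parameters(), lr=1e-2)

    for epoch in range(300):
        x, preds = x_true[0], [x_true[0]]
        for t in range(T - 1):                         # unrolled Euler steps act like an RNN over time
            x = x + dt * (u[t] - f_net(x.view(1, 1)).squeeze())
            preds.append(x)
        loss = torch.mean((torch.stack(preds) - x_true) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()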

  8. A novel joint-processing adaptive nonlinear equalizer using a modular recurrent neural network for chaotic communication systems.

    Science.gov (United States)

    Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui

    2011-01-01

    To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    OpenAIRE

    Mehrshad Salmasi; Homayoun Mahdavi-Nasab

    2012-01-01

    Active noise control is based on the destructive interference between the primary noise and the noise generated from the secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks is evaluated in active cancellation of sound noise. For this purpose, feedforward and recurrent neural networks are designed and trained. After training, the performance of the feedforward and recurrent networks in n...

  10. Global exponential stability of reaction-diffusion recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde

    2003-01-01

    Employing general Halanay inequality, we analyze the global exponential stability of a class of reaction-diffusion recurrent neural networks with time-varying delays. Several new sufficient conditions are obtained to ensure existence, uniqueness and global exponential stability of the equilibrium point of delayed reaction-diffusion recurrent neural networks. The results extend and improve the earlier publications. In addition, an example is given to show the effectiveness of the obtained result

  11. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.

    Science.gov (United States)

    Goudar, Vishwa; Buonomano, Dean V

    2018-03-14

    Much of the information the brain processes and stores is temporal in nature: a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds. We show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.

  12. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  13. A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.

    Science.gov (United States)

    Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen

    2018-03-01

    In this paper, based on calculus and penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization. It is proved that for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and converges to an optimal solution of the constrained complex-variable convex optimization finally. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has a lower model complexity and better convergence. Some numerical examples and application are presented to substantiate the effectiveness of the proposed neural network.

  14. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series. It estimates the proper parameters of phase space reconstruction and optimizes the structure of the recurrent neural networks by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method can dynamically combine the embedding method with the capability of the recurrent neural network to incorporate past experience due to internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series
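
    The phase-space reconstruction step mentioned above is the standard time-delay embedding; the short NumPy sketch below shows that step together with a linear multi-step-ahead baseline on a stand-in series. The embedding dimension, delay, horizon, and the surrogate signal are arbitrary assumptions, and the co-evolutionary RNN itself is not reproduced.

    import numpy as np

    def delay_embed(x, dim, tau):
        """Phase-space reconstruction by time-delay embedding:
        row t is (x[t], x[t - tau], ..., x[t - (dim - 1) * tau])."""
        start = (dim - 1) * tau
        return np.column_stack([x[start - k * tau: len(x) - k * tau] for k in range(dim)])

    # Toy multi-step prediction: embed the series, then predict the value h steps ahead.
    x = np.sin(0.05 * np.arange(3000)) + 0.3 * np.sin(0.17 * np.arange(3000))   # stand-in series
    dim, tau, h = 4, 5, 10
    E = delay_embed(x, dim, tau)
    X, Y = E[:-h], x[(dim - 1) * tau + h:]             # embedded state -> value h steps later
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)       # linear predictor as a simple baseline
    print(np.mean((X @ coef - Y) ** 2))                # in-sample mean squared error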

  15. From Imitation to Prediction, Data Compression vs Recurrent Neural Networks for Natural Language Processing

    Directory of Open Access Journals (Sweden)

    Juan Andres Laura

    2018-03-01

    Full Text Available In recent studies Recurrent Neural Networks were used for generative processes and their surprising performance can be explained by their ability to create good predictions. In addition, Data Compression is also based on prediction. What the problem comes down to is whether a data compressor could be used to perform as well as recurrent neural networks in the natural language processing tasks of sentiment analysis and automatic text generation. If this is possible, then the problem comes down to determining if a compression algorithm is even more intelligent than a neural network in such tasks. In our journey, a fundamental difference between a Data Compression Algorithm and Recurrent Neural Networks has been discovered.

  16. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  17. Entity recognition from clinical texts via recurrent neural network.

    Science.gov (United States)

    Liu, Zengjian; Yang, Ming; Wang, Xiaolong; Chen, Qingcai; Tang, Buzhou; Wang, Zhe; Xu, Hua

    2017-07-05

    Entity recognition is one of the most primary steps for text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machines and conditional random fields, have been deployed to recognize entities from clinical texts in the past few years. In recent years, the recurrent neural network (RNN), one of the deep learning methods that have shown great potential on many problems including named entity recognition, has also gradually been used for entity recognition from clinical texts. In this paper, we comprehensively investigate the performance of LSTM (long-short term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: input layer - generates the representation of each word of a sentence; LSTM layer - outputs another word representation sequence that captures the context information of each word in this sentence; inference layer - makes tagging decisions according to the output of the LSTM layer, that is, outputting a label sequence. Experiments conducted on corpora of the 2010, 2012 and 2014 i2b2 NLP challenges show that LSTM achieves the highest micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which is considerably competitive with other state-of-the-art systems. LSTM, which requires no hand-crafted features, has great potential for entity recognition from clinical texts. It outperforms traditional machine learning methods that suffer from fussy feature engineering. A possible future direction is how to integrate knowledge

  18. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network guarantees to get the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  19. Predicting recurrent aphthous ulceration using genetic algorithms-optimized neural networks

    Directory of Open Access Journals (Sweden)

    Najla S Dar-Odeh

    2010-05-01

    Full Text Available Najla S Dar-Odeh1, Othman M Alsmadi2, Faris Bakri3, Zaer Abu-Hammour2, Asem A Shehabi3, Mahmoud K Al-Omiri1, Shatha M K Abu-Hammad4, Hamzeh Al-Mashni4, Mohammad B Saeed4, Wael Muqbil4, Osama A Abu-Hammad1; 1Faculty of Dentistry, 2Faculty of Engineering and Technology, 3Faculty of Medicine, University of Jordan, Amman, Jordan; 4Dental Department, University of Jordan Hospital, Amman, Jordan. Objective: To construct and optimize a neural network that is capable of predicting the occurrence of recurrent aphthous ulceration (RAU) based on a set of appropriate input data. Participants and methods: Artificial neural network (ANN) software employing genetic algorithms to optimize the network architecture was used. Input and output data of 86 participants (predisposing factors and status of the participants with regard to recurrent aphthous ulceration) were used to construct and train the neural networks. The optimized neural networks were then tested using untrained data from a further 10 participants. Results: The optimized neural network that produced the most accurate predictions for the presence or absence of recurrent aphthous ulceration was found to employ: gender, hematological (with or without ferritin) and mycological data of the participants, frequency of tooth brushing, and consumption of vegetables and fruits. Conclusions: Factors appearing to be related to recurrent aphthous ulceration and appropriate for use as input data to construct ANNs that predict recurrent aphthous ulceration were found to include the following: gender, hemoglobin, serum vitamin B12, serum ferritin, red cell folate, salivary candidal colony count, frequency of tooth brushing, and the number of fruits or vegetables consumed daily. Keywords: artificial neural networks, recurrent, aphthous ulceration, ulcer

  20. Ads' click-through rates predicting based on gated recurrent unit neural networks

    Science.gov (United States)

    Chen, Qiaohong; Guo, Zixuan; Dong, Wen; Jin, Lingzi

    2018-05-01

    In order to improve the effect of online advertising and to increase advertising revenue, a gated recurrent unit (GRU) neural network model is used for predicting ads' click-through rates (CTR). Combining the characteristics of the gated unit structure with the time-sequence nature of the data, the model is trained with the BPTT algorithm. Furthermore, the step-length algorithm of the gated recurrent unit neural network is optimized so that the model reaches the optimal point faster and in fewer iterations. The experimental results show that the model based on gated recurrent unit neural networks, with its optimized step-length algorithm, has a better effect on ads' CTR prediction, which helps advertisers, media and audience achieve a win-win and mutually beneficial situation in the three-sided game.

  1. Multistability and instability analysis of recurrent neural networks with time-varying delays.

    Science.gov (United States)

    Zhang, Fanghai; Zeng, Zhigang

    2018-01-01

    This paper provides new theoretical results on the multistability and instability analysis of recurrent neural networks with time-varying delays. It is shown that such n-neuronal recurrent neural networks have exactly [Formula: see text] equilibria, [Formula: see text] of which are locally exponentially stable and the others are unstable, where k0 is a nonnegative integer such that k0 ≤ n. By using the combination method of two different divisions, recurrent neural networks can possess more dynamic properties. This method improves and extends the existing results in the literature. Finally, one numerical example is provided to show the superiority and effectiveness of the presented results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    Directory of Open Access Journals (Sweden)

    Jie Wang

    2016-01-01

    (ERNN, the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices.

  3. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking...... dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural...... mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online...

  4. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic...... part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...... error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  5. A novel nonlinear adaptive filter using a pipelined second-order Volterra recurrent neural network.

    Science.gov (United States)

    Zhao, Haiquan; Zhang, Jiashu

    2009-12-01

    To enhance performance and overcome the heavy computational complexity of recurrent neural networks (RNN), a novel nonlinear adaptive filter based on a pipelined second-order Volterra recurrent neural network (PSOVRNN) is proposed in this paper. A modified real-time recurrent learning (RTRL) algorithm for the proposed filter is derived in detail. The PSOVRNN comprises a number of simple, small-scale second-order Volterra recurrent neural network (SOVRNN) modules. In contrast to the standard RNN, the modules of a PSOVRNN can operate simultaneously in a pipelined, parallel fashion, which leads to a significant improvement in total computational efficiency. Moreover, since each module of the PSOVRNN is a SOVRNN in which nonlinearity is introduced by the recursive second-order Volterra (RSOV) expansion, its performance can be further improved. Computer simulations have demonstrated that the PSOVRNN performs better than the pipelined recurrent neural network (PRNN) and the RNN for nonlinear colored signal prediction and nonlinear channel equalization. However, the superiority of the PSOVRNN over the PRNN comes at the cost of increased computational complexity due to the nonlinear expansion introduced in each module.
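
    The recursive second-order Volterra (RSOV) expansion mentioned above can be illustrated, very roughly, by a module that combines a linear kernel and a quadratic kernel over a short buffer mixing recent inputs with the fed-back previous output. This is a simplified reading of the abstract under assumed buffer length and weights, not the PSOVRNN itself.

        import numpy as np

        def second_order_volterra_step(u_buf, w1, w2, bias=0.0):
            """One output sample from a second-order Volterra expansion of the buffer u_buf."""
            first_order = w1 @ u_buf                  # linear (first-order) kernel
            second_order = u_buf @ w2 @ u_buf         # quadratic (second-order) kernel
            return bias + first_order + second_order

        rng = np.random.default_rng(1)
        memory = 4                                    # hypothetical buffer length
        w1 = rng.normal(scale=0.1, size=memory)
        w2 = rng.normal(scale=0.05, size=(memory, memory))

        # Recursive use: the buffer mixes recent inputs with the previous output (feedback).
        past_inputs = np.zeros(memory - 1)
        prev_output = 0.0
        for x in [0.5, -0.2, 0.1, 0.4, 0.0]:
            u_buf = np.concatenate(([x], past_inputs[:memory - 2], [prev_output]))
            prev_output = second_order_volterra_step(u_buf, w1, w2)
            past_inputs = np.roll(past_inputs, 1)
            past_inputs[0] = x
            print(round(float(prev_output), 4))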

  6. Identification and prediction of dynamic systems using an interactively recurrent self-evolving fuzzy neural network.

    Science.gov (United States)

    Lin, Yang-Yin; Chang, Jyh-Yeong; Lin, Chin-Teng

    2013-02-01

    This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed by external loops and internal feedback, feeding the rule firing strength of each rule to the other rules and to itself. The consequent part in the IRSFNN is of a Takagi-Sugeno-Kang (TSK) or functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of the fuzzy rules to promote the mapping ability. Unlike a TSK-type fuzzy neural network, the FLNN in the consequent part is a nonlinear function of the input variables. IRSFNN learning starts with an empty rule base, and all of the rules are generated and learned online through simultaneous structure and parameter learning. An online clustering algorithm is effective in generating fuzzy rules. The consequent parameters are updated by a variable-dimensional Kalman filter algorithm. The premise and recurrent parameters are learned through a gradient descent algorithm. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.

  7. Boundedness and stability for recurrent neural networks with variable coefficients and time-varying delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde

    2003-01-01

    In this Letter, the problems of boundedness and stability for a general class of non-autonomous recurrent neural networks with variable coefficients and time-varying delays are analyzed by employing the Young inequality technique and the Lyapunov method. Some simple sufficient conditions are given for boundedness and stability of the solutions of the recurrent neural networks. These results generalize and improve previous work, and they are easy to check and apply in practice. Two illustrative examples and their numerical simulations are also given to demonstrate the effectiveness of the proposed results

  8. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    International Nuclear Information System (INIS)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine; Minca, Eugenia; Filip, Florin

    2009-01-01

    In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is thus boosted by the Fuzzy Min-Max technique.
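
    The center-selection step described above can be pictured with a plain k-means sketch whose resulting centers are used as Gaussian radial-basis features for a prediction layer; the initialization here is random rather than the paper's Fuzzy Min-Max scheme, and all sizes are illustrative assumptions.

        import numpy as np

        def kmeans_centers(data, k, n_iter=20, seed=0):
            """Plain k-means: returns k centers that locally minimize squared error."""
            rng = np.random.default_rng(seed)
            centers = data[rng.choice(len(data), size=k, replace=False)]
            for _ in range(n_iter):
                labels = np.argmin(((data[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centers[j] = data[labels == j].mean(axis=0)
            return centers

        def rbf_features(data, centers, width=1.0):
            """Gaussian radial basis activations used as inputs to a prediction layer."""
            d2 = ((data[:, None, :] - centers[None]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * width ** 2))

        data = np.random.default_rng(1).normal(size=(50, 2))
        centers = kmeans_centers(data, k=3)
        phi = rbf_features(data, centers)       # shape (50, 3)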

  9. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    OpenAIRE

    Francisco Javier Ordóñez; Daniel Roggen

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we pro...

  10. Bi-directional LSTM Recurrent Neural Network for Chinese Word Segmentation

    OpenAIRE

    Yao, Yushi; Huang, Zheng

    2016-01-01

    Recurrent neural networks (RNN) have been broadly applied to natural language processing (NLP) problems. This kind of neural network is designed for modeling sequential data and has proven quite effective in sequential tagging tasks. In this paper, we propose to use a bi-directional RNN with long short-term memory (LSTM) units for Chinese word segmentation, which is a crucial preprocessing task for modeling Chinese sentences and articles. Classical methods focus on designing and combining...
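
    A toy sketch of the bidirectional idea may clarify the segmentation setup: each character receives a forward state, a backward state, and a score over a BMES-style tag set. For brevity the recurrences are plain tanh units rather than LSTM cells, and the tag scheme, dimensions, and weights are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_emb, n_hid, n_tags = 16, 8, 4               # tags: B, M, E, S (assumed scheme)
        Wf, Uf = rng.normal(scale=0.1, size=(n_hid, n_emb)), rng.normal(scale=0.1, size=(n_hid, n_hid))
        Wb, Ub = rng.normal(scale=0.1, size=(n_hid, n_emb)), rng.normal(scale=0.1, size=(n_hid, n_hid))
        Wo = rng.normal(scale=0.1, size=(n_tags, 2 * n_hid))

        def bidirectional_tag_scores(char_embeddings):
            """Return per-character tag scores from forward and backward recurrent passes."""
            T = len(char_embeddings)
            fwd, bwd = np.zeros((T, n_hid)), np.zeros((T, n_hid))
            h = np.zeros(n_hid)
            for t in range(T):                         # forward pass
                h = np.tanh(Wf @ char_embeddings[t] + Uf @ h)
                fwd[t] = h
            h = np.zeros(n_hid)
            for t in reversed(range(T)):               # backward pass
                h = np.tanh(Wb @ char_embeddings[t] + Ub @ h)
                bwd[t] = h
            return np.stack([Wo @ np.concatenate([fwd[t], bwd[t]]) for t in range(T)])

        sentence = rng.normal(size=(5, n_emb))         # 5 hypothetical character embeddings
        print(bidirectional_tag_scores(sentence).shape)  # (5, 4)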

  11. Synchronization of chaotic recurrent neural networks with time-varying delays using nonlinear feedback control

    International Nuclear Information System (INIS)

    Cui Baotong; Lou Xuyang

    2009-01-01

    In this paper, a new method to synchronize two identical chaotic recurrent neural networks is proposed. Using the drive-response concept, a nonlinear feedback control law is derived to achieve the state synchronization of the two identical chaotic neural networks. Furthermore, based on the Lyapunov method, a delay independent sufficient synchronization condition in terms of linear matrix inequality (LMI) is obtained. A numerical example with graphical illustrations is given to illuminate the presented synchronization scheme

  12. Global exponential stability for reaction-diffusion recurrent neural networks with multiple time varying delays

    International Nuclear Information System (INIS)

    Lou, X.; Cui, B.

    2008-01-01

    In this paper we consider the problem of exponential stability for recurrent neural networks with multiple time-varying delays and reaction-diffusion terms. The activation functions are supposed to be bounded and globally Lipschitz continuous. By means of a Lyapunov functional, sufficient conditions are derived which guarantee global exponential stability of the delayed neural network. Finally, a numerical example is given to show the correctness of our analysis. (author)

  13. ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation

    OpenAIRE

    Visin, Francesco; Ciccone, Marco; Romero, Adriana; Kastner, Kyle; Cho, Kyunghyun; Bengio, Yoshua; Matteucci, Matteo; Courville, Aaron

    2015-01-01

    We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNNs that sweep the image horizontally ...

  14. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis
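
    The recurrence described above amounts to feeding previously predicted outputs back into the RBF network's input vector, turning the static RBF map into a discrete-time dynamic model. The sketch below shows that feedback loop on a trivial scalar example; the centers, widths, and weights are placeholders rather than the trained ROM.

        import numpy as np

        def rbf(x, centers, width=0.5):
            return np.exp(-((x[None, :] - centers) ** 2).sum(axis=1) / (2 * width ** 2))

        rng = np.random.default_rng(0)
        centers = rng.uniform(-1, 1, size=(10, 2))   # centers over (current input, previous output)
        weights = rng.normal(scale=0.1, size=10)     # would normally be fit to simulation data

        def recurrent_rbf_rom(inputs):
            """Roll the RBF map forward in time, feeding its own output back as a delayed input."""
            y_prev, outputs = 0.0, []
            for u in inputs:
                features = rbf(np.array([u, y_prev]), centers)
                y_prev = float(weights @ features)
                outputs.append(y_prev)
            return outputs

        print(recurrent_rbf_rom(np.sin(np.linspace(0, 2 * np.pi, 8))))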

  15. Stimulus-dependent suppression of chaos in recurrent neural networks

    International Nuclear Information System (INIS)

    Rajan, Kanaka; Abbott, L. F.; Sompolinsky, Haim

    2010-01-01

    Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, but they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a nonmonotonic function of stimulus frequency, revealing a 'resonant' frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.

  16. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam; Abandah, Gheith; Jamour, Fuad Tarek

    2013-01-01

    and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design of experiments

  17. Recurrent Artificial Neural Networks and Finite State Natural Language Processing.

    Science.gov (United States)

    Moisl, Hermann

    It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for natural language processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…

  18. Homeostatic scaling of excitability in recurrent neural networks.

    NARCIS (Netherlands)

    Remme, M.W.H.; Wadman, W.J.

    2012-01-01

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which

  19. Individual Identification Using Functional Brain Fingerprint Detected by Recurrent Neural Network.

    Science.gov (United States)

    Chen, Shiyang; Hu, Xiaoping P

    2018-03-20

    Individual identification based on brain function has gained traction in literature. Investigating individual differences in brain function can provide additional insights into the brain. In this work, we introduce a recurrent neural network based model for identifying individuals based on only a short segment of resting state functional MRI data. In addition, we demonstrate how the global signal and differences in atlases affect the individual identifiability. Furthermore, we investigate neural network features that exhibit the uniqueness of each individual. The results indicate that our model is able to identify individuals based on neural features and provides additional information regarding brain dynamics.

  20. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints.

    Science.gov (United States)

    Qin, Sitian; Yang, Xiudong; Xue, Xiaoping; Song, Jiahui

    2017-10-01

    The pseudoconvex optimization problem, an important class of nonconvex optimization problems, plays an important role in scientific and engineering applications. In this paper, a recurrent one-layer neural network is proposed for solving the pseudoconvex optimization problem with equality and inequality constraints. It is proved that from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural network is convergent to an optimal solution of the related problem. Compared with related existing recurrent neural networks for pseudoconvex optimization problems, the proposed neural network does not require penalty parameters and has better convergence. Meanwhile, the proposed neural network is used to solve three nonsmooth optimization problems, and we make some detailed comparisons with the known related conclusions. In the end, some numerical examples are provided to illustrate the effectiveness of the performance of the proposed neural network.
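
    To make the neurodynamic idea concrete in a much simpler setting than the paper's, the sketch below Euler-integrates a projection-type recurrent network, dx/dt = P_Omega(x - grad f(x)) - x, for a convex (hence pseudoconvex) quadratic under box constraints only; it is a generic illustration, not the authors' penalty-free model for general equality and inequality constraints.

        import numpy as np

        lower, upper = np.array([0.0, 0.0]), np.array([1.0, 1.0])   # box constraints (assumed)
        target = np.array([1.5, -0.5])

        def grad_f(x):
            # convex (hence pseudoconvex) quadratic f(x) = ||x - target||^2
            return 2.0 * (x - target)

        def project_box(x):
            return np.clip(x, lower, upper)

        def neurodynamic_minimize(x0, dt=0.05, steps=400):
            """Euler-integrate dx/dt = P_box(x - grad f(x)) - x until it settles."""
            x = x0.astype(float)
            for _ in range(steps):
                x += dt * (project_box(x - grad_f(x)) - x)
            return x

        print(neurodynamic_minimize(np.array([0.2, 0.9])))   # approaches (1.0, 0.0)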

  1. Folk music style modelling by recurrent neural networks with long short term memory units

    OpenAIRE

    Sturm, Bob; Santos, João Felipe; Korshunova, Iryna

    2015-01-01

    We demonstrate two generative models created by training a recurrent neural network (RNN) with three hidden layers of long short-term memory (LSTM) units. This extends past work in numerous directions, including training deeper models with nearly 24,000 high-level transcriptions of folk tunes. We discuss our on-going work.

  2. Recurrent Neural Network For Forecasting Time Series With Long Memory Pattern

    Science.gov (United States)

    Walid; Alamsyah

    2017-04-01

    Recurrent Neural Networks, as one class of hybrid models, are often used for prediction and estimation of issues related to electricity, and can be used to describe the causes of the swelling of the electrical load experienced by PLN. In this research, RNN forecasting procedures are developed for time series with long memory patterns, considering that the application is the national electrical load, which of course has a different trend from the electrical load conditions in other countries. This research produces a time series forecasting algorithm for long memory patterns using E-RNN, hereafter referred to as the fractional integrated recurrent neural network (FIRNN) algorithm. The prediction results for long memory time series using the Fractional Integrated Recurrent Neural Network (FIRNN) model show that the model with data differencing selected in the range of [-1,1] and the Fractional Integrated Recurrent Neural Network (FIRNN) (24,6,1) model provides the smallest MSE value, which is 0.00149684.
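
    The fractional-integration idea behind FIRNN can be illustrated with the standard fractional-differencing filter, whose weights follow the recursion w_k = -w_{k-1}(d - k + 1)/k; a long-memory series is fractionally differenced before being fed to a recurrent network. The order d and truncation length below are arbitrary examples, not values from the paper.

        import numpy as np

        def frac_diff_weights(d, n_weights):
            """Binomial weights of the fractional difference operator (1 - B)^d."""
            w = [1.0]
            for k in range(1, n_weights):
                w.append(-w[-1] * (d - k + 1) / k)
            return np.array(w)

        def fractional_difference(series, d, n_weights=50):
            """Apply truncated fractional differencing to a 1-D series."""
            w = frac_diff_weights(d, n_weights)
            x = np.asarray(series, dtype=float)
            return np.convolve(x, w, mode="full")[: len(x)]

        y = np.cumsum(np.random.default_rng(0).normal(size=200))   # a long-memory-looking series
        print(fractional_difference(y, d=0.4)[:5])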

  3. Encoding of phonology in a recurrent neural model of grounded speech

    NARCIS (Netherlands)

    Alishahi, Afra; Barking, Marie; Chrupala, Grzegorz; Levy, Roger; Specia, Lucia

    2017-01-01

    We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how

  4. Direction-of-change forecasting using a volatility-based recurrent neural network

    NARCIS (Netherlands)

    Bekiros, S.D.; Georgoutsos, D.A.

    2008-01-01

    This paper investigates the profitability of a trading strategy, based on recurrent neural networks, that attempts to predict the direction-of-change of the market in the case of the NASDAQ composite index. The sample extends over the period 8 February 1971 to 7 April 1998, while the sub-period 8

  5. Global stability of discrete-time recurrent neural networks with impulse effects

    International Nuclear Information System (INIS)

    Zhou, L; Li, C; Wan, J

    2008-01-01

    This paper formulates and studies a class of discrete-time recurrent neural networks with impulse effects. A stability criterion, which characterizes the effects of impulse and stability property of the corresponding impulse-free networks on the stability of the impulsive networks in an aggregate form, is established. Two simplified and numerically tractable criteria are also provided

  6. A one-layer recurrent neural network for constrained nonsmooth optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-10-01

    This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.

  7. A one-layer recurrent neural network for constrained nonconvex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2015-01-01

    In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any neural state of the proposed neural network is convergent to its equilibrium point set which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performances of the proposed neural network.

  8. A one-layer recurrent neural network for constrained nonsmooth invex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2014-02-01

    Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Statistical downscaling of precipitation using long short-term memory recurrent neural networks

    Science.gov (United States)

    Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra

    2017-11-01

    Hydrological impacts of global climate change on regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation, temperature, etc. In this study, we propose a new statistical downscaling model based on Recurrent Neural Network with Long Short-Term Memory which captures the spatio-temporal dependencies in local rainfall. The previous studies have used several other methods such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets—one on precipitation in Mahanadi basin in India and the second on precipitation in Campbell River basin in Canada. Our autoencoder coupled long short-term memory recurrent neural network model performs the best compared to other existing methods on both the datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.

  10. Hysteretic recurrent neural networks: a tool for modeling hysteretic materials and systems

    International Nuclear Information System (INIS)

    Veeramani, Arun S; Crews, John H; Buckner, Gregory D

    2009-01-01

    This paper introduces a novel recurrent neural network, the hysteretic recurrent neural network (HRNN), that is ideally suited to modeling hysteretic materials and systems. This network incorporates a hysteretic neuron consisting of conjoined sigmoid activation functions. Although similar hysteretic neurons have been explored previously, the HRNN is unique in its utilization of simple recurrence to 'self-select' relevant activation functions. Furthermore, training is facilitated by placing the network weights on the output side, allowing standard backpropagation of error training algorithms to be used. We present two- and three-phase versions of the HRNN for modeling hysteretic materials with distinct phases. These models are experimentally validated using data collected from shape memory alloys and ferromagnetic materials. The results demonstrate the HRNN's ability to accurately generalize hysteretic behavior with a relatively small number of neurons. Additional benefits lie in the network's ability to identify statistical information concerning the macroscopic material by analyzing the weights of the individual neurons
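
    The conjoined-sigmoid idea can be sketched as a neuron that switches between two shifted sigmoid branches depending on whether its input is rising or falling, with the branch choice carried by a simple recurrent state; the shift and slope values are illustrative assumptions, not the calibrated HRNN.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def hysteretic_neuron(inputs, shift=0.5, slope=4.0):
            """Trace a hysteresis loop: the ascending and descending branches use shifted sigmoids."""
            outputs, rising, prev = [], True, None
            for u in inputs:
                if prev is not None:
                    rising = u >= prev                  # recurrent state: remember the direction of motion
                branch_shift = -shift if rising else shift
                outputs.append(sigmoid(slope * (u + branch_shift)))
                prev = u
            return outputs

        sweep = np.concatenate([np.linspace(-2, 2, 20), np.linspace(2, -2, 20)])
        ys = hysteretic_neuron(sweep)                   # up-sweep and down-sweep trace different branches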

  11. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
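
    The key ingredient mentioned above, distance-dependent connection probability, can be sketched by placing neurons on a ring and connecting pairs with a probability that decays with distance (a Gaussian profile here); network size, width, and connection strength are arbitrary assumptions.

        import numpy as np

        def distance_dependent_weights(n=200, sigma=0.1, p_max=0.5, j=0.2, seed=0):
            """Random recurrent weights with connection probability falling off with ring distance."""
            rng = np.random.default_rng(seed)
            pos = np.linspace(0.0, 1.0, n, endpoint=False)
            d = np.abs(pos[:, None] - pos[None, :])
            d = np.minimum(d, 1.0 - d)                       # wrap-around (periodic) distance
            p_connect = p_max * np.exp(-d ** 2 / (2 * sigma ** 2))
            mask = rng.random((n, n)) < p_connect
            np.fill_diagonal(mask, False)
            return j * mask

        W = distance_dependent_weights()
        print(W.shape, (W != 0).mean())                      # connection density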

  12. Delay-Dependent Stability Criteria of Uncertain Periodic Switched Recurrent Neural Networks with Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Xing Yin

    2011-01-01

    uncertain periodic switched recurrent neural networks with time-varying delays. When an uncertain discrete-time recurrent neural network is a periodic system, it is expressed as a switched neural network over the finite set of switching states. Based on the switched quadratic Lyapunov functional (SQLF) approach and the free-weighting matrix (FWM) approach, some linear matrix inequality criteria are found to guarantee the delay-dependent asymptotic stability of these systems. Two examples illustrate the exactness of the proposed criteria.

  13. Using a multi-state recurrent neural network to optimize loading patterns in BWRs

    International Nuclear Information System (INIS)

    Ortiz, Juan Jose; Requena, Ignacio

    2004-01-01

    A Multi-State Recurrent Neural Network is used to optimize Loading Patterns (LP) in BWRs. We have proposed an energy function that depends on fuel assembly positions and their nuclear cross sections to carry out the optimization. The Multi-State Recurrent Neural Network creates LPs that satisfy the Radial Power Peaking Factor limit and maximize the effective multiplication factor at the Beginning of the Cycle, and that also satisfy the Minimum Critical Power Ratio and Maximum Linear Heat Generation Rate limits at the End of the Cycle, thereby maximizing the effective multiplication factor. In order to evaluate the LPs, we used a trained back-propagation neural network to predict the parameter values instead of using a reactor core simulator, which saved considerable computation time in the search process. We applied this method to find optimal LPs for five cycles of the Laguna Verde Nuclear Power Plant (LVNPP) in Mexico

  14. Natural Language Video Description using Deep Recurrent Neural Networks

    Science.gov (United States)

    2015-11-23

    h_t = f(W_xh x_t + W_hh h_{t-1}) (2.1) and z_t = g(W_zh h_t) (2.2), where f and g are element-wise non-linear functions such as a sigmoid or hyperbolic tangent, x_t is the input at time t, h_t is the hidden state, and z_t is the output.
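
    Those two update equations translate almost directly into code; a minimal numpy version with randomly initialized (untrained) weights and hypothetical dimensions is given below.

        import numpy as np

        rng = np.random.default_rng(0)
        n_x, n_h, n_z = 3, 5, 2
        W_xh = rng.normal(scale=0.1, size=(n_h, n_x))
        W_hh = rng.normal(scale=0.1, size=(n_h, n_h))
        W_zh = rng.normal(scale=0.1, size=(n_z, n_h))

        def rnn_step(x_t, h_prev):
            """h_t = f(W_xh x_t + W_hh h_{t-1}); z_t = g(W_zh h_t), with f = tanh and g = softmax."""
            h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev)
            logits = W_zh @ h_t
            z_t = np.exp(logits - logits.max())
            z_t /= z_t.sum()
            return h_t, z_t

        h = np.zeros(n_h)
        for x in rng.normal(size=(4, n_x)):        # a short hypothetical input sequence
            h, z = rnn_step(x, h)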

  15. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses

    Directory of Open Access Journals (Sweden)

    Mattia Rigotti

    2010-10-01

    Full Text Available Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.

  16. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much is still unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity and intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are also developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  17. Exponentially convergent state estimation for delayed switched recurrent neural networks.

    Science.gov (United States)

    Ahn, Choon Ki

    2011-11-01

    This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.

  18. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1 -norm minimization problems.

  19. Land Cover Classification via Multitemporal Spatial Data by Deep Recurrent Neural Networks

    Science.gov (United States)

    Ienco, Dino; Gaetano, Raffaele; Dupaquier, Claire; Maurel, Pierre

    2017-10-01

    Nowadays, modern earth observation programs produce huge volumes of satellite image time series (SITS) that can be useful for monitoring geographical areas through time. How to efficiently analyze such kind of information is still an open question in the remote sensing field. Recently, deep learning methods have proved suitable for remote sensing data, mainly for scene classification (i.e., Convolutional Neural Networks, CNNs, on single images), while only very few studies involve temporal deep learning approaches (i.e., Recurrent Neural Networks, RNNs) for remote sensing time series. In this letter we evaluate the ability of Recurrent Neural Networks, in particular the Long Short-Term Memory (LSTM) model, to perform land cover classification considering multi-temporal spatial data derived from a time series of satellite images. We carried out experiments on two different datasets considering both pixel-based and object-based classification. The obtained results show that Recurrent Neural Networks are competitive compared to state-of-the-art classifiers, and may outperform classical approaches in the presence of under-represented and/or highly mixed classes. We also show that using the alternative feature representation generated by the LSTM can improve the performance of standard classifiers.

  1. Simultaneous multichannel signal transfers via chaos in a recurrent neural network.

    Science.gov (United States)

    Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Furumai, Noriyuki; Nara, Shigetoshi

    2015-05-01

    We propose a neural network model that demonstrates the phenomenon of signal transfer between separated neuron groups via other chaotic neurons that show no apparent correlations with the input signal. The model is a recurrent neural network in which it is supposed that synchronous behavior between small groups of input and output neurons has been learned as fragments of high-dimensional memory patterns, and depletion of neural connections results in chaotic wandering dynamics. Computer experiments show that when a strong oscillatory signal is applied to an input group in the chaotic regime, the signal is successfully transferred to the corresponding output group, although no correlation is observed between the input signal and the intermediary neurons. Signal transfer is also observed when multiple signals are applied simultaneously to separate input groups belonging to different memory attractors. In this sense, simultaneous multichannel communication is realized, and the chaotic neural dynamics acts as a signal transfer medium in which the signal appears to be hidden.

  2. A non-penalty recurrent neural network for solving a class of constrained optimization problems.

    Science.gov (United States)

    Hosseini, Alireza

    2016-01-01

    In this paper, we explain a methodology to analyze the convergence of some differential inclusion-based neural networks for solving nonsmooth optimization problems. For a general differential inclusion, we show that if its right-hand-side set-valued map satisfies some conditions, then the solution trajectory of the differential inclusion converges to the optimal solution set of the corresponding optimization problem. Based on the obtained methodology, we introduce a new recurrent neural network for solving nonsmooth optimization problems. The objective function does not need to be convex on R^n, nor does the new neural network model require any penalty parameter. We compare our new method with some penalty-based and non-penalty-based models. Moreover, for differentiable cases, we present a circuit diagram of the new neural network. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. A Novel Recurrent Neural Network for Manipulator Control With Improved Noise Tolerance.

    Science.gov (United States)

    Li, Shuai; Wang, Huanqing; Rafique, Muhammad Usman

    2017-04-12

    In this paper, we propose a novel recurrent neural network to resolve the redundancy of manipulators for efficient kinematic control in the presence of polynomial-type noise. Leveraging the high-order derivative properties of polynomial noise, a deliberately devised neural network is proposed to eliminate the impact of the noise and recover accurate tracking of desired trajectories in the workspace. Rigorous analysis shows that the proposed neural law stabilizes the system dynamics and the position tracking error converges to zero in the presence of noise. Extensive simulations verify the theoretical results. Numerical comparisons show that existing dual neural solutions lose stability when exposed to large constant or time-varying noise. In contrast, the proposed approach works well and has a low tracking error comparable to noise-free situations.

  4. Multi-stability and almost periodic solutions of a class of recurrent neural networks

    International Nuclear Information System (INIS)

    Liu Yiguang; You Zhisheng

    2007-01-01

    This paper studies the multi-stability and existence of almost periodic solutions of a class of recurrent neural networks with bounded activation functions. After introducing a sufficient condition ensuring multi-stability, many criteria guaranteeing the existence of almost periodic solutions are derived using Mawhin's coincidence degree theory. All the criteria are constructed without assuming that the activation functions are smooth, monotonic, or Lipschitz continuous, or that the networks contain periodic variables (such as periodic coefficients, periodic inputs, or periodic activation functions), so all the criteria can easily be extended to fit many concrete forms of neural networks such as Hopfield neural networks or cellular neural networks. Finally, various simulations are employed to illustrate the criteria

  5. Multistability of delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions

    International Nuclear Information System (INIS)

    Huang Yu-Jiao; Hu Hai-Gen

    2015-01-01

    In this paper, the multistability issue is discussed for delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions. Based on a fixed point theorem and a stability definition, sufficient criteria are established for the existence and stability of multiple equilibria of complex-valued recurrent neural networks. The number of stable equilibria is larger than that of real-valued recurrent neural networks, which can be used to achieve high-capacity associative memories. One numerical example is provided to show the effectiveness and superiority of the presented results. (paper)

  6. Distinct Neural Mechanisms Mediate Olfactory Memory Formation at Different Timescales

    Science.gov (United States)

    McNamara, Ann Marie; Magidson, Phillip D.; Linster, Christiane; Wilson, Donald A.; Cleland, Thomas A.

    2008-01-01

    Habituation is one of the oldest forms of learning, broadly expressed across sensory systems and taxa. Here, we demonstrate that olfactory habituation induced at different timescales (comprising different odor exposure and intertrial interval durations) is mediated by different neural mechanisms. First, the persistence of habituation memory is…

  7. Stability results for stochastic delayed recurrent neural networks with discrete and distributed delays

    Science.gov (United States)

    Chen, Guiling; Li, Dingshi; Shi, Lin; van Gaans, Onno; Verduyn Lunel, Sjoerd

    2018-03-01

    We present new conditions for the asymptotic stability and exponential stability of a class of stochastic recurrent neural networks with discrete and distributed time-varying delays. Our approach is based on fixed point theory and does not resort to any Lyapunov function or Lyapunov functional. Our results require neither the boundedness, monotonicity, or differentiability of the activation functions nor the differentiability of the time-varying delays. In particular, a class of neural networks without stochastic perturbations is also considered. Examples are given to illustrate our main results.

  8. Stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Wang Linshan; Zhang Zhe; Wang Yangfan

    2008-01-01

    Some criteria for the global stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters are presented. The jumping parameters considered here are generated from a continuous-time discrete-state homogeneous Markov process, which are governed by a Markov process with discrete and finite state space. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish some easy-to-test criteria of global exponential stability in the mean square for the stochastic neural networks. The criteria are computationally efficient, since they are in the forms of some linear matrix inequalities

  9. Robust sliding mode control for uncertain servo system using friction observer and recurrent fuzzy neural networks

    International Nuclear Information System (INIS)

    Han, Seong Ik; Jeong, Chan Se; Yang, Soon Yong

    2012-01-01

    A robust positioning control scheme has been developed using a friction parameter observer and recurrent fuzzy neural networks based on sliding mode control. As the dynamic friction model, the LuGre model is adopted for friction compensation because it is known to sufficiently capture the properties of nonlinear dynamic friction. The developed friction parameter observer has a simple structure and estimates the friction parameters of the LuGre friction model well. In addition, an approximation method for the system uncertainty is developed using recurrent fuzzy neural network technology to improve the positioning precision. Simulations and experiments verify the performance of the proposed robust control scheme

  10. CloudScan - A Configuration-Free Invoice Analysis System Using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Palm, Rasmus Berg; Winther, Ole; Laws, Florian

    2017-01-01

    We present CloudScan; an invoice analysis system that requires zero configuration or upfront annotation. In contrast to previous work, CloudScan does not rely on templates of invoice layout, instead it learns a single global model of invoices that naturally generalizes to unseen invoice layouts....... The model is trained using data automatically extracted from end-user provided feedback. This automatic training data extraction removes the requirement for users to annotate the data precisely. We describe a recurrent neural network model that can capture long range context and compare it to a baseline...... logistic regression model corresponding to the current CloudScan production system. We train and evaluate the system on 8 important fields using a dataset of 326,471 invoices. The recurrent neural network and baseline model achieve 0.891 and 0.887 average F1 scores respectively on seen invoice layouts...

  11. Robust sliding mode control for uncertain servo system using friction observer and recurrent fuzzy neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seong Ik [Pusan National University, Busan (Korea, Republic of); Jeong, Chan Se; Yang, Soon Yong [University of Ulsan, Ulsan (Korea, Republic of)

    2012-04-15

    A robust positioning control scheme has been developed using a friction parameter observer and recurrent fuzzy neural networks based on sliding mode control. As the dynamic friction model, the LuGre model is adopted for friction compensation because it is known to sufficiently capture the properties of nonlinear dynamic friction. The developed friction parameter observer has a simple structure and estimates the friction parameters of the LuGre friction model well. In addition, an approximation method for the system uncertainty is developed using recurrent fuzzy neural network technology to improve the positioning precision. Simulations and experiments verify the performance of the proposed robust control scheme.

  12. Online Signature Verification using Recurrent Neural Network and Length-normalized Path Signature

    OpenAIRE

    Lai, Songxuan; Jin, Lianwen; Yang, Weixin

    2017-01-01

    Inspired by the great success of recurrent neural networks (RNNs) in sequential modeling, we introduce a novel RNN system to improve the performance of online signature verification. The training objective is to directly minimize intra-class variations and to push the distances between skilled forgeries and genuine samples above a given threshold. By back-propagating the training signals, our RNN network produced discriminative features with desired metrics. Additionally, we propose a novel d...

  13. Complex Dynamical Network Control for Trajectory Tracking Using Delayed Recurrent Neural Networks

    Directory of Open Access Journals (Sweden)

    Jose P. Perez

    2014-01-01

    Full Text Available In this paper, the problem of trajectory tracking is studied. Based on the V-stability and Lyapunov theory, a control law that achieves the global asymptotic stability of the tracking error between a delayed recurrent neural network and a complex dynamical network is obtained. To illustrate the analytic results, we present a tracking simulation of a dynamical network with each node being just one Lorenz’s dynamical system and three identical Chen’s dynamical systems.

  14. Online Sequence Training of Recurrent Neural Networks with Connectionist Temporal Classification

    OpenAIRE

    Hwang, Kyuyeon; Sung, Wonyong

    2015-01-01

    Connectionist temporal classification (CTC) based supervised sequence training of recurrent neural networks (RNNs) has shown great success in many machine learning areas including end-to-end speech and handwritten character recognition. For the CTC training, however, it is required to unroll (or unfold) the RNN by the length of an input sequence. This unrolling requires a lot of memory and hinders a small footprint implementation of online learning or adaptation. Furthermore, the length of tr...

  15. Simulating the dynamics of the neutron flux in a nuclear reactor by locally recurrent neural networks

    International Nuclear Information System (INIS)

    Cadini, F.; Zio, E.; Pedroni, N.

    2007-01-01

    In this paper, a locally recurrent neural network (LRNN) is employed for approximating the temporal evolution of a nonlinear dynamic system model of a simplified nuclear reactor. To this aim, an infinite impulse response multi-layer perceptron (IIR-MLP) is trained according to a recursive back-propagation (RBP) algorithm. The network nodes contain internal feedback paths and their connections are realized by means of IIR synaptic filters, which provide the LRNN with the necessary system state memory
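
    The IIR synaptic filter can be sketched as a node whose input first passes through a small infinite-impulse-response filter, so the node retains memory of past inputs and past filter outputs, before a static nonlinearity; the filter coefficients below are arbitrary stand-ins, not the trained IIR-MLP.

        import numpy as np

        def iir_synapse(x, b, a):
            """y[n] = sum_k b[k] x[n-k] - sum_m a[m] y[n-m]   (direct-form IIR filter)."""
            y = np.zeros_like(x, dtype=float)
            for n in range(len(x)):
                acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
                acc -= sum(a[m] * y[n - m] for m in range(1, len(a)) if n - m >= 0)
                y[n] = acc
            return y

        def locally_recurrent_neuron(x, b=(0.2, 0.1), a=(1.0, -0.6)):
            """IIR synaptic filter followed by a static tanh activation."""
            return np.tanh(iir_synapse(np.asarray(x, float), b, a))

        print(locally_recurrent_neuron([1.0, 0.0, 0.0, 0.0, 0.0]))   # impulse response through the node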

  16. Some new results for recurrent neural networks with varying-time coefficients and delays

    International Nuclear Information System (INIS)

    Jiang Haijun; Teng Zhidong

    2005-01-01

    In this Letter, we consider recurrent neural networks with varying-time coefficients and delays. By constructing a new Lyapunov functional, ingeniously introducing many real parameters, and applying the Young inequality technique, we establish a series of criteria on the boundedness, global exponential stability, and the existence of periodic solutions. In these criteria, we do not require that the response functions be differentiable, bounded, or monotone nondecreasing. Some previous works are improved and extended

  17. Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition

    OpenAIRE

    Li, Xiangang; Wu, Xihong

    2014-01-01

    Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, in this research, deep extensions on LSTM are investigated considering that deep hierarchical model has turned out to be more efficient than a shallow one. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed an...

  18. DeepProbe: Information Directed Sequence Understanding and Chatbot Design via Recurrent Neural Networks

    OpenAIRE

    Yin, Zi; Chang, Keng-hao; Zhang, Ruofei

    2017-01-01

    Information extraction and user intention identification are central topics in modern query understanding and recommendation systems. In this paper, we propose DeepProbe, a generic information-directed interaction framework which is built around an attention-based sequence to sequence (seq2seq) recurrent neural network. DeepProbe can rephrase, evaluate, and even actively ask questions, leveraging the generative ability and likelihood estimation made possible by seq2seq models. DeepProbe makes...

  19. A Heuristic Approach to Intra-Brain Communications Using Chaos in a Recurrent Neural Network Model

    Science.gov (United States)

    Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Nara, Shigetoshi

    2011-09-01

    To approach the functional roles of chaos in the brain, a heuristic model of the mechanisms of intra-brain communication is proposed. The key idea is to use chaos in the firing pattern dynamics of a recurrent neural network consisting of binary-state neurons as a propagation medium for pulse signals. Computer experiments and numerical methods are introduced to evaluate the signal transport characteristics by calculating correlation functions between the sending neurons and receiving neurons of the pulse signals.

  20. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
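
    A small sketch of the network class discussed above: a recurrent weight matrix with separate excitatory and inhibitory populations (columns of a single sign, i.e., Dale's law), from which the excitatory and inhibitory input received by each unit can be read off. Population sizes, gain, and sparsity are assumptions for illustration.

        import numpy as np

        def ei_weight_matrix(n_e=80, n_i=20, g=1.5, p=0.2, seed=0):
            """Random recurrent weights with excitatory (positive) and inhibitory (negative) columns."""
            rng = np.random.default_rng(seed)
            n = n_e + n_i
            mask = rng.random((n, n)) < p
            mag = rng.exponential(scale=1.0 / np.sqrt(p * n), size=(n, n)) * mask
            signs = np.ones(n)
            signs[n_e:] = -g                      # inhibitory columns are negative and stronger
            return mag * signs[None, :]

        def ei_balance(W, rates):
            """Excitatory vs. inhibitory input received by each unit for a given rate vector."""
            exc = W.clip(min=0) @ rates
            inh = -W.clip(max=0) @ rates
            return exc, inh

        W = ei_weight_matrix()
        exc, inh = ei_balance(W, np.abs(np.random.default_rng(1).normal(size=100)))
        print(float(exc.mean()), float(inh.mean()))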

  1. Bifurcation analysis on a generalized recurrent neural network with two interconnected three-neuron components

    International Nuclear Information System (INIS)

    Hajihosseini, Amirhossein; Maleki, Farzaneh; Rokni Lamooki, Gholam Reza

    2011-01-01

    Highlights: → We construct a recurrent neural network by generalizing a specific n-neuron network. → Several codimension 1 and 2 bifurcations take place in the newly constructed network. → The newly constructed network has higher capabilities to learn periodic signals. → The normal form theorem is applied to investigate dynamics of the network. → A series of bifurcation diagrams is given to support theoretical results. - Abstract: A class of recurrent neural networks is constructed by generalizing a specific class of n-neuron networks. It is shown that the newly constructed network experiences generic pitchfork and Hopf codimension one bifurcations. It is also proved that the emergence of generic Bogdanov-Takens, pitchfork-Hopf and Hopf-Hopf codimension two, and the degenerate Bogdanov-Takens bifurcation points in the parameter space is possible due to the intersections of codimension one bifurcation curves. The occurrence of bifurcations of higher codimensions significantly increases the capability of the newly constructed recurrent neural network to learn broader families of periodic signals.

  2. Model for a flexible motor memory based on a self-active recurrent neural network.

    Science.gov (United States)

    Boström, Kim Joris; Wagner, Heiko; Prieske, Markus; de Lussanet, Marc

    2013-10-01

    Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model focused on a flexible motor memory for the storage of elementary movement patterns in the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it is capable of continuously modulating the stored patterns. The modulation consists of an approximately linear interpolation and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", which amounts to the permanent shifting of computational load from higher to lower-level neural structures, and which might help to explain why humans are able to execute learned skills in a fluent and flexible manner without the need for attention to the details of the movement. Copyright © 2013 Elsevier B.V. All rights reserved.
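
    The reservoir computing approach referred to above boils down to driving a fixed random recurrent network with the input and training only a linear readout, here by ridge regression, on the collected states; the reservoir size, spectral radius, and regularization are hypothetical choices, not those of the motor-memory model.

        import numpy as np

        rng = np.random.default_rng(0)
        n_res, n_in = 200, 1
        W = rng.normal(size=(n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # scale spectral radius below 1
        W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

        def run_reservoir(u_seq):
            """Collect reservoir states for an input sequence (tanh units, fixed random weights)."""
            x, states = np.zeros(n_res), []
            for u in u_seq:
                x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
                states.append(x.copy())
            return np.array(states)

        # Train a linear readout to reproduce a target trajectory from the reservoir states.
        u = np.sin(np.linspace(0, 8 * np.pi, 400))
        target = np.roll(u, -1)                                 # e.g. a one-step-ahead version of the input
        X = run_reservoir(u)
        ridge = 1e-4
        W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
        prediction = X @ W_out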

  3. Diagonal recurrent neural network based adaptive control of nonlinear dynamical systems using lyapunov stability criterion.

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P

    2017-03-01

    In this paper, adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as a controller. Also, in example 4, the FCRNN is investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, on which the proposed controller is applied. The results so obtained show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
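
    The defining feature of the DRNN, self-recurrent hidden neurons without cross-connections in the hidden layer, can be sketched as a hidden state updated through a diagonal (per-neuron) feedback weight vector; the sizes and weights below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 2, 6, 1
        W_in = rng.normal(scale=0.2, size=(n_hid, n_in))
        w_diag = rng.uniform(0.1, 0.9, size=n_hid)      # one self-recurrence weight per hidden neuron
        W_out = rng.normal(scale=0.2, size=(n_out, n_hid))

        def drnn_step(x_t, h_prev):
            """Diagonal recurrence: each hidden neuron feeds back only onto itself."""
            h_t = np.tanh(W_in @ x_t + w_diag * h_prev)
            y_t = W_out @ h_t
            return h_t, y_t

        h = np.zeros(n_hid)
        for x in rng.normal(size=(5, n_in)):
            h, y = drnn_step(x, h)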

  4. Medical Concept Normalization in Social Media Posts with Recurrent Neural Networks.

    Science.gov (United States)

    Tutubalina, Elena; Miftahutdinov, Zulfat; Nikolenko, Sergey; Malykh, Valentin

    2018-06-12

    Text mining of scientific libraries and social media has already proven itself as a reliable tool for drug repurposing and hypothesis generation. The task of mapping a disease mention to a concept in a controlled vocabulary, typically to the standard thesaurus in the Unified Medical Language System (UMLS), is known as medical concept normalization. This task is challenging due to the differences in the use of medical terminology between health care professionals and social media texts coming from the lay public. To bridge this gap, we use sequence learning with recurrent neural networks and semantic representation of one- or multi-word expressions: we develop end-to-end architectures directly tailored to the task, including bidirectional Long Short-Term Memory, Gated Recurrent Units with an attention mechanism, and additional semantic similarity features based on UMLS. Our evaluation against a standard benchmark shows that recurrent neural networks improve results over an effective baseline for classification based on convolutional neural networks. A qualitative examination of mentions discovered in a dataset of user reviews collected from popular online health information platforms as well as a quantitative evaluation both show improvements in the semantic representation of health-related expressions in social media. Copyright © 2018. Published by Elsevier Inc.

  5. Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks.

    Science.gov (United States)

    Babaei, Sepideh; Geranmayeh, Amir; Seyyedsalehi, Seyyed Ali

    2010-12-01

    The supervised learning of recurrent neural networks well-suited for prediction of protein secondary structures from the underlying amino acids sequence is studied. Modular reciprocal recurrent neural networks (MRR-NN) are proposed to model the strong correlations between adjacent secondary structure elements. Besides, a multilayer bidirectional recurrent neural network (MBR-NN) is introduced to capture the long-range intramolecular interactions between amino acids in formation of the secondary structure. The final modular prediction system is devised based on the interactive integration of the MRR-NN and the MBR-NN structures to arbitrarily engage the neighboring effects of the secondary structure types concurrent with memorizing the sequential dependencies of amino acids along the protein chain. The advanced combined network augments the percentage accuracy (Q₃) to 79.36% and boosts the segment overlap (SOV) up to 70.09% when tested on the PSIPRED dataset in three-fold cross-validation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  6. Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High‐Resolution Spectral Features

    Directory of Open Access Journals (Sweden)

    Hyoung‐Gook Kim

    2017-12-01

    Full Text Available Recently, deep recurrent neural networks have achieved great success in various machine learning tasks, and have also been applied for sound event detection. The detection of temporally overlapping sound events in realistic environments is much more challenging than in monophonic detection problems. In this paper, we present an approach to improve the accuracy of polyphonic sound event detection in multichannel audio based on gated recurrent neural networks in combination with auditory spectral features. In the proposed method, human hearing perception‐based spatial and spectral‐domain noise‐reduced harmonic features are extracted from multichannel audio and used as high‐resolution spectral inputs to train gated recurrent neural networks. This provides a fast and stable convergence rate compared to long short‐term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms the conventional approaches.

  7. Detection of nonstationary transition to synchronized states of a neural network using recurrence analyses

    Science.gov (United States)

    Budzinski, R. C.; Boaretto, B. R. R.; Prado, T. L.; Lopes, S. R.

    2017-07-01

    We study the stability of asymptotic states displayed by a complex neural network. We focus on the loss of stability of a stationary state of networks, using recurrence quantifiers as tools to diagnose local and global stabilities as well as the multistability of a coupled neural network. Numerical simulations of a neural network composed of 1024 neurons in a small-world connection scheme are performed using the model of Braun et al. [Int. J. Bifurcation Chaos 08, 881 (1998), 10.1142/S0218127498000681], which is a modification of the Hodgkin-Huxley model [J. Phys. 117, 500 (1952)]. To validate the analyses, the results are compared with those produced by Kuramoto's order parameter [Chemical Oscillations, Waves, and Turbulence (Springer-Verlag, Berlin Heidelberg, 1984)]. We show that recurrence tools making use of just integrated signals provided by the networks, such as local field potential (LFP) signals or mean field values, bring new results on the understanding of neural behavior occurring before the synchronization states. In particular we show the occurrence of different stationary and nonstationary asymptotic states.

  8. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits

    Science.gov (United States)

    2018-01-01

    Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. PMID:29408930

  9. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2018-02-01

    Full Text Available Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures (recurrent connections, shared feed-forward projections, and shared gain fluctuations) on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.

  10. Study of hourly and daily solar irradiation forecast using diagonal recurrent wavelet neural networks

    International Nuclear Information System (INIS)

    Cao Jiacong; Lin Xingchun

    2008-01-01

    In recent years, an accurate forecast of solar irradiation has been required for various solar energy applications and environmental impact analyses. Comparatively, various irradiation forecast models based on artificial neural networks (ANN) perform much better in accuracy than many conventional prediction models. However, the forecast precision of most existing ANN-based forecast models has not been satisfactory to researchers and engineers so far, and the generalization capability of these networks needs further improvement. Combining the prominent dynamic properties of a recurrent neural network (RNN) with the enhanced ability of a wavelet neural network (WNN) in mapping nonlinear functions, a diagonal recurrent wavelet neural network (DRWNN) is newly established in this paper to perform fine forecasting of hourly and daily global solar irradiance. Some additional steps, e.g. applying historical information of cloud cover to the sample data sets and the cloud cover from the weather forecast to the network input, are adopted to help enhance the forecast precision. Besides, a specially scheduled two-phase training algorithm is adopted. As examples, both hourly and daily irradiance forecasts are completed using sample data sets in Shanghai and Macau, and comparisons between irradiation models show that the DRWNN models are definitely more accurate.
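
    Wavelet neural networks replace sigmoidal hidden activations with dilated and translated wavelets; combined with a diagonal self-recurrent connection this gives a DRWNN-style hidden unit. The sketch below uses a Morlet-type wavelet with illustrative dilation and translation parameters; the record's full network, two-phase training schedule, and cloud-cover inputs are not reproduced here.

```python
import numpy as np

# Sketch of a diagonal recurrent wavelet unit: the self-recurrent net input is
# passed through a Morlet-type wavelet instead of a sigmoid. Parameters are
# illustrative, not the DRWNN settings used in the record above.
def morlet(z):
    return np.cos(1.75 * z) * np.exp(-0.5 * z ** 2)

def drwnn_unit(x_seq, w_in, w_self, a=1.0, b=0.0):
    """One hidden unit: net(t) = w_self*h(t-1) + w_in.x(t); h(t) = psi((net-b)/a)."""
    h = 0.0
    outputs = []
    for x in x_seq:
        net = w_self * h + np.dot(w_in, x)
        h = morlet((net - b) / a)        # dilation a, translation b
        outputs.append(h)
    return np.array(outputs)

x_seq = np.random.default_rng(2).normal(size=(50, 3))   # toy input sequence
h_seq = drwnn_unit(x_seq, w_in=np.array([0.3, -0.2, 0.5]), w_self=0.6)
```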

  11. Reward-based training of recurrent neural networks for cognitive and value-based tasks.

    Science.gov (United States)

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2017-01-13

    Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal's internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task.
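
    The core of reward-based training with a value network can be sketched as REINFORCE with a learned baseline: the decision (policy) network is updated along the advantage-weighted log-likelihood gradient, while the value network is trained to predict the reward. The toy two-choice task, linear-softmax policy, and learning rates below are illustrative assumptions and omit the recurrent dynamics used in the record.

```python
import numpy as np

# Minimal sketch of reward-based training with a value baseline (REINFORCE on a
# toy two-choice task). The task, sizes, and learning rates are made-up
# illustrations, not the authors' decision/value network architecture.
rng = np.random.default_rng(0)
n_inputs, n_choices = 4, 2
W_pi = rng.normal(scale=0.1, size=(n_choices, n_inputs))   # "decision network" (linear-softmax)
w_v = np.zeros(n_inputs)                                    # "value network" (linear baseline)
lr_pi, lr_v = 0.05, 0.1

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for trial in range(2000):
    x = rng.normal(size=n_inputs)              # stimulus for this trial
    p = softmax(W_pi @ x)                      # action probabilities
    a = rng.choice(n_choices, p=p)             # sampled action
    # toy reward rule: action 0 is rewarded when x[0] > 0, action 1 otherwise
    r = 1.0 if (a == 0) == (x[0] > 0) else 0.0
    v = w_v @ x                                # predicted reward (baseline)
    # REINFORCE with baseline: advantage-weighted log-likelihood gradient
    grad_logp = -p[:, None] * x[None, :]
    grad_logp[a] += x
    W_pi += lr_pi * (r - v) * grad_logp
    w_v += lr_v * (r - v) * x                  # value net trained to predict reward
```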

  12. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, and fault correction process is assumed to be a delayed process. On the other hand, the artificial neural networks model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic networks configuration approach is developed with genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are developed with respect to a real data set

  13. Distributed Recurrent Neural Forward Models with Neural Control for Complex Locomotion in Walking Robots

    DEFF Research Database (Denmark)

    Dasgupta, Sakyasingha; Goldschmidt, Dennis; Wörgötter, Florentin

    2015-01-01

    Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain ... here, an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated ...

  14. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
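
    The "simple model implementing a linear combination of center and surround modulation" can be read as summing two sinusoidal modulations at the same frequency with a surround weight, so the predicted modulation depth of the center depends on the relative phase. The surround weight in the sketch below is an arbitrary illustrative value, not a parameter fitted in the study.

```python
import numpy as np

# Sketch of a linear center+surround combination: both modulations are sinusoids
# at the same frequency; the effective (perceived) modulation is their weighted
# sum, so its depth depends on the relative phase. The surround weight w is an
# illustrative assumption, not a fitted value from the study.
def perceived_depth(center_amp, surround_amp, rel_phase, w=-0.4):
    # amplitude of  center_amp*sin(wt) + w*surround_amp*sin(wt + rel_phase)
    a = center_amp + w * surround_amp * np.cos(rel_phase)
    b = w * surround_amp * np.sin(rel_phase)
    return np.hypot(a, b)

phases = np.linspace(0, 2 * np.pi, 9)
depths = perceived_depth(1.0, 1.0, phases)   # predicted modulation depth vs. phase
```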

  15. A one-layer recurrent neural network for non-smooth convex optimization subject to linear inequality constraints

    International Nuclear Information System (INIS)

    Liu, Xiaolan; Zhou, Mi

    2016-01-01

    In this paper, a one-layer recurrent network is proposed for solving a non-smooth convex optimization subject to linear inequality constraints. Compared with the existing neural networks for optimization, the proposed neural network is capable of solving more general convex optimization with linear inequality constraints. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds.
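
    One-layer recurrent networks of this kind are typically continuous-time flows along a subgradient of the objective plus a penalized subgradient of the violated inequality constraints. The Euler-discretized sketch below solves a small instance (minimize the l1-norm subject to Ax <= b); the penalty weight and step size are illustrative choices rather than the bounds derived in the record.

```python
import numpy as np

# Euler-discretized sketch of a one-layer recurrent network for non-smooth convex
# optimization with linear inequality constraints: the state flows along a
# subgradient of f(x) = ||x||_1 plus a penalty subgradient for violated rows of
# Ax <= b. sigma and the step size are illustrative; the record derives bounds.
A = np.array([[1.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, 2.0])
sigma, dt = 5.0, 0.01

x = np.array([3.0, -2.0])                       # arbitrary initial state
for _ in range(5000):
    sub_f = np.sign(x)                          # subgradient of ||x||_1
    violated = (A @ x - b) > 0                  # currently violated constraints
    sub_pen = A.T @ violated.astype(float)      # subgradient of the penalty term
    x = x - dt * (sub_f + sigma * sub_pen)      # recurrent-network state update
# x approaches a minimizer of ||x||_1 over {x : Ax <= b} (here near the origin)
```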

  16. Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.

    Science.gov (United States)

    Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng

    2016-02-01

    This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.

  17. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    Directory of Open Access Journals (Sweden)

    Erik Marchi

    2017-01-01

    Full Text Available In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases.
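
    The reconstruction-error mechanism described above can be sketched with a small LSTM predictor: it is trained on normal material to predict the next spectral frame from the preceding frames, and at test time the per-frame prediction error is thresholded to flag novel events. Feature dimensionality, hidden size, training length, and the threshold rule below are illustrative assumptions, not the evaluated configuration.

```python
import torch
import torch.nn as nn

# Sketch of the reconstruction-error idea: an LSTM predicts the next spectral
# frame from previous frames, and the prediction error serves as the novelty
# signal. Sizes, training length, and threshold rule are illustrative.
class FramePredictor(nn.Module):
    def __init__(self, n_feat=40, n_hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_feat)

    def forward(self, frames):                 # frames: (batch, time, n_feat)
        h, _ = self.lstm(frames)
        return self.out(h)                     # prediction of the next frame at each step

model = FramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

normal = torch.randn(8, 100, 40)               # surrogate "normal" spectral sequences
for _ in range(10):                            # brief training on normal data only
    pred = model(normal[:, :-1])               # predict frame t+1 from frames <= t
    loss = loss_fn(pred, normal[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

test = torch.randn(1, 100, 40)
with torch.no_grad():
    err = ((model(test[:, :-1]) - test[:, 1:]) ** 2).mean(dim=-1)  # per-frame error
novel = err > err.mean() + 2 * err.std()       # simple threshold -> novelty flags
```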

  18. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  19. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection.

    Science.gov (United States)

    Marchi, Erik; Vesperini, Fabio; Squartini, Stefano; Schuller, Björn

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases.

  20. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ordóñez

    2016-01-01

    Full Text Available Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation.

  1. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.
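
    A compact sketch of the convolutional-plus-LSTM framework follows: temporal convolutions extract local features from a raw multi-sensor window, a two-layer LSTM models their temporal dynamics, and the final time step is mapped to class scores. The channel counts, filter sizes, and number of classes are illustrative assumptions, not the published architecture or hyperparameters.

```python
import torch
import torch.nn as nn

# Minimal convolutional + LSTM activity-recognition sketch in the spirit of the
# framework above: temporal convolutions over raw multi-sensor windows, an LSTM
# over the resulting feature sequence, and a classifier on the last time step.
# Layer sizes and class count are illustrative assumptions.
class ConvLSTMHAR(nn.Module):
    def __init__(self, n_channels=9, n_classes=6, n_filters=32, n_hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, n_filters, kernel_size=5), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=5), nn.ReLU(),
        )
        self.lstm = nn.LSTM(n_filters, n_hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(n_hidden, n_classes)

    def forward(self, x):              # x: (batch, sensor channels, time)
        f = self.conv(x)               # (batch, filters, time')
        f = f.transpose(1, 2)          # (batch, time', filters) for the LSTM
        h, _ = self.lstm(f)
        return self.fc(h[:, -1])       # class scores from the last time step

model = ConvLSTMHAR()
window = torch.randn(4, 9, 128)        # 4 windows, 9 sensor channels, 128 samples
scores = model(window)                 # (4, 6) class scores
```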

  2. ProLanGO: Protein Function Prediction Using Neural Machine Translation Based on a Recurrent Neural Network.

    Science.gov (United States)

    Cao, Renzhi; Freitas, Colton; Chan, Leong; Sun, Miao; Jiang, Haiqing; Chen, Zhangxin

    2017-10-17

    With the development of next generation sequencing techniques, it is fast and cheap to determine protein sequences but relatively slow and expensive to extract useful information from protein sequences because of limitations of traditional biological experimental techniques. Protein function prediction has been a long-standing challenge to fill the gap between the huge number of protein sequences and their known functions. In this paper, we propose a novel method to convert the protein function prediction problem into a language translation problem, from the newly proposed protein sequence language "ProLan" to the protein function language "GOLan", and build a neural machine translation model based on recurrent neural networks to translate the "ProLan" language into the "GOLan" language. We blindly tested our method by participating in the latest (third) Critical Assessment of Function Annotation (CAFA 3) in 2016, and also evaluated the performance of our method on selected proteins whose functions were released after the CAFA competition. The good performance on the training and testing datasets demonstrates that our newly proposed method is a promising direction for protein function prediction. In summary, we propose for the first time a method which converts the protein function prediction problem to a language translation problem and applies a neural machine translation model for protein function prediction.

  3. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    Directory of Open Access Journals (Sweden)

    Eduard eGrinke

    2015-10-01

    Full Text Available Walking animals, like insects, with little neural computing can effectively perform complex behaviors. They can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations as well as escaping from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.

  4. Forecasting energy market indices with recurrent neural networks: Case study of crude oil price fluctuations

    International Nuclear Information System (INIS)

    Wang, Jie; Wang, Jun

    2016-01-01

    In an attempt to improve the forecasting accuracy of crude oil price fluctuations, a new neural network architecture is established in this work which combines a multilayer perceptron and ERNN (Elman recurrent neural networks) with a stochastic time effective function. ERNN is a time-varying predictive control system and is developed with the ability to keep memory of recent events in order to predict future output. The stochastic time effective function reflects the idea that recent information has a stronger effect on investors than older information. With the established model, the empirical research shows good performance in testing the predictive effects on four different time series indices. Compared to other models, the present model can evaluate data from the 1990s to today with high accuracy and speed. The applied CID (complexity invariant distance) analysis and multiscale CID analysis are provided as new, useful measures showing that the proposed model has a better predicting ability than other traditional models. - Highlights: • A new forecasting model is developed by a random Elman recurrent neural network. • The forecasting accuracy of crude oil price fluctuations is improved by the model. • The forecasting results of the proposed model are more accurate than those of the compared models. • Two new distance analysis methods are applied to confirm the predicting results.
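
    The time-effective weighting idea, i.e. that recent samples should contribute more to training than old ones, can be sketched with a simple exponentially decaying sample weight applied in a weighted least-squares fit of a one-step-ahead predictor. The exponential form and the time constant below are stand-ins for the stochastic time effective function of the record, which they do not reproduce.

```python
import numpy as np

# Sketch of the "time effective" weighting idea: recent samples get larger
# weight in the training loss. The exponential decay and tau are illustrative
# assumptions, not the stochastic time effective function of the record above.
rng = np.random.default_rng(1)
t = np.arange(1000)                        # sample times, oldest to newest
tau = 250.0
phi = np.exp((t - t[-1]) / tau)            # weight ~1 for recent samples, ->0 for old ones

y = np.cumsum(rng.normal(size=1001))       # surrogate "price" series
X = np.stack([y[:-1], np.ones(1000)], axis=1)   # one-step predictor y[t] ~ a*y[t-1] + b
Xw = X * phi[:, None]                      # apply per-sample weights
a, b = np.linalg.solve(Xw.T @ X, Xw.T @ y[1:])  # weighted least-squares fit
```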

  5. Indirect adaptive fuzzy wavelet neural network with self-recurrent consequent part for AC servo system.

    Science.gov (United States)

    Hou, Runmin; Wang, Li; Gao, Qiang; Hou, Yuanglong; Wang, Chao

    2017-09-01

    This paper proposes a novel indirect adaptive fuzzy wavelet neural network (IAFWNN) to handle the nonlinearity, wide load variations, time variation, and uncertain disturbances of the AC servo system. In the proposed approach, the self-recurrent wavelet neural network (SRWNN) is employed to construct an adaptive self-recurrent consequent part for each fuzzy rule of the TSK fuzzy model. For the IAFWNN controller, the online learning algorithm is based on the back propagation (BP) algorithm. Moreover, an improved particle swarm optimization (IPSO) is used to adapt the learning rate. An adaptive SRWNN identifier offers real-time gradient information to the adaptive fuzzy wavelet neural controller to effectively overcome the impact of parameter variations, load disturbances and other uncertainties, and gives good dynamic performance. The asymptotic stability of the system is guaranteed by using the Lyapunov method. The results of the simulation and the prototype test prove that the proposed methods are effective and suitable. Copyright © 2017. Published by Elsevier Ltd.

  6. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    Science.gov (United States)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. For this purpose, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, DRNN is constructed by the stacks of the recurrent hidden layer to automatically extract the features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that the proposed method is more effective than traditional intelligent fault diagnosis methods.

  7. Identification of Jets Containing b-Hadrons with Recurrent Neural Networks at the ATLAS Experiment

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    A novel b-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS Experiment. This talk presents the expected performance of the RNN based b-tagging in simulated $t\bar{t}$ events. The RNN based b-tagging processes properties of tracks associated to jets which are represented in sequences. In contrast to traditional impact-parameter-based b-tagging algorithms which assume the tracks of jets are independent from each other, RNN based b-tagging can exploit the spatial and kinematic correlations of tracks which are initiated from the same b-hadrons. The neural network nature of the tagging algorithm also allows the flexibility of extending input features to include more track properties than can be effectively used in traditional algorithms.

  8. Adaptive complementary fuzzy self-recurrent wavelet neural network controller for the electric load simulator system

    Directory of Open Access Journals (Sweden)

    Wang Chao

    2016-03-01

    Full Text Available Due to the complexities existing in the electric load simulator, this article develops a high-performance nonlinear adaptive controller to improve the torque tracking performance of the electric load simulator, which mainly consists of an adaptive fuzzy self-recurrent wavelet neural network controller with variable structure (VSFSWC) and a complementary controller. The VSFSWC is clearly and easily used for real-time systems and greatly improves the convergence rate and control precision. The complementary controller is designed to eliminate the effect of the approximation error between the proposed neural network controller and the ideal feedback controller without chattering phenomena. Moreover, adaptive learning laws are derived to guarantee the system stability in the sense of the Lyapunov theory. Finally, the hardware-in-the-loop simulations are carried out to verify the feasibility and effectiveness of the proposed algorithms in different working styles.

  9. New results on global exponential stability of recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Xu Shengyuan; Chu Yuming; Lu Junwei

    2006-01-01

    This Letter provides new sufficient conditions for the existence, uniqueness and global exponential stability of the equilibrium point of recurrent neural networks with time-varying delays by employing Lyapunov functions and using the Halanay inequality. The time-varying delays are not necessarily differentiable. Both Lipschitz continuous activation functions and monotone nondecreasing activation functions are considered. The derived stability criteria are expressed in terms of linear matrix inequalities (LMIs), which can be checked easily by resorting to recently developed algorithms solving LMIs. Furthermore, the proposed stability results are less conservative than some previous ones in the literature, which is demonstrated via some numerical examples

  10. Identification of serial number on bank card using recurrent neural network

    Science.gov (United States)

    Liu, Li; Huang, Linlin; Xue, Jian

    2018-04-01

    Identification of the serial number on a bank card has many applications. Due to different number printing modes, complex backgrounds, distortions in shape, etc., it is quite challenging to achieve high identification accuracy. In this paper, we propose a method using Normalization-Cooperated Gradient Feature (NCGF) and a Recurrent Neural Network (RNN) based on Long Short-Term Memory (LSTM) for serial number identification. The NCGF maps the gradient direction elements of the original image to direction planes such that the RNN with direction planes as input can recognize numbers more accurately. Taking advantage of NCGF and RNN, we achieve 90% digit string recognition accuracy.

  11. New results on global exponential stability of recurrent neural networks with time-varying delays

    Energy Technology Data Exchange (ETDEWEB)

    Xu Shengyuan [Department of Automation, Nanjing University of Science and Technology, Nanjing 210094 (China)]. E-mail: syxu02@yahoo.com.cn; Chu Yuming [Department of Mathematics, Huzhou Teacher's College, Huzhou, Zhejiang 313000 (China)]; Lu Junwei [School of Electrical and Automation Engineering, Nanjing Normal University, 78 Bancang Street, Nanjing, 210042 (China)]

    2006-04-03

    This Letter provides new sufficient conditions for the existence, uniqueness and global exponential stability of the equilibrium point of recurrent neural networks with time-varying delays by employing Lyapunov functions and using the Halanay inequality. The time-varying delays are not necessarily differentiable. Both Lipschitz continuous activation functions and monotone nondecreasing activation functions are considered. The derived stability criteria are expressed in terms of linear matrix inequalities (LMIs), which can be checked easily by resorting to recently developed algorithms solving LMIs. Furthermore, the proposed stability results are less conservative than some previous ones in the literature, which is demonstrated via some numerical examples.

  12. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    Science.gov (United States)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detection of a vehicle passage through a checkpoint. As an input to the classifier we use multidimensional signals of various sensors that are installed on the checkpoint. Obtained results demonstrate that the previous approach to handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by an automatic RNN training on an appropriately labelled data.

  13. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    Science.gov (United States)

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

    Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods have achieved better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit a hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve their spatial structure. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
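
    The loss design described above, in which semantic similarity is preserved while the quantization error of binarizing continuous codes is penalized, can be sketched as a pairwise contrastive term plus a penalty that pushes code entries toward +/-1. The margin and weighting below are illustrative assumptions, and this generic form is not the exact HRNH objective.

```python
import torch

# Sketch of a pairwise hashing loss with a quantization penalty: similar pairs
# are pulled together, dissimilar pairs pushed beyond a margin, and continuous
# codes are penalized for straying from +/-1 so binarization loses little
# information. Margin and weights are illustrative assumptions.
def hashing_loss(u1, u2, similar, margin=2.0, q_weight=0.1):
    d = ((u1 - u2) ** 2).sum(dim=1)                            # squared code distance
    pair = torch.where(similar, d, torch.clamp(margin - d, min=0.0))
    quant = ((u1.abs() - 1) ** 2).sum(1) + ((u2.abs() - 1) ** 2).sum(1)
    return (pair + q_weight * quant).mean()

u1 = torch.randn(16, 48, requires_grad=True)                   # continuous codes
u2 = torch.randn(16, 48)
similar = torch.randint(0, 2, (16,), dtype=torch.bool)
loss = hashing_loss(torch.tanh(u1), torch.tanh(u2), similar)   # tanh keeps codes in (-1, 1)
loss.backward()
```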

  14. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    Science.gov (United States)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. However, recently, recognition systems have dealt with text blocks and their compound text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is, thus, avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach highly improves performance.

  15. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.

  16. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model so that the nature of the updates is assumed to be not constrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the precise super-Turing computational power (as are the static analog neural networks), irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  17. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    Science.gov (United States)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles.

  18. Global exponential stability and periodicity of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions

    International Nuclear Information System (INIS)

    Lu Junguo

    2008-01-01

    In this paper, the global exponential stability and periodicity for a class of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions are addressed by constructing suitable Lyapunov functionals and utilizing some inequality techniques. We first prove global exponential convergence to 0 of the difference between any two solutions of the original reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions; the existence and uniqueness of the equilibrium are direct results of this procedure. This approach is different from the usually used one, where the existence and uniqueness of the equilibrium and its stability are proved in two separate steps. Furthermore, we prove periodicity of the reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions. Sufficient conditions ensuring the global exponential stability and the existence of periodic oscillatory solutions for the reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions are given. These conditions are easy to check and are of leading significance in the design and application of reaction-diffusion recurrent neural networks with delays. Finally, two numerical examples are given to show the effectiveness of the obtained results.

  19. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    Science.gov (United States)

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, these two nonlinear recurrent neural networks are proved to be convergent within finite time. Besides, by solving the differential equations, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including the application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
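
    A standard way to build such finite-time convergent networks is the Zhang-type design: define the error E(t) = A(t)X(t) - B(t) and impose dE/dt = -gamma*Phi(E) with a sign-power (nonlinear) activation Phi, which yields the network dynamics integrated below. The activation, gain, and example matrices are illustrative assumptions; the record's two activation functions and derived time bounds are not reproduced here.

```python
import numpy as np

# Euler-discretized sketch of a Zhang-type (nonlinearly activated) recurrent
# network for a time-varying linear matrix equation A(t)X(t) = B(t): the error
# E = A X - B is forced to obey dE/dt = -gamma*Phi(E), where Phi is a sign-power
# activation that yields finite-time convergence. Phi, gamma, and the example
# A(t), B(t) are illustrative assumptions, not the record's exact design.
def phi(E, r=0.5):
    return np.sign(E) * np.abs(E) ** r + E      # sign-power plus linear term

def A(t):  return np.array([[2 + np.sin(t), 0.5], [0.5, 2 + np.cos(t)]])
def dA(t): return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
def B(t):  return np.array([[np.sin(t)], [np.cos(t)]])
def dB(t): return np.array([[np.cos(t)], [-np.sin(t)]])

gamma, dt = 10.0, 1e-3
X = np.zeros((2, 1))                            # arbitrary initial state
for k in range(20000):
    t = k * dt
    E = A(t) @ X - B(t)
    # A*dX/dt = -gamma*Phi(E) - dA*X + dB  (so that dE/dt = -gamma*Phi(E))
    dX = np.linalg.solve(A(t), -gamma * phi(E) - dA(t) @ X + dB(t))
    X = X + dt * dX
# X now tracks the time-varying solution A(t)^{-1} B(t)
```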

  20. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    Science.gov (United States)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

    A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm, is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information, provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.

  1. Neural stem cell-derived exosomes mediate viral entry

    Directory of Open Access Journals (Sweden)

    Sims B

    2014-10-01

    Full Text Available Background: Viruses enter host cells through interactions of viral ligands with cellular receptors. Viruses can also enter cells in a receptor-independent fashion. Mechanisms regarding the receptor-independent viral entry into cells have not been fully elucidated. Exosomal trafficking between cells may offer a mechanism by which viruses can enter cells. Methods: To investigate the role of exosomes on cellular viral entry, we employed neural stem cell-derived exosomes and adenovirus type 5 (Ad5) for the proof-of-principle study. Results: Exosomes significantly enhanced Ad5 entry in Coxsackie virus and adenovirus receptor (CAR)-deficient cells, in which Ad5 only had very limited entry. The exosomes were shown to contain T-cell immunoglobulin mucin protein 4 (TIM-4), which binds phosphatidylserine. Treatment with anti-TIM-4 antibody significantly blocked the exosome-mediated Ad5 entry. Conclusion: Neural stem cell-derived exosomes mediated significant cellular entry of Ad5 in a receptor-independent fashion. This mediation may be hampered by an antibody specifically targeting TIM-4 on exosomes. This set of results will benefit further elucidation of virus/exosome pathways, which would contribute to reducing natural viral infection by developing therapeutic agents or vaccines. Keywords: neural stem cell-derived exosomes, adenovirus type 5, TIM-4, viral entry, phospholipids

  2. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    Science.gov (United States)

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN, at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.

  3. Analysis of recurrent neural networks for short-term energy load forecasting

    Science.gov (United States)

    Di Persio, Luca; Honchar, Oleksandr

    2017-11-01

    Short-term forecasts have recently gained increasing attention because of the rise of competitive electricity markets. In fact, short-term forecasts of possible future loads turn out to be fundamental to building efficient energy management strategies as well as to avoiding energy wastage. Such challenges are difficult to tackle both from a theoretical and an applied point of view. The latter task requires sophisticated methods to manage multidimensional time series related to stochastic phenomena which are often highly interconnected. In the present work we first review novel approaches to energy load forecasting based on recurrent neural networks, focusing our attention on long short-term memory (LSTM) architectures. This type of artificial neural network has been widely applied to problems dealing with sequential data, as happens, e.g., in socio-economic settings, for text recognition purposes, or for video signals, always showing its effectiveness in modelling complex temporal data. Moreover, we consider different novel variations of basic LSTMs, such as the sequence-to-sequence approach and bidirectional LSTMs, aiming at providing effective models for energy load data. Last but not least, we test all the described algorithms on real energy load data, showing not only that deep recurrent networks can be successfully applied to energy load forecasting, but also that this approach can be extended to other problems based on time series prediction.
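
    As a concrete, if simplified, illustration of the kind of forecaster discussed above, the sketch below maps a window of past hourly loads to the next 24 hourly loads with a bidirectional LSTM (one of the variants mentioned). The window length, layer sizes, and forecast horizon are assumptions.

        import torch
        import torch.nn as nn

        class LoadForecaster(nn.Module):
            """Bidirectional LSTM encoder over past loads, linear head
            predicting the next `horizon` hourly loads."""
            def __init__(self, horizon=24, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                                    num_layers=2, batch_first=True,
                                    bidirectional=True)
                self.head = nn.Linear(2 * hidden, horizon)

            def forward(self, past_load):
                # past_load: (batch, window) past hourly consumption
                out, _ = self.lstm(past_load.unsqueeze(-1))
                return self.head(out[:, -1])  # (batch, horizon)

        # usage sketch: fit with mean-squared error on historical load windows
        model = LoadForecaster()
        forecast = model(torch.randn(8, 168))  # one week of hourly history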

  4. An automatic microseismic or acoustic emission arrival identification scheme with deep recurrent neural networks

    Science.gov (United States)

    Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi

    2018-02-01

    The conventional arrival pick-up algorithms cannot avoid the manual modification of the parameters for the simultaneous identification of multiple events under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, in this study an algorithm was proposed which had the ability to pick up the arrival of microseismic or acoustic emission events based on deep recurrent neural networks. The arrival identification was performed using two important steps, which included a training phase and a testing phase. The training process was mathematically modelled by deep recurrent neural networks using the Long Short-Term Memory architecture. During the testing phase, the learned weights were utilized to identify the arrivals through the microseismic/acoustic emission data sets. The data sets were obtained from rock physics acoustic emission experiments. In order to obtain the data sets under different SNRs, this study added random noise to the raw experimental data sets. The results showed that the proposed method was able to attain a hit-rate above 80 per cent at an SNR of 0 dB, and of approximately 70 per cent at an SNR of -5 dB, with an absolute error within 10 sampling points. These results indicated that the proposed method had high selection precision and robustness.

  5. Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network.

    Science.gov (United States)

    Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Yu, Junzhi; Tan, Min

    2016-02-01

    The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by the shape theory. This theory can generate a set of feasible formations that share the same relative relation among robots. An optimal formation is one selected from the feasible formation set that has the minimum distance to the initial formation of the multirobot system. The formation problem is thus transformed into an optimization problem. In addition, the orientation, scale, and admissible range of the formation can also be considered as constraints in the optimization problem. Furthermore, if all robots are identical, their positions in the system are exchangeable. Then, each robot does not necessarily move to one specific position in the formation. In this case, the optimal formation problem becomes a combinatorial optimization problem, whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinatorial optimization problem can be approximately transformed into a convex optimization problem. Due to the involvement of the Euclidean norm in the distance, the objective functions of these optimization problems are nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, some simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.

  6. Using deep recurrent neural network for direct beam solar irradiance cloud screening

    Science.gov (United States)

    Chen, Maosi; Davis, John M.; Liu, Chaoshun; Sun, Zhibin; Zempila, Melina Maria; Gao, Wei

    2017-09-01

    Cloud screening is an essential procedure for in-situ calibration and atmospheric property retrieval with the (UV-)MultiFilter Rotating Shadowband Radiometer [(UV-)MFRSR]. A previous study explored a cloud screening algorithm for direct-beam (UV-)MFRSR voltage measurements based on a stability assumption over a long time period (typically half a day or a whole day). Designing such an algorithm requires in-depth understanding of radiative transfer and delicate data manipulation. Recent rapid developments in deep neural networks and computing hardware have opened a window for modeling complicated end-to-end systems with a standardized strategy. In this study, a multi-layer dynamic bidirectional recurrent neural network is built for determining the cloudiness at each time point, trained with a 17-year dataset and tested with another 1-year dataset. The dataset consists of the daily 3-minute cosine-corrected voltages, airmasses, and the corresponding cloud/clear-sky labels at two stations of the USDA UV-B Monitoring and Research Program. The results show that the optimized neural network model (3-layer, 250 hidden units, and 80 epochs of training) has an overall test accuracy of 97.87% (97.56% for the Oklahoma site and 98.16% for the Hawaii site). Generally, the neural network model grasps the key concept of the original model, using data from the entire day rather than short nearby measurements to perform cloud screening. A scrutiny of the logits layer suggests that the neural network model automatically learns a way to calculate a quantity similar to total optical depth and finds an appropriate threshold for cloud screening.

  7. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    Science.gov (United States)

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

    The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.

  8. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Full Text Available Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the
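
    A minimal sketch of the three-threshold rule as it is described in the abstract: only synapses from active inputs change, potentiation or depression depends on where the neuron's local field (recurrent input plus a strong afferent drive from the presented pattern) falls relative to an intermediate threshold, and nothing changes outside the outer thresholds. The threshold values, learning rate, afferent strength, and weight clipping are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        def three_threshold_update(W, pattern, theta_low, theta_mid, theta_high,
                                   lr=0.01, afferent=1.0):
            """Apply one presentation of a binary pattern to the recurrent
            weight matrix W of excitatory binary neurons."""
            h = W @ pattern + afferent * pattern   # local fields with afferent drive
            potentiate = (h >= theta_mid) & (h < theta_high)
            depress = (h > theta_low) & (h < theta_mid)
            active = pattern > 0                   # only synapses from active inputs
            W[np.ix_(potentiate, active)] += lr
            W[np.ix_(depress, active)] -= lr
            np.fill_diagonal(W, 0.0)               # no self-connections
            np.clip(W, 0.0, None, out=W)           # keep excitatory weights >= 0
            return W

        # usage sketch
        W = np.zeros((100, 100))
        pattern = (np.random.rand(100) < 0.5).astype(float)
        W = three_threshold_update(W, pattern, 0.2, 0.6, 1.5)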

  9. Improving protein disorder prediction by deep bidirectional long short-term memory recurrent neural networks.

    Science.gov (United States)

    Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-03-01

    Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenging problem in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have implemented deep bidirectional LSTM recurrent neural networks in the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, has steadily improved over a similar method using a traditional, window-based neural network (SPINE-D) in all datasets tested without separate training on short and long disordered regions. Independent tests on four other datasets, including the datasets from critical assessment of structure prediction (CASP) techniques and >10 000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTM with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-Disorder is available as a web server and as a standalone program at: http://sparks-lab.org/server/SPOT-disorder/index.php . j.hanson@griffith.edu.au or yuedong.yang@griffith.edu.au or yaoqi.zhou@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  10. The synaptic properties of cells define the hallmarks of interval timing in a recurrent neural network.

    Science.gov (United States)

    Pérez, Oswaldo; Merchant, Hugo

    2018-04-03

    Extensive research has described two key features of interval timing. The bias property is associated with accuracy and implies that time is overestimated for short intervals and underestimated for long intervals. The scalar property is linked to precision and states that the variability of interval estimates increases as a function of interval duration. The neural mechanisms behind these properties are not well understood. Here we implemented a recurrent neural network that mimics a cortical ensemble and includes cells that show paired-pulse facilitation and slow inhibitory synaptic currents. The network produces interval selective responses and reproduces both bias and scalar properties when a Bayesian decoder reads its activity. Notably, the interval-selectivity, timing accuracy, and precision of the network showed complex changes as a function of the decay time constants of the modeled synaptic properties and the level of background activity of the cells. These findings suggest that physiological values of the time constants for paired-pulse facilitation and GABAb, as well as the internal state of the network, determine the bias and scalar properties of interval timing. Significance Statement Timing is a fundamental element of complex behavior, including music and language. Temporal processing in a wide variety of contexts shows two primary features: time estimates exhibit a shift towards the mean (the bias property) and are more variable for longer intervals (the scalar property). We implemented a recurrent neural network that includes long-lasting synaptic currents, which can not only produce interval selective responses but also follow the bias and scalar properties. Interestingly, only physiological values of the time constants for paired-pulse facilitation and GABAb, as well as intermediate background activity within the network can reproduce the two key features of interval timing. Copyright © 2018 the authors.

  11. Tracking Control Based on Recurrent Neural Networks for Nonlinear Systems with Multiple Inputs and Unknown Deadzone

    Directory of Open Access Journals (Sweden)

    J. Humberto Pérez-Cruz

    2012-01-01

    Full Text Available This paper deals with the problem of trajectory tracking for a broad class of uncertain nonlinear systems with multiple inputs each one subject to an unknown symmetric deadzone. On the basis of a model of the deadzone as a combination of a linear term and a disturbance-like term, a continuous-time recurrent neural network is directly employed in order to identify the uncertain dynamics. By using a Lyapunov analysis, the exponential convergence of the identification error to a bounded zone is demonstrated. Subsequently, by a proper control law, the state of the neural network is compelled to follow a bounded reference trajectory. This control law is designed in such a way that the singularity problem is conveniently avoided and the exponential convergence to a bounded zone of the difference between the state of the neural identifier and the reference trajectory can be proven. Thus, the exponential convergence of the tracking error to a bounded zone and the boundedness of all closed-loop signals can be guaranteed. One of the main advantages of the proposed strategy is that the controller can work satisfactorily without any specific knowledge of an upper bound for the unmodeled dynamics and/or the disturbance term.

  12. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks.

    Science.gov (United States)

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2017-10-25

    A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by all kinds of factors. In order to overcome the negative effects of variability in signals, the proposed model employs the deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed to time-frequency frames as the input to the model. The limb movement is estimated by the model that is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The model of deep architecture is promising in EMG decoding and optimization of network structures can increase the accuracy and robustness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  13. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    Science.gov (United States)

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
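
    The record's network is designed for pseudoconvex objectives with equality and bound constraints; as a much simpler stand-in that still shows how a recurrent network's state dynamics can converge to a constrained optimizer, the sketch below integrates a standard one-layer projection network for a bound-constrained problem. The dynamics, step size, and example are assumptions and do not reproduce the model of the paper.

        import numpy as np

        def project_box(x, lo, hi):
            return np.minimum(np.maximum(x, lo), hi)

        def projection_network(grad_f, x0, lo, hi, step=1e-2, n_steps=20000):
            """One-layer projection network for  min f(x)  s.t.  lo <= x <= hi:
            continuous-time dynamics  dx/dt = -x + P_box(x - grad_f(x)),
            integrated here with forward Euler."""
            x = x0.astype(float)
            for _ in range(n_steps):
                x += step * (-x + project_box(x - grad_f(x), lo, hi))
            return x

        # example: minimize (x - 3)^2 over [0, 2]; the state settles at x = 2
        sol = projection_network(lambda x: 2 * (x - 3), np.array([0.5]),
                                 lo=0.0, hi=2.0)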

  14. A novel recurrent neural network with one neuron and finite-time convergence for k-winners-take-all operation.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Cao, Jinde

    2010-07-01

    In this paper, based on a one-neuron recurrent neural network, a novel k-winners-take-all (k-WTA) network is proposed. Finite-time convergence of the proposed neural network is proved using the Lyapunov method. The k-WTA operation is first converted equivalently into a linear programming problem. Then, a one-neuron recurrent neural network is proposed to get the kth or (k+1)th largest inputs of the k-WTA problem. Furthermore, a k-WTA network is designed based on the proposed neural network to perform the k-WTA operation. Compared with the existing k-WTA networks, the proposed network has simple structure and finite time convergence. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed k-WTA network.
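
    A rough numerical sketch of the single-state-variable idea: a scalar state x is driven until the number of inputs exceeding it equals k, so that thresholding the inputs against x selects the k winners. The steep sigmoid, step size, and iteration count are assumptions made for a simple Euler integration; the paper's model uses a different, finite-time convergent formulation derived from the linear programming reformulation.

        import numpy as np

        def kwta(u, k, steps=50000, dt=1e-3, beta=200.0):
            """One-neuron k-WTA sketch: integrate dx/dt = sum_i g(u_i - x) - k
            with a steep sigmoid g; at equilibrium x lies between the k-th and
            (k+1)-th largest inputs."""
            g = lambda z: 1.0 / (1.0 + np.exp(-beta * z))
            x = float(np.mean(u))
            for _ in range(steps):
                x += dt * (g(u - x).sum() - k)
            return (u > x).astype(int)

        print(kwta(np.array([0.3, 0.9, 0.1, 0.7, 0.5]), k=2))  # -> [0 1 0 1 0]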

  15. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network.

    Science.gov (United States)

    Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang

    2017-01-01

    By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP and the corresponding discrete-time recurrent neural network.

  16. Amplification of asynchronous inhibition-mediated synchronization by feedback in recurrent networks.

    Directory of Open Access Journals (Sweden)

    Sashi Marella

    2010-02-01

    Full Text Available Synchronization of 30-80 Hz oscillatory activity of the principal neurons in the olfactory bulb (mitral cells) is believed to be important for odor discrimination. Previous theoretical studies of these fast rhythms in other brain areas have proposed that principal neuron synchrony can be mediated by short-latency, rapidly decaying inhibition. This phasic inhibition provides a narrow time window for the principal neurons to fire, thus promoting synchrony. However, in the olfactory bulb, the inhibitory granule cells produce long lasting, small amplitude, asynchronous and aperiodic inhibitory input and thus the narrow time window that is required to synchronize spiking does not exist. Instead, it has been suggested that correlated output of the granule cells could serve to synchronize uncoupled mitral cells through a mechanism called "stochastic synchronization", wherein the synchronization arises through correlation of inputs to two neural oscillators. Almost all work on synchrony due to correlations presumes that the correlation is imposed and fixed. Building on theory and experiments that we and others have developed, we show that increased synchrony in the mitral cells could produce an increase in granule cell activity for those granule cells that share a synchronous group of mitral cells. Common granule cell input increases the input correlation to the mitral cells and hence their synchrony by providing a positive feedback loop in correlation. Thus we demonstrate the emergence and temporal evolution of input correlation in recurrent networks with feedback. We explore several theoretical models of this idea, ranging from spiking models to an analytically tractable model.

  17. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    OpenAIRE

    Naikwad, S. N.; Dudul, S. V.

    2009-01-01

    A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear operation in which the reaction is exothermic. It is noticed from the literature that process control of CSTRs using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes tempora...

  18. Neural networks mediating sentence reading in the deaf

    Directory of Open Access Journals (Sweden)

    Elizabeth Ann Hirshorn

    2014-06-01

    Full Text Available The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative role of deafness and English knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included – deaf signers, oral deaf and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing greatest bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex re-organization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing. In contrast, connectivity from left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape its functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups are discussed.

  19. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.

    Directory of Open Access Journals (Sweden)

    H Francis Song

    2016-02-01

    Full Text Available The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, "trained" networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale's principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural
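
    One common way to impose Dale's principle in a trained rate RNN, broadly in the spirit of the framework described here (though not the authors' Theano implementation), is to keep an unconstrained parameter matrix and map it to non-negative weights whose column signs are fixed by each unit's excitatory or inhibitory identity. The E/I ratio, rectification, and dynamics below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        N, n_exc = 100, 80                       # 80% excitatory, 20% inhibitory
        signs = np.ones(N)
        signs[n_exc:] = -1.0                     # fixed E/I identity of each unit
        D = np.diag(signs)

        W_free = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # trainable

        def effective_weights(W_free):
            """Dale's principle: each column's sign is set by the presynaptic
            unit's identity, so a unit is purely excitatory or inhibitory."""
            return np.abs(W_free) @ D

        def step(r, u, W_free, dt=0.1, tau=1.0):
            """One Euler step of a rate RNN  tau dr/dt = -r + relu(W r + u)."""
            W = effective_weights(W_free)
            return r + (dt / tau) * (-r + np.maximum(W @ r + u, 0.0))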

  20. SHORT-TERM ELECTRICITY CONSUMPTION FORECASTING USING DOUBLE SEASONAL ARIMA AND ELMAN-RECURRENT NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Suhartono Suhartono

    2009-07-01

    Full Text Available Neural networks (NN) are among the many methods used to predict hourly electricity consumption in many countries. The NN methods used in most previous studies are the Feed-Forward Neural Network (FFNN) or the Autoregressive Neural Network (AR-NN). An AR-NN model is not able to capture and explain the effect of a moving average (MA) order on a time series. This research was conducted to review the application of another type of NN, the Elman Recurrent Neural Network (Elman-RNN), which can account for the MA order effect, and to compare its prediction accuracy with double seasonal ARIMA (Autoregressive Integrated Moving Average) models. As a case study, we used hourly electricity consumption data from Mengare, Gresik. The analysis showed that the best double seasonal ARIMA model for short-term forecasting of the case-study data is ARIMA([1,2,3,4,6,7,9,10,14,21,33],1,8)(0,1,1)^24(1,1,0)^168. This model produces white noise residuals, but they are not normally distributed owing to suspected outliers; iterative outlier detection identified 14 innovation outliers. Four Elman-RNN input configurations were examined and tested for forecasting the data: inputs according to the ARIMA lags, the ARIMA lags plus 14 outlier dummies, lag multiples of 24 up to lag 480, and lag 1 together with lag multiples of 24 plus 1. All four networks use one hidden layer with a tangent sigmoid activation function and one output with a linear function. Comparison of forecast accuracy by out-of-sample MAPE showed that the fourth network, namely Elman-RNN(22,3,1), is the best model for short-term forecasting of hourly electricity consumption in Mengare, Gresik.
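
    For reference, an Elman network of the kind described (inputs feeding a small hidden layer whose previous state is fed back through context units, a tangent sigmoid hidden activation, and a linear output) can be sketched as follows. The sizes follow the Elman-RNN(22,3,1) configuration mentioned above, but the weight initialisation, training, and lag construction are purely illustrative.

        import numpy as np

        class ElmanRNN:
            """Elman network: hidden state from current inputs plus context
            (previous hidden) units, tanh hidden activation, linear output."""
            def __init__(self, n_in=22, n_hidden=3, seed=0):
                rng = np.random.default_rng(seed)
                self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
                self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
                self.b_h = np.zeros(n_hidden)
                self.w_out = rng.normal(scale=0.1, size=n_hidden)
                self.b_out = 0.0
                self.context = np.zeros(n_hidden)

            def step(self, x):
                # x: vector of the 22 selected lagged load values
                self.context = np.tanh(self.W_in @ x + self.W_ctx @ self.context
                                       + self.b_h)
                return self.w_out @ self.context + self.b_out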

  1. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator.

    Science.gov (United States)

    Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy

    2013-01-01

    The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.

  2. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network.

    Science.gov (United States)

    Del Papa, Bruno; Priesemann, Viola; Triesch, Jochen

    2017-01-01

    Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions - matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model's performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN's spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.

  3. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework

    Science.gov (United States)

    Wang, Xiao-Jing

    2016-01-01

    The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, “trained” networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale’s principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity

  4. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Directory of Open Access Journals (Sweden)

    Daniel Durstewitz

    2017-06-01

    Full Text Available The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover
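
    A minimal generative sketch of one common piecewise-linear RNN state space form, with a diagonal linear part, ReLU coupling, and additive Gaussian noise in both the latent dynamics and the observations; the exact parameterization and the EM-based estimation scheme of the paper are not reproduced here.

        import numpy as np

        def simulate_plrnn(A, W, h, B, T, noise_z=0.01, noise_x=0.05, seed=0):
            """Simulate a piecewise-linear RNN state space model:
            latent:       z_t = A z_{t-1} + W relu(z_{t-1}) + h + eps_t
            observation:  x_t = B relu(z_t) + eta_t
            A is diagonal (per-unit linear memory); W couples units through
            the ReLU nonlinearity."""
            rng = np.random.default_rng(seed)
            M, N = A.shape[0], B.shape[0]
            z = np.zeros(M)
            X = np.empty((T, N))
            for t in range(T):
                z = A @ z + W @ np.maximum(z, 0.0) + h + noise_z * rng.normal(size=M)
                X[t] = B @ np.maximum(z, 0.0) + noise_x * rng.normal(size=N)
            return X

        # usage sketch with a two-unit latent state and identity observation map
        X = simulate_plrnn(np.diag([0.9, 0.8]),
                           np.array([[0.0, 0.5], [-0.5, 0.0]]),
                           np.zeros(2), np.eye(2), T=500)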

  5. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by the physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although they may benefit related recognition tasks, it is not easy to accurately detect such temporal segments. An automatic temporal segment detection framework using bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, which synthesizes the local and global temporal-spatial information more efficiently, is presented. The framework is evaluated in detail over the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for solving the problem of temporal segment detection.

  6. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    Science.gov (United States)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

    The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such phenomenon currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that large learning periods are important in order to improve the network learning capacity and discuss this ability in the presence of distinct inhibitory currents.

  7. Distributed representations of action sequences in anterior cingulate cortex: A recurrent neural network approach.

    Science.gov (United States)

    Shahnazian, Danesh; Holroyd, Clay B

    2018-02-01

    Anterior cingulate cortex (ACC) has been the subject of intense debate over the past 2 decades, but its specific computational function remains controversial. Here we present a simple computational model of ACC that incorporates distributed representations across a network of interconnected processing units. Based on the proposal that ACC is concerned with the execution of extended, goal-directed action sequences, we trained a recurrent neural network to predict each successive step of several sequences associated with multiple tasks. In keeping with neurophysiological observations from nonhuman animals, the network yields distributed patterns of activity across ACC neurons that track the progression of each sequence, and in keeping with human neuroimaging data, the network produces discrepancy signals when any step of the sequence deviates from the predicted step. These simulations illustrate a novel approach for investigating ACC function.

  8. Identification of Jets Containing $b$-Hadrons with Recurrent Neural Networks at the ATLAS Experiment

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    A novel $b$-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS experiment at the CERN Large Hadron Collider. The RNN based $b$-tagging algorithm processes charged particle tracks associated to jets without reliance on secondary vertex finding, and can augment existing secondary-vertex based taggers. In contrast to traditional impact-parameter-based $b$-tagging algorithms which assume that tracks associated to jets are independent from each other, the RNN based $b$-tagging algorithm can exploit the spatial and kinematic correlations between tracks which are initiated from the same $b$-hadrons. This new approach also accommodates an extended set of input variables. This note presents the expected performance of the RNN based $b$-tagging algorithm in simulated $t \\bar t$ events at $\\sqrt{s}=13$ TeV.

  9. Recurrent fuzzy neural network backstepping control for the prescribed output tracking performance of nonlinear dynamic systems.

    Science.gov (United States)

    Han, Seong-Ik; Lee, Jang-Myung

    2014-01-01

    This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement on the approximation performances in order to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system was confirmed based on the Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  10. An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An incremental time-delay neural network based on synapse growth, which is suitable for dynamic control and learning of autonomous robots, is proposed to improve the learning and retrieval performance of a dynamical recurrent associative memory architecture. The model allows steady and continuous establishment of associative memory for spatio-temporal regularities and time series in a discrete sequence of inputs. The inserted hidden units can be taken as long-term memories that expand the capacity of the network and may sometimes fade away under certain conditions. Preliminary experiments have shown that this incremental network may be a promising approach to endowing autonomous robots with the ability to adapt to new data without destroying the learned patterns. The system also benefits from its potentially chaotic character for emergence.

  11. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays

    Science.gov (United States)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2016-07-01

    This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.

  12. A statistical framework for evaluating neural networks to predict recurrent events in breast cancer

    Science.gov (United States)

    Gorunescu, Florin; Gorunescu, Marina; El-Darzi, Elia; Gorunescu, Smaranda

    2010-07-01

    Breast cancer is the second leading cause of cancer deaths in women today. Sometimes, breast cancer can return after primary treatment. A medical diagnosis of recurrent cancer is often a more challenging task than the initial one. In this paper, we investigate the potential contribution of neural networks (NNs) to support health professionals in diagnosing such events. The NN algorithms are tested and applied to two different datasets. An extensive statistical analysis has been performed to verify our experiments. The results show that a simple network structure for both the multi-layer perceptron and radial basis function can produce equally good results, not all attributes are needed to train these algorithms and, finally, the classification performances of all algorithms are statistically robust. Moreover, we have shown that the best performing algorithm will strongly depend on the features of the datasets, and hence, there is not necessarily a single best classifier.

  13. Precision position control of servo systems using adaptive back-stepping and recurrent fuzzy neural networks

    International Nuclear Information System (INIS)

    Kim, Han Me; Kim, Jong Shik; Han, Seong Ik

    2009-01-01

    To improve the position tracking performance of servo systems, a position tracking control using an adaptive back-stepping control (ABSC) scheme and recurrent fuzzy neural networks (RFNN) is proposed. An adaptive rule of the ABSC based on system dynamics and a dynamic friction model is also suggested to compensate for nonlinear dynamic friction characteristics. However, it is difficult to reduce the position tracking error of servo systems by using only the ABSC scheme because of the system uncertainties which cannot be exactly identified during the modeling of servo systems. Therefore, in order to overcome system uncertainties and then to improve position tracking performance of servo systems, the RFNN technique is additionally applied to the servo system. The feasibility of the proposed control scheme for a servo system is validated through experiments. Experimental results show that the servo system with the ABSC based on the dual friction observer and the RFNN, including the reconstruction error estimator, can achieve the desired tracking performance and robustness.

  14. Discrete-time recurrent neural networks with time-varying delays: Exponential stability analysis

    International Nuclear Information System (INIS)

    Liu, Yurong; Wang, Zidong; Serrano, Alan; Liu, Xiaohui

    2007-01-01

    This Letter is concerned with the analysis problem of exponential stability for a class of discrete-time recurrent neural networks (DRNNs) with time delays. The delay is of the time-varying nature, and the activation functions are assumed to be neither differentiable nor strict monotonic. Furthermore, the description of the activation functions is more general than the recently commonly used Lipschitz conditions. Under such mild conditions, we first prove the existence of the equilibrium point. Then, by employing a Lyapunov-Krasovskii functional, a unified linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the DRNNs to be globally exponentially stable. It is shown that the delayed DRNNs are globally exponentially stable if a certain LMI is solvable, where the feasibility of such an LMI can be easily checked by using the numerically efficient Matlab LMI Toolbox. A simulation example is presented to show the usefulness of the derived LMI-based stability condition

  15. Evaluation of the cranial base in amnion rupture sequence involving the anterior neural tube: implications regarding recurrence risk.

    Science.gov (United States)

    Jones, Kenneth Lyons; Robinson, Luther K; Benirschke, Kurt

    2006-09-01

    Amniotic bands can cause disruption of the cranial end of the developing fetus, leading in some cases to a neural tube closure defect. Although the recurrence risk for unaffected parents of an affected child with a defect in which the neural tube closed normally but was subsequently disrupted by amniotic bands is negligible, for a primary defect in closure of the neural tube to which amnion has subsequently adhered, the recurrence risk is 1.7%. Because primary defects of neural tube closure are characterized by typical abnormalities of the base of the skull, evaluation of the cranial base in such fetuses provides an approach for making a distinction between these 2 mechanisms. This distinction has implications regarding recurrence risk. The skull bases of 2 fetuses with amnion rupture sequence involving the cranial end of the neural tube were compared to those of 1 fetus with anencephaly as well as a structurally normal fetus. The skulls were cleaned, fixed in 10% formalin, recleaned, and then exposed to 10% KOH solution. After washing and recleaning, the skulls were exposed to hydrogen peroxide for bleaching and photography. Despite involvement of the anterior neural tube in both fetuses with amnion rupture sequence, in Case 3 the cranial base was normal while in Case 4 the cranial base was similar to that seen in anencephaly. This technique provides a method for determining the developmental pathogenesis of anterior neural tube defects in cases of amnion rupture sequence. As such, it provides information that can be used to counsel parents of affected children with respect to recurrence risk.

  16. Using recurrent neural network models for early detection of heart failure onset.

    Science.gov (United States)

    Choi, Edward; Schuetz, Andy; Stewart, Walter F; Sun, Jimeng

    2017-03-01

    We explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality. Data were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches. Using a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP). Deep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
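
    A simplified sketch of a GRU-based risk model over visit sequences, in the spirit of the record: each visit's medical codes are embedded and summed, a GRU runs over the visit sequence, and the final hidden state is mapped to a heart-failure probability. The vocabulary size, embedding scheme, and single-patient batching are assumptions and are not claimed to match the authors' architecture.

        import torch
        import torch.nn as nn

        class EHRGRUClassifier(nn.Module):
            def __init__(self, n_codes=20000, emb=128, hidden=128):
                super().__init__()
                self.embed = nn.EmbeddingBag(n_codes, emb, mode="sum")  # codes per visit
                self.gru = nn.GRU(emb, hidden, batch_first=True)
                self.out = nn.Linear(hidden, 1)

            def forward(self, visit_codes):
                # visit_codes: list of T LongTensors, one per visit, holding code ids
                visits = torch.stack([self.embed(c.unsqueeze(0)).squeeze(0)
                                      for c in visit_codes])   # (T, emb)
                _, h_n = self.gru(visits.unsqueeze(0))          # batch of one patient
                return torch.sigmoid(self.out(h_n[-1]))         # HF probability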

  17. Design of a heart rate controller for treadmill exercise using a recurrent fuzzy neural network.

    Science.gov (United States)

    Lu, Chun-Hao; Wang, Wei-Cheng; Tai, Cheng-Chi; Chen, Tien-Chi

    2016-05-01

    In this study, we developed a computer controlled treadmill system using a recurrent fuzzy neural network heart rate controller (RFNNHRC). Treadmill speeds and inclines were controlled by corresponding control servo motors. The RFNNHRC was used to generate the control signals to automatically control treadmill speed and incline to minimize the user heart rate deviations from a preset profile. The RFNNHRC combines a fuzzy reasoning capability to accommodate uncertain information and an artificial recurrent neural network learning process that corrects for treadmill system nonlinearities and uncertainties. Treadmill speeds and inclines are controlled by the RFNNHRC to achieve minimal heart rate deviation from a pre-set profile using adjustable parameters and an on-line learning algorithm that provides robust performance against parameter variations. The on-line learning algorithm of RFNNHRC was developed and implemented using a dsPIC 30F4011 DSP. Application of the proposed control scheme to heart rate responses of runners resulted in smaller fluctuations than those produced by using proportional-integral control, and treadmill speeds and inclines were smoother. The present experiments demonstrate improved heart rate tracking performance with the proposed control scheme. The RFNNHRC scheme with adjustable parameters and an on-line learning algorithm was applied to a computer controlled treadmill system with heart rate control during treadmill exercise. Novel RFNNHRC structure and controller stability analyses were introduced. The RFNNHRC was tuned using a Lyapunov function to ensure system stability. The superior heart rate control with the proposed RFNNHRC scheme was demonstrated with various pre-set heart rates. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Neural processing of short-term recurrence in songbird vocal communication.

    Directory of Open Access Journals (Sweden)

    Gabriël J L Beckers

    Full Text Available BACKGROUND: Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication. METHODOLOGY/PRINCIPAL FINDINGS: We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented to anesthetized birds sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges leads to widespread and significant modulation in strength of neural responses. Such modulation is highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area. CONCLUSIONS/SIGNIFICANCE: Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.

  19. Fault diagnosis of rolling bearings with recurrent neural network-based autoencoders.

    Science.gov (United States)

    Liu, Han; Zhou, Jianzhong; Zheng, Yang; Jiang, Wei; Zhang, Yuncheng

    2018-04-19

    Rolling bearings are a key component of rotary machines, so their health condition is critical for safe operation. Fault diagnosis of rolling bearings has been a research focus for improving economic efficiency and guaranteeing operational security. However, the signals collected during operation of a rotary machine are mixed with ambient noise, which poses a great challenge to accurate diagnosis. Using signals collected from multiple sensors can avoid the loss of local information and extract more helpful characteristics. Recurrent Neural Networks (RNNs) are a type of artificial neural network that can handle multiple time-sequence inputs and have proven effective at capturing temporal dependencies in sequential data. This paper proposes a novel method for bearing fault diagnosis with an RNN in the form of an autoencoder. In this approach, multiple vibration values of the rolling bearings in the next period are predicted from the previous period by means of a Gated Recurrent Unit (GRU)-based denoising autoencoder. These GRU-based non-linear predictive denoising autoencoders (GRU-NP-DAEs) are trained with strong generalization ability, one for each fault pattern. Then, for given input data, the reconstruction errors between the next-period data and the outputs generated by the different GRU-NP-DAEs are used to detect anomalous conditions and classify the fault type. Classic rotating machinery datasets are employed to verify the effectiveness of the proposed diagnosis method and its advantage over several state-of-the-art methods. The experimental results indicate that the proposed method achieves satisfactory performance with strong robustness and high classification accuracy. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
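
    A minimal sketch of this scheme, assuming PyTorch and synthetic multi-sensor vibration windows; the class GRUPredictor, the helpers train_per_fault and classify, and all hyper-parameters are illustrative names and values, not the authors' implementation.

        import torch
        import torch.nn as nn

        class GRUPredictor(nn.Module):
            """Predict the next time step of a vibration window from noisy previous steps."""
            def __init__(self, n_sensors=2, hidden=64):
                super().__init__()
                self.gru = nn.GRU(n_sensors, hidden, batch_first=True)
                self.out = nn.Linear(hidden, n_sensors)

            def forward(self, x):                      # x: (batch, time, n_sensors)
                h, _ = self.gru(x)
                return self.out(h)                     # one-step-ahead prediction

        def train_per_fault(windows_by_fault, epochs=50, noise_std=0.1):
            """Train one denoising predictor (GRU-NP-DAE) per fault pattern."""
            models = {}
            for label, w in windows_by_fault.items():  # w: (N, time, n_sensors) tensor
                model = GRUPredictor(n_sensors=w.shape[-1])
                opt = torch.optim.Adam(model.parameters(), lr=1e-3)
                for _ in range(epochs):
                    noisy = w[:, :-1] + noise_std * torch.randn_like(w[:, :-1])
                    loss = nn.functional.mse_loss(model(noisy), w[:, 1:])
                    opt.zero_grad(); loss.backward(); opt.step()
                models[label] = model
            return models

        def classify(models, window):                  # window: (1, time, n_sensors)
            """Assign the fault label whose predictor reconstructs the window best."""
            with torch.no_grad():
                errs = {k: nn.functional.mse_loss(m(window[:, :-1]), window[:, 1:]).item()
                        for k, m in models.items()}
            return min(errs, key=errs.get)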

  20. From phonemes to images : levels of representation in a recurrent neural model of visually-grounded language learning

    NARCIS (Netherlands)

    Gelderloos, L.J.; Chrupala, Grzegorz

    2016-01-01

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover

  1. Sequence-specific bias correction for RNA-seq data using recurrent neural networks.

    Science.gov (United States)

    Zhang, Yao-Zhong; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2017-01-25

    The recent success of deep learning techniques in machine learning and artificial intelligence has stimulated a great deal of interest among bioinformaticians, who now wish to bring the power of deep learning to bear on a host of bioinformatics problems. Deep learning is ideally suited for biological problems that require automatic or hierarchical feature representation when prior knowledge is limited. In this work, we address the sequence-specific bias correction problem for RNA-seq data using Recurrent Neural Networks (RNNs) to model nucleotide sequences without pre-determining sequence structures. The sequence-specific bias of a read is then calculated based on the sequence probabilities estimated by the RNNs, and used in the estimation of gene abundance. We explore the application of two popular RNN recurrent units for this task and demonstrate that RNN-based approaches provide a flexible way to model nucleotide sequences without knowledge of predetermined sequence structures. Our experiments show that training an RNN-based nucleotide sequence model is efficient and that RNN-based bias correction methods compare well with the state-of-the-art sequence-specific bias correction method on the commonly used MAQC-III data set. RNNs provide an alternative and flexible way to calculate sequence-specific bias without explicitly pre-determining sequence structures.
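
    As a rough illustration of how an RNN can assign probabilities to nucleotide sequences, the sketch below (assuming PyTorch; NucleotideRNN and the foreground/background weighting scheme are illustrative, not the paper's exact formulation) shows how a GRU language model over A/C/G/T would score a read:

        import torch
        import torch.nn as nn

        BASES = "ACGT"

        def one_hot(seq):
            idx = torch.tensor([BASES.index(b) for b in seq])
            return nn.functional.one_hot(idx, num_classes=4).float(), idx

        class NucleotideRNN(nn.Module):
            """GRU language model over nucleotides: P(s_t | s_1 .. s_{t-1})."""
            def __init__(self, hidden=32):
                super().__init__()
                self.gru = nn.GRU(4, hidden, batch_first=True)
                self.out = nn.Linear(hidden, 4)

            def log_prob(self, seq):
                x, idx = one_hot(seq)
                h, _ = self.gru(x[None, :-1])          # condition on the prefix
                logp = nn.functional.log_softmax(self.out(h), dim=-1)
                return logp[0, torch.arange(len(seq) - 1), idx[1:]].sum()

        # One possible bias weight for a read: the ratio of its probability under a
        # model trained on observed read starts (foreground) versus one trained on the
        # background transcript sequence -- an assumption for illustration only.
        foreground, background = NucleotideRNN(), NucleotideRNN()
        # ... train each model on its respective set of sequences ...
        read = "ACGTACGTTG"
        bias_weight = torch.exp(foreground.log_prob(read) - background.log_prob(read))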

  2. Marginally Stable Triangular Recurrent Neural Network Architecture for Time Series Prediction.

    Science.gov (United States)

    Sivakumar, Seshadri; Sivakumar, Shyamala

    2017-09-25

    This paper introduces a discrete-time recurrent neural network architecture using triangular feedback weight matrices that allows a simplified approach to ensuring network and training stability. The triangular structure of the weight matrices is exploited to readily ensure that the eigenvalues of the feedback weight matrix, represented by the block diagonal elements, lie on the unit circle in the complex z-plane by updating these weights based on the differential of the angular error variable. Such placement of the eigenvalues, together with the extended close interaction between state variables facilitated by the nondiagonal triangular elements, enhances the learning ability of the proposed architecture. Simulation results show that the proposed architecture is highly effective in time-series prediction tasks associated with nonlinear and chaotic dynamic systems with underlying oscillatory modes. This modular architecture with dual upper and lower triangular feedback weight matrices mimics fully recurrent network architectures, while maintaining learning stability with a simplified training process. During training, the block-diagonal weights (and hence the eigenvalues) of the dual triangular matrices are constrained to the same values, which minimizes the possibility of overfitting during weight updates. The dual triangular architecture also exploits the benefit of parsing the input and selectively applying the parsed inputs to the two subnetworks to facilitate enhanced learning performance.
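
    To make the unit-circle constraint concrete, the sketch below (NumPy; the function name, block sizes, and coupling scale are illustrative assumptions, not the authors' construction) builds a block upper-triangular feedback matrix whose 2x2 diagonal blocks are plane rotations, so their eigenvalues exp(±i*theta) have unit magnitude by construction:

        import numpy as np

        def triangular_feedback(thetas, rng=np.random.default_rng(0)):
            """Upper-triangular feedback matrix whose 2x2 diagonal blocks are plane
            rotations; the block eigenvalues therefore lie on the unit circle."""
            n = 2 * len(thetas)
            W = np.triu(0.1 * rng.standard_normal((n, n)), k=2)   # off-block couplings
            for k, th in enumerate(thetas):
                c, s = np.cos(th), np.sin(th)
                W[2*k:2*k+2, 2*k:2*k+2] = [[c, -s], [s, c]]
            return W

        W = triangular_feedback(thetas=[0.3, 0.7, 1.1])
        # All eigenvalue magnitudes are 1 (marginal stability); training would update
        # the angles theta rather than the raw block entries.
        print(np.abs(np.linalg.eigvals(W)))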

  3. Nonlinear dynamic systems identification using recurrent interval type-2 TSK fuzzy neural network - A novel structure.

    Science.gov (United States)

    El-Nagar, Ahmad M

    2018-01-01

    In this study, a novel structure of a recurrent interval type-2 Takagi-Sugeno-Kang (TSK) fuzzy neural network (FNN) is introduced for the identification of nonlinear dynamic and time-varying systems. It combines type-2 fuzzy sets (T2FSs) with a recurrent FNN to handle data uncertainties. The fuzzy firing strengths in the proposed structure are fed back to the network input as internal variables. Interval type-2 fuzzy sets (IT2FSs) are used to describe the antecedent part of each rule, while the consequent part is TSK-type, a linear function of the internal variables and the external inputs with interval weights. All the type-2 fuzzy rules of the proposed RIT2TSKFNN are learned on-line through structure and parameter learning, which are performed using type-2 fuzzy clustering. The antecedent and consequent parameters of the proposed RIT2TSKFNN are updated based on a Lyapunov function to achieve network stability. The obtained results indicate that our proposed network achieves a small root mean square error (RMSE) and a small integral of square error (ISE) with a small number of rules and a small computation time compared with other type-2 FNNs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    Science.gov (United States)

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicit multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicit multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same structure are ensembled for the final output via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual perception. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.

  5. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems

    Energy Technology Data Exchange (ETDEWEB)

    Talathi, S. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-06-05

    Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in a broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework based on Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data to evaluate our method and demonstrate very promising results, with overall accuracy close to 100%. We also systematically investigate the application of our method to early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration.
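
    A minimal sketch of such a GRU-based detector, assuming PyTorch; the class SeizureGRU, the channel count, window length, and training snippet are illustrative assumptions rather than the configuration used in the article:

        import torch
        import torch.nn as nn

        class SeizureGRU(nn.Module):
            """Classify a window of multi-channel EEG as seizure / non-seizure."""
            def __init__(self, n_channels=23, hidden=64):
                super().__init__()
                self.gru = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
                self.head = nn.Linear(hidden, 2)

            def forward(self, x):              # x: (batch, time, channels)
                _, h = self.gru(x)
                return self.head(h[-1])        # logits from the last layer's final state

        model = SeizureGRU()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x = torch.randn(8, 256, 23)            # eight fake one-second EEG windows
        y = torch.randint(0, 2, (8,))          # fake labels
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward(); opt.step()
        # Early warning: slide the window forward in time and raise an alarm as soon
        # as the predicted seizure probability exceeds a threshold.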

  6. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

    Science.gov (United States)

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made those former results hardly reproducible. Further, we extend those previous experiments by modeling unseen languages (out-of-set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), showing that with as little as 0.5s an accuracy of over 50% can be achieved.

  7. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    Science.gov (United States)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods have been rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited to temporal image data. In this study, we develop a deep learning methodology that enables integration of the clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign from malignant lesions, using the area under the ROC curve (AUC) as the performance metric. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time-point (yielding an AUC = 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
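
    A condensed sketch of this pipeline, assuming PyTorch and torchvision are available; LesionLSTM, the global-average pooling of a single VGG layer, and the feature dimensions are simplifying assumptions (the study combines features from several network levels):

        import torch
        import torch.nn as nn
        from torchvision import models

        # Pre-trained VGG as a fixed feature extractor (assumes 3-channel 224x224 inputs;
        # single-channel MR slices would be replicated across channels in practice).
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

        class LesionLSTM(nn.Module):
            """LSTM over per-time-point CNN features of a DCE-MRI sequence."""
            def __init__(self, feat_dim=512, hidden=128):
                super().__init__()
                self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)       # benign vs. malignant logit

            def forward(self, feats):                  # feats: (batch, time_points, feat_dim)
                _, (h, _) = self.lstm(feats)
                return self.head(h[-1]).squeeze(-1)

        def extract_features(frames):                  # frames: (time, 3, 224, 224)
            with torch.no_grad():
                f = vgg(frames)                        # (time, 512, 7, 7)
                return f.mean(dim=(2, 3))              # global average pool -> (time, 512)

        clf = LesionLSTM()
        seq = torch.randn(5, 3, 224, 224)              # five post-contrast time points
        logit = clf(extract_features(seq)[None])       # add a batch dimension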

  8. Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations

    Science.gov (United States)

    ElSaid, AbdElRahman Ahmed

    This thesis examines building viable Recurrent Neural Networks (RNNs) using Long Short Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline, containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction than analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time series data, which can further improve predictions of future vibration values. LSTM RNNs were used instead of traditional RNNs, as the latter suffer from vanishing/exploding gradients when trained with back propagation. The study managed to predict vibration values for 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51% and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems so that suitable actions can be taken before the occurrence of excess vibration to avoid unfavorable situations during flight.

  9. Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning.

    Science.gov (United States)

    Eser, Jürgen; Zheng, Pengsheng; Triesch, Jochen

    2014-01-01

    Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics, which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks, and how they change as the network evolves, are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during self-organization, developing into a regime where only a few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
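
    For readers unfamiliar with the perturbation-growth estimate of the maximum Lyapunov exponent, the sketch below (NumPy; a plain tanh rate network rather than a SORN, with illustrative size and gain) shows the standard renormalized-perturbation procedure: a positive estimate indicates chaotic dynamics, a negative one indicates stability.

        import numpy as np

        def max_lyapunov(W, steps=2000, eps=1e-8, rng=np.random.default_rng(1)):
            """Estimate the largest Lyapunov exponent of x_{t+1} = tanh(W x_t)
            from the growth rate of a small perturbation, renormalized each step."""
            x = rng.standard_normal(W.shape[0])
            d = eps * rng.standard_normal(W.shape[0])
            total = 0.0
            for _ in range(steps):
                x_pert = np.tanh(W @ (x + d))
                x = np.tanh(W @ x)
                d = x_pert - x
                growth = np.linalg.norm(d)
                total += np.log(growth / eps)
                d *= eps / growth              # renormalize to keep the perturbation small
            return total / steps

        rng = np.random.default_rng(0)
        n, gain = 200, 1.5                      # gain > 1 typically yields chaos
        W = gain * rng.standard_normal((n, n)) / np.sqrt(n)
        print(max_lyapunov(W))                  # > 0: chaotic; < 0: stable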

  10. Recurrent fuzzy neural network by using feedback error learning approaches for LFC in interconnected power system

    International Nuclear Information System (INIS)

    Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari

    2009-01-01

    In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for the load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in the feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. a proportional, integral and derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system, and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, when the INFC learns the inverse of the controlled system, the reference signal is tracked properly. Generally, the CFC is designed at the nominal operating conditions of the system and therefore fails to provide the best control performance, as well as global stability, over a wide range of changes in the operating conditions. So, in this study a supervised controller (SC), a lookup-table-based controller, is introduced for tuning the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to the operating conditions. Moreover, to improve the performance of the overall system, a recurrent fuzzy neural network (RFNN) is adopted in the INFC instead of the conventional neural network used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through several performance indices.

  11. Exponential stability of delayed recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Wang Zidong; Liu Yurong; Yu Li; Liu Xiaohui

    2006-01-01

    In this Letter, the global exponential stability analysis problem is considered for a class of recurrent neural networks (RNNs) with time delays and Markovian jumping parameters. The jumping parameters considered here are generated by a continuous-time homogeneous Markov process with a discrete and finite state space. The purpose of the problem addressed is to derive some easy-to-test conditions such that the dynamics of the neural network is stochastically exponentially stable in the mean square, independent of the time delay. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish the desired sufficient conditions, and therefore the global exponential stability in the mean square for the delayed RNNs can be easily checked by utilizing the numerically efficient Matlab LMI toolbox, with no tuning of parameters required. A numerical example is exploited to show the usefulness of the derived LMI-based stability conditions.
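
    For concreteness, a commonly used form of this model class (a hedged sketch of the usual notation in this literature, not necessarily the exact system of the Letter) can be written in LaTeX as

        \dot{x}(t) = -A(r(t))\,x(t) + W_0(r(t))\,g(x(t)) + W_1(r(t))\,g(x(t-\tau)) + u,

    where r(t) is a continuous-time Markov chain on a finite state space selecting the mode-dependent matrices, g is the activation function and \tau is the delay; mean-square exponential stability then asks for constants \alpha, \beta > 0 such that E\|x(t)\|^2 \le \alpha\, e^{-\beta t} \sup_{-\tau \le s \le 0} E\|x(s)\|^2.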

  12. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    Science.gov (United States)

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al., Journal of Neuroscience, 20(17), 6594-6611, 2000; Qiu et al., Nature Neuroscience, 10(11), 1492-1499, 2007; Chen et al., Neuron, 82(3), 682-694, 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  13. Coding of level of ambiguity within neural systems mediating choice.

    Science.gov (United States)

    Lopez-Paniagua, Dan; Seger, Carol A

    2013-01-01

    Data from previous neuroimaging studies exploring neural activity associated with uncertainty suggest varying levels of activation associated with changing degrees of uncertainty in neural regions that mediate choice behavior. The present study used a novel task that parametrically controlled the amount of information hidden from the subject; levels of uncertainty ranged from full ambiguity (no information about the probability of winning), through multiple levels of partial ambiguity, to a condition of risk only (zero ambiguity, with full knowledge of the probability of winning). A parametric analysis compared a linear model in which weighting increased as a function of level of ambiguity with an inverted-U quadratic model in which partial ambiguity conditions were weighted most heavily. Overall we found that risk and all levels of ambiguity recruited a common "fronto-parietal-striatal" network including regions within the dorsolateral prefrontal cortex, intraparietal sulcus, and dorsal striatum. Activation was greatest across these regions, and in additional anterior and superior prefrontal regions, for the quadratic function, which most heavily weights trials with partial ambiguity. These results suggest that the neural regions involved in decision processes do not merely track the absolute degree of ambiguity or the type of uncertainty (risk vs. ambiguity). Instead, recruitment of prefrontal regions may result from the greater difficulty of conditions of partial ambiguity: when information regarding reward probabilities important for decision making is hidden or not easily obtained, the subject must engage in a search for tractable information. Additionally, this study identified regions of activity related to the valuation of potential gains associated with stimuli or options (including the orbitofrontal and medial prefrontal cortices and dorsal striatum) and related to winning (including the orbitofrontal cortex and ventral striatum).

  14. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.

    Science.gov (United States)

    Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang

    2011-05-01

    A recurrent neural network is proposed for solving non-smooth convex optimization problems with convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution set of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network converges to one of its equilibrium points. Compared with existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.
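
    As a toy illustration of subgradient-driven neural dynamics for a non-smooth problem (a sketch only: NumPy, a hand-picked L1 objective with one equality constraint, and a quadratic augmentation term added for numerical damping, none of which are claimed to match the paper's exact dynamics):

        import numpy as np

        # Toy non-smooth problem:  minimize |x1| + |x2|  subject to  x1 + x2 = 1.
        A, b, rho, dt = np.array([[1.0, 1.0]]), np.array([1.0]), 1.0, 1e-2

        def subgrad_l1(x):
            return np.sign(x)             # one Clarke subgradient of sum_i |x_i|

        x, lam = np.array([2.0, -1.0]), np.zeros(1)
        for _ in range(20000):
            r = A @ x - b                 # equality-constraint residual
            x = x - dt * (subgrad_l1(x) + A.T @ lam + rho * (A.T @ r))   # primal descent
            lam = lam + dt * r            # dual ascent on the multiplier
        print(x, A @ x - b)               # x approaches the feasible optimal set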

  15. Global exponential stability and periodicity of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions

    International Nuclear Information System (INIS)

    Lu Junguo; Lu Linji

    2009-01-01

    In this paper, the global exponential stability and periodicity of a class of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions are studied by constructing suitable Lyapunov functionals and utilizing some inequality techniques. We first prove global exponential convergence to 0 of the difference between any two solutions of the original neural networks; the existence and uniqueness of the equilibrium are direct results of this procedure. This approach differs from the usual one, in which the existence and uniqueness of the equilibrium and its stability are proved in two separate steps. Secondly, we prove periodicity. Sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium and the periodic solution are given. These conditions are easy to verify, and our results play an important role in the design and application of globally exponentially stable neural circuits and periodic oscillatory neural circuits.

  16. Stress and Quality of Life in Breast Cancer Recurrence: Moderation or Mediation of Coping?

    Science.gov (United States)

    Yang, Hae-Chung; Brothers, Brittany M.; Andersen, Barbara L.

    2008-01-01

    Background/Purpose Diagnosis with breast cancer recurrence often brings high levels of stress. Successful coping to alleviate stress could improve patients' quality of life (QoL). The intervening role coping plays between stress and QoL may depend on the types of stress encountered and the types of coping strategies used. The present study investigates the longitudinal relationships between stress, coping, and mental health QoL. Methods Breast cancer patients recently diagnosed with recurrence (N=65) were assessed shortly after the diagnosis and 4 months later. Four moderation and four mediation models were tested using hierarchical multiple regressions and path analyses. In the models, either traumatic stress or symptom-related stress at recurrence diagnosis was a predictor of mental health QoL at follow-up. Both engagement and disengagement coping strategies were tested as moderators or mediators between stress and QoL. Results Engagement coping moderated the effect of symptom stress on mental health QoL, whereas disengagement coping mediated the effects of both traumatic stress and symptom stress on mental health QoL. Conclusion The findings imply that interventions teaching engagement coping strategies would be important for patients experiencing high symptom stress, while discouraging the use of disengagement coping strategies would be important for all patients. PMID:18347897

  17. Recurrent neural network based hybrid model for reconstructing gene regulatory network.

    Science.gov (United States)

    Raza, Khalid; Alam, Mansaf

    2016-10-01

    One of the exciting problems in systems biology research is to decipher how the genome controls the development of complex biological systems. Gene regulatory networks (GRNs) help in the identification of regulatory interactions between genes and offer fruitful information related to the functional role of individual genes in a cellular system. Discovering GRNs leads to a wide range of applications, including the identification of disease-related pathways providing novel tentative drug targets, prediction of disease response, and assistance in diagnosing various diseases including cancer. Reconstruction of GRNs from available biological data is still an open problem. This paper proposes a recurrent neural network (RNN) based model of GRNs, hybridized with a generalized extended Kalman filter for weight update in the backpropagation-through-time training algorithm. The RNN is a complex neural network that offers a good compromise between biological closeness and mathematical flexibility for modeling GRNs, and is able to capture complex, non-linear and dynamic relationships among variables. Gene expression data are inherently noisy, and the Kalman filter performs well in estimation problems even with noisy data. Hence, we applied a non-linear version of the Kalman filter, known as the generalized extended Kalman filter, for weight updates during RNN training. The developed model has been tested on four benchmark networks: the DNA SOS repair network, the IRMA network, and two synthetic networks from the DREAM Challenge. We performed a comparison of our results with other state-of-the-art techniques, which shows the superiority of our proposed model. Further, 5% Gaussian noise was induced in the dataset, and the results of the proposed model show a negligible effect of noise, demonstrating the noise tolerance capability of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
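
    The RNN formulation of a GRN referred to here is commonly written as dx_i/dt = sigma(sum_j W_ij x_j + beta_i) - lambda_i x_i; the sketch below (NumPy, with an invented three-gene toy network and forward-Euler integration) shows how such a model generates expression time series, which is the forward step that any fitting procedure (Kalman filtering or otherwise) must invert:

        import numpy as np

        def simulate_grn(W, beta, lam, x0, dt=0.1, steps=100):
            """RNN model of a gene regulatory network:
               dx_i/dt = sigmoid(sum_j W_ij x_j + beta_i) - lam_i * x_i
            simulated with forward Euler; W_ij encodes regulation of gene i by gene j."""
            x, traj = np.array(x0, dtype=float), []
            for _ in range(steps):
                x = x + dt * (1.0 / (1.0 + np.exp(-(W @ x + beta))) - lam * x)
                traj.append(x.copy())
            return np.array(traj)

        # Toy 3-gene network: gene 0 activates gene 1, gene 1 represses gene 2.
        W = np.array([[0.0,  0.0, 0.0],
                      [2.0,  0.0, 0.0],
                      [0.0, -2.0, 0.0]])
        traj = simulate_grn(W, beta=np.zeros(3), lam=np.ones(3), x0=[0.5, 0.1, 0.8])
        # Fitting W, beta and lam to observed expression time series (e.g. with an
        # extended Kalman filter or backpropagation through time) recovers the network.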

  18. Monitoring tool usage in surgery videos using boosted convolutional and recurrent neural networks.

    Science.gov (United States)

    Al Hajj, Hassan; Lamard, Mathieu; Conze, Pierre-Henri; Cochener, Béatrice; Quellec, Gwenolé

    2018-05-09

    This paper investigates the automatic monitoring of tool usage during a surgery, with potential applications in report generation, surgical training and real-time decision support. Two surgeries are considered: cataract surgery, the most common surgical procedure, and cholecystectomy, one of the most common digestive surgeries. Tool usage is monitored in videos recorded either through a microscope (cataract surgery) or an endoscope (cholecystectomy). Following state-of-the-art video analysis solutions, each frame of the video is analyzed by convolutional neural networks (CNNs) whose outputs are fed to recurrent neural networks (RNNs) in order to take temporal relationships between events into account. Novelty lies in the way those CNNs and RNNs are trained. Computational complexity prevents the end-to-end training of "CNN+RNN" systems. Therefore, CNNs are usually trained first, independently from the RNNs. This approach is clearly suboptimal for surgical tool analysis: many tools are very similar to one another, but they can generally be differentiated based on past events. CNNs should be trained to extract the most useful visual features in combination with the temporal context. A novel boosting strategy is proposed to achieve this goal: the CNN and RNN parts of the system are simultaneously enriched by progressively adding weak classifiers (either CNNs or RNNs) trained to improve the overall classification accuracy. Experiments were performed on a dataset of 50 cataract surgery videos, where the usage of 21 surgical tools was manually annotated, and a dataset of 80 cholecystectomy videos, where the usage of 7 tools was manually annotated. Very good classification performance is achieved in both datasets: tool usage could be labeled with an average area under the ROC curve of Az = 0.9961 and Az = 0.9939, respectively, in offline mode (using past, present and future information), and Az = 0.9957 and Az = 0.9936, respectively, in online mode (using past and present

  19. Modeling long-term human activeness using recurrent neural networks for biometric data.

    Science.gov (United States)

    Kim, Zae Myung; Oh, Hyungrai; Kim, Han-Gyu; Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin

    2017-05-18

    With the invention of fitness trackers, it has become possible to continuously monitor a user's biometric data such as heart rate, number of footsteps taken, and amount of calories burned. This paper names the time series of these three types of biometric data the user's "activeness", and investigates the feasibility of modeling and predicting the long-term activeness of the user. The dataset used in this study consisted of several months of biometric time-series data gathered independently by seven users. Four recurrent neural network (RNN) architectures, as well as a deep neural network and a simple regression model, were proposed to investigate the prediction performance under various length-related hyper-parameter settings. In addition, the learned model was tested on predicting the time period when the user's activeness falls below a certain threshold. A preliminary experimental result shows that each type of activeness data exhibited a short-term autocorrelation; and among the three types of data, the consumed calories and the number of footsteps were positively correlated, while the heart rate data showed almost no correlation with either of them. It is probably due to this characteristic of the dataset that, although the RNN models produced the best results on modeling the user's activeness, the difference was marginal, and other baseline models, especially the linear regression model, performed quite admirably as well. Further experimental results show that it is feasible to predict a user's future activeness with precision; for example, a trained RNN model could predict, with a precision of 84%, when the user would be less active within the next hour given the latest 15 min of activeness data. This paper defines and investigates the notion of a user's "activeness", and shows that forecasting the long-term activeness of the user is indeed possible. Such information can be utilized by a health-related application to proactively

  20. Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling.

    Science.gov (United States)

    Lu, Wenlian; Zheng, Ren; Chen, Tianping

    2016-03-01

    In this paper, we discuss outer-synchronization of the asymmetrically connected recurrent time-varying neural networks. By using both centralized and decentralized discretization data sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference of any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples in centralized and decentralized principles are proved to be positive, which guarantees exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Protein Solvent-Accessibility Prediction by a Stacked Deep Bidirectional Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Buzhong Zhang

    2018-05-01

    Full Text Available Residue solvent accessibility is closely related to the spatial arrangement and packing of residues. Predicting the solvent accessibility of a protein is an important step toward understanding its structure and function. In this work, we present a deep learning method to predict residue solvent accessibility, which is based on a stacked deep bidirectional recurrent neural network applied to sequence profiles. To capture more long-range sequence information, a merging operator was proposed for combining the bidirectional information from hidden nodes into outputs. Three types of merging operators were used in our improved model, with a long short-term memory network serving as the hidden computing node. The training database was constructed from 7361 proteins extracted from the PISCES server using a cut-off of 25% sequence identity. Sequence-derived features including the position-specific scoring matrix, physical properties, physicochemical characteristics, conservation score and protein coding were used to represent a residue. Using this method, predictive values of continuous relative solvent-accessible area were obtained, and these values were then transformed into binary states with predefined thresholds. Our experimental results showed that our deep learning method improved prediction quality relative to current methods, with mean absolute error and Pearson’s correlation coefficient values of 8.8% and 74.8%, respectively, on the CB502 dataset and 8.2% and 78%, respectively, on the Manesh215 dataset.

  2. Protein Solvent-Accessibility Prediction by a Stacked Deep Bidirectional Recurrent Neural Network.

    Science.gov (United States)

    Zhang, Buzhong; Li, Linqing; Lü, Qiang

    2018-05-25

    Residue solvent accessibility is closely related to the spatial arrangement and packing of residues. Predicting the solvent accessibility of a protein is an important step toward understanding its structure and function. In this work, we present a deep learning method to predict residue solvent accessibility, which is based on a stacked deep bidirectional recurrent neural network applied to sequence profiles. To capture more long-range sequence information, a merging operator was proposed for combining the bidirectional information from hidden nodes into outputs. Three types of merging operators were used in our improved model, with a long short-term memory network serving as the hidden computing node. The training database was constructed from 7361 proteins extracted from the PISCES server using a cut-off of 25% sequence identity. Sequence-derived features including the position-specific scoring matrix, physical properties, physicochemical characteristics, conservation score and protein coding were used to represent a residue. Using this method, predictive values of continuous relative solvent-accessible area were obtained, and these values were then transformed into binary states with predefined thresholds. Our experimental results showed that our deep learning method improved prediction quality relative to current methods, with mean absolute error and Pearson's correlation coefficient values of 8.8% and 74.8%, respectively, on the CB502 dataset and 8.2% and 78%, respectively, on the Manesh215 dataset.
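
    A compact sketch of the bidirectional architecture with a selectable merging operator, assuming PyTorch; BiLSTMRSA, the 40-dimensional per-residue feature vector, and the 0.25 exposure threshold are illustrative assumptions rather than the published configuration:

        import torch
        import torch.nn as nn

        class BiLSTMRSA(nn.Module):
            """Stacked bidirectional LSTM over per-residue features; the forward and
            backward hidden states are merged by a chosen operator before the output."""
            def __init__(self, feat_dim=40, hidden=128, layers=2, merge="concat"):
                super().__init__()
                self.merge = merge
                self.lstm = nn.LSTM(feat_dim, hidden, num_layers=layers,
                                    batch_first=True, bidirectional=True)
                out_dim = 2 * hidden if merge == "concat" else hidden
                self.head = nn.Linear(out_dim, 1)      # relative solvent accessibility

            def forward(self, x):                      # x: (batch, seq_len, feat_dim)
                h, _ = self.lstm(x)                    # (batch, seq_len, 2 * hidden)
                fwd, bwd = h.chunk(2, dim=-1)
                if self.merge == "sum":
                    h = fwd + bwd
                elif self.merge == "max":
                    h = torch.max(fwd, bwd)
                else:                                  # "concat"
                    h = torch.cat([fwd, bwd], dim=-1)
                return torch.sigmoid(self.head(h)).squeeze(-1)   # in [0, 1] per residue

        rsa = BiLSTMRSA(merge="sum")(torch.randn(1, 120, 40))    # one 120-residue protein
        binary_state = rsa > 0.25          # threshold into buried / exposed classes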

  3. Adaptive Sliding Mode Control of Dynamic Systems Using Double Loop Recurrent Neural Network Structure.

    Science.gov (United States)

    Fei, Juntao; Lu, Cheng

    2018-04-01

    In this paper, an adaptive sliding mode control system using a double loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops, in which the firing weights and the output signal calculated in the last step are stored and used as the feedback signals of the two loops. Since the new structure combines the advantages of internal-feedback and external-feedback NNs, it can acquire internal state information while the output signal is also captured; thus, the newly designed DLRNN can achieve better approximation performance than regular NNs without feedback loops or regular RNNs with a single feedback loop. The proposed DLRNN structure is employed in an equivalent controller to approximate the unknown nonlinear system dynamics, and the parameters of the DLRNN are updated online by adaptive laws to obtain favorable approximation performance. To investigate the effectiveness of the proposed controller, the designed adaptive sliding mode controller with the DLRNN is applied to a z-axis microelectromechanical system gyroscope to control the vibrating dynamics of the proof mass. Simulation results demonstrate that the proposed methodology can achieve good tracking properties, and comparisons of the approximation performance between a radial basis function NN, an RNN, and the DLRNN show that the DLRNN can accurately estimate the unknown dynamics quickly while its internal states remain more stable.

  4. A recurrent neural network for classification of unevenly sampled variable stars

    Science.gov (United States)

    Naul, Brett; Bloom, Joshua S.; Pérez, Fernando; van der Walt, Stéfan

    2018-02-01

    Astronomical surveys of celestial sources produce streams of noisy time series measuring flux versus time ('light curves'). Unlike in many other physical domains, however, large (and source-specific) temporal gaps in data arise naturally due to intranight cadence choices as well as diurnal and seasonal constraints [1-5]. With nightly observations of millions of variable stars and transients from upcoming surveys [4,6], efficient and accurate discovery and classification techniques on noisy, irregularly sampled data must be employed with minimal human-in-the-loop involvement. Machine learning for inference tasks on such data traditionally requires the laborious hand-coding of domain-specific numerical summaries of raw data ('features') [7]. Here, we present a novel unsupervised autoencoding recurrent neural network [8] that makes explicit use of sampling times and known heteroskedastic noise properties. When trained on optical variable star catalogues, this network produces supervised classification models that rival other best-in-class approaches. We find that autoencoded features learned in one time-domain survey perform nearly as well when applied to another survey. These networks can continue to learn from new unlabelled observations and may be used in other unsupervised tasks, such as forecasting and anomaly detection.
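
    A bare-bones sketch of an autoencoding RNN that takes the sampling times and measurement uncertainties as explicit inputs, assuming PyTorch; LightCurveAE, the (delta_t, flux, flux_err) encoding, and the uncertainty-weighted loss are illustrative simplifications, not the published architecture:

        import torch
        import torch.nn as nn

        class LightCurveAE(nn.Module):
            """Autoencoder RNN over irregularly sampled light curves: each step sees
            (delta_t, flux, flux_err); the decoder reconstructs flux at the observed
            times from a fixed-length embedding."""
            def __init__(self, hidden=64, embed=16):
                super().__init__()
                self.enc = nn.GRU(3, hidden, batch_first=True)
                self.to_embed = nn.Linear(hidden, embed)
                self.dec = nn.GRU(1 + embed, hidden, batch_first=True)
                self.out = nn.Linear(hidden, 1)

            def forward(self, dt, flux, err):                     # each: (batch, T)
                x = torch.stack([dt, flux, err], dim=-1)
                _, h = self.enc(x)
                z = self.to_embed(h[-1])                          # (batch, embed)
                dec_in = torch.cat([dt.unsqueeze(-1),
                                    z.unsqueeze(1).expand(-1, dt.shape[1], -1)], dim=-1)
                y, _ = self.dec(dec_in)
                return self.out(y).squeeze(-1), z                 # reconstruction, features

        model = LightCurveAE()
        dt, flux, err = torch.rand(4, 50), torch.randn(4, 50), 0.1 + 0.1 * torch.rand(4, 50)
        recon, z = model(dt, flux, err)
        loss = (((recon - flux) / err) ** 2).mean()   # uncertainty-weighted reconstruction
        # The embeddings z can then be fed to a classifier of variable-star types.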

  5. Recurrent neural network-based modeling of gene regulatory network using elephant swarm water search algorithm.

    Science.gov (United States)

    Mandal, Sudip; Saha, Goutam; Pal, Rajat Kumar

    2017-08-01

    Correct inference of the genetic regulations inside a cell from biological databases such as time-series microarray data is one of the greatest challenges in the post-genomic era for biologists and researchers. The Recurrent Neural Network (RNN) is one of the most popular and simple approaches for modeling the dynamics as well as inferring correct dependencies among genes. Inspired by the behavior of social elephants, we propose a new metaheuristic, the Elephant Swarm Water Search Algorithm (ESWSA), to infer Gene Regulatory Networks (GRNs). This algorithm is mainly based on the water search strategy of intelligent and social elephants during drought, utilizing different types of communication techniques. Initially, the algorithm is tested against benchmark small- and medium-scale artificial genetic networks, with and without the presence of different noise levels, and its efficiency is assessed in terms of parametric error, minimum fitness value, execution time, accuracy of prediction of true regulations, etc. Next, the proposed algorithm is tested against real gene expression data of the Escherichia coli SOS network, and the results are compared with other state-of-the-art optimization methods. The experimental results suggest that ESWSA is very efficient for the GRN inference problem and performs better than other methods in many ways.

  6. Applying long short-term memory recurrent neural networks to intrusion detection

    Directory of Open Access Journals (Sweden)

    Ralf C. Staudemeyer

    2015-07-01

    Full Text Available We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks with the training data provided by the DARPA / KDD Cup ’99 challenge. To identify suitable LSTM-RNN network parameters and structure we experimented with various network topologies. We found networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We applied forget gates and shortcut connections respectively. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets respectively. We evaluated different feature sets for the detection of all attacks within one network and also trained networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results of strong static classifiers. With 93.82% accuracy and 22.13 cost, LSTM outperforms the winning entries of the KDD Cup ’99 challenge by far. This is due to the fact that LSTM learns to look back in time and correlate consecutive connection records. For the first time ever, we have demonstrated the usefulness of LSTM networks for intrusion detection.

  7. Intelligent Noise Removal from EMG Signal Using Focused Time-Lagged Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    S. N. Kale

    2009-01-01

    Full Text Available Electromyography (EMG) signals can be used for clinical/biomedical applications and modern human-computer interaction. EMG signals acquire noise while traveling through tissue, from inherent noise in electronic equipment, ambient noise, and so forth. An ANN approach is studied for reduction of noise in EMG signals. In this paper, it is shown that a Focused Time-Lagged Recurrent Neural Network (FTLRNN) can elegantly remove the noise from an EMG signal. After rigorous computer simulations, the authors developed an optimal FTLRNN model which removes the noise from the EMG signal. Results show that the proposed optimal FTLRNN model has an MSE (Mean Square Error) as low as 0.000067 and 0.000048, and correlation coefficients as high as 0.99950 and 0.99939 for the noise signal and the EMG signal, respectively, when validated on the test dataset. It is also noticed that the output of the estimated FTLRNN model closely follows the real one. This network is indeed robust, as the EMG signal tolerates noise variance from 0.1 to 0.4 for uniform noise and up to 0.30 for Gaussian noise. It is clear that the training of the network is independent of the specific partitioning of the dataset. It is seen that the performance of the proposed FTLRNN model clearly outperforms the best Multilayer Perceptron (MLP) and Radial Basis Function NN (RBF) models. A simple NN model such as the FTLRNN with a single hidden layer can be employed to remove noise from EMG signals.

  8. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered a good understanding of principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we have not yet been able to understand the behavioural and mechanistic characteristics of natural language processing, nor how mechanisms in the brain allow language to be acquired and processed. In bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the appropriate characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network in which different parts have different leakage characteristics and thus operate on multiple timescales for every modality, and of the association of the higher-level nodes of all modalities into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.

  9. Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.

    Science.gov (United States)

    Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam

    2016-01-01

    We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to accurately extract the underlying dynamics present in the time-series expression data. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology has first been applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we have implemented our proposed framework on experimental (in vivo) datasets. Finally, we have investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we have implemented our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with a maximum deterioration of just over 15% in the worst case.

  10. Using LSTM recurrent neural networks for monitoring the LHC superconducting magnets

    Science.gov (United States)

    Wielgosz, Maciej; Skoczeń, Andrzej; Mertik, Matej

    2017-09-01

    The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyzes voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when a misbehavior of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system and are updated and monitored by human operators. This paper proposes a novel approach to monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art Deep Learning algorithms. Consequently, the authors of the paper decided to examine the performance of LSTM recurrent neural networks for modeling the voltage time series of the magnets. In order to address this challenging task, different network architectures and hyper-parameters were used to achieve the best possible performance of the solution. The regression results were measured in terms of RMSE for different numbers of future steps and history lengths taken into account for the prediction. The best result of RMSE = 0.00104 was obtained for a network of 128 LSTM cells within the internal layer and a 16-step history buffer.

  11. Confused or not Confused?: Disentangling Brain Activity from EEG Data Using Bidirectional LSTM Recurrent Neural Networks.

    Science.gov (United States)

    Ni, Zhaoheng; Yuksel, Ahmet Cem; Ni, Xiuyan; Mandel, Michael I; Xie, Lei

    2017-08-01

    Brain fog, also known as confusion, is one of the main reasons for low performance in the learning process or any kind of daily task that involves and requires thinking. Detecting confusion in a human's mind in real time is a challenging and important task that can be applied to online education, driver fatigue detection and so on. In this paper, we apply Bidirectional LSTM Recurrent Neural Networks to classify students' confusion while watching online course videos from EEG data. The results show that the Bidirectional LSTM model achieves state-of-the-art performance compared with other machine learning approaches, and shows strong robustness as evaluated by cross-validation. We can predict whether or not a student is confused with an accuracy of 73.3%. Furthermore, we find that the most important feature for detecting brain confusion is the gamma 1 wave of the EEG signal. Our results suggest that machine learning is a potentially powerful tool to model and understand brain activity.

  12. Application of Recurrent Neural Networks on El Nino Impact on California Climate

    Science.gov (United States)

    Le, J.; El-Askary, H. M.; Allai, M.

    2017-12-01

    Following our successful paper on the application to the El Niño season of 2015-2016 over Southern California, we use recurrent neural networks (RNNs) to investigate the complex interactions between the long-term trend in dryness and a projected, short but intense, period of wetness due to the 2015-2016 El Niño. Although it was forecasted that this El Niño season would bring significant rainfall to the region, our long-term projections of the Palmer Z Index (PZI) showed a continuing drought trend. We achieved a statistically significant correlation of 0.610 between forecasted and observed PZI on the validation set for a lead time of 1 month. This gives strong confidence to the forecasted precipitation indicator. These predictions were borne out in the resulting data. This paper details the expansion of our system to the climate of California as a whole, dealing with inter-relationships and spatial variations within the state.

  13. Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.

    Science.gov (United States)

    Okamoto, Hiroshi

    2016-08-01

    Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
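
    The pattern-restoration idea can be illustrated with a small sketch (NumPy; the specific dynamics used here, linear propagation on the adjacency matrix with a quantile-based global inhibition and activity normalization, are an invented stand-in for the paper's recurrent network, shown only to demonstrate seed-based recovery of a planted community):

        import numpy as np

        def restore_community(A, seeds, steps=200, q=0.5):
            """Attractor-style pattern restoration: start from an imperfect seed set and
            iterate recurrent dynamics on the adjacency matrix A; nodes whose activity
            survives the global inhibition form the detected community."""
            x = np.zeros(A.shape[0])
            x[list(seeds)] = 1.0
            for _ in range(steps):
                y = A @ x                                     # recurrent input from neighbours
                x = np.maximum(y - np.quantile(y, q), 0.0)    # threshold / global inhibition
                if x.sum() > 0:
                    x /= x.sum()                              # keep total activity bounded
            return np.flatnonzero(x > 1e-6)                   # recovered community members

        # Toy graph: two 5-node cliques joined by one edge; the seed set mixes two true
        # members (0, 1) with one false member (7) from the other community.
        A = np.zeros((10, 10))
        A[:5, :5] = 1; A[5:, 5:] = 1; np.fill_diagonal(A, 0); A[4, 5] = A[5, 4] = 1
        print(restore_community(A, seeds={0, 1, 7}))          # expected output: [0 1 2 3 4]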

  14. Feature Set Evaluation for Offline Handwriting Recognition Systems: Application to the Recurrent Neural Network Model.

    Science.gov (United States)

    Chherawala, Youssouf; Roy, Partha Pratim; Cheriet, Mohamed

    2016-12-01

    The performance of handwriting recognition systems is dependent on the features extracted from the word image. A large body of features exists in the literature, but no method has yet been proposed to identify the most promising of these, other than a straightforward comparison based on the recognition rate. In this paper, we propose a framework for feature set evaluation based on a collaborative setting. We use a weighted vote combination of recurrent neural network (RNN) classifiers, each trained with a particular feature set. This combination is modeled in a probabilistic framework as a mixture model and two methods for weight estimation are described. The main contribution of this paper is to quantify the importance of feature sets through the combination weights, which reflect their strength and complementarity. We chose the RNN classifier because of its state-of-the-art performance. Also, we provide the first feature set benchmark for this classifier. We evaluated several feature sets on the IFN/ENIT and RIMES databases of Arabic and Latin script, respectively. The resulting combination model is competitive with state-of-the-art systems.
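
    A toy sketch of the weighted-vote combination step (NumPy; the feature-set names, scores, and weights are invented for illustration, and the paper's two weight-estimation methods are not reproduced here):

        import numpy as np

        def combine_votes(posteriors, weights):
            """Weighted-vote combination of per-feature-set classifier outputs.
            posteriors: dict {feature_set: (n_words, n_classes) array of class scores};
            weights reflect the estimated strength of each feature set."""
            w = np.array([weights[k] for k in posteriors])
            w = w / w.sum()
            stacked = np.stack([posteriors[k] for k in posteriors])   # (n_sets, n, C)
            return np.tensordot(w, stacked, axes=1)                   # (n, C) combined scores

        # Two hypothetical feature sets scored by separate RNN classifiers.
        p = {"contour": np.array([[0.7, 0.3], [0.4, 0.6]]),
             "density": np.array([[0.6, 0.4], [0.2, 0.8]])}
        combined = combine_votes(p, weights={"contour": 0.8, "density": 0.2})
        print(combined.argmax(axis=1))     # final word labels after combination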

  15. Emergence of unstable itinerant orbits in a recurrent neural network model

    International Nuclear Information System (INIS)

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2005-01-01

    A recurrent neural network model with time delay is investigated by numerical methods. The model functions as a conventional associative memory and, in addition, allows us to embed a new kind of memory attractor that cannot be realized in models without time delay, for example chain-ring attractors. This is attributed to the fact that the time delay extends the available state-space dimension. The difference between the basin structures of chain-ring attractors and of isolated cycle attractors is investigated with respect to two attractor pattern sets: random memory patterns and designed memory patterns with intended structures. Compared with isolated attractors with random memory patterns, the basins of chain-ring attractors are reduced considerably. Computer experiments confirm that the basin volume of each embedded chain-ring attractor shrinks, and the emergence of unstable itinerant orbits in the state space outside the memory attractor basins is observed. The instability of such itinerant orbits is investigated: a 1-bit difference in initial conditions grows to no more than 10% of the total dimension within 100 updating steps.

  16. A Recurrent Neural Network Approach to Rear Vehicle Detection Which Considered State Dependency

    Directory of Open Access Journals (Sweden)

    Kayichirou Inagaki

    2003-08-01

    Full Text Available Vision-based vehicle detection often fails when the acquired image quality is degraded by changing optical environments. In addition, the shape of vehicles in images taken from vision sensors changes as a vehicle approaches. Vehicle detection methods are required to perform reliably under these conditions; however, conventional methods do not cope well with rapidly varying brightness. We suggest a new detection method that compensates for those conditions in monocular vision-based vehicle detection. The suggested method employs a Recurrent Neural Network (RNN), which has been applied to spatiotemporal processing. The RNN is able to respond to consecutive scenes involving the target vehicle and can track the movements of the target through the effect of past network states. The suggested method is particularly beneficial in environments with sudden, extreme variations such as bright sunlight and shade. Finally, we demonstrate the effectiveness of the state-dependent behavior of the RNN-based method by comparing its detection results with those of a Multi-Layered Perceptron (MLP).

  17. Novel criteria for global exponential periodicity and stability of recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Song Qiankun

    2008-01-01

    In this paper, the global exponential periodicity and stability of recurrent neural networks with time-varying delays are investigated by applying the idea of vector Lyapunov functions, M-matrix theory and inequality techniques. We assume neither global Lipschitz conditions on the activation functions nor differentiability of the time-varying delays, which were required in other papers. Several novel criteria are found to ascertain the existence, uniqueness and global exponential stability of the periodic solution of recurrent neural networks with time-varying delays. Moreover, the exponential convergence rate index is estimated, which depends on the system parameters. Some previous results are improved and generalized, and an example is given to show the effectiveness of our method.

  18. Improved delay-dependent globally asymptotic stability of delayed uncertain recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Yan, Ji; Bao-Tong, Cui

    2010-01-01

    In this paper, we have improved delay-dependent stability criteria for recurrent neural networks with a delay varying over a range and Markovian jumping parameters. The criteria improve over some previous ones in that they have fewer matrix variables yet less conservatism. In addition, a numerical example is provided to illustrate the applicability of the result using the linear matrix inequality toolbox in MATLAB. (general)

  19. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular but simple approaches to model network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy, but it underperforms for large-scale genetic networks. Here, a new methodology is proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is implemented with the Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators, while the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. In both cases, however, the proposed method incurs a higher computational cost because of the hybrid optimization process.

  20. Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Panda, Priyadarshini; Roy, Kaushik

    2017-01-01

    Synaptic Plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine the standard spike timing correlation based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.
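    One way to picture the combined rule (a sketch under assumed constants, not the authors' exact formulation) is a pair-based STDP term plus a weight-proportional decay that counteracts runaway attractor formation:

```python
# Illustrative pair-based STDP update with a non-Hebbian synaptic decay term.
import numpy as np

def update_weight(w, dt_spike, a_plus=0.01, a_minus=0.012, tau=20.0, decay=1e-3):
    """dt_spike = t_post - t_pre in ms; positive timing potentiates, negative depresses."""
    if dt_spike > 0:
        dw = a_plus * np.exp(-dt_spike / tau)    # Hebbian potentiation
    else:
        dw = -a_minus * np.exp(dt_spike / tau)   # Hebbian depression
    dw -= decay * w                              # non-Hebbian adaptive decay
    return float(np.clip(w + dw, 0.0, 1.0))
```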

  1. A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay.

    Science.gov (United States)

    Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang

    In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.

  2. REST mediates androgen receptor actions on gene repression and predicts early recurrence of prostate cancer

    DEFF Research Database (Denmark)

    Svensson, Charlotte; Ceder, Jens; Iglesias Gato, Diego

    2014-01-01

    The androgen receptor (AR) is a key regulator of prostate tumorigenesis through actions that are not fully understood. We identified the repressor element (RE)-1 silencing transcription factor (REST) as a mediator of AR actions on gene repression. Chromatin immunoprecipitation showed that AR binds...... in cell cycle progression, including Aurora Kinase A, that has previously been implicated in the growth of NE-like castration-resistant tumors. The analysis of prostate cancer tissue microarrays revealed that tumors with reduced expression of REST have higher probability of early recurrence, independently...... of their Gleason score. The demonstration that REST modulates AR actions in prostate epithelia and that REST expression is negatively correlated with disease recurrence after prostatectomy invites a deeper characterization of its role in prostate carcinogenesis....

  3. Identifying time-delayed gene regulatory networks via an evolvable hierarchical recurrent neural network.

    Science.gov (United States)

    Kordmahalleh, Mina Moradi; Sefidmazgi, Mohammad Gorji; Harrison, Scott H; Homaifar, Abdollah

    2017-01-01

    The modeling of genetic interactions within a cell is crucial for a basic understanding of physiology and for applied areas such as drug design. Interactions in gene regulatory networks (GRNs) include effects of transcription factors, repressors, small metabolites, and microRNA species. In addition, the effects of regulatory interactions are not always simultaneous, but can occur after a finite time delay, or as a combined outcome of simultaneous and time delayed interactions. Powerful biotechnologies have been rapidly and successfully measuring levels of genetic expression to illuminate different states of biological systems. This has led to an ensuing challenge to improve the identification of specific regulatory mechanisms through regulatory network reconstructions. Solutions to this challenge will ultimately help to spur forward efforts based on the usage of regulatory network reconstructions in systems biology applications. We have developed a hierarchical recurrent neural network (HRNN) that identifies time-delayed gene interactions using time-course data. A customized genetic algorithm (GA) was used to optimize hierarchical connectivity of regulatory genes and a target gene. The proposed design provides a non-fully connected network with the flexibility of using recurrent connections inside the network. These features and the non-linearity of the HRNN facilitate the process of identifying temporal patterns of a GRN. Our HRNN method was implemented with the Python language. It was first evaluated on simulated data representing linear and nonlinear time-delayed gene-gene interaction models across a range of network sizes and variances of noise. We then further demonstrated the capability of our method in reconstructing GRNs of the Saccharomyces cerevisiae synthetic network for in vivo benchmarking of reverse-engineering and modeling approaches (IRMA). We compared the performance of our method to TD-ARACNE, HCC-CLINDE, TSNI and ebdbNet across different network

  4. Physiological modules for generating discrete and rhythmic movements: action identification by a dynamic recurrent neural network.

    Science.gov (United States)

    Bengoetxea, Ana; Leurs, Françoise; Hoellinger, Thomas; Cebolla, Ana M; Dan, Bernard; McIntyre, Joseph; Cheron, Guy

    2014-01-01

    In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of the control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by PCA in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three different fundamental modules, one regulating the co-activation of all muscles over the time span of the movement and two others eliciting patterns of reciprocal activation operating in orthogonal directions.

  5. Recurrent neural networks with specialized word embeddings for health-domain named-entity recognition.

    Science.gov (United States)

    Jauregi Unanue, Iñigo; Zare Borzeshi, Ehsan; Piccardi, Massimo

    2017-12-01

    Previous state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text "feature engineering" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word "embeddings". Our aims are: (i) to create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering; (ii) to create richer, more specialized word embeddings by using health-domain datasets such as MIMIC-III; and (iii) to evaluate our systems over three contemporary datasets. Two deep learning methods, namely the bidirectional LSTM and the bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models. We have obtained the best results with the bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset. We present a state-of-the-art system for DNR and CCE. Automated word embeddings have allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained over datasets that are adequate for the domain in order to adequately cover the domain-specific vocabulary. Copyright © 2017 Elsevier Inc. All rights reserved.
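    For orientation, a minimal sketch of the kind of bidirectional LSTM tagger evaluated above is given below; the CRF transition layer used in the best-performing system is omitted for brevity, and the embedding and hidden sizes are assumptions rather than the paper's settings.

```python
# Hypothetical bidirectional LSTM tagger for token-level DNR/CCE labelling.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden=128):
        super().__init__()
        # The embedding table could be initialised from pretrained health-domain
        # vectors (e.g. trained on MIMIC-III text) instead of random weights.
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                        # per-token tag scores
```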

  6. Learning a Transferable Change Rule from a Recurrent Neural Network for Land Cover Change Detection

    Directory of Open Access Journals (Sweden)

    Haobo Lyu

    2016-06-01

    Full Text Available When exploited in remote sensing analysis, a reliable change rule with transfer ability can detect changes accurately and be applied widely. However, in practice, the complexity of land cover changes makes it difficult to use only one change rule or change feature learned from a given multi-temporal dataset to detect any other new target images without applying other learning processes. In this study, we consider the design of an efficient change rule having transferability to detect both binary and multi-class changes. The proposed method relies on an improved Long Short-Term Memory (LSTM) model to acquire and record the change information of long-term sequence remote sensing data. In particular, a core memory cell is utilized to learn the change rule from the information concerning binary changes or multi-class changes. Three gates are utilized to control the input, output and update of the LSTM model for optimization. In addition, the learned rule can be applied to detect changes and transfer the change rule from one learned image to another new target multi-temporal image. In this study, binary experiments, transfer experiments and multi-class change experiments are exploited to demonstrate the superiority of our method. Three contributions of this work can be summarized as follows: (1) the proposed method can learn an effective change rule to provide reliable change information for multi-temporal images; (2) the learned change rule has good transferability for detecting changes in new target images without any extra learning process, and the new target images should have a multi-spectral distribution similar to that of the training images; and (3) to the authors' best knowledge, this is the first time that deep learning in recurrent neural networks is exploited for change detection. In addition, under the framework of the proposed method, changes can be detected under both binary detection and multi-class change detection.

  7. Deep learning for pharmacovigilance: recurrent neural network architectures for labeling adverse drug reactions in Twitter posts.

    Science.gov (United States)

    Cocos, Anne; Fiks, Alexander G; Masino, Aaron J

    2017-07-01

    Social media is an important pharmacovigilance data source for adverse drug reaction (ADR) identification. Human review of social media data is infeasible due to data quantity, thus natural language processing techniques are necessary. Social media includes informal vocabulary and irregular grammar, which challenge natural language processing methods. Our objective is to develop a scalable, deep-learning approach that exceeds state-of-the-art ADR detection performance in social media. We developed a recurrent neural network (RNN) model that labels words in an input sequence with ADR membership tags. The only input features are word-embedding vectors, which can be formed through task-independent pretraining or during ADR detection training. Our best-performing RNN model used pretrained word embeddings created from a large, non-domain-specific Twitter dataset. It achieved an approximate match F-measure of 0.755 for ADR identification on the dataset, compared to 0.631 for a baseline lexicon system and 0.65 for the state-of-the-art conditional random field model. Feature analysis indicated that semantic information in pretrained word embeddings boosted sensitivity and, combined with contextual awareness captured in the RNN, precision. Our model required no task-specific feature engineering, suggesting generalizability to additional sequence-labeling tasks. Learning curve analysis showed that our model reached optimal performance with fewer training examples than the other models. ADR detection performance in social media is significantly improved by using a contextually aware model and word embeddings formed from large, unlabeled datasets. The approach reduces manual data-labeling requirements and is scalable to large social media datasets. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  8. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human--Robot Interaction

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2016-07-01

    Full Text Available To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by a branching structure, repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  9. Neural correlates of working memory in first episode and recurrent depression: An fMRI study.

    Science.gov (United States)

    Yüksel, Dilara; Dietsche, Bruno; Konrad, Carsten; Dannlowski, Udo; Kircher, Tilo; Krug, Axel

    2018-06-08

    Patients suffering from major depressive disorder (MDD) show deficits in working memory (WM) performance accompanied by bilateral fronto-parietal BOLD signal changes. It is unclear whether patients with a first depressive episode (FDE) exhibit the same signal changes as patients with recurrent depressive episodes (RDE). We investigated seventy-four MDD inpatients (48 RDE, 26 FDE) and 74 healthy control (HC) subjects performing an n-back WM task (0-back, 2-back, 3-back condition) in a 3T-fMRI. FMRI analyses revealed deviating BOLD signal in MDD in the thalamus (0-back vs. 2-back), the angular gyrus (0-back vs. 3-back), and the superior frontal gyrus (2-back vs. 3-back). Further effects were observed between RDE vs. FDE. Thus, RDE displayed differing neural activation in the middle frontal gyrus (2-back vs. 3-back), the inferior frontal gyrus, and the precentral gyrus (0-back vs. 2-back). In addition, both HC and FDE indicated a linear activation trend depending on task complexity. Although we failed to find behavioral differences between the groups, results suggest differing BOLD signal in fronto-parietal brain regions in MDD vs. HC, and in RDE vs. FDE. Moreover, both HC and FDE show similar trends in activation shapes. This indicates a link between levels of complexity-dependent activation in fronto-parietal brain regions and the stage of MDD. We therefore assume that load-dependent BOLD signal during WM is impaired in MDD, and that it is particularly affected in RDE. We also suspect neurobiological compensatory mechanisms of the reported brain regions in (working) memory functioning. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  11. De-identification of clinical notes via recurrent neural network and conditional random field.

    Science.gov (United States)

    Liu, Zengjian; Tang, Buzhou; Wang, Xiaolong; Chen, Qingcai

    2017-11-01

    De-identification, i.e., removing identifying information such as protected health information (PHI) from clinical data, is a critical step in enabling data to be shared or published. The 2016 Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-scale and RDOC Individualized Domains (N-GRID) clinical natural language processing (NLP) challenge contains a de-identification track for de-identifying electronic medical records (EMRs) (i.e., track 1). The challenge organizers provide 1000 annotated mental health records for this track, 600 of which are used as a training set and 400 as a test set. We develop a hybrid system for the de-identification task on the training set. First, four individual subsystems, that is, a subsystem based on bidirectional LSTM (long short-term memory, a variant of recurrent neural network), a subsystem based on bidirectional LSTM with features, a subsystem based on conditional random field (CRF) and a rule-based subsystem, are used to identify PHI instances. Then, an ensemble learning-based classifier is deployed to combine all PHI instances predicted by the three machine-learning-based subsystems. Finally, the results of the ensemble learning-based classifier and the rule-based subsystem are merged together. Experiments conducted on the official test set show that our system achieves the highest micro F1-scores of 93.07%, 91.43% and 95.23% under the "token", "strict" and "binary token" criteria respectively, ranking first in the 2016 CEGS N-GRID NLP challenge. In addition, on the dataset of the 2014 i2b2 NLP challenge, our system achieves the highest micro F1-scores of 96.98%, 95.11% and 98.28% under the "token", "strict" and "binary token" criteria respectively, outperforming other state-of-the-art systems. All these experiments prove the effectiveness of our proposed method. Copyright © 2017. Published by Elsevier Inc.

  12. A novel prosodic-information synthesizer based on recurrent fuzzy neural network for the Chinese TTS system.

    Science.gov (United States)

    Lin, Chin-Teng; Wu, Rui-Cheng; Chang, Jyh-Yeong; Liang, Sheng-Fu

    2004-02-01

    In this paper, a new technique for the Chinese text-to-speech (TTS) system is proposed. Our major effort focuses on prosodic information generation. New methodologies for constructing fuzzy rules in a prosodic model simulating humans' pronunciation rules are developed. The proposed Recurrent Fuzzy Neural Network (RFNN) is a multilayer recurrent neural network (RNN) which integrates a Self-cOnstructing Neural Fuzzy Inference Network (SONFIN) into a recurrent connectionist structure. The RFNN can be functionally divided into two parts. The first part adopts the SONFIN as a prosodic model to explore the relationship between high-level linguistic features and prosodic information based on fuzzy inference rules. Compared with conventional neural networks, the SONFIN can always construct itself with an economical network size at a high learning speed. The second part employs a five-layer network to generate all prosodic parameters by directly using the prosodic fuzzy rules inferred from the first part as well as other important features of syllables. The TTS system combined with the proposed method can reproduce not only sandhi rules but also the other prosodic phenomena found in traditional TTS systems. Moreover, the proposed scheme can even discover some new rules about prosodic phrase structure. The performance of the proposed RFNN-based prosodic model is verified by embedding it into a Chinese TTS system with a Chinese monosyllable database based on the time-domain pitch synchronous overlap add (TD-PSOLA) method. Our experimental results show that the proposed RFNN can generate proper prosodic parameters including pitch means, pitch shapes, maximum energy levels, syllable durations, and pause durations. Some synthetic sounds are available online for demonstration.

  13. hmmr mediates anterior neural tube closure and morphogenesis in the frog Xenopus.

    Science.gov (United States)

    Prager, Angela; Hagenlocher, Cathrin; Ott, Tim; Schambony, Alexandra; Feistel, Kerstin

    2017-10-01

    Development of the central nervous system requires orchestration of morphogenetic processes which drive elevation and apposition of the neural folds and their fusion into a neural tube. The newly formed tube gives rise to the brain in anterior regions and continues to develop into the spinal cord posteriorly. Conspicuous differences between the anterior and posterior neural tube become visible already during neural tube closure (NTC). Planar cell polarity (PCP)-mediated convergent extension (CE) movements are restricted to the posterior neural plate, i.e. hindbrain and spinal cord, where they propagate neural fold apposition. The lack of CE in the anterior neural plate correlates with a much slower mode of neural fold apposition anteriorly. The morphogenetic processes driving anterior NTC have not been addressed in detail. Here, we report a novel role for the breast cancer susceptibility gene and microtubule (MT) binding protein Hmmr (Hyaluronan-mediated motility receptor, RHAMM) in anterior neurulation and forebrain development in Xenopus laevis. Loss of hmmr function resulted in a lack of telencephalic hemisphere separation, arising from defective roof plate formation, which in turn was caused by impaired neural tissue narrowing. hmmr regulated polarization of neural cells, a function which was dependent on the MT binding domains. hmmr cooperated with the core PCP component vangl2 in regulating cell polarity and neural morphogenesis. Disrupted cell polarization and elongation in hmmr and vangl2 morphants prevented radial intercalation (RI), a cell behavior essential for neural morphogenesis. Our results pinpoint a novel role of hmmr in anterior neural development and support the notion that RI is a major driving force for anterior neurulation and forebrain morphogenesis. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. A recurrent translocation is mediated by homologous recombination between HERV-H elements

    Directory of Open Access Journals (Sweden)

    Hermetz Karen E

    2012-01-01

    Full Text Available Abstract Background Chromosome rearrangements are caused by many mutational mechanisms; of these, recurrent rearrangements can be particularly informative for teasing apart DNA sequence-specific factors. Some recurrent translocations are mediated by homologous recombination between large blocks of segmental duplications on different chromosomes. Here we describe a recurrent unbalanced translocation caused by recombination between shorter homologous regions on chromosomes 4 and 18 in two unrelated children with intellectual disability. Results Array CGH resolved the breakpoints of the 6.97-Megabase (Mb) loss of 18q and the 7.30-Mb gain of 4q. Sequencing across the translocation breakpoints revealed that both translocations occurred between 92%-identical human endogenous retrovirus (HERV) elements in the same orientation on chromosomes 4 and 18. In addition, we find sequence variation in the chromosome 4 HERV that makes one allele more like the chromosome 18 HERV. Conclusions Homologous recombination between HERVs on the same chromosome is known to cause chromosome deletions, but this is the first report of interchromosomal HERV-HERV recombination leading to a translocation. It is possible that normal sequence variation in substrates of non-allelic homologous recombination (NAHR) affects the alignment of recombining segments and influences the propensity to chromosome rearrangement.

  15. Distributed Recurrent Neural Forward Models with Synaptic Adaptation and CPG-based control for Complex Behaviors of Walking Robots

    Directory of Open Access Journals (Sweden)

    Sakyasingha eDasgupta

    2015-09-01

    Full Text Available Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allows the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions, or planning ahead, can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps as well as climbing over high obstacles. Furthermore we demonstrate that the newly developed recurrent network based approach to sensorimotor prediction outperforms the previous state of the art adaptive neuron

  16. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    Directory of Open Access Journals (Sweden)

    S. N. Naikwad

    2009-01-01

    Full Text Available A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear operation where the reaction is exothermic. A literature review indicates that process control of CSTRs using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes temporal relationships in the input-output mappings, a time lagged recurrent neural network is particularly suited for the identification task. The standard back propagation algorithm with a momentum term is employed in this model. Various parameters, such as the number of processing elements, number of hidden layers, training and testing percentages, learning rule and transfer function in the hidden and output layers, are investigated on the basis of performance measures such as MSE, NMSE, and the correlation coefficient on the testing data set. Finally, the effects of different norms are tested along with variation in the gamma memory filter. It is demonstrated that the dynamic NN model has a remarkable system identification capability for the problems considered in this paper. Thus the FTLR NN with gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is the major contribution of this paper.
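    The gamma memory filter mentioned above can be pictured as a cascade of leaky taps that converts the raw input into a short-term memory of its recent history; the sketch below follows the standard recursion, with the depth and the mu parameter chosen only for illustration.

```python
# Sketch of a gamma memory: g_0(n) = x(n), g_k(n) = (1-mu)*g_k(n-1) + mu*g_{k-1}(n-1).
import numpy as np

class GammaMemory:
    def __init__(self, depth=4, mu=0.5):
        self.mu = mu
        self.taps = np.zeros(depth + 1)      # taps[0] = raw input, taps[k] = k-th trace

    def step(self, x):
        prev = self.taps.copy()              # g_k(n-1) for all k
        self.taps[0] = x                     # g_0(n) = x(n)
        for k in range(1, len(self.taps)):
            self.taps[k] = (1 - self.mu) * prev[k] + self.mu * prev[k - 1]
        return self.taps.copy()              # memory features fed to the network

memory = GammaMemory(depth=4, mu=0.3)
features = [memory.step(u) for u in np.sin(np.linspace(0, 6, 60))]
```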

  17. Mediator Med23 deficiency enhances neural differentiation of murine embryonic stem cells through modulating BMP signaling.

    Science.gov (United States)

    Zhu, Wanqu; Yao, Xiao; Liang, Yan; Liang, Dan; Song, Lu; Jing, Naihe; Li, Jinsong; Wang, Gang

    2015-02-01

    Unraveling the mechanisms underlying early neural differentiation of embryonic stem cells (ESCs) is crucial to developing cell-based therapies of neurodegenerative diseases. Neural fate acquisition is proposed to be controlled by a 'default' mechanism, for which the molecular regulation is not well understood. In this study, we investigated the functional roles of Mediator Med23 in pluripotency and lineage commitment of murine ESCs. Unexpectedly, we found that, despite the largely unchanged pluripotency and self-renewal of ESCs, Med23 depletion rendered the cells prone to neural differentiation in different differentiation assays. Knockdown of two other Mediator subunits, Med1 and Med15, did not alter the neural differentiation of ESCs. Med15 knockdown selectively inhibited endoderm differentiation, suggesting the specificity of cell fate control by distinctive Mediator subunits. Gene profiling revealed that Med23 depletion attenuated BMP signaling in ESCs. Mechanistically, MED23 modulated Bmp4 expression by controlling the activity of ETS1, which is involved in Bmp4 promoter-enhancer communication. Interestingly, med23 knockdown in zebrafish embryos also enhanced neural development at early embryogenesis, which could be reversed by co-injection of bmp4 mRNA. Taken together, our study reveals an intrinsic, restrictive role of MED23 in early neural development, thus providing new molecular insights for neural fate determination. © 2015. Published by The Company of Biologists Ltd.

  18. Calcium signaling mediates five types of cell morphological changes to form neural rosettes.

    Science.gov (United States)

    Hříbková, Hana; Grabiec, Marta; Klemová, Dobromila; Slaninová, Iva; Sun, Yuh-Man

    2018-02-12

    Neural rosette formation is a critical morphogenetic process during neural development, whereby neural stem cells are enclosed in rosette niches to equipoise proliferation and differentiation. How neural rosettes form and provide a regulatory micro-environment remains to be elucidated. We employed the human embryonic stem cell-based neural rosette system to investigate the structural development and function of neural rosettes. Our study shows that neural rosette formation consists of five types of morphological change: intercalation, constriction, polarization, elongation and lumen formation. Ca2+ signaling plays a pivotal role in the five steps by regulating the actions of the cytoskeletal complexes, actin, myosin II and tubulin during intercalation, constriction and elongation. These, in turn, control the polarizing elements, ZO-1, PARD3 and β-catenin during polarization and lumen production for neural rosette formation. We further demonstrate that the dismantlement of neural rosettes, mediated by the destruction of cytoskeletal elements, promotes neurogenesis and astrogenesis prematurely, indicating that an intact rosette structure is essential for orderly neural development. © 2018. Published by The Company of Biologists Ltd.

  19. Novel recurrent neural network for modelling biological networks: oscillatory p53 interaction dynamics.

    Science.gov (United States)

    Ling, Hong; Samarasinghe, Sandhya; Kulasiri, Don

    2013-12-01

    Understanding the control of cellular networks consisting of gene and protein interactions and their emergent properties is a central activity of Systems Biology research. For this, continuous, discrete, hybrid, and stochastic methods have been proposed. Currently, the most common approach to modelling accurate temporal dynamics of networks is ordinary differential equations (ODE). However, critical limitations of ODE models are difficulty in kinetic parameter estimation and numerical solution of a large number of equations, making them more suited to smaller systems. In this article, we introduce a novel recurrent artificial neural network (RNN) that addresses above limitations and produces a continuous model that easily estimates parameters from data, can handle a large number of molecular interactions and quantifies temporal dynamics and emergent systems properties. This RNN is based on a system of ODEs representing molecular interactions in a signalling network. Each neuron represents concentration change of one molecule represented by an ODE. Weights of the RNN correspond to kinetic parameters in the system and can be adjusted incrementally during network training. The method is applied to the p53-Mdm2 oscillation system - a crucial component of the DNA damage response pathways activated by a damage signal. Simulation results indicate that the proposed RNN can successfully represent the behaviour of the p53-Mdm2 oscillation system and solve the parameter estimation problem with high accuracy. Furthermore, we presented a modified form of the RNN that estimates parameters and captures systems dynamics from sparse data collected over relatively large time steps. We also investigate the robustness of the p53-Mdm2 system using the trained RNN under various levels of parameter perturbation to gain a greater understanding of the control of the p53-Mdm2 system. Its outcomes on robustness are consistent with the current biological knowledge of this system. As more
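    The core idea, read loosely, is that each unit holds one molecular concentration, the weights act as kinetic parameters, and one recurrent update corresponds to one integration step of the underlying ODE; the sketch below is an assumed Euler discretisation with an illustrative sigmoidal interaction term, not the paper's exact equations.

```python
# Sketch: one Euler step of an RNN whose weights play the role of kinetic parameters.
import numpy as np

def rnn_ode_step(x, W, b, tau, dt=0.1):
    """x: molecular concentrations; W, b: trainable interaction parameters;
    implements x(t+dt) = x(t) + (dt/tau) * (-x(t) + sigmoid(W x(t) + b))."""
    drive = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return x + (dt / tau) * (-x + drive)

# Example: simulate a small 3-molecule network for 100 steps
x = np.array([0.1, 0.2, 0.05])
W = 0.5 * np.random.randn(3, 3)
b = np.zeros(3)
for _ in range(100):
    x = rnn_ode_step(x, W, b, tau=1.0)
```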

  20. A system of recurrent neural networks for modularising, parameterising and dynamic analysis of cell signalling networks.

    Science.gov (United States)

    Samarasinghe, S; Ling, H

    In this paper, we show how to extend our previously proposed novel continuous time Recurrent Neural Network (RNN) approach, which retains the advantage of continuous dynamics offered by Ordinary Differential Equations (ODE) while enabling parameter estimation through adaptation, to larger signalling networks using a modular approach. Specifically, the signalling network is decomposed into several sub-models based on important temporal events in the network. Each sub-model is represented by the proposed RNN and trained using data generated from the corresponding ODE model. Trained sub-models are assembled into a whole-system RNN which is then subjected to systems dynamics and sensitivity analyses. The concept is illustrated by application to the G1/S transition in the cell cycle using the Iwamoto et al. (2008) ODE model. We decomposed the G1/S network into 3 sub-models: (i) E2F transcription factor release; (ii) E2F and CycE positive feedback loop for elevating cyclin levels; and (iii) E2F and CycA negative feedback to degrade E2F. The trained sub-models accurately represented system dynamics and parameters were in good agreement with the ODE model. The whole-system RNN, however, revealed a couple of parameters contributing to compounding errors due to feedback and required refinement of sub-model 2. These related to the reversible reaction between CycE/CDK2 and its inhibitor p27. The revised whole-system RNN model very accurately matched the dynamics of the ODE system. Local sensitivity analysis of the whole-system model further revealed the dominant influence of the above two parameters in perturbing the G1/S transition, giving support to a recent hypothesis that the release of the inhibitor p27 from the Cyc/CDK complex triggers cell cycle stage transition. To make the model useful in a practical setting, we modified each RNN sub-model with a time relay switch to facilitate larger-interval input data (≈20min) (the original model used data for 30s or less) and retrained them that produced

  1. Synchronization of chaotic systems and identification of nonlinear systems by using recurrent hierarchical type-2 fuzzy neural networks.

    Science.gov (United States)

    Mohammadzadeh, Ardashir; Ghaemi, Sehraneh

    2015-09-01

    This paper proposes a novel approach for training of proposed recurrent hierarchical interval type-2 fuzzy neural networks (RHT2FNN) based on the square-root cubature Kalman filters (SCKF). The SCKF algorithm is used to adjust the premise part of the type-2 FNN and the weights of defuzzification and the feedback weights. The recurrence property in the proposed network is the output feeding of each membership function to itself. The proposed RHT2FNN is employed in the sliding mode control scheme for the synchronization of chaotic systems. Unknown functions in the sliding mode control approach are estimated by RHT2FNN. Another application of the proposed RHT2FNN is the identification of dynamic nonlinear systems. The effectiveness of the proposed network and its learning algorithm is verified by several simulation examples. Furthermore, the universal approximation of RHT2FNNs is also shown. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Recurrent neural network approach to quantum signal: coherent state restoration for continuous-variable quantum key distribution

    Science.gov (United States)

    Lu, Weizhao; Huang, Chunhui; Hou, Kun; Shi, Liting; Zhao, Huihui; Li, Zhengmei; Qiu, Jianfeng

    2018-05-01

    In continuous-variable quantum key distribution (CV-QKD), a weak signal carrying information is transmitted from Alice to Bob; during this process it is easily influenced by unknown noise, which reduces the signal-to-noise ratio and strongly impacts the reliability and stability of the communication. A recurrent quantum neural network (RQNN) is an artificial neural network model which can perform stochastic filtering without any prior knowledge of the signal and noise. In this paper, a modified RQNN algorithm with an expectation maximization algorithm is proposed to process the signal in CV-QKD, which follows the basic rules of quantum mechanics. After RQNN, the noise power decreases by about 15 dBm, the coherent signal recognition rate of the RQNN is 96%, the quantum bit error rate (QBER) drops to 4%, which is 6.9% lower than the original QBER, and the channel capacity is notably enlarged.

  3. Protein-Protein Interaction Article Classification Using a Convolutional Recurrent Neural Network with Pre-trained Word Embeddings.

    Science.gov (United States)

    Matos, Sérgio; Antunes, Rui

    2017-12-13

    Curation of protein interactions from scientific articles is an important task, since interaction networks are essential for the understanding of biological processes associated with disease or pharmacological action, for example. However, the increase in the number of publications that potentially contain relevant information turns this into a very challenging and expensive task. In this work we used a convolutional recurrent neural network for identifying relevant articles for extracting information regarding protein interactions. Using the BioCreative III Article Classification Task dataset, we achieved an area under the precision-recall curve of 0.715 and a Matthews correlation coefficient of 0.600, which represents an improvement over previous works.

  4. Robust stability analysis of Takagi—Sugeno uncertain stochastic fuzzy recurrent neural networks with mixed time-varying delays

    International Nuclear Information System (INIS)

    Ali, M. Syed

    2011-01-01

    In this paper, the global stability of Takagi—Sugeno (TS) uncertain stochastic fuzzy recurrent neural networks with discrete and distributed time-varying delays (TSUSFRNNs) is considered. A novel LMI-based stability criterion is obtained by using Lyapunov functional theory to guarantee the asymptotic stability of TSUSFRNNs. The proposed stability conditions are demonstrated through numerical examples. Furthermore, the supplementary requirement that the time derivative of time-varying delays must be smaller than one is removed. Comparison results are demonstrated to show that the proposed method is more able to guarantee the widest stability region than the other methods available in the existing literature. (general)

  5. Novel delay-distribution-dependent stability analysis for continuous-time recurrent neural networks with stochastic delay

    International Nuclear Information System (INIS)

    Wang Shen-Quan; Feng Jian; Zhao Qing

    2012-01-01

    In this paper, the problem of delay-distribution-dependent stability is investigated for continuous-time recurrent neural networks (CRNNs) with stochastic delay. Different from the common assumptions on time delays, it is assumed that the probability distribution of the delay taking values in some intervals is known a priori. By making full use of the information concerning the probability distribution of the delay and by using a tighter bounding technique (the reciprocally convex combination method), less conservative asymptotic mean-square stable sufficient conditions are derived in terms of linear matrix inequalities (LMIs). Two numerical examples show that our results are better than the existing ones. (general)

  6. Indirect intelligent sliding mode control of a shape memory alloy actuated flexible beam using hysteretic recurrent neural networks

    International Nuclear Information System (INIS)

    Hannen, Jennifer C; Buckner, Gregory D; Crews, John H

    2012-01-01

    This paper introduces an indirect intelligent sliding mode controller (IISMC) for shape memory alloy (SMA) actuators, specifically a flexible beam deflected by a single offset SMA tendon. The controller manipulates applied voltage, which alters SMA tendon temperature to track reference bending angles. A hysteretic recurrent neural network (HRNN) captures the nonlinear, hysteretic relationship between SMA temperature and bending angle. The variable structure control strategy provides robustness to model uncertainties and parameter variations, while effectively compensating for system nonlinearities, achieving superior tracking compared to an optimized PI controller. (paper)

  7. Artificial neural network and falls in community-dwellers: a new approach to identify the risk of recurrent falling?

    Science.gov (United States)

    Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Annweiler, Cédric; Fantino, Bruno; Beauchet, Olivier

    2015-04-01

    Identification of the risk of recurrent falls is complex in older adults. The aim of this study was to examine the efficiency of 3 artificial neural networks (ANNs: multilayer perceptron [MLP], modified MLP, and neuroevolution of augmenting topologies [NEAT]) for the classification of recurrent fallers and nonrecurrent fallers using a set of clinical characteristics corresponding to risk factors of falls measured among community-dwelling older adults. Based on a cross-sectional design, 3289 community-dwelling volunteers aged 65 and older were recruited. Age, gender, body mass index (BMI), number of drugs daily taken, use of psychoactive drugs, diphosphonate, calcium, vitamin D supplements and walking aid, fear of falling, distance vision score, Timed Up and Go (TUG) score, lower-limb proprioception, handgrip strength, depressive symptoms, cognitive disorders, and history of falls were recorded. Participants were separated into 2 groups based on the number of falls that occurred over the past year: 0 or 1 fall and 2 or more falls. In addition, total population was separated into training and testing subgroups for ANN analysis. Among 3289 participants, 18.9% (n = 622) were recurrent fallers. NEAT, using 15 clinical characteristics (ie, use of walking aid, fear of falling, use of calcium, depression, use of vitamin D supplements, female, cognitive disorders, BMI 4, vision score 9 seconds, handgrip strength score ≤29 (N), and age ≥75 years), showed the best efficiency for identification of recurrent fallers, sensitivity (80.42%), specificity (92.54%), positive predictive value (84.38), negative predictive value (90.34), accuracy (88.39), and Cohen κ (0.74), compared with MLP and modified MLP. NEAT, using a set of 15 clinical characteristics, was an efficient ANN for the identification of recurrent fallers in older community-dwellers. Copyright © 2015 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  8. An adaptive PID like controller using mix locally recurrent neural network for robotic manipulator with variable payload.

    Science.gov (United States)

    Sharma, Richa; Kumar, Vikas; Gaur, Prerna; Mittal, A P

    2016-05-01

    Being complex, non-linear and coupled system, the robotic manipulator cannot be effectively controlled using classical proportional-integral-derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for the nonlinear and uncertain systems, gains of the PID controller should be conservatively tuned and should adapt to the process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller which consists of at most three hidden nodes which act as proportional, integral and derivative node. The gains of the mix locally recurrent neural network based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than assuming randomly. A sequential learning based least square algorithm is then investigated for the on-line adaptation of the gains of MLRNNPID controller. The performance of the proposed controller scheme is tested against the plant parameters uncertainties and external disturbances for both links of the two link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using Lyapunov stability criteria. A performance comparison is carried out among MLRNNPID controller, CSA optimized NNPID (OPTNNPID) controller and CSA optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
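    The P, I and D roles of the three hidden nodes can be sketched as follows (a toy reading of the architecture, with placeholder gains standing in for values the paper obtains from cuckoo-search optimisation and on-line least-squares adaptation):

```python
# Toy "neural PID": one proportional node, one locally recurrent integral node,
# one derivative node, combined through adaptable output gains.
class NeuralPID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
        self.gains = [kp, ki, kd]          # output-layer weights (tunable on-line)
        self.dt = dt
        self.integral = 0.0                # state of the self-recurrent integral node
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt                  # integral node
        derivative = (error - self.prev_error) / self.dt  # derivative node
        self.prev_error = error
        nodes = [error, self.integral, derivative]        # P, I, D hidden nodes
        return sum(g * n for g, n in zip(self.gains, nodes))

controller = NeuralPID()
torque = controller.step(error=0.2)        # control signal for one joint
```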

  9. A Study of Recurrent and Convolutional Neural Networks in the Native Language Identification Task

    KAUST Repository

    Werfelmann, Robert

    2018-01-01

    around the world. The neural network models consisted of Long Short-Term Memory and Convolutional networks using the sentences of each document as the input. Additional statistical features were generated from the text to complement the predictions

  10. Wind Turbine Driving a PM Synchronous Generator Using Novel Recurrent Chebyshev Neural Network Control with the Ideal Learning Rate

    Directory of Open Access Journals (Sweden)

    Chih-Hong Lin

    2016-06-01

    Full Text Available A permanent magnet (PM) synchronous generator system driven by a wind turbine (WT), connected with the smart grid via an AC-DC converter and a DC-AC converter, is controlled by the novel recurrent Chebyshev neural network (NN) and amended particle swarm optimization (PSO) to regulate the output power and output voltage of the two power converters in this study. Because a PM synchronous generator system driven by a WT is an unknown non-linear and time-varying dynamic system, the on-line trained novel recurrent Chebyshev NN control system is developed to regulate the DC voltage of the AC-DC converter and the AC voltage of the DC-AC converter connected with the smart grid. Furthermore, the variable learning rate of the novel recurrent Chebyshev NN is regulated according to a discrete-type Lyapunov function for improving the control performance and enhancing the convergence speed. Finally, some experimental results are shown to verify the effectiveness of the proposed control method for a WT driving a PM synchronous generator system in a smart grid.
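
    As a rough, hedged illustration of the functional expansion behind a Chebyshev NN, the snippet below fits a linear readout on Chebyshev polynomial features of a scalar input with NumPy; the target function, expansion order and least-squares fit are placeholders and do not reproduce the paper's recurrent feedback, amended PSO, or variable learning rate.

```python
# Toy sketch of the Chebyshev functional expansion underlying a Chebyshev NN:
# each (scaled) scalar input is expanded with Chebyshev polynomials T_0..T_k
# and a linear readout is fitted on the expanded features.
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_expand(x, order=4):
    # evaluate T_0..T_order at each input value
    return np.column_stack([C.chebval(x, [0] * k + [1])
                            for k in range(order + 1)])

x = np.linspace(-1, 1, 200)
y = np.sin(np.pi * x)                     # placeholder mapping to approximate
Phi = cheb_expand(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("max approximation error:", float(np.max(np.abs(Phi @ w - y))))
```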

  11. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
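
    The following numeric sketch only conveys the flavour of such neural-dynamics solvers: it integrates a sign-based (sub)gradient flow for an unconstrained least absolute deviation problem. It is not the paper's one-layer model and carries none of its finite-time convergence guarantees; the problem size, step size and iteration count are arbitrary.

```python
# Illustrative sketch: Euler integration of a sign-based (sub)gradient flow
# for minimize ||Ax - b||_1, loosely echoing a hard-limiting activation.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
x_true = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=50)

x = np.zeros(5)
step = 1e-3
for _ in range(20000):
    x -= step * A.T @ np.sign(A @ x - b)   # hard-limiting (sign) activation

print("recovered minus true:", np.round(x - x_true, 3))
```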

  12. Modeling the dynamics of the lead bismuth eutectic experimental accelerator driven system by an infinite impulse response locally recurrent neural network

    International Nuclear Information System (INIS)

    Zio, Enrico; Pedroni, Nicola; Broggi, Matteo; Golea, Lucia Roxana

    2009-01-01

    In this paper, an infinite impulse response locally recurrent neural network (IIR-LRNN) is employed for modelling the dynamics of the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS). The network is trained by recursive back-propagation (RBP) and its ability to estimate transients is tested under various conditions. The results demonstrate the robustness of the locally recurrent scheme in the reconstruction of complex nonlinear dynamic relationships.

  13. RM-SORN: a reward-modulated self-organizing recurrent neural network.

    Science.gov (United States)

    Aswolinskiy, Witali; Pipa, Gordon

    2015-01-01

    Neural plasticity plays an important role in learning and memory. Reward-modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of Intrinsic Plasticity (IP) and reward-modulated Spike-Timing-Dependent Plasticity (STDP). IP enables the network to explore possible output sequences and STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules, and rewarding strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning and whether reward-modulated self-organization can explain the amazing capabilities of the brain.
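
    A minimal, hedged numerical sketch of the reward-modulation idea follows: a Hebbian eligibility trace of pre/post spike coincidences is converted into a weight change only when a scalar reward arrives. The network size, firing rates and reward schedule are arbitrary placeholders, and the sketch omits the intrinsic plasticity and self-organizing recurrent structure of RM-SORN.

```python
# Toy sketch (not the RM-SORN implementation): reward gates a Hebbian
# eligibility trace so that only rewarded pre/post coincidences change weights.
import numpy as np

rng = np.random.default_rng(1)
n = 20
W = rng.normal(0, 0.1, size=(n, n))
eligibility = np.zeros_like(W)
lr, trace_decay = 0.01, 0.9

for step in range(1000):
    pre = (rng.random(n) < 0.1).astype(float)    # presynaptic spikes
    post = (rng.random(n) < 0.1).astype(float)   # postsynaptic spikes
    eligibility = trace_decay * eligibility + np.outer(post, pre)
    reward = 1.0 if step % 10 == 0 else 0.0      # sparse scalar reward
    W += lr * reward * eligibility               # reward-modulated update
```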

  14. Mining e-cigarette adverse events in social media using Bi-LSTM recurrent neural network with word embedding representation.

    Science.gov (United States)

    Xie, Jiaheng; Liu, Xiao; Dajun Zeng, Daniel

    2018-01-01

    Recent years have seen increased worldwide popularity of e-cigarette use. However, the risks of e-cigarettes are underexamined. Most e-cigarette adverse event studies have achieved low detection rates due to limited subject sample sizes in the experiments and surveys. Social media provides a large data repository of consumers' e-cigarette feedback and experiences, which are useful for e-cigarette safety surveillance. However, it is difficult to automatically interpret the informal and nontechnical consumer vocabulary about e-cigarettes in social media. This issue hinders the use of social media content for e-cigarette safety surveillance. Recent developments in deep neural network methods have shown promise for named entity extraction from noisy text. Motivated by these observations, we aimed to design a deep neural network approach to extract e-cigarette safety information in social media. Our deep neural language model utilizes word embedding as the representation of text input and recognizes named entity types with the state-of-the-art Bidirectional Long Short-Term Memory (Bi-LSTM) Recurrent Neural Network. Our Bi-LSTM model achieved the best performance compared to 3 baseline models, with a precision of 94.10%, a recall of 91.80%, and an F-measure of 92.94%. We identified 1591 unique adverse events and 9930 unique e-cigarette components (ie, chemicals, flavors, and devices) from our research testbed. Although the conditional random field baseline model had slightly better precision than our approach, our Bi-LSTM model achieved much higher recall, resulting in the best F-measure. Our method can be generalized to extract medical concepts from social media for other medical applications. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
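
    A hedged Keras sketch of the core tagging architecture (word embeddings feeding a bidirectional LSTM with a per-token softmax) is given below; the vocabulary size, sequence length, tag set and hyperparameters are placeholders, and the pretrained word embeddings of the actual system are omitted.

```python
# Sketch of a Bi-LSTM sequence tagger in the spirit of the described
# adverse-event extractor; all sizes are illustrative placeholders.
import tensorflow as tf

vocab_size, max_len, n_tags = 20000, 60, 5   # e.g. BIO tags for AE/component

inputs = tf.keras.Input(shape=(max_len,))
x = tf.keras.layers.Embedding(vocab_size, 100, mask_zero=True)(inputs)
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(128, return_sequences=True))(x)
outputs = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(n_tags, activation="softmax"))(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```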

  15. Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks.

    Science.gov (United States)

    Yang, Shuai; Yu, Juan; Hu, Cheng; Jiang, Haijun

    2018-08-01

    In this paper, without separating the complex-valued neural networks into two real-valued systems, the quasi-projective synchronization of fractional-order complex-valued neural networks is investigated. First, two new fractional-order inequalities are established by using the theory of complex functions, Laplace transform and Mittag-Leffler functions, which generalize traditional inequalities with the first-order derivative in the real domain. Additionally, different from hybrid control schemes given in the previous work concerning the projective synchronization, a simple and linear control strategy is designed in this paper and several criteria are derived to ensure quasi-projective synchronization of the complex-valued neural networks with fractional-order based on the established fractional-order inequalities and the theory of complex functions. Moreover, the error bounds of quasi-projective synchronization are estimated. Especially, some conditions are also presented for the Mittag-Leffler synchronization of the addressed neural networks. Finally, some numerical examples with simulations are provided to show the effectiveness of the derived theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Combination of Deep Recurrent Neural Networks and Conditional Random Fields for Extracting Adverse Drug Reactions from User Reviews.

    Science.gov (United States)

    Tutubalina, Elena; Nikolenko, Sergey

    2017-01-01

    Adverse drug reactions (ADRs) are an essential part of the analysis of drug use, measuring drug use benefits, and making policy decisions. Traditional channels for identifying ADRs are reliable but very slow and only produce a small amount of data. Text reviews, either on specialized web sites or in general-purpose social networks, may lead to a data source of unprecedented size, but identifying ADRs in free-form text is a challenging natural language processing problem. In this work, we propose a novel model for this problem, uniting recurrent neural architectures and conditional random fields. We evaluate our model with a comprehensive experimental study, showing improvements over state-of-the-art methods of ADR extraction.

  17. Combination of Deep Recurrent Neural Networks and Conditional Random Fields for Extracting Adverse Drug Reactions from User Reviews

    Directory of Open Access Journals (Sweden)

    Elena Tutubalina

    2017-01-01

    Full Text Available Adverse drug reactions (ADRs) are an essential part of the analysis of drug use, measuring drug use benefits, and making policy decisions. Traditional channels for identifying ADRs are reliable but very slow and only produce a small amount of data. Text reviews, either on specialized web sites or in general-purpose social networks, may lead to a data source of unprecedented size, but identifying ADRs in free-form text is a challenging natural language processing problem. In this work, we propose a novel model for this problem, uniting recurrent neural architectures and conditional random fields. We evaluate our model with a comprehensive experimental study, showing improvements over state-of-the-art methods of ADR extraction.

  18. On the Nature of the Intrinsic Connectivity of the Cat Motor Cortex: Evidence for a Recurrent Neural Network Topology

    DEFF Research Database (Denmark)

    Capaday, Charles; Ethier, C; Brizzi, L

    2009-01-01

    Capaday C, Ethier C, Brizzi L, Sik A, van Vreeswijk C, Gingras D. On the nature of the intrinsic connectivity of the cat motor cortex: evidence for a recurrent neural network topology. J Neurophysiol 102: 2131-2141, 2009. First published July 22, 2009; doi: 10.1152/jn.91319.2008. The details and functional significance of the intrinsic horizontal connections between neurons in the motor cortex (MCx) remain to be clarified. To further elucidate the nature of this intracortical connectivity pattern, experiments were done on the MCx of three cats. The anterograde tracer biocytin was ejected iontophoretically in layers II, III, and V. Some 30-50 neurons within a radius of ~250 μm were thus stained. The functional output of the motor cortical point at which biocytin was injected, and of the surrounding points, was identified by microstimulation and electromyographic recordings. The axonal...

  19. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    Science.gov (United States)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state of the art algorithms in tasks such as speech recognition or handwriting recognition. It has the ability to process long term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. Not only do we explore the low level combination (feature space combination) but we also explore high level combination (decoding combination) and mid-level (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low level combination works best, thanks to the powerful data modeling of the LSTM neurons.

  20. Auto-Associative Recurrent Neural Networks and Long Term Dependencies in Novelty Detection for Audio Surveillance Applications

    Science.gov (United States)

    Rossi, A.; Montefoschi, F.; Rizzo, A.; Diligenti, M.; Festucci, C.

    2017-10-01

    Machine Learning applied to Automatic Audio Surveillance has been attracting increasing attention in recent years. In spite of several investigations based on a large number of different approaches, little attention has been paid to the environmental temporal evolution of the input signal. In this work, we propose an exploration in this direction, comparing the temporal correlations extracted at the feature level with those learned by a representational structure. To this aim we analysed the prediction performance of a Recurrent Neural Network architecture, varying the length of the processed input sequence and the size of the time window used in the feature extraction. Results corroborated the hypothesis that sequential models work better when dealing with data characterized by temporal order. However, so far the optimization of the temporal dimension remains an open issue.

  1. A Study of Recurrent and Convolutional Neural Networks in the Native Language Identification Task

    KAUST Repository

    Werfelmann, Robert

    2018-05-24

    Native Language Identification (NLI) is the task of predicting the native language of an author from their text written in a second language. The idea is to find writing habits that transfer from an author’s native language to their second language. Many approaches to this task have been studied, from simple word frequency analysis, to analyzing grammatical and spelling mistakes to find patterns and traits that are common between different authors of the same native language. This can be a very complex task, depending on the native language and the proficiency of the author’s second language. The most common approach that has seen very good results is based on the usage of n-gram features of words and characters. In this thesis, we attempt to extract lexical, grammatical, and semantic features from the sentences of non-native English essays using neural networks. The training and testing data was obtained from a large corpus of publicly available essays written by authors from several countries around the world. The neural network models consisted of Long Short-Term Memory and Convolutional networks using the sentences of each document as the input. Additional statistical features were generated from the text to complement the predictions of the neural networks, which were then used as feature inputs to a Support Vector Machine, making the final prediction. Results show that a Long Short-Term Memory neural network can improve performance over a naive bag-of-words approach, but with a much smaller feature set. With more fine-tuning of neural network hyperparameters, these results will likely improve significantly.

  2. Deep Recurrent Neural Networks for Product Attribute Extraction in eCommerce

    OpenAIRE

    Majumder, Bodhisattwa Prasad; Subramanian, Aditya; Krishnan, Abhinandan; Gandhi, Shreyansh; More, Ajinkya

    2018-01-01

    Extracting accurate attribute qualities from product titles is a vital component in delivering eCommerce customers with a rewarding online shopping experience via an enriched faceted search. We demonstrate the potential of Deep Recurrent Networks in this domain, primarily models such as Bidirectional LSTMs and Bidirectional LSTM-CRF with or without an attention mechanism. These have improved overall F1 scores, as compared to the previous benchmarks (More et al.) by at least 0.0391, showcasing...

  3. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences.

    Science.gov (United States)

    Quang, Daniel; Xie, Xiaohui

    2016-06-20

    Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
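
    A hedged Keras sketch of this hybrid convolution-plus-BiLSTM layout follows. The layer sizes loosely echo the published description (motif-scanning convolution followed by a bidirectional LSTM and dense layers), but they should be treated as illustrative rather than the exact DanQ hyperparameters, and the data pipeline is omitted.

```python
# Rough sketch of a DanQ-style hybrid model: a convolution captures motifs
# and a BiLSTM captures dependencies between them; sizes are illustrative.
import tensorflow as tf

seq_len, n_targets = 1000, 919   # one-hot DNA (A,C,G,T) and chromatin marks

inputs = tf.keras.Input(shape=(seq_len, 4))
x = tf.keras.layers.Conv1D(320, kernel_size=26, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(pool_size=13)(x)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(320, return_sequences=True))(x)
x = tf.keras.layers.Dropout(0.5)(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(925, activation="relu")(x)
outputs = tf.keras.layers.Dense(n_targets, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```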

  4. The attractor recurrent neural network based on fuzzy functions: An effective model for the classification of lung abnormalities.

    Science.gov (United States)

    Khodabakhshi, Mohammad Bagher; Moradi, Mohammad Hassan

    2017-05-01

    The respiratory system dynamic is of high significance when it comes to the detection of lung abnormalities, which highlights the importance of presenting a reliable model for it. In this paper, we introduce a novel dynamic modelling method for the characterization of lung sounds (LS), based on the attractor recurrent neural network (ARNN). The ARNN structure allows the development of an effective LS model. Additionally, it has the capability to reproduce the distinctive features of the lung sounds using its formed attractors. Furthermore, a novel ARNN topology based on fuzzy functions (FFs-ARNN) is developed. Given the utility of recurrence quantification analysis (RQA) as a tool to assess the nature of complex systems, it was used to evaluate the performance of both the ARNN and the FFs-ARNN models. The experimental results demonstrate the effectiveness of the proposed approaches for multichannel LS analysis. In particular, a classification accuracy of 91% was achieved using FFs-ARNN with sequences of RQA features. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Bioelectric signal classification using a recurrent probabilistic neural network with time-series discriminant component analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shima, Keisuke; Shibanoki, Taro; Kurita, Yuichi; Tsuji, Toshio

    2013-01-01

    This paper outlines a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower-dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model that incorporates a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into a neural network so that parameters can be obtained appropriately as network coefficients according to a backpropagation-through-time-based training algorithm. The network is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. In the experiments conducted during the study, the validity of the proposed network was demonstrated for EEG signals.

  6. Stability switches, oscillatory multistability, and spatio-temporal patterns of nonlinear oscillations in recurrently delay coupled neural networks.

    Science.gov (United States)

    Song, Yongli; Makarov, Valeri A; Velarde, Manuel G

    2009-08-01

    A model of time-delay recurrently coupled spatially segregated neural assemblies is here proposed. We show that it operates like some of the hierarchical architectures of the brain. Each assembly is a neural network with no delay in the local couplings between the units. The delay appears in the long-range feedforward and feedback inter-assembly communications. Bifurcation analysis of a simple four-unit system in the autonomous case shows the richness of the dynamical behaviors in a biophysically plausible parameter region. We find oscillatory multistability, hysteresis, and stability switches of the rest state provoked by the time delay. Then we investigate the spatio-temporal patterns of bifurcating periodic solutions by using the symmetric local Hopf bifurcation theory of delay differential equations and derive the equation describing the flow on the center manifold, which enables us to determine the direction of Hopf bifurcations and the stability of the bifurcating periodic orbits. We also discuss computational properties of the system due to the delay when an external drive of the network mimics external sensory input.

  7. Application of a Self-recurrent Wavelet Neural Network in the Modeling and Control of an AC Servo System

    Directory of Open Access Journals (Sweden)

    Run Min HOU

    2014-05-01

    Full Text Available To control the nonlinearity, widespread variations in loads and time-varying characteristics of the high-power AC servo system, the modeling and control techniques are studied here. A self-recurrent wavelet neural network (SRWNN) modeling scheme is proposed, which successfully addresses the issue of the traditional wavelet neural network easily falling into a local optimum, and significantly improves the network approximation capability and convergence rate. A control scheme for the SRWNN based on fuzzy compensation is then presented. Gradient information is provided in real time for the controller by using an SRWNN identifier, so as to ensure that the learning and adjusting functions of the SRWNN controller operate well, and fuzzy compensation control is applied to improve the rapidity and accuracy of the entire system. The Lyapunov function is then utilized to judge the stability of the system. Through experimental analysis and comparisons with other modeling and control methods, it is clearly shown that the proposed modeling and control schemes are effective.

  8. Use of Recurrent Neural Networks for Strategic Data Mining of Sales

    OpenAIRE

    Vadhavkar, Sanjeev; Shanmugasundaram, Jayavel; Gupta, Amar; Prasad, M.V. Nagendra

    2002-01-01

    An increasing number of organizations are involved in the development of strategic information systems for effective linkages with their suppliers, customers, and other channel partners involved in transportation, distribution, warehousing and maintenance activities. An efficient inter-organizational inventory management system based on data mining techniques is a significant step in this direction. This paper discusses the use of neural network based data mining and knowledge discovery techn...

  9. Deep Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction

    OpenAIRE

    Cui, Zhiyong; Ke, Ruimin; Wang, Yinhai

    2018-01-01

    Short-term traffic forecasting based on deep learning methods, especially long short-term memory (LSTM) neural networks, has received much attention in recent years. However, the potential of deep learning methods in traffic forecasting has not yet fully been exploited in terms of the depth of the model architecture, the spatial scale of the prediction area, and the predictive power of spatial-temporal data. In this paper, a deep stacked bidirectional and unidirectional LSTM (SBU- LSTM) neura...

  10. C-RNN-GAN: Continuous recurrent neural networks with adversarial training

    OpenAIRE

    Mogren, Olof

    2016-01-01

    Generative adversarial networks have been proposed as a way of efficiently training deep generative neural networks. We propose a generative adversarial model that works on continuous sequential data, and apply it by training it on a collection of classical music. We conclude that it generates music that sounds better and better as the model is trained, report statistics on generated music, and let the reader judge the quality by downloading the generated songs.

  11. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model

    Science.gov (United States)

    Panda, Priyadarshini; Srinivasa, Narayan

    2018-01-01

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature action/movements enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962

  12. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.

  13. Recurrent-neural-network-based Boolean factor analysis and its application to word clustering.

    Science.gov (United States)

    Frolov, Alexander A; Husek, Dusan; Polyakov, Pavel Yu

    2009-07-01

    The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov, 2007). It is shown that this extended algorithm supports even the more complex model of signals that are supposed to be related to textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for the following purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classification of signals to determine whether they contain the factor. Since it is assumed that every word may possibly contribute to several topics, the proposed method might be related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words are at a good level of agreement despite the fact that identical topics in Russian and English conferences contain different sets of keywords.

  14. Predictions of SEP events by means of a linear filter and layer-recurrent neural network

    Czech Academy of Sciences Publication Activity Database

    Valach, F.; Revallo, M.; Hejda, Pavel; Bochníček, Josef

    2011-01-01

    Vol. 69, No. 9-10 (2011), pp. 758-766 ISSN 0094-5765 R&D Projects: GA AV ČR(CZ) IAA300120608; GA MŠk OC09070 Grant - others: VEGA(SK) 2/0015/11; VEGA(SK) 2/0022/11 Institutional research plan: CEZ:AV0Z30120515 Keywords: coronal mass ejection * X-ray flare * solar energetic particles * artificial neural network Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.614, year: 2011

  15. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.

    Science.gov (United States)

    Gilra, Aditya; Gerstner, Wulfram

    2017-11-27

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
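
    A rate-based caricature of the FOLLOW idea is sketched below, under loud assumptions: there are no spiking neurons or network dynamics here, only weights whose local updates multiply presynaptic activity by the output error, with a fixed random feedback matrix standing in for the error projection onto hidden units. All sizes, the toy target mapping and the learning rate are placeholders.

```python
# Rate-based caricature (not the spiking FOLLOW implementation): the output
# error is fed back through a fixed random matrix, and each weight change is
# local, depending only on presynaptic activity and the projected error.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 50, 2
W_in = rng.normal(0, 0.5, (n_hidden, n_in))    # learned encoder weights
W_out = np.zeros((n_out, n_hidden))            # learned readout weights
FB = rng.normal(0, 1.0, (n_hidden, n_out))     # fixed random feedback matrix

lr = 1e-3
for _ in range(20000):
    x = rng.normal(size=n_in)
    target = np.array([x[0] * x[1], np.tanh(x[2])])   # toy non-linear mapping
    h = np.tanh(W_in @ x)                             # presynaptic activity
    error = target - W_out @ h                        # output error
    local_error = FB @ error            # error projected onto hidden neurons
    W_out += lr * np.outer(error, h)    # local update: error x presyn. activity
    W_in += lr * np.outer(local_error, x)

print("squared error on last sample:", float(error @ error))
```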

  16. Learning in fully recurrent neural networks by approaching tangent planes to constraint surfaces.

    Science.gov (United States)

    May, P; Zhou, E; Lee, C W

    2012-10-01

    In this paper we present a new variant of the online real time recurrent learning algorithm proposed by Williams and Zipser (1989). Whilst the original algorithm utilises gradient information to guide the search towards the minimum training error, it is very slow in most applications and often gets stuck in local minima of the search space. It is also sensitive to the choice of learning rate and requires careful tuning. The new variant adjusts weights by moving to the tangent planes to constraint surfaces. It is simple to implement and requires no parameters to be set manually. Experimental results show that this new algorithm gives significantly faster convergence whilst avoiding problems like local minima. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Slowly evolving connectivity in recurrent neural networks: I. The extreme dilution regime

    International Nuclear Information System (INIS)

    Wemmenhove, B; Skantzos, N S; Coolen, A C C

    2004-01-01

    We study extremely diluted spin models of neural networks in which the connectivity evolves in time, although adiabatically slowly compared to the neurons, according to stochastic equations which on average aim to reduce frustration. The (fast) neurons and (slow) connectivity variables equilibrate separately, but at different temperatures. Our model is exactly solvable in equilibrium. We obtain phase diagrams upon making the condensed ansatz (i.e. recall of one pattern). These show that, as the connectivity temperature is lowered, the volume of the retrieval phase diverges and the fraction of mis-aligned spins is reduced. Still one always retains a region in the retrieval phase where recall states other than the one corresponding to the 'condensed' pattern are locally stable, so the associative memory character of our model is preserved

  18. Schema generation in recurrent neural nets for intercepting a moving target.

    Science.gov (United States)

    Fleischer, Andreas G

    2010-06-01

    The grasping of a moving object requires the development of a motor strategy to anticipate the trajectory of the target and to compute an optimal course of interception. During the performance of perception-action cycles, a preprogrammed prototypical movement trajectory, a motor schema, may highly reduce the control load. Subjects were asked to hit a target that was moving along a circular path by means of a cursor. Randomized initial target positions and velocities were detected in the periphery of the eyes, resulting in a saccade toward the target. Even when the target disappeared, the eyes followed the target's anticipated course. The Gestalt of the trajectories was dependent on target velocity. The prediction capability of the motor schema was investigated by varying the visibility range of cursor and target. Motor schemata were determined to be of limited precision, and therefore visual feedback was continuously required to intercept the moving target. To intercept a target, the motor schema caused the hand to aim ahead and to adapt to the target trajectory. The control of cursor velocity determined the point of interception. From a modeling point of view, a neural network was developed that allowed the implementation of a motor schema interacting with feedback control in an iterative manner. The neural net of the Wilson type consists of an excitation-diffusion layer allowing the generation of a moving bubble. This activation bubble runs down an eye-centered motor schema and causes a planar arm model to move toward the target. A bubble provides local integration and straightening of the trajectory during repetitive moves. The schema adapts to task demands by learning and serves as forward controller. On the basis of these model considerations the principal problem of embedding motor schemata in generalized control strategies is discussed.

  19. Classification of epileptic seizures using wavelet packet log energy and norm entropies with recurrent Elman neural network classifier.

    Science.gov (United States)

    Raghu, S; Sriraam, N; Kumar, G Pradeep

    2017-02-01

    Electroencephalogram, shortly termed EEG, is considered the fundamental segment for the assessment of the neural activities in the brain. In the cognitive neuroscience domain, EEG-based assessment is found to be superior due to its non-invasive ability to detect deep brain structure while exhibiting superior spatial resolution. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the required clinical diagnostic information for the neurologist. This proposed study makes use of wavelet packet based log and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions, normal, pre-ictal and epileptic EEG recordings, were considered for the proposed study. An adaptive Wiener filter was initially applied to remove the power line noise of 50 Hz from the raw EEG recordings. Raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. Then a wavelet packet transform using the Haar wavelet with a five-level decomposition was introduced, and two entropies, log and norm, were estimated and applied to the REN classifier to perform binary classification. The non-linear Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. It was found from the simulation results that the wavelet packet log entropy with the REN classifier yielded a classification accuracy of 99.70% for normal-pre-ictal, 99.70% for normal-epileptic and 99.85% for pre-ictal-epileptic.
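
    A hedged sketch of the entropy feature extraction with PyWavelets is shown below for a single synthetic 1 s segment; the sampling rate, the norm-entropy exponent and the random signal are placeholders, and the Wiener filtering, Wilcoxon test and REN classification stages are omitted.

```python
# Sketch of wavelet-packet feature extraction mirroring the described pipeline
# (Haar wavelet, 5-level decomposition, log-energy and norm entropies per
# 1 s segment). The EEG segment here is synthetic noise.
import numpy as np
import pywt

fs = 256
segment = np.random.randn(fs)            # one 1 s EEG segment (placeholder)

wp = pywt.WaveletPacket(data=segment, wavelet="haar", maxlevel=5)
features = []
for node in wp.get_level(5, order="natural"):
    c = node.data
    log_energy = np.sum(np.log(c ** 2 + 1e-12))      # log-energy entropy
    norm_entropy = np.sum(np.abs(c) ** 1.1)          # norm entropy, p = 1.1
    features.extend([log_energy, norm_entropy])

print(len(features), "wavelet-packet entropy features")
```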

  20. Faulty node detection in wireless sensor networks using a recurrent neural network

    Science.gov (United States)

    Atiga, Jamila; Mbarki, Nour Elhouda; Ejbali, Ridha; Zaied, Mourad

    2018-04-01

    Wireless sensor networks (WSN) consist of a set of sensors that are increasingly used in large-scale surveillance applications in different areas: military, environment, health, etc. Despite the miniaturization and the reduction of the manufacturing costs of the sensors, they may have to operate in places that are difficult to access without any possibility of recharging the battery, and they generally have limited resources in terms of transmission power, processing capacity, data storage and energy. These sensors can be used in a hostile environment, such as, for example, on a battlefield, or in the presence of fires, floods or earthquakes. In these environments the sensors can fail, even in normal operation. It is therefore necessary to develop fault-tolerant algorithms and node-fault detection methods for wireless sensor networks, since sensor faults can reduce the quality of the surveillance if they are not detected. The values that are measured by the sensors are used to estimate the state of the monitored area. We used the Non-linear Auto-Regressive with eXogenous inputs (NARX) recurrent neural network architecture to predict the state of a sensor node from the previous values described by time-series functions. The experimental results verified that the state prediction is enhanced by our proposed model.
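
    A minimal, hedged NARX-style sketch follows: lagged values of a monitored reading (auto-regressive part) and of a neighbouring node (exogenous part) feed a small regressor whose one-step-ahead prediction error could be thresholded to flag faults. The synthetic signals, lag order and regressor are placeholders, not the authors' architecture or data.

```python
# Minimal NARX-style sketch: predict a sensor reading from its own lagged
# values and lagged readings of a neighbouring node; a large prediction error
# would then flag a potentially faulty node. Data below is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
neighbour = np.sin(0.02 * t) + 0.05 * rng.normal(size=t.size)   # exogenous
target = 0.8 * neighbour + 0.05 * rng.normal(size=t.size)       # monitored node

lags = 5
X = np.column_stack(
    [target[i:i - lags] for i in range(lags)] +
    [neighbour[i:i - lags] for i in range(lags)])
y = target[lags:]

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])
errors = np.abs(model.predict(X[1500:]) - y[1500:])
print("mean prediction error:", errors.mean())  # threshold this to flag faults
```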

  1. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  2. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    Science.gov (United States)

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  3. Recurrent-Neural-Network-Based Multivariable Adaptive Control for a Class of Nonlinear Dynamic Systems With Time-Varying Delay.

    Science.gov (United States)

    Hwang, Chih-Lyang; Jan, Chau

    2016-02-01

    At the beginning, an approximate nonlinear autoregressive moving average (NARMA) model is employed to represent a class of multivariable nonlinear dynamic systems with time-varying delay. It is known that the disadvantages of robust control for the NARMA model are as follows: 1) suitable control parameters for larger time delay are more sensitive to achieving desirable performance; 2) it only deals with bounded uncertainty; and 3) the nominal NARMA model must be learned in advance. Due to the dynamic feature of the NARMA model, a recurrent neural network (RNN) is applied online to learn it. However, the system performance deteriorates due to the poor learning of larger variations of the system vector functions. In this situation, a simple network is employed to compensate for the upper bound of the residue caused by the linear parameterization of the approximation error of the RNN. An e-modification learning law with a projection for the weight matrix is applied to guarantee its boundedness without persistent excitation. Under suitable conditions, semiglobally ultimately bounded tracking with boundedness of the estimated weight matrix is obtained by the proposed RNN-based multivariable adaptive control. Finally, simulations are presented to verify the effectiveness and robustness of the proposed control.

  4. An intelligent nuclear reactor core controller for load following operations, using recurrent neural networks and fuzzy systems

    International Nuclear Information System (INIS)

    Boroushaki, M.; Ghofrani, M.B.; Lucas, C.; Yazdanpanah, M.J.

    2003-01-01

    In the last decade, the intelligent control community has paid great attention to the topic of intelligent control systems for nuclear plants (core, steam generator...). Papers mostly used approximate and simple mathematical SISO (single-input-single-output) models of nuclear plants for testing and/or tuning of the control systems. They also tried to generalize these models to a real MIMO (multi-input-multi-output) plant, while nuclear plants are typically of complex nonlinear and multivariable nature with high interactions between their state variables; therefore, many of these proposed intelligent control systems are not appropriate for real cases. In this paper, we designed an on-line intelligent core controller for load following operations, based on a heuristic control algorithm, using a valid and updatable recurrent neural network (RNN). We have used an accurate 3-dimensional core calculation code to represent the real plant and to train the RNN. The results of simulation show that this intelligent controller can control the reactor core during load following operations, using an optimum control rod group manoeuvre and variable overlapping strategy. This methodology represents a simple and reliable procedure for controlling other complex nonlinear MIMO plants, and may improve the responses compared with other control systems.

  5. Long Short-Term Memory Projection Recurrent Neural Network Architectures for Piano’s Continuous Note Recognition

    Directory of Open Access Journals (Sweden)

    YuKang Jia

    2017-01-01

    Full Text Available Long Short-Term Memory (LSTM) is a kind of Recurrent Neural Network (RNN) relating to time series, which has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM to further optimize the speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper, we combine them with Connectionist Temporal Classification (CTC) to study piano’s continuous note recognition for robotics. Based on the Beijing Forestry University music library, we conduct experiments to show the recognition rates and numbers of iterations of LSTM with a single layer, LSTMP with a single layer, and Deep LSTM (DLSTM, LSTM with multiple layers). As a result, the single-layer LSTMP proves to perform much better than the single-layer LSTM in both time and recognition rate; that is, LSTMP has fewer parameters and therefore reduces the training time, and, moreover, benefiting from the projection layer, LSTMP has better performance, too. The best recognition rate of LSTMP is 99.8%. As for DLSTM, the recognition rate can reach 100% because of the effectiveness of the deep structure, but compared with the single-layer LSTMP, DLSTM needs more training time.
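
    For illustration, the hedged PyTorch snippet below builds an LSTM and an LSTMP layer (the latter via the proj_size argument of nn.LSTM) and compares output shapes and parameter counts on random frames; the input dimensions and sizes are placeholders, and the CTC training used in the paper is not reproduced.

```python
# Sketch of the LSTM vs. LSTMP comparison; nn.LSTM exposes the projection
# layer of the LSTMP variant through proj_size. Sizes are illustrative.
import torch
import torch.nn as nn

x = torch.randn(8, 100, 40)                  # (batch, time, feature) frames

lstm = nn.LSTM(input_size=40, hidden_size=512, batch_first=True)
lstmp = nn.LSTM(input_size=40, hidden_size=512, proj_size=128,
                batch_first=True)            # recurrent state projected to 128

out_lstm, _ = lstm(x)
out_lstmp, _ = lstmp(x)
n_params = lambda m: sum(p.numel() for p in m.parameters())
print(out_lstm.shape, n_params(lstm))        # (8, 100, 512)
print(out_lstmp.shape, n_params(lstmp))      # (8, 100, 128), fewer parameters
```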

  6. Word embeddings and recurrent neural networks based on Long-Short Term Memory nodes in supervised biomedical word sense disambiguation.

    Science.gov (United States)

    Jimeno Yepes, Antonio

    2017-09-01

    Word sense disambiguation helps identify the proper sense of ambiguous words in text. With large terminologies such as the UMLS Metathesaurus, ambiguities appear and highly effective disambiguation methods are required. Supervised learning methods are used as one of the approaches to perform disambiguation. Features extracted from the context of an ambiguous word are used to identify the proper sense of such a word. The types of features used have an impact on machine learning methods and thus affect disambiguation performance. In this work, we have evaluated several types of features derived from the context of the ambiguous word, and we have also explored more global features derived from MEDLINE using word embeddings. Results show that word embeddings improve the performance of more traditional features and also allow using recurrent neural network classifiers based on Long-Short Term Memory (LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets a new state-of-the-art performance with a macro accuracy of 95.97 on the MSH WSD data set. Copyright © 2017 Elsevier Inc. All rights reserved.
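
    The hedged scikit-learn sketch below shows the kind of feature combination described (unigram counts concatenated with averaged word embeddings, classified by a linear SVM) on a two-sentence toy example; the corpus, the random embedding table and the sense labels are placeholders, not the MSH WSD data or the paper's LSTM models.

```python
# Sketch of combining unigram context features with averaged word embeddings
# and feeding them to a linear SVM; all data here is a toy placeholder.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

contexts = ["cold symptoms and fever", "cold temperature outside today"]
senses = ["illness", "temperature"]        # sense labels for ambiguous "cold"

rng = np.random.default_rng(0)
dim = 50
vocab = {w for c in contexts for w in c.split()}
embeddings = {w: rng.normal(size=dim) for w in vocab}  # stand-in for word2vec

vec = CountVectorizer()
unigrams = vec.fit_transform(contexts).toarray()
avg_emb = np.array([np.mean([embeddings[w] for w in c.split()], axis=0)
                    for c in contexts])
X = np.hstack([unigrams, avg_emb])

clf = LinearSVC().fit(X, senses)
print(clf.predict(X))
```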

  7. Recurrent-neural-network-based identification of a cascade hydraulic actuator for closed-loop automotive power transmission control

    International Nuclear Information System (INIS)

    You, Seung Han; Hahn, Jin Oh

    2012-01-01

    By virtue of its ease of operation compared with its conventional manual counterpart, automatic transmissions are commonly used as the automotive power transmission control system in today's passenger cars. In accordance with this trend, research efforts on closed-loop automatic transmission control have been extensively carried out to improve ride quality and fuel economy. State-of-the-art power transmission control algorithms may have limitations in performance because they rely on the steady-state characteristics of the hydraulic actuator rather than fully exploiting its dynamic characteristics. Since the ultimate viability of closed-loop power transmission control is dominated by precise pressure control at the level of the hydraulic actuator, closed-loop control can potentially attain superior efficacy in case the hydraulic actuator can be easily incorporated into model-based observer/controller design. In this paper, we propose to use a recurrent neural network (RNN) to establish a nonlinear empirical model of a cascade hydraulic actuator in a passenger car automatic transmission, which has the potential to be easily incorporated in designing observers and controllers. Experimental analysis is performed to grasp key system characteristics, based on which a nonlinear system identification procedure is carried out. Extensive experimental validation of the established model suggests that it has superb one-step-ahead prediction capability over an appropriate frequency range, making it an attractive approach for model-based observer/controller design applications in automotive systems.
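
    A hedged sketch of one-step-ahead RNN identification is given below: a small Keras SimpleRNN maps a window of past commands to the next output of a toy first-order plant. The signal names, plant model and hyperparameters are placeholders and bear no relation to the paper's experimental hydraulic rig.

```python
# Sketch of RNN-based one-step-ahead system identification on a synthetic
# first-order plant; "valve command" and "pressure" are placeholder names.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n, window = 5000, 20
u = rng.uniform(-1, 1, n)                       # valve command (input)
p = np.zeros(n)                                 # actuator pressure (output)
for k in range(1, n):                           # toy first-order lag dynamics
    p[k] = 0.95 * p[k - 1] + 0.05 * np.tanh(2 * u[k - 1])

X = np.array([u[k - window:k] for k in range(window, n)])[..., None]
y = p[window:n]

inputs = tf.keras.Input(shape=(window, 1))
x = tf.keras.layers.SimpleRNN(16)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print("one-step-ahead MSE:", model.evaluate(X, y, verbose=0))
```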

  8. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.

  9. Adolescent girls' neural response to reward mediates the relation between childhood financial disadvantage and depression.

    Science.gov (United States)

    Romens, Sarah E; Casement, Melynda D; McAloon, Rose; Keenan, Kate; Hipwell, Alison E; Guyer, Amanda E; Forbes, Erika E

    2015-11-01

    Children who experience socioeconomic disadvantage are at heightened risk for developing depression; however, little is known about neurobiological mechanisms underlying this association. Low socioeconomic status (SES) during childhood may confer risk for depression through its stress-related effects on the neural circuitry associated with processing monetary rewards. In a prospective study, we examined the relationships among the number of years of household receipt of public assistance from age 5-16 years, neural activation during monetary reward anticipation and receipt at age 16, and depression symptoms at age 16 in 123 girls. Number of years of household receipt of public assistance was positively associated with heightened response in the medial prefrontal cortex during reward anticipation, and this heightened neural response mediated the relationship between socioeconomic disadvantage and current depression symptoms, controlling for past depression. Chronic exposure to socioeconomic disadvantage in childhood may alter neural circuitry involved in reward anticipation in adolescence, which in turn may confer risk for depression. © 2015 Association for Child and Adolescent Mental Health.

  10. GH mediates exercise-dependent activation of SVZ neural precursor cells in aged mice.

    Directory of Open Access Journals (Sweden)

    Daniel G Blackmore

    Full Text Available Here we demonstrate, both in vivo and in vitro, that growth hormone (GH) mediates precursor cell activation in the subventricular zone (SVZ) of the aged (12-month-old) brain following exercise, and that GH signaling stimulates precursor activation to a similar extent to exercise. Our results reveal that both addition of GH in culture and direct intracerebroventricular infusion of GH stimulate neural precursor cells in the aged brain. In contrast, no increase in neurosphere numbers was observed in GH receptor null animals following exercise. Continuous infusion of a GH antagonist into the lateral ventricle of wild-type animals completely abolished the exercise-induced increase in neural precursor cell number. Given that the aged brain does not recover well after injury, we investigated the direct effect of exercise and GH on neural precursor cell activation following irradiation. This revealed that physical exercise as well as infusion of GH promoted repopulation of neural precursor cells in irradiated aged animals. Conversely, infusion of a GH antagonist during exercise prevented recovery of precursor cells in the SVZ following irradiation.

  11. GH Mediates Exercise-Dependent Activation of SVZ Neural Precursor Cells in Aged Mice

    Science.gov (United States)

    Blackmore, Daniel G.; Vukovic, Jana; Waters, Michael J.; Bartlett, Perry F.

    2012-01-01

    Here we demonstrate, both in vivo and in vitro, that growth hormone (GH) mediates precursor cell activation in the subventricular zone (SVZ) of the aged (12-month-old) brain following exercise, and that GH signaling stimulates precursor activation to a similar extent to exercise. Our results reveal that both addition of GH in culture and direct intracerebroventricular infusion of GH stimulate neural precursor cells in the aged brain. In contrast, no increase in neurosphere numbers was observed in GH receptor null animals following exercise. Continuous infusion of a GH antagonist into the lateral ventricle of wild-type animals completely abolished the exercise-induced increase in neural precursor cell number. Given that the aged brain does not recover well after injury, we investigated the direct effect of exercise and GH on neural precursor cell activation following irradiation. This revealed that physical exercise as well as infusion of GH promoted repopulation of neural precursor cells in irradiated aged animals. Conversely, infusion of a GH antagonist during exercise prevented recovery of precursor cells in the SVZ following irradiation. PMID:23209615

  12. Neural Reactivity to Emotional Faces Mediates the Relationship Between Childhood Empathy and Adolescent Prosocial Behavior

    Science.gov (United States)

    Flournoy, John C.; Pfeifer, Jennifer H.; Moore, William E.; Tackman, Allison; Masten, Carrie L.; Mazziotta, John C.; Iacoboni, Marco; Dapretto, Mirella

    2017-01-01

    Reactivity to others' emotions can result in empathic concern (EC), an important motivator of prosocial behavior, but can also result in personal distress (PD), which may hinder prosocial behavior. Examining neural substrates of emotional reactivity may elucidate how EC and PD differentially influence prosocial behavior. Participants (N=57) provided measures of EC, PD, prosocial behavior, and neural responses to emotional expressions at age 10 and 13. Initial EC predicted subsequent prosocial behavior. Initial EC and PD predicted subsequent reactivity to emotions in the inferior frontal gyrus (IFG) and inferior parietal lobule, respectively. Activity in the IFG, a region linked to mirror neuron processes, as well as cognitive control and language, mediated the relation between initial EC and subsequent prosocial behavior. PMID:28262939

  13. Context-dependent memory following recurrent hypoglycaemia in non-diabetic rats is mediated via glucocorticoid signalling in the dorsal hippocampus.

    Science.gov (United States)

    Osborne, Danielle M; O'Leary, Kelsey E; Fitzgerald, Dennis P; George, Alvin J; Vidal, Michael M; Anderson, Brian M; McNay, Ewan C

    2017-01-01

    Recurrent hypoglycaemia is primarily caused by repeated over-administration of insulin to patients with diabetes. Although cognition is impaired during hypoglycaemia, restoration of euglycaemia after recurrent hypoglycaemia is associated with improved hippocampally mediated memory. Recurrent hypoglycaemia alters glucocorticoid secretion in response to hypoglycaemia; glucocorticoids are well established to regulate hippocampal processes, suggesting a possible mechanism for recurrent hypoglycaemia modulation of subsequent cognition. We tested the hypothesis that glucocorticoids within the dorsal hippocampus might mediate the impact of recurrent hypoglycaemia on hippocampal cognitive processes. We characterised changes in the dorsal hippocampus at several time points to identify specific mechanisms affected by recurrent hypoglycaemia, using a well-validated 3 day model of recurrent hypoglycaemia either alone or with intrahippocampal delivery of glucocorticoid (mifepristone) and mineralocorticoid (spironolactone) receptor antagonists prior to each hypoglycaemic episode. Recurrent hypoglycaemia enhanced learning and also increased hippocampal expression of glucocorticoid receptors, serum/glucocorticoid-regulated kinase 1, cyclic AMP response element binding (CREB) phosphorylation, and plasma membrane levels of α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and N-methyl-D-aspartic acid (NMDA) receptors. Both hippocampus-dependent memory enhancement and the molecular changes were reversed by glucocorticoid receptor antagonist treatment. These results indicate that increased glucocorticoid signalling during recurrent hypoglycaemia produces several changes in the dorsal hippocampus that are conducive to enhanced hippocampus-dependent contextual learning. These changes appear to be adaptive, and in addition to supporting cognition may reduce damage otherwise caused by repeated exposure to severe hypoglycaemia.

  14. A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Song, Qiankun; Wang, Zidong

    2007-01-01

    In this Letter, the analysis problem for the existence and stability of periodic solutions is investigated for a class of general discrete-time recurrent neural networks with time-varying delays. For the neural networks under study, a generalized activation function is considered, and the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. By employing the latest free-weighting matrix method, an appropriate Lyapunov-Krasovskii functional is constructed and several sufficient conditions are established to ensure the existence, uniqueness, and globally exponential stability of the periodic solution for the addressed neural network. The conditions are dependent on both the lower bound and upper bound of the time-varying time delays. Furthermore, the conditions are expressed in terms of the linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Two simulation examples are given to show the effectiveness and less conservatism of the proposed criteria
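
    The last step the abstract describes, checking the derived conditions numerically as linear matrix inequalities, can be reproduced with any semidefinite-programming toolkit. A minimal sketch, assuming cvxpy in place of the MATLAB LMI toolbox and using a simple delay-free Lyapunov condition as a stand-in for the Letter's delay-dependent inequalities:

```python
# Feasibility check of a basic discrete-time Lyapunov LMI: find P > 0 with
# A' P A - P < 0 for the linear part of a recurrent network.  Illustrative
# only; the paper's delay-dependent LMIs involve additional blocks and
# free-weighting matrices.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 4
A = 0.3 * rng.standard_normal((n, n))      # stand-in connection matrix

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P @ A - P << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasible:", prob.status == cp.OPTIMAL)
```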

  15. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    Science.gov (United States)

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
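
    A rough numerical counterpart to this analysis is to iterate a random recurrent network under a Hebbian rule with passive forgetting and to estimate the largest Lyapunov exponent from the Jacobians along the trajectory. A minimal sketch; the network size, gain, learning rate and forgetting rate below are illustrative choices, and the sign of the estimated exponent depends on them:

```python
# Random recurrent network x_{t+1} = tanh(g * W x_t) with slow Hebbian learning
# and passive forgetting; the largest Lyapunov exponent is estimated by evolving
# a tangent vector with the Jacobian J_t = diag(1 - x_{t+1}^2) * g * W.
import numpy as np

rng = np.random.default_rng(0)
N, g, eta, lam = 100, 3.0, 1e-4, 1e-3     # size, gain, learning rate, forgetting
W = rng.standard_normal((N, N)) / np.sqrt(N)
x = rng.uniform(-1, 1, N)
v = rng.standard_normal(N)
v /= np.linalg.norm(v)

lyap, T = 0.0, 5000
for t in range(T):
    x_new = np.tanh(g * (W @ x))
    v = (1.0 - x_new**2) * (g * (W @ v))             # tangent dynamics
    norm = np.linalg.norm(v)
    lyap += np.log(norm)
    v /= norm
    W = (1.0 - lam) * W + eta * np.outer(x_new, x)   # Hebbian rule with forgetting
    x = x_new

print("largest Lyapunov exponent estimate:", lyap / T)
```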

  16. Lunatic fringe-mediated Notch signaling regulates adult hippocampal neural stem cell maintenance.

    Science.gov (United States)

    Semerci, Fatih; Choi, William Tin-Shing; Bajic, Aleksandar; Thakkar, Aarohi; Encinas, Juan Manuel; Depreux, Frederic; Segil, Neil; Groves, Andrew K; Maletic-Savatic, Mirjana

    2017-07-12

    Hippocampal neural stem cells (NSCs) integrate inputs from multiple sources to balance quiescence and activation. Notch signaling plays a key role during this process. Here, we report that Lunatic fringe (Lfng), a key modifier of the Notch receptor, is selectively expressed in NSCs. Further, Lfng in NSCs and Notch ligands Delta1 and Jagged1, expressed by their progeny, together influence NSC recruitment, cell cycle duration, and terminal fate. We propose a new model in which Lfng-mediated Notch signaling enables direct communication between a NSC and its descendants, so that progeny can send feedback signals to the 'mother' cell to modify its cell cycle status. Lfng-mediated Notch signaling appears to be a key factor governing NSC quiescence, division, and fate.

  17. Use of a Deep Recurrent Neural Network to Reduce Wind Noise: Effects on Judged Speech Intelligibility and Sound Quality

    Science.gov (United States)

    Keshavarzi, Mahmoud; Goehring, Tobias; Zakis, Justin; Turner, Richard E.; Moore, Brian C. J.

    2018-01-01

    Despite great advances in hearing-aid technology, users still experience problems with noise in windy environments. The potential benefits of using a deep recurrent neural network (RNN) for reducing wind noise were assessed. The RNN was trained using recordings of the output of the two microphones of a behind-the-ear hearing aid in response to male and female speech at various azimuths in the presence of noise produced by wind from various azimuths with a velocity of 3 m/s, using the “clean” speech as a reference. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective intelligibility and for sound quality or comfort. The conditions were unprocessed noisy speech, noisy speech processed using the RNN, and noisy speech that was high-pass filtered (which also reduced wind noise). Eighteen native English-speaking participants were tested, nine with normal hearing and nine with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. Processing using the RNN was significantly preferred over no processing by both subject groups for both subjective intelligibility and sound quality, although the magnitude of the preferences was small. High-pass filtering (HPF) was not significantly preferred over no processing. Although RNN was significantly preferred over HPF only for sound quality for the hearing-impaired participants, for the results as a whole, there was a preference for RNN over HPF. Overall, the results suggest that reduction of wind noise using an RNN is possible and might have beneficial effects when used in hearing aids. PMID:29708061

  18. Use of a Deep Recurrent Neural Network to Reduce Wind Noise: Effects on Judged Speech Intelligibility and Sound Quality.

    Science.gov (United States)

    Keshavarzi, Mahmoud; Goehring, Tobias; Zakis, Justin; Turner, Richard E; Moore, Brian C J

    2018-01-01

    Despite great advances in hearing-aid technology, users still experience problems with noise in windy environments. The potential benefits of using a deep recurrent neural network (RNN) for reducing wind noise were assessed. The RNN was trained using recordings of the output of the two microphones of a behind-the-ear hearing aid in response to male and female speech at various azimuths in the presence of noise produced by wind from various azimuths with a velocity of 3 m/s, using the "clean" speech as a reference. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective intelligibility and for sound quality or comfort. The conditions were unprocessed noisy speech, noisy speech processed using the RNN, and noisy speech that was high-pass filtered (which also reduced wind noise). Eighteen native English-speaking participants were tested, nine with normal hearing and nine with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. Processing using the RNN was significantly preferred over no processing by both subject groups for both subjective intelligibility and sound quality, although the magnitude of the preferences was small. High-pass filtering (HPF) was not significantly preferred over no processing. Although RNN was significantly preferred over HPF only for sound quality for the hearing-impaired participants, for the results as a whole, there was a preference for RNN over HPF. Overall, the results suggest that reduction of wind noise using an RNN is possible and might have beneficial effects when used in hearing aids.
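
    The processing stage evaluated here, an RNN trained on noisy hearing-aid microphone signals with the clean speech as the reference, can be prototyped as a recurrent gain (mask) estimator. A minimal PyTorch sketch; the GRU architecture, spectral feature sizes and random stand-in data are assumptions for illustration, not the configuration used in the study:

```python
# Recurrent enhancement sketch: a GRU maps noisy magnitude spectra to a
# per-bin gain in [0, 1] that is applied to the noisy input.  Sizes and the
# random training data are stand-ins only.
import torch
import torch.nn as nn

class RNNDenoiser(nn.Module):
    def __init__(self, n_bins=257, hidden=128):
        super().__init__()
        self.gru = nn.GRU(n_bins, hidden, num_layers=2, batch_first=True)
        self.out = nn.Sequential(nn.Linear(hidden, n_bins), nn.Sigmoid())

    def forward(self, noisy_mag):            # (batch, frames, n_bins)
        h, _ = self.gru(noisy_mag)
        gain = self.out(h)                   # time-frequency gain
        return gain * noisy_mag

model = RNNDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# one illustrative training step on random stand-in spectra
noisy = torch.rand(8, 100, 257)
clean = torch.rand(8, 100, 257)
opt.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
opt.step()
```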

  19. Gene expression profiling identifies mechanisms of protection to recurrent trinitrobenzene sulfonic acid colitis mediated by probiotics

    NARCIS (Netherlands)

    Mariman, R.; Kremer, S.H.A.; Erk, M. van; Lagerweij, T.; Koning, F.; Nagelkerken, L.

    2012-01-01

    Background: Host-microbiota interactions in the intestinal mucosa play a major role in intestinal immune homeostasis and control the threshold of local inflammation. The aim of this study was to evaluate the efficacy of probiotics in the recurrent trinitrobenzene sulfonic acid (TNBS)-induced colitis

  20. Lower prevalence of carotid plaque hemorrhage in women, and its mediator effect on sex differences in recurrent cerebrovascular events.

    Directory of Open Access Journals (Sweden)

    Neghal Kandiyil

    Full Text Available Women are at lower risk of stroke, and appear to benefit less from carotid endarterectomy (CEA) than men. We hypothesised that this is due to more benign carotid disease in women mediating a lower risk of recurrent cerebrovascular events. To test this, we investigated sex differences in the prevalence of MRI detectable plaque hemorrhage (MRI PH) as an index of plaque instability, and secondly whether MRI PH mediates sex differences in the rate of cerebrovascular recurrence. Prevalence of PH between sexes was analysed in a single centre pooled cohort of 176 patients with recently symptomatic, significant carotid stenosis (106 severe [≥70%], 70 moderate [50-69%]) who underwent prospective carotid MRI scanning for identification of MRI PH. Further, a meta-analysis of published evidence was undertaken. Recurrent events were noted during clinical follow up for survival analysis. Women with symptomatic carotid stenosis (≥50%) were less likely to have plaque hemorrhage (PH) than men (46% vs. 70%) with an adjusted OR of 0.23 [95% CI 0.10-0.50, P<0.0001] controlling for other known vascular risk factors. This negative association was only significant for the severe stenosis subgroup (adjusted OR 0.18, 95% CI 0.067-0.50), not the moderate degree stenosis. Female sex in this subgroup also predicted a longer time to recurrent cerebral ischemic events (HR 0.38, 95% CI 0.15-0.98, P = 0.045). Further addition of MRI PH or smoking abolished the sex effects with only MRI PH exerting a direct effect. Meta-analysis confirmed a protective effect of female sex on development of PH: unadjusted OR for presence of PH = 0.54 (95% CI 0.45-0.67, p<0.00001). MRI PH is significantly less prevalent in women. Women with MRI PH and severe stenosis have a similar risk as men for recurrent cerebrovascular events. MRI PH thus allows overcoming the sex bias in selection for CEA.

  1. The experimental study of genetic engineering human neural stem cells mediated by lentivirus to express multigene.

    Science.gov (United States)

    Cai, Pei-qiang; Tang, Xun; Lin, Yue-qiu; Martin, Oudega; Sun, Guang-yun; Xu, Lin; Yang, Yun-kang; Zhou, Tian-hua

    2006-02-01

    To explore the feasibility of constructing genetically engineered human neural stem cells (hNSCs) that express multiple genes via lentiviral transduction, in order to provide a graft source for further studies of spinal cord injury (SCI). Human neural stem cells from the cerebral cortex of aborted human fetuses were isolated and cultured, then genetically modified by lentivirus to express both green fluorescent protein (GFP) and rat neurotrophin-3 (NT-3); transgene expression was assessed by fluorescence microscopy, a fetal rat dorsal root ganglion (DRG) assay and slot blot. Genetically engineered hNSCs were successfully constructed: all of them showed bright green fluorescence under the fluorescence microscope, their conditioned medium induced flourishing neurite outgrowth from DRG explants, and slot blot detected high-level NT-3 expression. Thus, lentivirus-mediated genetically engineered hNSCs expressing multiple genes can be constructed successfully.

  2. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with a conjugate gradient algorithm are used to determine the depth of anesthesia during the maintenance (continuation) stage and to estimate the amount of anesthetic medicine to be applied at that moment. Feed-forward neural networks are also used for comparison. The conjugate gradient algorithm is compared with back propagation (BP) for training of the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data were recorded with a Nihon Kohden 9200 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) was used in placing the recording electrodes, and the EEG was sampled once every 2 milliseconds. The artificial neural network has 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The network inputs are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the current EEG segment in that range to the total PSD power of an EEG segment recorded prior to anesthesia.
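
    The architecture itself (60 input neurons, 30 hidden neurons, 1 output neuron, sigmoid units, Elman-style context feedback) is straightforward to reproduce; only the conjugate-gradient training and the EEG-derived PSD features are specific to the study. A minimal forward-pass sketch with random stand-in weights and features:

```python
# Elman recurrent network, 60-30-1 with sigmoid units, as described in the
# abstract; weights and input features are random stand-ins (the study trains
# with conjugate gradient on PSD features of 10-second EEG segments).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 60, 30, 1
W_in = 0.1 * rng.standard_normal((n_hid, n_in))
W_ctx = 0.1 * rng.standard_normal((n_hid, n_hid))   # context (previous hidden state)
W_out = 0.1 * rng.standard_normal((n_out, n_hid))

def elman_forward(inputs):
    """inputs: (T, 60) sequence of feature vectors; returns (T,) outputs."""
    context = np.zeros(n_hid)
    outputs = []
    for x in inputs:
        hidden = sigmoid(W_in @ x + W_ctx @ context)
        outputs.append(sigmoid(W_out @ hidden)[0])
        context = hidden
    return np.array(outputs)

features = rng.standard_normal((20, n_in))   # stand-in for the PSD-based inputs
print(elman_forward(features))
```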

  3. Recurrent Syncope due to Esophageal Squamous Cell Carcinoma

    OpenAIRE

    Casini, Alessandro; Tschanz, Elisabeth; Dietrich, Pierre-Yves; Nendaz, Mathieu

    2011-01-01

    Syncope is caused by a wide variety of disorders. Recurrent syncope as a complication of malignancy is uncommon and may be difficult to diagnose and to treat. Primary neck carcinoma or metastases spreading in parapharyngeal and carotid spaces can involve the internal carotid artery and cause neurally mediated syncope with a clinical presentation like carotid sinus syndrome. We report the case of a 76-year-old man who suffered from recurrent syncope due to invasion of the right carotid sinus b...

  4. An Improved Recurrent Neural Network for Complex-Valued Systems of Linear Equation and Its Application to Robotic Motion Tracking.

    Science.gov (United States)

    Ding, Lei; Xiao, Lin; Liao, Bolin; Lu, Rongbo; Peng, Hua

    2017-01-01

    To obtain the online solution of complex-valued systems of linear equations with higher precision and a higher convergence rate, a new neural network based on the Zhang neural network (ZNN) is investigated in this paper. First, this new neural network for complex-valued systems of linear equations is proposed and theoretically proved to be convergent within finite time. Then, illustrative results show that the new neural network model has higher precision and a higher convergence rate than the gradient neural network (GNN) model and the ZNN model. Finally, the proposed method is applied to controlling a robot via the complex-valued system of linear equations, and the simulation results verify the effectiveness and superiority of the new neural network for complex-valued systems of linear equations.
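
    For a constant complex-valued system Ax = b, the recurrent dynamics this family of networks builds on can be written as an ordinary differential equation that drives the residual to zero. A minimal sketch with a linear activation and Euler integration; the finite-time model of the paper uses a specially designed activation function and is not reproduced here:

```python
# Zhang-type recurrent dynamics for a constant complex system A x = b:
# define the error E = A x - b and impose dE/dt = -gamma * E, which gives
# dx/dt = -gamma * solve(A, A x - b).  Linear activation, Euler integration.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

gamma, dt, steps = 10.0, 1e-3, 5000
x = np.zeros(n, dtype=complex)
for _ in range(steps):
    x = x + dt * (-gamma) * np.linalg.solve(A, A @ x - b)

print("residual norm:", np.linalg.norm(A @ x - b))
```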

  5. Neuroautonomic evaluation of patients with unexplained syncope: incidence of complex neurally mediated diagnoses in the elderly

    Directory of Open Access Journals (Sweden)

    Rafanelli M

    2014-02-01

    Full Text Available Martina Rafanelli, Alessandro Morrione, Annalisa Landi, Emilia Ruffolo, Valentina M Chisciotti, Maria A Brunetti, Niccolò Marchionni, Andrea Ungar Syncope Unit, Cardiology and Geriatric Medicine, University of Florence and Azienda Ospedaliero-Universitaria Careggi, Florence, Italy Background: The incidence of syncope increases in individuals over the age of 70 years, but data about this condition in the elderly are limited. Little is known about tilt testing (TT), carotid sinus massage (CSM), or supine and upright blood pressure measurement related to age or about patients with complex diagnoses, for example, those with a double diagnosis, ie, positivity in two of these three tests. Methods: A total of 873 consecutive patients of mean age 66.5±18 years underwent TT, CSM, and blood pressure measurement in the supine and upright positions according to the European Society of Cardiology guidelines on syncope.1 Neuroautonomic evaluation was performed if the first-line evaluation (clinical history, physical examination, electrocardiogram) was suggestive of neurally mediated syncope, or if the first-line evaluation was suggestive of cardiac syncope but this diagnosis was excluded after specific diagnostic tests according to European Society of Cardiology guidelines on syncope, or if certain or suspected diagnostic criteria were not present after the first-line evaluation. Results: A diagnosis was reached in 64.3% of cases. TT was diagnostic in 50.4% of cases, CSM was diagnostic in 11.8% of cases, and orthostatic hypotension was present in 19.9% of cases. Predictors of a positive tilt test were prodromal symptoms and typical situational syncope. Increased age and a pathologic electrocardiogram were predictors of carotid sinus syndrome. Varicose veins and alpha-receptor blockers, nitrates, and benzodiazepines were associated with orthostatic hypotension. Twenty-three percent of the patients had a complex diagnosis. The most frequent association was

  6. A Discrete-Time Recurrent Neural Network for Solving Rank-Deficient Matrix Equations With an Application to Output Regulation of Linear Systems.

    Science.gov (United States)

    Liu, Tao; Huang, Jie

    2017-04-17

    This paper presents a discrete-time recurrent neural network approach to solving systems of linear equations with two features. First, the system of linear equations may not have a unique solution. Second, the system matrix is not known precisely, but a sequence of matrices that converges to the unknown system matrix exponentially is known. The problem is motivated from solving the output regulation problem for linear systems. Thus, an application of our main result leads to an online solution to the output regulation problem for linear systems.
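
    The core idea, a discrete-time recursion that approaches a least-squares-consistent solution while the estimate of the system matrix converges to the true (possibly rank-deficient) matrix, can be sketched with a simple gradient iteration. The matrices, noise decay rate and step size below are illustrative stand-ins, not the paper's network or its convergence conditions:

```python
# Discrete-time gradient recursion x_{k+1} = x_k - alpha * A_k^T (A_k x_k - b),
# where A_k converges exponentially to an unknown, possibly rank-deficient A.
# The residual of A x = b approaches the least-squares optimum.
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 6, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank-deficient
A /= np.linalg.norm(A, 2)                                       # normalise scale
b = rng.standard_normal(m)

alpha, x = 0.1, np.zeros(n)
for k in range(10000):
    A_k = A + 0.3 * np.exp(-0.005 * k) * rng.standard_normal((m, n))  # converging estimate
    x = x - alpha * A_k.T @ (A_k @ x - b)

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print("residual:", np.linalg.norm(A @ x - b),
      "optimal LS residual:", np.linalg.norm(A @ x_ls - b))
```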

  7. Neural Reward Processing Mediates the Relationship between Insomnia Symptoms and Depression in Adolescence.

    Science.gov (United States)

    Casement, Melynda D; Keenan, Kate E; Hipwell, Alison E; Guyer, Amanda E; Forbes, Erika E

    2016-02-01

    Emerging evidence suggests that insomnia may disrupt reward-related brain function-a potentially important factor in the development of depressive disorder. Adolescence may be a period during which such disruption is especially problematic given the rise in the incidence of insomnia and ongoing development of neural systems that support reward processing. The present study uses longitudinal data to test the hypothesis that disruption of neural reward processing is a mechanism by which insomnia symptoms-including nocturnal insomnia symptoms (NIS) and nonrestorative sleep (NRS)-contribute to depressive symptoms in adolescent girls. Participants were 123 adolescent girls and their caregivers from an ongoing longitudinal study of precursors to depression across adolescent development. NIS and NRS were assessed annually from ages 9 to 13 years. Girls completed a monetary reward task during a functional MRI scan at age 16 years. Depressive symptoms were assessed at ages 16 and 17 years. Multivariable regression tested the prospective associations between NIS and NRS, neural response during reward anticipation, and the mean number of depressive symptoms (omitting sleep problems). NRS, but not NIS, during early adolescence was positively associated with late adolescent dorsal medial prefrontal cortex (dmPFC) response to reward anticipation and depressive symptoms. DMPFC response mediated the relationship between early adolescent NRS and late adolescent depressive symptoms. These results suggest that NRS may contribute to depression by disrupting reward processing via altered activity in a region of prefrontal cortex involved in affective control. The results also support the mechanistic differentiation of NIS and NRS. © 2016 Associated Professional Sleep Societies, LLC.

  8. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Directory of Open Access Journals (Sweden)

    Yoonsik Shim

    2016-10-01

    Full Text Available We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  9. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Science.gov (United States)

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  10. The neural mechanisms of affect infusion in social economic decision-making: A mediating role of the anterior insula

    NARCIS (Netherlands)

    Harlé, K.M.; Chang, L.J.; Wout, M. van 't; Sanfey, A.G.

    2012-01-01

    Though emotions have been shown to have sometimes dramatic effects on decision-making, the neural mechanisms mediating these biases are relatively unexplored. Here, we investigated how incidental affect (i.e. emotional states unrelated to the decision at hand) may influence decisions, and how these

  11. XenoSite: accurately predicting CYP-mediated sites of metabolism with neural networks.

    Science.gov (United States)

    Zaretzki, Jed; Matlock, Matthew; Swamidass, S Joshua

    2013-12-23

    Understanding how xenobiotic molecules are metabolized is important because it influences the safety, efficacy, and dose of medicines and how they can be modified to improve these properties. The cytochrome P450s (CYPs) are proteins responsible for metabolizing 90% of drugs on the market, and many computational methods can predict which atomic sites of a molecule, known as sites of metabolism (SOMs), are modified during CYP-mediated metabolism. This study improves on prior methods of predicting CYP-mediated SOMs by using new descriptors and machine learning based on neural networks. The new method, XenoSite, is faster to train and more accurate by as much as 4% or 5% for some isozymes. Furthermore, some "incorrect" predictions made by XenoSite were subsequently validated as correct predictions by re-evaluation of the source literature. Moreover, XenoSite output is interpretable as a probability, which reflects both the confidence of the model that a particular atom is metabolized and the statistical likelihood that its prediction for that atom is correct.

  12. Recurrent networks for wave forecasting

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper presents an application of the Artificial Neural Network, namely Backpropagation Recurrent Neural Network (BRNN) with rprop update algorithm for wave forecasting...
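
    Rprop, the update algorithm named here, adapts a separate step size for each weight from the sign of successive gradients rather than from their magnitude. A minimal sketch of a common variant (iRprop-) applied to a toy objective; the exact variant and network used in the paper may differ:

```python
# iRprop- update rule: per-weight step sizes grow while the gradient keeps its
# sign and shrink when it flips; after a sign flip the gradient is ignored for
# one step.  Only gradient signs drive the weight updates.
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # skip the update after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

# usage: minimise a toy quadratic standing in for the network's training loss
w = np.array([3.0, -2.0])
prev_grad = np.zeros_like(w)
step = np.full_like(w, 0.1)
for _ in range(100):
    grad = 2.0 * (w - np.array([1.0, 1.0]))   # gradient of ||w - [1, 1]||^2
    w, prev_grad, step = rprop_update(w, grad, prev_grad, step)
print(w)   # approaches [1, 1]
```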

  13. Modulation of Neurally Mediated Vasodepression and Bradycardia by Electroacupuncture through Opioids in Nucleus Tractus Solitarius.

    Science.gov (United States)

    Tjen-A-Looi, Stephanie C; Fu, Liang-Wu; Guo, Zhi-Ling; Longhurst, John C

    2018-01-30

    Stimulation of vagal afferent endings with intravenous phenylbiguanide (PBG) causes both bradycardia and vasodepression, simulating neurally mediated syncope. Activation of µ-opioid receptors in the nucleus tractus solitarius (NTS) increases blood pressure. Electroacupuncture (EA) stimulation of somatosensory nerves underneath acupoints P5-6, ST36-37, LI6-7 or G37-39 selectively but differentially modulates sympathoexcitatory responses. We therefore hypothesized that EA-stimulation at P5-6 or ST36-37, but not LI6-7 or G37-39 acupoints, inhibits the bradycardia and vasodepression through a µ-opioid receptor mechanism in the NTS. We observed that stimulation at acupoints P5-6 and ST36-37 overlying the deep somatosensory nerves and LI6-7 and G37-39 overlying cutaneous nerves differentially evoked NTS neural activity in anesthetized and ventilated animals. Thirty-min of EA-stimulation at P5-6 or ST36-37 reduced the depressor and bradycardia responses to PBG while EA at LI6-7 or G37-39 did not. Congruent with the hemodynamic responses, EA at P5-6 and ST36-37, but not at LI6-7 and G37-39, reduced vagally evoked activity of cardiovascular NTS cells. Finally, opioid receptor blockade in the NTS with naloxone or a specific μ-receptor antagonist reversed P5-6 EA-inhibition of the depressor, bradycardia and vagally evoked NTS activity. These data suggest that point specific EA stimulation inhibits PBG-induced vasodepression and bradycardia responses through a μ-opioid mechanism in the NTS.

  14. Recurrent Vulvovaginal Candidiasis: Could It Be Related to Cell-Mediated Immunity Defect in Response to Candida Antigen?

    Directory of Open Access Journals (Sweden)

    Zahra Talaei

    2017-09-01

    Full Text Available Background: Recurrent vulvovaginal candidiasis (RVVC) is a common cause of morbidity affecting millions of women worldwide. Patients with RVVC are thought to have an underlying immunologic defect. This study was established to evaluate cell-mediated immunity defects in response to Candida antigen in RVVC cases. Materials and Methods: Our cross-sectional study was performed in 3 groups: RVVC patients (cases), healthy individuals (control I) and known cases of chronic mucocutaneous candidiasis (CMC) (control II). Patients who met the inclusion criteria for RVVC were selected consecutively and allocated to the case group. Peripheral blood mononuclear cells were isolated and labeled with CFSE, and the proliferation rate on exposure to Candida antigen was measured by flow cytometry. Results: T lymphocyte proliferation in response to Candida was significantly lower in RVVC cases (n=24) and CMC patients (n=7) compared to healthy individuals (n=20; P<0.05). Family history of primary immunodeficiency diseases (PID) differed significantly among groups (P=0.01); RVVC patients had a family history of PID more often than control I (29.2 vs. 0%, P=0.008) but were not statistically different from CMC patients (29.2 vs. 42.9%, P>0.05). Prevalence of atopy was greater in RVVC cases compared to healthy individuals (41.3 vs. 15%, P=0.054). Lymphoproliferative activity and vaginal symptoms were significantly different between RVVC cases with and without allergy (P=0.01, P=0.02). Conclusion: Our findings revealed that T cells do not actively proliferate in response to Candida antigen in some RVVC cases, suggesting that patients with a cell-mediated immunity defect are more susceptible to recurrent fungal infections of the vulva and vagina. Nonetheless, other RVVC cases showed normal T cell function; further evaluation showed that these patients suffer from atopy. It is hypothesized that the higher frequency of VVC in patients with a history of atopy might be due to an allergic response

  15. A Pilot Feasibility Study of Oral 5-Fluorocytosine and Genetically-Modified Neural Stem Cells Expressing E.Coli Cytosine Deaminase for Treatment of Recurrent High Grade Gliomas

    Science.gov (United States)

    2017-11-07

    Adult Anaplastic Astrocytoma; Recurrent Grade III Glioma; Recurrent Grade IV Glioma; Adult Anaplastic Oligodendroglioma; Adult Brain Tumor; Adult Giant Cell Glioblastoma; Adult Glioblastoma; Adult Gliosarcoma; Adult Mixed Glioma; Recurrent Adult Brain Tumor; Adult Anaplastic Oligoastrocytoma; Recurrent High Grade Glioma

  16. Validation and genotyping of multiple human polymorphic inversions mediated by inverted repeats reveals a high degree of recurrence.

    Directory of Open Access Journals (Sweden)

    Cristina Aguado

    2014-03-01

    Full Text Available In recent years different types of structural variants (SVs) have been discovered in the human genome and their functional impact has become increasingly clear. Inversions, however, are poorly characterized and more difficult to study, especially those mediated by inverted repeats or segmental duplications. Here, we describe the results of a simple and fast inverse PCR (iPCR) protocol for high-throughput genotyping of a wide variety of inversions using a small amount of DNA. In particular, we analyzed 22 inversions predicted in humans ranging from 5.1 kb to 226 kb and mediated by inverted repeat sequences of 1.6-24 kb. First, we validated 17 of the 22 inversions in a panel of nine HapMap individuals from different populations, and we genotyped them in 68 additional individuals of European origin, with correct genetic transmission in ∼12 mother-father-child trios. Global inversion minor allele frequency varied between 1% and 49% and inversion genotypes were consistent with Hardy-Weinberg equilibrium. By analyzing the nucleotide variation and the haplotypes in these regions, we found that only four inversions have linked tag-SNPs and that in many cases there are multiple shared SNPs between standard and inverted chromosomes, suggesting an unexpectedly high degree of inversion recurrence during human evolution. iPCR was also used to check 16 of these inversions in four chimpanzees and two gorillas, and 10 showed both orientations either within or between species, providing additional support for their multiple origin. Finally, we have identified several inversions that include genes in the inverted or breakpoint regions, and at least one disrupts a potential coding gene. Thus, these results represent a significant advance in our understanding of inversion polymorphism in human populations and challenge the common view of a single origin of inversions, with important implications for inversion analysis in SNP-based studies.

  17. Lentiviral vector-mediated genetic modification of human neural progenitor cells for ex vivo gene therapy.

    Science.gov (United States)

    Capowski, Elizabeth E; Schneider, Bernard L; Ebert, Allison D; Seehus, Corey R; Szulc, Jolanta; Zufferey, Romain; Aebischer, Patrick; Svendsen, Clive N

    2007-07-30

    Human neural progenitor cells (hNPC) hold great potential as an ex vivo system for delivery of therapeutic proteins to the central nervous system. When cultured as aggregates, termed neurospheres, hNPC are capable of significant in vitro expansion. In the current study, we present a robust method for lentiviral vector-mediated gene delivery into hNPC that maintains the differentiation and proliferative properties of neurosphere cultures while minimizing the amount of viral vector used and controlling the number of insertion sites per population. This method results in long-term, stable expression even after differentiation of the hNPC to neurons and astrocytes and allows for generation of equivalent transgenic populations of hNPC. In addition, the in vitro analysis presented predicts the behavior of transgenic lines in vivo when transplanted into a rodent model of Parkinson's disease. The methods presented provide a powerful tool for assessing the impact of factors such as promoter systems or different transgenes on the therapeutic utility of these cells.

  18. Acoustic stimulation can induce a selective neural network response mediated by piezoelectric nanoparticles

    Science.gov (United States)

    Rojas, Camilo; Tedesco, Mariateresa; Massobrio, Paolo; Marino, Attilio; Ciofani, Gianni; Martinoia, Sergio; Raiteri, Roberto

    2018-06-01

    Objective. We aim to develop a novel non-invasive or minimally invasive method for neural stimulation to be applied in the study and treatment of brain (dys)functions and neurological disorders. Approach. We investigate the electrophysiological response of in vitro neuronal networks when subjected to low-intensity pulsed acoustic stimulation, mediated by piezoelectric nanoparticles adsorbed on the neuronal membrane. Main results. We show that the presence of piezoelectric barium titanate nanoparticles induces, in a reproducible way, an increase in network activity when excited by stationary ultrasound waves in the MHz regime. Such a response can be fully recovered when switching the ultrasound pulse off, depending on the generated pressure field amplitude, whilst it is insensitive to the duration of the ultrasound pulse in the range 0.5 s–1.5 s. We demonstrate that the presence of piezoelectric nanoparticles is necessary, and when applying the same acoustic stimulation to neuronal cultures without nanoparticles or with non-piezoelectric nanoparticles with the same size distribution, no network response is observed. Significance. We believe that our results open up an extremely interesting approach when coupled with suitable functionalization strategies of the nanoparticles in order to address specific neurons and/or brain areas and applied in vivo, thus enabling remote, non-invasive, and highly selective modulation of the activity of neuronal subpopulations of the central nervous system of mammals.

  19. Differentiating neural systems mediating the acquisition versus expression of goal-directed and habitual behavioral control

    Science.gov (United States)

    Liljeholm, Mimi; Dunne, Simon; O'Doherty, John P.

    2015-01-01

    Considerable behavioral data indicates that operant actions can become habitual, as evidenced by insensitivity to changes in the action-outcome contingency and in subjective outcome values. Notably, although several studies have investigated the neural substrates of habits, none has clearly differentiated the areas of the human brain that support habit formation from those that implement habitual control. We scanned participants with fMRI as they learned and performed an operant task in which the conditional structure of the environment encouraged either goal-directed encoding of the consequences of actions, or a habit-like mapping of actions to antecedent cues. Participants were also scanned during a subsequent assessment of insensitivity to outcome devaluation. We identified dissociable roles of the cerebellum and ventral striatum, across learning and test performance, in behavioral insensitivity to outcome devaluation. We also show that the inferior parietal lobule – an area previously implicated in several aspects of goal-directed action selection, including the attribution of intent and awareness of agency – predicts sensitivity to outcome devaluation. Finally, we reveal a potential functional homology between the human subgenual cortex and rodent infralimbic cortex in the implementation of habitual control. In summary, our findings suggest a broad systems division, at the cortical and subcortical levels, between brain areas mediating the encoding and expression of action-outcome and stimulus-response associations. PMID:25892332

  20. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    Science.gov (United States)

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at t to t + 1 by a simply defined motion function calculated from firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to the simple motion of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.
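
    The motion-function idea can be sketched with direction prototypes stored as attractors of a small binary recurrent network, the object's move at each step being decoded from the current firing pattern. In the sketch below, random state flips stand in for the chaotic dynamics the paper uses to visit intermediate states, and maze walls are omitted; every detail is illustrative rather than the paper's model:

```python
# Direction prototypes are stored Hebbian-style as attractors; the motion
# function moves the object toward the direction whose prototype best matches
# the current firing pattern.  Random flips emulate an exploratory regime in
# place of the paper's chaotic dynamics.
import numpy as np

rng = np.random.default_rng(0)
N = 64
dirs = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
protos = {d: rng.choice([-1, 1], N) for d in dirs}        # prototype firing patterns
W = sum(np.outer(p, p) for p in protos.values()) / N      # Hebbian storage
np.fill_diagonal(W, 0.0)

def step(x, flip_prob=0.0):
    x = np.sign(W @ x + 1e-12)                            # attractor dynamics
    flips = rng.random(N) < flip_prob                     # exploratory regime
    x[flips] *= -1
    return x

def motion(x):
    best = max(protos, key=lambda d: protos[d] @ x)       # closest prototype
    return np.array(dirs[best])

x = rng.choice([-1, 1], N).astype(float)
pos = np.array([5, 5])
for t in range(50):
    x = step(x, flip_prob=0.1 if t % 10 else 0.0)         # switch between regimes
    pos = pos + motion(x)                                 # move (walls omitted)
print(pos)
```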

  1. Single Layer Recurrent Neural Network for detection of swarm-like earthquakes in W-Bohemia/Vogtland - the method

    Czech Academy of Sciences Publication Activity Database

    Doubravová, Jana; Wiszniowski, J.; Horálek, Josef

    2016-01-01

    Roč. 93, August (2016), s. 138-149 ISSN 0098-3004 R&D Projects: GA ČR GAP210/12/2336; GA MŠk LM2010008 Institutional support: RVO:67985530 Keywords: event detection * artificial neural network * West Bohemia/Vogtland Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 2.533, year: 2016

  2. Integrated built-in-test false and missed alarms reduction based on forward infinite impulse response & recurrent finite impulse response dynamic neural networks

    Science.gov (United States)

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2017-11-01

    Built-in tests (BITs) are widely used in mechanical systems to perform state identification, but BIT false and missed alarms make it difficult for operators or other users to reach correct judgments. Artificial neural networks (ANNs), which offer properties such as self-organization and self-learning, have previously been used to identify false and missed alarms. However, these ANN models generally do not incorporate the temporal effect of the bottom-level threshold-comparison outputs, so historical temporal features are not fully considered. To improve this situation, this paper proposes a new integrated BIT design methodology that incorporates a novel type of dynamic neural network (DNN) model, termed the Forward IIR & Recurrent FIR DNN (FIRF-DNN); its component neurons, network structures, and input/output relationships are discussed. A condition-monitoring false and missed alarm reduction scheme based on the FIRF-DNN model is also illustrated, composed of three stages: model training, false and missed alarm detection, and false and missed alarm suppression. Finally, the proposed methodology is demonstrated in an application study and the experimental results are analyzed.
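
    The building block implied by the model's name, a neuron whose synapses are short FIR (tapped-delay-line) filters so that its activation depends on a window of past threshold-comparison outputs, can be sketched directly. The recurrent/IIR feedback path of the full FIRF-DNN is omitted and all sizes and signals below are illustrative:

```python
# A single layer of FIR-synapse neurons: each input reaches a neuron through a
# small tapped delay line, so the activation depends on the last K samples.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, K = 3, 4, 5                     # inputs, neurons, FIR taps
W = 0.1 * rng.standard_normal((n_out, n_in, K))

def fir_layer(x_seq):
    """x_seq: (T, n_in) input sequence -> (T, n_out) activations."""
    T = x_seq.shape[0]
    buf = np.zeros((n_in, K))                # delay line, most recent sample first
    out = np.zeros((T, n_out))
    for t in range(T):
        buf = np.roll(buf, 1, axis=1)
        buf[:, 0] = x_seq[t]
        out[t] = np.tanh(np.einsum("oik,ik->o", W, buf))
    return out

# stand-in for bottom-level BIT threshold-comparison outputs over time
signals = (rng.random((50, n_in)) > 0.8).astype(float)
print(fir_layer(signals).shape)
```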

  3. Dexamethasone-mediated inhibition of Glioblastoma neurosphere dispersal in an ex vivo organotypic neural assay

    Science.gov (United States)

    Meleis, Ahmed M.; Mahtabfar, Aria; Danish, Shabbar

    2017-01-01

    Glioblastoma is highly aggressive. Early dispersal of the primary tumor renders localized therapy ineffective. Recurrence always occurs and leads to patient death. Prior studies have shown that dispersal of Glioblastoma can be significantly reduced by Dexamethasone (Dex), a drug currently used to control brain tumor related edema. However, due to high doses and significant side effects, treatment is tapered and discontinued as soon as edema has resolved. Prior analyses of the dispersal inhibitory effects of Dex were performed on tissue culture plastic, or polystyrene filters seeded with normal human astrocytes, conditions which inherently differ from the parenchymal architecture of neuronal tissue. The aim of this study was to utilize an ex-vivo model to examine Dex-mediated inhibition of tumor cell migration from low-passage, human Glioblastoma neurospheres on multiple substrates including mouse retina, and slices of mouse, pig, and human brain. We also determined the lowest possible Dex dose that can inhibit dispersal. Analysis by Two-Factor ANOVA shows that for GBM-2 and GBM-3, Dex treatment significantly reduces dispersal on all tissue types. However, the magnitude of the effect appears to be tissue-type specific. Moreover, there does not appear to be a difference in Dex-mediated inhibition of dispersal between mouse retina, mouse brain and human brain. To estimate the lowest possible dose at which Dex can inhibit dispersal, LogEC50 values were compared by Extra Sum-of-Squares F-test. We show that it is possible to achieve 50% reduction in dispersal with Dex doses ranging from 3.8×10^-8 M to 8.0×10^-9 M for GBM-2, and 4.3×10^-8 M to 1.8×10^-9 M for GBM-3, on mouse retina and brain slices, respectively. These doses are 3-30-fold lower than those used to control edema. This study extends our previous in vitro data and identifies the mouse retina as a potential substrate for in vivo studies of GBM dispersal. PMID:29040322

  4. Dexamethasone-mediated inhibition of Glioblastoma neurosphere dispersal in an ex vivo organotypic neural assay.

    Directory of Open Access Journals (Sweden)

    Ahmed M Meleis

    Full Text Available Glioblastoma is highly aggressive. Early dispersal of the primary tumor renders localized therapy ineffective. Recurrence always occurs and leads to patient death. Prior studies have shown that dispersal of Glioblastoma can be significantly reduced by Dexamethasone (Dex), a drug currently used to control brain tumor related edema. However, due to high doses and significant side effects, treatment is tapered and discontinued as soon as edema has resolved. Prior analyses of the dispersal inhibitory effects of Dex were performed on tissue culture plastic, or polystyrene filters seeded with normal human astrocytes, conditions which inherently differ from the parenchymal architecture of neuronal tissue. The aim of this study was to utilize an ex-vivo model to examine Dex-mediated inhibition of tumor cell migration from low-passage, human Glioblastoma neurospheres on multiple substrates including mouse retina, and slices of mouse, pig, and human brain. We also determined the lowest possible Dex dose that can inhibit dispersal. Analysis by Two-Factor ANOVA shows that for GBM-2 and GBM-3, Dex treatment significantly reduces dispersal on all tissue types. However, the magnitude of the effect appears to be tissue-type specific. Moreover, there does not appear to be a difference in Dex-mediated inhibition of dispersal between mouse retina, mouse brain and human brain. To estimate the lowest possible dose at which Dex can inhibit dispersal, LogEC50 values were compared by Extra Sum-of-Squares F-test. We show that it is possible to achieve 50% reduction in dispersal with Dex doses ranging from 3.8×10^-8 M to 8.0×10^-9 M for GBM-2, and 4.3×10^-8 M to 1.8×10^-9 M for GBM-3, on mouse retina and brain slices, respectively. These doses are 3-30-fold lower than those used to control edema. This study extends our previous in vitro data and identifies the mouse retina as a potential substrate for in vivo studies of GBM dispersal.

  5. Are Improvements in Cognitive Content and Depressive Symptoms Correlates or Mediators during Acute-Phase Cognitive Therapy for Recurrent Major Depressive Disorder?

    Science.gov (United States)

    Vittengl, Jeffrey R; Clark, Lee Anna; Thase, Michael E; Jarrett, Robin B

    2014-01-09

    The cognitive model of depression posits that cognitive therapy's (CT) effect on depressive symptoms is mediated by changes in cognitive content (e.g., automatic negative thoughts, dysfunctional attitudes, failure attributions). We tested improvement and normalization of cognitive content among outpatients (N = 523) with recurrent major depressive disorder treated with acute-phase CT (Jarrett & Thase, 2010; Jarrett et al., 2013). We also tested whether improvement in cognitive content accounted for subsequent changes in depressive symptoms and vice versa. Five measures of content improved substantively from pre- to post-CT (median d = 0.96), and the proportions of patients scoring in "healthy" ranges increased (median 45% to 82%). Evidence for cognitive mediation of symptom reduction was limited (median r = .06), as was evidence for symptom mediation of cognitive content improvement (median r = .07). We discuss measurement and design issues relevant to detection of mediators and consider alternative theories of change.

  6. Lymphotropic Virions Affect Chemokine Receptor-Mediated Neural Signaling and Apoptosis: Implications for Human Immunodeficiency Virus Type 1-Associated Dementia

    Science.gov (United States)

    Zheng, Jialin; Ghorpade, Anuja; Niemann, Douglas; Cotter, Robin L.; Thylin, Michael R.; Epstein, Leon; Swartz, Jennifer M.; Shepard, Robin B.; Liu, Xiaojuan; Nukuna, Adeline; Gendelman, Howard E.

    1999-01-01

    Chemokine receptors pivotal for human immunodeficiency virus type 1 (HIV-1) infection in lymphocytes and macrophages (CCR3, CCR5, and CXCR4) are expressed on neural cells (microglia, astrocytes, and/or neurons). It is these cells which are damaged during progressive HIV-1 infection of the central nervous system. We theorize that viral coreceptors could effect neural cell damage during HIV-1-associated dementia (HAD) without simultaneously affecting viral replication. To these ends, we studied the ability of diverse viral strains to affect intracellular signaling and apoptosis of neurons, astrocytes, and monocyte-derived macrophages. Inhibition of cyclic AMP, activation of inositol 1,4,5-trisphosphate, and apoptosis were induced by diverse HIV-1 strains, principally in neurons. Virions from T-cell-tropic (T-tropic) strains (MN, IIIB, and Lai) produced the most significant alterations in signaling of neurons and astrocytes. The HIV-1 envelope glycoprotein, gp120, induced markedly less neural damage than purified virions. Macrophage-tropic (M-tropic) strains (ADA, JR-FL, Bal, MS-CSF, and DJV) produced the least neural damage, while 89.6, a dual-tropic HIV-1 strain, elicited intermediate neural cell damage. All T-tropic strain-mediated neuronal impairments were blocked by the CXCR4 antibody, 12G5. In contrast, the M-tropic strains were only partially blocked by 12G5. CXCR4-mediated neuronal apoptosis was confirmed in pure populations of rat cerebellar granule neurons and was blocked by HA1004, an inhibitor of calcium/calmodulin-dependent protein kinase II, protein kinase A, and protein kinase C. Taken together, these results suggest that progeny HIV-1 virions can influence neuronal signal transduction and apoptosis. This process occurs, in part, through CXCR4 and is independent of CD4 binding. T-tropic viruses that traffic in and out of the brain during progressive HIV-1 disease may play an important role in HAD neuropathogenesis. PMID:10482576

  7. Recurrent myocardial infarction: Mechanisms of free-floating adaptation and autonomic derangement in networked cardiac neural control

    Science.gov (United States)

    Ardell, Jeffrey L.; Shivkumar, Kalyanam; Armour, J. Andrew

    2017-01-01

    The cardiac nervous system continuously controls cardiac function whether or not pathology is present. While myocardial infarction typically has a major and catastrophic impact, population studies have shown that longer-term risk for recurrent myocardial infarction and the related potential for sudden cardiac death depends mainly upon standard atherosclerotic variables and autonomic nervous system maladaptations. Investigative neurocardiology has demonstrated that autonomic control of cardiac function includes local circuit neurons for networked control within the peripheral nervous system. The structural and adaptive characteristics of such networked interactions define the dynamics and a new normal for cardiac control that results in the aftermath of recurrent myocardial infarction and/or unstable angina that may or may not precipitate autonomic derangement. These features are explored here via a mathematical model of cardiac regulation. A main observation is that the control environment during pathology is an extrapolation to a setting outside prior experience. Although global bounds guarantee stability, the resulting closed-loop dynamics exhibited while the network adapts during pathology are aptly described as ‘free-floating’ in order to emphasize their dependence upon details of the network structure. The totality of the results provides mechanistic reasoning that validates the clinical practice of reducing sympathetic efferent neuronal tone while aggressively targeting autonomic derangement in the treatment of ischemic heart disease. PMID:28692680

  8. A visual sense of number emerges from the dynamics of a recurrent on-center off-surround neural network.

    Science.gov (United States)

    Sengupta, Rakesh; Surampudi, Bapi Raju; Melcher, David

    2014-09-25

    It has been proposed that the ability of humans to quickly perceive numerosity involves a visual sense of number. Different paradigms of enumeration and numerosity comparison have produced a gamut of behavioral and neuroimaging data, but there has been no unified conceptual framework that can explain results across the entire range of numerosity. The current work tries to address the ongoing debate concerning whether the same mechanism operates for enumeration of small and large numbers, through a computational approach. We describe the workings of a single-layered, fully connected network characterized by self-excitation and recurrent inhibition that operates at both subitizing and estimation ranges. We show that such a network can account for classic numerical cognition effects (the distance effect, Fechner's law, Weber fraction for numerosity comparison) through the network steady state activation response across different recurrent inhibition values. The model also accounts for fMRI data previously reported for different enumeration-related tasks, and it allows us to generate an estimate of the pattern of reaction times in enumeration tasks. Overall, these findings suggest that a single network architecture can account for both small and large number processing. Copyright © 2014. Published by Elsevier B.V.
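
    The network class described, a single fully connected layer with self-excitation and uniform recurrent (off-surround) inhibition, can be simulated directly; the steady-state activity then serves as the numerosity read-out. A minimal sketch with illustrative parameters, not the paper's fitted values:

```python
# Single-layer recurrent on-center off-surround network,
#   tau dx_i/dt = -x_i + f(alpha * x_i - beta * sum_{j != i} x_j + I_i),
# integrated with Euler steps.  The rectifying nonlinearity and all parameter
# values are illustrative; beta plays the role of the recurrent inhibition
# strength discussed in the abstract.
import numpy as np

def steady_state_activity(n_items, N=50, alpha=1.05, beta=0.2,
                          tau=1.0, dt=0.1, steps=500):
    f = lambda u: np.maximum(u, 0.0)        # rectification
    I = np.zeros(N)
    I[:n_items] = 1.0                       # one unit of input per presented item
    x = np.zeros(N)
    for _ in range(steps):
        inhibition = beta * (x.sum() - x)   # off-surround: all other units
        x = x + dt / tau * (-x + f(alpha * x - inhibition + I))
    return x

for n in (2, 4, 8, 16):
    act = steady_state_activity(n)
    print(n, "items -> active units:", int((act > 0.1).sum()),
          " total activity:", round(float(act.sum()), 2))
```

    With these parameters, per-unit activity falls as the number of presented items grows while the count of active units still tracks numerosity; in the paper it is this kind of steady-state response across inhibition values that reproduces the comparison effects.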

  9. Recurrent myocardial infarction: Mechanisms of free-floating adaptation and autonomic derangement in networked cardiac neural control.

    Science.gov (United States)

    Kember, Guy; Ardell, Jeffrey L; Shivkumar, Kalyanam; Armour, J Andrew

    2017-01-01

    The cardiac nervous system continuously controls cardiac function whether or not pathology is present. While myocardial infarction typically has a major and catastrophic impact, population studies have shown that longer-term risk for recurrent myocardial infarction and the related potential for sudden cardiac death depends mainly upon standard atherosclerotic variables and autonomic nervous system maladaptations. Investigative neurocardiology has demonstrated that autonomic control of cardiac function includes local circuit neurons for networked control within the peripheral nervous system. The structural and adaptive characteristics of such networked interactions define the dynamics and a new normal for cardiac control that results in the aftermath of recurrent myocardial infarction and/or unstable angina that may or may not precipitate autonomic derangement. These features are explored here via a mathematical model of cardiac regulation. A main observation is that the control environment during pathology is an extrapolation to a setting outside prior experience. Although global bounds guarantee stability, the resulting closed-loop dynamics exhibited while the network adapts during pathology are aptly described as 'free-floating' in order to emphasize their dependence upon details of the network structure. The totality of the results provides mechanistic reasoning that validates the clinical practice of reducing sympathetic efferent neuronal tone while aggressively targeting autonomic derangement in the treatment of ischemic heart disease.

  10. Recurrent myocardial infarction: Mechanisms of free-floating adaptation and autonomic derangement in networked cardiac neural control.

    Directory of Open Access Journals (Sweden)

    Guy Kember

    Full Text Available The cardiac nervous system continuously controls cardiac function whether or not pathology is present. While myocardial infarction typically has a major and catastrophic impact, population studies have shown that longer-term risk for recurrent myocardial infarction and the related potential for sudden cardiac death depend mainly upon standard atherosclerotic variables and autonomic nervous system maladaptations. Investigative neurocardiology has demonstrated that autonomic control of cardiac function includes local circuit neurons for networked control within the peripheral nervous system. The structural and adaptive characteristics of such networked interactions define the dynamics and a new normal for cardiac control that emerges in the aftermath of recurrent myocardial infarction and/or unstable angina, which may or may not precipitate autonomic derangement. These features are explored here via a mathematical model of cardiac regulation. A main observation is that the control environment during pathology is an extrapolation to a setting outside prior experience. Although global bounds guarantee stability, the resulting closed-loop dynamics exhibited while the network adapts during pathology are aptly described as 'free-floating' in order to emphasize their dependence upon details of the network structure. The totality of the results provides a mechanistic reasoning that validates the clinical practice of reducing sympathetic efferent neuronal tone while aggressively targeting autonomic derangement in the treatment of ischemic heart disease.

  11. The neural mechanisms of affect infusion in social economic decision-making: a mediating role of the anterior insula.

    Science.gov (United States)

    Harlé, Katia M; Chang, Luke J; van 't Wout, Mascha; Sanfey, Alan G

    2012-05-15

    Though emotions have been shown to have sometimes dramatic effects on decision-making, the neural mechanisms mediating these biases are relatively unexplored. Here, we investigated how incidental affect (i.e. emotional states unrelated to the decision at hand) may influence decisions, and how these biases are implemented in the brain. Nineteen adult participants made decisions which involved accepting or rejecting monetary offers from others in an Ultimatum Game while undergoing functional magnetic resonance imaging (fMRI). Prior to each set of decisions, participants watched a short video clip aimed at inducing either a sad or neutral emotional state. Results demonstrated that, as expected, sad participants rejected more unfair offers than those in the neutral condition. Neuroimaging analyses revealed that receiving unfair offers while in a sad mood elicited activity in brain areas related to aversive emotional states and somatosensory integration (anterior insula) and to cognitive conflict (anterior cingulate cortex). Sad participants also showed a diminished sensitivity in neural regions associated with reward processing (ventral striatum). Importantly, insular activation uniquely mediated the relationship between sadness and decision bias. This study is the first to reveal how subtle mood states can be integrated at the neural level to influence decision-making. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...
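
    A minimal sketch of the "glued" architecture, assuming PyTorch and illustrative layer sizes: a deterministic GRU layer drives a Gaussian latent state whose sample feeds the decoder. The structured variational inference network mentioned in the record is omitted, so this is only a generative-side sketch of the idea.

```python
import torch
import torch.nn as nn

class StochasticRNN(nn.Module):
    """Sketch of a sequential model with a deterministic GRU layer feeding a
    stochastic (Gaussian) latent layer, loosely in the spirit of Fraccaro et
    al. (2016). Layer sizes, the prior parameterization, and the output head
    are illustrative assumptions."""

    def __init__(self, x_dim, d_dim=32, z_dim=8):
        super().__init__()
        self.gru = nn.GRUCell(x_dim, d_dim)                # deterministic layer
        self.prior = nn.Linear(d_dim + z_dim, 2 * z_dim)   # mean and log-variance
        self.decoder = nn.Linear(d_dim + z_dim, x_dim)

    def forward(self, x):                                  # x: (T, B, x_dim)
        T, B, _ = x.shape
        d = x.new_zeros(B, self.gru.hidden_size)
        z = x.new_zeros(B, self.prior.out_features // 2)
        outputs = []
        for t in range(T):
            d = self.gru(x[t], d)                          # deterministic recursion
            mu, logvar = self.prior(torch.cat([d, z], -1)).chunk(2, -1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # sample z_t
            outputs.append(self.decoder(torch.cat([d, z], -1)))
        return torch.stack(outputs)
```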

  13. Self-awareness in neurodegenerative disease relies on neural structures mediating reward-driven attention.

    Science.gov (United States)

    Shany-Ur, Tal; Lin, Nancy; Rosen, Howard J; Sollberger, Marc; Miller, Bruce L; Rankin, Katherine P

    2014-08-01

    versus exaggerating deficits, overestimation and underestimation scores were analysed separately, controlling for age, sex, total intracranial volume and extent of actual functional decline. Atrophy related to overestimating one's functioning included bilateral, right greater than left frontal and subcortical regions, including dorsal superior and middle frontal gyri, lateral and medial orbitofrontal gyri, right anterior insula, putamen, thalamus, and caudate, and midbrain and pons. Thus, our patients' tendency to under-represent their functional decline was related to degeneration of domain-general dorsal frontal regions involved in attention, as well as orbitofrontal and subcortical regions likely involved in assigning a reward value to self-related processing and maintaining accurate self-knowledge. The anatomic correlates of underestimation (right rostral anterior cingulate cortex, uncorrected significance level) were distinct from overestimation and had a substantially smaller effect size. This suggests that underestimation or 'tarnishing' may be influenced by non-structural neurobiological and sociocultural factors, and should not be considered to be on a continuum with overestimation or 'polishing' of functional capacity, which appears to be more directly mediated by neural circuit dysfunction. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. A recurrent neural network approach to quantitatively studying solar wind effects on TEC derived from GPS; preliminary results

    Directory of Open Access Journals (Sweden)

    J. B. Habarulema

    2009-05-01

    Full Text Available This paper attempts to describe the search for the parameter(s) to represent solar wind effects in Global Positioning System total electron content (GPS TEC) modelling using the technique of neural networks (NNs). A study is carried out by including solar wind velocity (Vsw), proton number density (Np) and the Bz component of the interplanetary magnetic field (IMF Bz) obtained from the Advanced Composition Explorer (ACE) satellite as separate inputs to the NN, each along with day number of the year (DN), hour (HR), a 4-month running mean of the daily sunspot number (R4) and the running mean of the previous eight 3-hourly magnetic A index values (A8). Hourly GPS TEC values derived from a dual-frequency receiver located at Sutherland (32.38° S, 20.81° E), South Africa, for 8 years (2000–2007) have been used to train the Elman neural network (ENN) and the result has been used to predict TEC variations for a GPS station located at Cape Town (33.95° S, 18.47° E). Quantitative results indicate that each of the parameters considered may have some degree of influence on GPS TEC at certain periods although a decrease in prediction accuracy is also observed for some parameters for different days and seasons. It is also evident that there is still a difficulty in predicting TEC values during disturbed conditions. The improvements and degradation in prediction accuracies are both close to the benchmark values which lends weight to the belief that diurnal, seasonal, solar and magnetic variabilities may be the major determinants of TEC variability.
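
    As an illustration of this modelling setup, the sketch below defines an Elman (vanilla recurrent) network in PyTorch that maps hourly input vectors such as [DN, HR, R4, A8, Vsw] to a TEC estimate. The hidden size, the omission of input scaling and of any cyclic encoding of DN/HR, and the training snippet are assumptions, not the configuration used by the authors.

```python
import torch
import torch.nn as nn

class ElmanTEC(nn.Module):
    """Sketch of an Elman (vanilla recurrent) network mapping hourly input
    vectors, e.g. [DN, HR, R4, A8, Vsw], to GPS TEC."""

    def __init__(self, n_inputs=5, n_hidden=20):
        super().__init__()
        self.rnn = nn.RNN(n_inputs, n_hidden, nonlinearity="tanh")
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x):            # x: (seq_len, batch, n_inputs)
        h, _ = self.rnn(x)
        return self.out(h)           # TEC estimate per hour

# Typical training step on hourly sequences (data loading not shown):
# model = ElmanTEC(); opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = nn.MSELoss()(model(batch_inputs), batch_tec); loss.backward(); opt.step()
```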

  15. Dopaminergic differentiation of human neural stem cells mediated by co-cultured rat striatal brain slices

    DEFF Research Database (Denmark)

    Anwar, Mohammad Raffaqat; Andreasen, Christian Maaløv; Lippert, Solvej Kølvraa

    2008-01-01

    differentiation, we co-cultured cells from a human neural forebrain-derived stem cell line (hNS1) with rat striatal brain slices. In brief, coronal slices of neonatal rat striatum were cultured on semiporous membrane inserts placed in six-well trays overlying monolayers of hNS1 cells. After 12 days of co......Properly committed neural stem cells constitute a promising source of cells for transplantation in Parkinson's disease, but a protocol for controlled dopaminergic differentiation is not yet available. To establish a setting for identification of secreted neural compounds promoting dopaminergic...

  16. Coupled Heuristic Prediction of Long Lead-Time Accumulated Total Inflow of a Reservoir during Typhoons Using Deterministic Recurrent and Fuzzy Inference-Based Neural Network

    Directory of Open Access Journals (Sweden)

    Chien-Lin Huang

    2015-11-01

    Full Text Available This study applies a Real-Time Recurrent Learning Neural Network (RTRLNN) and an Adaptive Network-based Fuzzy Inference System (ANFIS) with novel heuristic techniques to develop an advanced prediction model of the accumulated total inflow of a reservoir, in order to handle the highly varied uncertainty of long lead-time forecasts during typhoon attacks in a real-time setting. To improve the temporal-spatial forecast precision, the following specialized heuristic inputs were coupled: the observed-predicted inflow increase/decrease (OPIID) rate, total precipitation, and the duration from the current time to the time of maximum precipitation and of direct runoff ending (DRE). This study also investigated the temporal-spatial forecast error characteristics to assess the feasibility of the developed models, and analyzed the output sensitivity to single and combined heuristic inputs to determine whether the heuristic model is susceptible to the impact of future forecast uncertainty/errors. Validation results showed that the long lead-time prediction accuracy and stability of the RTRLNN-based accumulated total inflow model are better than those of the ANFIS-based model because of the real-time recurrent deterministic routing mechanism of the RTRLNN. Simulations show that the RTRLNN-based model with coupled heuristic inputs (RTRLNN-CHI; average error percentage (AEP)/average forecast lead-time (AFLT): 6.3%/49 h) achieves better prediction than the models with non-heuristic inputs (AEP of RTRLNN-NHI and ANFIS-NHI: 15.2%/31.8%) because it fully accounts for real-time hydrological initial/boundary conditions. Moreover, the RTRLNN-CHI model can extend the forecast lead-time beyond 49 h with less than 10% AEP, overcoming the previous limit of a 6-h AFLT with 20%–40% AEP.

  17. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    Science.gov (United States)

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires lower computational resources.
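
    The record does not give the ACAN transition rules, but the general flavour of a discrete-state, asynchronously updated neuron can be shown with a toy example. The state set, transition rule, and firing condition below are invented for illustration (they are not the published ACAN rules) and could be realised as a small sequential logic circuit.

```python
import random

N_STATES = 16          # membrane potential is an integer in [0, N_STATES)
THRESHOLD = 12

def update(v, stimulus_event):
    """One asynchronous update: decay on internal clock events, jump on
    stimulus events, and emit a spike when the threshold state is crossed."""
    if stimulus_event:
        v = min(v + 3, N_STATES - 1)
    else:
        v = max(v - 1, 0)
    spike = v >= THRESHOLD
    if spike:
        v = 0              # reset after firing
    return v, spike

if __name__ == "__main__":
    v, spikes = 0, []
    for _ in range(50):
        v, s = update(v, stimulus_event=(random.random() < 0.4))
        spikes.append(int(s))
    print("".join(map(str, spikes)))
```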

  18. Nanoparticle-mediated transcriptional modification enhances neuronal differentiation of human neural stem cells following transplantation in rat brain.

    Science.gov (United States)

    Li, Xiaowei; Tzeng, Stephany Y; Liu, Xiaoyan; Tammia, Markus; Cheng, Yu-Hao; Rolfe, Andrew; Sun, Dong; Zhang, Ning; Green, Jordan J; Wen, Xuejun; Mao, Hai-Quan

    2016-04-01

    Strategies to enhance survival and direct the differentiation of stem cells in vivo following transplantation in tissue repair site are critical to realizing the potential of stem cell-based therapies. Here we demonstrated an effective approach to promote neuronal differentiation and maturation of human fetal tissue-derived neural stem cells (hNSCs) in a brain lesion site of a rat traumatic brain injury model using biodegradable nanoparticle-mediated transfection method to deliver key transcriptional factor neurogenin-2 to hNSCs when transplanted with a tailored hyaluronic acid (HA) hydrogel, generating larger number of more mature neurons engrafted to the host brain tissue than non-transfected cells. The nanoparticle-mediated transcription activation method together with an HA hydrogel delivery matrix provides a translatable approach for stem cell-based regenerative therapy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Design of a decoupled AP1000 reactor core control system using digital proportional–integral–derivative (PID) control based on a quasi-diagonal recurrent neural network (QDRNN)

    International Nuclear Information System (INIS)

    Wei, Xinyu; Wang, Pengfei; Zhao, Fuyu

    2016-01-01

    Highlights: • We establish a disperse dynamic model for AP1000 reactor core. • A digital PID control based on QDRNN is used to design a decoupling control system. • The decoupling performance is verified and discussed. • The decoupling control system is simulated under the load following operation. - Abstract: The control system of the AP1000 reactor core uses the mechanical shim (MSHIM) strategy, which includes a power control subsystem and an axial power distribution control subsystem. To address the strong coupling between the two subsystems, an interlock between the two subsystems is used, which can only alleviate but not eliminate the coupling. Therefore, sometimes the axial offset (AO) cannot be controlled tightly, and the flexibility of load-following operation is limited. Thus, the decoupling of the original AP1000 reactor core control system is the focus of this paper. First, a two-node disperse dynamic model is established for the AP1000 reactor core to use PID control. Then, a digital PID control system based on a quasi-diagonal recurrent neural network (QDRNN) is designed to decouple the original system. Finally, the decoupling of the control system is verified by the step signal and load-following condition. The results show that the designed control system can decouple the original system as expected and the AO can be controlled much more tightly. Moreover, the flexibility of the load following is increased.
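
    A minimal sketch of the digital PID building block used in such a design, assuming a positional (non-incremental) form; in the cited work the two loops (core power and axial offset) are decoupled through gains supplied by the quasi-diagonal recurrent neural network, which is not reproduced here.

```python
class DigitalPID:
    """Positional discrete PID: u_k = Kp*e_k + Ki*sum(e)*dt + Kd*(e_k - e_{k-1})/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example (gain values are placeholders): one controller per loop, e.g.
# power_pid = DigitalPID(2.0, 0.5, 0.1, dt=1.0) and ao_pid = DigitalPID(1.0, 0.2, 0.05, dt=1.0),
# with the gains scheduled online by the QDRNN in the cited design.
```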

  20. Reactive Power Control of Single-Stage Three-Phase Photovoltaic System during Grid Faults Using Recurrent Fuzzy Cerebellar Model Articulation Neural Network

    Directory of Open Access Journals (Sweden)

    Faa-Jeng Lin

    2014-01-01

    Full Text Available This study presents a new active and reactive power control scheme for a single-stage three-phase grid-connected photovoltaic (PV) system during grid faults. The presented PV system utilizes a single-stage three-phase current-controlled voltage-source inverter to achieve the maximum power point tracking (MPPT) control of the PV panel with the function of low voltage ride through (LVRT). Moreover, a formula based on positive sequence voltage for evaluating the percentage of voltage sag is derived to determine the ratio of the injected reactive current to satisfy the LVRT regulations. To reduce the risk of overcurrent during LVRT operation, a current limit is predefined for the injection of reactive current. Furthermore, the control of active and reactive power is designed using a two-dimensional recurrent fuzzy cerebellar model articulation neural network (2D-RFCMANN). In addition, the online learning laws of 2D-RFCMANN are derived according to gradient descent method with varied learning-rate coefficients for network parameters to assure the convergence of the tracking error. Finally, some experimental tests are realized to validate the effectiveness of the proposed control scheme.

  1. arXiv The prototype of the HL-LHC magnets monitoring system based on Recurrent Neural Networks and adaptive quantization

    CERN Document Server

    Wielgosz, Maciej; Skoczeń, Andrzej

    This paper focuses on an examination of an applicability of Recurrent Neural Network models for detecting anomalous behavior of the CERN superconducting magnets. In order to conduct the experiments, the authors designed and implemented an adaptive signal quantization algorithm and a custom GRU-based detector and developed a method for the detector parameters selection. Three different datasets were used for testing the detector. Two artificially generated datasets were used to assess the raw performance of the system whereas the 231 MB dataset composed of the signals acquired from HiLumi magnets was intended for real-life experiments and model training. Several different setups of the developed anomaly detection system were evaluated and compared with state-of-the-art OC-SVM reference model operating on the same data. The OC-SVM model was equipped with a rich set of feature extractors accounting for a range of the input signal properties. It was determined in the course of the experiments that the detector, a...
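
    A hedged sketch of the overall pipeline this record describes: adaptive quantization of the magnet signal followed by a GRU that predicts the next quantized level, with low predicted probability of the observed level flagging an anomaly. The quantile-based quantizer, the layer sizes, and the decision rule are assumptions standing in for the authors' algorithm, not a reproduction of it.

```python
import numpy as np
import torch
import torch.nn as nn

def adaptive_quantize(signal, n_levels=16):
    """Map a 1-D signal onto integer levels using quantile-based bin edges,
    a simple stand-in for the adaptive quantization step."""
    edges = np.quantile(signal, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(signal, edges)

class GRUDetector(nn.Module):
    """GRU that predicts the next quantized sample; an anomaly is flagged
    when the observed next level receives low predicted probability."""

    def __init__(self, n_levels=16, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(n_levels, 8)
        self.gru = nn.GRU(8, hidden)
        self.head = nn.Linear(hidden, n_levels)

    def forward(self, tokens):            # tokens: (seq_len, batch) of int64 levels
        h, _ = self.gru(self.embed(tokens))
        return self.head(h)               # logits for the next level

# Training would minimise cross-entropy between logits[:-1] and tokens[1:];
# at run time, low probability assigned to the observed next level marks an anomaly.
```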

  2. Design of a decoupled AP1000 reactor core control system using digital proportional–integral–derivative (PID) control based on a quasi-diagonal recurrent neural network (QDRNN)

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Xinyu, E-mail: xyuwei@mail.xjtu.edu.cn; Wang, Pengfei, E-mail: pengfeixiaoli@yahoo.cn; Zhao, Fuyu, E-mail: fuyuzhao_xj@163.com

    2016-08-01

    Highlights: • We establish a disperse dynamic model for AP1000 reactor core. • A digital PID control based on QDRNN is used to design a decoupling control system. • The decoupling performance is verified and discussed. • The decoupling control system is simulated under the load following operation. - Abstract: The control system of the AP1000 reactor core uses the mechanical shim (MSHIM) strategy, which includes a power control subsystem and an axial power distribution control subsystem. To address the strong coupling between the two subsystems, an interlock between the two subsystems is used, which can only alleviate but not eliminate the coupling. Therefore, sometimes the axial offset (AO) cannot be controlled tightly, and the flexibility of load-following operation is limited. Thus, the decoupling of the original AP1000 reactor core control system is the focus of this paper. First, a two-node disperse dynamic model is established for the AP1000 reactor core to use PID control. Then, a digital PID control system based on a quasi-diagonal recurrent neural network (QDRNN) is designed to decouple the original system. Finally, the decoupling of the control system is verified by the step signal and load-following condition. The results show that the designed control system can decouple the original system as expected and the AO can be controlled much more tightly. Moreover, the flexibility of the load following is increased.

  3. An adaptive recurrent neural-network controller using a stabilization matrix and predictive inputs to solve a tracking problem under disturbances.

    Science.gov (United States)

    Fairbank, Michael; Li, Shuhui; Fu, Xingang; Alonso, Eduardo; Wunsch, Donald

    2014-01-01

    We present a recurrent neural-network (RNN) controller designed to solve the tracking problem for control systems. We demonstrate that a major difficulty in training any RNN is the problem of exploding gradients, and we propose a solution to this in the case of tracking problems, by introducing a stabilization matrix and by using carefully constrained context units. This solution allows us to achieve consistently lower training errors, and hence allows us to more easily introduce adaptive capabilities. The resulting RNN is one that has been trained off-line to be rapidly adaptive to changing plant conditions and changing tracking targets. The case study we use is a renewable-energy generator application; that of producing an efficient controller for a three-phase grid-connected converter. The controller we produce can cope with the random variation of system parameters and fluctuating grid voltages. It produces tracking control with almost instantaneous response to changing reference states, and virtually zero oscillation. This compares very favorably to the classical proportional integrator (PI) controllers, which we show produce a much slower response and settling time. In addition, the RNN we propose exhibits better learning stability and convergence properties, and can exhibit faster adaptation, than has been achieved with adaptive critic designs. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Toward a new task assignment and path evolution (TAPE) for missile defense system (MDS) using intelligent adaptive SOM with recurrent neural networks (RNNs).

    Science.gov (United States)

    Wang, Chi-Hsu; Chen, Chun-Yao; Hung, Kun-Neng

    2015-06-01

    In this paper, a new adaptive self-organizing map (SOM) with recurrent neural network (RNN) controller is proposed for task assignment and path evolution of missile defense system (MDS). We address the problem of N agents (defending missiles) and D targets (incoming missiles) in MDS. A new RNN controller is designed to force an agent (or defending missile) toward a target (or incoming missile), and a monitoring controller is also designed to reduce the error between RNN controller and ideal controller. A new SOM with RNN controller is then designed to dispatch agents to their corresponding targets by minimizing total damaging cost. This is actually an important application of the multiagent system. The SOM with RNN controller is the main controller. After task assignment, the weighting factors of our new SOM with RNN controller are activated to dispatch the agents toward their corresponding targets. Using the Lyapunov constraints, the weighting factors for the proposed SOM with RNN controller are updated to guarantee the stability of the path evolution (or planning) system. Excellent simulations are obtained using this new approach for MDS, which show that our RNN has the lowest average miss distance among the several techniques.
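
    To make the dispatch idea concrete, the toy sketch below treats agent (defending missile) positions as competitive-learning weights that are repeatedly pulled toward the targets they win, so nearby agents end up assigned to nearby targets. The damaging-cost objective, the SOM neighbourhood function, the RNN path controller, and the Lyapunov-based weight updates of the paper are not reproduced; this is only an illustrative assignment heuristic.

```python
import numpy as np

def dispatch(agents, targets, iters=200, lr0=0.5):
    """Toy winner-take-all dispatch of N agents to D targets in the plane."""
    w = np.asarray(agents, dtype=float).copy()       # (N, 2) agent positions
    targets = np.asarray(targets, dtype=float)       # (D, 2) target positions
    for it in range(iters):
        lr = lr0 * (1.0 - it / iters)                # decaying learning rate
        for t in targets:
            winner = int(np.argmin(np.linalg.norm(w - t, axis=1)))
            w[winner] += lr * (t - w[winner])        # pull the winner toward its target
    # final assignment: each target is served by its nearest adapted agent
    return [int(np.argmin(np.linalg.norm(w - t, axis=1))) for t in targets]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(dispatch(rng.uniform(0, 10, (5, 2)), rng.uniform(0, 10, (3, 2))))
```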

  5. Prediction of beta-turns and beta-turn types by a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN).

    Science.gov (United States)

    Kirschner, Andreas; Frishman, Dmitrij

    2008-10-01

    Prediction of beta-turns from amino acid sequences has long been recognized as an important problem in structural bioinformatics due to their frequent occurrence as well as their structural and functional significance. Because various structural features of proteins are intercorrelated, secondary structure information has been often employed as an additional input for machine learning algorithms while predicting beta-turns. Here we present a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN) capable of predicting multiple mutually dependent structural motifs and demonstrate its efficiency in recognizing three aspects of protein structure: beta-turns, beta-turn types, and secondary structure. The advantage of our method compared to other predictors is that it does not require any external input except for sequence profiles because interdependencies between different structural features are taken into account implicitly during the learning process. In a sevenfold cross-validation experiment on a standard test dataset our method exhibits the total prediction accuracy of 77.9% and the Mathew's Correlation Coefficient of 0.45, the highest performance reported so far. It also outperforms other known methods in delineating individual turn types. We demonstrate how simultaneous prediction of multiple targets influences prediction performance on single targets. The MOLEBRNN presented here is a generic method applicable in a variety of research fields where multiple mutually depending target classes need to be predicted. http://webclu.bio.wzw.tum.de/predator-web/.
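
    A sketch of the multiple-output idea, assuming PyTorch: one bidirectional Elman-type recurrent layer shared by separate per-residue output heads for beta-turn/non-turn, turn type, and secondary structure. The input encoding (sequence profiles), layer sizes, and class counts are assumptions, not the published MOLEBRNN configuration.

```python
import torch
import torch.nn as nn

class MultiOutputBRNN(nn.Module):
    """Bidirectional Elman-type recurrent network with one output head per
    structural target, echoing the multiple-output-layer idea."""

    def __init__(self, n_profile=20, hidden=50, n_turn_types=9, n_ss=3):
        super().__init__()
        self.rnn = nn.RNN(n_profile, hidden, bidirectional=True, nonlinearity="tanh")
        self.turn_head = nn.Linear(2 * hidden, 2)             # turn vs. non-turn
        self.type_head = nn.Linear(2 * hidden, n_turn_types)  # beta-turn type
        self.ss_head = nn.Linear(2 * hidden, n_ss)            # secondary structure

    def forward(self, profiles):          # profiles: (seq_len, batch, n_profile)
        h, _ = self.rnn(profiles)
        return self.turn_head(h), self.type_head(h), self.ss_head(h)

# The per-residue losses (e.g. cross-entropy) for the three heads are summed, so the
# shared recurrent layer learns the interdependencies between the targets implicitly.
```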

  6. Reversal of rocuronium-induced neuromuscular blockade by sugammadex allows for optimization of neural monitoring of the recurrent laryngeal nerve.

    Science.gov (United States)

    Lu, I-Cheng; Wu, Che-Wei; Chang, Pi-Ying; Chen, Hsiu-Ya; Tseng, Kuang-Yi; Randolph, Gregory W; Cheng, Kuang-I; Chiang, Feng-Yu

    2016-04-01

    The use of a neuromuscular blocking agent may affect intraoperative neuromonitoring (IONM) during thyroid surgery. An enhanced neuromuscular-blockade (NMB) recovery protocol was investigated in a porcine model and subsequently clinically applied during human thyroid neural monitoring surgery. Prospective animal and retrospective clinical study. In the animal experiment, 12 piglets were injected with rocuronium 0.6 mg/kg and randomly allocated to receive normal saline, sugammadex 2 mg/kg, or sugammadex 4 mg/kg to compare the recovery of laryngeal electromyography (EMG). In a subsequent clinical application study, 50 patients who underwent thyroidectomy with IONM followed an enhanced NMB recovery protocol: rocuronium 0.6 mg/kg at anesthesia induction and sugammadex 2 mg/kg at the operation start. The train-of-four (TOF) ratio was used for continuous quantitative monitoring of neuromuscular transmission. In our porcine model, it took 49 ± 15, 13.2 ± 5.6, and 4.2 ± 1.5 minutes for the 80% recovery of laryngeal EMG after injection of saline, sugammadex 2 mg/kg, and sugammadex 4 mg/kg, respectively. In subsequent clinical human application, the TOF ratio recovered from 0 to >0.9 within 5 minutes after administration of sugammadex 2 mg/kg at the operation start. All patients had positive and high EMG amplitude at the early stage of the operation, and intubation was without difficulty in 96% of patients. Both porcine modeling and clinical human application demonstrated that sugammadex 2 mg/kg allows effective and rapid restoration of neuromuscular function suppressed by rocuronium. Implementation of this enhanced NMB recovery protocol assures optimal conditions for tracheal intubation as well as IONM in thyroid surgery. NA. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  7. Development of biomaterial scaffold for nerve tissue engineering: Biomaterial mediated neural regeneration

    Directory of Open Access Journals (Sweden)

    Sethuraman Swaminathan

    2009-11-01

    Full Text Available Abstract Neural tissue repair and regeneration strategies have received a great deal of attention because they directly affect the quality of the patient's life. There are many scientific challenges in regenerating nerve, both with conventional autologous nerve grafts and with the newly developed therapeutic strategies for the reconstruction of damaged nerves. Recent advancements in nerve regeneration have involved the application of tissue engineering principles, and this has opened a new perspective on neural therapy. The success of neural tissue engineering is mainly based on the regulation of cell behavior and tissue progression through the development of a synthetic scaffold that is analogous to the natural extracellular matrix and can support three-dimensional cell cultures. As the natural extracellular matrix provides an ideal environment for topographical, electrical and chemical cues to the adhesion and proliferation of neural cells, there exists a need to develop a synthetic scaffold that would be a biocompatible, immunologically inert, conducting, biodegradable, and infection-resistant biomaterial to support neurite outgrowth. This review outlines the rationale for effective neural tissue engineering through the use of suitable biomaterials and scaffolding techniques for fabrication of a construct that would allow the neurons to adhere, proliferate and eventually form nerves.

  8. Development of biomaterial scaffold for nerve tissue engineering: Biomaterial mediated neural regeneration

    Science.gov (United States)

    2009-01-01

    Neural tissue repair and regeneration strategies have received a great deal of attention because they directly affect the quality of the patient's life. There are many scientific challenges in regenerating nerve, both with conventional autologous nerve grafts and with the newly developed therapeutic strategies for the reconstruction of damaged nerves. Recent advancements in nerve regeneration have involved the application of tissue engineering principles, and this has opened a new perspective on neural therapy. The success of neural tissue engineering is mainly based on the regulation of cell behavior and tissue progression through the development of a synthetic scaffold that is analogous to the natural extracellular matrix and can support three-dimensional cell cultures. As the natural extracellular matrix provides an ideal environment for topographical, electrical and chemical cues to the adhesion and proliferation of neural cells, there exists a need to develop a synthetic scaffold that would be a biocompatible, immunologically inert, conducting, biodegradable, and infection-resistant biomaterial to support neurite outgrowth. This review outlines the rationale for effective neural tissue engineering through the use of suitable biomaterials and scaffolding techniques for fabrication of a construct that would allow the neurons to adhere, proliferate and eventually form nerves. PMID:19939265

  9. Neural fate decisions mediated by combinatorial regulation of Hes1 and miR-9.

    Science.gov (United States)

    Li, Shanshan; Liu, Yanwei; Liu, Zengrong; Wang, Ruiqi

    2016-01-01

    In the nervous system, Hes1 expression oscillates in neural progenitors but is sustained in neurons. Many models involving Hes1 have been proposed for the study of neural differentiation, but few of them take the role of microRNAs into account. It is known that a microRNA, miR-9, plays crucial roles in modulating Hes1 oscillations. However, the roles of miR-9 in controlling Hes1 oscillations and inducing transitions between different cell fates still need to be further explored. Here we provide a mathematical model of the interaction between miR-9 and Hes1, with the aim of understanding how the Hes1 oscillations are produced, how they are controlled, and further, how they are terminated. Based on the experimental findings, the model demonstrates the essential roles of Hes1 and miR-9 in regulating the dynamics of the system. In particular, the model suggests that the balance between miR-9 and Hes1 plays important roles in the choice between progenitor maintenance and neural differentiation. In addition, the synergistic (or antagonistic) effects of several important regulatory interactions are investigated so as to elucidate the effects of combinatorial regulation in neural decision-making. Our model provides a qualitative mechanism for understanding neural fate decisions regulated by Hes1 and miR-9.
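
    The record does not reproduce the equations, but the structure of such a model can be sketched as a small ODE system in which Hes1 protein represses both its own mRNA and miR-9 transcription, while miR-9 destabilises Hes1 mRNA. All equations and rate constants below are illustrative assumptions (and the transcriptional delay needed for sustained Hes1 oscillations is omitted), so this is a structural sketch rather than the published model.

```python
import numpy as np

def simulate(t_end=500.0, dt=0.01, k_m=1.0, k_p=1.0, k_mir=0.02,
             d_m=0.2, d_p=0.25, d_mir=0.01, gamma=0.05, K=1.0, n=4):
    """Toy Hes1/miR-9 double-negative feedback loop (explicit Euler integration)."""
    steps = int(t_end / dt)
    m, p, mir = 1.0, 0.0, 0.0                 # Hes1 mRNA, Hes1 protein, miR-9
    traj = np.empty((steps, 3))
    for i in range(steps):
        repression = 1.0 / (1.0 + (p / K) ** n)            # Hill repression by Hes1 protein
        dm = k_m * repression - d_m * m - gamma * mir * m   # miR-9 degrades Hes1 mRNA
        dp = k_p * m - d_p * p
        dmir = k_mir * repression - d_mir * mir             # miR-9 is long-lived and accumulates
        m, p, mir = m + dt * dm, p + dt * dp, mir + dt * dmir
        traj[i] = (m, p, mir)
    return traj
```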

  10. Augmented BMPRIA-mediated BMP signaling in cranial neural crest lineage leads to cleft palate formation and delayed tooth differentiation.

    Directory of Open Access Journals (Sweden)

    Lu Li

    Full Text Available The importance of BMP receptor Ia (BMPRIa)-mediated signaling in the development of craniofacial organs, including the tooth and palate, has been well illuminated in several mouse models of loss of function, and by its mutations associated with juvenile polyposis syndrome and facial defects in humans. In this study, we took a gain-of-function approach to further address the role of BMPRIa-mediated signaling in the mesenchymal compartment during tooth and palate development. We generated transgenic mice expressing a constitutively active form of BmprIa (caBmprIa) in cranial neural crest (CNC) cells, which contribute to the dental and palatal mesenchyme. Mice bearing enhanced BMPRIa-mediated signaling in CNC cells exhibit complete cleft palate and delayed odontogenic differentiation. We showed that the cleft palate defect in the transgenic animals is attributed to an altered cell proliferation rate in the anterior palatal mesenchyme and to the delayed palatal elevation in the posterior portion associated with ectopic cartilage formation. Despite enhanced activity of BMP signaling in the dental mesenchyme, tooth development and patterning in transgenic mice appeared normal except for delayed odontogenic differentiation. These data support the hypothesis that a finely tuned level of BMPRIa-mediated signaling is essential for normal palate and tooth development.

  11. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR) and Neural System Function during Facial Recognition: A Pilot Study

    Science.gov (United States)

    Nishikawa, Saori

    2015-01-01

    This study examined changes in prefrontal oxy-Hb levels measured by NIRS (Near-Infrared Spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged between 22 and 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G×E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e. rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to the mapping of the influence of genes and behaviour on the neural system. More such attempts should be made in order to clarify the links. PMID:26418317

  12. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR) and Neural System Function during Facial Recognition: A Pilot Study.

    Directory of Open Access Journals (Sweden)

    Saori Nishikawa

    Full Text Available This study examined changes in prefrontal oxy-Hb levels measured by NIRS (Near-Infrared Spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged between 22 and 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G × E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e. rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to the mapping of the influence of genes and behaviour on the neural system. More such attempts should be made in order to clarify the links.

  13. Sex differences in the neural substrates of spatial working memory during adolescence are not mediated by endogenous testosterone.

    Science.gov (United States)

    Alarcón, Gabriela; Cservenka, Anita; Fair, Damien A; Nagel, Bonnie J

    2014-12-17

    Adolescence is a developmental period characterized by notable changes in behavior, physical attributes, and an increase in endogenous sex steroid hormones, which may impact cognitive functioning. Moreover, sex differences in brain structure are present, leading to differences in neural function and cognition. Here, we examine sex differences in performance and blood oxygen level-dependent (BOLD) activation in a sample of adolescents during a spatial working memory (SWM) task. We also examine whether endogenous testosterone levels mediate differential brain activity between the sexes. Adolescents between ages 10 and 16 years completed a SWM functional magnetic resonance imaging (fMRI) task, and serum hormone levels were assessed within seven days of scanning. While there were no sex differences in task performance (accuracy and reaction time), differences in BOLD response between girls and boys emerged, with girls deactivating brain regions in the default mode network and boys showing increased response in SWM-related brain regions of the frontal cortex. These results suggest that adolescent boys and girls adopted distinct neural strategies, while maintaining spatial cognitive strategies that facilitated comparable cognitive performance of a SWM task. A nonparametric bootstrapping procedure revealed that testosterone did not mediate sex-specific brain activity, suggesting that sex differences in BOLD activation during SWM may be better explained by other factors, such as early organizational effects of sex steroids or environmental influences. Elucidating sex differences in neural function and the influence of gonadal hormones can serve as a basis of comparison for understanding sexually dimorphic neurodevelopment and inform sex-specific psychopathology that emerges in adolescence. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Modeling of biologically motivated self-learning equivalent-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for image fragments clustering and recognition

    Science.gov (United States)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2018-03-01

    The biologically motivated self-learning equivalence-convolutional recurrent multilayer neural structures (BLM_SL_EC_RMNS) for clustering and recognition of image fragments are discussed. We consider these neural structures and their spatially invariant equivalental models (SIEMs), which are based on proposed equivalent two-dimensional image-similarity functions and on the corresponding matrix-matrix (or tensor) procedures that use continuous-logic and nonlinear processing as their basic operations. These SIEMs describe signal processing during all training and recognition stages in a simple way and are suitable for unipolar-coded multilevel signals. The clustering efficiency of such models, and of their implementations, depends on the discriminant properties of the neural elements in the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the types of nonlinear processing applied and on the function used for image comparison or for adaptive-equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-learning and self-identification of features and similarity cues of fragments, and the ability to cluster and recognize image fragments efficiently even under strong mutual correlation. The proposed combined learning-recognition clustering method, which takes the structural features of fragments into account, is suitable not only for binary but also for color images, and combines self-learning with the formation of clustered weight matrix-patterns. The model is constructed on the basis of recursive continuous-logic and nonlinear processing algorithms together with the k-means method or the winner-takes-all (WTA) rule. The experimental results confirm that fragments with a large number of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. The experiment for images of different dimensions (a reference

  15. VEGF-mediated angiogenesis stimulates neural stem cell proliferation and differentiation in the premature brain

    International Nuclear Information System (INIS)

    Sun, Jinqiao; Sha, Bin; Zhou, Wenhao; Yang, Yi

    2010-01-01

    This study investigated the effects of angiogenesis on the proliferation and differentiation of neural stem cells in the premature brain. We observed the changes in neurogenesis that followed the stimulation and inhibition of angiogenesis by altering vascular endothelial growth factor (VEGF) expression in a 3-day-old rat model. VEGF expression was overexpressed by adenovirus transfection and down-regulated by siRNA interference. Using immunofluorescence assays, Western blot analysis, and real-time PCR methods, we observed angiogenesis and the proliferation and differentiation of neural stem cells. Immunofluorescence assays showed that the number of vWF-positive areas peaked at day 7, and they were highest in the VEGF up-regulation group and lowest in the VEGF down-regulation group at every time point. The number of neural stem cells, neurons, astrocytes, and oligodendrocytes in the subventricular zone gradually increased over time in the VEGF up-regulation group. Among the three groups, the number of these cells was highest in the VEGF up-regulation group and lowest in the VEGF down-regulation group at the same time point. Western blot analysis and real-time PCR confirmed these results. These data suggest that angiogenesis may stimulate the proliferation of neural stem cells and differentiation into neurons, astrocytes, and oligodendrocytes in the premature brain.

  16. Endogenous and Exogenous Attention Shifts are Mediated by the Same Large-Scale Neural Network.

    NARCIS (Netherlands)

    Peelen, M.V.; Heslenfeld, D.J.; Theeuwes, J.

    2004-01-01

    Event-related fMRI was used to examine the neural basis of endogenous (top-down) and exogenous (bottom-up) spatial orienting. Shifts of attention were induced by central (endogenous) or peripheral (exogenous) cues. Reaction times on subsequently presented targets showed the expected pattern of

  17. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    Science.gov (United States)

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  18. Neural Reactivity to Emotional Faces May Mediate the Relationship between Childhood Empathy and Adolescent Prosocial Behavior

    Science.gov (United States)

    Flournoy, John C.; Pfeifer, Jennifer H.; Moore, William E.; Tackman, Allison M.; Masten, Carrie L.; Mazziotta, John C.; Iacoboni, Marco; Dapretto, Mirella

    2016-01-01

    Reactivity to others' emotions not only can result in empathic concern (EC), an important motivator of prosocial behavior, but can also result in personal distress (PD), which may hinder prosocial behavior. Examining neural substrates of emotional reactivity may elucidate how EC and PD differentially influence prosocial behavior. Participants…

  19. Neural systems and hormones mediating attraction to infant and child faces

    Directory of Open Access Journals (Sweden)

    Lizhu Luo

    2015-07-01

    Full Text Available We find infant faces highly attractive as a result of specific features which Konrad Lorenz termed Kindchenschema or baby schema, and this is considered to be an important adaptive trait for promoting protective and caregiving behaviors in adults, thereby increasing the chances of infant survival. This review first examines the behavioral support for this effect and physical and behavioral factors which can influence it. It next reviews the increasing number of neuroimaging and electrophysiological studies investigating the neural circuitry underlying this baby schema effect in both parents and non-parents of both sexes. Next it considers potential hormonal contributions to the baby schema effect in both sexes and then neural effects associated with reduced responses to infant cues in post-partum depression, anxiety and drug taking. Overall the findings reviewed reveal a very extensive neural circuitry involved in our perception of cuteness in infant faces, with enhanced activation compared to adult faces being found in brain regions involved in face perception, attention, emotion, empathy, memory, reward and attachment, theory of mind and also control of motor responses. Both mothers and fathers also show evidence for enhanced responses in these same neural systems when viewing their own as opposed to another child. Furthermore, responses to infant cues in many of these neural systems are reduced in mothers with post-partum depression or anxiety or who have taken addictive drugs throughout pregnancy. In general reproductively active women tend to rate infant faces as cuter than men, which may reflect both heightened attention to relevant cues and a stronger activation in their brain reward circuitry. Perception of infant cuteness may also be influenced by reproductive hormones, with the hypothalamic neuropeptide oxytocin being most strongly associated to date with increased attention and attraction to infant cues in both sexes.

  20. Recurrent laughter-induced syncope.

    Science.gov (United States)

    Gaitatzis, Athanasios; Petzold, Axel

    2012-07-01

    Syncope is a common presenting complaint in Neurology clinics or Emergency departments, but its causes are sometimes difficult to diagnose. Apart from vasovagal attacks, other benign, neurally mediated syncopes include "situational" syncopes, which occur after urination, coughing, swallowing, or defecation. A healthy 42-year-old male patient presented to the neurology clinic with a long history of faints triggered by spontaneous laughter, especially after funny jokes. Physical and neurological examination, and electroencephalography and magnetic resonance imaging were unremarkable. There was no evidence to suggest cardiogenic causes, epilepsy, or cataplexy and a diagnosis of laughing syncope was made. Laughter-induced syncope is usually a single event in the majority of cases, but may present as recurrent attacks as in our case. Some cases occur in association with underlying neurological conditions. Prognosis is good in the case of neurally mediated attacks. Laughter may not be recognized by physicians as a cause of syncope, which may lead to unnecessary investigations or misdiagnosis, and affect patients' quality of life.

  1. Neural cell adhesion molecule-180-mediated homophilic binding induces epidermal growth factor receptor (EGFR) down-regulation and uncouples the inhibitory function of EGFR in neurite outgrowth

    DEFF Research Database (Denmark)

    Povlsen, Gro Klitgaard; Berezin, Vladimir; Bock, Elisabeth

    2008-01-01

    The neural cell adhesion molecule (NCAM) plays important roles in neuronal development, regeneration, and synaptic plasticity. NCAM homophilic binding mediates cell adhesion and induces intracellular signals, in which the fibroblast growth factor receptor plays a prominent role. Recent studies...... this NCAM-180-induced EGFR down-regulation involves increased EGFR ubiquitination and lysosomal EGFR degradation. Furthermore, NCAM-180-mediated EGFR down-regulation requires NCAM homophilic binding and interactions of the cytoplasmic domain of NCAM-180 with intracellular interaction partners, but does...

  2. A CREB-Sirt1-Hes1 Circuitry Mediates Neural Stem Cell Response to Glucose Availability

    Directory of Open Access Journals (Sweden)

    Salvatore Fusco

    2016-02-01

    Full Text Available Summary: Adult neurogenesis plays increasingly recognized roles in brain homeostasis and repair and is profoundly affected by energy balance and nutrients. We found that the expression of Hes-1 (hairy and enhancer of split 1) is modulated in neural stem and progenitor cells (NSCs) by extracellular glucose through the coordinated action of CREB (cyclic AMP responsive element binding protein) and Sirt-1 (Sirtuin 1), two cellular nutrient sensors. Excess glucose reduced CREB-activated Hes-1 expression and resulted in impaired cell proliferation. CREB-deficient NSCs expanded poorly in vitro and did not respond to glucose availability. Elevated glucose also promoted Sirt-1-dependent repression of the Hes-1 promoter. Conversely, in low glucose, CREB replaced Sirt-1 on the chromatin associated with the Hes-1 promoter, enhancing Hes-1 expression and cell proliferation. Thus, the glucose-regulated antagonism between CREB and Sirt-1 for Hes-1 transcription participates in the metabolic regulation of neurogenesis. Using a combination of in vitro and in vivo studies, Fusco et al. find that excess glucose impairs the self-renewal capacity of neural stem cells through a molecular circuit that involves the transcription factor CREB and Sirtuin 1. The authors suggest that this circuitry may link nutrient excess with neurodegeneration and brain aging. Keywords: neural stem cells, adult neurogenesis, CREB, Sirt-1, nutrients, metabolism, diabetes

  3. Neurally mediated airway constriction in human and other species: a comparative study using precision-cut lung slices (PCLS).

    Directory of Open Access Journals (Sweden)

    Marco Schlepütz

    Full Text Available The peripheral airway innervation of the lower respiratory tract of mammals is not completely functionally characterized. Recently, we have shown in rats that precision-cut lung slices (PCLS) respond to electric field stimulation (EFS) and provide a useful model to study neural airway responses in distal airways. Since airway responses are known to exhibit considerable species differences, here we examined the neural responses of PCLS prepared from mice, rats, guinea pigs, sheep, marmosets and humans. Peripheral neurons were activated either by EFS or by capsaicin. Bronchoconstriction in response to identical EFS conditions varied between species in magnitude. Frequency response curves revealed further species-dependent differences in nerve activation in PCLS. Atropine antagonized the EFS-induced bronchoconstriction in human, guinea pig, sheep, rat and marmoset PCLS, showing cholinergic responses. Capsaicin (10 µM) caused bronchoconstriction in human (4 of 7) and guinea pig lungs only, indicating excitatory non-adrenergic non-cholinergic (eNANC) responses. However, this effect was notably smaller in human responders (30 ± 7.1%) than in guinea pig (79 ± 5.1%) PCLS. The transient receptor potential (TRP) channel blockers SKF96365 and ruthenium red antagonized airway contractions after exposure to EFS or capsaicin in guinea pigs. In conclusion, the different species show distinct patterns of nerve-mediated bronchoconstriction. In the most common experimental animals, i.e. in mice and rats, these responses differ considerably from those in humans. On the other hand, guinea pig and marmoset monkey mimic human responses well and may thus serve as clinically relevant models to study neural airway responses.

  4. Vasoactive intestinal peptide is a local mediator in a gut-brain neural axis activating intestinal gluconeogenesis.

    Science.gov (United States)

    De Vadder, F; Plessier, F; Gautier-Stein, A; Mithieux, G

    2015-03-01

    Intestinal gluconeogenesis (IGN) promotes metabolic benefits through activation of a gut-brain neural axis. However, the local mediator activating gluconeogenic genes in the enterocytes remains unknown. We show that (i) vasoactive intestinal peptide (VIP) signaling through VPAC1 receptor activates the intestinal glucose-6-phosphatase gene in vivo, (ii) the activation of IGN by propionate is counteracted by VPAC1 antagonism, and (iii) VIP-positive intrinsic neurons in the submucosal plexus are increased under the action of propionate. These data support the role of VIP as a local neuromodulator released by intrinsic enteric neurons and responsible for the induction of IGN through a VPAC1 receptor-dependent mechanism in enterocytes. © 2015 John Wiley & Sons Ltd.

  5. The role of phosphatidylinositol 3-kinase in neural cell adhesion molecule-mediated neuronal differentiation and survival

    DEFF Research Database (Denmark)

    Ditlevsen, Dorte K; Køhler, Lene B; Pedersen, Martin Volmer

    2003-01-01

    The neural cell adhesion molecule, NCAM, is known to stimulate neurite outgrowth from primary neurones and PC12 cells presumably through signalling pathways involving the fibroblast growth factor receptor (FGFR), protein kinase A (PKA), protein kinase C (PKC), the Ras-mitogen activated protein...... kinase (MAPK) pathway and an increase in intracellular Ca2+ levels. Stimulation of neurones with the synthetic NCAM-ligand, C3, induces neurite outgrowth through signalling pathways similar to the pathways activated through physiological, homophilic NCAM-stimulation. We present here data indicating...... that phosphatidylinositol 3-kinase (PI3K) is required for NCAM-mediated neurite outgrowth from PC12-E2 cells and from cerebellar and dopaminergic neurones in primary culture, and that the thr/ser kinase Akt/protein kinase B (PKB) is phosphorylated downstream of PI3K after stimulation with C3. Moreover, we present data...

  6. Menadione-mediated WST1 reduction assay for the determination of metabolic activity of cultured neural cells.

    Science.gov (United States)

    Stapelfeldt, Karsten; Ehrke, Eric; Steinmeier, Johann; Rastedt, Wiebke; Dringen, Ralf

    2017-12-01

    Cellular reduction of tetrazolium salts to their respective formazans is frequently used to determine the metabolic activity of cultured cells as an indicator of cell viability. For membrane-impermeable tetrazolium salts such as WST1 the application of a membrane-permeable electron cycler is usually required to mediate the transfer of intracellular electrons for extracellular WST1 reduction. Here we demonstrate that in addition to the commonly used electron cycler M-PMS, menadione can also serve as an efficient electron cycler for extracellular WST1 reduction in cultured neural cells. The increase in formazan absorbance in glial cell cultures for the WST1 reduction by menadione involves enzymatic menadione reduction and was twice that recorded for the cytosolic enzyme-independent WST1 reduction in the presence of M-PMS. The optimized WST1 reduction assay allowed within 30 min of incubation a highly reliable detection of compromised cell metabolism caused by 3-bromopyruvate and impaired membrane integrity caused by Triton X-100, with a sensitivity as good as that of spectrophotometric assays which determine cellular MTT reduction or lactate dehydrogenase release. The short incubation period of 30 min and the observed good sensitivity make this optimized menadione-mediated WST1 reduction assay a quick and reliable alternative to other viability and toxicity assays. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Chromosomal instability mediated by non-B DNA: cruciform conformation and not DNA sequence is responsible for recurrent translocation in humans.

    Science.gov (United States)

    Inagaki, Hidehito; Ohye, Tamae; Kogo, Hiroshi; Kato, Takema; Bolor, Hasbaira; Taniguchi, Mariko; Shaikh, Tamim H; Emanuel, Beverly S; Kurahashi, Hiroki

    2009-02-01

    Chromosomal aberrations have been thought to be random events. However, recent findings introduce a new paradigm in which certain DNA segments have the potential to adopt unusual conformations that lead to genomic instability and nonrandom chromosomal rearrangement. One of the best-studied examples is the palindromic AT-rich repeat (PATRR), which induces recurrent constitutional translocations in humans. Here, we established a plasmid-based model that promotes frequent intermolecular rearrangements between two PATRRs in HEK293 cells. In this model system, the proportion of PATRR plasmid that extrudes a cruciform structure correlates to the levels of rearrangement. Our data suggest that PATRR-mediated translocations are attributable to unusual DNA conformations that confer a common pathway for chromosomal rearrangements in humans.

  8. Sex differences in the neural circuit that mediates female sexual receptivity

    Science.gov (United States)

    Flanagan-Cato, Loretta M.

    2011-01-01

    Female sexual behavior in rodents, typified by the lordosis posture, is hormone-dependent and sex-specific. Ovarian hormones control this behavior via receptors in the hypothalamic ventromedial nucleus (VMH). This review considers the sex differences in the morphology, neurochemistry and neural circuitry of the VMH to gain insights into the mechanisms that control lordosis. The VMH is larger in males compared with females, due to more synaptic connections. Another sex difference is the responsiveness to estradiol, with males exhibiting muted, and in some cases reverse, effects compared with females. The lack of lordosis in males may be explained by differences in synaptic organization or estrogen responsiveness, or both, in the VMH. However, given that damage to other brain regions unmasks lordosis behavior in males, a male-typical VMH is unlikely the main factor that prevents lordosis. In females, key questions remain regarding the mechanisms whereby ovarian hormones modulate VMH function to promote lordosis. PMID:21338620

  9. Spontaneous calcium transients in human neural progenitor cells mediated by transient receptor potential channels.

    Science.gov (United States)

    Morgan, Peter J; Hübner, Rayk; Rolfs, Arndt; Frech, Moritz J

    2013-09-15

    Calcium signals affect many developmental processes, including proliferation, migration, survival, and apoptosis, processes that are of particular importance in stem cells intended for cell replacement therapies. The mechanisms underlying Ca(2+) signals, therefore, have a role in determining how stem cells respond to their environment, and how these responses might be controlled in vitro. In this study, we examined the spontaneous Ca(2+) activity in human neural progenitor cells during proliferation and differentiation. Pharmacological characterization indicates that in proliferating cells, most activity is the result of transient receptor potential (TRP) channels that are sensitive to Gd(3+) and La(3+), with the more subtype selective antagonist Ruthenium red also reducing activity, suggesting the involvement of transient receptor potential vanilloid (TRPV) channels. In differentiating cells, Gd(3+) and La(3+)-sensitive TRP channels also appear to underlie the spontaneous activity; however, no sub-type-specific antagonists had any effect. Protein levels of TRPV2 and TRPV3 decreased in differentiated cells, as demonstrated by western blot. Thus, it appears that TRP channels represent the main route of Ca(2+) entry in human neural progenitor cells (hNPCs), but the responsible channel types are subject to substitution under differentiating conditions. The level of spontaneous activity could be increased and decreased by lowering and raising the extracellular K(+) concentration. Proliferating cells in low K(+) showed a slowed cell cycle, with a disproportionately increased percentage of cells in G1 phase and a reduction in S phase. Taken together, these results suggest a link between external K(+) concentration, spontaneous Ca(2+) transients, and cell cycle distribution, which is able to influence the fate of stem and progenitor cells.

  10. Alterations in neural systems mediating cognitive flexibility and inhibition in mood disorders.

    Science.gov (United States)

    Piguet, Camille; Cojan, Yann; Sterpenich, Virginie; Desseilles, Martin; Bertschy, Gilles; Vuilleumier, Patrik

    2016-04-01

    Impairment in mental flexibility may be a key component contributing to cardinal cognitive symptoms among mood disorder patients, particularly thought control disorders. Impaired ability to switch from one thought to another might reflect difficulties in either generating new mental states, inhibiting previous states, or both. However, the neural underpinnings of impaired cognitive flexibility in mood disorders remain largely unresolved. We compared a group of mood disorder patients (n = 29) and a group of matched healthy subjects (n = 32) on a novel task-switching paradigm involving happy and sad faces, which allowed us to separate generation of a new mental set (Switch Cost) and inhibition of the previous set during switching (Inhibition Cost), using fMRI. Behavioral data showed a larger Switch Cost in patients relative to controls, but the average Inhibition Cost did not differ between groups. At the neural level, a main effect of group was found with stronger activation of the subgenual cingulate cortex in patients. The larger Switch Cost in patients was reflected by a stronger recruitment of brain regions involved in attention and executive control, including the left intraparietal sulcus, precuneus, left inferior frontal gyrus, and right anterior cingulate. Critically, activity in the subgenual cingulate cortex was not downregulated by inhibition in patients relative to controls. In conclusion, mood disorder patients have an exaggerated Switch Cost relative to controls, and this deficit in cognitive flexibility is associated with increased activation of the fronto-parietal attention networks, combined with impaired modulation of the subgenual cingulate cortex when inhibition of previous mental states is needed. © 2016 Wiley Periodicals, Inc.

  11. Recurrent Syncope due to Esophageal Squamous Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    A. Casini

    2011-09-01

    Full Text Available Syncope is caused by a wide variety of disorders. Recurrent syncope as a complication of malignancy is uncommon and may be difficult to diagnose and to treat. Primary neck carcinoma or metastases spreading in parapharyngeal and carotid spaces can involve the internal carotid artery and cause neurally mediated syncope with a clinical presentation resembling carotid sinus syndrome. We report the case of a 76-year-old man who suffered from recurrent syncope due to invasion of the right carotid sinus by metastases of a carcinoma of the esophagus, successfully treated by radiotherapy. In such cases, surgery, chemotherapy or radiotherapy can be performed. Because syncope may be an early sign of neck or cervical cancer, the diagnostic approach to syncope in patients with a past history of cancer should include the possibility of neck tumor recurrence or metastasis, and an oncologic workup should be considered.

  12. Neural processes mediating the preparation and release of focal motor output are suppressed or absent during imagined movement

    Science.gov (United States)

    Eagles, Jeremy S.; Carlsen, Anthony N.

    2016-01-01

    Movements that are executed or imagined activate a similar subset of cortical regions, but the extent to which this activity represents functionally equivalent neural processes is unclear. During preparation for an executed movement, presentation of a startling acoustic stimulus (SAS) evokes a premature release of the planned movement with the spatial and temporal features of the task essentially intact. If imagined movement incorporates the same preparatory processes as executed movement, then a SAS should release the planned movement during preparation. This hypothesis was tested using an instructed-delay cueing paradigm during which subjects were required to rapidly release a handheld weight while maintaining the posture of the arm or to perform first-person imagery of the same task while holding the weight. In a subset of trials, a SAS was presented at 1500, 500, or 200 ms prior to the release cue. Task-appropriate preparation during executed and imagined movements was confirmed by electroencephalographic recording of a contingent negative variation waveform. During preparation for executed movement, a SAS often resulted in premature release of the weight with the probability of release progressively increasing from 24 % at −1500 ms to 80 % at −200 ms. In contrast, the SAS rarely resulted in release of the weight during preparation for imagined movement. However, the SAS frequently evoked the planned postural response (suppression of biceps brachii muscle activity) irrespective of the task or timing of stimulation (even during periods of postural hold without preparation). These findings provide evidence that neural processes mediating the preparation and release of the focal motor task (release of the weight) are markedly attenuated or absent during imagined movement and that postural and focal components of the task are prepared independently. PMID:25744055

  13. Reorganization of neural systems mediating peripheral visual selective attention in the deaf: An optical imaging study.

    Science.gov (United States)

    Seymour, Jenessa L; Low, Kathy A; Maclin, Edward L; Chiarelli, Antonio M; Mathewson, Kyle E; Fabiani, Monica; Gratton, Gabriele; Dye, Matthew W G

    2017-01-01

    Theories of brain plasticity propose that, in the absence of input from the preferred sensory modality, some specialized brain areas may be recruited when processing information from other modalities, which may result in improved performance. The Useful Field of View task has previously been used to demonstrate that early deafness positively impacts peripheral visual attention. The current study sought to determine the neural changes associated with those deafness-related enhancements in visual performance. Based on previous findings, we hypothesized that recruitment of posterior portions of Brodmann area 22, a brain region most commonly associated with auditory processing, would be correlated with peripheral selective attention as measured using the Useful Field of View task. We report data from severe to profoundly deaf adults and normal-hearing controls who performed the Useful Field of View task while cortical activity was recorded using the event-related optical signal. Behavioral performance, obtained in a separate session, showed that deaf subjects had lower thresholds (i.e., better performance) on the Useful Field of View task. The event-related optical data indicated greater activity for the deaf adults than for the normal-hearing controls during the task in the posterior portion of Brodmann area 22 in the right hemisphere. Furthermore, the behavioral thresholds correlated significantly with this neural activity. This work provides further support for the hypothesis that cross-modal plasticity in deaf individuals appears in higher-order auditory cortices, whereas no similar evidence was obtained for primary auditory areas. It is also the only neuroimaging study to date that has linked deaf-related changes in the right temporal lobe to visual task performance outside of the imaging environment. The event-related optical signal is a valuable technique for studying cross-modal plasticity in deaf humans. The non-invasive and relatively quiet characteristics of

  14. Liposome-mediated transfer of IL-1 receptor antagonist gene to dispersed islet cells does not prevent recurrence of disease in syngeneically transplanted NOD mice

    DEFF Research Database (Denmark)

    Saldeen, J; Sandler, S; Bendtzen, K

    2000-01-01

    IL-1beta is cytotoxic to pancreatic beta-cells in vitro but its role in the vicinity of beta-cells in vivo is unknown. We explored whether liposome-mediated transfer of the interleukin 1 receptor antagonist (IL-1ra) gene to islet cells might prevent recurrence of disease in syngeneically transplanted non-obese diabetic (NOD) mice. NOD mouse islet cells were transfected using liposome-mediated gene transfer with a human IL-1ra cDNA construct and transplanted two days later to prediabetic NOD mice. Graft infiltration and destruction were monitored three, five and eight days posttransplantation by histology and determination of insulin and cytokine content. IL-1ra gene transfer resulted in transient expression of IL-1ra protein in islet cells in vitro as assessed by ELISA and of IL-1ra mRNA in transplanted islets as revealed by RT-PCR. However, both control and IL-1ra transfected NOD grafts exhibited...

  15. The neural mediators of kindness-based meditation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Jennifer Streiffer Mascaro

    2015-02-01

    Full Text Available Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another’s affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work.

  16. Neuropilin-1 interacts with the second branchial arch microenvironment to mediate chick neural crest cell dynamics

    Science.gov (United States)

    McLennan, Rebecca; Kulesa, Paul M.

    2011-01-01

    Cranial neural crest cells (NCCs) require neuropilin signaling to reach and invade the branchial arches. Here, we use an in vivo chick model to investigate whether the neuropilin-1 knockdown phenotype is specific to the second branchial arch (ba2), changes in NCC behaviors and phenotypic consequences, and whether neuropilins work together to facilitate entry into and invasion of ba2. We find that cranial NCCs with reduced neuropilin-1 expression displayed shorter protrusions and decreased cell body and nuclear length-to-width ratios characteristic of a loss in polarity and motility, after specific interaction with ba2. Directed NCC migration was rescued by transplantation of transfected cells into rhombomere 4 of younger hosts. Lastly, reduction of neuropilin-2 expression by shRNA either solely or with reduction of neuropilin-1 expression did not lead to a stronger head phenotype. Thus, NCCs, independent of rhombomere origin, require neuropilin-1, but not neuropilin-2 to maintain polarity and directed migration into ba2. PMID:20503363

  17. Sex differences in the neural mechanisms mediating addiction: a new synthesis and hypothesis

    Directory of Open Access Journals (Sweden)

    Becker Jill B

    2012-06-01

    Full Text Available Abstract In this review we propose that there are sex differences in how men and women enter onto the path that can lead to addiction. Males are more likely than females to engage in risky behaviors that include experimenting with drugs of abuse, and in susceptible individuals, they are drawn into the spiral that can eventually lead to addiction. Women and girls are more likely to begin taking drugs as self-medication to reduce stress or alleviate depression. For this reason women enter into the downward spiral further along the path to addiction, and so transition to addiction more rapidly. We propose that this sex difference is due, at least in part, to sex differences in the organization of the neural systems responsible for motivation and addiction. Additionally, we suggest that sex differences in these systems and their functioning are accentuated with addiction. In the current review we discuss historical, cultural, social and biological bases for sex differences in addiction with an emphasis on sex differences in the neurotransmitter systems that are implicated.

  18. Melatonin antagonizes interleukin-18-mediated inhibition on neural stem cell proliferation and differentiation.

    Science.gov (United States)

    Li, Zheng; Li, Xingye; Chan, Matthew T V; Wu, William Ka Kei; Tan, DunXian; Shen, Jianxiong

    2017-09-01

    Neural stem cells (NSCs) are self-renewing, pluripotent and undifferentiated cells which have the potential to differentiate into neurons, oligodendrocytes and astrocytes. NSC therapy for tissue regeneration has therefore gained popularity. However, the low survival rate of the transplanted cells impedes their utility. In this study, we tested whether melatonin, a potent antioxidant, could promote NSC proliferation and neuronal differentiation, especially in the presence of the pro-inflammatory cytokine interleukin-18 (IL-18). Our results showed that melatonin per se exhibited beneficial effects on NSCs, whereas IL-18 inhibited NSC proliferation, neurosphere formation and their differentiation into neurons. All inhibitory effects of IL-18 on NSCs were significantly reduced by melatonin treatment. Moreover, melatonin application increased the production of both brain-derived and glial cell line-derived neurotrophic factors (BDNF, GDNF) in IL-18-stimulated NSCs. It was observed that inhibition of BDNF or GDNF hindered the protective effects of melatonin on NSCs. A potential mechanism by which melatonin protects against the IL-18-induced inhibition of NSC differentiation may be the up-regulation of these two major neurotrophic factors, BDNF and GDNF. The findings indicate that melatonin may play an important role in promoting the survival of NSCs in neuroinflammatory diseases. © 2017 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.

  19. DNA methyltransferase mediates dose-dependent stimulation of neural stem cell proliferation by folate.

    Science.gov (United States)

    Li, Wen; Yu, Min; Luo, Suhui; Liu, Huan; Gao, Yuxia; Wilson, John X; Huang, Guowei

    2013-07-01

    The proliferative response of neural stem cells (NSCs) to folate may play a critical role in the development, function and repair of the central nervous system. It is important to determine the dose-dependent effects of folate in NSC cultures that are potential sources of transplantable cells for therapies for neurodegenerative diseases. To determine the optimal concentration and mechanism of action of folate for stimulation of NSC proliferation in vitro, NSCs were exposed to folic acid or 5-methyltetrahydrofolate (5-MTHF) (0-200 μmol/L) for 24, 48 or 72 h. Immunocytochemistry and methyl thiazolyl tetrazolium assay showed that the optimal concentration of folic acid for NSC proliferation was 20-40 μmol/L. Stimulation of NSC proliferation by folic acid was associated with DNA methyltransferase (DNMT) activation and was attenuated by the DNMT inhibitor zebularine, which implies that folate dose-dependently stimulates NSC proliferation through a DNMT-dependent mechanism. Based on these new findings and previously published evidence, we have identified a mechanism by which folate stimulates NSC growth. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Rac1 Guides Porf-2 to Wnt Pathway to Mediate Neural Stem Cell Proliferation

    Directory of Open Access Journals (Sweden)

    Xi-Tao Yang

    2017-06-01

    Full Text Available The molecular and cellular mechanisms underlying the anti-proliferative effects of preoptic regulator factor 2 (Porf-2) on neural stem cells (NSCs) remain largely unknown. Here, we found that Porf-2 inhibits the activity of the ras-related C3 botulinum toxin substrate 1 (Rac1) protein in hippocampus-derived rat NSCs. Reduced Rac1 activity impaired the nuclear translocation of β-catenin, ultimately causing a repression of NSC proliferation. Porf-2 knockdown enhanced NSC proliferation, but not in the presence of small molecule inhibitors of Rac1 or Wnt. At the same time, the repression of NSC proliferation caused by Porf-2 overexpression was counteracted by small molecule activators of Rac1 or Wnt. By using a rat optic nerve crush model, we observed that Porf-2 knockdown enhanced the recovery of visual function. In particular, optic nerve injury in rats led to increased Wnt family member 3a (Wnt3a) protein expression, which we found responsible for enhancing Porf-2 knockdown-induced NSC proliferation. These findings suggest that Porf-2 exerts its inhibitory effect on NSC proliferation via the Rac1-Wnt/β-catenin pathway. Porf-2 may therefore represent an interesting target for optic nerve injury recovery and therapy.

  1. Cannabinoid receptor-mediated disruption of sensory gating and neural oscillations: A translational study in rats and humans.

    Science.gov (United States)

    Skosnik, Patrick D; Hajós, Mihály; Cortes-Briones, Jose A; Edwards, Chad R; Pittman, Brian P; Hoffmann, William E; Sewell, Andrew R; D'Souza, Deepak C; Ranganathan, Mohini

    2018-06-01

    Cannabis use has been associated with altered sensory gating and neural oscillations. However, it is unclear which constituent in cannabis is responsible for these effects, or whether these are cannabinoid receptor 1 (CB1R) mediated. Therefore, the present study in humans and rats examined whether cannabinoid administration would disrupt sensory gating and evoked oscillations utilizing electroencephalography (EEG) and local field potentials (LFPs), respectively. Human subjects (n = 15) completed four test days during which they received intravenous delta-9-tetrahydrocannabinol (Δ9-THC), cannabidiol (CBD), Δ9-THC + CBD, or placebo. Subjects engaged in a dual-click paradigm, and outcome measures included P50 gating ratio (S2/S1) and evoked power to S1 and S2. In order to examine CB1R specificity, rats (n = 6) were administered the CB1R agonist CP-55940, CP-55940+AM-251 (a CB1R antagonist), or vehicle using the same paradigm. LFPs were recorded from CA3 and entorhinal cortex. Both Δ9-THC (p < 0.007) and Δ9-THC + CBD (p < 0.004) disrupted P50 gating ratio compared to placebo, while CBD alone had no effect. Δ9-THC (p < 0.048) and Δ9-THC + CBD (p < 0.035) decreased S1 evoked theta power, and in the Δ9-THC condition, S1 theta negatively correlated with gating ratios (r = -0.629, p < 0.012 (p < 0.048 adjusted)). In rats, CP-55940 disrupted gating in both brain regions (p < 0.0001), and this was reversed by AM-251. Further, CP-55940 decreased evoked theta (p < 0.0077) and gamma (p < 0.011) power to S1, which was partially blocked by AM-251. These convergent human/animal data suggest that CB1R agonists disrupt sensory gating by altering neural oscillations in the theta-band. Moreover, this suggests that the endocannabinoid system mediates theta oscillations relevant to perception and cognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
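
    The two outcome measures named above, the P50 gating ratio (S2/S1) and evoked theta-band power, are simple quantities to compute from averaged epochs. The snippet below is a hypothetical illustration only, not the study's analysis pipeline; the sampling rate, measurement window, and simulated waveforms are assumptions:

```python
# Hypothetical sketch: computing a P50-style gating ratio (S2/S1) and
# evoked theta-band power from two simulated averaged epochs.
import numpy as np

fs = 1000                              # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)          # 500 ms post-stimulus window

def evoked_amplitude(epoch, fs, window=(0.04, 0.08)):
    """Peak-to-trough amplitude in a 40-80 ms window (a common P50 measure)."""
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    seg = epoch[i0:i1]
    return seg.max() - seg.min()

def theta_power(epoch, fs, band=(4.0, 7.0)):
    """Mean spectral power of the averaged epoch in the theta band."""
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / epoch.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Toy averaged responses to the first (S1) and second (S2) clicks of a pair.
rng = np.random.default_rng(1)
erp = np.exp(-((t - 0.05) ** 2) / 2e-4)
s1 = erp + 0.05 * rng.standard_normal(t.size)
s2 = 0.4 * erp + 0.05 * rng.standard_normal(t.size)

gating_ratio = evoked_amplitude(s2, fs) / evoked_amplitude(s1, fs)
print(f"P50 gating ratio (S2/S1): {gating_ratio:.2f}")  # lower = stronger gating
print(f"S1 theta power: {theta_power(s1, fs):.4f}")
```

    A ratio close to 1 indicates little suppression of the response to the second click (impaired gating), whereas values well below 1 indicate intact gating.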

  2. Surgical Stress Abrogates Pre-Existing Protective T Cell Mediated Anti-Tumor Immunity Leading to Postoperative Cancer Recurrence.

    Directory of Open Access Journals (Sweden)

    Abhirami A Ananth

    Full Text Available Anti-tumor CD8+ T cells are a key determinant for overall survival in patients following surgical resection for solid malignancies. Using a mouse model of cancer vaccination (adenovirus expressing the melanoma tumor-associated antigen (TAA) dopachrome tautomerase (AdDCT)) and resection resulting in major surgical stress (abdominal nephrectomy), we demonstrate that surgical stress results in a reduction in the number of CD8+ T cells that produce cytokines (IFNγ, TNFα, Granzyme B) in response to TAA. This effect is secondary to both reduced proliferation and impaired T cell function following antigen binding. In a prophylactic model, surgical stress completely abrogates tumor protection conferred by vaccination in the immediate postoperative period. In a clinically relevant surgical resection model, vaccinated mice undergoing a positive margin resection with surgical stress had decreased survival compared to mice with positive margin resection alone. Preoperative immunotherapy with IFNα significantly extends survival in surgically stressed mice. Importantly, myeloid derived suppressor cell (MDSC) population numbers and functional impairment of TAA-specific CD8+ T cells were altered in surgically stressed mice. Our observations suggest that cancer progression may result from surgery-induced suppression of tumor-specific CD8+ T cells. Preoperative immunotherapies aimed at targeting the prometastatic effects of cancer surgery will reduce recurrence and improve survival in cancer surgery patients.

  3. Immunohistochemical detection and correlation between MHC antigen and cell-mediated immune system in recurrent glioma by APAAP method.

    Science.gov (United States)

    Miyagi, K; Ingram, M; Techy, G B; Jacques, D B; Freshwater, D B; Sheldon, H

    1990-09-01

    As part of an on-going clinical trial of immunotherapy for recurrent malignant gliomas, we used the alkaline phosphatase-anti-alkaline phosphatase method with monoclonal antibodies to investigate the correlation between expression of the major histocompatibility complex (MHC) and the subpopulation of tumor-infiltrating lymphocytes (TILs) in 38 glioma specimens (20 grade IV, 11 grade III, and 7 grade II) from 33 patients. Thirty specimens (78.9%) were positive for class I MHC antigen and 20 (52.6%) were positive for class II MHC antigen. The correlations between class I MHC antigen expression and the number of infiltrating T8 cells (p < 0.01), and between class II MHC antigen expression and the number of infiltrating T4 cells (p < 0.05), were significant. We conclude that TILs are the result of an immunoreaction (host-defense mechanism). 31.6% of specimens had perivascular infiltration of T cells. The main infiltrating lymphocyte subset in moderate to marked perivascular cuffing was T4. Our results may indicate that lack of MHC antigen on the glioma cell surface contributes to the poor immunogenicity in glioma-bearing patients. In addition, considering the effector/target ratio, the number of infiltrating lymphocytes relative to glioma cells was too small, so immunological intervention seems essential in glioma therapy. Previous radiation therapy and chemotherapy, including steroid therapy, did not influence lymphocyte and macrophage infiltration.

  4. AKT signaling mediates IGF-I survival actions on otic neural progenitors.

    Directory of Open Access Journals (Sweden)

    Maria R Aburto

    Full Text Available BACKGROUND: Otic neurons and sensory cells derive from common progenitors whose transition into mature cells requires the coordination of cell survival, proliferation and differentiation programmes. Neurotrophic support and survival of post-mitotic otic neurons have been intensively studied, but the bases underlying the regulation of programmed cell death in immature proliferative otic neuroblasts remain poorly understood. The protein kinase AKT acts as a node, playing a critical role in controlling cell survival and cell cycle progression. AKT is activated by trophic factors, including insulin-like growth factor I (IGF-I), through the generation of the lipidic second messenger phosphatidylinositol 3-phosphate by phosphatidylinositol 3-kinase (PI3K). Here we have investigated the role of IGF-dependent activation of the PI3K-AKT pathway in maintenance of otic neuroblasts. METHODOLOGY/PRINCIPAL FINDINGS: By using a combination of organotypic cultures of chicken (Gallus gallus) otic vesicles and acoustic-vestibular ganglia, Western blotting, immunohistochemistry and in situ hybridization, we show that IGF-I-activation of AKT protects neural progenitors from programmed cell death. IGF-I maintains otic neuroblasts in an undifferentiated and proliferative state, which is characterised by the upregulation of the forkhead box M1 (FoxM1) transcription factor. By contrast, our results indicate that post-mitotic p27(Kip)-positive neurons become IGF-I independent as they extend their neuronal processes. Neurons gradually reduce their expression of the Igf1r, while they increase that of the neurotrophin receptor, TrkC. CONCLUSIONS/SIGNIFICANCE: Proliferative otic neuroblasts are dependent on the activation of the PI3K-AKT pathway by IGF-I for survival during the otic neuronal progenitor phase of early inner ear development.

  5. Neural Networks Mediating High-Level Mentalizing in Patients With Right Cerebral Hemispheric Gliomas

    Directory of Open Access Journals (Sweden)

    Riho Nakajima

    2018-03-01

    Full Text Available Mentalizing is the ability to understand others' mental states through external cues. It comprises two networks, namely low-level and high-level mentalizing. Although mentalizing is an essential function in our daily social life, surgical resection in the right cerebral hemisphere frequently disturbs mentalizing processing. In the past, little was known about the white matter related to high-level mentalizing, and the preservation of high-level mentalizing during surgery has not been a focus of attention. Therefore, the main purpose of this study was to examine the neural networks underlying high-level mentalizing and then, secondarily, investigate the usefulness of awake surgery in preserving the mentalizing network. A total of 20 patients with glioma localized in the right hemisphere who underwent awake surgery participated in this study. All patients were assigned to two groups: with or without intraoperative assessment of high-level mentalizing. Their high-level mentalizing abilities were assessed before surgery and 1 week and 3 months after surgery. At 3 months after surgery, only patients who received the intraoperative high-level mentalizing test showed the same score as normal healthy volunteers. Tract-based lesion-symptom analysis was performed to relate the severity of damage to associated fibers to high-level mentalizing accuracy. This analysis revealed the superior longitudinal fascicle III (SLF III) and fronto-striatal tract (FST) to be associated with high-level mentalizing processing. Moreover, voxel-based lesion-symptom analysis demonstrated that resection of the orbito-frontal cortex (OFC) causes persistent mentalizing dysfunction. Our study indicates that damage to the OFC and to the structural connectivity of the SLF and FST causes mentalizing disorders after surgery, and that assessing high-level mentalizing during surgery may be useful for preserving these pathways.

  6. Xanomeline suppresses excessive pro-inflammatory cytokine responses through neural signal-mediated pathways and improves survival in lethal inflammation

    Science.gov (United States)

    Rosas-Ballina, Mauricio; Ferrer, Sergio Valdés; Dancho, Meghan; Ochani, Mahendar; Katz, David; Cheng, Kai Fan; Olofsson, Peder S.; Chavan, Sangeeta S.; Al-Abed, Yousef; Tracey, Kevin J.; Pavlov, Valentin A.

    2014-01-01

    Inflammatory conditions, characterized by excessive immune cell activation and cytokine release, are associated with bidirectional immune system-brain communication, underlying sickness behavior and other physiological responses. The vagus nerve has an important role in this communication by conveying sensory information to the brain, and brain-derived immunoregulatory signals that suppress peripheral cytokine levels and inflammation. Brain muscarinic acetylcholine receptor (mAChR)-mediated cholinergic signaling has been implicated in this regulation. However, the possibility of controlling inflammation by peripheral administration of centrally-acting mAChR agonists is unexplored. To provide insight we used the centrally-acting M1 mAChR agonist xanomeline, previously developed in the context of Alzheimer’s disease and schizophrenia. Intraperitoneal administration of xanomeline significantly suppressed serum and splenic TNF levels, alleviated sickness behavior, and increased survival during lethal murine endotoxemia. The anti-inflammatory effects of xanomeline were brain mAChR-mediated and required intact vagus nerve and splenic nerve signaling. The anti-inflammatory efficacy of xanomeline was retained for at least 20 h and was associated with alterations in splenic lymphocyte and dendritic cell proportions, and decreased splenocyte responsiveness to endotoxin. These results highlight an important role of the M1 mAChR in a neural circuitry to the spleen in which brain cholinergic activation lowers peripheral pro-inflammatory cytokines to levels favoring survival. The therapeutic efficacy of xanomeline was also manifested by significantly improved survival in preclinical settings of severe sepsis. These findings are of interest for strategizing novel therapeutic approaches in inflammatory diseases. PMID:25063706

  7. Recurrent varicocele

    Directory of Open Access Journals (Sweden)

    Katherine Rotker

    2016-01-01

    Full Text Available Varicocele recurrence is one of the most common complications associated with varicocele repair. A systematic review was performed to evaluate varicocele recurrence rates, anatomic causes of recurrence, and methods of management of recurrent varicoceles. The PubMed database was searched using the keywords "recurrent" and "varicocele" as well as the MeSH criteria "recurrent" and "varicocele." Articles were excluded if they were not in English, represented single case reports, focused solely on subclinical varicocele, or focused solely on a pediatric population (age <18). Rates of recurrence vary with the technique of varicocele repair from 0% to 35%. The anatomy of recurrence can be defined by venography. Management of varicocele recurrence can be surgical or via embolization.

  8. Neural Computations Mediating One-Shot Learning in the Human Brain

    Science.gov (United States)

    Lee, Sang Wan; O’Doherty, John P.; Shimojo, Shinsuke

    2015-01-01

    Incremental learning, in which new knowledge is acquired gradually through trial and error, can be distinguished from one-shot learning, in which the brain learns rapidly from only a single pairing of a stimulus and a consequence. Very little is known about how the brain transitions between these two fundamentally different forms of learning. Here we test a computational hypothesis that uncertainty about the causal relationship between a stimulus and an outcome induces rapid changes in the rate of learning, which in turn mediates the transition between incremental and one-shot learning. By using a novel behavioral task in combination with functional magnetic resonance imaging (fMRI) data from human volunteers, we found evidence implicating the ventrolateral prefrontal cortex and hippocampus in this process. The hippocampus was selectively “switched” on when one-shot learning was predicted to occur, while the ventrolateral prefrontal cortex was found to encode uncertainty about the causal association, exhibiting increased coupling with the hippocampus for high-learning rates, suggesting this region may act as a “switch,” turning on and off one-shot learning as required. PMID:25919291

  9. HO-1-mediated macroautophagy: a mechanism for unregulated iron deposition in aging and degenerating neural tissues.

    Science.gov (United States)

    Zukor, Hillel; Song, Wei; Liberman, Adrienne; Mui, Jeannie; Vali, Hojatollah; Fillebeen, Carine; Pantopoulos, Kostas; Wu, Ting-Di; Guerquin-Kern, Jean-Luc; Schipper, Hyman M

    2009-05-01

    Oxidative stress, deposition of non-transferrin iron, and mitochondrial insufficiency occur in the brains of patients with Alzheimer disease (AD) and Parkinson disease (PD). We previously demonstrated that heme oxygenase-1 (HO-1) is up-regulated in AD and PD brain and promotes the accumulation of non-transferrin iron in astroglial mitochondria. Herein, dynamic secondary ion mass spectrometry (SIMS) and other techniques were employed to ascertain (i) the impact of HO-1 over-expression on astroglial mitochondrial morphology in vitro, (ii) the topography of aberrant iron sequestration in astrocytes over-expressing HO-1, and (iii) the role of iron regulatory proteins (IRP) in HO-1-mediated iron deposition. Astroglial hHO-1 over-expression induced cytoplasmic vacuolation, mitochondrial membrane damage, and macroautophagy. HO-1 promoted trapping of redox-active iron and sulfur within many cytopathological profiles without impacting ferroportin, transferrin receptor, ferritin, and IRP2 protein levels or IRP1 activity. Thus, HO-1 activity promotes mitochondrial macroautophagy and sequestration of redox-active iron in astroglia independently of classical iron mobilization pathways. Glial HO-1 may be a rational therapeutic target in AD, PD, and other human CNS conditions characterized by the unregulated deposition of brain iron.

  10. Neural computations mediating one-shot learning in the human brain.

    Directory of Open Access Journals (Sweden)

    Sang Wan Lee

    2015-04-01

    Full Text Available Incremental learning, in which new knowledge is acquired gradually through trial and error, can be distinguished from one-shot learning, in which the brain learns rapidly from only a single pairing of a stimulus and a consequence. Very little is known about how the brain transitions between these two fundamentally different forms of learning. Here we test a computational hypothesis that uncertainty about the causal relationship between a stimulus and an outcome induces rapid changes in the rate of learning, which in turn mediates the transition between incremental and one-shot learning. By using a novel behavioral task in combination with functional magnetic resonance imaging (fMRI) data from human volunteers, we found evidence implicating the ventrolateral prefrontal cortex and hippocampus in this process. The hippocampus was selectively "switched" on when one-shot learning was predicted to occur, while the ventrolateral prefrontal cortex was found to encode uncertainty about the causal association, exhibiting increased coupling with the hippocampus for high-learning rates, suggesting this region may act as a "switch," turning on and off one-shot learning as required.

  11. Neural evidence for competition-mediated suppression in the perception of a single object.

    Science.gov (United States)

    Cacciamani, Laura; Scalf, Paige E; Peterson, Mary A

    2015-11-01

    Multiple objects compete for representation in visual cortex. Competition may also underlie the perception of a single object. Computational models implement object perception as competition between units on opposite sides of a border. The border is assigned to the winning side, which is perceived as an object (or "figure"), whereas the other side is perceived as a shapeless ground. Behavioral experiments suggest that the ground is inhibited to a degree that depends on the extent to which it competed for object status, and that this inhibition is relayed to low-level brain areas. Here, we used fMRI to assess activation for ground regions of task-irrelevant novel silhouettes presented in the left or right visual field (LVF or RVF) while participants performed a difficult task at fixation. Silhouettes were designed so that the insides would win the competition for object status. The outsides (grounds) suggested portions of familiar objects in half of the silhouettes and novel objects in the other half. Because matches to object memories affect the competition, these two types of silhouettes operationalized, respectively, high competition and low competition from the grounds. The results showed that activation corresponding to ground regions was reduced for high- versus low-competition silhouettes in V4, where receptive fields (RFs) are large enough to encompass the familiar objects in the grounds, and in V1/V2, where RFs are much smaller. These results support a theory of object perception involving competition-mediated ground suppression and feedback from higher to lower levels. This pattern of results was observed in the left hemisphere (RVF), but not in the right hemisphere (LVF). One explanation of the lateralized findings is that task-irrelevant silhouettes in the RVF captured attention, allowing us to observe these effects, whereas those in the LVF did not. Experiment 2 provided preliminary behavioral evidence consistent with this possibility.

  12. Organic cation transporter-mediated ergothioneine uptake in mouse neural progenitor cells suppresses proliferation and promotes differentiation into neurons.

    Directory of Open Access Journals (Sweden)

    Takahiro Ishimoto

    Full Text Available The aim of the present study is to clarify the functional expression and physiological role in neural progenitor cells (NPCs) of the carnitine/organic cation transporter OCTN1/SLC22A4, which accepts the naturally occurring food-derived antioxidant ergothioneine (ERGO) as a substrate in vivo. Real-time PCR analysis revealed that mRNA expression of OCTN1 was much higher than that of other organic cation transporters in mouse cultured cortical NPCs. Immunocytochemical analysis showed colocalization of OCTN1 with the NPC marker nestin in cultured NPCs and mouse embryonic carcinoma P19 cells differentiated into neural progenitor-like cells (P19-NPCs). These cells exhibited time-dependent [3H]ERGO uptake. These results demonstrate that OCTN1 is functionally expressed in murine NPCs. Cultured NPCs and P19-NPCs formed neurospheres from clusters of proliferating cells in a culture time-dependent manner. Exposure of cultured NPCs to ERGO or other antioxidants (edaravone and ascorbic acid) led to a significant decrease in the area of neurospheres with concomitant elimination of intracellular reactive oxygen species. Transfection of P19-NPCs with small interfering RNA for OCTN1 markedly promoted formation of neurospheres with a concomitant decrease of [3H]ERGO uptake. On the other hand, exposure of cultured NPCs to ERGO markedly increased the number of cells immunoreactive for the neuronal marker βIII-tubulin, but decreased the number immunoreactive for the astroglial marker glial fibrillary acidic protein (GFAP), with concomitant up-regulation of the neuronal differentiation activator gene Math1. Interestingly, edaravone and ascorbic acid did not affect such differentiation of NPCs, in contrast to the case of proliferation. Knockdown of OCTN1 increased the number of cells immunoreactive for GFAP, but decreased the number immunoreactive for βIII-tubulin, with concomitant down-regulation of Math1 in P19-NPCs. Thus, OCTN1-mediated uptake of ERGO in NPCs inhibits

  13. Mediatization

    DEFF Research Database (Denmark)

    Hjarvard, Stig

    2017-01-01

    Mediatization research shares media effects studies' ambition of answering the difficult questions with regard to whether and how media matter and influence contemporary culture and society. The two approaches nevertheless differ fundamentally in that mediatization research seeks answers to these general questions by distinguishing between two concepts: mediation and mediatization. The media effects tradition generally considers the effects of the media to be a result of individuals being exposed to media content, i.e. effects are seen as an outcome of mediated communication. Mediatization research is concerned with long-term structural changes involving media, culture, and society, i.e. the influences of the media are understood in relation to how media are implicated in social and cultural changes and how these processes come to create new conditions for human communication and interaction...

  14. Cyclosporin A-Mediated Activation of Endogenous Neural Precursor Cells Promotes Cognitive Recovery in a Mouse Model of Stroke

    Directory of Open Access Journals (Sweden)

    Labeeba Nusrat

    2018-04-01

    Full Text Available Cognitive dysfunction following stroke significantly impacts quality of life and functional independence; yet, despite the prevalence and negative impact of cognitive deficits, post-stroke interventions almost exclusively target motor impairments. As a result, current treatment options are limited in their ability to promote post-stroke cognitive recovery. Cyclosporin A (CsA) has been previously shown to improve post-stroke functional recovery of sensorimotor deficits. Interestingly, CsA is a commonly used immunosuppressant and also acts directly on endogenous neural precursor cells (NPCs) in the neurogenic regions of the brain (the periventricular region and the dentate gyrus). The immunosuppressive and NPC activation effects are mediated by calcineurin-dependent and calcineurin-independent pathways, respectively. To develop a cognitive stroke model, focal bilateral lesions were induced in the medial prefrontal cortex (mPFC) of adult mice using endothelin-1. First, we characterized this stroke model in the acute and chronic phase, using problem-solving and memory-based cognitive tests. mPFC stroke resulted in early and persistent deficits in short-term memory, problem-solving and behavioral flexibility, without affecting anxiety. Second, we investigated the effects of acute and chronic CsA treatment on NPC activation, neuroprotection, and tissue damage. Acute CsA administration post-stroke increased the size of the NPC pool. There was no effect on neurodegeneration or lesion volume. Lastly, we looked at the effects of chronic CsA treatment on cognitive recovery. Long-term CsA administration promoted NPC migration toward the lesion site and rescued cognitive deficits to control levels. This study demonstrates that CsA treatment activates the NPC population, promotes migration of NPCs to the site of injury, and leads to improved cognitive recovery following long-term treatment.

  15. Connexin 43-mediated modulation of polarized cell movement and the directional migration of cardiac neural crest cells.

    Science.gov (United States)

    Xu, Xin; Francis, Richard; Wei, Chih Jen; Linask, Kaari L; Lo, Cecilia W

    2006-09-01

    Connexin 43 knockout (Cx43alpha1KO) mice have conotruncal heart defects that are associated with a reduction in the abundance of cardiac neural crest cells (CNCs) targeted to the heart. In this study, we show CNCs can respond to changing fibronectin matrix density by adjusting their migratory behavior, with directionality increasing and speed decreasing with increasing fibronectin density. However, compared with wild-type CNCs, Cx43alpha1KO CNCs show reduced directionality and speed, while CNCs overexpressing Cx43alpha1 from the CMV43 transgenic mice show increased directionality and speed. Altered integrin signaling was indicated by changes in the distribution of vinculin containing focal contacts, and altered temporal response of Cx43alpha1KO and CMV43 CNCs to beta1 integrin function blocking antibody treatment. High resolution motion analysis showed Cx43alpha1KO CNCs have increased cell protrusive activity accompanied by the loss of polarized cell movement. They exhibited an unusual polygonal arrangement of actin stress fibers that indicated a profound change in cytoskeletal organization. Semaphorin 3A, a chemorepellent known to inhibit integrin activation, was found to inhibit CNC motility, but in the Cx43alpha1KO and CMV43 CNCs, cell processes failed to retract with semaphorin 3A treatment. Immunohistochemical and biochemical analyses suggested close interactions between Cx43alpha1, vinculin and other actin-binding proteins. However, dye coupling analysis showed no correlation between gap junction communication level and fibronectin plating density. Overall, these findings indicate Cx43alpha1 may have a novel function in mediating crosstalk with cell signaling pathways that regulate polarized cell movement essential for the directional migration of CNCs.

  16. Exponential stability for stochastic delayed recurrent neural networks with mixed time-varying delays and impulses: the continuous-time case

    International Nuclear Information System (INIS)

    Karthik Raja, U; Leelamani, A; Raja, R; Samidurai, R

    2013-01-01

    In this paper, the exponential stability for a class of stochastic neural networks with time-varying delays and impulsive effects is considered. By constructing suitable Lyapunov functionals and by using the linear matrix inequality optimization approach, we obtain sufficient delay-dependent criteria to ensure the exponential stability of stochastic neural networks with time-varying delays and impulses. Two numerical examples with simulation results are provided to illustrate the effectiveness of the obtained results over those already existing in the literature. (paper)
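
    For readers unfamiliar with the linear matrix inequality (LMI) approach used above, the sketch below illustrates only its general flavor on a drastically simplified, delay-free linear system x' = Ax; it is not the paper's delay-dependent criterion, and the matrix A, the tolerance, and the use of the cvxpy solver are assumptions. Feasibility of the LMI (P positive definite with AᵀP + PA negative definite) certifies exponential stability via the Lyapunov function V(x) = xᵀPx.

```python
# Hypothetical sketch of an LMI feasibility check for exponential stability
# of a delay-free linear system x' = A x (not the paper's criterion).
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])            # toy decay/connection matrix (assumed)
n = A.shape[0]
eps = 1e-6                             # small margin to enforce strict inequalities

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                     # P > 0
               A.T @ P + P @ A << -eps * np.eye(n)]      # A'P + PA < 0
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("LMI feasible:", prob.status == cp.OPTIMAL)
if prob.status == cp.OPTIMAL:
    print("Lyapunov matrix P =\n", P.value)
```

    In the delayed, stochastic, impulsive setting of the paper, the same feasibility idea applies, but the LMIs involve additional matrix variables arising from the Lyapunov functional terms that account for the delays and impulses.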

  17. A Mediator Role of Perceived Organizational Support in Workplace Deviance Behaviors, Organizational Citizenship and Job Satisfaction Relations: A Survey Conducted With Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Kürşad Zorlu

    2016-01-01

    Full Text Available The aim of the research is to estimate the effect of workplace deviance behavior on organizational citizenship and job satisfaction and to examine the mediating role of perceived organizational support in these relations. Theoretical background and a literature review are provided first, followed by the empirical part, which reports a questionnaire administered to the employees of Kirsehir Municipality. Validity and reliability tests were performed successfully, and the artificial neural network method was used for the analysis. Alongside the means and correlation values of the variables, the artificial neural networks were modelled by specifying the inputs and outputs. According to the findings, workplace deviance behavior has a negative impact on organizational citizenship and job satisfaction, and perceived organizational support can act as a mediator that attenuates this effect. Considering the use of artificial neural networks as the analysis method and the difficulties in measuring workplace deviance behavior, the findings can be regarded as having a certain degree of originality within the management discipline.

  18. A Mediator Role of Perceived Organizational Support in Workplace Deviance Behaviors, Organizational Citizenship and Job Satisfaction Relations: A Survey Conducted With Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Kursad Zorlu

    2014-07-01

    Full Text Available The aim of the research is to estimate the effect of workplace deviance behavior on organizational citizenship and job satisfaction and to examine the mediating role of perceived organizational support in these relations. Theoretical background and a literature review are provided first, followed by the empirical part, which reports a questionnaire administered to the employees of Kirsehir Municipality. Validity and reliability tests were performed successfully, and the artificial neural network method was used for the analysis. Alongside the means and correlation values of the variables, the artificial neural networks were modelled by specifying the inputs and outputs. According to the findings, workplace deviance behavior has a negative impact on organizational citizenship and job satisfaction, and perceived organizational support can act as a mediator that attenuates this effect. Considering the use of artificial neural networks as the analysis method and the difficulties in measuring workplace deviance behavior, the findings can be regarded as having a certain degree of originality within the management discipline.

  19. Recurrent Laughter-induced Syncope

    NARCIS (Netherlands)

    Gaitatzis, A.; Petzold, A.F.S.

    2012-01-01

    Introduction: Syncope is a common presenting complaint in Neurology clinics or Emergency departments, but its causes are sometimes difficult to diagnose. Apart from vasovagal attacks, other benign, neurally mediated syncopes include "situational" syncopes, which occur after urination, coughing,

  20. Gold nanoparticles functionalized with a fragment of the neural cell adhesion molecule L1 stimulate L1-mediated functions

    Science.gov (United States)

    Schulz, Florian; Lutz, David; Rusche, Norman; Bastús, Neus G.; Stieben, Martin; Höltig, Michael; Grüner, Florian; Weller, Horst; Schachner, Melitta; Vossmeyer, Tobias; Loers, Gabriele

    2013-10-01

    The neural cell adhesion molecule L1 is involved in nervous system development and promotes regeneration in animal models of acute and chronic injury of the adult nervous system. To translate these conducive functions into therapeutic approaches, a 22-mer peptide that encompasses a minimal and functional L1 sequence of the third fibronectin type III domain of murine L1 was identified and conjugated to gold nanoparticles (AuNPs) to obtain constructs that interact homophilically with the extracellular domain of L1 and trigger the cognate beneficial L1-mediated functions. Covalent conjugation was achieved by reacting mixtures of two cysteine-terminated forms of this L1 peptide and thiolated poly(ethylene) glycol (PEG) ligands (~2.1 kDa) with citrate stabilized AuNPs of two different sizes (~14 and 40 nm in diameter). By varying the ratio of the L1 peptide-PEG mixtures, an optimized layer composition was achieved that resulted in the expected homophilic interaction of the AuNPs. These AuNPs were stable as tested over a time period of 30 days in artificial cerebrospinal fluid and interacted with the extracellular domain of L1 on neurons and Schwann cells, as could be shown by using cells from wild-type and L1-deficient mice. In vitro, the L1-derivatized particles promoted neurite outgrowth and survival of neurons from the central and peripheral nervous system and stimulated Schwann cell process formation and proliferation. These observations raise the hope that, in combination with other therapeutic approaches, L1 peptide-functionalized AuNPs may become a useful tool to ameliorate the deficits resulting from acute and chronic injuries of the mammalian nervous system.

  1. Recurrent Meningitis.

    Science.gov (United States)

    Rosenberg, Jon; Galen, Benjamin T

    2017-07-01

    Recurrent meningitis is a rare clinical scenario that can be self-limiting or life threatening depending on the underlying etiology. This review describes the causes, risk factors, treatment, and prognosis for recurrent meningitis. As a general overview of a broad topic, the aim of this review is to provide clinicians with a comprehensive differential diagnosis to aid in the evaluation and management of a patient with recurrent meningitis. New developments related to understanding the pathophysiology of recurrent meningitis are as scarce as studies evaluating the treatment and prevention of this rare disorder. A trial evaluating oral valacyclovir suppression after HSV-2 meningitis did not demonstrate a benefit in preventing recurrences. The data on prophylactic antibiotics after basilar skull fractures do not support their use. Intrathecal trastuzumab has shown promise in treating leptomeningeal carcinomatosis from HER-2 positive breast cancer. Monoclonal antibodies used to treat cancer and autoimmune diseases are new potential causes of drug-induced aseptic meningitis. Despite their potential for causing recurrent meningitis, the clinical entities reviewed herein are not frequently discussed together given that they are a heterogeneous collection of unrelated, rare diseases. Epidemiologic data on recurrent meningitis are lacking. The syndrome of recurrent benign lymphocytic meningitis described by Mollaret in 1944 was later found to be closely related to HSV-2 reactivation, but HSV-2 is by no means the only etiology of recurrent aseptic meningitis. While the mainstay of treatment for recurrent meningitis is supportive care, it is paramount to ensure that reversible and treatable causes have been addressed for further prevention.

  2. Equivalence of Equilibrium Propagation and Recurrent Backpropagation

    OpenAIRE

    Scellier, Benjamin; Bengio, Yoshua

    2017-01-01

    Recurrent Backpropagation and Equilibrium Propagation are algorithms for fixed point recurrent neural networks which differ in their second phase. In the first phase, both algorithms converge to a fixed point which corresponds to the configuration where the prediction is made. In the second phase, Recurrent Backpropagation computes error derivatives whereas Equilibrium Propagation relaxes to another nearby fixed point. In this work we establish a close connection between these two algorithms....
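
    As a minimal, hypothetical illustration of the two phases described above (not the authors' code; the network size, hard-sigmoid nonlinearity, nudging strength β and learning rate are assumptions), the sketch below runs a free-phase relaxation to the prediction fixed point shared by both algorithms, then Equilibrium Propagation's weakly clamped second phase with its contrastive weight update; Recurrent Backpropagation would instead compute error derivatives at the free fixed point.

```python
# Hypothetical numpy sketch of Equilibrium Propagation's two phases on a
# tiny fixed-point recurrent network (input -> hidden <-> output).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2                 # toy layer sizes (assumed)
W1 = rng.normal(0, 0.1, (n_hid, n_in))       # input -> hidden weights
W2 = rng.normal(0, 0.1, (n_out, n_hid))      # hidden <-> output weights

rho = lambda s: np.clip(s, 0.0, 1.0)         # hard-sigmoid activation

def relax(x, y=None, beta=0.0, steps=200, dt=0.1, h=None, o=None):
    """Relax hidden/output units to a fixed point; beta > 0 weakly nudges
    the output toward the target y (the second phase of EP)."""
    h = np.zeros(n_hid) if h is None else h.copy()
    o = np.zeros(n_out) if o is None else o.copy()
    for _ in range(steps):
        dh = -h + rho(W1 @ x + W2.T @ rho(o))
        do = -o + rho(W2 @ rho(h))
        if beta > 0.0 and y is not None:
            do += beta * (y - o)             # weak clamping toward the target
        h, o = h + dt * dh, o + dt * do
    return h, o

x, y = rng.random(n_in), np.array([1.0, 0.0])

# First phase (shared by both algorithms): free relaxation to the prediction.
h_free, o_free = relax(x)

# Second phase of Equilibrium Propagation: relax to a nearby nudged fixed
# point, then apply the contrastive (Hebbian-style) update.
beta, lr = 0.5, 0.05
h_cl, o_cl = relax(x, y, beta=beta, h=h_free, o=o_free)
W2 += lr / beta * (np.outer(rho(o_cl), rho(h_cl)) - np.outer(rho(o_free), rho(h_free)))
W1 += lr / beta * (np.outer(rho(h_cl), x) - np.outer(rho(h_free), x))
print("free-phase output:", o_free, "nudged output:", o_cl)
```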

  3. Clinically oriented device programming in bradycardia patients: part 2 (atrioventricular blocks and neurally mediated syncope). Proposals from AIAC (Italian Association of Arrhythmology and Cardiac Pacing).

    Science.gov (United States)

    Palmisano, Pietro; Ziacchi, Matteo; Biffi, Mauro; Ricci, Renato P; Landolina, Maurizio; Zoni-Berisso, Massimo; Occhetta, Eraldo; Maglia, Giampiero; Botto, Gianluca; Padeletti, Luigi; Boriani, Giuseppe

    2018-04-01

    The purpose of this two-part consensus document is to provide specific suggestions (based on an extensive literature review) on appropriate pacemaker settings in relation to patients' clinical features. In part 2, criteria for pacemaker choice and programming in atrioventricular blocks and neurally mediated syncope are proposed. The atrioventricular blocks can be paroxysmal or persistent, isolated or associated with sinus node disease. Neurally mediated syncope can be related to carotid sinus syndrome or cardioinhibitory vasovagal syncope. In sinus rhythm with persistent atrioventricular block, we considered the activation of mode-switch algorithms and of algorithms for auto-adaptive management of the ventricular pacing output to be appropriate. If the atrioventricular block is paroxysmal, in addition to the algorithms mentioned above, algorithms to maximize intrinsic atrioventricular conduction should be activated. When sinus node disease is associated with atrioventricular block, the activation of the rate-responsive function in patients with chronotropic incompetence is appropriate. In permanent atrial fibrillation with atrioventricular block, algorithms for auto-adaptive management of the ventricular pacing output should be activated. If the atrioventricular block is persistent, the activation of the rate-responsive function is appropriate. In carotid sinus syndrome, adequate rate hysteresis should be programmed. In vasovagal syncope, specialized sensing and pacing algorithms designed for reflex syncope prevention should be activated.

  4. Novel High-Viscosity Polyacrylamidated Chitosan for Neural Tissue Engineering: Fabrication of Anisotropic Neurodurable Scaffold via Molecular Disposition of Persulfate-Mediated Polymer Slicing and Complexation

    Directory of Open Access Journals (Sweden)

    Viness Pillay

    2012-10-01

    Full Text Available Macroporous polyacrylamide-grafted-chitosan scaffolds for neural tissue engineering were fabricated with varied synthetic and viscosity profiles. A novel approach and mechanism were utilized for polyacrylamide grafting onto chitosan using potassium persulfate (KPS)-mediated degradation of both polymers under a thermally controlled environment. Commercially available high molecular mass polyacrylamide was used instead of the acrylamide monomer for graft copolymerization. This grafting strategy yielded an enhanced grafting efficiency (GE = 92%), grafting ratio (GR = 263%), intrinsic viscosity (IV = 5.231 dL/g) and viscometric average molecular mass (MW = 1.63 × 10⁶ Da) compared with the known acrylamide-monomer route, which gave GE = 83%, GR = 178%, IV = 3.901 dL/g and MW = 1.22 × 10⁶ Da. Image processing analysis of SEM images of the newly grafted neurodurable scaffold was undertaken based on the polymer-pore threshold. Attenuated Total Reflectance-FTIR spectral analyses in conjunction with DSC were used for the characterization and comparison of the newly grafted copolymers. Static Lattice Atomistic Simulations were employed to investigate and elucidate the copolymeric assembly and reaction mechanism by exploring the spatial disposition of chitosan and polyacrylamide with respect to the reactional profile of potassium persulfate. Interestingly, potassium persulfate, a peroxide, was found to play a dual role: initially degrading the polymers—“polymer slicing”—thereby initiating the formation of free radicals, and subsequently leading to synthesis of the high molecular mass polyacrylamide-grafted-chitosan (PAAm-g-CHT)—“polymer complexation”. Furthermore, the applicability of the uniquely grafted scaffold for neural tissue engineering was evaluated via PC12 neuronal cell seeding. The novel PAAm-g-CHT exhibited superior neurocompatibility in terms of cell infiltration owing to the anisotropic porous architecture, high molecular mass mediated robustness

  5. Multiple data fusion for rainfall estimation using a NARX-based recurrent neural network – the development of the REIINN model

    International Nuclear Information System (INIS)

    Ang, M R C O; Gonzalez, R M; Castro, P P M

    2014-01-01

    Rainfall, one of the important elements of the hydrologic cycle, is also the most difficult to model. Thus, accurate rainfall estimation is necessary, especially in localized catchment areas where the variability of rainfall is extremely high. Moreover, early warning of severe rainfall through timely and accurate estimation and forecasting could help prevent disasters from flooding. This paper presents the development of two rainfall estimation models that utilize a NARX-based neural network architecture, namely REIINN 1 and REIINN 2. These REIINN models, or Rainfall Estimation by Information Integration using Neural Networks, were trained using MTSAT cloud-top temperature (CTT) images and rainfall rates from the combined rain gauge and TMPA 3B40RT datasets. Model performance was assessed using two metrics – root mean square error (RMSE) and correlation coefficient (R). REIINN 1 yielded an RMSE of 8.1423 mm/3h and an overall R of 0.74652, while REIINN 2 yielded an RMSE of 5.2303 and an overall R of 0.90373. The results, especially those of REIINN 2, are very promising for satellite-based rainfall estimation at the catchment scale. It is believed that model performance and accuracy will greatly improve with denser and more spatially distributed in-situ rainfall measurements to calibrate the model. The models proved the viability of using remote sensing images, with their good spatial coverage, near-real-time availability, and relatively low acquisition cost, as an alternative source of rainfall estimates to complement existing ground-based measurements.
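
    As an illustration of the NARX idea underlying the REIINN models, the sketch below builds a toy nonlinear autoregressive network with exogenous input: lagged exogenous values (standing in for cloud-top temperature) and lagged past outputs are concatenated, passed through a random hidden layer, and a linear readout is fit by least squares. All data, sizes and the training shortcut are assumptions for illustration only, not the REIINN implementation.

    # Minimal NARX-style sketch (illustrative only): the next rainfall estimate
    # depends on lagged exogenous inputs and on lagged past outputs fed back
    # as additional inputs.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 500
    ctt = rng.normal(size=T)                       # stand-in exogenous series
    rain = np.convolve(np.maximum(-ctt, 0), [0.5, 0.3, 0.2], mode="same")  # toy target

    def narx_features(u, y, k, lags=3):
        """Concatenate lagged exogenous inputs u and lagged outputs y at time k."""
        return np.concatenate([u[k - lags:k], y[k - lags:k]])

    lags, n_h = 3, 16
    Win = rng.normal(scale=0.5, size=(n_h, 2 * lags))
    bh = np.zeros(n_h)

    # Build hidden features in "open loop" (teacher forcing: true past outputs),
    # then fit the linear readout by least squares.
    H, targets = [], []
    for k in range(lags, T):
        H.append(np.tanh(Win @ narx_features(ctt, rain, k, lags) + bh))
        targets.append(rain[k])
    H, targets = np.array(H), np.array(targets)
    w_out, *_ = np.linalg.lstsq(H, targets, rcond=None)

    pred = H @ w_out
    rmse = np.sqrt(np.mean((pred - targets) ** 2))
    r = np.corrcoef(pred, targets)[0, 1]
    print(f"toy RMSE={rmse:.3f}, R={r:.3f}")       # same metrics the record reports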

  6. The effect of the inner-hair-cell mediated transduction on the shape of neural tuning curves

    Science.gov (United States)

    Altoè, Alessandro; Pulkki, Ville; Verhulst, Sarah

    2018-05-01

    The inner hair cells of the mammalian cochlea transform the vibrations of their stereocilia into releases of neurotransmitter at the ribbon synapses, thereby controlling the activity of the afferent auditory fibers. The mechanical-to-neural transduction is a highly nonlinear process and it introduces differences between the frequency-tuning of the stereocilia and that of the afferent fibers. Using a computational model of the inner hair cell that is based on in vitro data, we estimated that smaller vibrations of the stereocilia are necessary to drive the afferent fibers above threshold at low (≤0.5 kHz) than at high (≥4 kHz) driving frequencies. In the base of the cochlea, the transduction process affects the low-frequency tails of neural tuning curves. In particular, it introduces differences between the frequency-tuning of the stereocilia and that of the auditory fibers resembling those between basilar membrane velocity and auditory fibers tuning curves in the chinchilla base. For units with a characteristic frequency between 1 and 4 kHz, the transduction process yields shallower neural than stereocilia tuning curves as the characteristic frequency decreases. This study proposes that transduction contributes to the progressive broadening of neural tuning curves from the base to the apex.

  7. HDAC inhibition amplifies gap junction communication in neural progenitors: Potential for cell-mediated enzyme prodrug therapy

    International Nuclear Information System (INIS)

    Khan, Zahidul; Akhtar, Monira; Asklund, Thomas; Juliusson, Bengt; Almqvist, Per M.; Ekstroem, Tomas J.

    2007-01-01

    Enzyme prodrug therapy using neural progenitor cells (NPCs) as delivery vehicles has been applied in animal models of gliomas and relies on gap junction communication (GJC) between delivery and target cells. This study investigated the effects of histone deacetylase (HDAC) inhibitors on GJC for the purpose of facilitating transfer of therapeutic molecules from recombinant NPCs. We studied a novel immortalized midbrain cell line, NGC-407, of embryonic human origin having neural precursor characteristics, as a potential delivery vehicle. The expression of the gap junction protein connexin 43 (Cx43) was analyzed by western blot and immunocytochemistry. While Cx43 levels were decreased in untreated differentiating NGC-407 cells, the HDAC inhibitor 4-phenylbutyrate (4-PB) increased Cx43 expression along with increased membranous deposition in both proliferating and differentiating cells. Simultaneously, the Ser279/282-phosphorylated form of Cx43 was reduced by 4-PB in both culture conditions. The 4-PB effect in NGC-407 cells was verified by using HNSC.100 human neural progenitors and Trichostatin A. Improved functional GJC is of imperative importance for therapeutic strategies involving intercellular transport of low molecular-weight compounds. We show here an enhancement by 4-PB of the functional GJC among NGC-407 cells, as well as between NGC-407 and human glioma cells, as indicated by increased fluorescent dye transfer

  8. Estimating Time Series Soil Moisture by Applying Recurrent Nonlinear Autoregressive Neural Networks to Passive Microwave Data over the Heihe River Basin, China

    Directory of Open Access Journals (Sweden)

    Zheng Lu

    2017-06-01

    Full Text Available A method using a nonlinear auto-regressive neural network with exogenous input (NARXnn) to retrieve time series soil moisture (SM) that is spatially and temporally continuous and high quality over the Heihe River Basin (HRB) in China was investigated in this study. The input training data consisted of the X-band dual polarization brightness temperature (TB) and the Ka-band V polarization TB from the Advanced Microwave Scanning Radiometer II (AMSR2), the Global Land Satellite product (GLASS) Leaf Area Index (LAI), precipitation from the Tropical Rainfall Measuring Mission (TRMM) and the Global Precipitation Measurement (GPM), and a global 30 arc-second elevation (GTOPO-30). The output training data were generated from fused SM products of the Japan Aerospace Exploration Agency (JAXA) and the Land Surface Parameter Model (LPRM). The reprocessed fused SM from two years (2013 and 2014) was inputted into the NARXnn for training; subsequently, SM during a third year (2015) was estimated. Direct and indirect validations were then performed during the period 2015 by comparing with in situ measurements, SM from JAXA, LPRM and the Global Land Data Assimilation System (GLDAS), as well as precipitation data from TRMM and GPM. The results showed that the SM predictions from NARXnn performed best, as indicated by their higher correlation coefficients (R ≥ 0.85 for the whole year of 2015), lower Bias values (absolute value of Bias ≤ 0.02) and root mean square error values (RMSE ≤ 0.06), and their improved response to precipitation. This method is being used to produce the NARXnn SM product over the HRB in China.

  9. Recurrent vulvovaginitis.

    Science.gov (United States)

    Powell, Anna M; Nyirjesy, Paul

    2014-10-01

    Vulvovaginitis (VV) is one of the problems most commonly encountered by a gynecologist. Many women frequently self-treat with over-the-counter medications, and may present to their health-care provider after a treatment failure. Vulvovaginal candidiasis, bacterial vaginosis, and trichomoniasis may occur as discrete or recurrent episodes, and have been associated with significant treatment cost and morbidity. We present an update on diagnostic capabilities and treatment modalities that address recurrent and refractory episodes of VV. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks based on gradient descent has the significant drawback of slow convergence. A Gauss-Newton method-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.
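
    The following sketch shows a standard recursive least squares (RLS) recursion of the kind the record builds on, applied to a linear readout; the full method additionally backpropagates errors through the local recurrent terms. The dimensions, forgetting factor and synthetic data are illustrative assumptions, not the paper's experiments.

    # Sketch of a recursive least squares (RLS) update for a linear readout layer,
    # a simplified stand-in for the Gauss-Newton/RLS scheme described above.
    import numpy as np

    rng = np.random.default_rng(2)
    n_features = 5
    w = np.zeros(n_features)                 # readout weights being estimated
    P = np.eye(n_features) * 1e3             # inverse input-correlation estimate
    lam = 0.99                               # forgetting factor

    w_true = rng.normal(size=n_features)     # hypothetical "plant" to identify
    for _ in range(1000):
        phi = rng.normal(size=n_features)    # hidden-layer activations at this step
        d = w_true @ phi + 0.01 * rng.normal()   # desired (noisy) output
        # Standard RLS recursion: gain, prediction error, weight and P updates.
        k = P @ phi / (lam + phi @ P @ phi)
        e = d - w @ phi
        w = w + k * e
        P = (P - np.outer(k, phi @ P)) / lam

    print("estimation error:", np.linalg.norm(w - w_true))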

  11. The default mode network and recurrent depression: a neurobiological model of cognitive risk factors.

    Science.gov (United States)

    Marchetti, Igor; Koster, Ernst H W; Sonuga-Barke, Edmund J; De Raedt, Rudi

    2012-09-01

    A neurobiological account of cognitive vulnerability for recurrent depression is presented based on recent developments in resting state neural networks. We propose that alterations in the interplay between task positive (TP) and task negative (TN) elements of the Default Mode Network (DMN) act as a neurobiological risk factor for recurrent depression, mediated by cognitive mechanisms. In this framework, depression is characterized by an imbalance between TN and TP components leading to an overpowering of TP by TN activity. The TN-TP imbalance is associated with a dysfunctional internally-focused cognitive style as well as a failure to attenuate TN activity in the transition from rest to task. Thus we propose the TN-TP imbalance as an overarching neural mechanism involved in crucial cognitive risk factors for recurrent depression, namely rumination, impaired attentional control, and cognitive reactivity. During remission the TN-TP imbalance persists, sustaining vulnerability to recurrent depression. Empirical data supporting this model are reviewed. Finally, we specify how this framework can guide future research efforts.

  12. Türkiye’de Enflasyonun İleri ve Geri Beslemeli Yapay Sinir Ağlarının Melez Yaklaşımı ile Öngörüsü = Forecasting of Turkey Inflation with Hybrid of Feed Forward and Recurrent Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Rezan USLU

    2010-01-01

    Full Text Available Obtaining inflation predictions is an important problem, and accurate predictions lead to more accurate decisions. Various time series techniques have been used in the literature for inflation prediction. Recently, the Artificial Neural Network (ANN) has been preferred for time series prediction problems because of its flexible modeling capacity. An artificial neural network can be applied easily to any time series since it does not require prior conditions such as a specific linear or curvilinear model form, stationarity, or a normal distribution. In this study, predictions of the Consumer Price Index (CPI) were obtained using feed forward and recurrent artificial neural networks. A new combined forecast based on ANN is proposed, in which the predictions of the individual ANN models are used as input data.

  13. Ethanol mediated As(III) adsorption onto Zn-loaded pinecone biochar: Experimental investigation, modeling, and optimization using hybrid artificial neural network-genetic algorithm approach.

    Science.gov (United States)

    Zafar, Mohd; Van Vinh, N; Behera, Shishir Kumar; Park, Hung-Suck

    2017-04-01

    Organic matters (OMs) and their oxidization products often influence the fate and transport of heavy metals in subsurface aqueous systems through interaction with mineral surfaces. This study investigates the ethanol (EtOH)-mediated As(III) adsorption onto Zn-loaded pinecone (PC) biochar through batch experiments conducted under a Box-Behnken design. The effect of EtOH on the As(III) adsorption mechanism was quantitatively elucidated by fitting the experimental data using artificial neural network and quadratic modeling approaches. The quadratic model could describe the limiting nature of EtOH and pH on As(III) adsorption, whereas the neural network revealed the stronger influence of EtOH (64.5%) followed by pH (20.75%) and As(III) concentration (14.75%) on the adsorption phenomena. Besides, the interaction among process variables indicated that EtOH enhances As(III) adsorption over a pH range of 2 to 7, possibly due to facilitation of a ligand-metal(Zn) binding complexation mechanism. Eventually, the hybrid response surface model-genetic algorithm (RSM-GA) approach predicted a better optimal solution than RSM, i.e., the adsorptive removal of As(III) (10.47 μg/g) is facilitated at 30.22 mg C/L of EtOH with an initial As(III) concentration of 196.77 μg/L at pH 5.8. The implication of this investigation might help in understanding the application of biochar for removal of various As(III) species in the presence of OM. Copyright © 2016. Published by Elsevier B.V.

  14. Parallel neural pathways in higher visual centers of the Drosophila brain that mediate wavelength-specific behavior

    Directory of Open Access Journals (Sweden)

    Hideo eOtsuna

    2014-02-01

    Full Text Available Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior.

  15. Delamination of neural crest cells requires transient and reversible Wnt inhibition mediated by Dact1/2.

    Science.gov (United States)

    Rabadán, M Angeles; Herrera, Antonio; Fanlo, Lucia; Usieto, Susana; Carmona-Fontaine, Carlos; Barriga, Elias H; Mayor, Roberto; Pons, Sebastián; Martí, Elisa

    2016-06-15

    Delamination of neural crest (NC) cells is a bona fide physiological model of epithelial-to-mesenchymal transition (EMT), a process that is influenced by Wnt/β-catenin signalling. Using two in vivo models, we show that Wnt/β-catenin signalling is transiently inhibited at the time of NC delamination. In attempting to define the mechanism underlying this inhibition, we found that the scaffold proteins Dact1 and Dact2, which are expressed in pre-migratory NC cells, are required for NC delamination in Xenopus and chick embryos, whereas they do not affect the motile properties of migratory NC cells. Dact1/2 inhibit Wnt/β-catenin signalling upstream of the transcriptional activity of T cell factor (TCF), which is required for EMT to proceed. Dact1/2 regulate the subcellular distribution of β-catenin, preventing β-catenin from acting as a transcriptional co-activator to TCF, yet without affecting its stability. Together, these data identify a novel yet important regulatory element that inhibits β-catenin signalling, which then affects NC delamination. © 2016. Published by The Company of Biologists Ltd.

  16. Recurrent Spatial Transformer Networks

    DEFF Research Database (Denmark)

    Sønderby, Søren Kaae; Sønderby, Casper Kaae; Maaløe, Lars

    2015-01-01

    We integrate the recently proposed spatial transformer network (SPN) [Jaderberg et al. 2015] into a recurrent neural network (RNN) to form an RNN-SPN model. We use the RNN-SPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single digit error of 1.5% compared to 2.9% for a convolutional network and 2.0% for a convolutional network with SPN layers. The SPN outputs a zoomed, rotated and skewed version of the input image. We investigate different down-sampling factors (ratio of pixels in input and output) for the SPN and show that the RNN-SPN model is able to down-sample the input...

  17. Automated Item Generation with Recurrent Neural Networks.

    Science.gov (United States)

    von Davier, Matthias

    2018-03-12

    Utilizing technology for automated item generation is not a new idea. However, test items used in commercial testing programs or in research are still predominantly written by humans, in most cases by content experts or professional item writers. Human experts are a limited resource and testing agencies incur high costs in the process of continuous renewal of item banks to sustain testing programs. Using algorithms instead holds the promise of providing unlimited resources for this crucial part of assessment development. The approach presented here deviates in several ways from previous attempts to solve this problem. In the past, automatic item generation relied either on generating clones of narrowly defined item types such as those found in language free intelligence tests (e.g., Raven's progressive matrices) or on an extensive analysis of task components and derivation of schemata to produce items with pre-specified variability that are hoped to have predictable levels of difficulty. It is somewhat unlikely that researchers utilizing these previous approaches would look at the proposed approach with favor; however, recent applications of machine learning show success in solving tasks that seemed impossible for machines not too long ago. The proposed approach uses deep learning to implement probabilistic language models, not unlike what Google brain and Amazon Alexa use for language processing and generation.

  18. The role of arachidonic acid metabolites in signal transduction in an identified neural network mediating presynaptic inhibition in Aplysia

    International Nuclear Information System (INIS)

    Shapiro, E.; Piomelli, D.; Feinmark, S.; Vogel, S.; Chin, G.; Schwartz, J.H.

    1988-01-01

    Neuromodulation is a form of signal transduction that results in the biochemical control of neuronal excitability. Many neurotransmitters act through second messengers, and the examination of biochemical cascades initiated by neurotransmitter-receptor interaction has advanced the understanding of how information is acquired and stored in the nervous system. For example, 5-HT and other facilitatory transmitters increase cAMP in sensory neurons of Aplysia, which enhances excitability and facilitates transmitter output. The authors have examined the role of arachidonic acid metabolites in a neuronal circuit mediating presynaptic inhibition. L32 cells are a cluster of putative histaminergic neurons that each make dual-action synaptic potentials onto two follower neurons, L10 and L14. The synaptic connections, biophysical properties, and roles in behavior of the L10 and L14 follower cells have been well studied. The types of ion channels causing each component of the L32-L10 and L32-L14 dual actions have been characterized, and application of histamine mimics the effects of stimulating L32 in both L10 and L14

  19. Capturing non-local interactions by long short-term memory bidirectional recurrent neural networks for improving prediction of protein secondary structure, backbone angles, contact numbers and solvent accessibility.

    Science.gov (United States)

    Heffernan, Rhys; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-09-15

    The accuracy of predicting protein local and global structural properties such as secondary structure and solvent accessible surface area has been stagnant for many years because of the challenge of accounting for non-local interactions between amino acid residues that are close in three-dimensional structural space but far from each other in their sequence positions. All existing machine-learning techniques relied on a sliding window of 10-20 amino acid residues to capture some 'short to intermediate' non-local interactions. Here, we employed Long Short-Term Memory (LSTM) Bidirectional Recurrent Neural Networks (BRNNs), which are capable of capturing long range interactions without using a window. We showed that the application of LSTM-BRNN to the prediction of protein structural properties makes the most significant improvement for residues with the most long-range contacts (|i-j| > 19) over a previous window-based, deep-learning method SPIDER2. Capturing long-range interactions allows the accuracy of three-state secondary structure prediction to reach 84% and the correlation coefficient between predicted and actual solvent accessible surface areas to reach 0.80, plus a reduction of 5%, 10%, 5% and 10% in the mean absolute error for backbone ϕ, ψ, θ and τ angles, respectively, from SPIDER2. More significantly, 27% of 182724 40-residue models directly constructed from predicted Cα atom-based θ and τ have structures similar to their corresponding native structures (6 Å RMSD or less), which is 3% better than models built from ϕ and ψ angles. We expect the method to be useful for assisting protein structure and function prediction. The method is available as a SPIDER3 server and standalone package at http://sparks-lab.org. Contact: yaoqi.zhou@griffith.edu.au or yuedong.yang@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved.
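
    To make the bidirectional idea concrete, the sketch below runs two simple recurrent passes (left-to-right and right-to-left) over a residue sequence and combines them for a per-residue three-state readout. Plain tanh cells stand in for the LSTM cells used by SPIDER3, and all features, sizes and weights are made up for illustration.

    # Illustrative bidirectional recurrent pass over a residue sequence with a
    # per-residue 3-state readout (simple tanh cells instead of LSTM cells;
    # all sizes and inputs here are invented).
    import numpy as np

    rng = np.random.default_rng(3)
    L, n_in, n_h, n_states = 30, 20, 16, 3       # sequence length, feature dim, hidden, classes
    x = rng.normal(size=(L, n_in))               # per-residue input features (e.g. PSSM-like)

    Wf, Uf = rng.normal(scale=0.3, size=(n_h, n_in)), rng.normal(scale=0.3, size=(n_h, n_h))
    Wb, Ub = rng.normal(scale=0.3, size=(n_h, n_in)), rng.normal(scale=0.3, size=(n_h, n_h))
    Wo = rng.normal(scale=0.3, size=(n_states, 2 * n_h))

    def run(W, U, seq):
        """Run a simple tanh recurrence over seq and return all hidden states."""
        h, out = np.zeros(n_h), []
        for v in seq:
            h = np.tanh(W @ v + U @ h)
            out.append(h)
        return np.array(out)

    h_fwd = run(Wf, Uf, x)                       # left-to-right context
    h_bwd = run(Wb, Ub, x[::-1])[::-1]           # right-to-left context, re-aligned
    h_both = np.concatenate([h_fwd, h_bwd], axis=1)

    logits = h_both @ Wo.T
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(probs.shape)   # (L, 3): per-residue secondary-structure probabilities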

  20. Neural correlates of cerebellar-mediated timing during finger tapping in children with fetal alcohol spectrum disorders.

    Science.gov (United States)

    du Plessis, Lindie; Jacobson, Sandra W; Molteno, Christopher D; Robertson, Frances C; Peterson, Bradley S; Jacobson, Joseph L; Meintjes, Ernesta M

    2015-01-01

    Classical eyeblink conditioning (EBC), an elemental form of learning, is among the most sensitive indicators of fetal alcohol spectrum disorders. The cerebellum plays a key role in maintaining timed movements with millisecond accuracy required for EBC. Functional MRI (fMRI) was used to identify cerebellar regions that mediate timing in healthy controls and the degree to which these areas are also recruited in children with prenatal alcohol exposure. fMRI data were acquired during an auditory rhythmic/non-rhythmic finger tapping task. We present results for 17 children with fetal alcohol syndrome (FAS) or partial FAS, 17 heavily exposed (HE) nonsyndromal children and 16 non- or minimally exposed controls. Controls showed greater cerebellar blood oxygen level dependent (BOLD) activation in right crus I, vermis IV-VI, and right lobule VI during rhythmic than non-rhythmic finger tapping. The alcohol-exposed children showed smaller activation increases during rhythmic tapping in right crus I than the control children and the most severely affected children with either FAS or PFAS showed smaller increases in vermis IV-V. Higher levels of maternal alcohol intake per occasion during pregnancy were associated with reduced activation increases during rhythmic tapping in all four regions associated with rhythmic tapping in controls. The four cerebellar areas activated by the controls more during rhythmic than non-rhythmic tapping have been implicated in the production of timed responses in several previous studies. These data provide evidence linking binge-like drinking during pregnancy to poorer function in cerebellar regions involved in timing and somatosensory processing needed for complex tasks requiring precise timing.

  1. Contemporary deep recurrent learning for recognition

    Science.gov (United States)

    Iftekharuddin, K. M.; Alam, M.; Vidyaratne, L.

    2017-05-01

    Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can get hefty and computationally intensive with increasing complexity of the task. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN) based hierarchical neural network for object detection. CSRN has been shown to be more effective at solving complex tasks such as maze traversal and image processing than generic feed-forward networks. While deep neural networks (DNN) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent in neural networks with recurrency. Further, our work introduces a deep hierarchy in the SRN for object recognition. The simultaneous recurrency results in an unfolding effect of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper presents experiments on face, facial expression and character recognition tasks using the novel deep recurrent model and compares recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN based recognition framework into a humanoid robotic platform called NAO.

  2. Application of a fuzzy neural network model in predicting polycyclic aromatic hydrocarbon-mediated perturbations of the Cyp1b1 transcriptional regulatory network in mouse skin

    Energy Technology Data Exchange (ETDEWEB)

    Larkin, Andrew [Department of Environmental and Molecular Toxicology, Oregon State University (United States); Department of Statistics, Oregon State University (United States); Superfund Research Center, Oregon State University (United States); Siddens, Lisbeth K. [Department of Environmental and Molecular Toxicology, Oregon State University (United States); Superfund Research Center, Oregon State University (United States); Krueger, Sharon K. [Superfund Research Center, Oregon State University (United States); Linus Pauling Institute, Oregon State University (United States); Tilton, Susan C.; Waters, Katrina M. [Superfund Research Center, Oregon State University (United States); Computational Biology and Bioinformatics Group, Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Williams, David E., E-mail: david.williams@oregonstate.edu [Department of Environmental and Molecular Toxicology, Oregon State University (United States); Superfund Research Center, Oregon State University (United States); Linus Pauling Institute, Oregon State University (United States); Environmental Health Sciences Center, Oregon State University, Corvallis, OR 97331 (United States); Baird, William M. [Department of Environmental and Molecular Toxicology, Oregon State University (United States); Superfund Research Center, Oregon State University (United States); Environmental Health Sciences Center, Oregon State University, Corvallis, OR 97331 (United States)

    2013-03-01

    Polycyclic aromatic hydrocarbons (PAHs) are present in the environment as complex mixtures with components that have diverse carcinogenic potencies and mostly unknown interactive effects. Non-additive PAH interactions have been observed in regulation of cytochrome P450 (CYP) gene expression in the CYP1 family. To better understand and predict biological effects of complex mixtures, such as environmental PAHs, an 11 gene input-1 gene output fuzzy neural network (FNN) was developed for predicting PAH-mediated perturbations of dermal Cyp1b1 transcription in mice. Input values were generalized using fuzzy logic into low, medium, and high fuzzy subsets, and sorted using k-means clustering to create Mamdani logic functions for predicting Cyp1b1 mRNA expression. Model testing was performed with data from microarray analysis of skin samples from FVB/N mice treated with toluene (vehicle control), dibenzo[def,p]chrysene (DBC), benzo[a]pyrene (BaP), or 1 of 3 combinations of diesel particulate extract (DPE), coal tar extract (CTE) and cigarette smoke condensate (CSC) using leave-one-out cross-validation. Predictions were within 1 log₂ fold change unit of microarray data, with the exception of the DBC treatment group, where the unexpected down-regulation of Cyp1b1 expression was predicted but did not reach statistical significance on the microarrays. Adding CTE to DPE was predicted to increase Cyp1b1 expression, whereas adding CSC to CTE and DPE was predicted to have no effect, in agreement with microarray results. The aryl hydrocarbon receptor repressor (Ahrr) was determined to be the most significant input variable for model predictions using back-propagation and normalization of FNN weights. - Highlights: ► Tested a model to predict PAH mixture-mediated changes in Cyp1b1 expression ► Quantitative predictions in agreement with microarrays for Cyp1b1 induction ► Unexpected difference in expression between DBC and other treatments predicted ► Model predictions
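
    As a toy illustration of the Mamdani-style fuzzification step mentioned above, the sketch below maps two hypothetical inputs into low/medium/high memberships with triangular functions and evaluates one rule by taking the minimum of the antecedent memberships. The variable names, ranges and the single rule are assumptions for illustration, not the 11-input Cyp1b1 model from the record.

    # Sketch of the fuzzification step a Mamdani-style fuzzy neural network
    # starts from (illustrative only).
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b, zero outside [a, c]."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fuzzify(x, lo=-2.0, hi=2.0):
        """Return low/medium/high memberships for a normalized expression value."""
        mid = (lo + hi) / 2
        return {
            "low": tri(x, lo - 1, lo, mid),
            "medium": tri(x, lo, mid, hi),
            "high": tri(x, mid, hi, hi + 1),
        }

    # One hypothetical Mamdani-style rule: IF Ahrr is high AND dose is high THEN
    # Cyp1b1 induction is high; rule strength = min of the antecedent memberships.
    ahrr, dose = fuzzify(1.3), fuzzify(0.8)
    rule_strength = min(ahrr["high"], dose["high"])
    print(f"rule strength for 'Cyp1b1 high': {rule_strength:.2f}")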

  3. Use of recurrent neural networks for determination of 7-epiclusianone acidity constants in ethanol-water mixtures; Uso de redes neurais recorrentes na determinacao das constantes de acidez para a 7-epiclusianona em misturas etanol-agua

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Ederson D' Martin; Lemes, Nelson Henrique Teixeira, E-mail: nelson.lemes@unifal-mg.edu.br [Instituto de Ciencias Exatas, Universidade Federal de Alfenas, Alfenas, MG (Brazil); Santos, Marcelo Henrique dos [Instituto de Ciencias Farmaceuticas, Universidade Federal de Alfenas, Alfenas, MG (Brazil); Braga, Joao Pedro [Departamento de Quimica, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil)

    2012-07-01

    This work proposes a recurrent neural network to solve an inverse equilibrium problem. The acidity constants of 7-epiclusianone in ethanol-water binary mixtures were determined from multiwavelength spectrophotometric data. A linear relationship between the acidity constants and the % w/v of ethanol in the solvent mixture was observed. The efficiency of the proposed method is compared with that of the Simplex method, commonly used in nonlinear optimization. The neural network method is simple, numerically stable and has a broad range of applicability. (author)

  4. Recurrent Intracerebral Hemorrhage

    DEFF Research Database (Denmark)

    Schmidt, Linnea Boegeskov; Goertz, Sanne; Wohlfahrt, Jan

    2016-01-01

    BACKGROUND: Intracerebral hemorrhage (ICH) is a disease with high mortality and a substantial risk of recurrence. However, the recurrence risk is poorly documented and the knowledge of potential predictors for recurrence among co-morbidities and medicine with antithrombotic effect is limited. OBJECTIVES: 1) To estimate the short- and long-term cumulative risks of recurrent intracerebral hemorrhage (ICH). 2) To investigate associations between typical comorbid diseases, surgical treatment, use of medicine with antithrombotic effects, including antithrombotic treatment (ATT), selective serotonin

  5. International Neural Network Society Annual Meeting (1994) Held in San Diego, California on 5-9 June 1994. Volume 3.

    Science.gov (United States)

    1994-06-09

    PROBLEM BASED ON LEARNING IN THE RECURRENT RANDOM NEURAL NETWORK. Jose Aguilar, EHEI, UFR de Mathématiques et d'Informatique, Université René Descartes 45... "parallélisme optimal". PhD thesis, René Descartes University, Paris, France, 1992. 9. Gelenbe, E., "Learning in the recurrent Random Neural Network", Neural

  6. International Study on Syncope of Uncertain Etiology 2: the management of patients with suspected or certain neurally mediated syncope after the initial evaluation Rationale and study design

    NARCIS (Netherlands)

    Brignole, M.; Sutton, R.; Menozzi, C.; Moya, A.; Garcia-Civera, R.; Benditt, D.; Vardas, P.; Wieling, W.; Andresen, D.; Migliorini, R.; Hollinworth, D.

    2003-01-01

    Study design: Multi-centre, prospective observational study. Objectives: Main objective is to verify the value of implantable loop recorder (ILR) in assessing the mechanism of syncope and the efficacy of the ILR-guided therapy after syncope recurrence. Inclusion criteria: Patients who met the following

  7. Training trajectories by continuous recurrent multilayer networks.

    Science.gov (United States)

    Leistritz, L; Galicki, M; Witte, H; Kochs, E

    2002-01-01

    This paper addresses the problem of training trajectories by means of continuous recurrent neural networks whose feedforward parts are multilayer perceptrons. Such networks can approximate a general nonlinear dynamic system with arbitrary accuracy. The learning process is transformed into an optimal control framework where the weights are the controls to be determined. A training algorithm based upon a variational formulation of Pontryagin's maximum principle is proposed for such networks. Computer examples demonstrating the efficiency of the given approach are also presented.

  8. BRITS: Bidirectional Recurrent Imputation for Time Series

    OpenAIRE

    Cao, Wei; Wang, Dong; Li, Jian; Zhou, Hao; Li, Lei; Li, Yitan

    2018-01-01

    Time series are widely used as signals in many classification/regression tasks. It is ubiquitous that time series contain many missing values. Given multiple correlated time series, how can we fill in missing values and predict their class labels? Existing imputation methods often impose strong assumptions about the underlying data generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing va...
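
    The sketch below illustrates the general recurrent-imputation idea such methods build on (it is not the BRITS architecture): a recurrent model is run over the series and, wherever an observation is missing, the model's one-step prediction is substituted and fed forward. Weights here are untrained, so the sketch only demonstrates the data flow; all sizes and the synthetic signal are assumptions.

    # Sketch of recurrent imputation over a partially observed series
    # (illustrative data flow only; weights are untrained).
    import numpy as np

    rng = np.random.default_rng(4)
    T = 100
    x = np.sin(np.linspace(0, 6 * np.pi, T)) + 0.05 * rng.normal(size=T)
    mask = rng.random(T) > 0.3                  # True where observed, False where missing

    n_h = 8
    W_in = rng.normal(scale=0.5, size=(n_h, 1))
    W_h = rng.normal(scale=0.5, size=(n_h, n_h))
    w_out = rng.normal(scale=0.5, size=n_h)     # untrained readout

    h = np.zeros(n_h)
    filled = np.empty(T)
    for t in range(T):
        x_hat = w_out @ h                        # one-step prediction from the hidden state
        x_t = x[t] if mask[t] else x_hat         # use the prediction when the value is missing
        filled[t] = x_t
        h = np.tanh(W_in @ np.array([x_t]) + W_h @ h)

    print("fraction imputed:", 1 - mask.mean())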

  9. Vitamin E-Mediated Modulation of Glutamate Receptor Expression in an Oxidative Stress Model of Neural Cells Derived from Embryonic Stem Cell Cultures

    Directory of Open Access Journals (Sweden)

    Afifah Abd Jalil

    2017-01-01

    Full Text Available Glutamate is the primary excitatory neurotransmitter in the central nervous system. Excessive concentrations of glutamate in the brain can be excitotoxic and cause oxidative stress, which is associated with Alzheimer’s disease. In the present study, the effects of vitamin E in the form of tocotrienol-rich fraction (TRF) and alpha-tocopherol (α-TCP) in modulating the glutamate receptor and neuron injury markers in an in vitro model of oxidative stress in neural-derived embryonic stem (ES) cell cultures were elucidated. A transgenic mouse ES cell line (46C) was differentiated into a neural lineage in vitro via induction with retinoic acid. These cells were then subjected to oxidative stress with a significantly high concentration of glutamate. Measurement of reactive oxygen species (ROS) was performed after inducing glutamate excitotoxicity, and recovery from this toxicity in response to vitamin E was determined. The gene expression levels of glutamate receptors and neuron-specific enolase were elucidated using real-time PCR. The results reveal that neural cells derived from 46C cells and subjected to oxidative stress exhibit downregulation of NMDA, kainate receptor, and NSE after posttreatment with different concentrations of TRF and α-TCP, a sign of neurorecovery. Treatment with either TRF or α-TCP reduced the levels of ROS in neural cells subjected to glutamate-induced oxidative stress; these results indicated that vitamin E is a potent antioxidant.

  10. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Vol. 18, No. 3 (2007), pp. 698-707. ISSN 1045-9227. R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079. Institutional research plan: CEZ:AV0Z10300504. Keywords: recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 2.769, year: 2007

  11. Variable synaptic strengths controls the firing rate distribution in feedforward neural networks.

    Science.gov (United States)

    Ly, Cheng; Marsat, Gary

    2018-02-01

    Heterogeneity of firing rate statistics is known to have severe consequences on neural coding. Recent experimental recordings in weakly electric fish indicate that the distribution width of superficial pyramidal cell firing rates (trial- and time-averaged) in the electrosensory lateral line lobe (ELL) depends on the stimulus, and also that network inputs can mediate changes in the firing rate distribution across the population. We previously developed theoretical methods to understand how two attributes (synaptic and intrinsic heterogeneity) interact and alter the firing rate distribution in a population of integrate-and-fire neurons with random recurrent coupling. Inspired by our experimental data, we extend these theoretical results to a delayed feedforward spiking network that qualitatively captures the changes of firing rate heterogeneity observed in in-vivo recordings. We demonstrate how heterogeneous neural attributes alter firing rate heterogeneity, accounting for the effect with various sensory stimuli. The model predicts how the strength of the effective network connectivity is related to intrinsic heterogeneity in such delayed feedforward networks: the strength of the feedforward input is positively correlated with excitability (threshold value for spiking) when firing rate heterogeneity is low and is negatively correlated with excitability when firing rate heterogeneity is high. We also show how our theory can be used to predict effective neural architecture. We demonstrate that neural attributes do not interact in a simple manner but rather in a complex stimulus-dependent fashion to control neural heterogeneity and discuss how it can ultimately shape population codes.
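
    A toy simulation of the kind of heterogeneity discussed above is sketched below: a population of leaky integrate-and-fire neurons with heterogeneous spike thresholds receives a common feedforward drive, and the resulting spread of firing rates is measured. All parameters are invented for illustration; this is not the authors' delayed-feedforward model.

    # Toy leaky integrate-and-fire population with heterogeneous spike thresholds
    # receiving a common feedforward drive (illustrative parameters only).
    import numpy as np

    rng = np.random.default_rng(6)
    n, T, dt = 50, 5.0, 1e-3                   # neurons, seconds, time step
    tau, v_reset = 0.02, 0.0
    theta = 1.0 + 0.3 * rng.normal(size=n)     # heterogeneous spike thresholds
    drive = 1.2                                # common feedforward input strength
    noise = 0.5

    v = np.zeros(n)
    spike_count = np.zeros(n)
    for _ in range(int(T / dt)):
        v += dt / tau * (-v + drive) + noise * np.sqrt(dt) * rng.normal(size=n)
        fired = v >= theta
        spike_count += fired
        v[fired] = v_reset

    rates = spike_count / T
    print("mean rate %.1f Hz, std %.1f Hz" % (rates.mean(), rates.std()))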

  12. Attention-based Memory Selection Recurrent Network for Language Modeling

    OpenAIRE

    Liu, Da-Rong; Chuang, Shun-Po; Lee, Hung-yi

    2016-01-01

    Recurrent neural networks (RNNs) have achieved great success in language modeling. However, since RNNs have a fixed memory size, the memory cannot store all the information about the words they have seen earlier in the sentence, and thus useful long-term information may be ignored when predicting the next words. In this paper, we propose the Attention-based Memory Selection Recurrent Network (AMSRN), in which the model can review the information stored in the memory at each previous time ...
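
    The sketch below shows the generic attention-over-memory computation such a model builds on: stored hidden states are scored against the current state, normalized with a softmax, and combined into a context vector. The scoring function, sizes and random values are illustrative assumptions, not the AMSRN specifics.

    # Sketch of attention over previously stored recurrent states
    # (illustrative only).
    import numpy as np

    rng = np.random.default_rng(5)
    n_h, n_steps = 12, 7
    memory = rng.normal(size=(n_steps, n_h))   # hidden states stored at earlier time steps
    query = rng.normal(size=n_h)               # current hidden state used as the query

    scores = memory @ query / np.sqrt(n_h)     # dot-product relevance of each stored state
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention weights

    context = weights @ memory                 # weighted summary of the relevant memory
    print(weights.round(2), context.shape)     # the context vector would feed the next-word prediction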

  13. Immunologically mediated oral diseases

    OpenAIRE

    Jimson, Sudha; Balachader, N.; Anita, N.; Babu, R.

    2015-01-01

    Immune-mediated diseases of the oral cavity are uncommon. The lesions may be self-limiting and undergo remission spontaneously. Among the immune-mediated oral lesions, the most important are lichen planus, pemphigus, erythema multiforme, epidermolysis bullosa, and systemic lupus erythematosus. Cell-mediated and humoral immunity play a major role, directed against epithelial and connective tissue, in chronic and recurrent patterns. Confirmatory diagnosis can be made by biopsy, direct and indirect imm...

  14. Recurrence in affective disorder

    DEFF Research Database (Denmark)

    Kessing, L V; Olsen, E W; Andersen, P K

    1999-01-01

    The risk of recurrence in affective disorder is influenced by the number of prior episodes and by a person's tendency toward recurrence. Newly developed frailty models were used to estimate the effect of the number of episodes on the rate of recurrence, taking into account individual frailty toward recurrence. The study base was the Danish psychiatric case register of all hospital admissions for primary affective disorder in Denmark during 1971-1993. A total of 20,350 first-admission patients were discharged with a diagnosis of major affective disorder. For women with unipolar disorder and for all kinds of patients with bipolar disorder, the rate of recurrence was affected by the number of prior episodes even when the effect was adjusted for individual frailty toward recurrence. No effect of episodes but a large effect of the frailty parameter was found for unipolar men. The authors concluded...

  15. Learning State Space Dynamics in Recurrent Networks

    Science.gov (United States)

    Simard, Patrice Yvon

    Fully recurrent (asymmetrical) networks can be used to learn temporal trajectories. The network is unfolded in time, and backpropagation is used to train the weights. The presence of recurrent connections creates internal states in the system which vary as a function of time. The resulting dynamics can provide interesting additional computing power, but learning is made more difficult by the existence of internal memories. This study first exhibits the properties of recurrent networks in terms of convergence when the internal states of the system are unknown. A new energy functional is provided to change the weights of the units in order to control the stability of the fixed points of the network's dynamics. The power of the resultant algorithm is illustrated with the simulation of a content addressable memory. Next, the more general case of time trajectories on a recurrent network is studied. An application is proposed in which trajectories are generated to draw letters as a function of an input. In another application of recurrent systems, a neural network models certain temporal properties observed in human callosally sectioned brains. Finally, the proposed algorithm for stabilizing dynamics around fixed points is extended to one for stabilizing dynamics around time trajectories. Its effects are illustrated on a network which generates Lissajous curves.

  16. Recurrent hamburger thyrotoxicosis

    Science.gov (United States)

    Parmar, Malvinder S.; Sturge, Cecil

    2003-01-01

    RECURRENT EPISODES OF SPONTANEOUSLY RESOLVING HYPERTHYROIDISM may be caused by release of preformed hormone from the thyroid gland after it has been damaged by inflammation (recurrent silent thyroiditis) or by exogenous administration of thyroid hormone, which might be intentional or surreptitious (thyrotoxicosis factitia). Community-wide outbreaks of “hamburger thyrotoxicosis” resulting from inadvertent consumption of beef contaminated with bovine thyroid gland have been previously reported. Here we describe a single patient who experienced recurrent episodes of this phenomenon over an 11-year period and present an approach to systematically evaluating patients with recurrent hyperthyroidism. PMID:12952802

  17. Recurrent Takotsubo Cardiomyopathy Related to Recurrent Thyrotoxicosis.

    Science.gov (United States)

    Patel, Keval; Griffing, George T; Hauptman, Paul J; Stolker, Joshua M

    2016-04-01

    Takotsubo cardiomyopathy, or transient left ventricular apical ballooning syndrome, is characterized by acute left ventricular dysfunction caused by transient wall-motion abnormalities of the left ventricular apex and mid ventricle in the absence of obstructive coronary artery disease. Recurrent episodes are rare but have been reported, and several cases of takotsubo cardiomyopathy have been described in the presence of hyperthyroidism. We report the case of a 55-year-old woman who had recurrent takotsubo cardiomyopathy, documented by repeat coronary angiography and evaluations of left ventricular function, in the presence of recurrent hyperthyroidism related to Graves disease. After both episodes, the patient's left ventricular function returned to normal when her thyroid function normalized. These findings suggest a possible role of thyroid-hormone excess in the pathophysiology of some patients who have takotsubo cardiomyopathy.

  18. An interpretable LSTM neural network for autoregressive exogenous model

    OpenAIRE

    Guo, Tian; Lin, Tao; Lu, Yao

    2018-01-01

    In this paper, we propose an interpretable LSTM recurrent neural network, i.e., a multi-variable LSTM, for time series with exogenous variables. Currently, the attention mechanisms widely used in recurrent neural networks mostly focus on the temporal aspect of the data and fall short of characterizing variable importance. To this end, our multi-variable LSTM equipped with tensorized hidden states is developed to learn variable-specific representations, which give rise to both temporal and variable lev...

  19. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  20. Persistent and recurrent hyperparathyroidism.

    Science.gov (United States)

    Guerin, Carole; Paladino, Nunzia Cinzia; Lowery, Aoife; Castinetti, Fréderic; Taieb, David; Sebag, Fréderic

    2017-06-01

    Despite remarkable progress in imaging modalities and surgical management, persistence or recurrence of primary hyperparathyroidism (PHPT) still occurs in 2.5-5% of cases of PHPT. The aim of this review is to describe the management of persistent and recurrent hyperparathyroidism. A literature search was performed on MEDLINE using the search terms "recurrent" or "persistent" and "hyperparathyroidism" within the past 10 years. We also searched the reference lists of articles identified by this search strategy and selected those we judged relevant. Before considering reoperation, the surgeon must confirm the diagnosis of PHPT. Then, the patient must be evaluated with new imaging modalities. A single adenoma is found in 68% of cases, multiglandular disease in 28%, and parathyroid carcinoma in 3%. Other causes (<1%) include parathyromatosis and graft recurrence. The surgeon must balance the benefits against the risks of a reoperation (permanent hypocalcemia and recurrent laryngeal nerve palsy). If surgery is necessary, a focused approach can be considered in cases of significant imaging foci, but in the case of multiglandular disease, a bilateral neck exploration could be necessary. Patients with multiple endocrine neoplasia syndromes are at high risk of recurrence and should be managed with regard to their hereditary pathology. The cure rate of persistent PHPT or recurrent PHPT in expert centers is estimated at 93 to 97%. After confirming the diagnosis of PHPT, patients with persistent PHPT and recurrent PHPT should be managed in an expert center with all dedicated competencies.