Chaotic diagonal recurrent neural network
Wang Xing-Yuan; Zhang Yi
2012-01-01
We propose a novel neural network based on a diagonal recurrent neural network and chaos, and design its structure and learning algorithm. The multilayer feedforward neural network, the diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks.
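The defining feature of the diagonal recurrent architecture referenced above is that each hidden unit feeds back only onto itself, so the recurrent weights form a vector rather than a full matrix. A minimal sketch (not the authors' implementation; all weight names are illustrative):

```python
import numpy as np

def diagonal_rnn_step(x, h_prev, W_in, w_d, W_out):
    """One step of a diagonal recurrent layer: each hidden unit
    has a single scalar self-feedback weight (w_d is a vector)."""
    h = np.tanh(W_in @ x + w_d * h_prev)   # element-wise self-recurrence
    y = W_out @ h
    return h, y

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8
W_in = rng.normal(scale=0.5, size=(n_hid, n_in))
w_d = rng.uniform(-0.9, 0.9, size=n_hid)   # diagonal recurrent weights
W_out = rng.normal(scale=0.5, size=(1, n_hid))

h = np.zeros(n_hid)
for x_t in [0.1, -0.3, 0.5]:
    h, y = diagonal_rnn_step(np.array([x_t]), h, W_in, w_d, W_out)
print(y.shape)  # (1,)
```

The diagonal restriction keeps the parameter count linear in the hidden size, which is why this family is popular in control applications.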
Recurrent neural collective classification.
Monner, Derek D; Reggia, James A
2013-12-01
With the recent surge in availability of data sets containing not only individual attributes but also relationships, classification techniques that take advantage of predictive relationship information have gained in popularity. The most popular existing collective classification techniques have a number of limitations: some of them generate arbitrary and potentially lossy summaries of the relationship data, whereas others ignore the directionality and strength of relationships. Popular existing techniques make use of only direct neighbor relationships when classifying a given entity, ignoring potentially useful information contained in expanded neighborhoods of radius greater than one. We present a new technique that we call recurrent neural collective classification (RNCC), which avoids arbitrary summarization, uses information about relationship directionality and strength, and through recursive encoding, learns to leverage larger relational neighborhoods around each entity. Experiments with synthetic data sets show that RNCC can make effective use of relationship data for both direct and expanded neighborhoods. Further experiments demonstrate that our technique outperforms previously published results of several collective classification methods on a number of real-world data sets.
Ocean wave forecasting using recurrent neural networks
Mandal, S.; Prabaharan, N.
…, merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the Rprop update algorithm, applied to wave forecasting. Measured ocean waves off...
Discontinuities in recurrent neural networks.
Gavaldá, R; Siegelmann, H T
1999-04-01
This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous, but they boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model, when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN, which are known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.
Interpretation of Recurrent Neural Networks
Pedersen, Morten With; Larsen, Jan
1997-01-01
This paper addresses techniques for interpretation and characterization of trained recurrent nets for time series problems. In particular, we focus on assessment of effective memory and suggest an operational definition of memory. Further we discuss the evaluation of learning curves. Various nume...
Recurrent Neural Network for Computing Outer Inverse.
Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin
2016-05-01
Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
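The matrix-valued differential equations mentioned above follow a well-known pattern for neural matrix inversion. A minimal sketch of the simplest member of this family (a gradient dynamic equation for the Moore-Penrose inverse; not the paper's generalized construction, and the function name is illustrative):

```python
import numpy as np

def gnn_pseudoinverse(A, gamma=1.0, dt=0.05, steps=5000):
    """Euler-integrate the matrix-valued ODE dV/dt = -gamma * A^T (A V - I).
    For a full-column-rank A, the zero-initial-state trajectory converges
    to the Moore-Penrose inverse of A."""
    m, n = A.shape
    V = np.zeros((n, m))          # zero initial state, as in the paper
    I = np.eye(m)
    for _ in range(steps):
        V -= dt * gamma * (A.T @ (A @ V - I))
    return V

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])        # full column rank, 3x2
V = gnn_pseudoinverse(A)
print(np.allclose(V, np.linalg.pinv(A), atol=1e-4))  # True
```

The step size must satisfy `dt * gamma < 2 / lambda_max(A^T A)` for the Euler discretization to be stable; the values above respect that bound for this example.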
A Direct Feedback Control Based on Fuzzy Recurrent Neural Network
李明; 马小平
2002-01-01
A direct feedback control system based on a fuzzy recurrent neural network is proposed, and a method for training the weights of the fuzzy recurrent neural network is designed by applying a modified contract-mapping genetic algorithm. Computer simulation results indicate that the fuzzy recurrent neural network controller has excellent dynamic and static performance.
Markovian architectural bias of recurrent neural networks.
Tino, Peter; Cernanský, Michal; Benusková, Lubica
2004-01-01
In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information processing states even prior to training [1], [2]. By concentrating on activation clusters in RNNs, while not throwing away the continuous state space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models, called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during the training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as the "null" base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure. Index Terms: complex symbolic sequences, information latching problem, iterative function systems, Markov models, recurrent neural networks (RNNs).
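The NPM extraction described above can be sketched in a few lines: drive a small-weight, untrained RNN over a symbolic sequence, quantize its recurrent activations, and count next-symbol frequencies per cluster. This is a toy illustration only (coarse rounding stands in for the clustering step used in the paper):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
# Untrained RNN with small weights (contractive dynamics)
n_hid = 4
W_h = rng.normal(scale=0.1, size=(n_hid, n_hid))
W_in = rng.normal(scale=0.1, size=(n_hid, 2))
seq = rng.integers(0, 2, size=2000)   # binary symbolic sequence

# Drive the RNN and record the recurrent activation states
h = np.zeros(n_hid)
states, targets = [], []
for t in range(len(seq) - 1):
    x = np.eye(2)[seq[t]]
    h = np.tanh(W_h @ h + W_in @ x)
    states.append(h.copy())
    targets.append(seq[t + 1])

# Quantize states into clusters and count next-symbol frequencies
counts = defaultdict(lambda: np.zeros(2))
for s, nxt in zip(states, targets):
    key = tuple(np.round(s, 1))       # coarse rounding as a stand-in for k-means
    counts[key][nxt] += 1

# Each cluster acts as a Markov-like prediction context
npm = {k: v / v.sum() for k, v in counts.items()}
print(len(npm) > 0)  # True
```

Because the small-weight dynamics are contractive, states driven by the same recent symbol history land close together, which is why the clusters behave like variable-length Markov contexts.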
Supervised Sequence Labelling with Recurrent Neural Networks
Graves, Alex
2012-01-01
Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary. The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...
Multi-Dimensional Recurrent Neural Networks
Graves, Alex; Schmidhuber, Juergen
2007-01-01
Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping, and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.
Incremental construction of LSTM recurrent neural network
Ribeiro, Evandsa Sabrine Lopes-Lima; Alquézar Mancho, René
2002-01-01
Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the net to remember significant events distant in the past input sequence, in order to solve long-time-lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTM networks have been compared. These methods include cascade and ...
Segmented-memory recurrent neural networks.
Chen, Jinmiao; Chaudhari, Narendra S
2009-08-01
Conventional recurrent neural networks (RNNs) have difficulties in learning long-term dependencies. To tackle this problem, we propose an architecture called segmented-memory recurrent neural network (SMRNN). A symbolic sequence is broken into segments and then presented as inputs to the SMRNN one symbol per cycle. The SMRNN uses separate internal states to store symbol-level context, as well as segment-level context. The symbol-level context is updated for each symbol presented for input. The segment-level context is updated after each segment. The SMRNN is trained using an extended real-time recurrent learning algorithm. We test the performance of SMRNN on the information latching problem, the "two-sequence problem" and the problem of protein secondary structure (PSS) prediction. Our implementation results indicate that SMRNN performs better on long-term dependency problems than conventional RNNs. In addition, we theoretically analyze how the segmented memory of SMRNN helps learning long-term temporal dependencies and study the impact of the segment length.
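The two-timescale state update described above can be sketched as follows. This is a simplified forward pass under stated assumptions (weight names are illustrative, and training via extended real-time recurrent learning is omitted):

```python
import numpy as np

def smrnn_forward(symbols, seg_len, W_sx, W_ss, W_ds, W_dd):
    """Segmented-memory forward pass (sketch): a symbol-level context s is
    updated once per symbol; a segment-level context d absorbs s at each
    segment boundary, so long-range information travels in fewer steps."""
    s = np.zeros(W_ss.shape[0])   # symbol-level context
    d = np.zeros(W_dd.shape[0])   # segment-level context
    for t, x in enumerate(symbols, start=1):
        s = np.tanh(W_sx @ x + W_ss @ s)
        if t % seg_len == 0:      # end of a segment
            d = np.tanh(W_ds @ s + W_dd @ d)
    return d

rng = np.random.default_rng(0)
n_sym, n_s, n_d = 3, 5, 4
W_sx = rng.normal(scale=0.3, size=(n_s, n_sym))
W_ss = rng.normal(scale=0.3, size=(n_s, n_s))
W_ds = rng.normal(scale=0.3, size=(n_d, n_s))
W_dd = rng.normal(scale=0.3, size=(n_d, n_d))

seq = [np.eye(n_sym)[i % n_sym] for i in range(12)]
d = smrnn_forward(seq, seg_len=4, W_sx=W_sx, W_ss=W_ss, W_ds=W_ds, W_dd=W_dd)
print(d.shape)  # (4,)
```

With segment length L, a dependency spanning T symbols crosses only about T/L segment-level updates, which is the intuition behind the paper's analysis of long-term dependencies.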
Analysis of Recurrent Analog Neural Networks
Z. Raida
1998-06-01
In this paper, an original rigorous analysis of recurrent analog neural networks, which are built from opamp neurons, is presented. The analysis, which is based on an approximate model of the operational amplifier, reveals causes of possible non-stable states and enables determination of the convergence properties of the network. Results of the analysis are discussed in order to enable the development of robust and fast analog networks. In the analysis, special attention is paid to the influence of real circuit elements and of the statistical parameters of the processed signals on the parameters of the network.
Adaptive Filtering Using Recurrent Neural Networks
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
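For reference, the linear-Gaussian assumptions that the neural-network filter above dispenses with are easiest to see in a scalar Kalman filter step. A minimal sketch (illustrative only, not the paper's method):

```python
def kalman_step(x_est, P, z, a, q, r):
    """One scalar Kalman filter cycle. The linear model (x' = a*x) and the
    Gaussian noise variances q (process) and r (measurement) are exactly
    the assumptions the neural-network filter does not require."""
    x_pred = a * x_est          # linear state prediction
    P_pred = a * P * a + q      # predicted error variance
    K = P_pred / (P_pred + r)   # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.98]:       # noisy measurements of a constant
    x, P = kalman_step(x, P, z, a=1.0, q=0.0, r=0.1)
print(round(x, 2))  # 0.98
```

Every quantity here (gain, variances) follows from the linear-Gaussian model; the recurrent-network filter instead learns the update map directly from input-output time series.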
Identification of Non-Linear Structures using Recurrent Neural Networks
Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.
1995-01-01
Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.
Precipitation Nowcast using Deep Recurrent Neural Network
Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.
2016-12-01
An accurate precipitation nowcast (0-6 hours) with a fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most of the popular approaches used for forecasting precipitation can be categorized into two groups. One type of precipitation forecast relies on numerical modeling of the physical dynamics of the atmosphere, and another is based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful deep recurrent neural network, termed the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. Then, a 0-6 hour precipitation nowcast is produced using the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, in which the CTBT nowcast is used as the PERSIANN algorithm's raw input. Two case studies over the continental U.S. have been conducted that demonstrate the improvement of the proposed approach as compared to a classical feedforward neural network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, handling of vanishing gradients during model learning, and working with sparse data. The studies show that the LSTM model performs better than the other methods, and that it is able to learn the temporal evolution of precipitation events over more than 1000 time lags. The uniqueness of PERSIANN's algorithm enables an alternative precipitation nowcast approach as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.
Deep Recurrent Neural Networks for Supernovae Classification
Charnock, Tom; Moss, Adam
2017-03-01
We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representative SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
Deep Recurrent Neural Networks for Supernovae Classification
Charnock, Tom
2016-01-01
We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae. The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representative SPCC dataset (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.8%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. We also apply a pre-trained model to obtain classification probabilities as a function of time, and show it can give early indications of supernovae type. Our method is competitive with existing algorithms and has appl...
Bayesian Recurrent Neural Network for Language Modeling.
Chien, Jen-Tzung; Ku, Yuan-Chu
2016-02-01
A language model (LM) assigns a probability to a word sequence, providing the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it to continuous speech recognition. We aim to penalize an overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
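The MAP criterion described above reduces to cross-entropy plus a Gaussian-prior (L2) penalty on the weights. A minimal sketch, in which a plain softmax classifier stands in for the RNN-LM (the point is the shape of the objective, not the model; names are illustrative):

```python
import numpy as np

def map_objective(W, X, y, alpha):
    """Regularized cross-entropy: the negative log-likelihood of a softmax
    classifier plus a Gaussian-prior (L2) penalty on the weights, i.e. the
    maximum a posteriori criterion described in the abstract."""
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    nll = -np.log(p[np.arange(len(y)), y]).mean()     # cross-entropy term
    prior = 0.5 * alpha * np.sum(W ** 2)              # Gaussian prior term
    return nll + prior

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
y = rng.integers(0, 3, size=20)
W = rng.normal(size=(4, 3))
print(map_objective(W, X, y, alpha=0.0) < map_objective(W, X, y, alpha=0.1))  # True
```

In the paper, the prior precision `alpha` is itself estimated by maximizing the marginal likelihood rather than fixed by hand.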
Phenotyping of Clinical Time Series with LSTM Recurrent Neural Networks
Lipton, Zachary C.; Kale, David C.; Wetzell, Randall C.
2015-01-01
We present a novel application of LSTM recurrent neural networks to multilabel classification of diagnoses given variable-length time series of clinical measurements. Our method outperforms a strong baseline on a variety of metrics.
Optimization of recurrent neural networks for time series modeling
Pedersen, Morten With
1997-01-01
The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to prediction of discrete time...
Using Recurrent Neural Network for Learning Expressive Ontologies
Petrucci, Giulio; Ghidini, Chiara; Rospocher, Marco
2016-01-01
Recently, Neural Networks have been proven extremely effective in many natural language processing tasks such as sentiment analysis, question answering, or machine translation. Aiming to exploit such advantages in the Ontology Learning process, in this technical report we present a detailed description of a Recurrent Neural Network based system to be used to pursue this goal.
An evolutionary approach to associative memory in recurrent neural networks
Fujita, Sh; Fujita, Sh; Nishimura, H
1994-01-01
In this paper, we investigate associative memory in recurrent neural networks, based on the model of evolving neural networks proposed by Nolfi, Miglino and Parisi. The experimentally developed network has highly asymmetric synaptic weights and dilute connections, quite different from those of the Hopfield model. Some results on the effect of learning efficiency on the evolution are also presented.
Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.
Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli
2016-01-01
In recent years, forecasting financial market dynamics has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. Analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, empirical research tests the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices.
Recurrent neural network for vehicle dead-reckoning
Ma Haibo; Zhang Liguo; Chen Yangzhou
2008-01-01
For vehicle integrated navigation systems, real-time estimation of the states of the dead reckoning (DR) unit is much more difficult than that of the other measuring sensors under indefinite noise and nonlinear characteristics. Compared with the well-known extended Kalman filter (EKF), a recurrent neural network is proposed for the solution, which not only improves the location precision and the adaptive ability of resisting disturbances, but also avoids calculating the analytic derivatives and Jacobian matrices of the nonlinear system model. To test the performance of the recurrent neural network, the two methods are used to estimate the state of the vehicle's DR navigation system. Simulation results show that the recurrent neural network is superior to the EKF and is a more suitable filtering method for vehicle DR navigation.
A multilayer recurrent neural network for solving continuous-time algebraic Riccati equations.
Wang, Jun; Wu, Guang
1998-07-01
A multilayer recurrent neural network is proposed for solving continuous-time algebraic matrix Riccati equations in real time. The proposed recurrent neural network consists of four bidirectionally connected layers. Each layer consists of an array of neurons. The proposed recurrent neural network is shown to be capable of solving algebraic Riccati equations and synthesizing linear-quadratic control systems in real time. Analytical results on stability of the recurrent neural network and solvability of algebraic Riccati equations by use of the recurrent neural network are discussed. The operating characteristics of the recurrent neural network are also demonstrated through three illustrative examples.
The computational power of interactive recurrent neural networks.
Cabessa, Jérémie; Siegelmann, Hava T
2012-04-01
In classical computation, rational- and real-weighted recurrent neural networks were shown to be respectively equivalent to and strictly more powerful than the standard Turing machine model. Here, we study the computational power of recurrent neural networks in a more biologically oriented computational framework, capturing the aspects of sequential interactivity and persistence of memory. In this context, we prove that so-called interactive rational- and real-weighted neural networks show the same computational powers as interactive Turing machines and interactive Turing machines with advice, respectively. A mathematical characterization of each of these computational powers is also provided. It follows from these results that interactive real-weighted neural networks can perform uncountably many more translations of information than interactive Turing machines, making them capable of super-Turing capabilities.
Exponential Stability of Complex-Valued Memristive Recurrent Neural Networks.
Wang, Huamin; Duan, Shukai; Huang, Tingwen; Wang, Lidan; Li, Chuandong
2017-03-01
In this brief, we establish a novel complex-valued memristive recurrent neural network (CVMRNN) to study its stability. As a generalization of real-valued memristive neural networks, a CVMRNN can be separated into real and imaginary parts. By means of an M-matrix and a Lyapunov function, the existence, uniqueness, and exponential stability of the equilibrium point for CVMRNNs are investigated, and sufficient conditions are presented. Finally, the effectiveness of the obtained results is illustrated by two numerical examples.
Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks.
Bitzer, Sebastian; Kiebel, Stefan J
2012-07-01
Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
Bach in 2014: Music Composition with Recurrent Neural Network
Liu, I-Ting; Ramakrishnan, Bhiksha
2014-01-01
We propose a framework for computer music composition that uses resilient propagation (RProp) and long short-term memory (LSTM) recurrent neural networks. In this paper, we show that the LSTM network properly learns the structure and characteristics of music pieces by demonstrating its ability to recreate music. We also show that predicting existing music using RProp outperforms backpropagation through time (BPTT).
Active Control of Sound based on Diagonal Recurrent Neural Network
Jayawardhana, Bayu; Xie, Lihua; Yuan, Shuqing
2002-01-01
Recurrent neural networks are known for their dynamic mapping and are better suited to nonlinear dynamical systems. A nonlinear controller may be needed in cases where the actuators exhibit nonlinear characteristics, or in cases when the structure to be controlled exhibits nonlinear behavior. The fe
Probing the basins of attraction of a recurrent neural network
M. Heerema; W.A. van Leeuwen
2000-01-01
Analytical expressions for the weights $w_{ij}(b)$ of the connections of a recurrent neural network are found by taking explicitly into account basins of attraction, the size of which is characterized by a basin parameter $b$. It is shown that a network with $b \
Chaotifying delayed recurrent neural networks via impulsive effects
Şaylı, Mustafa; Yılmaz, Enes
2016-02-01
In this paper, chaotification of delayed recurrent neural networks via chaotically changing moments of impulsive actions is considered. Sufficient conditions for the presence of Li-Yorke chaos with its ingredients (proximality, frequent separation, and the existence of infinitely many periodic solutions) are theoretically proved. Finally, the effectiveness of our theoretical results is illustrated by an example with numerical simulations.
A recurrent neural network with ever changing synapses
M. Heerema; W.A. van Leeuwen
2000-01-01
A recurrent neural network with noisy input is studied analytically, on the basis of a Discrete Time Master Equation. The latter is derived from a biologically realizable learning rule for the weights of the connections. In a numerical study it is found that the fixed points of the dynamics of the n
Recursive Bayesian recurrent neural networks for time-series modeling.
Mirikitani, Derrick T; Nikolaev, Nikolay
2010-02-01
This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
Synthesis of recurrent neural networks for dynamical system simulation.
Trischler, Adam P; D'Eleuterio, Gabriele M T
2016-08-01
We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time.
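The core idea in this abstract, fitting a feedforward network to a system's vector field and then running it recurrently, can be sketched in a few lines. In the sketch below the training step is simplified to a least-squares readout over random tanh features (an assumption for brevity, not the authors' backpropagation procedure), and the learned field is integrated with forward Euler; all sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target dynamical system: a harmonic oscillator, dx/dt = f(x).
f_true = lambda x: np.stack([x[..., 1], -x[..., 0]], axis=-1)

# 1) Fit the vector field with a single-hidden-layer tanh network.
#    Here the hidden layer is random and only the linear readout is
#    solved by least squares -- a simplification of the training step.
X = rng.uniform(-1.5, 1.5, size=(2000, 2))
Y = f_true(X)
W1 = rng.normal(size=(2, 100))
b1 = rng.normal(size=100)
H = np.tanh(X @ W1 + b1)
W2, *_ = np.linalg.lstsq(H, Y, rcond=None)
f_net = lambda x: np.tanh(x @ W1 + b1) @ W2

# 2) Recast as a continuous-time recurrent system by closing the loop:
#    integrate dx/dt = f_net(x) with forward Euler and compare against
#    the true dynamics integrated the same way.
dt, steps = 0.01, 500
x_net = np.array([1.0, 0.0])
x_true = np.array([1.0, 0.0])
for _ in range(steps):
    x_net = x_net + dt * f_net(x_net)
    x_true = x_true + dt * f_true(x_true)
print(np.linalg.norm(x_net - x_true))
```

For the linear oscillator used here the fit is easy; the point is only the recipe: learn the vector field offline, then feed the network's output back as its next input so the feedforward approximator becomes a continuous-time recurrent simulator.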
Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.
Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu
2016-07-14
This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It therefore naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order stability and fractional-order sensitivity characteristics.
Efficient Training of Recurrent Neural Network with Time Delays.
Marom, Emanuel; Saad, David; Cohen, Barak
1997-01-01
Training recurrent neural networks to perform certain tasks is known to be difficult. The possibility of adding synaptic delays to the network makes the training task more difficult still. However, the disadvantage of a tougher training procedure is offset by the improved network performance. During our research on training neural networks with time delays we encountered a robust method for accomplishing the training task. The method is based on the adaptive simulated annealing (ASA) algorithm, which was found to be superior to other training algorithms. It requires no tuning and is fast enough to enable training on low-end platforms such as personal computers. The implementation of the algorithm is presented over a set of typical benchmark tests of training recurrent neural networks with time delays. Copyright 1996 Elsevier Science Ltd.
Application of dynamic recurrent neural networks in nonlinear system identification
Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang
2006-11-01
An adaptive identification method based on a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. Starting from the idea that using the internal-state feedback of a dynamic network to describe the nonlinear kinetic characteristics of a system reflects its dynamics more directly, the method derives the recursive prediction error (RPE) learning algorithm for the SRNN and improves the algorithm by studying the topological structure of the recursion layer without weight values. The simulation results indicate that this kind of neural network can be used in real-time control, owing to its fewer weights, simpler learning algorithm, higher identification speed, and higher model precision. It avoids the intricate training algorithms and slow convergence caused by the complicated topological structure of the usual dynamic recurrent neural networks.
Analysis of surface ozone using a recurrent neural network.
Biancofiore, Fabio; Verdecchia, Marco; Di Carlo, Piero; Tomassetti, Barbara; Aruffo, Eleonora; Busilacchio, Marcella; Bianco, Sebastiano; Di Tommaso, Sinibaldo; Colangeli, Carlo
2015-05-01
Hourly concentrations of ozone (O₃) and nitrogen dioxide (NO₂) have been measured for 16 years, from 1998 to 2013, in a seaside town in central Italy. The seasonal trends of O₃ and NO₂ recorded in this period have been studied. Furthermore, we used the data collected during one year (2005) to define the characteristics of a multiple linear regression model and a neural network model. Both models are used to model the hourly O₃ concentration under two scenarios: 1) in the first, only meteorological parameters are used as inputs; 2) in the second, photochemical parameters are added to those of the first scenario. To evaluate the performance of the models, four statistical criteria are used: correlation coefficient, fractional bias, normalized mean squared error, and factor of two. All the criteria show that the neural network gives better results than the regression model in all scenarios. Predictions of O₃ have been carried out by many authors using a feedforward neural architecture. In this paper we show that a recurrent architecture significantly improves the performance of neural predictors. Using only the meteorological parameters as input, the recurrent architecture performs better than the multiple linear regression model that uses meteorological and photochemical data as input, making the recurrent neural network model a more useful tool in areas where only weather measurements are available. Finally, we used the neural network model to forecast the O₃ hourly concentrations 1, 3, 6, 12, 24 and 48 h ahead. The performance of the model in predicting O₃ levels is discussed. Emphasis is given to the possibility of using the neural network model operationally in areas where only meteorological data are available, in order to predict O₃ also at sites where it has not yet been measured. Copyright © 2015 Elsevier B.V. All rights reserved.
Iterative free-energy optimization for recurrent neural networks (INFERNO).
Pitti, Alexandre; Gaussier, Philippe; Quoy, Mathias
2017-01-01
The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem on the neurons' sub-threshold activity for the generation of long neuronal chains. Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network toward a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory that initiates flexible goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the free-energy principle.
Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks
Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi
2016-07-01
Existing deep convolutional neural networks (CNNs) have shown great success on image classification. CNNs mainly consist of convolutional and pooling layers, both of which operate on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information among sequential data, and they require only a limited number of network parameters. General RNNs can hardly be applied directly to non-sequential data. Thus, we propose hierarchical RNNs (HRNNs), in which each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections model scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of higher computational cost. In this manuscript, we integrate CNNs with HRNNs and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT indoor, and competitive results on ILSVRC 2012.
Microscopic instability in recurrent neural networks
Yamanaka, Yuzuru; Amari, Shun-ichi; Shinomoto, Shigeru
2015-03-01
In a manner similar to the molecular chaos that underlies the stable thermodynamics of gases, a neuronal system may exhibit microscopic instability in individual neuronal dynamics while the macroscopic order of the entire population remains stable. In this study, we analyze the microscopic stability of a network of neurons whose macroscopic activity obeys stable dynamics, expressing a monostable, bistable, or periodic state. We reveal that the network exhibits a variety of dynamical states of microscopic instability within a given stable macroscopic dynamics. The presence of a variety of dynamical states in such a simple random network implies even more abundant microscopic fluctuations in real neural networks, which consist of more complex and hierarchically structured interactions.
A recurrent neural network for solving bilevel linear programming problem.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian
2014-04-01
In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.
Synchronization of an uncertain chaotic system via recurrent neural networks
谭文; 王耀南
2005-01-01
Incorporating distributed recurrent networks with high-order connections between neurons, the identification and synchronization problem of an unknown chaotic system in the presence of unmodelled dynamics is investigated. Based on the Lyapunov stability theory, the weight learning algorithm for the recurrent high-order neural network model is presented. Analytical results concerning the stability properties of the scheme are also obtained. An adaptive control law for eliminating the synchronization error of the uncertain chaotic plant is then developed via the Lyapunov methodology. The proposed scheme is applied to model and synchronize an unknown Rössler system.
Nonlinear system identification based on internal recurrent neural networks.
Puscasu, Gheorghe; Codres, Bogdan; Stancu, Alexandru; Murariu, Gabriel
2009-04-01
A novel approach for nonlinear complex system identification based on internal recurrent neural networks (IRNN) is proposed in this paper. The computational complexity of neural identification can be greatly reduced if the whole system is decomposed into several subsystems. This approach employs internal state estimation when no measurements coming from the sensors are available for the system states. A modified backpropagation algorithm is introduced in order to train the IRNN for nonlinear system identification. The performance of the proposed design approach is proven on a car simulator case study.
Translation rescoring through recurrent neural network language models
PERIS ABRIL, ÁLVARO
2014-01-01
This work is framed within the field of statistical machine translation, more specifically the language modeling challenge. This area has classically been dominated by the n-gram approach but, in recent years, different approaches have arisen to tackle this problem. One of these approaches is the use of artificial recurrent neural networks, which are expected to outperform n-gram language models. The aim of this work is to test empirically these new language...
Natural Language Video Description using Deep Recurrent Neural Networks
2015-11-23
Subhashini Venugopalan, University of Texas at Austin (vsub@cs.utexas.edu). [Abstract garbled in extraction; the surviving fragments describe translating videos to natural language with CNN-based recurrent models that exploit the temporal sequence of frames.]
Web server's reliability improvements using recurrent neural networks
Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan
2012-01-01
In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity for the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (the...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...
Training Input-Output Recurrent Neural Networks through Spectral Methods
Sedghi, Hanie; Anandkumar, Anima
2016-01-01
We consider the problem of training input-output recurrent neural networks (RNN) for sequence labeling tasks. We propose a novel spectral approach for learning the network parameters. It is based on decomposition of the cross-moment tensor between the output and a non-linear transformation of the input, based on score functions. We guarantee consistent learning with polynomial sample and computational complexity under transparent conditions such as non-degeneracy of model parameters, polynomi...
On the Efficiency of Recurrent Neural Network Optimization Algorithms
Krause, Ben; Lu, Liang; Murray, Iain; Renals, Steve
2015-01-01
This study compares the sequential and parallel efficiency of training Recurrent Neural Networks (RNNs) with Hessian-free optimization versus a gradient descent variant. Experiments are performed using the long short-term memory (LSTM) architecture and the newly proposed multiplicative LSTM (mLSTM) architecture. Results demonstrate a number of insights into these architectures and optimization algorithms, including that Hessian-free optimization has the potential for large efficiency gains in a h...
Homeostatic scaling of excitability in recurrent neural networks.
Michiel W H Remme
Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of the two neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range while the network undergoes several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity.
Predicting Chaotic Time Series Using Recurrent Neural Network
ZHANG Jia-Shu; XIAO Xian-Ci
2000-01-01
A newly proposed method, the recurrent neural network (RNN), is introduced to predict chaotic time series. The effectiveness of using an RNN for making one-step and multi-step predictions is tested on remarkably few data points from computer-generated chaotic time series. Numerical results show that the RNN proposed here is a very powerful tool for predicting chaotic time series.
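As an illustration of one-step prediction of a chaotic series with a recurrent network, the sketch below uses an echo-state-style network (fixed random recurrent weights, ridge-regression readout) on the logistic map. This is a stand-in for the paper's RNN rather than its exact model, and all sizes and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Chaotic series from the logistic map x_{t+1} = 4 x_t (1 - x_t).
N = 1200
x = np.empty(N)
x[0] = 0.2
for t in range(N - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Echo-state-style recurrent network: a fixed random reservoir driven
# by the series, with a linear readout trained by ridge regression.
n_res = 200
W_in = rng.uniform(-1, 1, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
states = np.zeros((N, n_res))
h = np.zeros(n_res)
for t in range(N):
    h = np.tanh(W @ h + W_in * x[t])
    states[t] = h

# Train a one-step-ahead readout on the first 1000 steps
# (discarding a washout), then test on the remainder.
washout, split = 100, 1000
S_tr, y_tr = states[washout:split - 1], x[washout + 1:split]
ridge = 1e-6
W_out = np.linalg.solve(S_tr.T @ S_tr + ridge * np.eye(n_res), S_tr.T @ y_tr)
y_pred = states[split:-1] @ W_out
err = np.sqrt(np.mean((y_pred - x[split + 1:]) ** 2))
print(err)
```

Multi-step prediction follows the same recipe, except the network's own output is fed back as the next input, so errors compound with the chaotic divergence of nearby trajectories.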
A Recurrent Neural Network for Warpage Prediction in Injection Molding
A. Alvarado-Iniesta
2012-11-01
Injection molding is classified as one of the most flexible and economical manufacturing processes with a high volume of plastic molded parts. Causes of variation in the process are related to the vast number of factors acting during a regular production run, which directly impact the quality of final products. A common quality problem in finished products is the presence of warpage. Thus, this study aimed to design a system based on recurrent neural networks to predict warpage defects in products manufactured through injection molding. Five process parameters are employed, being considered critical and having a great impact on the warpage of plastic components. This study used the finite element analysis software Moldflow to simulate the injection molding process and collect data in order to train and test the recurrent neural network. Recurrent neural networks were used to capture the dynamics of the process and, owing to their memorization ability, predict warpage values accurately. Results show that the designed network works well in prediction tasks, outperforming the predictions generated by feedforward neural networks.
Parameter estimation in space systems using recurrent neural networks
Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.
1991-01-01
The identification of time-varying parameters encountered in space systems is addressed using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network during two configurations corresponding to the early and late stages of the spacecraft's on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.
Recurrent Neural Network for Computing the Drazin Inverse.
Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin
2015-11-01
This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.
Ideomotor feedback control in a recurrent neural network.
Galtier, Mathieu
2015-06-01
The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should reliably predict its incoming stimuli; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show a promising performance of the approach, which can be turned into a local and more "biologically plausible" algorithm.
A Recurrent Neural Network for Nonlinear Fractional Programming
Quan-Ju Zhang
2012-01-01
This paper presents a novel recurrent continuous-time neural network model that performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized under interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to demonstrate further the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.
Convolutional neural networks for prostate cancer recurrence prediction
Kumar, Neeraj; Verma, Ruchika; Arora, Ashish; Kumar, Abhay; Gupta, Sanchit; Sethi, Amit; Gann, Peter H.
2017-03-01
Accurate prediction of the treatment outcome is important for cancer treatment planning. We present an approach to predict prostate cancer (PCa) recurrence after radical prostatectomy using tissue images. We used a cohort whose case vs. control (recurrent vs. non-recurrent) status had been determined using post-treatment follow-up. Further, to aid the development of novel biomarkers of PCa recurrence, cases and controls were paired based on matching of other predictive clinical variables such as Gleason grade, stage, age, and race. For this cohort, a tissue resection microarray with up to four cores per patient was available. The proposed approach is based on deep learning, and its novelty lies in the use of two separate convolutional neural networks (CNNs) - one to detect individual nuclei even in crowded areas, and the other to classify them. To detect nuclear centers in an image, the first CNN predicts the distance transform of the underlying (but unknown) multi-nuclear map from the input H&E image. The second CNN classifies the patches centered at nuclear centers into those belonging to cases or controls. Voting across patches extracted from image(s) of a patient yields the probability of recurrence for the patient. The proposed approach gave 0.81 AUC for a sample of 30 recurrent cases and 30 non-recurrent controls, after being trained on an independent set of 80 case-control pairs. If validated further, such an approach might help in choosing between a combination of treatment options such as active surveillance, radical prostatectomy, radiation, and hormone therapy. It can also generalize to the prediction of treatment outcomes in other cancers.
Simultaneous perturbation learning rule for recurrent neural networks and its FPGA implementation.
Maeda, Yutaka; Wakamura, Masatoshi
2005-11-01
Recurrent neural networks have interesting properties and can handle dynamic information processing unlike ordinary feedforward neural networks. However, they are generally difficult to use because there is no convenient learning scheme. In this paper, a recursive learning scheme for recurrent neural networks using the simultaneous perturbation method is described. The detailed procedure of the scheme for recurrent neural networks is explained. Unlike ordinary correlation learning, this method is applicable to analog learning and the learning of oscillatory solutions of recurrent neural networks. Moreover, as a typical example of recurrent neural networks, we consider the hardware implementation of Hopfield neural networks using a field-programmable gate array (FPGA). The details of the implementation are described. Two examples of a Hopfield neural network system for analog and oscillatory targets are shown. These results show that the learning scheme proposed here is feasible.
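The simultaneous perturbation idea in this abstract estimates a gradient from just two loss evaluations along one random ±1 direction, which is what makes it attractive for analog and FPGA hardware. Below is a minimal SPSA-style sketch on a toy loss; it is not the authors' FPGA recipe, and the gain schedules and loss function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def loss(w):
    # Toy stand-in for a network's error on a target.
    return np.sum((w - np.array([1.0, -2.0, 0.5])) ** 2)

# Simultaneous perturbation: draw one random +/-1 direction, evaluate
# the loss twice, and form a stochastic estimate of the full gradient.
w = np.zeros(3)
for k in range(1, 2001):
    a = 0.1 / k ** 0.602   # step-size schedule (commonly used exponents)
    c = 0.1 / k ** 0.101   # perturbation-size schedule
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    # Since delta_i = +/-1, multiplying by delta equals dividing by it.
    g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2 * c) * delta
    w -= a * g_hat
print(w)
```

Only two function evaluations are needed per update regardless of the number of weights, so the rule needs no backward pass at all, which is why it maps cleanly onto hardware where only the network's forward output is observable.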
Fine-tuning and the stability of recurrent neural networks.
David MacNeil
A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.
Miconi, Thomas
2017-02-23
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
Recurrent Neural Network for Single Machine Power System Stabilizer
Widi Aribowo
2010-04-01
In this paper, a recurrent neural network (RNN) is used to design a power system stabilizer (PSS) due to its advantage of depending not only on the present input but also on past conditions. A RNN-PSS is able to capture the dynamic response of a system without any delays caused by external feedback, primarily through the internal feedback loop in the recurrent neuron. In this paper, the RNN-PSS consists of a RNN-identifier and a RNN-controller. The RNN-identifier functions as the tracker of the dynamic characteristics of the plant, while the RNN-controller is used to damp the system's low frequency oscillations. Simulation results using MATLAB demonstrate that the RNN-PSS can successfully damp out oscillation and improve the performance of the system.
Estimating Ads’ Click through Rate with Recurrent Neural Network
Chen Qiao-Hong
2016-01-01
With the development of the Internet, online advertising has spread across every corner of the world, and the ads' click through rate (CTR) estimation is an important method to improve online advertising revenue. Compared with linear models, nonlinear models can learn much more complex relationships between a large number of nonlinear characteristics, so as to improve the accuracy of the estimation of the ads' CTR. The recurrent neural network (RNN) based on Long Short-Term Memory (LSTM) is an improved model of the feedback neural network with a ring structure. The model overcomes the vanishing gradient problem of the general RNN. Experiments show that the RNN based on LSTM exceeds the linear models, and it can effectively improve the estimation effect of the ads' click through rate.
Delay-slope-dependent stability results of recurrent neural networks.
Li, Tao; Zheng, Wei Xing; Lin, Chong
2011-12-01
By using the fact that the neuron activation functions are sector bounded and nondecreasing, this brief presents a new method, named the delay-slope-dependent method, for stability analysis of a class of recurrent neural networks with time-varying delays. This method includes more information on the slope of neuron activation functions and fewer matrix variables in the constructed Lyapunov-Krasovskii functional. Then some improved delay-dependent stability criteria with less computational burden and conservatism are obtained. Numerical examples are given to illustrate the effectiveness and the benefits of the proposed method.
Blind Separation by Redundancy Reduction in a Recurrent Neural Network
LIU Ju; NIE Kaibao; HE Zhenya
2001-01-01
In this paper, a novel information theory criterion is proposed for blind source separation based on a fully recurrent neural network, and a learning algorithm is then developed. Stochastic natural gradient descent algorithm is used in this algorithm. The proposed algorithm can ensure the maximization of transferred information when a Hebb term is introduced to express the derivative of information missing. At the same time, the mutual information of outputs is minimized so as to make the outputs mutually statistically independent. The computer simulation shows the validity and the good performance of the proposed algorithm.
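The stochastic natural gradient idea can be illustrated with the classic feedforward separation update W ← W + η (I − f(y)yᵀ) W; the paper's fully recurrent network and Hebb-term criterion are not reproduced here, and the sources, mixing matrix, score function, and step size below are all made-up illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent super-Gaussian (Laplacian) sources, linearly mixed.
T = 5000
S = rng.laplace(size=(2, T))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])   # unknown mixing matrix
X = A @ S                    # observed mixtures

# Stochastic natural gradient separation: W <- W + eta * (I - f(y) y^T) W,
# with score f(y) = tanh(y) (a common choice for super-Gaussian sources).
W = np.eye(2)
eta = 0.002
for _ in range(10):          # a few passes over the data
    for t in range(T):
        y = W @ X[:, t]
        W += eta * (np.eye(2) - np.outer(np.tanh(y), y)) @ W

Y = W @ X
# The recovered outputs should be far less correlated than the raw mixtures.
mix_corr = abs(np.corrcoef(X)[0, 1])
out_corr = abs(np.corrcoef(Y)[0, 1])
print(mix_corr, out_corr)
```

The natural gradient form (the trailing W) avoids a matrix inversion per step, which is what makes the stochastic online update practical.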
Learning text representation using recurrent convolutional neural network with highway layers
Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun
2016-01-01
Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle stage, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...
Lambda and the edge of chaos in recurrent neural networks.
Seifter, Jared; Reggia, James A
2015-01-01
The idea that there is an edge of chaos, a region in the space of dynamical systems having special meaning for complex living entities, has a long history in artificial life. The significance of this region was first emphasized in cellular automata models when a single simple measure, λCA, identified it as a transitional region between order and chaos. Here we introduce a parameter λNN that is inspired by λCA but is defined for recurrent neural networks. We show through a series of systematic computational experiments that λNN generally orders the dynamical behaviors of randomly connected/weighted recurrent neural networks in the same way that λCA does for cellular automata. By extending this ordering to larger values of λNN than has typically been done with λCA and cellular automata, we find that a second edge-of-chaos region exists on the opposite side of the chaotic region. These basic results are found to hold under different assumptions about network connectivity, but vary substantially in their details. The results show that the basic concept underlying the lambda parameter can usefully be extended to other types of complex dynamical systems than just cellular automata.
Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words
Qaralleh, Esam
2013-10-01
Artificial neural networks have the abilities to learn by example and are capable of solving problems that are hard to solve using ordinary rule-based programming. They have many design parameters that affect their performance such as the number and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design of experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate the experiment configuration multiple times to overcome the randomness in the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that are on the Pareto optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.
A modular architecture for transparent computation in recurrent neural networks.
Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim
2017-01-01
Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments.
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
Recurrent Neural Network Approach Based on the Integral Representation of the Drazin Inverse.
Stanimirović, Predrag S; Živković, Ivan S; Wei, Yimin
2015-10-01
In this letter, we present the dynamical equation and corresponding artificial recurrent neural network for computing the Drazin inverse of an arbitrary square real matrix, without any restriction on its eigenvalues. Conditions that ensure the stability of the defined recurrent neural network as well as its convergence toward the Drazin inverse are considered. Several illustrative examples present the results of computer simulations.
Application of simple dynamic recurrent neural networks in solid granule flowrate modeling
Du, Yun; Sun, Huiqin; Tian, Qiang; Ren, Haiping; Zhang, Suying
2008-10-01
This paper presents a solid granule flowrate model built with a simple dynamic recurrent neural network (SRNN). Because dynamic recurrent neural networks typically have intricate structures and slow training algorithms, a simple recurrent neural network without weight values on the recursion layer is studied. A recursive prediction error (RPE) learning algorithm for the SRNN, which adjusts the weight and threshold values, is derived. The modeling results for solid granule flowrate indicate that the model has a fast convergence rate and high precision, and that it can be used in real time.
Recurrent neural networks-based multivariable system PID predictive control
ZHANG Yan; WANG Fanzhen; SONG Ying; CHEN Zengqiang; YUAN Zhuzhi
2007-01-01
A nonlinear proportion integration differentiation (PID) controller is proposed on the basis of recurrent neural networks, due to the difficulty of tuning the parameters of conventional PID controllers. In the control process of a nonlinear multivariable system, a decoupling controller was constructed, which took advantage of multiple nonlinear PID controllers in parallel. With the idea of predictive control, two multivariable predictive control strategies were established. One strategy involved the use of the general minimum variance control function on the basis of a recursive multi-step predictive method. The other involved the adoption of multistep predictive cost energy to train the weights of the decoupling controller. Simulation studies have shown the efficiency of these strategies.
Dual extended Kalman filtering in recurrent neural networks.
Leung, Chi-Sing; Chan, Lai-Wan
2003-03-01
In the classical deterministic Elman model, the estimation of parameters must be very accurate. Otherwise, the system performance is very poor. To improve the system performance, we can use a Kalman filtering algorithm to guide the operation of a trained recurrent neural network (RNN). In this case, during training, we need to estimate the state of the hidden layer, as well as the weights of the RNN. This paper discusses how to use dual extended Kalman filtering (DEKF) for this dual estimation and how to use the proposed DEKF for removing some unimportant weights from a trained RNN. In our approach, one Kalman algorithm is used for estimating the state of the hidden layer, and one recursive least squares (RLS) algorithm is used for estimating the weights. After training, we use the error covariance matrix of the RLS algorithm to remove unimportant weights. Simulations showed that our approach is an effective joint learning-pruning method for RNNs under online operation.
A recurrent neural network for adaptive beamforming and array correction.
Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen
2016-08-01
In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is described as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values in the feasible region, which is derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to search for exact solutions under the condition of large scale constraints.
On-line learning algorithms for locally recurrent neural networks.
Campolucci, P; Uncini, A; Piazza, F; Rao, B D
1999-01-01
This paper focuses on on-line learning procedures for locally recurrent neural networks with emphasis on multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations which include generalized output and activation feedback multilayer networks (MLNs). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose on-line version, causal recursive backpropagation (CRBP), presents some advantages with respect to the other on-line training methods. The new CRBP algorithm includes as particular cases backpropagation (BP), temporal backpropagation (TBP), backpropagation for sequences (BPS), and the Back-Tsoi algorithm, among others, thereby providing a unifying view on gradient calculation techniques for recurrent networks with local feedback. The only learning method that has been proposed for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and higher speed of convergence with respect to the Back-Tsoi algorithm, which is supported by the theoretical development and confirmed by simulations. The computational complexity of the CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with the new CRBP method. The simulations show that CRBP exhibits similar performance and the detailed analysis of complexity reveals that CRBP is much simpler and easier to implement, e.g., CRBP is local in space and in time while RTRL is not local in space.
Detecting behavioral microsleeps using EEG and LSTM recurrent neural networks.
Davidson, P R; Jones, R D; Peiris, M T
2005-01-01
Lapses in visuomotor performance are often associated with behavioral microsleep events. Experiencing a lapse of this type while performing an important task can have catastrophic consequences. A warning system capable of reliably detecting patterns in EEG occurring before or during a lapse has the potential to save many lives. We are developing a behavioral microsleep detection system which employs Long Short-Term Memory (LSTM) recurrent neural networks. To train and validate the system, EEG, facial video and tracking data were collected from 15 subjects performing a visuomotor tracking task for two 1-hour sessions. This provided behavioral information on lapse events with good temporal resolution. We developed an automated behavior rating system and trained it to estimate the mean opinion of 3 human raters on the likelihood of a lapse. We then trained an LSTM neural network to estimate the output of the lapse rating system given only EEG spectral data. The detection system was designed to operate in real-time without calibration for individual subjects. Preliminary results show the system is not reliable enough for general use, but results from some tracking sessions encourage further investigation of the reported approach.
Recurrent Neural Network Applications for Astronomical Time Series
Protopapas, Pavlos
2017-06-01
The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.
Discussion of stability in a class of models on recurrent wavelet neural networks
DENG Ren; LI Zhu-xin; FAN You-hong
2007-01-01
Based on wavelet neural networks (WNNs) and recurrent neural networks (RNNs), a class of models on recurrent wavelet neural networks (RWNNs) is proposed. The new networks possess the advantages of WNNs and RNNs. In this paper, the asymptotic stability of RWNNs is studied according to the Lyapunov theorem, and some theorems and formulae are given. The simulation results show the excellent performance of the networks in nonlinear dynamic system recognition.
Permitted and forbidden sets in discrete-time linear threshold recurrent neural networks.
Yi, Zhang; Zhang, Lei; Yu, Jiali; Tan, Kok Kiong
2009-06-01
The concepts of permitted and forbidden sets enable a new perspective of the memory in neural networks. Such concepts exhibit interesting dynamics in recurrent neural networks. This paper studies the basic theories of permitted and forbidden sets of linear threshold discrete-time recurrent neural networks. The linear threshold transfer function has been regarded as an adequate transfer function for recurrent neural networks. Networks with this transfer function form a class of hybrid analog and digital networks which are especially useful for perceptual computations. Networks in discrete time can directly provide algorithms for efficient implementation in digital hardware. The main contribution of this paper is to establish foundations of permitted and forbidden sets. Necessary and sufficient conditions are obtained for the linear threshold discrete-time recurrent neural networks for complete convergence, existence of permitted and forbidden sets, as well as conditional multiattractivity, respectively. Simulation studies explore some possible interesting practical applications.
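The discrete-time linear threshold dynamics studied here are easy to simulate directly: x(t+1) = max(0, Wx(t) + h). With a weight matrix of spectral norm below one, the iteration is a contraction, every trajectory converges, and the set of neurons active at the fixed point is a permitted set. The specific W and h below are illustrative assumptions, not examples from the paper.

```python
import numpy as np

# Linear threshold recurrent dynamics: x(t+1) = max(0, W x(t) + h).
# This W is symmetric with spectral norm < 1, so the map is a contraction
# and the network converges from any initial state.
W = np.array([[0.4, -0.3, 0.0],
              [-0.3, 0.4, -0.2],
              [0.0, -0.2, 0.3]])
h = np.array([1.0, 0.5, 0.8])

x = np.zeros(3)
for _ in range(200):
    x = np.maximum(0.0, W @ x + h)

# The state is now (numerically) a fixed point of the dynamics; the indices
# of its strictly positive entries form the active (permitted) set.
residual = np.linalg.norm(np.maximum(0.0, W @ x + h) - x)
print(x, residual)
```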
Zhou, Liqun; Zhang, Yanyan
2016-01-01
In this paper, a class of recurrent neural networks with multi-proportional delays is studied. A nonlinear transformation converts the class of recurrent neural networks with multi-proportional delays into a class of recurrent neural networks with constant delays and time-varying coefficients. By constructing a Lyapunov functional and establishing a delay differential inequality, several delay-dependent and delay-independent sufficient conditions are derived to ensure global exponential periodicity and stability of the system. Several examples and their simulations are given to illustrate the effectiveness of the obtained results.
Application of recurrent neural networks for drought projections in California
Le, J. A.; El-Askary, H. M.; Allali, M.; Struppa, D. C.
2017-05-01
We use recurrent neural networks (RNNs) to investigate the complex interactions between the long-term trend in dryness and a projected, short but intense, period of wetness due to the 2015-2016 El Niño. Although it was forecasted that this El Niño season would bring significant rainfall to the region, our long-term projections of the Palmer Z Index (PZI) showed a continuing drought trend, contrasting with the 1998-1999 El Niño event. RNN training considered PZI data during 1896-2006 that was validated against the 2006-2015 period to evaluate the potential of extreme precipitation forecast. We achieved a statistically significant correlation of 0.610 between forecasted and observed PZI on the validation set for a lead time of 1 month. This gives strong confidence to the forecasted precipitation indicator. The 2015-2016 El Niño season proved to be relatively weak as compared with the 1997-1998, with a peak PZI anomaly of 0.242 standard deviations below historical averages, continuing drought conditions.
Railway Track Circuit Fault Diagnosis Using Recurrent Neural Networks.
de Bruin, Tim; Verbert, Kim; Babuska, Robert
2017-03-01
Timely detection and identification of faults in railway track circuits are crucial for the safety and availability of railway networks. In this paper, the use of the long short-term memory (LSTM) recurrent neural network is proposed to accomplish these tasks based on the commonly available measurement signals. By considering the signals from multiple track circuits in a geographic area, faults are diagnosed from their spatial and temporal dependences. A generative model is used to show that the LSTM network can learn these dependences directly from the data. The network correctly classifies 99.7% of the test input sequences, with no false positive fault detections. In addition, the t-Distributed Stochastic Neighbor Embedding (t-SNE) method is used to examine the resulting network, further showing that it has learned the relevant dependences in the data. Finally, we compare our LSTM network with a convolutional network trained on the same task. From this comparison, we conclude that the LSTM network architecture is better suited for the railway track circuit fault detection and identification tasks than the convolutional network.
Multiplex visibility graphs to investigate recurrent neural network dynamics
Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert
2017-03-01
A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and, typically, based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Successively, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
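The horizontal visibility graph at the heart of this method has a simple construction: two samples are linked if and only if every sample strictly between them lies below both. A minimal sketch follows (the example series is made up; in the paper one such graph is built per neuron and the per-neuron graphs are stacked into a multiplex):

```python
# Build the horizontal visibility graph (HVG) of a scalar time series:
# samples i and j are connected iff x[k] < min(x[i], x[j]) for all i < k < j.
def horizontal_visibility_edges(x):
    n = len(x)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

series = [3.0, 1.0, 2.0, 4.0, 1.5]
edges = horizontal_visibility_edges(series)
print(sorted(edges))
```

Adjacent samples are always connected, so the HVG is connected by construction; degree-based statistics of these graphs are what the paper uses to characterize reservoir dynamics.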
On the Emergent Properties of Recurrent Neural Networks at Criticality
Karimipanah, Yahya; Ma, Zhengyu; Wessel, Ralf
Irregular spiking is a widespread phenomenon in neuronal activity in vivo. In addition, it has been shown that firing rate variability decreases after the onset of external stimuli. Since these are known as two universal features of cortical activity, it is natural to ask whether there is a universal mechanism underlying such phenomena. Independently, there has been mounting evidence that superficial layers of cortex operate near a second-order phase transition (critical point), which is manifested in the form of scale-free activity. However, despite the strong evidence for such a criticality hypothesis, very little is known about how it can be leveraged to facilitate neural coding. As the decline in response variability is regarded as an essential mechanism to enhance coding efficiency, we asked whether the criticality hypothesis could bridge between scale-free activity and other ubiquitous features of cortical activity. Using a simple binary probabilistic model, we show that irregular spiking and the decline in response variability both arise as emergent properties of a recurrent network poised at criticality. Our results provide a unified explanation for the ubiquity of these two features, without a need to exploit any further mechanism.
Memory in linear recurrent neural networks in continuous time.
Hermans, Michiel; Schrauwen, Benjamin
2010-04-01
Reservoir Computing is a novel technique which employs recurrent neural networks while circumventing difficult training algorithms. A very recent trend in Reservoir Computing is the use of real physical dynamical systems as implementation platforms, rather than the customary digital emulations. Physical systems operate in continuous time, creating a fundamental difference with the classic discrete time definitions of Reservoir Computing. The specific goal of this paper is to study the memory properties of such systems, where we will limit ourselves to linear dynamics. We develop an analytical model which allows the calculation of the memory function for continuous time linear dynamical systems, which can be considered as networks of linear leaky integrator neurons. We then use this model to research memory properties for different types of reservoir. We start with random connection matrices with a shifted eigenvalue spectrum, which perform very poorly. Next, we transform two specific reservoir types, which are known to give good performance in discrete time, to the continuous time domain. Reservoirs based on uniform spreading of connection matrix eigenvalues on the unit disk in discrete time give much better memory properties than reservoirs with random connection matrices, while reservoirs based on orthogonal connection matrices in discrete time are very robust against noise and their memory properties can be tuned. The overall results found in this work yield important insights into how to design networks for continuous time.
Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.
Xia, Youshen; Wang, Jun
2015-07-01
This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable to the noise-constrained estimate. Because the noise-constrained estimate has a robust performance against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of Kalman filter parameters in non-Gaussian noise. Furthermore, having a low-dimensional model feature, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm can produce a good performance with fast computation and noise reduction.
A one-layer recurrent neural network for support vector machine learning.
Xia, Youshen; Wang, Jun
2004-04-01
This paper presents a one-layer recurrent neural network for support vector machine (SVM) learning in pattern classification and regression. The SVM learning problem is first converted into an equivalent formulation, and then a one-layer recurrent neural network for SVM learning is proposed. The proposed neural network is guaranteed to obtain the optimal solution of support vector classification and regression. Compared with the existing two-layer neural network for the SVM classification, the proposed neural network has a low complexity for implementation. Moreover, the proposed neural network can converge exponentially to the optimal solution of SVM learning. The rate of the exponential convergence can be made arbitrarily high by simply turning up a scaling parameter. Simulation examples based on benchmark problems are discussed to show the good performance of the proposed neural network for SVM learning.
Automatic Cloud Resource Scaling Algorithm based on Long Short-Term Memory Recurrent Neural Network
Ashraf A. Shahin
2016-01-01
.... This paper has proposed dynamic threshold based auto-scaling algorithms that predict required resources using Long Short-Term Memory Recurrent Neural Network and auto-scale virtual resources based on predicted values...
Speed up Training of the Recurrent Neural Network Based on Constrained Optimization Techniques
陈珂; 包威权; et al.
1996-01-01
In this paper, a constrained optimization technique is explored for a substantial problem: accelerating the training of a globally recurrent neural network. Unlike most previous methods for feedforward neural networks, the authors adopt the constrained optimization technique to improve the gradient-based algorithm of the globally recurrent neural network by adapting the learning rate during training. Using the recurrent network with the improved algorithm, experiments on two real-world problems, namely filtering additive noise in acoustic data and classification of temporal signals for speaker identification, have been performed. The experimental results show that the recurrent neural network with the improved learning algorithm trains significantly faster and achieves satisfactory performance.
Simplified Gating in Long Short-term Memory (LSTM) Recurrent Neural Networks
Lu, Yuzhen; Salem, Fathi M.
2017-01-01
The standard LSTM recurrent neural networks while very powerful in long-range dependency sequence applications have highly complex structure and relatively large (adaptive) parameters. In this work, we present empirical comparison between the standard LSTM recurrent neural network architecture and three new parameter-reduced variants obtained by eliminating combinations of the input signal, bias, and hidden unit signals from individual gating signals. The experiments on two sequence datasets ...
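As a concrete illustration of the kind of gate simplification described, the sketch below implements a standard LSTM step in NumPy together with one hypothetical parameter-reduced variant in which the gates i, f, o drop their input-weight terms; the paper's actual variants may combine eliminations differently, and the layer sizes and random weights here are invented.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_x, n_h = 8, 16                     # invented input/hidden sizes
rng = np.random.default_rng(0)

def init(shape):
    return rng.normal(0.0, 0.1, shape)

# Standard LSTM: each block (input gate i, forget gate f, output gate o,
# candidate g) has input weights, hidden-state weights, and a bias.
Wx = {k: init((n_h, n_x)) for k in "ifog"}
Wh = {k: init((n_h, n_h)) for k in "ifog"}
b  = {k: np.zeros(n_h)    for k in "ifog"}

def lstm_step(x, h, c, gate_uses_input=True):
    def gate(k):
        z = Wh[k] @ h + b[k]
        if gate_uses_input:          # the reduced variant drops this term
            z = z + Wx[k] @ x        # ...for the gates i, f, o only
        return sigmoid(z)
    i, f, o = (gate(k) for k in "ifo")
    g = np.tanh(Wx["g"] @ x + Wh["g"] @ h + b["g"])  # candidate keeps the input
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

# Parameter counts: the variant removes three input-weight matrices.
params_standard = 4 * (n_h * n_x + n_h * n_h + n_h)
params_variant  = params_standard - 3 * n_h * n_x

x, h, c = rng.normal(size=n_x), np.zeros(n_h), np.zeros(n_h)
h1, c1 = lstm_step(x, h, c, gate_uses_input=False)
```

For these sizes the variant drops 384 of 1600 adaptive parameters while keeping the cell's recurrence intact.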
Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.
Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus
2017-01-01
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin
2015-01-01
mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online... correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking...
Qin, Sitian; Fan, Dejun; Su, Peng; Liu, Qinghe
2014-04-01
In this paper, the optimization techniques for solving pseudoconvex optimization problems are investigated. A simplified recurrent neural network is proposed according to the optimization problem. We prove that the optimal solution of the optimization problem is just the equilibrium point of the neural network, and vice versa if the equilibrium point satisfies the linear constraints. The proposed neural network is proven to be globally stable in the sense of Lyapunov and convergent to an exact optimal solution of the optimization problem. A numerical simulation is given to illustrate the global convergence of the neural network. Applications in business and chemistry are given to demonstrate the effectiveness of the neural network.
A novel compensation-based recurrent fuzzy neural network and its learning algorithm
WU Bo; WU Ke; LU JianHong
2009-01-01
Based on a detailed study of several kinds of fuzzy neural networks, we propose a novel compensation-based recurrent fuzzy neural network (CRFNN) by adding a recurrent element and a compensatory element to the conventional fuzzy neural network. We then propose a sequential learning method for the structure identification of the CRFNN in order to determine the fuzzy rules and their correlative parameters effectively. Furthermore, we improve the BP algorithm based on the characteristics of the proposed CRFNN to train the network. By modeling typical nonlinear systems, we draw the conclusion that the proposed CRFNN has excellent dynamic response and strong learning ability.
Zhang, Wei; Li, Chuandong; Huang, Tingwen; He, Xing
2015-12-01
Synchronization of an array of linearly coupled memristor-based recurrent neural networks with impulses and time-varying delays is investigated in this brief. Based on the Lyapunov function method, an extended Halanay differential inequality and a new delay impulsive differential inequality, some sufficient conditions are derived, which depend on impulsive and coupling delays to guarantee the exponential synchronization of the memristor-based recurrent neural networks. Impulses with and without delay and time-varying delay are considered for modeling the coupled neural networks simultaneously, which renders more practical significance of our current research. Finally, numerical simulations are given to verify the effectiveness of the theoretical results.
Adaptive learning with guaranteed stability for discrete-time recurrent neural networks
Anonymous
2007-01-01
To avoid unstable learning, a stable adaptive learning algorithm was proposed for discrete-time recurrent neural networks. Unlike the dynamic gradient methods, such as backpropagation through time and real-time recurrent learning, the weights of the recurrent neural networks were updated online in terms of Lyapunov stability theory in the proposed learning algorithm, so the learning stability was guaranteed. With the inversion of the activation function of the recurrent neural networks, the proposed learning algorithm can be easily implemented for solving varying nonlinear adaptive learning problems and fast convergence of the adaptive learning process can be achieved. Simulation experiments in pattern recognition show that only 5 iterations are needed for the storage of a 15×15 binary image pattern and only 9 iterations are needed for the perfect realization of an analog vector by an equilibrium state with the proposed learning algorithm.
Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network
Ma Qian-Li; Zheng Qi-Lun; Peng Hong; Zhong Tan-Wei; Qin Jiang-Wei
2008-01-01
This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series; it estimates the proper parameters of phase-space reconstruction and optimizes the structure of recurrent neural networks by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method dynamically combines the embedding method with the capability of recurrent neural networks to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series, and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.
Computationally efficient locally-recurrent neural networks for online signal processing
Hussain, A; Shim, I
1999-01-01
A general class of computationally efficient locally recurrent networks (CERN) is described for real-time adaptive signal processing. The structure of the CERN is based on linear-in-the-parameters single-hidden-layer feedforward neural networks such as the radial basis function (RBF) network, the Volterra neural network (VNN) and the functionally expanded neural network (FENN), adapted to employ local output feedback. The corresponding learning algorithms are derived and key structural and computational complexity comparisons are made between the CERN and conventional recurrent neural networks. Two case studies are performed involving the real-time adaptive nonlinear prediction of real-world chaotic, highly non-stationary laser time series and an actual speech signal, which show that a recurrent FENN based adaptive CERN predictor can significantly outperform the corresponding feedforward FENN and conventionally employed linear adaptive filtering models.
An attractor-based complexity measurement for Boolean recurrent neural networks.
Cabessa, Jérémie; Villa, Alessandro E P
2014-01-01
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.
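The ω-automata construction is beyond a short example, but the notion of attractor dynamics in a Boolean recurrent network can be made concrete. The sketch below simulates an invented 4-unit threshold network under synchronous deterministic updates and enumerates its attractors by following every orbit of the finite state space until it cycles; the weight matrix is made up for the illustration.

```python
import numpy as np
from itertools import product

# An invented 4-unit Boolean recurrent network with threshold activation;
# deterministic synchronous updates make every orbit eventually periodic.
W = np.array([[ 0.0,  1.0, -1.0,  0.0],
              [ 1.0,  0.0,  0.0, -1.0],
              [-1.0,  0.0,  0.0,  1.0],
              [ 0.0, -1.0,  1.0,  0.0]])

def step(x):
    """One synchronous update: unit fires iff its weighted input is positive."""
    return tuple((W @ np.array(x) > 0.0).astype(int))

def attractor(x0):
    """Follow the orbit of x0 until a state repeats; return the cycle,
    rotated so the lexicographically smallest state comes first."""
    seen, orbit, x = {}, [], x0
    while x not in seen:
        seen[x] = len(orbit)
        orbit.append(x)
        x = step(x)
    cycle = orbit[seen[x]:]
    k = cycle.index(min(cycle))
    return tuple(cycle[k:] + cycle[:k])

# Enumerate the whole state space (2^4 states); distinct attractors emerge.
attractors = {attractor(x) for x in product((0, 1), repeat=4)}
```

The all-zero state is a fixed point of this particular matrix, so at least one attractor always appears; richer weight choices yield longer cycles, which is the raw material the paper's complexity hierarchy classifies.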
Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen
2013-02-01
This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are no longer applicable for solving it. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region, which leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.
EMP response modeling of TVS based on the recurrent neural network
Zhiqiang JI
2015-04-01
Due to the large workload of the transmission line pulse (TLP) testing method and the poor consistency between its test results and actual conditions, a modeling method based on the recurrent neural network is proposed for EMP response forecasting. Based on the TLP testing system, two further categories of EMP are added: the machine-model ESD EMP and the human-metal-model ESD EMP. An Elman neural network, a Jordan neural network and their combination, namely the Elman-Jordan neural network, are established for response modeling of the NUP2105L transient voltage suppressor (TVS), forecasting the response under different EMPs. The simulation results show that the recurrent neural network has satisfactory modeling effects and high computation efficiency.
Predicting recurrent aphthous ulceration using genetic algorithms-optimized neural networks
Najla S Dar-Odeh
2010-05-01
Najla S Dar-Odeh1, Othman M Alsmadi2, Faris Bakri3, Zaer Abu-Hammour2, Asem A Shehabi3, Mahmoud K Al-Omiri1, Shatha M K Abu-Hammad4, Hamzeh Al-Mashni4, Mohammad B Saeed4, Wael Muqbil4, Osama A Abu-Hammad1. 1Faculty of Dentistry, 2Faculty of Engineering and Technology, 3Faculty of Medicine, University of Jordan, Amman, Jordan; 4Dental Department, University of Jordan Hospital, Amman, Jordan. Objective: To construct and optimize a neural network that is capable of predicting the occurrence of recurrent aphthous ulceration (RAU) based on a set of appropriate input data. Participants and methods: Artificial neural network (ANN) software employing genetic algorithms to optimize the architecture of the neural networks was used. Input and output data of 86 participants (predisposing factors and status of the participants with regard to recurrent aphthous ulceration) were used to construct and train the neural networks. The optimized neural networks were then tested using untrained data of a further 10 participants. Results: The optimized neural network which produced the most accurate predictions for the presence or absence of recurrent aphthous ulceration was found to employ: gender, hematological (with or without ferritin) and mycological data of the participants, frequency of tooth brushing, and consumption of vegetables and fruits. Conclusions: Factors appearing to be related to recurrent aphthous ulceration and appropriate for use as input data to construct ANNs that predict recurrent aphthous ulceration were found to include the following: gender, hemoglobin, serum vitamin B12, serum ferritin, red cell folate, salivary candidal colony count, frequency of tooth brushing, and the number of fruits or vegetables consumed daily. Keywords: artificial neural networks, recurrent, aphthous ulceration, ulcer
AN INTELLIGENT CONTROL SYSTEM BASED ON RECURRENT NEURAL FUZZY NETWORK AND ITS APPLICATION TO CSTR
JIA Li; YU Jinshou
2005-01-01
In this paper, an intelligent control system based on a recurrent neural fuzzy network is presented for complex, uncertain and nonlinear processes, in which a recurrent neural fuzzy network is used as a controller (RNFNC) to control a process adaptively and a recurrent neural network based on a recursive predictive error algorithm (RNNM) is utilized to estimate the gradient information ∂y/∂u for optimizing the parameters of the controller. Compared with many neural fuzzy control systems, it uses a recurrent neural network to realize the fuzzy controller. Moreover, the recursive predictive error (RPE) algorithm is implemented to construct the RNNM online. Lastly, in order to evaluate the performance of the proposed control system, it is applied to a continuously stirred tank reactor (CSTR). Simulation comparisons, based on control effect and output error, with a general fuzzy controller and a feed-forward neural fuzzy network controller (FNFNC), are conducted. In addition, the rates of convergence of the RNNM using the RPE algorithm and a gradient learning algorithm respectively are also compared. The results show that the proposed control system is better for controlling uncertain and nonlinear processes.
Phase transitions in contagion processes mediated by recurrent mobility patterns
Balcan, Duygu; 10.1038/nphys1944
2011-01-01
Human mobility and activity patterns mediate contagion on many levels, including the spatial spread of infectious diseases, diffusion of rumors, and emergence of consensus. These patterns however are often dominated by specific locations and recurrent flows and poorly modeled by the random diffusive dynamics generally used to study them. Here we develop a theoretical framework to analyze contagion within a network of locations where individuals recall their geographic origins. We find a phase transition between a regime in which the contagion affects a large fraction of the system and one in which only a small fraction is affected. This transition cannot be uncovered by continuous deterministic models due to the stochastic features of the contagion process and defines an invasion threshold that depends on mobility parameters, providing guidance for controlling contagion spread by constraining mobility processes. We recover the threshold behavior by analyzing diffusion processes mediated by real human commutin...
Financial Time Series Prediction Using Elman Recurrent Random Neural Networks
Jie Wang
2016-01-01
... (ERNN); the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices.
A novel nonlinear adaptive filter using a pipelined second-order Volterra recurrent neural network.
Zhao, Haiquan; Zhang, Jiashu
2009-12-01
To enhance the performance and overcome the heavy computational complexity of recurrent neural networks (RNN), a novel nonlinear adaptive filter based on a pipelined second-order Volterra recurrent neural network (PSOVRNN) is proposed in this paper. A modified real-time recurrent learning (RTRL) algorithm for the proposed filter is derived in detail. The PSOVRNN comprises a number of simple small-scale second-order Volterra recurrent neural network (SOVRNN) modules. In contrast to the standard RNN, the modules of a PSOVRNN can be executed simultaneously in a pipelined parallel fashion, which leads to a significant improvement in total computational efficiency. Moreover, since each module of the PSOVRNN is an SOVRNN in which nonlinearity is introduced by the recursive second-order Volterra (RSOV) expansion, its performance can be further improved. Computer simulations have demonstrated that the PSOVRNN performs better than the pipelined recurrent neural network (PRNN) and the RNN for nonlinear colored signal prediction and nonlinear channel equalization. However, the superiority of the PSOVRNN over the PRNN comes at the cost of increased computational complexity due to the introduced nonlinear expansion of each module.
Lin, Yang-Yin; Chang, Jyh-Yeong; Lin, Chin-Teng
2013-02-01
This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed by external loops and internal feedback, feeding the rule firing strength of each rule to other rules and to itself. The consequent part of the IRSFNN is of a Takagi-Sugeno-Kang (TSK) or functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of the fuzzy rules to promote the mapping ability. Unlike a TSK-type fuzzy neural network, the FLNN in the consequent part is a nonlinear function of the input variables. An IRSFNN's learning starts with an empty rule base, and all of the rules are generated and learned online through simultaneous structure and parameter learning. An online clustering algorithm is effective in generating fuzzy rules. The consequent update parameters are derived by a variable-dimensional Kalman filter algorithm. The premise and recurrent parameters are learned through a gradient descent algorithm. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.
Dynamic Hand Gesture Recognition for Wearable Devices with Low Complexity Recurrent Neural Networks
Shin, Sungho; Sung, Wonyong
2016-01-01
Gesture recognition is a very essential technology for many wearable devices. While previous algorithms are mostly based on statistical methods including the hidden Markov model, we develop two dynamic hand gesture recognition techniques using low complexity recurrent neural network (RNN) algorithms. One is based on video signal and employs a combined structure of a convolutional neural network (CNN) and an RNN. The other uses accelerometer data and only requires an RNN. Fixed-point optimizat...
Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network
Yao, Weigang; Liou, Meng-Sing
2012-01-01
The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.
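The aero-elastic training data are not reproducible here, but the generic mechanism can be sketched on a stand-in system: fit a static Gaussian RBF network to one-step data of a simple map by linear least squares, then use it recurrently, feeding its own output back as the next input. The surrogate system (the logistic map), centers, and kernel width are all invented for this example.

```python
import numpy as np

# Stand-in "high-fidelity" system: the logistic map (invented surrogate;
# the paper's aero-elastic simulation data are not available here).
def f(x):
    return 3.5 * x * (1.0 - x)

# Training trajectory of one-step input/output pairs.
xs = [0.2]
for _ in range(300):
    xs.append(f(xs[-1]))
xs = np.array(xs)
x_in, x_out = xs[:-1], xs[1:]

# Static Gaussian RBF network fitted by linear least squares.
centers = np.linspace(0.0, 1.0, 25)
width = 0.1

def phi(x):
    x = np.atleast_1d(x)
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

w, *_ = np.linalg.lstsq(phi(x_in), x_out, rcond=None)

# The ROM is recurrent in use: its own output is fed back as the next input.
def rollout(x0, n):
    out, x = [], x0
    for _ in range(n):
        x = (phi(x) @ w).item()
        out.append(x)
    return np.array(out)

pred = rollout(xs[50], 5)     # closed-loop multi-step prediction
true = xs[51:56]              # reference from the original system
err = np.max(np.abs(pred - true))
```

Once fitted, each rollout step costs only one small matrix-vector product, which is the "fraction of the cost" argument for ROMs in miniature.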
Learning Topology and Dynamics of Large Recurrent Neural Networks
She, Yiyuan; He, Yuejia; Wu, Dapeng
2014-11-01
Large-scale recurrent networks have drawn increasing attention recently because of their capabilities in modeling a large variety of real-world phenomena and physical mechanisms. This paper studies how to identify all authentic connections and estimate system parameters of a recurrent network, given a sequence of node observations. This task becomes extremely challenging in modern network applications, because the available observations are usually very noisy and limited, and the associated dynamical system is strongly nonlinear. By formulating the problem as multivariate sparse sigmoidal regression, we develop simple-to-implement network learning algorithms, with rigorous convergence guarantee in theory, for a variety of sparsity-promoting penalty forms. A quantile variant of progressive recurrent network screening is proposed for efficient computation and allows for direct cardinality control of network topology in estimation. Moreover, we investigate recurrent network stability conditions in Lyapunov's sense, and integrate such stability constraints into sparse network learning. Experiments show excellent performance of the proposed algorithms in network topology identification and forecasting.
Lai, Dihui; Brandt, Sebastian; Luksch, Harald; Wessel, Ralf
2011-02-01
Topographically organized neurons represent multiple stimuli within complex visual scenes and compete for subsequent processing in higher visual centers. The underlying neural mechanisms of this process have long been elusive. We investigate an experimentally constrained model of a midbrain structure: the optic tectum and the reciprocally connected nucleus isthmi. We show that a recurrent antitopographic inhibition mediates the competitive stimulus selection between distant sensory inputs in this visual pathway. This recurrent antitopographic inhibition is fundamentally different from surround inhibition in that it projects on all locations of its input layer, except to the locus from which it receives input. At a larger scale, the model shows how a focal top-down input from a forebrain region, the arcopallial gaze field, biases the competitive stimulus selection via the combined activation of a local excitation and the recurrent antitopographic inhibition. Our findings reveal circuit mechanisms of competitive stimulus selection and should motivate a search for anatomical implementations of these mechanisms in a range of vertebrate attentional systems.
Recurrent Artificial Neural Networks and Finite State Natural Language Processing.
Moisl, Hermann
It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for natural language processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…
Homeostatic scaling of excitability in recurrent neural networks.
M.W.H. Remme; W.J. Wadman
2012-01-01
Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which
Synchronization control of memristor-based recurrent neural networks with perturbations.
Wang, Weiping; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian
2014-05-01
In this paper, the synchronization control of memristor-based recurrent neural networks with impulsive perturbations or boundary perturbations is studied. We find that the memristive connection weights have a certain relationship with the stability of the system. Some criteria are obtained to guarantee that memristive neural networks have strong noise tolerance capability. Two kinds of controllers are designed so that the memristive neural networks with perturbations can converge to the equilibrium points, which evoke human's memory patterns. The analysis in this paper employs the differential inclusions theory and the Lyapunov functional method. Numerical examples are given to show the effectiveness of our results.
Kato, Hideyuki; Ikeguchi, Tohru
2016-01-01
Specific memory might be stored in a subnetwork consisting of a small population of neurons. To select neurons involved in memory formation, neural competition might be essential. In this paper, we show that excitable neurons are competitive and organize into two assemblies in a recurrent network with spike timing-dependent synaptic plasticity (STDP) and axonal conduction delays. Neural competition is established by the cooperation of spontaneously induced neural oscillation, axonal conduction delays, and STDP. We also suggest that the competition mechanism in this paper is one of the basic functions required to organize memory-storing subnetworks into fine-scale cortical networks.
Finite-time synchronization control of a class of memristor-based recurrent neural networks.
Jiang, Minghui; Wang, Shuangtao; Mei, Jun; Shen, Yanjun
2015-03-01
This paper presents a global and local finite-time synchronization control law for memristor neural networks. By utilizing the drive-response concept, differential inclusions theory, and the Lyapunov functional method, we establish several sufficient conditions for finite-time synchronization between the master and the corresponding slave memristor-based neural network with the designed controller. In comparison with the existing results, the proposed stability conditions are new, and the obtained results extend some previous works on conventional recurrent neural networks. Two numerical examples are provided to illustrate the effectiveness of the design method.
Xia, Youshen; Feng, Gang; Wang, Jun
2004-09-01
This paper presents a recurrent neural network for solving strict convex quadratic programming problems and related linear piecewise equations. Compared with the existing neural networks for quadratic program, the proposed neural network has a one-layer structure with a low model complexity. Moreover, the proposed neural network is shown to have a finite-time convergence and exponential convergence. Illustrative examples further show the good performance of the proposed neural network in real-time applications.
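The paper's exact one-layer model is not given in the abstract; the following sketch shows a generic projection-type recurrent network for a box-constrained strictly convex QP, discretized by the Euler method. The QP instance, gain, step size, and iteration count are invented for the illustration.

```python
import numpy as np

# Invented strictly convex QP:  minimize 0.5*x'Qx + c'x  s.t.  0 <= x <= 1.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 1.0])

def project(v):
    """Projection onto the feasible box [0, 1]^2."""
    return np.clip(v, 0.0, 1.0)

# Projection-type recurrent network, Euler-discretized:
#     dx/dt = -x + P(x - a*(Qx + c))
x = np.array([0.5, 0.5])
a, h = 0.2, 0.1          # gain and integration step (chosen ad hoc)
for _ in range(2000):
    x = x + h * (-x + project(x - a * (Q @ x + c)))

# An equilibrium satisfies x = P(x - a*(Qx + c)), which is exactly the
# KKT condition of the QP; for this instance the optimum is x* = (1, 0).
```

The single projection layer is what makes such models "one-layer": the only nonlinearity is the clip onto the feasible set.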
Qin, Sitian; Yang, Xiudong; Xue, Xiaoping; Song, Jiahui
2017-10-01
Pseudoconvex optimization, an important class of nonconvex optimization, plays an important role in scientific and engineering applications. In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems with equality and inequality constraints. It is proved that from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural network converges to an optimal solution of the problem. Compared with related existing recurrent neural networks for pseudoconvex optimization problems, the proposed neural network does not need penalty parameters and has better convergence. Meanwhile, the proposed neural network is used to solve three nonsmooth optimization problems, and we make detailed comparisons with known related results. In the end, some numerical examples are provided to illustrate the effectiveness of the proposed neural network.
Stack- and Queue-like Dynamics in Recurrent Neural Networks
Grüning, A
2006-01-01
What dynamics do simple recurrent networks (SRNs) develop to represent stack-like and queue-like memories? SRNs have been widely used as models in cognitive science. However, they are interesting in their own right as non-symbolic computing devices from the viewpoints of analogue computing and dynamical systems theory. In this paper, SRNs are trained on two prototypical formal languages with recursive structures that need stack-like or queue-like memories for processing, respectively. The ev...
Direction-of-change forecasting using a volatility-based recurrent neural network
Bekiros, S.D.; Georgoutsos, D.A.
2008-01-01
This paper investigates the profitability of a trading strategy, based on recurrent neural networks, that attempts to predict the direction-of-change of the market in the case of the NASDAQ composite index. The sample extends over the period 8 February 1971 to 7 April 1998, while the sub-period 8 Ap
Evaluation of Heart Rate Variability by Using Wavelet Transform and a Recurrent Neural Network
2007-11-02
A method for evaluating heart rate variability is proposed. This method combines the wavelet transform with a recurrent neural network. The features of the proposed method are as follows: 1. The wavelet transform is utilized for feature extraction so that the local change of heart rate variability in the time-frequency domain can
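The wavelet feature-extraction step can be sketched with one level of the Haar transform (a simple stand-in; the paper does not specify its wavelet here): the detail coefficients localize sudden changes in an RR-interval series in time, which is exactly the local time-frequency information the recurrent network would then consume.

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar wavelet transform.
    Returns (approximation, detail) coefficients; the detail band
    highlights local changes in the series."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass: local averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass: local changes
    return a, d

# Hypothetical RR-interval series (seconds) with one abrupt change:
rr = [0.80, 0.82, 0.81, 0.79, 1.10, 0.80, 0.80, 0.80]
approx, detail = haar_level(rr)
```

Because the Haar basis is orthonormal, the transform preserves signal energy, so no information is lost between levels.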
Folk music style modelling by recurrent neural networks with long short term memory units
Sturm, Bob; Santos, João Felipe; Korshunova, Iryna
2015-01-01
We demonstrate two generative models created by training a recurrent neural network (RNN) with three hidden layers of long short-term memory (LSTM) units. This extends past work in numerous directions, including training deeper models with nearly 24,000 high-level transcriptions of folk tunes. We discuss our on-going work.
Tyukin, Ivan; van Leeuwen, Cees
2007-01-01
We address the important theoretical question why a recurrent neural network with fixed weights can adaptively classify time-varied signals in the presence of additive noise and parametric perturbations. We provide a mathematical proof assuming that unknown parameters are allowed to enter the signal nonlinearly and the noise amplitude is sufficiently small.
Congestion Control for ATM Networks Based on Diagonal Recurrent Neural Networks
HuangYunxian; YanWei
1997-01-01
An adaptive control model and its algorithms, based on simple diagonal recurrent neural networks, are presented for dynamic congestion control in broadband ATM networks. Two simple dynamic queuing models of real networks are used to test the performance of the suggested control scheme.
A one-layer recurrent neural network for constrained nonconvex optimization.
Li, Guocheng; Yan, Zheng; Wang, Jun
2015-01-01
In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any neural state of the proposed neural network is convergent to its equilibrium point set which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performances of the proposed neural network.
A one-layer recurrent neural network for constrained nonsmooth invex optimization.
Li, Guocheng; Yan, Zheng; Wang, Jun
2014-02-01
Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network.
Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.
Daniel Bush
The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode a range of spatial and non-spatial stimuli. It has also been demonstrated that phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotyped learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta-coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phases, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.
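The pair-based STDP window underlying such learning can be sketched as follows (illustrative parameters, not the paper's fitted rule): potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, each decaying exponentially with the spike-time difference.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt_ms = t_post - t_pre.
    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses.
    Amplitudes and time constants here are generic illustrative values."""
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_plus)    # potentiation
    return -a_minus * np.exp(dt_ms / tau_minus)      # depression
```

Under this rule, two cells firing at the same theta phase repeatedly sample both signs of dt and develop roughly symmetric connections, whereas a consistent phase lead yields asymmetric (sequence-encoding) connections, matching the dichotomy described in the abstract.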
Towards a Unified Recurrent Neural Network Theory:The Uniformly Pseudo-Projection-Anti-Monotone Net
Zong Ben XU; Chen QIAO
2011-01-01
In the past decades, various neural network models have been developed for modeling the behavior of the human brain or for problem-solving through simulating its behavior. Recurrent neural networks are the type of neural networks used to model or simulate associative memory behavior of human beings. A recurrent neural network (RNN) can generally be formalized as a dynamic system associated with two fundamental operators: one is the nonlinear activation operator deduced from the input-output properties of the involved neurons, and the other is the synaptic connections (a matrix) among the neurons. Through carefully examining the properties of various activation functions used, we introduce a novel type of monotone operators, the uniformly pseudo-projection-anti-monotone (UPPAM) operators, to unify the various RNN models that have appeared in the literature. We develop a unified encoding and stability theory for the UPPAM network model when time is discrete. The established model and theory not only unify but also jointly generalize the best-known results on RNNs. The approach takes a visible step towards the establishment of a unified mathematical theory of recurrent neural networks.
Artificial neural network in studying factors of hepatic cancer recurrence after hepatectomy
HE Jia; HE Xian-min; ZHANG Zhi-jian
2002-01-01
Objective: To explore the factors affecting liver cancer recurrence after hepatectomy. Methods: A BP artificial neural network combined with Cox regression was introduced to analyze the factors of recurrence in 1,457 patients. Results: Factors statistically significant for liver cancer prognosis were selected: 18 factors by uni-factor analysis and 9 factors by multi-factor analysis. Conclusion: The 9 selected factors can be used as important indexes to evaluate the recurrence of liver cancer after hepatectomy. The artificial neural network is a good method for analyzing clinical data, providing scientific and objective evidence for evaluating the prognosis of liver cancer.
Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks
Pyle, Ryan; Rosenbaum, Robert
2017-01-01
Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
Xing Yin
2011-01-01
This paper is concerned with uncertain periodic switched recurrent neural networks with time-varying delays. When an uncertain discrete-time recurrent neural network is a periodic system, it can be expressed as a switched neural network with a finite number of switching states. Based on the switched quadratic Lyapunov functional (SQLF) approach and the free-weighting matrix (FWM) approach, some linear matrix inequality criteria are found to guarantee the delay-dependent asymptotic stability of these systems. Two examples illustrate the exactness of the proposed criteria.
Lo, James Ting-Ho
2009-11-01
By a fundamental neural filtering theorem, a recurrent neural network with fixed weights is known to be capable of adapting to an uncertain environment. This letter reports some mathematical results on the performance of such adaptation for series-parallel identification of a dynamical system as compared with the performance of the best series-parallel identifier possible under the assumption that the precise value of the uncertain environmental process is given. In short, if an uncertain environmental process is observable (not necessarily constant) from the output of a dynamical system or constant (not necessarily observable), then a recurrent neural network exists as a series-parallel identifier of the dynamical system whose output approaches the output of an optimal series-parallel identifier using the environmental process as an additional input.
Rigotti, Mattia; Rubin, Daniel Ben Dayan; Wang, Xiao-Jing; Fusi, Stefano
2010-01-01
Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation. PMID:21048899
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.
Ly, Cheng
2015-12-01
Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely play significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, from which simpler analytic descriptions are also developed. The final analytic descriptions provide compact formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.
A two-layer recurrent neural network for nonsmooth convex optimization problems.
Qin, Sitian; Xue, Xiaoping
2015-06-01
In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.
Dynamic stability conditions for Lotka-Volterra recurrent neural networks with delays.
Yi, Zhang; Tan, K K
2002-07-01
The Lotka-Volterra model of neural networks, derived from the membrane dynamics of competing neurons, has found successful applications in many "winner-take-all" types of problems. This paper studies the dynamic stability properties of general Lotka-Volterra recurrent neural networks with delays. Conditions for nondivergence of the neural networks are derived. These conditions are based on local inhibition of networks, thereby allowing these networks to possess a multistability property. Multistability is a necessary property of a network that enables important neural computations such as those governing the decision-making process. Under these nondivergence conditions, a compact set that globally attracts all trajectories of a network can be computed explicitly. If the connection weight matrix of a network is symmetric in some sense, and the delays of the network are in L2 space, we can prove that the network will have the property of complete stability.
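A standard winner-take-all instance of this model class (without the delays studied in the paper) can be sketched as follows: each unit obeys dx_i/dt = x_i(b_i - x_i - k·Σ_{j≠i} x_j), and with mutual inhibition k > 1 only the unit with the largest input b_i survives.

```python
import numpy as np

def lv_winner_take_all(b, k=2.0, dt=0.01, steps=5000):
    """Euler simulation of a delay-free Lotka-Volterra network:
        dx_i/dt = x_i * (b_i - x_i - k * sum_{j != i} x_j).
    With k > 1 the competition is winner-take-all: the unit with the
    largest b_i converges to b_i and suppresses all others."""
    b = np.asarray(b, dtype=float)
    x = np.full(len(b), 0.1)      # small positive initial activity
    for _ in range(steps):
        total = x.sum()
        x += dt * x * (b - x - k * (total - x))  # total - x = sum over j != i
        x = np.maximum(x, 0.0)    # Lotka-Volterra states stay nonnegative
    return x
```

The winner-take-all equilibrium (one unit at b_i, the rest at zero) is one of the multiple stable equilibria that make multistability analysis relevant for this model class.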
Simultaneous multichannel signal transfers via chaos in a recurrent neural network.
Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Furumai, Noriyuki; Nara, Shigetoshi
2015-05-01
We propose a neural network model that demonstrates the phenomenon of signal transfer between separated neuron groups via other chaotic neurons that show no apparent correlations with the input signal. The model is a recurrent neural network in which it is supposed that synchronous behavior between small groups of input and output neurons has been learned as fragments of high-dimensional memory patterns, and depletion of neural connections results in chaotic wandering dynamics. Computer experiments show that when a strong oscillatory signal is applied to an input group in the chaotic regime, the signal is successfully transferred to the corresponding output group, although no correlation is observed between the input signal and the intermediary neurons. Signal transfer is also observed when multiple signals are applied simultaneously to separate input groups belonging to different memory attractors. In this sense simultaneous multichannel communications are realized, and the chaotic neural dynamics acts as a signal transfer medium in which the signal appears to be hidden.
A non-penalty recurrent neural network for solving a class of constrained optimization problems.
Hosseini, Alireza
2016-01-01
In this paper, we describe a methodology for analyzing the convergence of differential inclusion-based neural networks for solving nonsmooth optimization problems. For a general differential inclusion, we show that if its right-hand-side set-valued map satisfies some conditions, then the solution trajectory of the differential inclusion converges to the optimal solution set of the corresponding optimization problem. Based on this methodology, we introduce a new recurrent neural network for solving nonsmooth optimization problems. The objective function need not be convex on R^n, nor does the new neural network model require any penalty parameter. We compare our new method with some penalty-based and non-penalty-based models. Moreover, for differentiable cases, we present a circuit diagram of the new neural network.
Ship motion extreme short time prediction of ship pitch based on diagonal recurrent neural network
SHEN Yan; XIE Mei-ping
2005-01-01
A DRNN (diagonal recurrent neural network) and its RPE (recurrent prediction error) learning algorithm are proposed in this paper. The simple structure of the DRNN reduces the computational burden. The principle of the RPE learning algorithm is to adjust weights along the Gauss-Newton direction; it does not require calculating second-order derivatives or inverse matrices, and its unbiasedness is proved. Applied to the extremely short-time prediction of large-ship pitch, satisfactory results are obtained. The prediction performance of this algorithm is compared with that of auto-regression and the periodogram method, and the comparison results show that the proposed algorithm is feasible.
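The defining feature of a DRNN is that each hidden unit feeds back only onto itself, so the recurrent weight matrix is diagonal. A minimal forward-pass sketch (the RPE/Gauss-Newton training rule from the paper is omitted; all sizes and initializations here are illustrative):

```python
import numpy as np

class DiagonalRNN:
    """Minimal diagonal recurrent network: hidden state update
    h_j(t) = tanh(w_d[j] * h_j(t-1) + w_in[j] . x(t)), so the only
    recurrence is each unit's self-feedback weight w_d[j]."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.w_d = rng.uniform(-0.5, 0.5, n_hidden)   # diagonal recurrence
        self.w_out = rng.normal(0.0, 0.5, n_hidden)
        self.h = np.zeros(n_hidden)

    def step(self, x):
        self.h = np.tanh(self.w_d * self.h + self.w_in @ x)
        return float(self.w_out @ self.h)
```

Compared with a fully recurrent layer, the diagonal structure needs only n extra weights rather than n^2, which is the "simple structure" the abstract credits with reducing computation.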
Duan, Lian; Huang, Lihong; Guo, Zhenyuan
2016-07-01
In this paper, the problems of robust dissipativity and robust exponential dissipativity are discussed for a class of recurrent neural networks with time-varying delay and discontinuous activations. We extend an invariance principle for the study of the dissipativity problem of delay systems to the discontinuous case. Based on the developed theory, some novel criteria for checking the global robust dissipativity and global robust exponential dissipativity of the addressed neural network model are established by constructing appropriate Lyapunov functionals and employing the theory of Filippov systems and matrix inequality techniques. The effectiveness of the theoretical results is shown by two examples with numerical simulations.
鄢田云; 张翠芳; 靳蕃
2003-01-01
Identification simulation for dynamical systems based on a genetic algorithm (GA) and a recurrent multilayer neural network (RMNN) is presented. To reduce the number of model inputs, an RMNN, which can remember and store some previous parameters, is used as the identifier. Owing to its efficiency in global optimization, a genetic algorithm is introduced to train the RMNN. Simulation results show the effectiveness of the proposed scheme. Under the same training algorithm, the identification performance of the RMNN is superior to that of a nonrecurrent multilayer neural network (NRMNN).
Stability Analysis for Recurrent Neural Networks with Time-varying Delay
Yuan-Yuan Wu; Yu-Qiang Wu
2009-01-01
This paper is concerned with the stability analysis for static recurrent neural networks (RNNs) with time-varying delay. By Lyapunov functional method and linear matrix inequality technique, some new delay-dependent conditions are established to ensure the asymptotic stability of the neural network. Expressed in linear matrix inequalities (LMIs), the proposed delay-dependent stability conditions can be checked using the recently developed algorithms. A numerical example is given to show that the obtained conditions can provide less conservative results than some existing ones.
Dynamical stability analysis of delayed recurrent neural networks with ring structure
Zhang, Huaguang; Huang, Yujiao; Cai, Tiaoyang; Wang, Zhanshan
2014-04-01
In this paper, multistability is discussed for delayed recurrent neural networks with ring structure and multi-step piecewise linear activation functions. Sufficient criteria are obtained to check the existence of multiple equilibria. A lemma is proposed to explore the number and the crossing direction of purely imaginary roots of the characteristic equation corresponding to the neural network model. The stability of all equilibria is investigated. The work improves and extends existing stability results in the literature. Finally, two examples are given to illustrate the effectiveness of the obtained results.
Design and analysis of a novel chaotic diagonal recurrent neural network
Wang, Libiao; Meng, Zhuo; Sun, Yize; Guo, Lei; Zhou, Mingxing
2015-09-01
A chaotic neural network model with logistic mapping is proposed to improve the performance of the conventional diagonal recurrent neural network. The network shows rich dynamic behaviors that contribute to escaping from a local minimum to reach the global minimum easily. Then, a simple parameter modulated chaos controller is adopted to enhance convergence speed of the network. Furthermore, an adaptive learning algorithm with the robust adaptive dead zone vector is designed to improve the generalization performance of the network, and weights convergence for the network with the adaptive dead zone vectors is proved in the sense of Lyapunov functions. Finally, the numerical simulation is carried out to demonstrate the correctness of the theory.
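The chaos source named in the abstract is the logistic map, x_{n+1} = r·x_n(1 - x_n), which is fully chaotic at r = 4. A minimal sketch (how the sequence is injected into the DRNN's weights or activations is the paper's design and is not reproduced here):

```python
def logistic_map(x0=0.3, r=4.0, n=100):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n).
    At r = 4.0 the orbit is chaotic on [0, 1]; in a chaotic DRNN such a
    sequence typically perturbs the search so training can escape
    local minima before a controller damps the chaos for convergence."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs
```

The "parameter modulated chaos controller" mentioned in the abstract would then gradually reduce the effective chaos (e.g. by annealing the injection gain) so the network settles into the minimum it has found.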
A Study on Protein Residue Contacts Prediction by Recurrent Neural Network
Liu Gui-xia; Zhu Yuan-xian; Zhou Wen-gang; Huang Yan-xin; Zhou Chun-guang; Wang Rong-xing
2005-01-01
A new method is described for using a recurrent neural network with bias units to predict contact maps in proteins. The main inputs to the neural network include residue pairs; residue classification as hydrophobic, polar, acidic, or basic; secondary structure information; and the residue separation between two residues. In our work, a dataset was used which was composed of 53 globulin proteins of known 3D structure. An average predictive accuracy of 0.29 was obtained. Our results demonstrate the viability of the approach for predicting contact maps.
A recurrent neural network for solving a class of generalized convex optimization problems.
Hosseini, Alireza; Wang, Jun; Hosseini, S Mohammad
2013-08-01
In this paper, we propose a penalty-based recurrent neural network for solving a class of constrained optimization problems with generalized convex objective functions. The model has a simple structure described by a differential inclusion. It is also applicable to any nonsmooth optimization problem with affine equality and convex inequality constraints, provided that the objective function is regular and pseudoconvex on the feasible region of the problem. It is proven herein that the state vector of the proposed neural network globally converges to the feasible region in finite time and stays there thereafter, and converges to the optimal solution set of the problem.
Nonlinear dynamics of direction-selective recurrent neural media.
Xie, Xiaohui; Giese, Martin A
2002-05-01
The direction selectivity of cortical neurons can be accounted for by asymmetric lateral connections. Such lateral connectivity leads to a network dynamics with characteristic properties that can be exploited for distinguishing in neurophysiological experiments this mechanism for direction selectivity from other possible mechanisms. We present a mathematical analysis for a class of direction-selective neural models with asymmetric lateral connections. Contrasting with earlier theoretical studies that have analyzed approximations of the network dynamics by neglecting nonlinearities using methods from linear systems theory, we study the network dynamics with nonlinearity taken into consideration. We show that asymmetrically coupled networks can stabilize stimulus-locked traveling pulse solutions that are appropriate for the modeling of the responses of direction-selective neurons. In addition, our analysis shows that outside a certain regime of stimulus speeds the stability of these solutions breaks down, giving rise to lurching activity waves with specific spatiotemporal periodicity. These solutions, and the bifurcation by which they arise, cannot be easily accounted for by classical models for direction selectivity.
Understanding Gating Operations in Recurrent Neural Networks through Opinion Expression Extraction
Xin Wang
2016-08-01
Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as a word-level sequence labeling problem. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM in selecting the source of information with its flexible connections and multiplicative gating operations.
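The multiplicative gating operations the paper analyzes are the standard LSTM update, sketched below from the textbook equations (the weight shapes and initialization are illustrative, not the paper's trained model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, params):
    """One LSTM step. params holds input weights W* (hidden x input),
    recurrent weights U* (hidden x hidden), and biases b* for the
    gates i, f, o and the candidate g."""
    W, U, b = params["W"], params["U"], params["b"]
    i = sigmoid(W["i"] @ x + U["i"] @ h + b["i"])  # input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h + b["f"])  # forget gate
    o = sigmoid(W["o"] @ x + U["o"] @ h + b["o"])  # output gate
    g = np.tanh(W["g"] @ x + U["g"] @ h + b["g"])  # candidate cell value
    c_new = f * c + i * g        # multiplicative gating of the memory cell
    h_new = o * np.tanh(c_new)   # multiplicative gating of the output
    return h_new, c_new

def init_params(n_in, n_hid, seed=0):
    rng = np.random.default_rng(seed)
    gates = "ifog"
    return {
        "W": {k: rng.normal(0, 0.1, (n_hid, n_in)) for k in gates},
        "U": {k: rng.normal(0, 0.1, (n_hid, n_hid)) for k in gates},
        "b": {k: np.zeros(n_hid) for k in gates},
    }
```

The element-wise products f*c, i*g, and o*tanh(c) are the gates through which the network "selects the source of information": each gate value near 0 or 1 shuts off or passes a particular channel.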
Delay dependent stability criteria for recurrent neural networks with time varying delays
Zhanshan WANG; Huaguang ZHANG
2009-01-01
This paper aims to present some delay-dependent global asymptotic stability criteria for recurrent neural networks with time-varying delays. The obtained results place no restriction on the magnitude of the derivative of the time-varying delay and can be easily checked due to their linear matrix inequality form. By comparison with some previous results, the obtained results are less conservative. A numerical example is utilized to demonstrate the effectiveness of the obtained results.
Complex Dynamical Network Control for Trajectory Tracking Using Delayed Recurrent Neural Networks
Jose P. Perez
2014-01-01
In this paper, the problem of trajectory tracking is studied. Based on V-stability and Lyapunov theory, a control law is obtained that achieves global asymptotic stability of the tracking error between a delayed recurrent neural network and a complex dynamical network. To illustrate the analytic results, we present a tracking simulation of a dynamical network whose nodes are one Lorenz dynamical system and three identical Chen dynamical systems.
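The Lorenz system used as a network node above can be simulated directly. The following sketch integrates a single Lorenz node with the classical parameters using fourth-order Runge-Kutta; the step size and horizon are arbitrary choices for illustration, not the paper's simulation setup.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classical Lorenz equations: one chaotic node of the dynamical network.
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(f, s0, dt=0.01, steps=2000):
    # Plain fourth-order Runge-Kutta integration of ds/dt = f(s).
    s = np.array(s0, dtype=float)
    for _ in range(steps):
        k1 = f(s)
        k2 = f(s + dt / 2 * k1)
        k3 = f(s + dt / 2 * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

final = integrate(lorenz, [1.0, 1.0, 1.0])   # state after 20 time units
```

Despite the chaotic dynamics, the trajectory stays on the bounded Lorenz attractor, which is what makes tracking such a node a meaningful control target.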
Chuangxia Huang
2011-01-01
Stability of reaction-diffusion recurrent neural networks (RNNs) with continuously distributed delays and stochastic influence is considered. Some new sufficient conditions to guarantee the almost sure exponential stability and the mean square exponential stability of an equilibrium solution are obtained, respectively. Lyapunov's functional method, M-matrix properties, some inequality techniques, and the nonnegative semimartingale convergence theorem are used in our approach. The obtained conclusions improve some published results.
Li, Xiangang; Wu, Xihong
2014-01-01
Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, in this research, deep extensions on LSTM are investigated considering that deep hierarchical model has turned out to be more efficient than a shallow one. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed an...
Generalized cost-criterion-based learning algorithm for diagonal recurrent neural networks
Wang, Yongji; Wang, Hong
2000-05-01
A new generalized cost-criterion-based learning algorithm for diagonal recurrent neural networks is presented; it takes the form of a recursive prediction error (RPE) algorithm and has second-order convergence. A guideline for the choice of the optimal learning rate is derived from convergence analysis. The application of this method to the dynamic modeling of typical chemical processes shows that the generalized cost criterion RPE (GRPE) has higher modeling precision than a BP-trained MLP and the quadratic-cost-criterion-trained RPE (QRPE).
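The defining feature of a diagonal recurrent neural network is that the hidden-layer recurrence matrix is diagonal, i.e. each hidden unit feeds back only to itself. A minimal forward pass, with made-up sizes and random weights (the paper's RPE training is not reproduced here), might look like:

```python
import numpy as np

def drnn_forward(xs, W_in, w_rec, W_out):
    """Diagonal RNN: the recurrent weight matrix is diagonal, so it is
    stored as a vector w_rec and applied elementwise."""
    h = np.zeros(W_in.shape[0])
    ys = []
    for x in xs:
        h = np.tanh(W_in @ x + w_rec * h)    # elementwise self-recurrence only
        ys.append(W_out @ h)
    return np.array(ys)

rng = np.random.default_rng(1)
D, H = 2, 6                                  # illustrative input / hidden sizes
W_in = rng.standard_normal((H, D)) * 0.5
w_rec = rng.uniform(-0.9, 0.9, H)            # diagonal recurrent weights
W_out = rng.standard_normal((1, H)) * 0.5
ys = drnn_forward(rng.standard_normal((10, D)), W_in, w_rec, W_out)
```

Because the recurrence is diagonal, the network has far fewer recurrent parameters than a fully connected RNN, which is what makes second-order training schemes like RPE tractable.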
LIU Hai-feng; WANG Chun-hua; WEI Guo-liang
2008-01-01
The exponential stability problem is investigated for a class of stochastic recurrent neural networks with time delay and Markovian switching. By using Itô's differential formula and Lyapunov stability theory, a sufficient condition for the solvability of this problem is derived in terms of linear matrix inequalities, which can be easily checked by resorting to available software packages. A numerical example and its simulation are exploited to demonstrate the effectiveness of the proposed results.
INFLUENCE OF NOISE AND DELAY ON REACTION-DIFFUSION RECURRENT NEURAL NETWORKS
Li Wu
2006-01-01
In this paper, the influence of the noise and delay upon the stability property of reaction-diffusion recurrent neural networks (RNNs) with the time-varying delay is discussed. The new and easily verifiable conditions to guarantee the mean value exponential stability of an equilibrium solution are derived. The rate of exponential convergence can be estimated by means of a simple computation based on these criteria.
Non-Minimum Phase Nonlinear System Predictive Control Based on Local Recurrent Neural Networks
张燕; 陈增强; 袁著祉
2003-01-01
After a recursive multi-step-ahead predictor for nonlinear systems based on local recurrent neural networks is introduced, an intelligent PID controller is adopted to correct the errors, including identified model errors and accumulated errors produced in the recursive process. Characterized by predictive control, this method achieves good control accuracy and has good robustness. A simulation study shows that this control algorithm is very effective.
R. Selva Santhose Kumar; S.M. Girirajkumar
2014-01-01
In this study, a Particle Swarm Optimization (PSO) Recurrent Neural Network (RNN) based Z-source inverter fed induction motor drive is proposed. The proposed method is used to enhance the performance of the induction motor while reducing the Total Harmonic Distortion (THD) and eliminating the oscillation period of the stator current, torque, and speed. Here, the PSO technique uses the induction motor speed and the reference speed as the input parameters. From the input parameters, it optim...
Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P
2017-03-01
In this paper adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as a controller. Also, in example 4, the FCRNN is investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, on which the proposed controller is applied. The results so obtained show the superiority of the DRNN over the MLFFNN as a controller.
Hajihosseini, Amirhossein, E-mail: hajihosseini@khayam.ut.ac.ir [School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5746 (Iran, Islamic Republic of); Center of Excellence in Biomathematics, School of Mathematics, Statistics and Computer Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of); Maleki, Farzaneh, E-mail: farzanmaleki83@khayam.ut.ac.ir [School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of); School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5746 (Iran, Islamic Republic of); Center of Excellence in Biomathematics, School of Mathematics, Statistics and Computer Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of); Rokni Lamooki, Gholam Reza, E-mail: rokni@khayam.ut.ac.ir [School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of); School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5746 (Iran, Islamic Republic of); Center of Excellence in Biomathematics, School of Mathematics, Statistics and Computer Science, University of Tehran, Tehran 14176-14411 (Iran, Islamic Republic of)
2011-11-15
Highlights: We construct a recurrent neural network by generalizing a specific n-neuron network. Several codimension 1 and 2 bifurcations take place in the newly constructed network. The newly constructed network has higher capabilities to learn periodic signals. The normal form theorem is applied to investigate the dynamics of the network. A series of bifurcation diagrams is given to support the theoretical results. Abstract: A class of recurrent neural networks is constructed by generalizing a specific class of n-neuron networks. It is shown that the newly constructed network experiences generic pitchfork and Hopf codimension one bifurcations. It is also proved that the emergence of generic Bogdanov-Takens, pitchfork-Hopf and Hopf-Hopf codimension two bifurcation points, and of the degenerate Bogdanov-Takens bifurcation point, in the parameter space is possible due to the intersections of codimension one bifurcation curves. The occurrence of bifurcations of higher codimensions significantly increases the capability of the newly constructed recurrent neural network to learn broader families of periodic signals.
Banaei, M.R., E-mail: m.banaei@azaruniv.ed [Electrical Engineering Department, Faculty of Engineering, Azarbaijan University of Tarbiat Moallem, Tabriz (Iran, Islamic Republic of); Kami, A. [Electrical Engineering Department, Faculty of Engineering, Azarbaijan University of Tarbiat Moallem, Tabriz (Iran, Islamic Republic of)
2011-07-15
Highlights: A method is presented to improve power system stability using an IPFC. Recurrent neural network controllers damp oscillations in a power system. Training is based on back propagation with adaptive training parameters. Selection of the most effective damping control signal is carried out using the SVD method. Abstract: This paper presents a method to improve power system stability using IPFC-based online-learning recurrent neural network controllers for damping oscillations in a power system. The parameters of controllers equipping the IPFC to enhance dynamical stability are usually tuned with mathematical methods; these control parameters are therefore often fixed and set for particular system configurations or operating points. A multilayer recurrent neural network, which can be tuned for changing system conditions, is used in this paper to effectively damp the oscillations. Training is based on back propagation with adaptive training parameters. This controller is tested against variations in system loading and a fault in the power system, and its performance is compared with that of a controller whose parameters are set by the phase compensation method. Selection of the most effective damping control signal for the design of a robust IPFC damping controller is carried out through the singular value decomposition (SVD) method. Simulation studies show the superior robustness and stabilizing effect of the proposed controller in comparison with the phase compensation method.
Model for a flexible motor memory based on a self-active recurrent neural network.
Boström, Kim Joris; Wagner, Heiko; Prieske, Markus; de Lussanet, Marc
2013-10-01
Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model that is focused on the aspects of a flexible motor memory for the storage of elementary movement patterns in the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it is capable of continuously modulating the stored patterns. The modulation consists of an approximately linear inter- and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", which amounts to the permanent shifting of computational load from higher- to lower-level neural structures, and which might help to explain why humans are able to execute learned skills in a fluent and flexible manner without the need for attention to the details of the movement.
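The reservoir computing approach underlying this model can be sketched as an echo state network: a fixed random recurrent layer scaled to a spectral radius below one, with only a linear readout trained. The toy task, reservoir size, and ridge regularization below are illustrative assumptions, not the authors' setup (they use roughly a thousand neurons and measured muscular data).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100                                      # small reservoir for the sketch
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
W_in = rng.uniform(-1, 1, (N, 1))

# Drive the fixed reservoir with a sine; collect states for readout training.
T = 300
u = np.sin(np.arange(T) * 0.1)[:, None]
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Train only the linear readout (ridge regression) to predict the next value.
target = np.sin((np.arange(T) + 1) * 0.1)
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
pred = states @ W_out
err = np.sqrt(np.mean((pred[50:] - target[50:]) ** 2))  # RMSE after washout
```

Keeping the recurrent weights fixed and training only the readout is what lets such a reservoir store and recombine patterns cheaply, the property the motor-memory model exploits.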
Improved Generalization in Recurrent Neural Networks Using the Tangent Plane Algorithm
P May
2014-01-01
The tangent plane algorithm for real-time recurrent learning (TPA-RTRL) is an effective online training method for fully recurrent neural networks. TPA-RTRL uses the method of approaching tangent planes to accelerate the learning process. Compared to the original gradient descent real-time recurrent learning algorithm (GD-RTRL), it is very fast and avoids problems like local minima of the search space. However, the TPA-RTRL algorithm actively encourages the formation of large weight values that can be harmful to generalization. This paper presents a new TPA-RTRL variant that encourages small weight values to decay to zero by using a weight elimination procedure built into the geometry of the algorithm. Experimental results show that the new algorithm gives good generalization over a range of network sizes whilst retaining the fast convergence speed of the TPA-RTRL algorithm.
Lin, Faa-Jeng; Shieh, Po-Huang
2006-12-01
A recurrent radial basis function network (RBFN) based fuzzy neural network (FNN) control system is proposed in this study to control the position of an X-Y-theta motion control stage using linear ultrasonic motors (LUSMs) to track various contours. The proposed recurrent RBFN-based FNN combines the merits of the self-constructing fuzzy neural network (SCFNN), the recurrent neural network (RNN), and the RBFN. Moreover, the structure and parameter learning phases of the recurrent RBFN-based FNN are performed concurrently and online. The structure learning is based on the partition of the input space, and the parameter learning is based on the supervised gradient descent method using a delta adaptation law. The experimental results for various contours show that the dynamic behaviors of the proposed recurrent RBFN-based FNN control system are robust with regard to uncertainties.
An R implementation of a Recurrent Neural Network Trained by Extended Kalman Filter
Bogdan Oancea
2016-06-01
Nowadays there are several techniques used for forecasting, with different performances and accuracies. One of the best-performing techniques for time series prediction is neural networks. The accuracy of the predictions greatly depends on the network architecture and training method. In this paper we describe an R implementation of a recurrent neural network trained by the Extended Kalman Filter. For the implementation of the network we used the Matrix package, which allows efficient vector-matrix and matrix-matrix operations. We tested the performance of our R implementation by comparing it with a pure C++ implementation, and showed that R can achieve about 75% of the performance of the C++ program. Considering the other advantages of R, our results recommend R as a serious alternative to classical programming languages for high-performance implementations of neural networks.
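To illustrate the Extended Kalman Filter as a trainer, the sketch below estimates only the linear output weights of a small fixed random recurrent network, so the measurement Jacobian is simply the hidden state vector. This is a deliberately simplified, hypothetical setup (the paper's R implementation trains the full network), written in Python purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 8, 1
W = rng.standard_normal((N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))   # stable fixed recurrence
W_in = rng.uniform(-1, 1, (N, D))

# EKF state = output weights w; measurement = one-step prediction of the signal.
w = np.zeros(N)
P = np.eye(N) * 10.0         # covariance of the weight estimate
R = 1e-2                     # assumed measurement noise variance
x = np.zeros(N)
u = np.sin(np.arange(500) * 0.2)
for t in range(499):
    x = np.tanh(W @ x + W_in @ u[t:t + 1])
    H = x                                # output is linear in w, so Jacobian = x
    e = u[t + 1] - w @ x                 # innovation (prediction error)
    S = H @ P @ H + R                    # innovation variance (scalar)
    K = P @ H / S                        # Kalman gain
    w = w + K * e                        # weight update
    P = P - np.outer(K, H @ P)           # covariance update
final_err = abs(u[499] - w @ x)
```

Because the output is linear in the estimated weights, each update is an exact Kalman step; training the recurrent weights too, as the paper does, requires propagating Jacobians through the recurrence.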
Complete stability of delayed recurrent neural networks with Gaussian activation functions.
Liu, Peng; Zeng, Zhigang; Wang, Jun
2017-01-01
This paper addresses the complete stability of delayed recurrent neural networks with Gaussian activation functions. By means of the geometrical properties of the Gaussian function and algebraic properties of nonsingular M-matrices, some sufficient conditions are obtained to ensure that an n-neuron neural network has exactly 3^k equilibrium points with 0 ≤ k ≤ n, among which 2^k equilibrium points are locally exponentially stable and 3^k − 2^k are unstable. Moreover, it is concluded that all states converge to one of the equilibrium points; i.e., the neural networks are completely stable. The derived conditions can be easily tested. Finally, a numerical example is given to illustrate the theoretical results.
Xiao, Min; Zheng, Wei Xing; Jiang, Guoping; Cao, Jinde
2015-12-01
In this paper, a fractional-order recurrent neural network is proposed and several topics related to the dynamics of such a network are investigated, such as the stability, Hopf bifurcations, and undamped oscillations. The stability domain of the trivial steady state is completely characterized with respect to network parameters and orders of the commensurate-order neural network. Based on the stability analysis, the critical values of the fractional order are identified, where Hopf bifurcations occur and a family of oscillations bifurcate from the trivial steady state. Then, the parametric range of undamped oscillations is also estimated and the frequency and amplitude of oscillations are determined analytically and numerically for such commensurate-order networks. Meanwhile, it is shown that the incommensurate-order neural network can also exhibit a Hopf bifurcation as the network parameter passes through a critical value which can be determined exactly. The frequency and amplitude of bifurcated oscillations are determined.
Robust recurrent neural network modeling for software fault detection and correction prediction
Hu, Q.P. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: g0305835@nus.edu.sg; Xie, M. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: mxie@nus.edu.sg; Ng, S.H. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: isensh@nus.edu.sg; Levitin, G. [Israel Electric Corporation, Reliability and Equipment Department, R and D Division, Haifa 31000 (Israel)]. E-mail: levitin@iec.co.il
2007-03-15
Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, and fault correction process is assumed to be a delayed process. On the other hand, the artificial neural networks model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic networks configuration approach is developed with genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are developed with respect to a real data set.
Robust exponential stability analysis of a larger class of discrete-time recurrent neural networks
(no author listed)
2007-01-01
The robust exponential stability of a larger class of discrete-time recurrent neural networks (RNNs) is explored in this paper. A novel neural network model, named the standard neural network model (SNNM), is introduced to provide a general framework for the stability analysis of RNNs. Most existing RNNs can be transformed into SNNMs to be analyzed in a unified way. Applying the Lyapunov stability theory and the S-procedure technique, two useful criteria of robust exponential stability for discrete-time SNNMs are derived. The conditions presented are formulated as linear matrix inequalities (LMIs), which can be easily solved using existing efficient convex optimization techniques. An example is presented to demonstrate the transformation procedure and the effectiveness of the results.
Haojie Liu
2016-01-01
The paper presents a digital adaptive controller of recurrent neural networks for the active flutter suppression of a wing structure over a wide transonic range. The basic idea behind the controller is as follows. First, the parameters of the recurrent neural networks, such as the number of neurons and the learning rate, are determined so as to suppress the flutter under a specific flight condition in the transonic regime. Then, the controller automatically adjusts itself to a new flight condition by updating the synaptic weights of the networks online via the real-time recurrent learning algorithm. Hence, the controller is able to suppress the aeroelastic instability of the wing structure over a range of flight conditions in the transonic regime. To demonstrate the effectiveness and robustness of the controller, the aeroservoelastic model of a typical fighter wing with a tip missile was established and a single-input/single-output controller was synthesized. Numerical open- and closed-loop aeroservoelastic simulations were made to demonstrate the efficacy of the adaptive controller with respect to the change of flight parameters in the transonic regime.
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-01
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation. PMID:26797612
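The convolutional-plus-recurrent pipeline described above can be caricatured in a few lines: a 1-D convolution extracts temporal feature maps from multichannel sensor data, and a recurrent layer summarizes them into a window-level class distribution. For brevity this sketch uses a vanilla tanh RNN in place of LSTM units, with random untrained weights and invented sizes (64-sample windows, 3 channels, 5 classes), so it shows only the data flow, not the trained model.

```python
import numpy as np

def conv1d(x, kernels):
    # x: (T, C) multichannel sensor window; kernels: (K, width, C).
    K, width, C = kernels.shape
    T = x.shape[0] - width + 1
    out = np.zeros((T, K))
    for k in range(K):
        for t in range(T):
            out[t, k] = np.sum(x[t:t + width] * kernels[k])
    return np.maximum(out, 0.0)              # ReLU feature maps

def rnn_pool(feats, W_h, W_x):
    # Recurrent layer over conv features; last hidden state summarizes the window.
    h = np.zeros(W_h.shape[0])
    for f in feats:
        h = np.tanh(W_h @ h + W_x @ f)
    return h

rng = np.random.default_rng(4)
x = rng.standard_normal((64, 3))             # 64 samples, 3 sensor channels
kernels = rng.standard_normal((8, 5, 3)) * 0.1
feats = conv1d(x, kernels)                   # (60, 8) temporal feature maps
W_h = rng.standard_normal((16, 16)) * 0.1
W_x = rng.standard_normal((16, 8)) * 0.1
h = rnn_pool(feats, W_h, W_x)
logits = rng.standard_normal((5, 16)) @ h    # 5 hypothetical activity classes
probs = np.exp(logits) / np.sum(np.exp(logits))
```

The convolution handles local motion primitives while the recurrence models how they unfold over the window, matching point (iv) of the framework's design goals.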
Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection
Erik Marchi
2017-01-01
In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% average F-measure over the three databases.
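The reconstruction-error principle used here for novelty detection can be demonstrated with a much simpler stand-in for the LSTM autoencoder: a linear (PCA) autoencoder fitted to synthetic "normal" frames, with a threshold on reconstruction error flagging novel frames. All data, sizes, and the threshold rule below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
# "Normal" audio-like frames lie near a low-dimensional subspace plus noise.
basis = rng.standard_normal((3, 20))
normal = rng.standard_normal((200, 3)) @ basis + 0.01 * rng.standard_normal((200, 20))

# Linear autoencoder via PCA: encode to the top 3 principal components, decode back.
mean = normal.mean(axis=0)
U, s, Vt = np.linalg.svd(normal - mean, full_matrices=False)
decoder = Vt[:3]                              # (3, 20) principal directions

def recon_error(frame):
    z = (frame - mean) @ decoder.T            # encode
    return np.linalg.norm((frame - mean) - z @ decoder)  # decode and compare

errs = np.array([recon_error(f) for f in normal])
threshold = errs.mean() + 3 * errs.std()      # simple novelty threshold
novel = rng.standard_normal(20) * 2.0         # an off-subspace (novel) frame
is_novel = recon_error(novel) > threshold
```

The autoencoder reconstructs what it was trained on well and everything else poorly, so a large reconstruction error is exactly the "activation signal" the abstract describes.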
A generalized LSTM-like training algorithm for second-order recurrent neural networks.
Monner, Derek; Reggia, James A
2012-01-01
The long short-term memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM's original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the generalized long short-term memory (LSTM-g) training algorithm, which provides LSTM-like locality while being applicable without modification to a much wider range of second-order network architectures. With LSTM-g, all units have an identical set of operating instructions for both activation and learning, subject only to the configuration of their local environment in the network; this is in contrast to the original LSTM training algorithm, where each type of unit has its own activation and training instructions. When applied to LSTM architectures with peephole connections, LSTM-g takes advantage of an additional source of back-propagated error which can enable better performance than the original algorithm. Enabled by the broad architectural applicability of LSTM-g, we demonstrate that training recurrent networks engineered for specific tasks can produce better results than single-layer networks. We conclude that LSTM-g has the potential to both improve the performance and broaden the applicability of spatially and temporally local gradient-based training algorithms for recurrent neural networks.
Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.
Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng
2016-02-01
This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.
Low-complexity nonlinear adaptive filter based on a pipelined bilinear recurrent neural network.
Zhao, Haiquan; Zeng, Xiangping; He, Zhengyou
2011-09-01
To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architecture of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules that are cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Since the modules of the PBLRNN can be executed simultaneously in a pipelined parallel fashion, a significant improvement in computational efficiency results. Moreover, due to the nested modules, the performance of the PBLRNN can be further improved. To suit the modular architecture, a modified adaptive amplitude real-time recurrent learning algorithm is derived from the gradient descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance compared to the single BLRNN and RNN models.
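A single bilinear recurrent neuron of the kind the BLRNN is built from combines autoregressive, input, and cross (bilinear) terms. A hedged sketch with arbitrary tap counts and random coefficients (not the paper's trained filter) follows:

```python
import numpy as np

def blrnn_step(y_hist, x_hist, a, b, c):
    """One bilinear recurrent neuron:
    y(n) = sum_i a_i y(n-i) + sum_{i,j} b_ij y(n-i) x(n-j) + sum_j c_j x(n-j)."""
    return float(a @ y_hist + y_hist @ b @ x_hist + c @ x_hist)

rng = np.random.default_rng(6)
P, Q = 2, 3                                   # feedback and input tap counts
a = rng.uniform(-0.3, 0.3, P)                 # autoregressive coefficients
b = rng.uniform(-0.1, 0.1, (P, Q))            # bilinear cross-term coefficients
c = rng.uniform(-0.5, 0.5, Q)                 # moving-average coefficients

y_hist = np.zeros(P)
x = rng.standard_normal(50)
ys = []
for n in range(Q - 1, 50):
    x_hist = x[n - Q + 1:n + 1][::-1]         # most recent input first
    y = blrnn_step(y_hist, x_hist, a, b, c)
    ys.append(y)
    y_hist = np.concatenate(([y], y_hist[:-1]))   # shift the feedback taps
```

The bilinear term y(n-i)x(n-j) is what gives the filter nonlinear modeling power beyond a linear ARMA structure, and it is also the source of the computational cost the pipelined architecture is designed to reduce.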
Grinke, Eduard
2015-10-01
Walking animals, like insects, can effectively perform complex behaviors with little neural computing. They can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with unknown situations. As a consequence, they successfully navigate through their complex environment. These versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle to avoid obstacles in different situations, as well as escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.
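The two-neuron recurrent sensory network described in this abstract relies on hysteresis to hold a turning decision in short-term memory. The learned weights of the original network are not given here, so the sketch below uses illustrative hand-picked weights (self-excitation strong enough for bistability) to show the effect: sweeping the same input up and then down leaves the network in two different stable states.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def settle(o, u, w_self=6.0, w_cross=0.5, bias=-3.25, steps=200):
    """Relax the two fully connected neurons to a fixed point for input u."""
    o1, o2 = o
    for _ in range(steps):
        o1, o2 = (sigmoid(w_self * o1 + w_cross * o2 + bias + u),
                  sigmoid(w_self * o2 + w_cross * o1 + bias + u))
    return o1, o2

def sweep(inputs):
    """Sweep the input while carrying the state: the network's short-term memory."""
    state, trace = (0.5, 0.5), {}
    for u in inputs:
        state = settle(state, u)
        trace[round(u, 1)] = state[0]
    return trace

up = sweep([k / 10.0 for k in range(-30, 31)])        # rising input: state stays low
down = sweep([k / 10.0 for k in range(30, -31, -1)])  # falling input: state stays high
print(up[0.0], down[0.0])  # same input u = 0, two different stable outputs
```

The two branches at u = 0 are the hysteresis loop: which turning angle the network outputs depends on the recent history of its input, not just its current value.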
Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi
2017-03-01
Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenging problem in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have applied deep bidirectional LSTM recurrent neural networks to the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, consistently improves over a similar method using a traditional, window-based neural network (SPINE-D) on all datasets tested, without separate training on short and long disordered regions. Independent tests on four other datasets, including the datasets from critical assessment of structure prediction (CASP) techniques and >10 000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTM with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-Disorder is available as a web server and as a standalone program at: http://sparks-lab.org/server/SPOT-disorder/index.php . j.hanson@griffith.edu.au or yuedong.yang@griffith.edu.au or yaoqi.zhou@griffith.edu.au. Supplementary data are available at Bioinformatics online.
Iterative prediction of chaotic time series using a recurrent neural network
Essawy, M.A.; Bodruzzaman, M. [Tennessee State Univ., Nashville, TN (United States). Dept. of Electrical and Computer Engineering; Shamsi, A.; Noel, S. [USDOE Morgantown Energy Technology Center, WV (United States)
1996-12-31
Chaotic systems are known for their unpredictability due to their sensitive dependence on initial conditions. When only time series measurements from such systems are available, neural network based models are preferred due to their simplicity, availability, and robustness. However, the type of neural network used should be capable of modeling the highly non-linear behavior and the multi-attractor nature of such systems. In this paper the authors use a special type of recurrent neural network called the "Dynamic System Imitator" (DSI), which has been proven capable of modeling very complex dynamic behaviors. The DSI is a fully recurrent neural network that is specially designed to model a wide variety of dynamic systems. The prediction method presented in this paper is based upon predicting one step ahead in the time series, and using that predicted value to iteratively predict the following steps. This method was applied to chaotic time series generated from the logistic, Henon, and cubic equations, in addition to experimental pressure drop time series measured from a Fluidized Bed Reactor (FBR), which is known to exhibit chaotic behavior. The time behavior and state space attractors of the actual and network-synthesized chaotic time series were analyzed and compared. The correlation dimension and the Kolmogorov entropy for both the original and the network-synthesized data were computed. They were found to resemble each other, confirming the success of the DSI based chaotic system modeling.
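The DSI network itself is not specified in this abstract, so as a stand-in the sketch below fits a simple quadratic one-step-ahead predictor to a logistic-map series and then applies it iteratively, feeding each prediction back as the next input, which is the prediction scheme the abstract describes.

```python
import numpy as np

# Chaotic series from the logistic map x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# One-step-ahead predictor; a fitted quadratic stands in for the trained network.
predict = np.poly1d(np.polyfit(x[:-1], x[1:], deg=2))

# Iterative prediction: each predicted value is fed back as the next input.
horizon = 8
pred = [x[400]]
for _ in range(horizon):
    pred.append(predict(pred[-1]))

err = float(np.max(np.abs(np.array(pred) - x[400:400 + horizon + 1])))
print(err)  # small at short horizons; grows exponentially, as chaos dictates
```

Because the fitted model is nearly exact here, the iterated forecast tracks the true trajectory for several steps; with any model error, sensitive dependence on initial conditions makes the iterated error grow exponentially with the horizon.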
Chu, Chia-Chi; Tsai, Hung-Chi; Chang, Wei-Neng
A Lyapunov-based recurrent neural network unified power flow controller (UPFC) is developed for improving the transient stability of power systems. First, a simple UPFC dynamical model, composed of a controllable shunt susceptance on the shunt side and an ideal complex transformer on the series side, is utilized to analyze UPFC dynamical characteristics. Secondly, we study the control configuration of the UPFC with two major blocks: the primary control and the supplementary control. The primary control is implemented by standard PI techniques when the power system is operated in a normal condition. The supplementary control becomes effective only when the power system is subjected to large disturbances. We propose a new Lyapunov-based UPFC controller of the classical single-machine-infinite-bus system for damping enhancement. In order to consider more complicated, detailed generator models, we also propose a Lyapunov-based adaptive recurrent neural network controller to deal with such model uncertainties. This controller can be treated as a neural network approximation of Lyapunov control actions. In addition, this controller also provides online learning ability to adjust the corresponding weights with the backpropagation algorithm built into the hidden layer. The proposed control scheme has been tested on two simple power systems. Simulation results demonstrate that the proposed control strategy is very effective for suppressing power swing even under severe system conditions.
Reinforced recurrent neural networks for multi-step-ahead flood forecasts
Chen, Pin-An; Chang, Li-Chiu; Chang, Fi-John
2013-08-01
Considering that true values are not available at every time step in an online learning algorithm for multi-step-ahead (MSA) forecasts, an MSA reinforced real-time recurrent learning algorithm for recurrent neural networks (R-RTRL NN) is proposed. The main merit of the proposed method is that it repeatedly adjusts model parameters with current information, including the latest observed values and the model's outputs, to enhance the reliability and forecast accuracy of the method. The sequential formulation of the R-RTRL NN is derived. To demonstrate its reliability and effectiveness, the proposed R-RTRL NN is implemented to make 2-, 4- and 6-step-ahead forecasts for a famous benchmark chaotic time series and a reservoir flood inflow series in northern Taiwan. For comparison, three neural networks (two dynamic and one static) were implemented. Numerical and experimental results indicate that the R-RTRL NN not only achieves superior performance to the comparative networks but also significantly improves the precision of MSA forecasts for both the chaotic time series and the reservoir inflow case during typhoon events, effectively mitigating the time-lag problem.
Neural mediator of the schizotypy-antisocial behavior relationship.
Lam, B Y H; Yang, Y; Raine, A; Lee, T M C
2015-11-03
Prior studies have established that schizotypal personality traits (schizotypy) are associated with antisocial behavior (crime), but it is unclear what neural factors mediate this relationship. This study assessed the mediating effect of sub-regional prefrontal gray matter, specifically orbitofrontal gray matter volume, on the schizotypy-antisocial behavior relationship. Five prefrontal sub-regional (superior, middle, inferior, orbitofrontal and rectal gyral) gray matter volumes were assessed using structural magnetic resonance imaging in 90 adults from the community, together with schizotypy and antisocial behavior. Among the five prefrontal sub-regions, the orbitofrontal cortex (OFC) was the major region of interest in the present study. Mediation analyses showed that orbitofrontal gray matter fully mediated the association between schizotypy and antisocial behavior. After controlling for sex, age, socio-economic status, whole-brain volume and substance abuse/dependence, orbitofrontal gray matter still significantly mediated the effect of schizotypy on antisocial behavior, accounting for 53.5% of the effect. These findings are the first to document a neural mediator of the schizotypy-antisocial behavior relationship. They also suggest that functions subserved by the OFC, including impulse control and inhibition, emotion processing and decision-making, may contribute to the above comorbidity.
A Multilayer Recurrent Fuzzy Neural Network for Accurate Dynamic System Modeling
LIU He; HUANG Dao
2008-01-01
A multilayer recurrent fuzzy neural network (MRFNN) is proposed for accurate dynamic system modeling. The proposed MRFNN has six layers combined with a T-S fuzzy model. The recurrent structures are formed by local feedback connections in the membership layer and the rule layer. With these feedbacks, the fuzzy sets are time-varying and the temporal problem of dynamic systems can be solved well. The parameters of the MRFNN are learned by chaotic search (CS) and least square estimation (LSE) simultaneously, where CS tunes the premise parameters and LSE updates the consequent coefficients accordingly. Simulation results show the proposed approach is effective for dynamic system modeling with high accuracy.
REN Shou-xin; GAO Ling
2004-01-01
This paper covers a novel method named wavelet packet transform based Elman recurrent neural network (WPTERNN) for the simultaneous kinetic determination of periodate and iodate. The wavelet packet representations of signals provide a local time-frequency description; thus, in the wavelet packet domain, the quality of the noise removal can be improved. The Elman recurrent network was applied to non-linear multivariate calibration. In this case, by means of optimization, the wavelet function, decomposition level and number of hidden nodes for the WPTERNN method were selected as D4, 5 and 5, respectively. A program, PWPTERNN, was designed to perform multicomponent kinetic determination. The relative standard error of prediction (RSEP) for all the components with WPTERNN, Elman RNN and PLS was 3.23%, 11.8% and 10.9%, respectively. The experimental results show that the method is better than the others.
Identification of Jets Containing b-Hadrons with Recurrent Neural Networks at the ATLAS Experiment
CERN. Geneva
2017-01-01
A novel b-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS Experiment. This talk presents the expected performance of the RNN based b-tagging in simulated $t\bar{t}$ events. The RNN based b-tagging processes properties of tracks associated with jets, which are represented as sequences. In contrast to traditional impact-parameter-based b-tagging algorithms, which assume the tracks of a jet are independent of each other, RNN based b-tagging can exploit the spatial and kinematic correlations of tracks originating from the same b-hadron. The neural network nature of the tagging algorithm also allows the flexibility of extending the input features to include more track properties than can be effectively used in traditional algorithms.
Object class segmentation of RGB-D video using recurrent convolutional neural networks.
Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven
2017-04-01
Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and take advantage of local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results.
Chen, Qihong; Long, Rong; Quan, Shuhai; Zhang, Liyan
2014-01-01
This paper presents a neural network predictive control strategy to optimize power distribution for a fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system using a time-variant auto-regressive moving average with exogenous input (ARMAX) model, with a recurrent neural network representing the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed as operating-state-dependent, time-varying, locally linear behavior in this framework, a linear constrained model predictive control algorithm is developed to optimize the power split between the fuel cell and the ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuation of the fuel cell current. Experiment and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and the ultracapacitor and limit the rate of change of the fuel cell current, so as to extend the lifetime of the fuel cell.
Continuous attractors of Lotka-Volterra recurrent neural networks with infinite neurons.
Yu, Jiali; Yi, Zhang; Zhou, Jiliu
2010-10-01
Continuous attractors of Lotka-Volterra recurrent neural networks (LV RNNs) with infinite neurons are studied in this brief. A continuous attractor is a collection of connected equilibria, and it has been recognized as a suitable model for describing the encoding of continuous stimuli in neural networks. The existence of continuous attractors depends on many factors, such as the connectivity and the external inputs of the network. A continuous attractor can be stable or unstable. It is shown in this brief that an LV RNN can possess multiple continuous attractors if the synaptic connections and the external inputs are Gaussian-like in shape. Moreover, both stable and unstable continuous attractors can coexist in a network. Explicit expressions of the continuous attractors are calculated. Simulations are employed to illustrate the theory.
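A commonly used form of the LV RNN dynamics is dx_i/dt = x_i(-x_i + Σ_j w_ij x_j + h_i) with nonnegative states. Assuming that form and illustrative Gaussian-like weights and inputs (on a small positive baseline so the example is well-conditioned; the paper's actual parameter regime is not reproduced here), a simple Euler integration shows the state settling onto an equilibrium while staying positive:

```python
import math

n = 21                        # discretize the (infinite) line of neurons
c0 = n // 2
# Gaussian-like external input on a positive baseline, and Gaussian-like weights.
h = [0.2 + math.exp(-((i - c0) ** 2) / 18.0) for i in range(n)]
w = [[0.05 * math.exp(-((i - j) ** 2) / 8.0) for j in range(n)] for i in range(n)]

x = [0.1] * n                 # positive initial state
dt = 0.01
for _ in range(20000):
    # Euler step of dx_i/dt = x_i * (-x_i + sum_j w_ij x_j + h_i)
    x = [xi + dt * xi * (-xi + sum(w[i][j] * x[j] for j in range(n)) + h[i])
         for i, xi in enumerate(x)]

residual = max(abs(xi * (-xi + sum(w[i][j] * x[j] for j in range(n)) + h[i]))
               for i, xi in enumerate(x))
print(residual, min(x))  # residual ~ 0 at an equilibrium; states stay positive
```

The multiplicative x_i factor in the right-hand side is what keeps the Lotka-Volterra states nonnegative: a state can approach zero but never cross it.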
Anonymous
2006-01-01
The recurrent neural network (RNN) model based on a projection operator was studied. Unlike previous studies, the value region of the projection operator in the neural network considered here is a general closed convex subset of n-dimensional Euclidean space and is not necessarily compact; that is, the value region of the projection operator may be unbounded. It was proved that the network has a global solution and that its solution trajectory converges to some equilibrium set whenever the objective function satisfies certain conditions. The model was then applied to continuously differentiable optimization and to nonlinear or implicit complementarity problems. In addition, simulation experiments confirm the efficiency of the RNN.
A TWO-LAYER RECURRENT NEURAL NETWORK BASED APPROACH FOR OVERLAY MULTICAST
Liu Shidong; Zhang Shunyi; Zhou Jinquan; Qiu Gong'an
2008-01-01
Overlay multicast has become one of the most promising multicast solutions for IP networks, and the Neural Network (NN) has been a good candidate for searching for optimal solutions to the constrained shortest-path routing problem by virtue of its powerful capacity for parallel computation. Though the traditional Hopfield NN can tackle the optimization problem, it is incapable of dealing with large-scale networks due to the large number of neurons required. In this paper, a neural network for overlay multicast tree computation is presented to reliably implement the routing algorithm in real time. The neural network is constructed as a two-layer recurrent architecture, comprising Independent Variable Neurons (IDVN) and Dependent Variable Neurons (DVN), according to the independence of the decision variables associated with the edges in the directed graph. Compared with heuristic routing algorithms, it is characterized by shorter computation time, fewer neurons, and better precision.
Prenatal Diagnosis, Fetal Surgery, Recurrence Risk and Differential Diagnosis of Neural Tube Defects
Chih-Ping Chen
2008-09-01
Prenatal screening with α-fetoprotein (AFP) and ultrasonography has allowed the prenatal diagnosis of neural tube defects (NTDs) in current obstetric care, and open spina bifida has been considered a potential candidate for in utero treatment in modern pediatric surgery. This article provides an overview of maternal serum AFP screening, amniotic fluid AFP assays, amniotic fluid acetylcholinesterase immunoassays and level II ultrasound for NTDs, prenatal repair of fetal myelomeningocele, the recurrence risk of NTDs, and the differential diagnosis of NTDs on prenatal ultrasound.
Chaos control and synchronization, with input saturation, via recurrent neural networks.
Sanchez, Edgar N; Ricalde, Luis J
2003-01-01
This paper deals with the adaptive tracking problem for non-linear systems in the presence of unknown parameters, unmodelled dynamics and input saturation. A high-order recurrent neural network is used to identify the unknown system, and a learning law is obtained using the Lyapunov methodology. Then a stabilizing control law for the reference tracking error dynamics is developed using the Lyapunov methodology and the Sontag control law. Tracking error boundedness is established as a function of a design parameter. The new approach is illustrated by examples of complex dynamical systems: chaos control and synchronization.
Recurrent Neural Networks for Polyphonic Sound Event Detection in Real Life Recordings
Parascandolo, Giambattista; Huttunen, Heikki; Virtanen, Tuomas
2016-01-01
In this paper we present an approach to polyphonic sound event detection in real life recordings based on bi-directional long short term memory (BLSTM) recurrent neural networks (RNNs). A single multilabel BLSTM RNN is trained to map acoustic features of a mixture signal consisting of sounds from multiple classes, to binary activity indicators of each event class. Our method is tested on a large database of real-life recordings, with 61 classes (e.g. music, car, speech) from 10 different ever...
Hu, Xiaolin; Zhang, Bo
2009-12-01
There exist many recurrent neural networks for solving optimization-related problems. In this paper, we present a method for deriving such networks from existing ones by changing connections between computing blocks. Although the dynamic systems may become much different, some distinguished properties may be retained. One example is discussed to solve variational inequalities and related optimization problems with mixed linear and nonlinear constraints. A new network is obtained from two classical models by this means, and its performance is comparable to its predecessors. Thus, an alternative choice for circuits implementation is offered to accomplish such computing tasks.
A generalized LSTM-like training algorithm for second-order recurrent neural networks
Monner, Derek; Reggia, James A.
2011-01-01
The Long Short Term Memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM's original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the Generalized Long Short-Term Memory (LSTM-g) trainin...
Guodong Zhang; Yi Shen; Quan Yin; Junwei Sun
2015-01-01
In this paper, based on the knowledge of memristor and recurrent neural networks (RNNs), the model of the memristor-based RNNs with discrete and distributed delays is established. By constructing proper Lyapunov functionals and using inequality technique, several sufficient conditions are given to ensure the passivity of the memristor-based RNNs with discrete and distributed delays in the sense of Filippov solutions. The passivity conditions here are presented in terms of linear matrix inequalities, which can be easily solved by using Matlab Tools. In addition, the results of this paper complement and extend the earlier publications. Finally, numerical simulations are employed to illustrate the effectiveness of the obtained results.
Symmetric sequence processing in a recurrent neural network model with a synchronous dynamics
Metz, F L; Theumann, W K [Instituto de Fisica, Universidade Federal do Rio Grande do Sul, Caixa Postal 15051, 91501-970 Porto Alegre (Brazil)], E-mail: fernando@itf.fys.kuleuven.be, E-mail: theumann@if.ufrgs.br
2009-09-25
The synchronous dynamics and the stationary states of a recurrent attractor neural network model with competing synapses between symmetric sequence processing and Hebbian pattern reconstruction are studied in this work allowing for the presence of a self-interaction for each unit. Phase diagrams of stationary states are obtained exhibiting phases of retrieval, symmetric and period-two cyclic states as well as correlated and frozen-in states, in the absence of noise. The frozen-in states are destabilized by synaptic noise and well-separated regions of correlated and cyclic states are obtained. Excitatory or inhibitory self-interactions yield enlarged phases of fixed-point or cyclic behaviour.
Adaptive recurrent neural network control of uncertain constrained nonholonomic mobile manipulators
Wang, Z. P.; Zhou, T.; Mao, Y.; Chen, Q. J.
2014-02-01
In this article, the motion/force control problem of a class of constrained mobile manipulators with unknown dynamics is considered. The system is subject to both holonomic and nonholonomic constraints. An adaptive recurrent neural network controller is proposed to deal with the unmodelled system dynamics. The proposed control strategy guarantees that the system motion asymptotically converges to the desired manifold while the constraint force remains bounded. In addition, an adaptive method is proposed to identify the contact surface. Simulation studies are carried out to verify the validity of the proposed approach.
Using recurrent neural networks to optimize dynamical decoupling for quantum memory
August, Moritz; Ni, Xiaotong
2017-01-01
We utilize machine learning models that are based on recurrent neural networks to optimize dynamical decoupling (DD) sequences. Dynamical decoupling is a relatively simple technique for suppressing the errors in quantum memory for certain noise models. In numerical simulations, we show that with minimum use of prior knowledge and starting from random sequences, the models are able to improve over time and eventually output DD sequences with performance better than that of the well known DD families. Furthermore, our algorithm is easy to implement in experiments to find solutions tailored to the specific hardware, as it treats the figure of merit as a black box.
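The RNN optimizer itself is beyond a short sketch, but the object it optimizes is easy to demonstrate: the simplest member of the DD family, the Hahn echo, already cancels a static dephasing error. The sketch below (illustrative parameters, single qubit, hbar = 1, not the paper's setup) compares memory fidelity with and without the refocusing pulse.

```python
import cmath

def mat_vec(m, v):
    """Apply a 2x2 complex matrix to a 2-component state vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1]]

def free(delta, t):
    """Free evolution under H = (delta/2) * sigma_z."""
    return [[cmath.exp(-1j * delta * t / 2), 0], [0, cmath.exp(1j * delta * t / 2)]]

X = [[0, 1], [1, 0]]                   # instantaneous pi pulse about x
plus = [1 / 2 ** 0.5, 1 / 2 ** 0.5]    # |+>, the stored superposition

def fidelity(state):
    amp = (state[0] + state[1]) / 2 ** 0.5   # <+|state>
    return abs(amp) ** 2

delta, T = 1.0, cmath.pi   # unknown static detuning, total storage time

# No decoupling: the accumulated phase delta*T scrambles the memory.
free_fid = fidelity(mat_vec(free(delta, T), plus))

# Hahn echo: one X pulse at T/2 refocuses the unknown phase exactly.
echoed = mat_vec(free(delta, T / 2), mat_vec(X, mat_vec(free(delta, T / 2), plus)))
echo_fid = fidelity(echoed)
print(free_fid, echo_fid)
```

Richer DD families (CPMG, XY4, UDD and the sequences the RNN discovers) generalize this cancellation to time-dependent and multi-axis noise, which is where a learned sequence can beat the hand-designed ones.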
Johanns, Tanner M; Law, Calvin Y; Kalekar, Lokeshchandra A; O'Donnell, Hope; Ertelt, James M; Rowe, Jared H; Way, Sing Sing
2011-04-01
Typhoid fever is a systemic, persistent infection caused by host-specific strains of Salmonella. Although the use of antibiotics has reduced the complications associated with primary infection, recurrent infection remains an important cause of ongoing human morbidity and mortality. Herein, we investigated the impacts of antibiotic eradication of primary infection on protection against secondary recurrent infection. Using a murine model of persistent Salmonella infection, we demonstrate protection against recurrent infection is sustained despite early eradication of primary infection. In this model, protection is not mediated by CD4(+) or CD8(+) T cells because depletion of these cells either alone or in combination prior to rechallenge does not abrogate protection. Instead, infection followed by antibiotic-mediated clearance primes robust levels of Salmonella-specific antibody that can adoptively transfer protection to naïve mice. Thus, eradication of persistent Salmonella infection primes antibody-mediated protective immunity to recurrent infection.
Liu, Q; Wang, J
2008-04-01
In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.
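The paper's discontinuous hard-limiting network is not reproduced here; instead, a closely related continuous recurrent dynamics, the primal-dual gradient flow, illustrates how a neural dynamical system can settle on the solution of an equality-constrained QP (problem data chosen for illustration):

```python
# Solve min 1/2 x^T Q x + c^T x  s.t.  A x = b  by integrating the recurrent
# primal-dual gradient flow:  x' = -(Q x + c + A^T lam),  lam' = A x - b.
Q = [[2.0, 0.0], [0.0, 2.0]]   # strictly convex objective
c = [-2.0, -4.0]
A = [1.0, 1.0]                  # single equality constraint x1 + x2 = 1
b = 1.0

x = [0.0, 0.0]
lam = 0.0
dt = 0.01
for _ in range(20000):
    g = [Q[0][0] * x[0] + Q[0][1] * x[1] + c[0] + A[0] * lam,
         Q[1][0] * x[0] + Q[1][1] * x[1] + c[1] + A[1] * lam]
    r = A[0] * x[0] + A[1] * x[1] - b
    x = [x[0] - dt * g[0], x[1] - dt * g[1]]
    lam = lam + dt * r
print(x, lam)  # settles at the KKT point of the QP
```

For this problem the KKT conditions give x* = (0, 1) with multiplier lam* = 2, and the integrated state converges there; strict convexity of Q is what makes the flow globally stable, mirroring the convexity condition in the paper's stability result.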
The super-Turing computational power of plastic recurrent neural networks.
Cabessa, Jérémie; Siegelmann, Hava T
2014-12-01
We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the super-Turing computational power of the static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.
Modeling the motor cortex: Optimality, recurrent neural networks, and spatial dynamics.
Tanaka, Hirokazu
2016-03-01
Specialization of motor function in the frontal lobe was first discovered in the seminal experiments by Fritsch and Hitzig and subsequently by Ferrier in the 19th century. It is, however, ironic that the functional and computational role of the motor cortex still remains unresolved. A computational understanding of the motor cortex amounts to understanding what movement variables the motor neurons represent (the movement representation problem) and how such movement variables are computed through interaction with anatomically connected areas (the neural computation problem). Electrophysiological experiments in the 20th century demonstrated that neural activities in motor cortex correlated with a number of motor-related and cognitive variables, thereby igniting the controversy over movement representations in motor cortex. Despite substantial experimental efforts, the overwhelming complexity found in neural activities has impeded our understanding of how movements are represented in the motor cortex. Recent progress in computational modeling has rekindled this controversy in the 21st century. Here, I review the recent developments in computational models of the motor cortex, with a focus on optimality models, recurrent neural network models, and spatial dynamics models. Although individual models provide consistent pictures within their domains, our current understanding of the functions of the motor cortex is still fragmented.
Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.
Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei
2016-02-01
A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.
Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate
2015-01-01
Walking animals, like insects, can effectively perform complex behaviors with little neural computation. For example, they can walk around their environment, escape from corners or deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with unknown situations. As a consequence, they successfully navigate through their complex environment. These versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot with many degrees of freedom (DOFs) is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle to avoid obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles.
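The hysteresis effect that such a small recurrent network exploits can be reproduced with an even smaller sketch: a single neuron with an excitatory self-connection (the self-weight and sweep range below are illustrative values, not parameters from the study). Sweeping the input up and then back down leaves the neuron on different attractor branches at the same input value, which is the short-term memory used for steering:

```python
import numpy as np

def settle(u, x, w=2.0, steps=100):
    """Iterate the one-neuron recurrence x <- tanh(w*x + u) to (near) steady state."""
    for _ in range(steps):
        x = np.tanh(w * x + u)
    return x

us = np.linspace(-1.0, 1.0, 41)

x, up = settle(us[0], 0.0), []
for u in us:                      # sweep the input upward, carrying the state along
    x = settle(u, x)
    up.append(x)

x, down = settle(us[-1], 0.0), []
for u in us[::-1]:                # then sweep back down
    x = settle(u, x)
    down.append(x)
down = down[::-1]

# At u = 0 (index 20) the two sweeps sit on different attractor branches:
# the network "remembers" which direction the input came from.
```

With self-weight 2 the neuron is bistable for a band of inputs around zero, so the up-sweep stays on the negative branch while the down-sweep stays on the positive one.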
Neural mediators of the intergenerational transmission of family aggression
Saxbe, Darby; Del Piero, Larissa Borofsky; Immordino-Yang, Mary Helen; Kaplan, Jonas Todd; Margolin, Gayla
2015-01-01
Youth exposed to family aggression may become more aggressive themselves, but the mechanisms of intergenerational transmission are understudied. In a longitudinal study, we found that adolescents’ reduced neural activation when rating their parents’ emotions, assessed via magnetic resonance imaging, mediated the association between parents’ past aggression and adolescents’ subsequent aggressive behavior toward parents. A subsample of 21 youth, drawn from the larger study, underwent magnetic r...
Luis A. Vázquez
2015-01-01
A decentralized recurrent wavelet first-order neural network (RWFONN) structure is presented. The use of a Morlet wavelet activation function allows proposing a continuous-time neural structure with a single layer and a single neuron in order to identify online, in a series-parallel configuration using the filtered-error (FE) training algorithm, the dynamic behavior of each joint of a two-degree-of-freedom (DOF) vertical robot manipulator whose parameters, such as friction and inertia, are unknown. Based on the RWFONN subsystem, a decentralized neural controller is designed via the backstepping approach. The performance of the decentralized wavelet neural controller is validated via real-time results.
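The Morlet wavelet activation mentioned above has a simple closed form; a common real-valued version (the center frequency of 5 is a conventional choice, not necessarily the one used in the paper) is:

```python
import numpy as np

def morlet(x):
    """Real-valued Morlet wavelet: a cosine carrier under a Gaussian envelope.
    Unlike a sigmoid, it is localized in its input, which is what makes
    wavelet neurons good local approximators of a joint's dynamics."""
    return np.cos(5.0 * x) * np.exp(-0.5 * x ** 2)
```

The Gaussian envelope makes the activation vanish away from its center, so each wavelet neuron responds only to a local region of the state space.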
Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.
D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K
2014-04-17
When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.
2017-02-01
A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost-effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions, i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which, owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.
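The NARX regressor structure (delayed outputs and delayed exogenous inputs feeding the predictor) can be illustrated on a toy plant; the engine model itself is of course nonlinear and recurrent, and the one-lag linear readout below is purely expository, with invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(300)          # exogenous input (crank kinematics in the paper)
y = np.zeros(300)                     # output to reconstruct (cylinder pressure)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 1.0 * u[t - 1]   # toy linear plant with known coefficients

# NARX regressors: each row holds the delayed output and the delayed input
X = np.column_stack([y[:-1], u[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # linear readout recovers the plant
```

In the paper the linear readout is replaced by a neural network trained recurrently (RAGD), but the tapped-delay-line input structure is the same.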
Role of Expression of Inflammatory Mediators in Primary and Recurrent Lumbar Disc Herniation
Dagistan, Yasar; Cukur, Selma; Dagistan, Emine; Gezici, Ali Riza
2017-01-01
Objective To assess the role of some inflammatory mediators in patients with primary and recurrent lumbar disc herniation. Expression of IL-6, transforming growth factor (TGF)-1, insulin-like growth factor (IGF)-1, and Bcl-2-associated X protein (BAX) has been shown to be more intense in the primary group than in the recurrent group, but these mediators may be important prognostic parameters. Methods 19 patients underwent primary and revision operations between June 1, 2009 and June 1, 2014, and they were included in this study. The 19 patients' intervertebral disc specimens obtained from the primary procedures and reoperations were evaluated. Expression of IL-6, TGF-1, IGF-1, and BAX was examined immunohistochemically in the 38 biopsy tissues obtained from the primary and recurrent herniated intervertebral discs during the operations. Results For IL-6 expression in the intervertebral disc specimens, there was no difference between the groups. The immunohistochemical study showed that the intervertebral disc specimens in the primary group were stained intensely by TGF-1 compared with the recurrent group. Expression of IGF-1 in the primary group was moderate; in contrast, expression of IGF-1 in the recurrent group was mild. The primary group intervertebral disc specimens were stained moderately by BAX compared with the recurrent group. Conclusion The results of our prognostic evaluation of patients in the recurrent group who were operated on due to disc herniation suggest that these mediators may be important parameters. PMID:28061491
Rao, Mukta; Dhaka, Vijaypal Singh
2010-01-01
The associative memory feature of the Hopfield-type recurrent neural network is used for pattern storage and pattern authentication. This paper outlines an optimization relaxation approach for signature verification based on the Hopfield neural network (HNN), which is a recurrent network. The standard sample signature of the customer is cross-matched with the one supplied on the cheque. The difference percentage is obtained by counting the differing pixels in the two images. The network topology is built so that each pixel in the difference image is a neuron in the network. Each neuron is characterized by its state, which in turn signifies whether the particular pixel has changed. The network converges to a stable condition based on an energy function derived in the experiments. Hopfield's model allows each node to take on two binary state values (changed/unchanged) for each pixel. The performance of the proposed technique is evaluated by applying it to various binary and gray-scale images. This paper con...
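A minimal sketch of the Hopfield dynamics that such a verification scheme relies on (Hebbian storage, ±1 states, and asynchronous updates are the textbook formulation; the paper's pixel-difference encoding is not reproduced, and the 64-pixel pattern below is invented):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: sum of outer products of the +/-1 patterns, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Hopfield energy function; it never increases under asynchronous updates."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=20):
    """Asynchronous updates until the state stops changing (a stable attractor)."""
    s = s.copy()
    for _ in range(sweeps):
        prev = s.copy()
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
        if np.array_equal(s, prev):
            return s
    return s

rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=64)   # stand-in for a stored signature image
W = train_hopfield(pattern[None, :])

noisy = pattern.copy()
noisy[:6] *= -1.0                            # corrupt six "pixels"
restored = recall(W, noisy)
```

The corrupted state relaxes back to the stored pattern while the energy decreases, which is the convergence-to-a-stable-condition behavior the abstract describes.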
Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network.
Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Yu, Junzhi; Tan, Min
2016-02-01
The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by shape theory, which can generate a set of feasible formations that share the same relative relation among robots. An optimal formation means finding the formation from the feasible set that has the minimum distance to the initial formation of the multirobot system. The formation problem is thereby transformed into an optimization problem. In addition, the orientation, scale, and admissible range of the formation can also be considered as constraints in the optimization problem. Furthermore, if all robots are identical, their positions in the system are exchangeable, and each robot does not necessarily move to one specific position in the formation. In this case, the optimal formation problem becomes a combinatorial optimization problem, whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinatorial optimization problem can be approximately transformed into a convex optimization problem. Due to the involvement of the Euclidean norm in the distance, the objective functions of these optimization problems are nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.
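Recurrent-network optimization solvers of this kind are, in continuous time, projection dynamics. A minimal discrete sketch for a toy box-constrained least-distance problem (the formation-specific constraints and penalty terms of the paper are omitted; the target point and box below are invented):

```python
import numpy as np

def projection_network(target, lo, hi, dt=0.1, steps=500):
    """Projection neural network dx/dt = -x + P(x - grad f(x)) for
    f(x) = ||x - target||^2 over the box [lo, hi], integrated by Euler steps.
    Its equilibrium is the point of the box closest to `target`."""
    x = np.zeros_like(target)
    for _ in range(steps):
        grad = 2.0 * (x - target)
        x = x + dt * (-x + np.clip(x - grad, lo, hi))
    return x

target = np.array([2.0, -3.0, 0.5])
x_star = projection_network(target, lo=-1.0, hi=1.0)
```

Each "neuron" integrates its own coordinate in parallel, which is the parallel-computation appeal the abstract mentions; here the equilibrium is simply the clipped target.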
Huang, Yu-Jiao; Hu, Hai-Gen
2015-12-01
In this paper, the multistability issue is discussed for delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions. Based on a fixed-point theorem and a stability definition, sufficient criteria are established for the existence and stability of multiple equilibria of complex-valued recurrent neural networks. The number of stable equilibria is larger than that of real-valued recurrent neural networks, which can be used to achieve high-capacity associative memories. A numerical example is provided to show the effectiveness and superiority of the presented results. Project supported by the National Natural Science Foundation of China (Grant Nos. 61374094 and 61503338) and the Natural Science Foundation of Zhejiang Province, China (Grant No. LQ15F030005).
Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks.
Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R
2015-10-01
The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.
He Shichun; He Zhenya
1997-01-01
This paper investigates the application of a Recurrent Wavelet Neural Network (RWNN) to the blind equalization of nonlinear communication channels. In contrast to previously introduced wavelet networks, the RWNN is well suited for use in real-time adaptive signal processing. Furthermore, the RWNN has the advantage that a priori information about the underlying system need not be known: the dynamics of the system are captured in the recurrent connections, and the network approximates the system over time. An RWNN-based structure and a novel training approach for blind equalization are proposed, and their performance is evaluated via computer simulations for a nonlinear communication channel model. It is shown that the RWNN blind equalizer performs much better than blind equalizers based on the linear Constant Modulus Algorithm (CMA) and on Recurrent Radial Basis Function (RRBF) networks in the nonlinear channel case. The small size and high performance of the RWNN equalizer make it suitable for high-speed channel blind equalization.
A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.
Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo
2015-08-01
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
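One plausible reading of the three-threshold update described above can be sketched as follows; this is schematic only (the exact rule in the paper may condition on additional quantities not stated in the abstract, and the thresholds, learning rate, and tiny network below are invented for illustration):

```python
import numpy as np

def three_threshold_update(W, pattern, th_low, th_mid, th_high, lr=0.1):
    """Schematic three-threshold plasticity step. For each neuron, the local
    field under the clamped pattern is compared with three thresholds:
    below th_low or above th_high nothing changes; in between, synapses from
    active inputs are potentiated if the field exceeds th_mid and depressed
    otherwise. (Illustrative values; see the paper for the exact rule.)"""
    fields = W @ pattern
    active = pattern > 0
    for i, h in enumerate(fields):
        if th_low < h < th_mid:
            W[i, active] -= lr          # depression band
        elif th_mid <= h < th_high:
            W[i, active] += lr          # potentiation band
        W[i, i] = 0.0                   # no self-connections
    return W

W = np.zeros((4, 4))
pattern = np.array([1.0, 1.0, 0.0, 0.0])
W = three_threshold_update(W, pattern, th_low=-1.0, th_mid=0.5, th_high=2.0)
```

Only synapses from active inputs are touched, and the two outer thresholds create the no-plasticity zones the abstract describes.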
Reward-based training of recurrent neural networks for cognitive and value-based tasks
Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing
2017-01-01
Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task. DOI: http://dx.doi.org/10.7554/eLife.21492.001 PMID:28084991
Speed Control of BLDC Motor Based on Recurrent Wavelet Neural Network
Adel A. Obed
2014-12-01
In recent years, artificial intelligence techniques such as wavelet neural networks have been applied to control the speed of BLDC motor drives. The BLDC motor is a multivariable and nonlinear system due to variations in stator resistance and moment of inertia; therefore, it is not easy to obtain good performance with a conventional PID controller. In this paper, a Recurrent Wavelet Neural Network (RWNN) is proposed in parallel with a PID controller to produce a modified controller, called the RWNN-PID controller, which combines the ability of artificial neural networks to learn from the BLDC motor drive with the capability of wavelet decomposition for identification and control of dynamic systems, while also having the ability to self-learn and self-adapt. The proposed controller is applied to controlling the speed of a BLDC motor and provides better performance than conventional controllers over a wide range of speeds. The parameters of the proposed controller are optimized using the Particle Swarm Optimization (PSO) algorithm. Simulation results show that the BLDC motor drive with the RWNN-PID controller achieves better performance and stability compared with conventional PID and classical WNN-PID controllers.
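The PSO step used for gain tuning can be sketched generically (the inertia and acceleration coefficients below are common defaults, not the paper's settings; in the paper the cost would score the closed-loop speed response, whereas a simple quadratic stands in here):

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm: inertia plus cognitive/social pulls toward the
    personal-best and global-best positions found so far."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

# Toy cost standing in for a closed-loop performance index over two gains
best = pso(lambda p: float(np.sum(p ** 2)))
```

For controller tuning, `f` would simulate the drive with the candidate gains and return a tracking-error measure; the swarm mechanics are unchanged.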
R. Selva Santhose Kumar
2014-06-01
In this study, a Particle Swarm Optimization (PSO) and Recurrent Neural Network (RNN) based Z-source inverter fed induction motor drive is proposed. The proposed method is used to enhance the performance of the induction motor while reducing the Total Harmonic Distortion (THD) and eliminating the oscillation period of the stator current, torque, and speed. Here, the PSO technique uses the induction motor speed and the reference speed as input parameters. From these inputs, it optimizes the gain of the PI controller and generates the reference quadrature-axis current. Using the RNN, the reference three-phase current for accurate control pulses of the voltage source inverter is predicted. The RNN is trained with the motor's actual quadrature-axis current and the reference quadrature-axis current as inputs, and the corresponding reference three-phase current as the target, using supervised learning. The proposed technique is implemented in the MATLAB/Simulink platform, and its effectiveness is analyzed by comparison with other techniques such as PSO-Radial Basis Neural Network (RBNN) and PSO-Artificial Neural Network (ANN). The comparison results demonstrate the superiority of the proposed approach and confirm its potential to solve the problem.
Manjunath, G; Jaeger, H
2013-03-01
The echo state property is a key for the design and training of recurrent neural networks within the paradigm of reservoir computing. In intuitive terms, this is a passivity condition: a network having this property, when driven by an input signal, will become entrained by the input and develop an internal response signal. This excited internal dynamics can be seen as a high-dimensional, nonlinear, unique transform of the input with a rich memory content. This view has implications for understanding neural dynamics beyond the field of reservoir computing. Available definitions and theorems concerning the echo state property, however, are of little practical use because they do not relate the network response to temporal or statistical properties of the driving input. Here we present a new definition of the echo state property that directly connects it to such properties. We derive a fundamental 0-1 law: if the input comes from an ergodic source, the network response has the echo state property with probability one or zero, independent of the given network. Furthermore, we give a sufficient condition for the echo state property that connects statistical characteristics of the input to algebraic properties of the network connection matrix. The mathematical methods that we employ are freshly imported from the young field of nonautonomous dynamical systems theory. Since these methods are not yet well known in neural computation research, we introduce them in some detail. As a side story, we hope to demonstrate the eminent usefulness of these methods.
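The entrainment property described here is easy to demonstrate numerically. In the usual echo-state-network recipe (tanh reservoir, random weights scaled to spectral radius below one; as the abstract notes, such a scaling is a heuristic rather than a guarantee, and the sizes below are arbitrary), two different initial states driven by the same input forget their initial conditions and converge to the same response:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
W = rng.standard_normal((n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9 (heuristic)
w_in = rng.standard_normal(n)

def run(x0, inputs):
    """Drive the reservoir x(t+1) = tanh(W x(t) + w_in u(t)) from state x0."""
    x = x0
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
    return x

u_seq = np.sin(0.3 * np.arange(200))              # one shared driving signal
x_a = run(rng.standard_normal(n), u_seq)
x_b = run(rng.standard_normal(n), u_seq)
gap = float(np.max(np.abs(x_a - x_b)))            # the two responses have merged
```

The vanishing gap is the "unique transform of the input" in the abstract: the response is determined by the input history, not the starting state.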
LRP2 mediates folate uptake in the developing neural tube.
Kur, Esther; Mecklenburg, Nora; Cabrera, Robert M; Willnow, Thomas E; Hammes, Annette
2014-05-15
The low-density lipoprotein (LDL) receptor-related protein 2 (LRP2) is a multifunctional cell-surface receptor expressed in the embryonic neuroepithelium. Loss of LRP2 in the developing murine central nervous system (CNS) causes impaired closure of the rostral neural tube at embryonic stage (E) 9.0. Similar neural tube defects (NTDs) have previously been attributed to impaired folate metabolism in mice. We therefore asked whether LRP2 might be required for the delivery of folate to neuroepithelial cells during neurulation. Uptake assays in whole-embryo cultures showed that LRP2-deficient neuroepithelial cells are unable to mediate the uptake of folate bound to soluble folate receptor 1 (sFOLR1). Consequently, folate concentrations are significantly reduced in Lrp2(-/-) embryos compared with control littermates. Moreover, the folic-acid-dependent gene Alx3 is significantly downregulated in Lrp2 mutants. In conclusion, we show that LRP2 is essential for cellular folate uptake in the developing neural tube, a crucial step for proper neural tube closure.
New delay-dependent criterion for the stability of recurrent neural networks with time-varying delay
ZHANG HuaGuang; WANG ZhanShan
2009-01-01
This paper is concerned with the global asymptotic stability of a class of recurrent neural networks with interval time-varying delay. By constructing a suitable Lyapunov functional, a new criterion is established to ensure the global asymptotic stability of the concerned neural networks; the criterion can be expressed in the form of a linear matrix inequality and is independent of the size of the derivative of the time-varying delay. Two numerical examples show the effectiveness of the obtained results.
Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy
2013-01-01
The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.
Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing
2016-02-01
The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, "trained" networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale's principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity patterns
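The excitatory-inhibitory constraint (Dale's principle) discussed above is commonly imposed by factoring the recurrent weights into trainable non-negative magnitudes and a fixed diagonal sign matrix. A minimal NumPy sketch of that parameterization follows; the network size and the 80/20 excitatory/inhibitory split are illustrative assumptions, not taken from the paper's Theano implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_exc = 10, 8  # assumed 80% excitatory, 20% inhibitory units

# Fixed diagonal sign matrix: +1 for excitatory, -1 for inhibitory units.
D = np.diag([1.0] * n_exc + [-1.0] * (n_units - n_exc))

# Gradient descent acts on the unconstrained W_raw; the effective recurrent
# weights are W = |W_raw| @ D, so each column's sign is fixed by its
# presynaptic unit, exactly as Dale's principle requires.
W_raw = rng.standard_normal((n_units, n_units))
W = np.abs(W_raw) @ D
```

In a real training loop the gradient flows through the rectification into `W_raw` while `D` stays fixed, so the constraint is preserved at every update.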
Suhartono Suhartono
2009-07-01
Neural networks (NN) are one of many methods used to predict hourly electricity consumption in many countries. The NN methods used in many previous studies are the feed-forward neural network (FFNN) or the autoregressive neural network (AR-NN). The AR-NN model is not able to capture and explain the effect of the moving average (MA) order on a time series. This research reviews the application of another type of NN, the Elman recurrent neural network (Elman-RNN), which can explain the MA order effect, and compares its prediction accuracy with multiple seasonal ARIMA (autoregressive integrated moving average) models. As a case study, we used hourly electricity consumption data from Mengare, Gresik. The analysis showed that the best double seasonal ARIMA model for short-term forecasting of the case study data is ARIMA([1,2,3,4,6,7,9,10,14,21,33],1,8)(0,1,1)^24(1,1,0)^168. This model produces white-noise residuals, but they are not normally distributed, suggesting outliers; iterative outlier detection found 14 innovation outliers. Four Elman-RNN input configurations were examined and tested for forecasting the data: the ARIMA lags; the ARIMA lags plus 14 outlier dummies; the multiples of lag 24 up to lag 480; and lag 1 plus the multiples of lag 24 plus 1. All four networks use one hidden layer with a tangent-sigmoid activation function and one output with a linear function. Comparison of forecast accuracy via out-of-sample MAPE showed that the fourth network, Elman-RNN(22,3,1), is the best model for short-term forecasting of hourly electricity consumption in Mengare, Gresik.
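The Elman architecture compared above (one hidden layer of tangent-sigmoid units feeding back as context, one linear output) can be sketched in a few lines. The 22-3-1 dimensions match the Elman-RNN(22,3,1) model mentioned, but the weights here are random placeholders, not values fitted to the electricity data:

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out, b_h, b_y):
    """Elman RNN forward pass: h_t = tanh(W_in x_t + W_rec h_{t-1} + b_h)."""
    h = np.zeros(W_rec.shape[0])  # context units start at zero
    outputs = []
    for x_t in x_seq:
        h = np.tanh(W_in @ x_t + W_rec @ h + b_h)  # tangent-sigmoid hidden layer
        outputs.append(W_out @ h + b_y)            # linear output unit
    return np.array(outputs)

# Illustrative 22-3-1 network, as in the Elman-RNN(22,3,1) model above.
rng = np.random.default_rng(1)
W_in = 0.1 * rng.standard_normal((3, 22))
W_rec = 0.1 * rng.standard_normal((3, 3))
W_out = rng.standard_normal((1, 3))
b_h, b_y = np.zeros(3), np.zeros(1)
y = elman_forward(rng.standard_normal((5, 22)), W_in, W_rec, W_out, b_h, b_y)
```

The recurrent term `W_rec @ h` is what lets the network represent MA-like effects that a purely autoregressive feed-forward network cannot.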
Neural stem cell-derived exosomes mediate viral entry
Sims B
2014-10-01
Brian Sims,1,2,* Linlin Gu,3,* Alexandre Krendelchtchikov,3 Qiana L Matthews3,4 1Division of Neonatology, Department of Pediatrics, 2Department of Cell, Developmental, and Integrative Biology, 3Division of Infectious Diseases, Department of Medicine, 4Center for AIDS Research, University of Alabama at Birmingham, Birmingham, AL, USA *These authors contributed equally to this work Background: Viruses enter host cells through interactions of viral ligands with cellular receptors. Viruses can also enter cells in a receptor-independent fashion. Mechanisms regarding the receptor-independent viral entry into cells have not been fully elucidated. Exosomal trafficking between cells may offer a mechanism by which viruses can enter cells. Methods: To investigate the role of exosomes in cellular viral entry, we employed neural stem cell-derived exosomes and adenovirus type 5 (Ad5) for the proof-of-principle study. Results: Exosomes significantly enhanced Ad5 entry in Coxsackie virus and adenovirus receptor (CAR)-deficient cells, in which Ad5 only had very limited entry. The exosomes were shown to contain T-cell immunoglobulin mucin protein 4 (TIM-4), which binds phosphatidylserine. Treatment with anti-TIM-4 antibody significantly blocked the exosome-mediated Ad5 entry. Conclusion: Neural stem cell-derived exosomes mediated significant cellular entry of Ad5 in a receptor-independent fashion. This mediation may be hampered by an antibody specifically targeting TIM-4 on exosomes. This set of results will benefit further elucidation of virus/exosome pathways, which would contribute to reducing natural viral infection by developing therapeutic agents or vaccines. Keywords: neural stem cell-derived exosomes, adenovirus type 5, TIM-4, viral entry, phospholipids
Durstewitz, Daniel
2017-06-01
The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of relevant aspects
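The latent dynamics of a PLRNN combine a linear term with a rectified (ReLU) recurrent term plus process noise. A sketch of one generative step follows, using the common formulation z_t = A z_{t-1} + W φ(z_{t-1}) + h + ε_t with φ = max(0, ·); the dimensions and parameter values are illustrative, not estimated from data:

```python
import numpy as np

def plrnn_step(z, A, W, h, rng, noise_std=0.1):
    """One latent-state update of a piecewise-linear RNN (PLRNN)."""
    eps = noise_std * rng.standard_normal(z.shape)  # Gaussian process noise
    return A @ z + W @ np.maximum(z, 0.0) + h + eps

rng = np.random.default_rng(2)
d = 4
A = 0.9 * np.eye(d)                   # stable diagonal linear part
W = 0.1 * rng.standard_normal((d, d)) # nonlinear recurrent coupling
np.fill_diagonal(W, 0.0)              # off-diagonal coupling only
h = np.zeros(d)

z = rng.standard_normal(d)
traj = []
for _ in range(100):
    z = plrnn_step(z, A, W, h, rng)
    traj.append(z.copy())
traj = np.array(traj)
```

Because each orthant of the state space has its own effective linear map, the model stays analytically tractable while still expressing nonlinear dynamics, which is what makes the semi-analytical EM scheme above feasible.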
Tseng, Ting-Chen; Hsieh, Fu-Yu; Dai, Niann-Tzyy; Hsu, Shan-Hui
2016-09-01
Cell- and gene-based therapies have emerged as promising strategies for treating neurological diseases. The sources of neural stem cells are limited while the induced pluripotent stem (iPS) cells have risk of tumor formation. Here, we proposed the generation of self-renewable, multipotent, and neural lineage-related neural crest stem-like cells by chitosan substrate-mediated gene transfer of a single factor forkhead box D3 (FOXD3) for the use in neural repair. A simple, non-toxic, substrate-mediated method was applied to deliver the naked FOXD3 plasmid into human fibroblasts. The transfection of FOXD3 increased cell proliferation and up-regulated the neural crest marker genes (FOXD3, SOX2, and CD271), stemness marker genes (OCT4, NANOG, and SOX2), and neural lineage-related genes (Nestin, β-tubulin and GFAP). The expression levels of stemness marker genes and neural crest maker genes in the FOXD3-transfected fibroblasts were maintained until the fifth passage. The FOXD3 reprogrammed fibroblasts based on the new method significantly rescued the neural function of the impaired zebrafish. The chitosan substrate-mediated delivery of naked plasmid showed feasibility in reprogramming somatic cells. Particularly, the FOXD3 reprogrammed fibroblasts hold promise as an easily accessible cellular source with neural crest stem-like behavior for treating neural diseases in the future.
Laxmi V Yaliwal
2012-01-01
Methylenetetrahydrofolate reductase (MTHFR) gene mutations have been implicated as risk factors for neural tube defects (NTDs). The best-characterized MTHFR genetic mutation, 677C→T, is associated with a 2-4 fold increased risk of NTD if the patient is homozygous for this mutation. This risk factor is modulated by folate levels in the body. A second mutation in the MTHFR gene is an A→C transition at position 1298. The 1298A→C mutation is also a risk factor for NTD, but with a smaller relative risk than the 677C→T mutation. Under conditions of low folate intake or high folate requirements, such as pregnancy, this mutation could become of clinical importance. We present a case report of MTHFR genetic mutation in a patient who presented with recurrent familial pregnancy losses due to anencephaly/NTDs.
Identification of Jets Containing $b$-Hadrons with Recurrent Neural Networks at the ATLAS Experiment
The ATLAS collaboration
2017-01-01
A novel $b$-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS experiment at the CERN Large Hadron Collider. The RNN based $b$-tagging algorithm processes charged particle tracks associated to jets without reliance on secondary vertex finding, and can augment existing secondary-vertex based taggers. In contrast to traditional impact-parameter-based $b$-tagging algorithms which assume that tracks associated to jets are independent from each other, the RNN based $b$-tagging algorithm can exploit the spatial and kinematic correlations between tracks which are initiated from the same $b$-hadrons. This new approach also accommodates an extended set of input variables. This note presents the expected performance of the RNN based $b$-tagging algorithm in simulated $t \\bar t$ events at $\\sqrt{s}=13$ TeV.
Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks
Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong
2017-03-01
Constrained by the physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although they may benefit related recognition tasks, it is not easy to accurately detect such temporal segments. An automatic temporal segment detection framework using bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, which synthesizes the local and global temporal-spatial information more efficiently, is presented. The framework is evaluated in detail over the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for solving the problem of temporal segment detection.
Using LSTM recurrent neural networks for detecting anomalous behavior of LHC superconducting magnets
Wielgosz, Maciej; Mertik, Matej
2016-01-01
The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyses voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when misbehavior of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system and are updated and monitored by human operators. This paper proposes a novel approach to monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art deep learning algorithms. Consequently, the authors examine the performance of LSTM recurrent neural networks for anomaly detection in voltage time series of the magnets. In order to address this challenging task, different network architectures and hyper-parameters were used to achieve the best possible performance of the solution. The regre...
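The detection scheme described above — learn to predict the voltage series and flag samples with large prediction error — can be illustrated with a much simpler stand-in predictor. Everything below (a last-value predictor in place of the LSTM, a 5-sigma threshold, synthetic data with an injected fault) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(500)
signal = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)
signal[400:410] += 1.5  # injected "voltage anomaly"

# Stand-in for the trained predictor: predict each sample from its predecessor,
# then flag samples whose prediction error exceeds a threshold calibrated on
# an early, presumed-healthy stretch of the series.
pred_error = np.abs(signal[1:] - signal[:-1])
threshold = pred_error[:300].mean() + 5 * pred_error[:300].std()
anomalies = np.where(pred_error > threshold)[0] + 1  # sample indices flagged
```

With an LSTM the predictor captures longer-range structure, but the anomaly criterion — prediction residual versus a calibrated threshold — is the same.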
Convergence study in extended Kalman filter-based training of recurrent neural networks.
Wang, Xiaoyu; Huang, Yong
2011-04-01
Recurrent neural network (RNN) has emerged as a promising tool in modeling nonlinear dynamical systems, but the training convergence is still of concern. This paper aims to develop an effective extended Kalman filter-based RNN training approach with a controllable training convergence. The training convergence problem during extended Kalman filter-based RNN training has been proposed and studied by adapting two artificial training noise parameters: the covariance of measurement noise (R) and the covariance of process noise (Q) of Kalman filter. The R and Q adaption laws have been developed using the Lyapunov method and the maximum likelihood method, respectively. The effectiveness of the proposed adaption laws has been tested using a nonlinear dynamical benchmark system and further applied in cutting tool wear modeling. The results show that the R adaption law can effectively avoid the divergence problem and ensure the training convergence, whereas the Q adaption law helps improve the training convergence speed.
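In EKF-based training the network weights are treated as the state of a dynamical system and updated from the output prediction error, with R and Q as the tunable noise covariances discussed above. A minimal sketch for a single neuron follows; the neuron is linear in its weights, so the "extended" Jacobian is exact here, and the fixed R and Q values are illustrative rather than produced by the paper's adaptation laws:

```python
import numpy as np

rng = np.random.default_rng(4)
w_true = np.array([2.0, -1.0])  # weights we are trying to learn
w = np.zeros(2)                 # weight estimate = EKF state
P = np.eye(2)                   # state covariance
R, Q = 0.1, 1e-4                # measurement / process noise covariances

for _ in range(200):
    x = rng.standard_normal(2)
    y = w_true @ x + 0.1 * rng.standard_normal()  # noisy training target
    H = x                                         # output Jacobian dy/dw
    S = H @ P @ H + R                             # innovation covariance
    K = P @ H / S                                 # Kalman gain
    w = w + K * (y - w @ x)                       # weight update from error
    P = P - np.outer(K, H @ P) + Q * np.eye(2)    # covariance update
```

Raising R slows the updates (damping divergence, as the R adaption law aims to do), while a small nonzero Q keeps P from collapsing and so maintains learning speed.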
Han, Seong-Ik; Lee, Jang-Myung
2014-01-01
This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement on the approximation performances in order to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system was confirmed based on the Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator.
Robust passivity analysis for discrete-time recurrent neural networks with mixed delays
Huang, Chuan-Kuei; Shu, Yu-Jeng; Chang, Koan-Yuh; Shou, Ho-Nien; Lu, Chien-Yu
2015-02-01
This article considers the robust passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with mixed time-delays and uncertain parameters. The mixed time-delays that consist of both the discrete time-varying and distributed time-delays in a given range are presented, and the uncertain parameters are norm-bounded. The activation functions are assumed to be globally Lipschitz continuous. Based on new bounding technique and appropriate type of Lyapunov functional, a sufficient condition is investigated to guarantee the existence of the desired robust passivity condition for the DRNNs, which can be derived in terms of a family of linear matrix inequality (LMI). Some free-weighting matrices are introduced to reduce the conservatism of the criterion by using the bounding technique. A numerical example is given to illustrate the effectiveness and applicability.
Distributed Fault Detection in Sensor Networks using a Recurrent Neural Network
Obst, Oliver
2009-01-01
In long-term deployments of sensor networks, monitoring the quality of gathered data is a critical issue. Over the time of deployment, sensors are exposed to harsh conditions, causing some of them to fail or to deliver less accurate data. If such a degradation remains undetected, the usefulness of a sensor network can be greatly reduced. We present an approach that learns spatio-temporal correlations between different sensors, and makes use of the learned model to detect misbehaving sensors by using distributed computation and only local communication between nodes. We introduce SODESN, a distributed recurrent neural network architecture, and a learning method to train SODESN for fault detection in a distributed scenario. Our approach is evaluated using data from different types of sensors and is able to work well even with less-than-perfect link qualities and more than 50% of failed nodes.
D-optimal Bayesian Interrogation for Parameter and Noise Identification of Recurrent Neural Networks
Poczos, Barnabas
2008-01-01
We introduce a novel online Bayesian method for the identification of a family of noisy recurrent neural networks (RNNs). We develop a Bayesian active learning technique in order to optimize the interrogating stimuli given past experiences. In particular, we consider the unknown parameters as stochastic variables and use the D-optimality principle, also known as the 'infomax method', to choose optimal stimuli. We apply a greedy technique to maximize the information gain concerning network parameters at each time step. We also derive the D-optimal estimation of the additive noise that perturbs the dynamical system of the RNN. Our analytical results are approximation-free. The analytic derivation gives rise to attractive quadratic update rules.
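For a linear-Gaussian model, the greedy D-optimality (infomax) criterion reduces to picking the stimulus that maximizes the log-determinant gain of the posterior precision. A sketch follows; the two-parameter model, prior, and candidate stimuli are illustrative assumptions, not the paper's RNN setting:

```python
import numpy as np

rng = np.random.default_rng(5)
P0 = np.eye(2)      # prior precision over the two unknown parameters
noise_prec = 4.0    # 1 / measurement noise variance
candidates = rng.standard_normal((20, 2))  # candidate interrogating stimuli

def info_gain(x, P):
    """Expected information gain of stimulus x: 0.5 * Δ logdet(precision)."""
    Pn = P + noise_prec * np.outer(x, x)  # rank-one precision update
    return 0.5 * (np.linalg.slogdet(Pn)[1] - np.linalg.slogdet(P)[1])

gains = np.array([info_gain(x, P0) for x in candidates])
best = candidates[gains.argmax()]  # greedy D-optimal stimulus choice
```

With an isotropic prior the gain is monotone in the stimulus norm, so the largest candidate wins; once the posterior becomes anisotropic, the criterion instead favors stimuli aligned with the least-constrained parameter directions.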
Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2016-07-01
This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
Wiszniowski, Jan; Plesiewicz, Beata; Trojanowski, Jacek
2014-06-01
This study is an application of a Real Time Recurrent Neural Network (RTRN) in the detection of small natural seismic events in Poland. Most of the events studied are from the Podhale region with a magnitude of 0.4 to 2.5. The population distribution of the region required that seismic signals be recorded using temporary stations deployed in populated areas. As a consequence, the high level of seismic noise that cannot be removed by filtration made it impossible to detect small events by STA/LTA based algorithms. The presence of high noise requires an alternate method of seismic detection capable of recognizing small seismic events. We applied the RTRN, which can potentially detect seismic signals in the frequency domain as well as in the phase arrival times. Results on small local seismic events showed that the RTRN has the ability to correctly detect most of the events with fewer false detections than STA/LTA methods.
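The STA/LTA baseline that the RTRN is compared against triggers when the ratio of a short-term to a long-term average of signal energy exceeds a threshold. A minimal sketch follows; the window lengths, threshold, and synthetic trace are illustrative choices:

```python
import numpy as np

def sta_lta(trace, n_sta=10, n_lta=100):
    """Trailing STA/LTA ratio of signal energy, the classical trigger statistic."""
    energy = trace ** 2
    c = np.concatenate(([0.0], np.cumsum(energy)))  # prefix sums of energy
    ratio = np.zeros(len(trace))
    for i in range(n_sta + n_lta, len(trace)):
        sta = (c[i + 1] - c[i + 1 - n_sta]) / n_sta                  # recent window
        lta = (c[i + 1 - n_sta] - c[i + 1 - n_sta - n_lta]) / n_lta  # preceding window
        ratio[i] = sta / max(lta, 1e-12)
    return ratio

rng = np.random.default_rng(6)
trace = 0.1 * rng.standard_normal(2000)
trace[1000:1050] += np.sin(np.arange(50.0))  # small synthetic event
ratio = sta_lta(trace)
onset = int(np.argmax(ratio > 5.0))          # first sample where trigger fires
```

The weakness the abstract points to is visible in the ratio statistic itself: when ambient noise energy rises, the LTA rises with it and small events no longer push the ratio past the threshold.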
Yoo, Sung Jin; Park, Jin Bae; Choi, Yoon Ho
2006-12-01
A new method for the robust control of flexible-joint (FJ) robots with model uncertainties in both robot dynamics and actuator dynamics is proposed. The proposed control system is a combination of the adaptive dynamic surface control (DSC) technique and the self-recurrent wavelet neural network (SRWNN). The adaptive DSC technique provides the ability to overcome the "explosion of complexity" problem in backstepping controllers. The SRWNNs are used to observe the arbitrary model uncertainties of FJ robots, and all their weights are trained online. From the Lyapunov stability analysis, their adaptation laws are induced, and the uniformly ultimately boundedness of all signals in a closed-loop adaptive system is proved. Finally, simulation results for a three-link FJ robot are utilized to validate the good position tracking performance and robustness against payload uncertainties and external disturbances of the proposed control system.
An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory
Anonymous
2002-01-01
An incremental time-delay neural network based on synapse growth, which is suitable for dynamic control and learning of autonomous robots, is proposed to improve the learning and retrieving performance of a dynamical recurrent associative memory architecture. The model allows steady and continuous establishment of associative memory for spatio-temporal regularities and time series in discrete sequences of inputs. The inserted hidden units can be taken as long-term memories that expand the capacity of the network and may sometimes fade away under certain conditions. A preliminary experiment has shown that this incremental network may be a promising approach to endow autonomous robots with the ability to adapt to new data without destroying the learned patterns. The system also benefits from its potentially chaotic character for emergence.
Stošovic, Miona V Andrejevic; Litovski, Vanco B
2013-11-01
Simulation is indispensable during the design of many biomedical prostheses that are based on fundamental electrical and electronic actions. However, simulation necessitates the use of adequate models. The main difficulties related to the modeling of such devices are their nonlinearity and dynamic behavior. Here we report the application of recurrent artificial neural networks for modeling of a nonlinear, two-terminal circuit equivalent to a specific implantable hearing device. The method is general in the sense that any nonlinear dynamic two-terminal device or circuit may be modeled in the same way. The model generated was successfully used for simulation and optimization of a driver (operational amplifier)-transducer ensemble. This confirms our claim that in addition to the proper design and optimization of the hearing actuator, optimization in the electronic domain, at the electronic driver circuit-to-actuator interface, should take place in order to achieve best performance of the complete hearing aid.
Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.
2017-08-01
The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. Numerous kinds of such phenomena are currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that long learning periods are important in order to improve the network's learning capacity, and we discuss this ability in the presence of distinct inhibitory currents.
Amplification of asynchronous inhibition-mediated synchronization by feedback in recurrent networks.
Sashi Marella
2010-02-01
Synchronization of 30-80 Hz oscillatory activity of the principal neurons in the olfactory bulb (mitral cells) is believed to be important for odor discrimination. Previous theoretical studies of these fast rhythms in other brain areas have proposed that principal neuron synchrony can be mediated by short-latency, rapidly decaying inhibition. This phasic inhibition provides a narrow time window for the principal neurons to fire, thus promoting synchrony. However, in the olfactory bulb, the inhibitory granule cells produce long lasting, small amplitude, asynchronous and aperiodic inhibitory input and thus the narrow time window that is required to synchronize spiking does not exist. Instead, it has been suggested that correlated output of the granule cells could serve to synchronize uncoupled mitral cells through a mechanism called "stochastic synchronization", wherein the synchronization arises through correlation of inputs to two neural oscillators. Almost all work on synchrony due to correlations presumes that the correlation is imposed and fixed. Building on theory and experiments that we and others have developed, we show that increased synchrony in the mitral cells could produce an increase in granule cell activity for those granule cells that share a synchronous group of mitral cells. Common granule cell input increases the input correlation to the mitral cells and hence their synchrony by providing a positive feedback loop in correlation. Thus we demonstrate the emergence and temporal evolution of input correlation in recurrent networks with feedback. We explore several theoretical models of this idea, ranging from spiking models to an analytically tractable model.
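The core idea above — common input raises output correlation — can be quantified with a simple binned-spike-count correlation as a synchrony index. The bin width, rates, and synthetic spike trains below are illustrative assumptions, not the paper's models:

```python
import numpy as np

def synchrony(spikes_a, spikes_b, t_max, bin_width=5.0):
    """Pearson correlation of binned spike counts: a crude synchrony index."""
    bins = np.arange(0.0, t_max + bin_width, bin_width)
    ca, _ = np.histogram(spikes_a, bins)
    cb, _ = np.histogram(spikes_b, bins)
    return np.corrcoef(ca, cb)[0, 1]

rng = np.random.default_rng(7)
shared = rng.uniform(0, 1000, 80)  # spikes driven by a common input source
a = np.concatenate([shared, rng.uniform(0, 1000, 20)])  # two cells sharing input
b = np.concatenate([shared, rng.uniform(0, 1000, 20)])
c = rng.uniform(0, 1000, 100)      # a fully independent cell
```

Here `synchrony(a, b, 1000)` is high because 80% of the spikes come from shared input, while `synchrony(a, c, 1000)` fluctuates around zero — the fixed-correlation setting that the abstract's feedback mechanism makes dynamic.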
Neural processing of short-term recurrence in songbird vocal communication.
Gabriël J L Beckers
BACKGROUND: Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication. METHODOLOGY/PRINCIPAL FINDINGS: We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented to anesthetized birds sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges leads to widespread and significant modulation in the strength of neural responses. Such modulation is highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area. CONCLUSIONS/SIGNIFICANCE: Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.
Foundations of implementing the competitive layer model by Lotka-Volterra recurrent neural networks.
Yi, Zhang
2010-03-01
The competitive layer model (CLM) can be described by an optimization problem. The problem can be further formulated by an energy function, called the CLM energy function, in the subspace of the nonnegative orthant. The set of minimum points of the CLM energy function forms the set of solutions of the CLM problem, so solving the CLM problem means finding such solutions. Recurrent neural networks (RNNs) can be used to implement the CLM to solve the CLM problem. The key point is to make the set of minimum points of the CLM energy function correspond exactly to the set of stable attractors of the recurrent neural networks. This paper proposes to use Lotka-Volterra RNNs (LV RNNs) to implement the CLM. The contribution of this paper is to establish the foundations of implementing the CLM by LV RNNs, and it contains three main parts. The first part is on the CLM energy function: necessary and sufficient conditions for minimum points of the CLM energy function are established by detailed study. The second part is on the convergence of the proposed model of LV RNNs: it is proven that the interesting trajectories are convergent. The third part is the most important: it proves that the set of stable attractors of the proposed LV RNN equals exactly the set of minimum points of the CLM energy function in the nonnegative orthant. Thus, the LV RNNs can be used to solve the CLM problem. It is believed that by establishing such basic rigorous theories, more interesting applications of the CLM can be found.
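The generic Lotka-Volterra RNN dynamics behind such an implementation can be sketched as follows. The form dx_i/dt = x_i(-x_i + h_i + Σ_j w_ij x_j) is the standard LV RNN; the mutual-inhibition weights, inputs, and competition setup below are illustrative assumptions, not the paper's CLM construction or its energy-function analysis.

```python
import numpy as np

def lv_rnn_step(x, W, h, dt=0.01):
    """Euler step of a Lotka-Volterra recurrent network:
    dx_i/dt = x_i * (-x_i + h_i + sum_j W_ij x_j).
    The multiplicative x_i factor keeps trajectories inside
    the nonnegative orthant, where the CLM energy is defined."""
    return x + dt * x * (-x + h + W @ x)

n = 4
W = np.full((n, n), -0.5)           # mutual inhibition (competition)
np.fill_diagonal(W, 0.0)
h = np.array([1.0, 0.6, 0.4, 0.2])  # external inputs; the largest should win
x = np.full(n, 0.1)                 # start inside the nonnegative orthant
for _ in range(5000):
    x = lv_rnn_step(x, W, h)
# x has settled toward a stable attractor: the strongly driven units
# stay active while weakly driven ones are suppressed toward zero.
```

With symmetric competition the trajectory converges to a boundary equilibrium of the orthant, illustrating how stable attractors of the LV dynamics can represent solutions of a competitive assignment.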
Anonymous
2001-01-01
We theoretically investigate the asymptotic stability, local bifurcations, and chaos of discrete-time recurrent neural networks in which the input-output function is defined as a generalized sigmoid function, such as v_i = tanh(μ_i u_i). Numerical simulations are also provided to demonstrate the theoretical results.
Deep Recurrent Neural Networks for seizure detection and early seizure detection systems
Talathi, S. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-06-05
Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in a broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data in order to evaluate our method and demonstrate very promising evaluation results with overall accuracy close to 100%. We also systematically investigate the application of our method for early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration.
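The GRU building block behind such a detector can be sketched in plain NumPy. The dimensions, random weights, toy input, and the sigmoid read-out below are illustrative assumptions, not the trained model or the data pipeline of the study.

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: the update gate z controls how much of the
    previous hidden state is kept versus overwritten by the candidate."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde

# Toy EEG-like stream: 4 channels, 50 time steps, 8 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid, T = 4, 8, 50
Ws = [rng.normal(0, 0.1, (n_hid, n_in)) for _ in range(3)]
Us = [rng.normal(0, 0.1, (n_hid, n_hid)) for _ in range(3)]
h = np.zeros(n_hid)
for t in range(T):
    x = rng.normal(size=n_in)                  # one multichannel sample
    h = gru_step(x, h, Ws[0], Us[0], Ws[1], Us[1], Ws[2], Us[2])
# A sigmoid over a simple sum of the final state stands in for the
# trained read-out that would emit a seizure probability.
score = float(1.0 / (1.0 + np.exp(-h.sum())))
```

In a real detector the read-out weights and all gate matrices would be learned from labeled EEG segments; here they are fixed random values purely to show the recurrence.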
Doubravová, Jana; Wiszniowski, Jan; Horálek, Josef
2016-08-01
In this paper, we present a new method for local event detection of swarm-like earthquakes based on neural networks. The proposed algorithm uses a unique neural network architecture that combines features of other neural network concepts, such as the Real Time Recurrent Network and the Nonlinear Autoregressive Neural Network, to achieve good detection performance. We use recurrence combined with various delays applied to the recurrent inputs, so that the network remembers the history of many samples. This method has been tested on data from a local seismic network in West Bohemia with promising results. We found that phases not picked in the training data diminish the detection capability of the neural network, so proper preparation of training data is fundamental. To train the network we define a parameter called the learning importance weight of events and show that it affects the number of acceptable solutions achieved by many trials of the Back Propagation Through Time algorithm. We also compare the individual training of stations with training all of them simultaneously, and we conclude that the results of joint training are better for some stations than training only one station.
Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning.
Eser, Jürgen; Zheng, Pengsheng; Triesch, Jochen
2014-01-01
Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics, which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during self-organization, developing into a regime where only a few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
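The perturbation-based estimate of the maximum Lyapunov exponent used to characterize chaos can be illustrated on a simple one-dimensional map, used here as a stand-in for the network dynamics. This is the standard Benettin-style renormalization procedure, not the authors' exact analysis code.

```python
import math

def lyapunov_estimate(f, x0, n_steps=2000, n_transient=100, eps=1e-8):
    """Estimate the maximum Lyapunov exponent of a 1-D map by tracking
    how a tiny perturbation grows, renormalizing it every step."""
    x = x0
    for _ in range(n_transient):   # let the orbit settle onto the attractor
        x = f(x)
    y = x + eps                    # perturbed copy
    total, count = 0.0, 0
    for _ in range(n_steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        if d == 0.0:               # perturbation lost to rounding; reseed it
            y = x + eps
            continue
        total += math.log(d / eps) # local stretching factor
        y = x + eps * (y - x) / d  # renormalize back to size eps
        count += 1
    return total / count

# Fully chaotic logistic map; its exact exponent is ln 2 ≈ 0.693.
lam = lyapunov_estimate(lambda x: 4.0 * x * (1.0 - x), 0.2)
```

A positive estimate indicates chaos (perturbations amplified); in the SORN study the analogous estimate decreases as the network self-organizes.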
Emergence of hierarchical structure mirroring linguistic composition in a recurrent neural network.
Hinoshita, Wataru; Arie, Hiroaki; Tani, Jun; Okuno, Hiroshi G; Ogata, Tetsuya
2011-05-01
We show that a Multiple Timescale Recurrent Neural Network (MTRNN) can acquire the capabilities to recognize, generate, and correct sentences by self-organizing in a way that mirrors the hierarchical structure of sentences: characters grouped into words, and words into sentences. The model can control which sentence to generate depending on its initial states (generation phase) and the initial states can be calculated from the target sentence (recognition phase). In an experiment, we trained our model over a set of unannotated sentences from an artificial language, represented as sequences of characters. Once trained, the model could recognize and generate grammatical sentences, even if they were not learned. Moreover, we found that our model could correct a few substitution errors in a sentence, and the correction performance was improved by adding the errors to the training sentences in each training iteration with a certain probability. An analysis of the neural activations in our model revealed that the MTRNN had self-organized, reflecting the hierarchical linguistic structure by taking advantage of the differences in timescale among its neurons: in particular, neurons that change the fastest represented "characters", those that change more slowly, "words", and those that change the slowest, "sentences".
Fault Detection and Isolation of Wind Energy Conversion Systems using Recurrent Neural Networks
N. Talebi
2014-07-01
Reliability of Wind Energy Conversion Systems (WECSs) is greatly important for extracting the maximum amount of available wind energy. In order to accurately study WECSs during the occurrence of faults and to explore the impact of faults on each component of WECSs, a detailed model is required in which the mechanical and electrical parts of WECSs are properly involved. In addition, a Fault Detection and Isolation System (FDIS) is required by which occurred faults can be diagnosed in appropriate time in order to ensure safe system operation and avoid heavy economic losses, by enabling fast and accurate detection and isolation of faults and subsequent corrective actions. In this paper, by utilizing a comprehensive dynamic model of the WECS, an FDIS is presented using dynamic recurrent neural networks. In industrial processes, dynamic neural networks are known as a good mathematical tool for fault detection. Simulation results show that the proposed FDIS appropriately detects faults of the generator's angular velocity sensor, the pitch angle sensors and the pitch actuators. The suggested FDIS is capable of detecting and isolating faults quickly while having a very low false alarm rate. The presented FDIS scheme can be used to identify faults in other parts of the WECS.
Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin
2016-01-01
Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3 s). In this contribution we present an open-source, end-to-end LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3 s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made those former results hardly reproducible. Further, we extend those previous experiments by modeling unseen languages (out-of-set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25 s down to 0.1 s), proving that with as little as 0.5 s an accuracy of over 50% can be achieved.
A biologically plausible learning rule for the Infomax on recurrent neural networks.
Hayakawa, Takashi; Kaneko, Takeshi; Aoyagi, Toshio
2014-01-01
A fundamental issue in neuroscience is to understand how neuronal circuits in the cerebral cortex play their functional roles through their characteristic firing activity. Several characteristics of spontaneous and sensory-evoked cortical activity have been reproduced by Infomax learning of neural networks in computational studies. There are, however, still few models of the underlying learning mechanisms that allow cortical circuits to maximize information and produce the characteristics of spontaneous and sensory-evoked cortical activity. In the present article, we derive a biologically plausible learning rule for the maximization of information retained through time in dynamics of simple recurrent neural networks. Applying the derived learning rule in a numerical simulation, we reproduce the characteristics of spontaneous and sensory-evoked cortical activity: cell-assembly-like repeats of precise firing sequences, neuronal avalanches, spontaneous replays of learned firing sequences and orientation selectivity observed in the primary visual cortex. We further discuss the similarity between the derived learning rule and the spike timing-dependent plasticity of cortical neurons.
Liu, Xiwei; Chen, Tianping
2016-03-01
In this paper, we investigate the global exponential stability of complex-valued recurrent neural networks with asynchronous time delays by decomposing the complex-valued networks into real and imaginary parts and constructing an equivalent real-valued system. The network model is described by a continuous-time equation. There are two main differences between this paper and previous works: 1) time delays can be asynchronous, i.e., delays between different nodes can differ, which makes our model more general; and 2) we prove the exponential convergence directly, while the existence and uniqueness of the equilibrium point is just a direct consequence of the exponential convergence. Using three generalized norms, we present some sufficient conditions for the uniqueness and global exponential stability of the equilibrium point of delayed complex-valued neural networks. The conditions in our results are less restrictive because we take into account the excitatory and inhibitory effects between neurons, so previous results by other researchers can be extended. Finally, some numerical simulations are given to demonstrate the correctness of the obtained results.
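The decomposition into real and imaginary parts can be made concrete: writing W = A + iB and z = x + iy, the complex product Wz equals the real block system [[A, -B], [B, A]] applied to the stacked vector [x; y]. A minimal numerical check, using arbitrary random matrices rather than the paper's delayed network model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # complex weights
z = rng.normal(size=n) + 1j * rng.normal(size=n)            # complex state

# Complex update Wz, computed directly ...
direct = W @ z

# ... and via the equivalent real-valued system of dimension 2n:
# with W = A + iB and z = x + iy,  Wz = (Ax - By) + i(Bx + Ay).
A, B = W.real, W.imag
M = np.block([[A, -B], [B, A]])
v = np.concatenate([z.real, z.imag])
stacked = M @ v
recombined = stacked[:n] + 1j * stacked[n:]   # should equal `direct`
```

The same block structure is what turns an n-dimensional complex-valued network into an equivalent 2n-dimensional real-valued one, after which real-valued stability tools apply.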
Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations
ElSaid, AbdElRahman Ahmed
This thesis examines building viable Recurrent Neural Networks (RNNs) using Long Short Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline, containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction than analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time-series data which can further improve predictions of future vibration values. LSTM RNNs were used over traditional RNNs, as the latter suffer from vanishing/exploding gradients when trained with backpropagation. The study managed to predict vibration values 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51% and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems, so that suitable actions can be taken before the occurrence of excess vibration to avoid unfavorable situations during flight.
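The multi-horizon setup (predicting 1 to 20 seconds ahead) amounts to building supervised (input window, future value) pairs from the time series before any RNN is trained. A sketch with a synthetic stand-in signal, since the flight-data records themselves are not public; the window length and helper name are illustrative choices:

```python
import numpy as np

def make_forecast_pairs(series, window, horizon):
    """Turn a univariate series into (input window, future value) pairs
    for training a recurrent forecaster that predicts `horizon` steps ahead."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])              # past `window` samples
        y.append(series[t + window + horizon - 1])  # value `horizon` steps later
    return np.array(X), np.array(y)

vib = np.sin(np.linspace(0, 20, 200))  # stand-in for a vibration signal
X1, y1 = make_forecast_pairs(vib, window=10, horizon=1)    # 1 step ahead
X20, y20 = make_forecast_pairs(vib, window=10, horizon=20) # 20 steps ahead
```

Longer horizons leave fewer usable pairs and a harder mapping, which is consistent with the error growing from 2.84% at 1 second to 10.19% at 20 seconds ahead.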
Initialization and self-organized optimization of recurrent neural network connectivity.
Boedecker, Joschka; Obst, Oliver; Mayer, N Michael; Asada, Minoru
2009-10-01
Reservoir computing (RC) is a recent paradigm in the field of recurrent neural networks. Networks in RC have a sparsely and randomly connected fixed hidden layer, and only output connections are trained. RC networks have recently received increased attention as a mathematical model for generic neural microcircuits to investigate and explain computations in neocortical columns. Applied to specific tasks, their fixed random connectivity, however, leads to significant variation in performance. Few problem-specific optimization procedures are known; such procedures would be important for engineering applications, but also for understanding how networks in biology are shaped to be optimally adapted to the requirements of their environment. We study a general network initialization method using permutation matrices and derive a new unsupervised learning rule based on intrinsic plasticity (IP). The IP-based learning uses only local information, and its aim is to improve network performance in a self-organized way. Using three different benchmarks, we show that networks with permutation matrices for the reservoir connectivity have much more persistent memory than the other methods, but are also able to perform highly nonlinear mappings. We also show that IP based on sigmoid transfer functions is limited concerning the output distributions that can be achieved.
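A reservoir whose recurrent weights form a scaled permutation matrix can be sketched as follows. The reservoir size, scaling, and input weights are illustrative choices, not the benchmark settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20
# Permutation reservoir: each unit feeds exactly one other unit, forming
# long cycles that pass input history around and so preserve memory.
perm = rng.permutation(N)
W = np.zeros((N, N))
W[np.arange(N), perm] = 0.95   # all eigenvalues lie on a circle of radius 0.95
w_in = rng.normal(0, 0.5, N)   # fixed random input weights

def run_reservoir(inputs):
    """Drive the fixed reservoir with a scalar input sequence and
    collect the hidden states; only a read-out on these states is trained."""
    h = np.zeros(N)
    states = []
    for u in inputs:
        h = np.tanh(W @ h + w_in * u)
        states.append(h.copy())
    return np.array(states)

states = run_reservoir(rng.normal(size=100))
# In reservoir computing, a linear read-out would now be fit on `states`;
# W and w_in above stay fixed throughout.
```

Because a permutation matrix is orthogonal, scaling it sets the spectral radius exactly, which is one reason such reservoirs retain input history longer than generic random ones.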
A novel recurrent neural network forecasting model for power intelligence center
LIU Ji-cheng; NIU Dong-xiao
2008-01-01
In order to accurately forecast the load of a power system and enhance the stability of the power network, a novel unascertained-mathematics-based recurrent neural network (UMRNN) for a power intelligence center (PIC) was created through three steps. First, by combining with the general project uncertain element transmission theory (GPUET), the basic definitions of stochastic, fuzzy, and grey uncertain elements were given based on the principal types of uncertain information. Second, a power dynamic alliance including four sectors (the generation sector, transmission sector, distribution sector and customers) was established. The key factors were amended according to the four transmission topologies of uncertain elements, and the new factors entered the power intelligence center as the input elements. Finally, in the intelligence-handling background of the PIC, by performing uncertain and recursive processing on the input values of the network and combining unascertained mathematics, the novel load forecasting model was built. Three different approaches were used to forecast an eastern regional power grid load in China. The root mean square error (ERMS) demonstrates that the forecasting accuracy of the proposed UMRNN model is 3% higher than that of a BP neural network (BPNN), and 5% higher than that of an autoregressive integrated moving average (ARIMA) model. An example also shows that the average relative error for the first quarter of 2008 forecasted by UMRNN is only 2.59%, indicating high precision.
Chandra, Rohitash
2015-12-01
Collaboration enables weak species to survive in an environment where different species compete for limited resources. Cooperative coevolution (CC) is a nature-inspired optimization method that divides a problem into subcomponents and evolves them while genetically isolating them. Problem decomposition is an important aspect of using CC for neuroevolution: CC employs different problem decomposition methods to decompose the neural network training problem into subcomponents, and different methods have features that are helpful at different stages of the evolutionary process. Adaptation, collaboration, and competition are needed for CC, as multiple subpopulations are used to represent the problem, so it is important to add collaboration and competition to CC. This paper presents a competitive CC method for training recurrent neural networks for chaotic time-series prediction. Two different instances of the competitive method are proposed that employ different problem decomposition methods to enforce island-based competition. The results show improvement in the performance of the proposed methods in most cases when compared with standalone CC and other methods from the literature.
Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari [Department of Electrical Engineering, K.N. Toosi University of Technology, Intelligent System Lab, Tehran (Iran)
2009-04-15
In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for the load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in the feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. a proportional, integral and derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system, and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, once the INFC has learned the inverse of the controlled system, the reference signal is tracked properly. Generally, the CFC is designed at the nominal operating conditions of the system and therefore fails to provide the best control performance, as well as global stability, over a wide range of changes in the operating conditions of the system. So, in this study a supervised controller (SC), a lookup-table-based controller, is introduced for tuning the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to these operating conditions. Moreover, for improving the performance of the overall system, a recurrent fuzzy neural network (RFNN) is adopted in the INFC instead of the conventional neural network used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through several performance indices.
Guo, Zhenyuan; Wang, Jun; Yan, Zheng
2013-12-01
This paper addresses the global exponential dissipativity of memristor-based recurrent neural networks with time-varying delays. By constructing proper Lyapunov functionals and using M-matrix theory and the LaSalle invariant principle, the sets of global exponential dissipativity are characterized parametrically. It is proven herein that there are 2^(2n^2 - n) equilibria for an n-neuron memristor-based neural network, and they are located in the derived globally attractive sets. It is also shown that memristor-based recurrent neural networks with time-varying delays are stabilizable at the origin of the state space by using a linear state feedback control law with appropriate gains. Finally, two numerical examples are discussed in detail to illustrate the characteristics of the results.
Neural networks mediating sentence reading in the deaf
Elizabeth Ann Hirshorn
2014-06-01
The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative roles of deafness and English knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included: deaf signers, oral deaf, and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing the greatest bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex re-organization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing individuals. In contrast, connectivity from the left STG toward areas identified with speech-based processing was greater in hearing individuals and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape its functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups are discussed.
M, Unterhuber; W, Rauhe; P, Sgobino; F, Pescoller; M, Manfrin; M, Tomaino
2016-01-01
Through a retrospective study of our center's experience with patients affected by neurally mediated reflex syncope (NMS), we wanted to verify not only the diagnostic yield of the implantable loop recorder (ILR) but also its possible placebo therapeutic effect. Among patients with a severe clinical presentation of NMS, identified through careful clinical evaluation, we selected those who followed a diagnostic workup using the ILR. We analysed 84 patients (39 male and 45 female, mean age 71 years) during the period 2009-2013. Thirty-four patients (40.5%) had no recurrences after a mean follow-up (FU) of 35 months; among these, 17 completed a FU of 4 years. Fifty patients (59.5%) had recurrences and a specific diagnosis after an average period of 7 months. We found an important number of patients who showed a disappearance of syncope during an observation period of 2-3 and 4 years. At first glance, these results could be explained by considering the possible placebo therapeutic effect of the ILR.
Coding of level of ambiguity within neural systems mediating choice
Lopez-Paniagua, Dan; Seger, Carol A.
2013-01-01
Data from previous neuroimaging studies exploring neural activity associated with uncertainty suggest varying levels of activation associated with changing degrees of uncertainty in neural regions that mediate choice behavior. The present study used a novel task that parametrically controlled the amount of information hidden from the subject; levels of uncertainty ranged from full ambiguity (no information about the probability of winning), through multiple levels of partial ambiguity, to a condition of risk only (zero ambiguity with full knowledge of the probability of winning). A parametric analysis compared a linear model, in which weighting increased as a function of level of ambiguity, and an inverted-U quadratic model, in which partial-ambiguity conditions were weighted most heavily. Overall we found that risk and all levels of ambiguity recruited a common "fronto-parietal-striatal" network including regions within the dorsolateral prefrontal cortex, intraparietal sulcus, and dorsal striatum. Activation was greatest across these regions and additional anterior and superior prefrontal regions for the quadratic function, which most heavily weighs trials with partial ambiguity. These results suggest that the neural regions involved in decision processes do not merely track the absolute degree of ambiguity or the type of uncertainty (risk vs. ambiguity). Instead, recruitment of prefrontal regions may result from the greater degree of difficulty in conditions of partial ambiguity: when information regarding reward probabilities important for decision making is hidden or not easily obtained, the subject must engage in a search for tractable information. Additionally, this study identified regions of activity related to the valuation of potential gains associated with stimuli or options (including the orbitofrontal and medial prefrontal cortices and dorsal striatum) and related to winning (including the orbitofrontal cortex and ventral striatum). PMID:24367286
Neural circuits mediating olfactory-driven behavior in fish
Florence eKermen
2013-04-01
The fish olfactory system processes odor signals and mediates behaviors that are crucial for survival, such as foraging, courtship and alarm response. Although the upstream olfactory brain areas (olfactory epithelium and olfactory bulb) are well studied, less is known about their target brain areas and the role they play in generating odor-driven behaviors. Here we review a broad range of literature on the anatomy, physiology and behavioral output of the olfactory system and its target areas in a wide range of teleost fish. Additionally, we discuss how applying recent technological advancements to the zebrafish (Danio rerio) could help in understanding the function of these target areas. We hope to provide a framework for elucidating the neural circuit computations underlying the odor-driven behaviors in this small, transparent and genetically amenable vertebrate.
Neurally mediated syncope presenting with paroxysmal positional vertigo and tinnitus.
Goto, Fumiyuki; Tsutsumi, Tomoko; Nakamura, Iwao; Ogawa, Kaoru
2012-10-01
A 72-year-old man with positional vertigo and tinnitus was referred to us. He was unwilling to undergo the provocation test more than once because of his fear. No positional nystagmus was provoked. He found that his attacks usually occurred when he lay on his right ear. From his clinical history, benign paroxysmal positional vertigo (BPPV) was suspected. Conventional pharmacotherapy as well as non-specific physical therapy had no significant effect. His feeling of positional vertigo with pyrosis was actually presyncope. We suspected cardiovascular disorders and referred him to a cardiologist. Portable cardiogram monitoring revealed paroxysmal bradycardia. He was diagnosed with neurally mediated syncope, and a pacemaker was implanted. His paroxysmal dizziness soon disappeared. It is important to study the clinical history of patients in detail, as they are not always able to accurately explain their symptoms. We should carefully rule out cardiovascular disorders, especially when we see patients with suspected BPPV without the characteristic positional nystagmus.
Physiological phenomenology of neurally-mediated syncope with management implications.
Christoph Schroeder
BACKGROUND: Due to lack of efficacy in recent trials, current guidelines for the treatment of neurally-mediated (vasovagal) syncope do not promote cardiac pacemaker implantation. However, the finding of asystole during head-up tilt-induced (pre)syncope may lead to excessive diagnosis of cardioinhibitory syncope and treatment with cardiac pacemakers, as blood pressure is often measured discontinuously. Furthermore, physicians may be more inclined to implant cardiac pacemakers in older patients. We hypothesized that true cardioinhibitory syncope, in which the decrease in heart rate precedes the fall in blood pressure, is a very rare finding, which might explain the lack of efficacy of pacemakers in neurally-mediated syncope. METHODS: We studied 173 consecutive patients referred for unexplained syncope (114 women, 59 men, 42 ± 1 years, 17 ± 2 syncopal episodes). All had experienced (pre)syncope during head-up tilt testing followed by additional lower body negative pressure. We classified hemodynamic responses according to the modified Vasovagal Syncope International Study (VASIS) classification as mixed response (VASIS I), cardioinhibitory without asystole (VASIS IIa) or with asystole (VASIS IIb), and vasodepressor (VASIS III). Then, we defined the exact temporal relationship between hypotension and bradycardia to identify patients with true cardioinhibitory syncope. RESULTS: Of the (pre)syncopal events during tilt testing, 63% were classified as VASIS I, 6% as VASIS IIb, 2% as VASIS IIa, and 29% as VASIS III. Cardioinhibitory responses (VASIS class II) progressively decreased from the youngest to the oldest age quartile. With more detailed temporal analysis, blood pressure reduction preceded the heart-rate decrease in all but six individuals (97% overall) and in 10 out of 11 patients with asystole (VASIS IIb). CONCLUSIONS: Hypotension precedes bradycardia onset during head-up tilt-induced (pre)syncope in the vast majority of patients, even in those classified as
Lu, Wenlian; Zheng, Ren; Chen, Tianping
2016-03-01
In this paper, we discuss outer-synchronization of asymmetrically connected recurrent time-varying neural networks. Using both centralized and decentralized discretization data-sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference between any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples under the centralized and decentralized principles are proved to be positive, which guarantees the exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results.
Gong, Weiqiang; Liang, Jinling; Cao, Jinde
2015-10-01
In this paper, based on the matrix measure method and the Halanay inequality, the global exponential stability problem is investigated for complex-valued recurrent neural networks with time-varying delays. Without constructing any Lyapunov functions, several sufficient criteria are obtained to ascertain the global exponential stability of the addressed complex-valued neural networks under different activation functions. Here, the activation functions are no longer assumed to be differentiable, which is always demanded in related references. In addition, the obtained results are easy to verify and implement in practice. Finally, two examples are given to illustrate the effectiveness of the obtained results.
Fei, Juntao; Lu, Cheng
2017-03-06
In this paper, an adaptive sliding mode control system using a double loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops, where the firing weights and the output signal calculated in the last step are stored and used as the feedback signals in each feedback loop. Since the new structure combines the advantages of internal-feedback and external-feedback NNs, it can acquire the internal state information while the output signal is also captured; thus the newly designed DLRNN can achieve better approximation performance than regular NNs without feedback loops or regular RNNs with a single feedback loop. The proposed DLRNN structure is employed in an equivalent controller to approximate the unknown nonlinear system dynamics, and the parameters of the DLRNN are updated online by adaptive laws to obtain favorable approximation performance. To investigate the effectiveness of the proposed controller, the designed adaptive sliding mode controller with the DLRNN is applied to a z-axis microelectromechanical system gyroscope to control the vibrating dynamics of the proof mass. Simulation results demonstrate that the proposed methodology achieves good tracking performance, and comparisons of the approximation performance among a radial basis function NN, an RNN, and the DLRNN show that the DLRNN can accurately and quickly estimate the unknown dynamics while its internal states remain more stable.
Applying long short-term memory recurrent neural networks to intrusion detection
Ralf C. Staudemeyer
2015-07-01
We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks on the training data provided by the DARPA / KDD Cup ’99 challenge. To identify suitable LSTM-RNN network parameters and structure, we experimented with various network topologies. We found that networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We applied forget gates and shortcut connections, respectively. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets, respectively. We evaluated different feature sets for the detection of all attacks within one network, and also trained networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results of strong static classifiers. With 93.82% accuracy and 22.13 cost, LSTM outperforms the winning entries of the KDD Cup ’99 challenge by far. This is because LSTM learns to look back in time and correlate consecutive connection records. For the first time, we have demonstrated the usefulness of LSTM networks for intrusion detection.
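The memory-block mechanics behind such an LSTM classifier can be sketched with a single-cell forward pass. This is a minimal NumPy illustration only, not the authors' DARPA/KDD pipeline: the dimensions, random weights, and the final scoring head are all illustrative assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One forward step of a single LSTM cell (gates are slices of one affine map)."""
    n = h.shape[0]
    z = W @ x + U @ h + b                      # stacked pre-activations, shape (4*n,)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:n]), sig(z[n:2*n]), sig(z[2*n:3*n])  # input/forget/output gates
    g = np.tanh(z[3*n:])                       # candidate cell state
    c_new = f * c + i * g                      # forget gate keeps long-range memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 5, 4                                    # illustrative feature and hidden sizes
W = rng.normal(0.0, 0.1, (4 * H, D))
U = rng.normal(0.0, 0.1, (4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for _ in range(10):                            # a toy sequence of connection records
    x = rng.normal(size=D)
    h, c = lstm_step(x, h, c, W, U, b)

score = 1.0 / (1.0 + np.exp(-h.sum()))         # toy binary "attack" score head
```

Because the final hidden state is a function of the whole sequence, the classifier can, in principle, correlate consecutive records rather than judge each one in isolation.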
Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.
Okamoto, Hiroshi
2016-08-01
Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known.
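The pattern-restoration idea above can be sketched with classic Hopfield-style attractor dynamics. This is a toy illustration under an assumed Hebbian weight construction and a hand-made "imperfect" source set; the paper's actual network construction may differ.

```python
import numpy as np

N = 40
truth = np.where(np.arange(N) < 15, 1.0, -1.0)   # +1 = "true" community member

# Hebbian weights storing the membership pattern (self-connections removed)
W = np.outer(truth, truth) / N
np.fill_diagonal(W, 0)

# Imperfect, inaccurate source set: drop some true members, add false ones
cue = truth.copy()
cue[[0, 1, 2]] = -1.0        # missing true members
cue[[20, 21]] = 1.0          # false members

state = cue.copy()
for _ in range(5):           # attractor dynamics: iterate toward a fixed point
    state = np.sign(W @ state)
```

Running the dynamics pulls the deteriorated cue back to the stored membership pattern, which is the "cue-triggered recall" analogy the abstract draws.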
Zhao, Wan; Xu, Wen
2014-01-01
To investigate migration and differentiation of neural progenitor cells (NPCs) from the ependymal layer to the nucleus ambiguus (NA) after recurrent laryngeal nerve (RLN) avulsion. All of the animals received a CM-DiI injection in the left lateral ventricle. Forty-five adult rats were subjected to a left RLN avulsion injury, and nine rats were used as controls. 5-Bromo-2-deoxyuridine (BrdU) was injected intraperitoneally. Immunohistochemical analyses were performed in the brain stems at different time points after RLN injury. After RLN avulsion, the CM-DiI+ NPCs from the ependymal layer migrated to the lesioned NA. CM-DiI+/GFAP+ astrocytes, CM-DiI+/DCX+ neuroblasts and CM-DiI+/NeuN+ neurons were observed in the migratory stream. However, the ipsilateral NA included only CM-DiI+ astrocytes, not newborn neurons. After RLN avulsion, the NPCs in the ependymal layer of the 4th ventricle or central canal attempt to restore the damaged NA. We first confirm that the migratory stream includes both neurons and glia differentiated from the NPCs. However, only differentiated astrocytes are successfully incorporated into the NA. The presence of both cell types in the migratory process may play a role in repairing RLN injuries.
Wei, Qikang; Chen, Tao; Xu, Ruifeng; He, Yulan; Gui, Lin
2016-01-01
The recognition of disease and chemical named entities in scientific articles is a very important subtask of information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of disease named entities is considerably harder than that of chemical names. Although some remarkable chemical named entity recognition systems are available online, such as ChemSpot and tmChem, publicly available recognition systems for disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on a conditional random fields model with a rule-based post-processing module. The other is based on bidirectional recurrent neural networks. The named entities recognized by each DNER model are then fed into a support vector machine classifier for combining results. Finally, each recognized disease named entity is normalized to a medical subject heading disease name using a vector space model based method. Experimental results show that, using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level on the testing data of the chemical-disease relation task in BioCreative V. Database URL: http://219.223.252.210:8080/SS/cdr.html PMID:27777244
Using LSTM recurrent neural networks for monitoring the LHC superconducting magnets
Wielgosz, Maciej; Skoczeń, Andrzej; Mertik, Matej
2017-09-01
The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyzes voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when misbehavior of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system, and are updated and monitored by human operators. This paper proposes a novel approach to monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art deep learning algorithms. Accordingly, the authors examined the performance of LSTM recurrent neural networks for modeling the voltage time series of the magnets. To address this challenging task, different network architectures and hyper-parameters were used to achieve the best possible performance of the solution. The regression results were measured in terms of RMSE for different numbers of future steps and history lengths taken into account for the prediction. The best result of RMSE = 0.00104 was obtained for a network of 128 LSTM cells in the internal layer and a 16-step history buffer.
Automatic Estimation of the Dynamics of Channel Conductance Using a Recurrent Neural Network
Masaaki Takahashi
2009-01-01
In order to simulate neuronal electrical activities, we must estimate the dynamics of channel conductances from physiological experimental data. However, this approach requires the formulation of differential equations that express the time course of channel conductance. On the other hand, if the dynamics are automatically estimated, neuronal activities can be easily simulated. By using a recurrent neural network (RNN), it is possible to estimate the dynamics of channel conductances without formulating the differential equations. In the present study, we estimated the dynamics of the Na+ and K+ conductances of a squid giant axon using two different fully connected RNNs and were able to reproduce various neuronal activities of the axon. The reproduced activities were an action potential, a threshold, a refractory phenomenon, a rebound action potential, and periodic action potentials under constant stimulation. RNNs can be trained using channels other than the Na+ and K+ channels. Therefore, using our RNN estimation method, the dynamics of channel conductance can be automatically estimated and the neuronal activities can be simulated using the channel RNNs. An RNN can be a useful tool for estimating the dynamics of the channel conductance of a neuron, and by using the method presented here, it is possible to simulate neuronal activities more easily than with previous methods.
Study of Sentiment Classification for Chinese Microblog Based on Recurrent Neural Network
ZHANG Yangsen,JIANG Yuru; TONG Yixuan
2016-01-01
The sentiment classification of Chinese microblog text is a meaningful topic. Many studies have been done based on rule-based and bag-of-words methods; understanding the structural information of a sentence is the next target. We propose a sentiment classification method based on a recurrent neural network (RNN). We adopt distributed word representations to construct a vector for each word in a sentence, then use the RNN to train fixed-dimension sentence vectors for sentences of different lengths, so that the sentence vectors contain both word semantic features and word sequence features; finally, a softmax regression classifier in the output layer predicts each sentence's sentiment orientation. Experimental results reveal that our method can capture the structural information of negative and double-negative sentences and achieve better accuracy. This way of computing sentence vectors helps to learn the deep structure of a sentence and will be valuable for other research areas.
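The output stage described above, a softmax regression classifier over a fixed-dimension sentence vector, can be sketched as follows. Here a simple average of word vectors stands in for the RNN encoder, and the tiny vocabulary and random weights are purely illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())                    # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
vocab = {"not": 0, "bad": 1, "good": 2, "movie": 3}   # hypothetical tiny vocabulary
E = rng.normal(0.0, 0.5, (len(vocab), 8))      # toy distributed word representations

def sentence_vector(words):
    # stand-in for the RNN encoder: averaging also yields a fixed-dimension vector,
    # though unlike an RNN it discards word-order information
    return np.mean([E[vocab[w]] for w in words], axis=0)

W_out = rng.normal(0.0, 0.5, (2, 8))           # softmax layer: 2 classes (neg, pos)
p = softmax(W_out @ sentence_vector(["not", "bad", "movie"]))
```

The point of the paper's RNN encoder is precisely what the averaging stand-in loses: "not bad" and "bad" average differently only by one vector, whereas a recurrent encoder can represent the negation structure.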
A Recurrent Neural Network Approach to Rear Vehicle Detection Which Considered State Dependency
Kayichirou Inagaki
2003-08-01
Experimental vision-based detection often fails when the acquired image quality is reduced by changing optical environments. In addition, the shape of vehicles in images taken from vision sensors changes as the vehicle approaches. Vehicle detection methods are required to perform successfully under these conditions. However, conventional methods do not cope well with rapidly varying brightness conditions. We suggest a new detection method that compensates for these conditions in monocular vision-based vehicle detection. The suggested method employs a Recurrent Neural Network (RNN), which has been applied to spatiotemporal processing. The RNN is able to respond to consecutive scenes involving the target vehicle and can track the movements of the target through the effect of past network states. The suggested method is particularly beneficial in environments with sudden, extreme variations such as bright sunlight and shade. Finally, we demonstrate the effectiveness of the state-dependent RNN-based method by comparing its detection results with those of a Multi-Layered Perceptron (MLP).
Using Layer Recurrent Neural Network to Generate Pseudo Random Number Sequences
Veena Desai
2012-03-01
Pseudo Random Numbers (PRNs) are required for many cryptographic applications. This paper proposes a new method for generating PRNs using a Layer Recurrent Neural Network (LRNN). The proposed technique generates PRNs from the weight matrix obtained from the layer weights of the LRNN. The LRNN random number generator (RNG) uses a short keyword as a seed and generates a long sequence as a pseudo-random sequence. The number of bits generated in the PRN sequence depends on the number of neurons in the input layer of the LRNN. The generated PRN sequence changes with a change in the training function of the LRNN. The sequences generated are a function of the keyword, the initial state of the network, and the training function. In our implementation, the PRN sequences have been generated using three training functions: (1) Scaled Gradient Descent, (2) Levenberg-Marquardt (TRAINLM), and (3) TRAINBFG. The generated sequences are tested for randomness using the ENT and NIST test suites. The ENT test can be applied to sequences of small size. NIST has 16 tests for random numbers. The LRNN-generated PRNs pass 11 tests, show no observations for 4 tests, and fail 1 test when subjected to NIST. This paper presents the test results for random number sequences ranging from 25 bits to 1000 bits, generated using the LRNN.
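The keyword-to-bit-sequence flow can be sketched as follows. This toy replaces the paper's LRNN training step with a keyword-seeded weight draw, so the "layer weights" here are only stand-ins, and thresholding the fractional part is an assumed bit-extraction rule, not the authors' exact one.

```python
import hashlib
import numpy as np

def prn_bits(keyword, n_bits):
    """Toy sketch: a seed keyword determines 'layer weights'; one bit per weight.

    The weight matrix here is drawn from a keyword-seeded RNG as a stand-in for
    the trained LRNN layer weights described in the paper."""
    seed = int.from_bytes(hashlib.sha256(keyword.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=n_bits)          # stand-in for trained layer weights
    frac = np.abs(weights) % 1.0               # fractional part of each weight
    return (frac >= 0.5).astype(int)           # threshold to a 0/1 sequence

bits = prn_bits("secret", 25)                  # 25-bit sequence, as in the paper's
                                               # smallest reported case
```

The same keyword always reproduces the same sequence, while a different keyword (or, in the paper, a different training function) yields a different one.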
Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.
Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam
2016-01-01
We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and the fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to accurately extract the underlying dynamics present in the time-series expression data. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology has first been applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we have implemented our proposed framework on experimental (in vivo) datasets. Finally, we have investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we have implemented our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with just over 15% deterioration in the worst case.
Denève, Sophie; Duhamel, Jean-René; Pouget, Alexandre
2007-05-23
Several behavioral experiments suggest that the nervous system uses an internal model of the dynamics of the body to implement a close approximation to a Kalman filter. This filter can be used to perform a variety of tasks nearly optimally, such as predicting the sensory consequence of motor action, integrating sensory and body posture signals, and computing motor commands. We propose that the neural implementation of this Kalman filter involves recurrent basis function networks with attractor dynamics, a kind of architecture that can be readily mapped onto cortical circuits. In such networks, the tuning curves to variables such as arm velocity are remarkably noninvariant in the sense that the amplitude and width of the tuning curves of a given neuron can vary greatly depending on other variables such as the position of the arm or the reliability of the sensory feedback. This property could explain some puzzling properties of tuning curves in the motor and premotor cortex, and it leads to several new predictions.
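The Kalman-filter computation that the abstract argues is approximated by recurrent basis-function networks can be shown with a minimal scalar example. All noise parameters and the drifting "arm state" model below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal scalar Kalman filter: predict from the internal model, then update
# by weighing the prediction against sensory feedback according to reliability.
a, q, r = 1.0, 0.01, 0.25      # state transition, process noise, sensory noise
x_hat, p = 0.0, 1.0            # initial estimate and its variance

rng = np.random.default_rng(2)
x = 0.0
errs = []
for t in range(200):
    x = a * x + rng.normal(0.0, np.sqrt(q))    # true state drifts over time
    y = x + rng.normal(0.0, np.sqrt(r))        # noisy sensory observation
    # predict step (internal model of the body's dynamics)
    x_hat, p = a * x_hat, a * a * p + q
    # update step: the Kalman gain k grows when sensory feedback is reliable
    k = p / (p + r)
    x_hat, p = x_hat + k * (y - x_hat), (1.0 - k) * p
    errs.append((x - x_hat) ** 2)
```

The filtered estimate's mean squared error ends up well below the raw sensory noise variance, which is the near-optimal integration the behavioral experiments point to.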
Hu, Jin; Wang, Jun
2015-06-01
In recent years, complex-valued recurrent neural networks have been developed and analysed in depth, given their good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is necessary to use a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given.
Neural mediators of the intergenerational transmission of family aggression.
Saxbe, Darby; Del Piero, Larissa Borofsky; Immordino-Yang, Mary Helen; Kaplan, Jonas Todd; Margolin, Gayla
2016-05-01
Youth exposed to family aggression may become more aggressive themselves, but the mechanisms of intergenerational transmission are understudied. In a longitudinal study, we found that adolescents' reduced neural activation when rating their parents' emotions, assessed via magnetic resonance imaging, mediated the association between parents' past aggression and adolescents' subsequent aggressive behavior toward parents. A subsample of 21 youth, drawn from the larger study, underwent magnetic resonance imaging scanning proximate to the second of two assessments of the family environment. At Time 1 (when youth were on average 15.51 years old) we measured parents' aggressive marital and parent-child conflict behaviors, and at Time 2 (≈2 years later), we measured youth aggression directed toward parents. Youth from more aggressive families showed relatively less activation to parent stimuli in brain areas associated with salience and socioemotional processing, including the insula and limbic structures. Activation patterns in these same areas were also associated with youths' subsequent parent-directed aggression. The association between parents' aggression and youths' subsequent parent-directed aggression was statistically mediated by signal change coefficients in the insula, right amygdala, thalamus, and putamen. These signal change coefficients were also positively associated with scores on a mentalizing measure. Hypoarousal of the emotional brain to family stimuli may support the intergenerational transmission of family aggression.
[No author listed]
2012-01-01
In this paper, a class of bidirectional associative memory (BAM) recurrent neural networks with delays is studied. By a fixed-point theorem and a Lyapunov functional, some new sufficient conditions for the existence, uniqueness, and global exponential stability of the almost periodic solutions are established. These conditions are easy to verify, and our results complement previously known results.
Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K
2016-01-01
The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, they underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with Recurrent Neural Network. Cuckoo Search is used to search the best combination of regulators. Moreover, Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process.
A QoS Provisioning Recurrent Neural Network based Call Admission Control for beyond 3G Networks
Ramesh Babu H. S.
2010-03-01
Call admission control (CAC) is one of the Radio Resource Management (RRM) techniques that plays an influential role in ensuring the desired Quality of Service (QoS) to the users and applications in next generation networks. This paper proposes a fuzzy neural approach for making the call admission control decision in multi-class-traffic-based Next Generation Wireless Networks (NGWN). The proposed Fuzzy Neural Call Admission Control (FNCAC) scheme is an integrated CAC module that combines the linguistic control capabilities of a fuzzy logic controller and the learning capabilities of neural networks. The model is based on recurrent radial basis function networks, whose better learning and adaptability can be used to develop an intelligent system to handle incoming traffic in a heterogeneous network environment. The simulation results are optimistic and indicate that the proposed FNCAC algorithm performs better than the other two methods, with minimal call-blocking probability.
Kordmahalleh, Mina Moradi; Sefidmazgi, Mohammad Gorji; Harrison, Scott H; Homaifar, Abdollah
2017-01-01
The modeling of genetic interactions within a cell is crucial for a basic understanding of physiology and for applied areas such as drug design. Interactions in gene regulatory networks (GRNs) include effects of transcription factors, repressors, small metabolites, and microRNA species. In addition, the effects of regulatory interactions are not always simultaneous, but can occur after a finite time delay, or as a combined outcome of simultaneous and time delayed interactions. Powerful biotechnologies have been rapidly and successfully measuring levels of genetic expression to illuminate different states of biological systems. This has led to an ensuing challenge to improve the identification of specific regulatory mechanisms through regulatory network reconstructions. Solutions to this challenge will ultimately help to spur forward efforts based on the usage of regulatory network reconstructions in systems biology applications. We have developed a hierarchical recurrent neural network (HRNN) that identifies time-delayed gene interactions using time-course data. A customized genetic algorithm (GA) was used to optimize hierarchical connectivity of regulatory genes and a target gene. The proposed design provides a non-fully connected network with the flexibility of using recurrent connections inside the network. These features and the non-linearity of the HRNN facilitate the process of identifying temporal patterns of a GRN. Our HRNN method was implemented with the Python language. It was first evaluated on simulated data representing linear and nonlinear time-delayed gene-gene interaction models across a range of network sizes and variances of noise. We then further demonstrated the capability of our method in reconstructing GRNs of the Saccharomyces cerevisiae synthetic network for in vivo benchmarking of reverse-engineering and modeling approaches (IRMA). We compared the performance of our method to TD-ARACNE, HCC-CLINDE, TSNI and ebdbNet across different network
Ou, Zhishuo; Stankiewicz, Paweł; Xia, Zhilian; Breman, Amy M; Dawson, Brian; Wiszniewska, Joanna; Szafranski, Przemyslaw; Cooper, M Lance; Rao, Mitchell; Shao, Lina; South, Sarah T; Coleman, Karlene; Fernhoff, Paul M; Deray, Marcel J; Rosengren, Sally; Roeder, Elizabeth R; Enciso, Victoria B; Chinault, A Craig; Patel, Ankita; Kang, Sung-Hae L; Shaw, Chad A; Lupski, James R; Cheung, Sau W
2011-01-01
Four unrelated families with the same unbalanced translocation der(4)t(4;11)(p16.2;p15.4) were analyzed. Both of the breakpoint regions in 4p16.2 and 11p15.4 were narrowed to large ∼359-kb and ∼215-kb low-copy repeat (LCR) clusters, respectively, by aCGH and SNP array analyses. DNA sequencing enabled mapping the breakpoints of one translocation to 24 bp within interchromosomal paralogous LCRs of ∼130 kb in length and 94.7% DNA sequence identity located in olfactory receptor gene clusters, indicating nonallelic homologous recombination (NAHR) as the mechanism for translocation formation. To investigate the potential involvement of interchromosomal LCRs in recurrent chromosomal translocation formation, we performed computational genome-wide analyses and identified 1143 interchromosomal LCR substrate pairs, >5 kb in size and sharing >94% sequence identity that can potentially mediate chromosomal translocations. Additional evidence for interchromosomal NAHR mediated translocation formation was provided by sequencing the breakpoints of another recurrent translocation, der(8)t(8;12)(p23.1;p13.31). The NAHR sites were mapped within 55 bp in ∼7.8-kb paralogous subunits of 95.3% sequence identity located in the ∼579-kb (chr 8) and ∼287-kb (chr 12) LCR clusters. We demonstrate that NAHR mediates recurrent constitutional translocations t(4;11) and t(8;12) and potentially many other interchromosomal translocations throughout the human genome. Furthermore, we provide a computationally determined genome-wide "recurrent translocation map."
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language–behavior relationships and the temporal patterns of interaction. Here, “internal dynamics” refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human’s linguistic instruction. After learning, the network formed an attractor structure representing both language–behavior relationships and the task’s temporal pattern in its internal dynamics. In these dynamics, language–behavior mapping was achieved by a branching structure, repetition of the human’s instruction and the robot’s behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases. PMID:27471463
Kreider, J.F.; Curtiss, P.; Dodier, R.; Krarti, M. [Univ. of Colorado, Boulder, CO (United States); Claridge, D.E.; Haberl, J.S. [Texas A and M Univ., College Station, TX (United States). Dept. of Mechanical Engineering
1995-11-01
Following several successful applications of feedforward neural networks (NNs) to the building energy prediction problem, a more difficult problem has been addressed recently: namely, the prediction of building energy consumption well into the future without knowledge of immediately past energy consumption. This paper reports results of a recent study of six months of hourly data recorded at the Zachry Engineering Center (ZEC) in College Station, Texas. An early study demonstrated the success of NNs used as predictors for hourly consumption of electricity, chilled water and hot water for the ZEC. Relatively simple networks with fewer than a dozen inputs were able to predict these three hourly, whole-building energy end uses to within errors of 5-10% RMS, the difference depending on the specifics of energy type and time of year. These predictions were made for selected future months given network training data of between one and three past months. Inputs to these networks included measured energy consumption for one or two immediately past hours. Such data are available, for example, if one is trying to conduct hourly diagnostics on heating, ventilating and air conditioning (HVAC) systems in commercial buildings. The success of this study prompted a second study of a more difficult problem. In this case, the goal was to predict energy consumption into the future without knowledge of consumption of the various energies for the immediate past. Such a prediction is of value when estimating what a building, retrofitted with energy conservation features, would have consumed had it not been retrofitted. This prediction can be compared to actual consumption to estimate the savings, if any, that accrue due to the installation of the energy conservation subsystems or components. Because one is predicting for several months, not for one hour, into the future, the problem is more difficult. Results presented show that recurrent NNs can be used for this prediction task.
Cheron, Guy; Cebolla, Ana Maria; Bengoetxea, Ana; Leurs, Françoise; Dan, Bernard
2007-03-06
Triphasic electromyographic (EMG) patterns with a sequence of activity in agonist (AG1), antagonist (ANT) and again in agonist (AG2) muscles are characteristic of ballistic movements. They have been studied in terms of rectangular pulse-width or pulse-height modulation. In order to take into account the complexity of the EMG signal within the bursts, we used a dynamic recurrent neural network (DRNN) for the identification of this pattern in subjects performing fast elbow flexion movements. Biceps and triceps EMGs were fed to all 35 fully-connected hidden units of the DRNN for mapping onto elbow angular acceleration signals. DRNN training was supervised, involving learning rule adaptations of synaptic weights and time constants of each unit. We demonstrated that the DRNN is able to perfectly reproduce the acceleration profile of the ballistic movements. Then we tested the physiological plausibility of all the networks that reached an error level below 0.001 by selectively increasing the amplitude of each burst of the triphasic pattern and evaluating the effects on the simulated accelerating profile. Nineteen percent of these simulations reproduced the physiological action classically attributed to the 3 EMG bursts: AG1 increase showed an increase of the first accelerating pulse, ANT an increase of the braking pulse and AG2 an increase of the clamping pulse. These networks also recognized the physiological function of the time interval between AG1 and ANT, reproducing the linear relationship between time interval and movement amplitude. This task-dynamics recognition has implications for the development of DRNN as diagnostic tools and prosthetic controllers.
Learning a Transferable Change Rule from a Recurrent Neural Network for Land Cover Change Detection
Haobo Lyu
2016-06-01
When exploited in remote sensing analysis, a reliable change rule with transfer ability can detect changes accurately and be applied widely. However, in practice, the complexity of land cover changes makes it difficult to use only one change rule or change feature learned from a given multi-temporal dataset to detect any other new target images without applying other learning processes. In this study, we consider the design of an efficient change rule having transferability to detect both binary and multi-class changes. The proposed method relies on an improved Long Short-Term Memory (LSTM) model to acquire and record the change information of long-term sequence remote sensing data. In particular, a core memory cell is utilized to learn the change rule from the information concerning binary changes or multi-class changes. Three gates are utilized to control the input, output and update of the LSTM model for optimization. In addition, the learned rule can be applied to detect changes and transfer the change rule from one learned image to another new target multi-temporal image. In this study, binary experiments, transfer experiments and multi-class change experiments are exploited to demonstrate the superiority of our method. Three contributions of this work can be summarized as follows: (1) the proposed method can learn an effective change rule to provide reliable change information for multi-temporal images; (2) the learned change rule has good transferability for detecting changes in new target images without any extra learning process, provided the new target images have a multi-spectral distribution similar to that of the training images; and (3) to the authors’ best knowledge, this is the first time that deep learning in recurrent neural networks is exploited for change detection. In addition, under the framework of the proposed method, changes can be detected under both binary detection and multi-class change detection.
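The "core memory cell" and "three gates" the abstract refers to can be made concrete with the standard LSTM cell update. This is a hedged sketch of the textbook equations, not the paper's improved variant or its change-rule layers; scalar states and the illustrative weights and toy pixel sequence below are assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    # W maps gate name -> (weight on input x, weight on hidden h, bias)
    def gate(name, squash):
        wx, wh, b = W[name]
        return squash(wx * x + wh * h + b)
    i = gate("input", sigmoid)    # input gate: how much new information enters
    f = gate("forget", sigmoid)   # forget gate: how much old cell state is kept
    o = gate("output", sigmoid)   # output gate: how much cell state is exposed
    g = gate("cand", math.tanh)   # candidate update
    c_new = f * c + i * g         # memory cell accumulates change information
    h_new = o * math.tanh(c_new)
    return h_new, c_new

# illustrative weights and a toy multi-temporal pixel sequence (assumptions)
W = {"input": (1.0, 0.5, 0.0), "forget": (0.5, 0.5, 1.0),
     "output": (1.0, 0.0, 0.0), "cand": (1.0, 0.5, 0.0)}
h, c = 0.0, 0.0
for x in [0.2, -0.4, 0.7]:
    h, c = lstm_step(x, h, c, W)
```

Because the cell state `c` is updated additively under gate control, information about earlier images in the sequence can persist across many steps, which is what makes such a model a plausible carrier of a transferable change rule.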
Bengoetxea, Ana; Leurs, Françoise; Hoellinger, Thomas; Cebolla, Ana M; Dan, Bernard; McIntyre, Joseph; Cheron, Guy
2014-01-01
In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by PCA in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three different fundamental modules, one regulating the co-activation of all muscles over the time span of the movement and two others eliciting patterns of reciprocal activation operating in orthogonal directions.
Electronic realisation of recurrent neural network for solving simultaneous linear equations
Wang, J.
1992-02-01
An electronic neural network for solving simultaneous linear equations is presented. The proposed electronic neural network is able to generate real-time solutions to large-scale problems. The operating characteristics of an op-amp-based neural network are demonstrated via an illustrative example.
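The abstract does not give the circuit equations, so as a hedged illustration here is one standard recurrent-network formulation for solving A x = b: let the network state x evolve under the gradient flow of the energy E(x) = 0.5 * ||A x - b||^2, i.e. dx/dt = -eta * A^T (A x - b), integrated below by forward Euler in pure Python. The step size and iteration count are illustrative assumptions.

```python
def solve_linear_recurrent(A, b, eta=0.01, steps=20000):
    """Relax the network state x toward the solution of A x = b."""
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        # one Euler step along the gradient flow -A^T r
        g = [sum(A[i][j] * r[i] for i in range(n)) for j in range(n)]
        x = [x[j] - eta * g[j] for j in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = solve_linear_recurrent(A, b)      # converges to [1/11, 7/11]
```

In an analog op-amp realisation the same dynamics run in continuous time, which is what gives such networks their real-time character for large problems.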
Meuleman, T; van Beelen, E; Kaaja, R J; van Lith, J M M; Claas, F H J; Bloemenkamp, K W M
2016-01-01
HLA-C is the only polymorphic classical HLA I antigen expressed on trophoblast cells. It is known that higher incidence of C4d deposition on trophoblast cells is present in women with recurrent miscarriage. C4d is a footprint of antibody-mediated classical complement activation. Therefore, this stud
Skeith, Leslie; Rodger, Marc
2017-03-01
Placenta-mediated pregnancy complications, such as pre-eclampsia, placental abruption, birth of a small-for-gestational age infant and late pregnancy loss, are common and carry significant morbidity and mortality. The etiology of placenta-mediated pregnancy complications is likely multifactorial and may include abnormal coagulation activation of the maternal-fetal interface. The use of antepartum low-molecular-weight heparin (LMWH) prophylaxis to prevent recurrent placenta-mediated pregnancy complications has become common practice despite limited and conflicting evidence to support its use. This paper reviews the evidence, including recently published data from an individual patient level meta-analysis, which challenges the role of LMWH in preventing recurrent placenta-mediated pregnancy complications. Incorporating this recent evidence, we recommend against the use of LMWH to prevent recurrent placenta-mediated pregnancy complications in women with and without inherited thrombophilia.
Dasgupta, Sakyasingha; Goldschmidt, Dennis; Wörgötter, Florentin; Manoonpong, Poramate
2015-01-01
Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanism thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here, an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps, leg damage adaptations, as well as climbing over high obstacles. Furthermore, we demonstrate that the newly developed recurrent network based approach to online forward models outperforms the adaptive neuron forward models
S. N. Naikwad
2009-01-01
A focused time-lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear operation where the reaction is exothermic. A review of the literature shows that process control of CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes temporal relationships in the input-output mappings, a time-lagged recurrent neural network is particularly suited for the identification task. The standard back-propagation algorithm with a momentum term is used in this model. The various parameters, such as the number of processing elements, number of hidden layers, training and testing percentage, learning rule, and transfer function in the hidden and output layers, are investigated on the basis of performance measures such as MSE, NMSE, and the correlation coefficient on the test data set. Finally, the effects of different norms are tested along with variation in the gamma memory filter. It is demonstrated that the dynamic NN model has a remarkable system identification capability for the problems considered in this paper. Thus an FTLR NN with gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is the major contribution of this paper.
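The gamma memory filter mentioned here is a standard structure: a cascade of leaky integrators g_k[t] = (1 - mu) * g_k[t-1] + mu * g_{k-1}[t-1] with g_0[t] equal to the raw input, where mu trades memory depth against resolution. The sketch below shows only this input memory, not the full FTLR network; the tap count and mu value are illustrative assumptions.

```python
def gamma_memory(signal, taps=3, mu=0.5):
    """Run a gamma memory filter bank over a scalar input sequence."""
    g = [0.0] * (taps + 1)            # g[0] holds the current raw input
    out = []
    for x in signal:
        prev = g[:]                   # filter states from the previous step
        g[0] = x
        for k in range(1, taps + 1):
            g[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        out.append(g[1:])             # the delayed taps fed to the network
    return out

resp = gamma_memory([1.0, 0.0, 0.0, 0.0])   # impulse response of the memory
```

Each tap responds to the impulse later and more diffusely than the last, which is how the network sees a smeared, trainable window of the process history rather than fixed delays.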
Margarita Gutova
2017-03-01
Despite improved survival for children with newly diagnosed neuroblastoma (NB), recurrent disease is a significant problem, with treatment options limited by anti-tumor efficacy, patient drug tolerance, and cumulative toxicity. We previously demonstrated that neural stem cells (NSCs) expressing a modified rabbit carboxylesterase (rCE) can distribute to metastatic NB tumor foci in multiple organs in mice and convert the prodrug irinotecan (CPT-11) to the 1,000-fold more toxic topoisomerase-1 inhibitor SN-38, resulting in significant therapeutic efficacy. We sought to extend these studies by using a clinically relevant NSC line expressing a modified human CE (hCE1m6-NSCs) to establish proof of concept and identify an intravenous dose and treatment schedule that gave maximal efficacy. Human-derived NB cell lines were significantly more sensitive to treatment with hCE1m6-NSCs and irinotecan as compared with drug alone. This was supported by pharmacokinetic studies in subcutaneous NB mouse models demonstrating tumor-specific conversion of irinotecan to SN-38. Furthermore, NB-bearing mice that received repeat treatment with intravenous hCE1m6-NSCs and irinotecan showed significantly lower tumor burden (1.4-fold, p = 0.0093) and increased long-term survival compared with mice treated with drug alone. These studies support the continued development of NSC-mediated gene therapy for improved clinical outcome in NB patients.
Samarasinghe, S; Ling, H
2017-02-04
In this paper, we show how to extend our previously proposed novel continuous-time Recurrent Neural Network (RNN) approach, which retains the advantage of continuous dynamics offered by Ordinary Differential Equations (ODE) while enabling parameter estimation through adaptation, to larger signalling networks using a modular approach. Specifically, the signalling network is decomposed into several sub-models based on important temporal events in the network. Each sub-model is represented by the proposed RNN and trained using data generated from the corresponding ODE model. Trained sub-models are assembled into a whole-system RNN which is then subjected to systems dynamics and sensitivity analyses. The concept is illustrated by application to the G1/S transition in the cell cycle using the Iwamoto et al. (2008) ODE model. We decomposed the G1/S network into 3 sub-models: (i) E2F transcription factor release; (ii) E2F and CycE positive feedback loop for elevating cyclin levels; and (iii) E2F and CycA negative feedback to degrade E2F. The trained sub-models accurately represented system dynamics and parameters were in good agreement with the ODE model. The whole-system RNN, however, revealed a couple of parameters contributing to compounding errors due to feedback, and required refinement of sub-model 2. These related to the reversible reaction between CycE/CDK2 and p27, its inhibitor. The revised whole-system RNN model very accurately matched the dynamics of the ODE system. Local sensitivity analysis of the whole-system model further revealed the most dominant influence of the above two parameters in perturbing the G1/S transition, giving support to a recent hypothesis that the release of inhibitor p27 from the Cyc/CDK complex triggers cell cycle stage transition. To make the model useful in a practical setting, we modified each RNN sub-model with a time relay switch to facilitate larger-interval input data (≈20 min) (original model used data for 30 s or less) and retrained them that produced
Ling, Hong; Samarasinghe, Sandhya; Kulasiri, Don
2013-12-01
Understanding the control of cellular networks consisting of gene and protein interactions and their emergent properties is a central activity of Systems Biology research. For this, continuous, discrete, hybrid, and stochastic methods have been proposed. Currently, the most common approach to modelling accurate temporal dynamics of networks is ordinary differential equations (ODE). However, critical limitations of ODE models are difficulty in kinetic parameter estimation and numerical solution of a large number of equations, making them more suited to smaller systems. In this article, we introduce a novel recurrent artificial neural network (RNN) that addresses above limitations and produces a continuous model that easily estimates parameters from data, can handle a large number of molecular interactions and quantifies temporal dynamics and emergent systems properties. This RNN is based on a system of ODEs representing molecular interactions in a signalling network. Each neuron represents concentration change of one molecule represented by an ODE. Weights of the RNN correspond to kinetic parameters in the system and can be adjusted incrementally during network training. The method is applied to the p53-Mdm2 oscillation system - a crucial component of the DNA damage response pathways activated by a damage signal. Simulation results indicate that the proposed RNN can successfully represent the behaviour of the p53-Mdm2 oscillation system and solve the parameter estimation problem with high accuracy. Furthermore, we presented a modified form of the RNN that estimates parameters and captures systems dynamics from sparse data collected over relatively large time steps. We also investigate the robustness of the p53-Mdm2 system using the trained RNN under various levels of parameter perturbation to gain a greater understanding of the control of the p53-Mdm2 system. Its outcomes on robustness are consistent with the current biological knowledge of this system. As more
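The article's core idea, one neuron per molecular species with weights playing the role of kinetic parameters, can be sketched with a generic continuous-time RNN, where each state obeys tau_i * dx_i/dt = -x_i + sum_j w_ij * sigma(x_j) + I_i. The forward-Euler integrator below is a hedged illustration; the two-species weights are an illustrative activator-repressor toy, not the fitted p53-Mdm2 parameters.

```python
import math

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

def simulate_ctrnn(W, tau, I, x0, dt=0.01, steps=5000):
    """Integrate tau_i dx_i/dt = -x_i + sum_j W[i][j]*sigma(x_j) + I_i."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        dx = [(-x[i] + sum(W[i][j] * sigma(x[j]) for j in range(n)) + I[i])
              / tau[i] for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]  # forward Euler step
    return x

# toy pair: species 0 activates species 1, species 1 represses species 0
W = [[0.0, -2.0],
     [2.0,  0.0]]
x = simulate_ctrnn(W, tau=[1.0, 1.0], I=[1.0, 0.0], x0=[0.0, 0.0])
```

Training such a model amounts to adjusting the entries of W (the kinetic parameters) so the integrated trajectories match measured concentration time courses, which is the parameter-estimation advantage the article describes over hand-fitting ODE constants.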
Mohammadzadeh, Ardashir; Ghaemi, Sehraneh
2015-09-01
This paper proposes a novel approach for training of proposed recurrent hierarchical interval type-2 fuzzy neural networks (RHT2FNN) based on the square-root cubature Kalman filters (SCKF). The SCKF algorithm is used to adjust the premise part of the type-2 FNN, the weights of defuzzification, and the feedback weights. Recurrence in the proposed network arises from feeding the output of each membership function back to itself. The proposed RHT2FNN is employed in the sliding mode control scheme for the synchronization of chaotic systems. Unknown functions in the sliding mode control approach are estimated by the RHT2FNN. Another application of the proposed RHT2FNN is the identification of dynamic nonlinear systems. The effectiveness of the proposed network and its learning algorithm is verified by several simulation examples. Furthermore, the universal approximation property of RHT2FNNs is also shown.
Zeng, Zhigang; Wang, Jun
2008-12-01
This paper presents a design method for synthesizing associative memories based on discrete-time recurrent neural networks. The proposed procedure enables both hetero- and autoassociative memories to be synthesized with high storage capacity and assured global asymptotic stability. The stored patterns are retrieved by feeding probes via external inputs rather than initial conditions. As typical representatives, discrete-time cellular neural networks (CNNs) designed with space-invariant cloning templates are examined in detail. In particular, it is shown that the procedure herein can determine the input matrix of any CNN based on a space-invariant cloning template which involves only a few design parameters. Two specific examples and many experimental results are included to demonstrate the characteristics and performance of the designed associative memories.
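For context, a hedged sketch of the classic discrete-time autoassociative memory such synthesis methods build on: Hebbian (outer-product) storage of bipolar patterns plus sign-threshold recall dynamics. This is deliberately not the paper's procedure, which retrieves patterns via external input probes rather than initial conditions; the pattern below is an illustrative example.

```python
def store(patterns):
    """Hebbian outer-product storage of bipolar (+1/-1) patterns."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                      # no self-connections
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, probe, iters=10):
    """Iterate the sign-threshold dynamics from a probe state."""
    x = list(probe)
    n = len(x)
    for _ in range(iters):
        x = [1 if sum(W[i][j] * x[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return x

stored = [1, 1, -1, -1, 1, -1, 1, -1]
W = store([stored])
noisy = list(stored)
noisy[0] = -noisy[0]                  # corrupt one bit of the probe
recovered = recall(W, noisy)          # settles back onto the stored pattern
```

The synthesis approach of the paper differs precisely in how the probe enters the dynamics (as a persistent external input), which is what lets it guarantee global asymptotic stability rather than mere local attraction.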
Ching-Hung Lee
2011-01-01
This paper proposes a new type of fuzzy neural system, denoted IT2RFNS-A (interval type-2 recurrent fuzzy neural system with asymmetric membership function), for nonlinear system identification and control. To enhance the performance and approximation ability, the triangular asymmetric fuzzy membership function (AFMF) and a TSK-type consequent part are adopted for the IT2RFNS-A. The gradient information of the IT2RFNS-A is not easy to obtain due to the asymmetric membership functions and interval-valued sets. The corresponding stable learning rule is derived by the simultaneous perturbation stochastic approximation (SPSA) algorithm, which guarantees the convergence and stability of the closed-loop system. Simulation and comparison results for chaotic system identification and the control of Chua's chaotic circuit illustrate the feasibility and effectiveness of the proposed method.
Güntürkün, Rüştü
2010-08-01
In this study, Elman recurrent neural networks trained using Resilient Back Propagation have been employed to determine the depth of anesthesia in the continuation stage of the anesthesia and to estimate the amount of medicine to be applied at that moment. From 30 patients, 57 distinct EEG recordings have been collected prior to and during anaesthesia of different levels. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. Prediction has been made by means of the ANN. For training and testing the ANN, the previous anaesthesia amount, total power/normal power and total power/previous have been used as inputs. The system has been able to produce correct purposeful responses in 95% of cases on average. The method is also computationally fast, and acceptable real-time clinical performance has been obtained.
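The defining feature of an Elman network is the context layer that feeds the hidden activations back as extra inputs at the next time step. A minimal forward pass in that style, with illustrative random weights rather than trained ones:

```python
import numpy as np

def elman_forward(x_seq, W_in, W_ctx, W_out, b_h, b_o):
    """Forward pass of a three-layer Elman network: at each step the
    hidden layer receives the current input plus its own previous
    activation via context units; sigmoid is used in the hidden and
    output layers, as in the model above."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = np.zeros(W_ctx.shape[0])      # context units start at zero
    outputs = []
    for x in x_seq:
        h = sigmoid(W_in @ x + W_ctx @ h + b_h)
        outputs.append(sigmoid(W_out @ h + b_o))
    return np.array(outputs)

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 1          # e.g. power ratios in, dose out
W_in = rng.normal(0.0, 0.5, (n_hid, n_in))
W_ctx = rng.normal(0.0, 0.5, (n_hid, n_hid))
W_out = rng.normal(0.0, 0.5, (n_out, n_hid))
b_h, b_o = np.zeros(n_hid), np.zeros(n_out)

seq = rng.normal(0.0, 1.0, (10, n_in))   # ten steps of EEG-derived features
y = elman_forward(seq, W_in, W_ctx, W_out, b_h, b_o)  # one output per step
```

Resilient backpropagation would then adjust these weights from the sign of the gradients, which is the training scheme the abstract names.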
Capaday, Charles; Ethier, C; Brizzi, L
2009-01-01
Capaday C, Ethier C, Brizzi L, Sik A, van Vreeswijk C, Gingras D. On the nature of the intrinsic connectivity of the cat motor cortex: evidence for a recurrent neural network topology. J Neurophysiol 102: 2131-2141, 2009. First published July 22, 2009; doi: 10.1152/jn.91319.2008. The details... and functional significance of the intrinsic horizontal connections between neurons in the motor cortex (MCx) remain to be clarified. To further elucidate the nature of this intracortical connectivity pattern, experiments were done on the MCx of three cats. The anterograde tracer biocytin was ejected...
Beritelli, Francesco; Capizzi, Giacomo; Lo Sciuto, Grazia; Napoli, Christian; Tramontana, Emiliano; Woźniak, Marcin
2015-09-01
Solving the channel equalization problem in communication systems is based on adaptive filtering algorithms. Today, Mobile Agents (MAs) with Recurrent Neural Networks (RNNs) can also be adopted for effective interference reduction in modern wireless communication systems (WCSs). In this paper, MAs with RNNs are proposed as a novel computing algorithm, called MAs-RNNs, for reducing interference in WCSs by performing adaptive channel equalization. We implement this new paradigm for interference reduction. Simulation results and evaluations demonstrate the effectiveness of this approach and show that better transmission performance in wireless communication networks can be achieved by using the MAs-RNNs based adaptive filtering algorithm.
M.Syed Ali
2011-01-01
In this paper, the global stability of Takagi-Sugeno (TS) uncertain stochastic fuzzy recurrent neural networks with discrete and distributed time-varying delays (TSUSFRNNs) is considered. A novel LMI-based stability criterion is obtained by using Lyapunov functional theory to guarantee the asymptotic stability of TSUSFRNNs. The proposed stability conditions are demonstrated through numerical examples. Furthermore, the supplementary requirement that the time derivative of the time-varying delays must be smaller than one is removed. Comparison results show that the proposed method guarantees a wider stability region than other methods available in the existing literature.
Dynamic recurrent Elman neural network based on immune clonal selection algorithm
Wang, Limin; Han, Xuming; Li, Ming; Sun, Haibo; Li, Qingzhao
2012-04-01
Because the immune clonal selection algorithm combined with a dynamic threshold strategy has an advantage in optimizing multiple parameters, a novel approach in which this combination is used to optimize a dynamic recurrent Elman neural network is proposed in this paper. The concrete structure of the recurrent neural network, the connection weights, the initial values of the context units, etc. are obtained automatically through evolutionary training and learning. The approach thus realizes the construction and design of dynamic recurrent Elman neural networks and provides a new, effective way of optimizing dynamic recurrent neural networks with the immune clonal selection algorithm.
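A minimal clonal-selection loop of the kind that could evolve such network parameters looks like the sketch below; the population size, cloning scheme, and mutation schedule are illustrative assumptions, not the paper's exact algorithm (which additionally uses a dynamic threshold strategy):

```python
import numpy as np

def clonal_selection(fitness, dim, pop=20, gens=150, seed=0):
    """Minimal clonal selection: keep the best half of the antibody
    population, clone each survivor, and mutate clones with a strength
    that grows with fitness rank (better antibodies mutate less)."""
    rng = np.random.default_rng(seed)
    ab = rng.normal(0.0, 1.0, (pop, dim))   # antibodies = parameter vectors
    for _ in range(gens):
        f = np.array([fitness(a) for a in ab])
        elite = ab[np.argsort(f)[:pop // 2]]          # lower fitness = better
        scale = np.linspace(0.05, 0.5, pop // 2)[:, None]
        clones = elite + rng.normal(0.0, 1.0, elite.shape) * scale
        ab = np.vstack([elite, clones])               # elitist replacement
    f = np.array([fitness(a) for a in ab])
    return ab[np.argmin(f)]

# Toy use: recover a 4-dimensional "weight vector" by minimizing squared
# error, standing in for evolving Elman weights and context-unit values.
best = clonal_selection(lambda w: np.sum((w - 1.5) ** 2), dim=4)
```

In the paper's setting, the fitness function would be the Elman network's prediction error, so each antibody encodes a full set of network parameters.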
A recurrent translocation is mediated by homologous recombination between HERV-H elements
Hermetz Karen E
2012-01-01
Background: Chromosome rearrangements are caused by many mutational mechanisms; of these, recurrent rearrangements can be particularly informative for teasing apart DNA sequence-specific factors. Some recurrent translocations are mediated by homologous recombination between large blocks of segmental duplications on different chromosomes. Here we describe a recurrent unbalanced translocation caused by recombination between shorter homologous regions on chromosomes 4 and 18 in two unrelated children with intellectual disability. Results: Array CGH resolved the breakpoints of the 6.97-Megabase (Mb) loss of 18q and the 7.30-Mb gain of 4q. Sequencing across the translocation breakpoints revealed that both translocations occurred between 92%-identical human endogenous retrovirus (HERV) elements in the same orientation on chromosomes 4 and 18. In addition, we find sequence variation in the chromosome 4 HERV that makes one allele more like the chromosome 18 HERV. Conclusions: Homologous recombination between HERVs on the same chromosome is known to cause chromosome deletions, but this is the first report of interchromosomal HERV-HERV recombination leading to a translocation. It is possible that normal sequence variation in substrates of non-allelic homologous recombination (NAHR) affects the alignment of recombining segments and influences the propensity to chromosome rearrangement.
Mr. M. Karthik
2014-05-01
The Artificial Neural Network (ANN) has become a significant modeling tool for predicting the performance of complex systems, providing an appropriate mapping between input and output variables without requiring any empirical relationship, owing to its intrinsic properties. This paper focuses on modeling a Proton Exchange Membrane (PEM) Fuel Cell system using Artificial Neural Networks, especially for automotive applications. Three different neural networks, the Static Feed Forward Network (SFFN), the Cascaded Feed Forward Network (CFFN) and the Fully Connected Dynamic Recurrent Network (FCRN), are discussed in this paper for modeling the PEM Fuel Cell system. A numerical analysis is carried out between the three neural network architectures for predicting the output performance of the PEM Fuel Cell. The performance of the proposed networks is evaluated using various error criteria such as Mean Square Error, Mean Absolute Percentage Error, Mean Absolute Error, coefficient of correlation and iteration count. The optimum network with high performance indices (low prediction error and iteration values) can be used as an ancillary model in developing a PEM Fuel Cell powered vehicle system. The development of the fuel cell driven vehicle model also incorporates the modeling of the DC-DC power converter and vehicle dynamics. Finally, the performance of the electric vehicle model is analyzed for two different drive cycles, M-NEDC and M-UDDS.
Chon, K H; Hoyer, D; Armoundas, A A;
1999-01-01
part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...
Sharma, Richa; Kumar, Vikas; Gaur, Prerna; Mittal, A P
2016-05-01
Being a complex, non-linear and coupled system, the robotic manipulator cannot be effectively controlled using a classical proportional-integral-derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, the gains of the PID controller should be conservatively tuned and should adapt to process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes which act as proportional, integral and derivative nodes. The gains of the mix locally recurrent neural network based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than being assumed randomly. A sequential learning based least squares algorithm is then investigated for the on-line adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against plant parameter uncertainties and external disturbances for both links of a two-link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using the Lyapunov stability criterion. A performance comparison is carried out among the MLRNNPID controller, a CSA optimized NNPID (OPTNNPID) controller and a CSA optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller.
Chih-Hong Lin
2016-06-01
A permanent magnet (PM) synchronous generator system driven by a wind turbine (WT) and connected with the smart grid via an AC-DC converter and a DC-AC converter is controlled by a novel recurrent Chebyshev neural network (NN) and amended particle swarm optimization (PSO) to regulate the output power and output voltage of the two power converters in this study. Because a PM synchronous generator system driven by a WT is an unknown, non-linear and time-varying dynamic system, an on-line trained novel recurrent Chebyshev NN control system is developed to regulate the DC voltage of the AC-DC converter and the AC voltage of the DC-AC converter connected with the smart grid. Furthermore, the variable learning rate of the novel recurrent Chebyshev NN is regulated according to a discrete-type Lyapunov function to improve the control performance and enhance the convergence speed. Finally, some experimental results are shown to verify the effectiveness of the proposed control method for a WT driving a PM synchronous generator system in a smart grid.
Wang, Jiang; Han, Ruixue; Wei, Xilei; Qin, Yingmei; Yu, Haitao; Deng, Bin
2016-12-01
Reliable signal propagation across distributed brain areas provides the basis for neural circuit function. Modeling studies on cortical circuits have shown that multilayered feed-forward networks (FFNs), if strongly and/or densely connected, can enable robust signal propagation. However, cortical networks are typically neither densely connected nor have strong synapses. This paper investigates under which conditions spiking activity can be propagated reliably across diluted FFNs. Extending previous works, we model each layer as a recurrent sub-network constituting both excitatory (E) and inhibitory (I) neurons and consider the effect of interactions between local excitation and inhibition on signal propagation. It is shown that elevation of cellular excitation-inhibition (EI) balance in the local sub-networks (layers) softens the requirement for dense/strong anatomical connections and thereby promotes weak signal propagation in weakly connected networks. By means of iterated maps, we show how elevated local excitability state compensates for the decreased gain of synchrony transfer function that is due to sparse long-range connectivity. Finally, we report that modulations of EI balance and background activity provide a mechanism for selectively gating and routing neural signal. Our results highlight the essential role of intrinsic network states in neural computation.
Zio, Enrico; Pedroni, Nicola; Broggi, Matteo; Golea, Lucia Roxana [Polytechnic of Milan, Milan (Italy)
2009-12-15
In this paper, an infinite impulse response locally recurrent neural network (IIR-LRNN) is employed for modelling the dynamics of the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS). The network is trained by recursive back-propagation (RBP) and its ability in estimating transients is tested under various conditions. The results demonstrate the robustness of the locally recurrent scheme in the reconstruction of complex nonlinear dynamic relationships.
Zeng, Zhigang; Wang, Jun
2007-08-01
In this letter, some sufficient conditions are obtained to guarantee that recurrent neural networks with linear saturation activation functions and time-varying delays have multiple equilibria located in the saturation region and on the boundaries of the saturation region. These results on pattern characterization are used to analyze and design autoassociative memories that are directly based on the parameters of the neural networks. Moreover, a formula for the number of spurious equilibria is also derived. Four design procedures for recurrent neural networks with linear saturation activation functions and time-varying delays are developed based on the stability results. Two of these procedures allow the neural network to be capable of learning and forgetting. Finally, simulation results demonstrate the validity and characteristics of the proposed approach.
Prefrontally driven downregulation of neural synchrony mediates goal-directed forgetting.
Hanslmayr, Simon; Volberg, Gregor; Wimber, Maria; Oehler, Nora; Staudigl, Tobias; Hartmann, Thomas; Raabe, Markus; Greenlee, Mark W; Bäuml, Karl-Heinz T
2012-10-17
Neural synchronization between distant cell assemblies is crucial for the formation of new memories. To date, however, it remains unclear whether higher-order brain regions can adaptively regulate neural synchrony to control memory processing in humans. We explored this question in two experiments using a voluntary forgetting task. In the first experiment, we simultaneously recorded electroencephalography along with fMRI. The results show that a reduction in neural synchrony goes hand-in-hand with a BOLD signal increase in the left dorsolateral prefrontal cortex (dlPFC) when participants are cued to forget previously studied information. In the second experiment, we directly stimulated the left dlPFC with repetitive transcranial magnetic stimulation during the same task, and show that such stimulation specifically boosts the behavioral forgetting effect and induces a reduction in neural synchrony. These results suggest that prefrontally driven downregulation of long-range neural synchronization mediates goal-directed forgetting of long-term memories.
Neural Network Based on Recurrent T-S Fuzzy Model
宋春宁; 刘少东
2013-01-01
Dynamic recursive elements were added to the general T-S fuzzy neural network to propose a recurrent T-S fuzzy neural network. For system identification, an unsupervised clustering algorithm and a dynamic back-propagation algorithm were applied to the parameter training of this recurrent neural network, and the approximation capability of the fuzzy neural network was proved. Comparing the identification results of the two fuzzy neural networks shows that the recurrent T-S fuzzy neural network performs better in nonlinear system identification.
Xie, Jiaheng; Liu, Xiao; Dajun Zeng, Daniel
2017-05-13
Recent years have seen increased worldwide popularity of e-cigarette use. However, the risks of e-cigarettes are underexamined. Most e-cigarette adverse event studies have achieved low detection rates due to limited subject sample sizes in the experiments and surveys. Social media provides a large data repository of consumers' e-cigarette feedback and experiences, which are useful for e-cigarette safety surveillance. However, it is difficult to automatically interpret the informal and nontechnical consumer vocabulary about e-cigarettes in social media. This issue hinders the use of social media content for e-cigarette safety surveillance. Recent developments in deep neural network methods have shown promise for named entity extraction from noisy text. Motivated by these observations, we aimed to design a deep neural network approach to extract e-cigarette safety information in social media. Our deep neural language model utilizes word embedding as the representation of text input and recognizes named entity types with the state-of-the-art Bidirectional Long Short-Term Memory (Bi-LSTM) Recurrent Neural Network. Our Bi-LSTM model achieved the best performance compared to 3 baseline models, with a precision of 94.10%, a recall of 91.80%, and an F-measure of 92.94%. We identified 1591 unique adverse events and 9930 unique e-cigarette components (ie, chemicals, flavors, and devices) from our research testbed. Although the conditional random field baseline model had slightly better precision than our approach, our Bi-LSTM model achieved much higher recall, resulting in the best F-measure. Our method can be generalized to extract medical concepts from social media for other medical applications.
RM-SORN: a reward-modulated self-organizing recurrent neural network.
Aswolinskiy, Witali; Pipa, Gordon
2015-01-01
Neural plasticity plays an important role in learning and memory. Reward-modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of Intrinsic Plasticity (IP) and reward-modulated Spike-Timing-Dependent Plasticity (STDP). IP enables the network to explore possible output sequences and STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules, and rewarding strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning and whether reward-modulated self-organization can explain the amazing capabilities of the brain.
Meckel Gruber syndrome--a single gene cause of recurrent neural tube defects.
de Silva, D; Suriyawansa, D; Mangalika, M; Samarasinghe, D
2001-03-01
Meckel Gruber syndrome (MGS), an autosomal recessive disorder characterised by posterior encephalocoele, multicystic kidneys and post-axial polydactyly should be recognised by obstetricians and paediatricians to counsel parents regarding the 25% recurrence risk. We report a consanguineous family with MGS affecting three infants.
Milos Miljanovic
2012-02-01
The purpose of this paper is to evaluate two different neural network architectures used for solving temporal problems, i.e. time series prediction. The data sets in this project include Mackey-Glass, Sunspots, and the Standard & Poor's 500 stock market index. The study also presents a comparison of the two networks and their performance.
Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.
2015-01-01
The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition and handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. Not only do we explore low-level combination (feature space combination), but we also explore high-level combination (decoding combination) and mid-level combination (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.
Wang Shen-Quan; Feng Jian; Zhao Qing
2012-01-01
In this paper, the problem of delay-distribution-dependent stability is investigated for continuous-time recurrent neural networks (CRNNs) with stochastic delay. Different from the common assumptions on time delays, it is assumed that the probability distribution of the delay taking values in some intervals is known a priori. By making full use of the information concerning the probability distribution of the delay and by using a tighter bounding technique (the reciprocally convex combination method), less conservative sufficient conditions for asymptotic mean-square stability are derived in terms of linear matrix inequalities (LMIs). Two numerical examples show that our results are better than the existing ones.
Narges Talebi Motlagh
2016-07-01
Gold price prediction is a complex and severely difficult nonlinear problem. Real-time price prediction, a cornerstone of many economic models, is one of the most challenging tasks for economists, since the context of the financial agents is often dynamic. Since direction prediction is important in financial time series, in this work an innovative Recurrent Neural Network (RNN) is utilized to obtain accurate Two-Step-Ahead (2SA) predictions and to improve forecasting performance for the gold market. The training method of the proposed network combines an adaptive learning rate algorithm with a linear combination of Directional Symmetry (DS) in the training phase. The proposed method has been developed for both online and offline applications. Simulations and experiments on daily gold market data and on the benchmark time series of Lorenz and Rossler show the high efficiency of the proposed method, which can forecast future gold prices precisely.
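The Directional Symmetry measure used in the training phase simply scores how often the predicted series moves in the same direction as the actual one. A minimal version, assuming the conventional definition (the paper's linear combination of DS with the loss is not reproduced):

```python
import numpy as np

def directional_symmetry(actual, predicted):
    """Directional Symmetry (DS): percentage of time steps at which the
    predicted series moves in the same direction as the actual series."""
    same = np.sign(np.diff(actual)) == np.sign(np.diff(predicted))
    return 100.0 * np.mean(same)

actual = np.array([100.0, 102.0, 101.0, 103.0, 104.0])
predicted = np.array([100.0, 101.0, 102.0, 104.0, 103.0])
ds = directional_symmetry(actual, predicted)  # 2 of 4 moves match -> 50.0
```

Including a DS term in training rewards getting the direction of the next move right even when the magnitude is off, which is what matters for the two-step-ahead trading setting described above.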
Radhika, Thirunavukkarasu; Nagamani, Gnaneswaran
2016-01-01
In this paper, based on the knowledge of memristor-based recurrent neural networks (MRNNs), a model of stochastic MRNNs with discrete and distributed delays is established. In real nervous systems and in the implementation of very large-scale integration (VLSI) circuits, noise is unavoidable, which leads to the stochastic model of the MRNNs. In this model, the delay interval is decomposed into two subintervals by using a tuning parameter α with 0 < α < 1, and the stochastic MRNNs with discrete and distributed delays are treated in the sense of Filippov solutions. Using stochastic analysis theory and Itô's formula for stochastic differential equations, we establish sufficient conditions for the dissipativity criterion. The dissipativity and passivity conditions are presented in terms of linear matrix inequalities, which can be easily solved using Matlab tools. Finally, three numerical examples with simulations are presented to demonstrate the effectiveness of the theoretical results.
Numerical discrimination is mediated by neural coding variation.
Prather, Richard W
2014-12-01
One foundation of numerical cognition is that discrimination accuracy depends on the proportional difference between compared values, closely following the Weber-Fechner discrimination law. Performance in non-symbolic numerical discrimination is used to calculate an individual Weber fraction, a measure of the relative acuity of the approximate number system (ANS). Individual Weber fractions are linked to symbolic arithmetic skills and to long-term educational and economic outcomes. The present findings suggest that numerical discrimination performance depends on both the proportional difference and the absolute value, deviating from the Weber-Fechner law. The effect of absolute value is predicted via a computational model based on the neural correlates of numerical perception; specifically, the neural coding "noise" varies across corresponding numerosities. A computational model using firing rate variation based on neural data demonstrates a significant interaction between ratio difference and absolute value in predicting numerical discriminability. We find that both behavioral and computational data show an interaction between ratio difference and absolute value on numerical discrimination accuracy. These results suggest a reexamination of the mechanisms involved in non-symbolic numerical discrimination, of how researchers may measure individual performance, and of what outcomes performance may predict.
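Under the classic Weber-Fechner account with scalar variability, accuracy depends only on the ratio of the two numerosities, which is exactly the prediction the paper's data deviate from. A minimal model showing the ratio-only prediction (the Weber fraction w = 0.2 is an arbitrary illustrative value):

```python
from math import erf, sqrt

def p_correct(n1, n2, w):
    """Probability of correctly judging n2 > n1 under scalar variability:
    each numerosity is represented by a Gaussian with SD = w * n, so the
    difference estimate has SD = w * sqrt(n1^2 + n2^2) and accuracy
    depends only on the ratio n2/n1 (the Weber-Fechner account)."""
    sigma = w * sqrt(n1 ** 2 + n2 ** 2)
    z = (n2 - n1) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z

# Same 3:4 ratio at small and large magnitudes gives identical accuracy
# under this model -- the prediction the paper's data deviate from.
acc_small = p_correct(6, 8, w=0.2)
acc_large = p_correct(30, 40, w=0.2)
```

The paper's neural-coding model breaks this invariance by letting the firing-rate noise vary across numerosities, producing the observed interaction between ratio and absolute value.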
N-cadherin-mediated cell adhesion restricts cell proliferation in the dorsal neural tube.
Chalasani, Kavita; Brewster, Rachel M
2011-05-01
Neural progenitors are organized as a pseudostratified epithelium held together by adherens junctions (AJs), multiprotein complexes composed of cadherins and α- and β-catenin. Catenins are known to control neural progenitor division; however, it is not known whether they function in this capacity as cadherin binding partners, as there is little evidence that cadherins themselves regulate neural proliferation. We show here that zebrafish N-cadherin (N-cad) restricts cell proliferation in the dorsal region of the neural tube by regulating cell-cycle length. We further reveal that N-cad couples cell-cycle exit and differentiation, as a fraction of neurons are mitotic in N-cad mutants. Enhanced proliferation in N-cad mutants is mediated by ligand-independent activation of Hedgehog (Hh) signaling, possibly caused by defective ciliogenesis. Furthermore, depletion of Hh signaling results in the loss of junctional markers. We therefore propose that N-cad restricts the response of dorsal neural progenitors to Hh and that Hh signaling limits the range of its own activity by promoting AJ assembly. Taken together, these observations emphasize a key role for N-cad-mediated adhesion in controlling neural progenitor proliferation. In addition, these findings are the first to demonstrate a requirement for cadherins in synchronizing cell-cycle exit and differentiation and a reciprocal interaction between AJs and Hh signaling.
Quang, Daniel; Xie, Xiaohui
2016-06-20
Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ.
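The front end of such hybrid models is easy to illustrate: DNA is one-hot encoded and a convolutional filter is slid along it to score motif matches, and those scores would then feed the recurrent layer. A minimal sketch with a single hand-built filter (DanQ itself learns many filters and stacks a Bi-LSTM on top):

```python
import numpy as np

def one_hot_dna(seq):
    """One-hot encode a DNA string into a 4 x L array (A, C, G, T rows),
    the standard input representation for models like DanQ."""
    lookup = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    x = np.zeros((4, len(seq)))
    for i, base in enumerate(seq):
        x[lookup[base], i] = 1.0
    return x

def conv_scan(x, motif):
    """Slide a 4 x k motif filter along the sequence, as the convolution
    layer does before the recurrent layer (single illustrative filter)."""
    k = motif.shape[1]
    return np.array([np.sum(x[:, i:i + k] * motif)
                     for i in range(x.shape[1] - k + 1)])

x = one_hot_dna("ACGTACGTTACG")
motif = one_hot_dna("TACG")    # filter that scores exact matches to 'TACG'
scores = conv_scan(x, motif)   # one score per sequence position
best = int(np.argmax(scores))  # position of the strongest motif hit
```

In the full model, many such filter outputs per position form the sequence that the Bi-LSTM consumes to capture long-range dependencies between motifs.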
Hayashi, Hideaki; Shima, Keisuke; Shibanoki, Taro; Kurita, Yuichi; Tsuji, Toshio
2013-01-01
This paper outlines a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower-dimensional space using a set of orthogonal transformations, and the calculation of posterior probabilities based on a continuous-density hidden Markov model that incorporates a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into a neural network so that the parameters can be obtained appropriately as network coefficients according to a backpropagation-through-time-based training algorithm. The network is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. In the experiments conducted during the study, the validity of the proposed network was demonstrated for EEG signals.
Hongjie Li
2012-01-01
The paper investigates the state estimation problem for a class of recurrent neural networks with sampled-data information and time-varying delays. The main purpose is to estimate the neuron states through sampled output measurements. A novel event-triggered scheme is proposed, which can lead to a significant reduction of the information communication burden in the network; the feature of this scheme is that whether or not the sampled data should be transmitted is determined by the current sampled data and the error between the current sampled data and the latest transmitted data. By using a delayed-input approach, the error dynamics are formulated as a dynamic system with two different time-varying delays. Based on the Lyapunov-Krasovskii functional approach, a state estimator for the considered neural networks can be obtained by solving some linear matrix inequalities, which can be easily handled using standard numerical software. Finally, a numerical example is provided to show the effectiveness of the proposed event-triggered scheme.
Chang, Fi-John; Chen, Pin-An; Lu, Ying-Ray; Huang, Eric; Chang, Kai-Yao
2014-09-01
Urban flood control is a crucial task, which commonly faces fast rising peak flows resulting from urbanization. To mitigate future flood damages, it is imperative to construct an on-line accurate model to forecast inundation levels during flood periods. The Yu-Cheng Pumping Station located in Taipei City of Taiwan is selected as the study area. Firstly, historical hydrologic data are fully explored by statistical techniques to identify the time span of rainfall affecting the rise of the water level in the floodwater storage pond (FSP) at the pumping station. Secondly, effective factors (rainfall stations) that significantly affect the FSP water level are extracted by the Gamma test (GT). Thirdly, one static artificial neural network (ANN) (backpropagation neural network-BPNN) and two dynamic ANNs (Elman neural network-Elman NN; nonlinear autoregressive network with exogenous inputs-NARX network) are used to construct multi-step-ahead FSP water level forecast models through two scenarios, in which scenario I adopts rainfall and FSP water level data as model inputs while scenario II adopts only rainfall data as model inputs. The results demonstrate that the GT can efficiently identify the effective rainfall stations as important inputs to the three ANNs; the recurrent connections from the output layer (NARX network) impose more effects on the output than those of the hidden layer (Elman NN) do; and the NARX network performs the best in real-time forecasting. The NARX network produces coefficients of efficiency within 0.9-0.7 (scenario I) and 0.7-0.5 (scenario II) in the testing stages for 10-60-min-ahead forecasts accordingly. This study suggests that the proposed NARX models can be valuable and beneficial to the government authority for urban flood control.
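The input construction that distinguishes a NARX network from a static ANN is the concatenation of lagged exogenous inputs (rainfall) with lagged outputs (FSP water level), as in scenario I above. A minimal sketch with illustrative lag counts and toy data:

```python
import numpy as np

def narx_inputs(rain, level, n_lags=3):
    """Build NARX-style training pairs: each input concatenates lagged
    exogenous rainfall with lagged water-level outputs (scenario I);
    the target is the next water level. The lag count is illustrative."""
    X, y = [], []
    for t in range(n_lags, len(level)):
        X.append(np.concatenate([rain[t - n_lags:t], level[t - n_lags:t]]))
        y.append(level[t])
    return np.array(X), np.array(y)

# Toy series: a small storm raising the floodwater storage pond level.
rain = np.array([0.0, 2.0, 5.0, 3.0, 1.0, 0.0, 0.0])
level = np.array([1.0, 1.1, 1.5, 1.9, 1.8, 1.6, 1.4])
X, y = narx_inputs(rain, level)   # regressor matrix and targets
```

For multi-step-ahead forecasting, the network's own predictions replace the lagged level measurements in the recurrent (output-feedback) loop, which is the feature the abstract credits for the NARX network's edge over the Elman NN.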
Building energy use prediction and system identification using recurrent neural networks
Kreider, J.F.; Curtiss, P.; Dodier, R.; Krarti, M. [Univ. of Colorado, Boulder, CO (United States)]; Claridge, D.E.; Haberl, J.S. [Texas A&M Univ., College Station, TX (United States). Dept. of Mechanical Engineering]
1995-08-01
Following several successful applications of feedforward neural networks (NNs) to the building energy prediction problem, a more difficult problem has been addressed recently: the prediction of building energy consumption well into the future without knowledge of immediately past energy consumption. This paper reports results from a recent study of six months of hourly data recorded at the Zachry Engineering Center (ZEC) in College Station, TX. Also reported are results on finding the R and C values for buildings from networks trained on building data.
Schema generation in recurrent neural nets for intercepting a moving target.
Fleischer, Andreas G
2010-06-01
The grasping of a moving object requires the development of a motor strategy to anticipate the trajectory of the target and to compute an optimal course of interception. During the performance of perception-action cycles, a preprogrammed prototypical movement trajectory, a motor schema, may highly reduce the control load. Subjects were asked to hit a target that was moving along a circular path by means of a cursor. Randomized initial target positions and velocities were detected in the periphery of the eyes, resulting in a saccade toward the target. Even when the target disappeared, the eyes followed the target's anticipated course. The Gestalt of the trajectories was dependent on target velocity. The prediction capability of the motor schema was investigated by varying the visibility range of cursor and target. Motor schemata were determined to be of limited precision, and therefore visual feedback was continuously required to intercept the moving target. To intercept a target, the motor schema caused the hand to aim ahead and to adapt to the target trajectory. The control of cursor velocity determined the point of interception. From a modeling point of view, a neural network was developed that allowed the implementation of a motor schema interacting with feedback control in an iterative manner. The neural net of the Wilson type consists of an excitation-diffusion layer allowing the generation of a moving bubble. This activation bubble runs down an eye-centered motor schema and causes a planar arm model to move toward the target. A bubble provides local integration and straightening of the trajectory during repetitive moves. The schema adapts to task demands by learning and serves as forward controller. On the basis of these model considerations the principal problem of embedding motor schemata in generalized control strategies is discussed.
Familiarity and priming are mediated by overlapping neural substrates.
Thakral, Preston P; Kensinger, Elizabeth A; Slotnick, Scott D
2016-02-01
Explicit memory is widely assumed to reflect the conscious processes of recollection and familiarity. However, familiarity has been hypothesized to be supported by nonconscious processing. In the present functional magnetic resonance imaging (fMRI) experiment, we assessed whether familiarity is mediated by some of the same regions that mediate repetition priming, a form of nonconscious memory. Participants completed an implicit (indirect) memory task and an explicit (direct) memory task during fMRI. During phase I of each task, participants viewed novel abstract shapes with internal colored oriented lines and judged whether each shape was relatively "pleasant" or "unpleasant". During phase II of the indirect memory task, repeated (old) and new shapes were presented and participants made the same judgments. During phase II of the direct memory task, a surprise recognition test was given in which old and new shapes were presented and participants made "remember", "know", or "new" responses. Activity associated with priming was isolated by comparing novel versus repeated shapes during phase II of the indirect memory task. Activity associated with familiarity was isolated by comparing accurate "know" responses versus misses during phase II of the direct memory task. Priming and familiarity were associated with common activity within the superior parietal lobule and motor cortex, which we attribute to shared attentional and motor processing, respectively. The present fMRI results support the hypothesis that familiarity is supported by some of the same processes that support implicit memory.
Wang, Baohua; Song, Ning; Yu, Tong; Zhou, Lianya; Zhang, Helin; Duan, Lin; He, Wenshu; Zhu, Yihua; Bai, Yunfei; Zhu, Miao
2014-01-01
In this study, we conducted a meta-analysis on high-throughput gene expression data to identify TNF-α-mediated genes implicated in lung cancer. We first investigated the gene expression profiles of two independent TNF-α/TNFR KO murine models. The EGF receptor signaling pathway was the top pathway associated with genes mediated by TNF-α. After matching the TNF-α-mediated mouse genes to their human orthologs, we compared the expression patterns of the TNF-α-mediated genes in normal and tumor lung tissues obtained from humans. Based on the TNF-α-mediated genes that were dysregulated in lung tumors, we developed a prognostic gene signature that effectively predicted recurrence-free survival in lung cancer in two validation cohorts. Resampling tests suggested that the prognostic power of the gene signature was not by chance, and multivariate analysis suggested that this gene signature was independent of the traditional clinical factors and enhanced the identification of lung cancer patients at greater risk for recurrence.
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
Miller, Daniel; Salo, Paul; Hart, David A; Leonard, Catherine; Mammoto, Takeo; Bray, Robert C
2010-01-01
Chronic inflammation associated with osteoarthritis (OA) alters normal responses and modifies the functionality of the articular vasculature. Altered responsiveness of the vasculature may be due to excessive neural activity associated with chronic pain and inflammation, or from the production of inflammatory mediators which induce vasodilation. Using laser speckle perfusion imaging (LSPI), blood flow to the medial collateral ligament (MCL) of adult rabbits was measured in denervated anterior cruciate ligament (ACL)-transected knees (n = 6) and compared to unoperated control (n = 6) and 6-week ACL-transected knees (n = 6). Phenylephrine and neuropeptide Y were applied to the MCL vasculature in topical boluses of 100 microL (dose range 10^-14 to 10^-8 mol and 10^-14 to 10^-9 mol, respectively). Denervation diminished vasoconstrictive responsiveness to phenylephrine compared to both control and ACL-transected knees. Denervation minimally enhanced vascular responses to neuropeptide Y (NPY) compared to ACL deficiency alone, which nevertheless remained significantly diminished from control responses. To evaluate the potential role of inflammatory dilators in the diminished contractile responses, phenylephrine was coadministered with histamine, substance P, and prostaglandin E2 (PGE2). High-dose histamine, and low-dose substance P and PGE2, were able to inhibit contractile responses in the MCL of control knees. Excessive neural input does not mediate diminished vasoconstrictive responses in the ACL-transected knee; inflammatory mediators may play a role in the deficient vascular responsiveness of the ACL-transected knee.
Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo
2015-09-01
This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
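The LM weight update itself, once a Jacobian of the residuals is available (which is what FATT supplies for an RNN), can be sketched on a toy curve-fitting problem. The model, data, and names below are illustrative assumptions, not the paper's grid-connected converter controller.

```python
import numpy as np

def lm_step(w, x, y, mu):
    """One Levenberg-Marquardt update for the linear model y_hat = w0*x + w1."""
    e = (w[0] * x + w[1]) - y              # residual vector
    J = np.stack([x, np.ones_like(x)], axis=1)  # Jacobian of e w.r.t. w
    H = J.T @ J + mu * np.eye(2)           # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ e)  # LM step: (J'J + mu*I)^-1 J'e

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                          # data generated by w = [2, 1]
w = np.zeros(2)
for _ in range(50):
    w = lm_step(w, x, y, mu=1e-3)
```

For an RNN the only change is that `e` and `J` come from unrolling the network through time (the FATT pass); the damped solve is identical.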
YuKang Jia
2017-01-01
Long Short-Term Memory (LSTM) is a kind of Recurrent Neural Network (RNN) for time series, which has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM that further optimizes the speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper we combine them with Connectionist Temporal Classification (CTC) to study continuous piano note recognition for robotics. Based on the Beijing Forestry University music library, we conduct experiments to show the recognition rates and numbers of iterations of LSTM with a single layer, LSTMP with a single layer, and Deep LSTM (DLSTM, LSTM with multiple layers). As a result, the single-layer LSTMP performs much better than the single-layer LSTM in both time and recognition rate; that is, LSTMP has fewer parameters and therefore reduces the training time, and, moreover, benefiting from the projection layer, LSTMP has better performance, too. The best recognition rate of LSTMP is 99.8%. As for DLSTM, the recognition rate can reach 100% because of the effectiveness of the deep structure, but compared with the single-layer LSTMP, DLSTM needs more training time.
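A single LSTMP step can be sketched in plain NumPy to show where the projection layer sits: a standard LSTM cell is followed by a matrix that maps the hidden output down to a smaller recurrent state, which is what shrinks the recurrent weight matrices. The sizes, initialization, and gate ordering here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstmp_step(x, h, c, W, R, b, P):
    """One LSTMP step: an LSTM cell followed by a projection P of the output."""
    z = W @ x + R @ h + b                  # stacked gate pre-activations
    H = W.shape[0] // 4                    # hidden size (4 gates stacked)
    i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell state update
    m = sigmoid(o) * np.tanh(c)            # full hidden output (size H)
    h = P @ m                              # projection: recurrent state is smaller
    return h, c

rng = np.random.default_rng(0)
nx, H, p = 3, 8, 4                         # input, hidden, projection sizes
W = rng.normal(0, 0.1, (4 * H, nx))
R = rng.normal(0, 0.1, (4 * H, p))         # recurrent weights act on projected h
b = np.zeros(4 * H)
P = rng.normal(0, 0.1, (p, H))
h, c = np.zeros(p), np.zeros(H)
h, c = lstmp_step(rng.normal(size=nx), h, c, W, R, b, P)
```

Note that `R` has shape `(4H, p)` rather than `(4H, H)`; with `p < H` this is exactly the parameter saving that makes LSTMP faster to train than plain LSTM.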
C. R. Hema
2008-01-01
A brain machine interface provides a communication channel between the human brain and an external device. Brain interfaces are studied to provide rehabilitation to patients with neurodegenerative diseases; such patients lose all communication pathways except for their sensory and cognitive functions. One possible rehabilitation method for these patients is to provide a brain machine interface (BMI) for communication; the BMI uses the electrical activity of the brain detected by scalp EEG electrodes. Classification of EEG signals extracted during mental tasks is a technique for designing a BMI. In this paper a BMI design using five mental tasks from two subjects is studied, with a combination of two tasks studied per subject. An Elman recurrent neural network is proposed for classification of the EEG signals. Two feature extraction algorithms, using overlapped and non-overlapped signal segments, are analyzed. Principal component analysis is used for extracting features from the EEG signal segments. Classification performance of overlapping EEG signal segments is observed to be better in terms of average classification, with a range of 78.5% to 100%, while the non-overlapping EEG signal segments show better classification in terms of maximum classifications.
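The two segmentation schemes plus PCA feature extraction described above can be sketched as follows. Window and step sizes, the number of components, and the synthetic signal are illustrative assumptions, not the paper's EEG settings.

```python
import numpy as np

def segment(signal, win, step):
    """Split a 1-D signal into fixed-length windows; step < win gives overlap."""
    return np.stack([signal[s:s + win]
                     for s in range(0, len(signal) - win + 1, step)])

def pca_features(segments, k):
    """Project each segment onto the top-k principal components."""
    X = segments - segments.mean(axis=0)        # center the data
    # principal directions via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                         # k-dimensional feature vectors

sig = np.sin(np.linspace(0, 20, 200))           # stand-in for an EEG channel
overlapped = segment(sig, win=50, step=25)      # 50% overlap -> more segments
non_overlapped = segment(sig, win=50, step=50)  # disjoint segments
feats = pca_features(overlapped, k=3)
```

Overlapping doubles the number of training segments from the same recording, which is one plausible reason the overlapped scheme yields the better average classification reported above.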
Jimeno Yepes, Antonio
2017-09-01
Word sense disambiguation helps identify the proper sense of ambiguous words in text. With large terminologies such as the UMLS Metathesaurus, ambiguities appear and highly effective disambiguation methods are required. Supervised learning methods are used as one of the approaches to perform disambiguation. Features extracted from the context of an ambiguous word are used to identify the proper sense of such a word. The type of features has an impact on machine learning methods and thus affects disambiguation performance. In this work, we have evaluated several types of features derived from the context of the ambiguous word, and we have also explored more global features derived from MEDLINE using word embeddings. Results show that word embeddings improve the performance of more traditional features and also allow using recurrent neural network classifiers based on Long Short-Term Memory (LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets a new state-of-the-art performance with a macro accuracy of 95.97 in the MSH WSD data set. Copyright © 2017 Elsevier Inc. All rights reserved.
Hwang, Chih-Lyang; Jan, Chau
2016-02-01
At the beginning, an approximate nonlinear autoregressive moving average (NARMA) model is employed to represent a class of multivariable nonlinear dynamic systems with time-varying delay. It is known that the disadvantages of robust control for the NARMA model are as follows: 1) suitable control parameters for larger time delay are more sensitive to achieving desirable performance; 2) it only deals with bounded uncertainty; and 3) the nominal NARMA model must be learned in advance. Due to the dynamic feature of the NARMA model, a recurrent neural network (RNN) is applied online to learn it. However, the system performance deteriorates due to poor learning of larger variations of the system vector functions. In this situation, a simple network is employed to compensate for the upper bound of the residue caused by the linear parameterization of the approximation error of the RNN. An e-modification learning law with a projection for the weight matrix is applied to guarantee its boundedness without persistent excitation. Under suitable conditions, semiglobally ultimately bounded tracking with boundedness of the estimated weight matrix is obtained by the proposed RNN-based multivariable adaptive control. Finally, simulations are presented to verify the effectiveness and robustness of the proposed control.
You, Seung Han [Hyundai Motor Company, Seoul (Korea, Republic of); Hahn, Jin Oh [University of Alberta, Edmonton (Canada)
2012-05-15
By virtue of their ease of operation compared with their conventional manual counterparts, automatic transmissions are commonly used as the automotive power transmission control system in today's passenger cars. In accordance with this trend, research efforts on closed-loop automatic transmission controls have been extensively carried out to improve ride quality and fuel economy. State-of-the-art power transmission control algorithms may have limitations in performance because they rely on the steady-state characteristics of the hydraulic actuator rather than fully exploiting its dynamic characteristics. Since the ultimate viability of closed-loop power transmission control is dominated by precise pressure control at the level of the hydraulic actuator, closed-loop control can potentially attain superior efficacy in case the hydraulic actuator can be easily incorporated into model-based observer/controller design. In this paper, we propose to use a recurrent neural network (RNN) to establish a nonlinear empirical model of a cascade hydraulic actuator in a passenger car automatic transmission, which has the potential to be easily incorporated in designing observers and controllers. Experimental analysis is performed to grasp key system characteristics, based on which a nonlinear system identification procedure is carried out. Extensive experimental validation of the established model suggests that it has superb one-step-ahead prediction capability over the appropriate frequency range, making it an attractive approach for model-based observer/controller design applications in automotive systems.
Briñez de León, Juan C.; Restrepo M., Alejandro; Branch, John W.
2016-09-01
Digital photoelasticity is based on image analysis techniques to describe the stress distribution in birefringent materials subjected to mechanical loads. However, optical assemblies for capturing the images, the steps to extract the information, and the ambiguities of the results limit the analysis in zones with stress concentrations. These zones contain stress values that could produce a failure, making important their identification. This paper identifies zones with stress concentration in a sequence of photoelasticity images, which was captured from a circular disc under diametral compression. The capturing process was developed assembling a plane polariscope around the disc, and a digital camera stored the temporal fringe colors generated during the load application. Stress concentration zones were identified modeling the temporal intensities captured by every pixel contained into the sequence. In this case, an Elman artificial recurrent neural network was trained to model the temporal intensities. Pixel positions near to the stress concentration zones trained different network parameters in comparison with pixel positions belonging to zones of lower stress concentration.
Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long
2017-01-01
A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.
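For a single arm without joint limits, the minimum-velocity-norm criterion named above has a closed-form solution, which makes the underlying QP concrete: minimize ||v||^2 subject to J v = r_dot, solved by the pseudoinverse. The paper's RNN solves the unified, constrained QP for both arms iteratively; the Jacobian values below are illustrative assumptions.

```python
import numpy as np

def min_norm_joint_velocities(J, r_dot):
    """Minimum-velocity-norm resolution of redundancy:
    argmin ||v||^2  s.t.  J v = r_dot,  given by v = pinv(J) @ r_dot."""
    return np.linalg.pinv(J) @ r_dot

# Jacobian of a planar 3-link arm at some configuration (illustrative numbers):
# 2 task-space rows (x, y velocity), 3 joint columns -> one redundant DOF.
J = np.array([[-1.2, -0.7, -0.2],
              [ 0.5,  0.3,  0.1]])
r_dot = np.array([0.1, -0.05])            # desired end-effector velocity
v = min_norm_joint_velocities(J, r_dot)   # joint velocities tracking the path
```

Among all joint velocities that realize `r_dot`, this picks the one with the smallest norm, which is precisely the "minimum velocity norm" half of the bi-criteria scheme; the repetitive-motion criterion and joint bounds are what force the iterative RNN solver.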
Chien-Yu Lu
2009-01-01
This paper examines passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with norm-bounded time-varying parameter uncertainties and interval time-varying delay. The activation functions are assumed to be globally Lipschitz continuous. Based on an appropriate type of Lyapunov functional, sufficient passivity conditions for the DRNNs are derived in terms of a family of linear matrix inequalities (LMIs). Two numerical examples are given to illustrate the effectiveness and applicability.
Hu, Huping; Wu, Maoxin
2004-01-01
A novel theory of consciousness is proposed in this paper. We postulate that consciousness is intrinsically connected to quantum spin since the latter is the origin of quantum effects in both the Bohm and Hestenes quantum formalisms and a fundamental quantum process associated with the structure of space-time. That is, spin is the "mind-pixel". The unity of mind is achieved by entanglement of the mind-pixels. Applying these ideas to the particular structures and dynamics of the brain, we theorize that the human brain works as follows: through action potential modulated nuclear spin interactions and paramagnetic O2/NO driven activations, the nuclear spins inside neural membranes and proteins form various entangled quantum states some of which survive decoherence through quantum Zeno effects or in decoherence-free subspaces and then collapse contextually via irreversible and non-computable means producing consciousness and, in turn, the collective spin dynamics associated with said collapses have effects through spin chemistry on classical neural activities thus influencing the neural networks of the brain. Our proposal calls for extension of associative encoding of neural memories to the dynamical structures of neural membranes and proteins. Thus, according to our theory, the nuclear spin ensembles are the "mind-screen" with nuclear spins as its pixels, the neural membranes and proteins are the memory matrices, and the biologically available paramagnetic species such as O2 and NO are pixel-activating agents. Together, they form the neural substrates of consciousness. We also present supporting evidence and make important predictions. We stress that our theory is experimentally verifiable with present technologies. Further, experimental realizations of intra-/inter-molecular nuclear spin coherence and entanglement, macroscopic entanglement of spin ensembles and NMR quantum computation, all in room temperatures, strongly suggest the possibility of a spin-mediated
Liu, Peng; Zeng, Zhigang; Wang, Jun
2016-07-01
This paper addresses multistability for a general class of recurrent neural networks with time-varying delays. Without assuming the linearity or monotonicity of the activation functions, several new sufficient conditions are obtained to ensure the existence of (2K+1)^n equilibrium points, and the exponential stability of (K+1)^n equilibrium points among them, for n-neuron neural networks, where K is a positive integer determined jointly by the type of activation functions and the parameters of the neural network. The obtained results generalize and improve the earlier publications. Furthermore, the attraction basins of these exponentially stable equilibrium points are estimated. It is revealed that the attraction basins of these exponentially stable equilibrium points can be larger than their originally partitioned subsets. Finally, three illustrative numerical examples show the effectiveness of the theoretical results.
GH mediates exercise-dependent activation of SVZ neural precursor cells in aged mice.
Daniel G Blackmore
Here we demonstrate, both in vivo and in vitro, that growth hormone (GH) mediates precursor cell activation in the subventricular zone (SVZ) of the aged (12-month-old) brain following exercise, and that GH signaling stimulates precursor activation to a similar extent to exercise. Our results reveal that both addition of GH in culture and direct intracerebroventricular infusion of GH stimulate neural precursor cells in the aged brain. In contrast, no increase in neurosphere numbers was observed in GH receptor null animals following exercise. Continuous infusion of a GH antagonist into the lateral ventricle of wild-type animals completely abolished the exercise-induced increase in neural precursor cell number. Given that the aged brain does not recover well after injury, we investigated the direct effect of exercise and GH on neural precursor cell activation following irradiation. This revealed that physical exercise as well as infusion of GH promoted repopulation of neural precursor cells in irradiated aged animals. Conversely, infusion of a GH antagonist during exercise prevented recovery of precursor cells in the SVZ following irradiation.
Flournoy, John C; Pfeifer, Jennifer H; Moore, William E; Tackman, Allison M; Masten, Carrie L; Mazziotta, John C; Iacoboni, Marco; Dapretto, Mirella
2016-11-01
Reactivity to others' emotions not only can result in empathic concern (EC), an important motivator of prosocial behavior, but can also result in personal distress (PD), which may hinder prosocial behavior. Examining neural substrates of emotional reactivity may elucidate how EC and PD differentially influence prosocial behavior. Participants (N = 57) provided measures of EC, PD, prosocial behavior, and neural responses to emotional expressions at ages 10 and 13. Initial EC predicted subsequent prosocial behavior. Initial EC and PD predicted subsequent reactivity to emotions in the inferior frontal gyrus (IFG) and inferior parietal lobule, respectively. Activity in the IFG, a region linked to mirror neuron processes, as well as cognitive control and language, mediated the relation between initial EC and subsequent prosocial behavior. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
Güntürkün, Rüştü
2010-08-01
In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm have been used to determine the depth of anesthesia in the continuation stage of anesthesia and to estimate the amount of medicine to be applied at that moment. Feedforward neural networks are also used for comparison. The conjugate gradient algorithm is compared with backpropagation (BP) for training of the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data have been recorded with a Nihon Kohden 9200 brand 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in assembling the recording electrodes. EEG data have been recorded by being sampled once every 2 milliseconds. The artificial neural network has been designed so as to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The network inputs are the values of the power spectral density (PSD) of 10-second EEG segments corresponding to the 1-50 Hz frequency range, and the ratio of the total power of the PSD values of the EEG segment at that moment in the same range to the total of the PSD values of the EEG segment taken prior to the anesthesia.
Mariman, R.; Kremer, S.H.A.; Erk, M. van; Lagerweij, T.; Koning, F.; Nagelkerken, L.
2012-01-01
Background: Host-microbiota interactions in the intestinal mucosa play a major role in intestinal immune homeostasis and control the threshold of local inflammation. The aim of this study was to evaluate the efficacy of probiotics in the recurrent trinitrobenzene sulfonic acid (TNBS)-induced colitis
Recurrent Syncope due to Esophageal Squamous Cell Carcinoma
2011-01-01
Syncope is caused by a wide variety of disorders. Recurrent syncope as a complication of malignancy is uncommon and may be difficult to diagnose and to treat. Primary neck carcinoma or metastases spreading in parapharyngeal and carotid spaces can involve the internal carotid artery and cause neurally mediated syncope with a clinical presentation like carotid sinus syndrome. We report the case of a 76-year-old man who suffered from recurrent syncope due to invasion of the right carotid sinus b...
Neghal Kandiyil
Women are at lower risk of stroke, and appear to benefit less from carotid endarterectomy (CEA) than men. We hypothesised that this is due to more benign carotid disease in women mediating a lower risk of recurrent cerebrovascular events. To test this, we investigated sex differences in the prevalence of MRI-detectable plaque hemorrhage (MRI PH) as an index of plaque instability, and secondly whether MRI PH mediates sex differences in the rate of cerebrovascular recurrence. The prevalence of PH between sexes was analysed in a single-centre pooled cohort of 176 patients with recently symptomatic, significant carotid stenosis (106 severe [≥70%], 70 moderate [50-69%]) who underwent prospective carotid MRI scanning for identification of MRI PH. Further, a meta-analysis of published evidence was undertaken. Recurrent events were noted during clinical follow-up for survival analysis. Women with symptomatic carotid stenosis (≥50%) were less likely to have plaque hemorrhage (PH) than men (46% vs. 70%), with an adjusted OR of 0.23 [95% CI 0.10-0.50, P<0.0001] controlling for other known vascular risk factors. This negative association was only significant for the severe stenosis subgroup (adjusted OR 0.18, 95% CI 0.067-0.50), not the moderate-degree stenosis subgroup. Female sex in this subgroup also predicted a longer time to recurrent cerebral ischemic events (HR 0.38, 95% CI 0.15-0.98, P = 0.045). Further addition of MRI PH or smoking abolished the sex effects, with only MRI PH exerting a direct effect. Meta-analysis confirmed a protective effect of female sex on development of PH: unadjusted OR for presence of PH = 0.54 (95% CI 0.45-0.67, p<0.00001). MRI PH is significantly less prevalent in women. Women with MRI PH and severe stenosis have a similar risk as men for recurrent cerebrovascular events. MRI PH thus allows overcoming the sex bias in selection for CEA.
Ding, Lei; Xiao, Lin; Liao, Bolin; Lu, Rongbo; Peng, Hua
2017-01-01
To obtain the online solution of complex-valued systems of linear equations in the complex domain with higher precision and a higher convergence rate, a new neural network based on the Zhang neural network (ZNN) is investigated in this paper. First, this new neural network for complex-valued systems of linear equations in the complex domain is proposed and theoretically proved to be convergent within finite time. Then, the illustrative results show that the new neural network model has higher precision and a higher convergence rate, as compared with the gradient neural network (GNN) model and the ZNN model. Finally, the proposed method is applied to controlling a robot through the associated complex-valued system of linear equations, and the simulation results verify the effectiveness and superiority of the new neural network for complex-valued systems of linear equations.
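As a point of reference for the comparison above, the gradient neural network (GNN) baseline has simple continuous-time dynamics that can be Euler-discretized for a complex-valued system Ax = b. The gain, step size, iteration count, and example matrix below are illustrative assumptions; the paper's finite-time ZNN model uses different (error-function-driven) dynamics.

```python
import numpy as np

def gnn_solve(A, b, gamma=1.0, dt=0.01, steps=2000):
    """Euler discretization of the GNN dynamics x' = -gamma * A^H (A x - b),
    i.e. gradient descent on ||A x - b||^2 in the complex domain."""
    x = np.zeros(b.shape, dtype=complex)
    for _ in range(steps):
        residual = A @ x - b
        x = x - dt * gamma * (A.conj().T @ residual)
    return x

# small complex-valued system (illustrative numbers)
A = np.array([[2 + 1j, 0.5],
              [0.3j,   1.5 - 0.5j]])
b = np.array([1 + 0j, 1j])
x = gnn_solve(A, b)
```

The GNN converges only asymptotically (geometrically, at a rate set by the smallest eigenvalue of A^H A), which is exactly the limitation the finite-time ZNN design is meant to remove.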
Carey, H.V.; Tien, X.Y.; Wallace, L.J.; Cooke, H.J.
1987-09-01
Muscarinic receptors involved in the secretory response evoked by electrical stimulation of submucosal neurons were investigated in muscle-stripped flat sheets of guinea pig ileum set up in flux chambers. Neural stimulation produced a biphasic increase in short-circuit current due to active chloride secretion. Atropine and 4-diphenylacetoxy-N-methylpiperidine methiodide (4-DAMP) (10^-7 M) were more potent inhibitors of the cholinergic phase of the response than was pirenzepine. Dose-dependent increases in baseline short-circuit current were evoked by carbachol and bethanechol; 4-hydroxy-2-butynyl trimethylammonium chloride (McN A343) produced a much smaller effect. Tetrodotoxin abolished the effects of McN A343 but did not alter the responses to carbachol and bethanechol. McN A343 significantly reduced the cholinergic phase of the neurally evoked response and caused a rightward shift of the carbachol dose-response curve. All muscarinic compounds inhibited [3H]quinuclidinyl benzilate binding to membranes from mucosal scrapings, with a rank order of potency of 4-DAMP > pirenzepine > McN A343 > carbachol > bethanechol. These results suggest that acetylcholine released from submucosal neurons mediates chloride secretion by interacting with muscarinic cholinergic receptors that display a high binding affinity for 4-DAMP. Activation of neural muscarinic receptors makes a relatively small contribution to the overall secretory response.
Ammar, Boudour; Chérif, Farouk; Alimi, Adel M
2012-01-01
This paper is concerned with the existence and uniqueness of pseudo almost-periodic solutions to recurrent delayed neural networks. Several conditions guaranteeing the existence and uniqueness of such solutions are obtained in a suitable convex domain. Furthermore, several methods are applied to establish sufficient criteria for the globally exponential stability of this system. The approaches are based on constructing suitable Lyapunov functionals and the well-known Banach contraction mapping principle. Moreover, the attractivity and exponential stability of the pseudo almost-periodic solution are also considered for the system. A numerical example is given to illustrate the effectiveness of our results.
Li, Xiaodi; Song, Shiji
2013-06-01
In this paper, a class of recurrent neural networks with discrete and continuously distributed delays is considered. Sufficient conditions for the existence, uniqueness, and global exponential stability of a periodic solution are obtained by using contraction mapping theorem and stability theory on impulsive functional differential equations. The proposed method, which differs from the existing results in the literature, shows that network models may admit a periodic solution which is globally exponentially stable via proper impulsive control strategies even if it is originally unstable or divergent. Two numerical examples and their computer simulations are offered to show the effectiveness of our new results.
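The stabilizing effect of impulsive control described above can be illustrated with a scalar caricature: the flow x' = a*x with a > 0 diverges on its own, but periodic impulses x -> mu*x restore exponential decay whenever ln(mu) + a*T < 0. A hedged sketch (parameters illustrative, not from the paper):

```python
def impulsive_sim(a=0.5, mu=0.5, period=1.0, dt=0.001, t_end=20.0, x0=1.0):
    """Scalar caricature of impulsive stabilization.

    The continuous flow x' = a*x is unstable for a > 0, but an impulse
    x -> mu*x applied every `period` seconds stabilizes the trajectory
    whenever ln(mu) + a*period < 0 (here ln(0.5) + 0.5 < 0).
    """
    x, t, next_imp = x0, 0.0, period
    while t < t_end:
        x += dt * a * x          # continuous (unstable) flow, Euler step
        t += dt
        if t >= next_imp:
            x *= mu              # impulsive contraction
            next_imp += period
    return x
```

With the default parameters the impulses win and the state decays; removing them (mu = 1) recovers the bare exponential blow-up.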
CAI Pei-qiang; TANG Xun; LIN Yue-qiu; Oudega Martin; SUN Guang-yun; XU Lin; YANG Yun-kang; ZHOU Tian-hua
2006-01-01
Objective: To explore the feasibility of constructing genetically engineered human neural stem cells (hNSCs), mediated by lentivirus, that express multiple genes, in order to provide a graft source for further studies of spinal cord injury (SCI). Methods: Human neural stem cells from the brain cortex of an aborted human fetus were isolated and cultured, then genetically modified by lentivirus to express both green fluorescent protein (GFP) and rat neurotrophin-3 (NT-3); transgene expression was detected by fluorescence microscopy, a fetal rat dorsal root ganglion assay, and slot blot. Results: Genetically engineered hNSCs were successfully constructed. All of the engineered hNSCs expressed bright green fluorescence under the fluorescence microscope. The conditioned medium of transgenic hNSCs induced flourishing neurite outgrowth from dorsal root ganglia (DRG). The engineered hNSCs expressed high levels of NT-3, detectable by slot blot. Conclusions: Genetically engineered hNSCs mediated by lentivirus can be constructed to successfully express multiple genes.
Hu, Xiaolin; Zhang, Bo
2009-04-01
In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features global convergence property under weak conditions, low structural complexity, and no calculation of matrix inverse. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that by some variable substitution, the proposed network turns out to be an existing model for solving minimax problems. In this sense, it can be also viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all ( k-WTA) network with O(n) complexity is designed, which is characterized by simple structure, global convergence, and capability to deal with some ill cases. Numerical simulations are provided to validate the theoretical results obtained. More importantly, the network design method proposed in this paper has great potential to inspire other competitive inventions along the same line.
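The k-WTA operation mentioned above selects the k largest of n inputs. As a hedged sketch of what such a network computes at steady state (the bisection below is our stand-in for the paper's network dynamics, not its model; distinct inputs assumed):

```python
def k_wta(v, k, iters=200):
    """Threshold-based k-winners-take-all sketch (names ours).

    A k-WTA network settles on a state where the k largest inputs
    output 1 and the rest 0; equivalently, there is a threshold t
    separating winners from losers.  We locate t by bisection,
    assuming the inputs are distinct.
    """
    lo, hi = min(v) - 1.0, max(v) + 1.0
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        if sum(1 for vi in v if vi > t) > k:
            lo = t          # too many winners: threshold must rise
        else:
            hi = t          # k or fewer winners: threshold can fall
    return [1.0 if vi > hi else 0.0 for vi in v]
```

The invariant is that more than k inputs exceed `lo` and at most k exceed `hi`, so `hi` converges onto the gap between the k-th and (k+1)-th largest inputs.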
Chiang, Tung-Sheng; Chiu, Chian-Song
This paper proposes sliding mode control using LMI techniques and an adaptive recurrent fuzzy neural network (RFNN) for a class of uncertain nonlinear time-delay systems. First, a novel TS recurrent fuzzy neural network (TS-RFNN) is developed to provide more flexible and powerful compensation of system uncertainty. Then, TS-RFNN-based sliding mode control is proposed for uncertain time-delay systems. In detail, the sliding surface design is derived to cope with the non-Isidori-Byrnes canonical form of the dynamics, unknown delay time, and mismatched uncertainties. Based on the Lyapunov-Krasovskii method, the asymptotic stability condition of the sliding motion is formulated as a Linear Matrix Inequality (LMI) problem that is independent of the time-varying delay. Furthermore, input coupling uncertainty is also taken into consideration. The overall controlled system achieves asymptotic stability even under poor modeling. The contributions include: i) the asymptotic sliding surface is designed by solving a simple and legible delay-independent LMI; and ii) the TS-RFNN is more realizable (due to fewer fuzzy rules being used). Finally, simulation results demonstrate the validity of the proposed control scheme.
Naikwad, S. N; Dudul, S. V
2009-01-01
.... It is noticed from literature review that process control of CSTR using neuro-fuzzy systems was attempted by many, but optimal neural network model for identification of CSTR process is not yet available...
Svensson, Charlotte; Ceder, Jens; Iglesias Gato, Diego
2014-01-01
The androgen receptor (AR) is a key regulator of prostate tumorgenesis through actions that are not fully understood. We identified the repressor element (RE)-1 silencing transcription factor (REST) as a mediator of AR actions on gene repression. Chromatin immunoprecipitation showed that AR binds...
Recurrent networks for wave forecasting
Mandal, S.; Prabaharan, N.
, merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper presents an application of the Artificial Neural Network, namely Backpropagation Recurrent Neural Network (BRNN) with rprop update algorithm for wave forecasting...
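Rprop, the update algorithm named above, adapts a per-weight step size from the sign of successive gradients rather than their magnitude. A hedged sketch of one update (this is the iRprop- variant, which skips the weight update after a sign change; names and constants are ours):

```python
def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One iRprop- update over a flat list of weights.

    Step sizes grow when successive gradients agree in sign and shrink
    when they disagree; only the sign of the gradient moves the weight.
    Returns updated (weights, steps, stored gradients).
    """
    new_w, new_step, new_prev = [], [], []
    for wi, g, pg, s in zip(w, grad, prev_grad, step):
        if g * pg > 0:
            s = min(s * eta_plus, step_max)    # same sign: accelerate
        elif g * pg < 0:
            s = max(s * eta_minus, step_min)   # sign flip: back off
            g = 0.0                            # iRprop-: skip this update
        if g > 0:
            wi -= s
        elif g < 0:
            wi += s
        new_w.append(wi); new_step.append(s); new_prev.append(g)
    return new_w, new_step, new_prev
```

Repeatedly applying the step to a simple quadratic loss drives the weight toward its minimum even though no learning rate is tuned.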
Casement, Melynda D; Keenan, Kate E; Hipwell, Alison E; Guyer, Amanda E; Forbes, Erika E
2016-02-01
Emerging evidence suggests that insomnia may disrupt reward-related brain function, a potentially important factor in the development of depressive disorder. Adolescence may be a period during which such disruption is especially problematic, given the rise in the incidence of insomnia and the ongoing development of neural systems that support reward processing. The present study uses longitudinal data to test the hypothesis that disruption of neural reward processing is a mechanism by which insomnia symptoms, including nocturnal insomnia symptoms (NIS) and nonrestorative sleep (NRS), contribute to depressive symptoms in adolescent girls. Participants were 123 adolescent girls and their caregivers from an ongoing longitudinal study of precursors to depression across adolescent development. NIS and NRS were assessed annually from ages 9 to 13 years. Girls completed a monetary reward task during a functional MRI scan at age 16 years. Depressive symptoms were assessed at ages 16 and 17 years. Multivariable regression tested the prospective associations between NIS and NRS, neural response during reward anticipation, and the mean number of depressive symptoms (omitting sleep problems). NRS, but not NIS, during early adolescence was positively associated with late adolescent dorsal medial prefrontal cortex (dmPFC) response to reward anticipation and depressive symptoms. The dmPFC response mediated the relationship between early adolescent NRS and late adolescent depressive symptoms. These results suggest that NRS may contribute to depression by disrupting reward processing via altered activity in a region of prefrontal cortex involved in affective control. The results also support the mechanistic differentiation of NIS and NRS. © 2016 Associated Professional Sleep Societies, LLC.
Morales Diaz, Heidi; Mejares, Emil; Newman-Smith, Erin; Smith, William C
2016-01-01
The neural IgCAM family of cell adhesion molecules, which includes NCAM and related molecules, has evolved via gene duplication and alternative splicing to allow for a wide range of isoforms with distinct functions and homophilic binding properties. A search for neural IgCAMs in ascidians (Ciona intestinalis, Ciona savignyi, and Phallusia mammillata) has identified a novel set of truncated family members that, unlike the known members, lack fibronectin III domains and consist of only repeated Ig domains. Within the tunicates this form appears to be unique to the ascidians, and it was designated ACAM, for Ascidian Cell Adhesion Molecule. In C. intestinalis ACAM is expressed in the developing neural plate and neural tube, with strongest expression in the anterior sensory vesicle precursor. Unlike the two other conventional neural IgCAMs in C. intestinalis, which are expressed maternally and throughout the morula and blastula stages, ACAM expression initiates at the gastrula stage. Moreover, C. intestinalis ACAM is a target of the homeodomain transcription factor OTX, which plays an essential role in the development of the anterior central nervous system. Morpholino (MO) knockdown shows that ACAM is required for neural tube closure. In MO-injected embryos neural tube closure was normal caudally, but the anterior neuropore remained open. A similar phenotype was seen with overexpression of a secreted version of ACAM. The presence of ACAM in ascidians highlights the diversity of this gene family in morphogenesis and neurodevelopment.
Ortega, Francisco J; Vukovic, Jana; Rodríguez, Manuel J; Bartlett, Perry F
2014-02-01
Microglia positively affect neural progenitor cell physiology through the release of inflammatory mediators or trophic factors. We demonstrated previously that reactive microglia foster K(ATP)-channel expression and that blocking this channel by glibenclamide administration enhances striatal neurogenesis after stroke. In this study, we investigated whether the microglial K(ATP)-channel directly influences the activation of neural precursor cells (NPCs) from the subventricular zone using transgenic Csf1r-GFP mice. In vitro exposure of NPCs to lipopolysaccharide and interferon-gamma resulted in a significant decrease in precursor cell number. The complete removal of microglia from the culture, or exposure to enriched microglia culture, also decreased the precursor cell number. The addition of glibenclamide rescued the negative effects of enriched microglia on neurosphere formation and promoted a ∼20% improvement in precursor cell number. Similar results were found using microglial-conditioned media from isolated microglia. Using primary mixed glial and pure microglial cultures, glibenclamide specifically targeted reactive microglia to restore neurogenesis and increased the microglial production of the chemokine monocyte chemoattractant protein-1 (MCP-1). These findings provide the first direct evidence that the microglial K(ATP)-channel is a regulator of the proliferation of NPCs under inflammatory conditions.
Infrared neural stimulation induces intracellular Ca(2+) release mediated by phospholipase C.
Moreau, David; Lefort, Claire; Pas, Jolien; Bardet, Sylvia M; Leveque, Philippe; O'Connor, Rodney P
2017-07-12
The influence of infrared laser pulses on intracellular Ca(2+) signaling was investigated in neural cell lines with fluorescent live cell imaging. The probe Fluo-4 was used to measure Ca(2+) in HT22 mouse hippocampal neurons and nonelectrically excitable U87 human glioblastoma cells exposed to 50 to 500 ms infrared pulses at 1470 nm. Fluorescence recordings of Fluo-4 demonstrated that infrared stimulation induced an instantaneous intracellular Ca(2+) transient with similar dose-response characteristics in hippocampal neurons and glioblastoma cells (half-maximal effective energy density EC50 of around 58 J.cm(-2)). For both types of cells, the source of the infrared-induced Ca(2+) transients was found to originate from intracellular stores and to be mediated by phospholipase C and IP3-induced Ca(2+) release from the endoplasmic reticulum. The activation of phosphoinositide signaling by IR light is a new mechanism of interaction relevant to infrared neural stimulation that will also be widely applicable to nonexcitable cell types. The prospect of infrared optostimulation of the PLC/IP3 cell signaling cascade has many potential applications, including the development of optoceutical therapeutics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP
Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil
2016-01-01
We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture. PMID:27760125
Barembaum, Meyer; Bronner, Marianne E
2013-10-15
Neural crest cells form diverse derivatives that vary according to their level of origin along the body axis, with only cranial neural crest cells contributing to facial skeleton. Interestingly, the transcription factor Ets-1 is uniquely expressed in cranial but not trunk neural crest, where it functions as a direct input into neural crest specifier genes, Sox10 and FoxD3. We have isolated and interrogated a cis-regulatory element, conserved between birds and mammals, that drives reporter expression in a manner that recapitulates that of endogenous Ets-1 expression in the neural crest. Within a minimal Ets-1 enhancer region, mutation of putative binding sites for SoxE, homeobox, Ets, TFAP2 or Fox proteins results in loss or reduction of neural crest enhancer activity. Morpholino-mediated loss-of-function experiments show that Sox9, Pax7, Msx1/2, Ets-1, TFAP2A and FoxD3, all are required for enhancer activity. In contrast, mutation of a putative cMyc/E-box sequence augments reporter expression, consistent with this being a repressor binding site. Taken together, these results uncover new inputs into Ets-1, revealing critical links in the cranial neural crest gene regulatory network. © 2013 Elsevier Inc. All rights reserved.
Sundaram, S.
2014-10-01
Epileptic seizures are detected largely through analysis of electroencephalogram (EEG) signals. EEG recordings generate very bulky data that require skilled and careful analysis. This analysis can be automated with an Elman neural network using a time-frequency-domain characteristic of the EEG signal called approximate entropy (ApEn). The method consists of EEG data collection, feature extraction, and classification. EEG data from normal persons and from persons affected by epilepsy were collected, digitized, and then fed into the Elman neural network. The proposed system is a neural-network-based automated epileptic EEG detection system that uses ApEn as the input feature. ApEn [1] is a statistical parameter that measures the predictability of the current amplitude values of a physiological signal based on its previous amplitude values. It is known that the value of ApEn drops sharply during an epileptic seizure [2], and this fact is used in the proposed system. The experimental results show that the proposed approach efficiently detects the presence of epileptic seizures [3] in EEG signals with reasonable accuracy.
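Approximate entropy, the input feature described above, can be sketched directly from its definition: count how often length-m templates that match within a tolerance still match when extended to length m+1. A minimal sketch (taking the tolerance r as a fraction of the signal's standard deviation, a common convention and an assumption here):

```python
import math

def approximate_entropy(signal, m=2, r=0.2):
    """Approximate entropy (ApEn) of a 1-D signal.

    Low ApEn means the signal is regular and predictable (as during a
    seizure, per the abstract); higher ApEn means more irregularity.
    """
    n = len(signal)
    mean = sum(signal) / n
    sd = (sum((x - mean) ** 2 for x in signal) / n) ** 0.5
    tol = r * sd  # tolerance as a fraction of the standard deviation

    def phi(m):
        # Chebyshev-distance template matching, self-matches included
        templates = [signal[i:i + m] for i in range(n - m + 1)]
        logs = []
        for t1 in templates:
            c = sum(1 for t2 in templates
                    if max(abs(a - b) for a, b in zip(t1, t2)) <= tol)
            logs.append(math.log(c / len(templates)))
        return sum(logs) / len(templates)

    return phi(m) - phi(m + 1)
```

A perfectly periodic signal scores near zero, while a chaotic one scores higher, which is the contrast the detection system exploits.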
Hou, Yan; Mattson, Mark P; Cheng, Aiwu
2013-01-01
In the process of neurogenesis, neural progenitor cells (NPCs) cease dividing and differentiate into postmitotic neurons that grow dendrites and an axon, become excitable, and establish synapses with other neurons. Mitochondrial biogenesis and aerobic metabolism provide energy substrates required to support the differentiation, growth and synaptic activity of neurons. Mitochondria may also serve signaling functions and, in this regard, it was recently reported that mitochondria can generate rapid bursts of superoxide (superoxide flashes), the frequency of which changes in response to environmental conditions and signals including oxygen levels and Ca(2+) fluxes. Here we show that the frequency of mitochondrial superoxide flashes increases as embryonic cerebral cortical neurons differentiate from NPCs, and provide evidence that the superoxide flashes serve a signaling function that is critical for the differentiation process. The superoxide flashes are mediated by mitochondrial permeability transition pore (mPTP) opening, and pharmacological inhibition of the mPTP suppresses neuronal differentiation. Moreover, superoxide flashes and neuronal differentiation are inhibited by scavenging of mitochondrial superoxide. Conversely, manipulations that increase superoxide flash frequency accelerate neuronal differentiation. Our findings reveal a regulatory role for mitochondrial superoxide flashes, mediated by mPTP opening, in neuronal differentiation.
Recurrent Domestication by Lepidoptera of Genes from Their Parasites Mediated by Bracoviruses.
Gasmi, Laila; Boulain, Helene; Gauthier, Jeremy; Hua-Van, Aurelie; Musset, Karine; Jakubowska, Agata K; Aury, Jean-Marc; Volkoff, Anne-Nathalie; Huguet, Elisabeth; Herrero, Salvador; Drezen, Jean-Michel
2015-09-01
Bracoviruses are symbiotic viruses associated with tens of thousands of species of parasitic wasps that develop within the body of lepidopteran hosts and that collectively parasitize caterpillars of virtually every lepidopteran species. Viral particles are produced in the wasp ovaries and injected into host larvae with the wasp eggs. Once in the host body, the viral DNA circles enclosed in the particles integrate into lepidopteran host cell DNA. Here we show that bracovirus DNA sequences have been inserted repeatedly into lepidopteran genomes, indicating this viral DNA can also enter germline cells. The original mode of Horizontal Gene Transfer (HGT) unveiled here is based on the integrative properties of an endogenous virus that has evolved as a gene transfer agent within parasitic wasp genomes for ≈100 million years. Among the bracovirus genes thus transferred, a phylogenetic analysis indicated that those encoding C-type lectins most likely originated from the wasp gene set, showing that a bracovirus-mediated gene flux exists between the two insect orders Hymenoptera and Lepidoptera. Furthermore, the acquisition of bracovirus sequences that can be expressed by Lepidoptera has resulted in the domestication of several genes that could result in adaptive advantages for the host. Indeed, functional analyses suggest that two of the acquired genes could have a protective role against a common pathogen in the field, baculovirus. From these results, we hypothesize that bracovirus-mediated HGT has played an important role in the evolutionary arms race between Lepidoptera and their pathogens.
Roxana A Stefanescu
2015-11-01
Auditory information relayed by auditory nerve fibers and somatosensory information relayed by granule cell parallel fibers converge on the fusiform cells (FCs) of the dorsal cochlear nucleus, the first brain station of the auditory pathway. In vitro, parallel fiber synapses on FCs exhibit spike-timing-dependent plasticity with Hebbian learning rules, partially mediated by the NMDA receptor (NMDAr). Well-timed bimodal auditory-somatosensory stimulation, the in vivo equivalent of spike-timing-dependent plasticity, can induce stimulus-timing-dependent plasticity (StTDP) of the FCs' spontaneous and tone-evoked firing rates. In healthy guinea pigs, the resulting distribution of StTDP learning rules (LRs) across a FC neural population is dominated by a Hebbian profile, while anti-Hebbian, suppressive, and enhancing LRs are less frequent. In this study, we investigate in vivo the NMDAr contribution to FC baseline activity and long-term plasticity. We find that blocking the NMDAr decreases the synchronization of FC spontaneous activity and mediates differential modulation of FC rate-level functions such that low- and high-threshold units are more likely to increase and decrease, respectively, their maximum amplitudes. Three significant alterations in mean learning-rule profiles were identified: transitions from an initial Hebbian profile towards (1) an anti-Hebbian and (2) a suppressive profile, and (3) transitions from an anti-Hebbian to a Hebbian profile. FC units preserving their learning rules showed instead NMDAr-dependent plasticity to unimodal acoustic stimulation, with persistent depression of tone-evoked responses changing to persistent enhancement following the NMDAr antagonist. These results reveal a crucial role of the NMDAr in mediating FC baseline activity and long-term plasticity, which has important implications for signal processing and auditory pathologies related to maladaptive plasticity of dorsal cochlear nucleus circuitry.
An increased endothelial-independent vasodilation is the hallmark of the neurally mediated syncope.
Santini, Luca; Capria, Ambrogio; Brusca, Valentina; Violo, Arianna; Smurra, Francesca; Scarfò, Iside; Forleo, Giovanni B; Papavasileiou, Lida P; Borzi, Mauro; Romeo, Francesco
2012-02-01
The neurally mediated syncope (NMS) is sustained by complex cardiac and vascular reflexes, acting on and amplified by central autonomic loops, resulting in bradycardia and hypotension. Our aim was to assess whether the pathophysiology of NMS is also related to an abnormal peripheral vasoreactivity. We evaluated by ultrasound the flow-mediated vasodilation (FMD) and the nitrate-mediated dilation (NMD) in 17 patients with NMS, induced by drug-free tilt test in 6 subjects and by nitrate-potentiated tilt test in the other 11 cases; the syncope was classified as vasodepressive (VD) in 8 cases, cardioinhibitory (CI) in 7, and mixed in 2. The FMD was not different from controls (10.2 ± 4.5 vs 11.4 ± 3.9, P = ns), with normal recovery times; the NMD was greater in fainting subjects than in controls (26.7 ± 7.3 vs 19.0 ± 3.6, P < 0.05), with higher values in VD than in CI syncope (31.1 ± 7.0 vs 23.1 ± 5.0, P = ns); compared to controls, subjects with NMS showed normal recovery times after FMD but longer recovery times after nitrate administration (13.0 ± 5.6 vs 6.3 ± 0.7 minutes, P < 0.05). The evaluation of endothelial function supports evidence that NMS is characterized by a marked and sustained endothelial-independent vasodilation, in the presence of a normal FMD; vascular hyperreactivity in response to nitrate administration is particularly overt in vasodepressive syncope and can explain the high rate of responses to nitrate administration during tilt test. © 2011 Wiley Periodicals, Inc.
P2X7 receptors mediate innate phagocytosis by human neural precursor cells and neuroblasts.
Lovelace, Michael D; Gu, Ben J; Eamegdool, Steven S; Weible, Michael W; Wiley, James S; Allen, David G; Chan-Ling, Tailoi
2015-02-01
During early human neurogenesis there is overproduction of neuroblasts and neurons accompanied by widespread programmed cell death (PCD). While it is understood that CD68(+) microglia and astrocytes mediate phagocytosis during target-dependent PCD, little is known of the cell identity or the scavenger molecules used to remove apoptotic corpses during the earliest stages of human neurogenesis. Using a combination of multiple-marker immunohistochemical staining, functional blocking antibodies, and antagonists, we showed that human neural precursor cells (hNPCs) and neuroblasts express functional P2X7 receptors. Furthermore, using live-cell imaging, flow cytometry, phagocytic assays, and siRNA knockdown, we showed that in a serum-free environment, doublecortin(+) (DCX) neuroblasts and hNPCs can clear apoptotic cells by innate phagocytosis mediated via P2X7. We found that both P2X7(high) DCX(low) hNPCs and P2X7(high) DCX(high) neuroblasts, derived from primary cultures of human fetal telencephalon, phagocytosed targets including latex beads, apoptotic ReNcells, and apoptotic hNPCs/neuroblasts. Pretreatment of neuroblasts and hNPCs with 1 mM adenosine triphosphate (ATP), 100 µM OxATP (a P2X7 antagonist), or siRNA knockdown of P2X7 inhibited phagocytosis of these targets. Our results show that P2X7 functions as a scavenger receptor under serum-free conditions resembling those in early neurogenesis. This is the first demonstration that hNPCs and neuroblasts may participate in clearance of apoptotic corpses during pre-target-dependent neurogenesis and mediate phagocytosis using P2X7 as a scavenger receptor.
Green, Ridgely Fisk; Ehrhardt, Joan; Ruttenber, Margaret F.; Olney, Richard S.
2011-01-01
A family history of neural tube defects (NTDs) can increase the risk of a pregnancy affected by an NTD. Periconceptional folic acid use decreases this risk. Purpose: Our objective was to determine whether second-degree relatives of NTD-affected children showed differences in folic acid use compared with the general population and to provide them…
Sengupta, Rakesh; Surampudi, Bapi Raju; Melcher, David
2014-09-25
It has been proposed that the ability of humans to quickly perceive numerosity involves a visual sense of number. Different paradigms of enumeration and numerosity comparison have produced a gamut of behavioral and neuroimaging data, but there has been no unified conceptual framework that can explain results across the entire range of numerosity. The current work tries to address the ongoing debate concerning whether the same mechanism operates for enumeration of small and large numbers, through a computational approach. We describe the workings of a single-layered, fully connected network characterized by self-excitation and recurrent inhibition that operates at both subitizing and estimation ranges. We show that such a network can account for classic numerical cognition effects (the distance effect, Fechner׳s law, Weber fraction for numerosity comparison) through the network steady state activation response across different recurrent inhibition values. The model also accounts for fMRI data previously reported for different enumeration related tasks. The model also allows us to generate an estimate of the pattern of reaction times in enumeration tasks. Overall, these findings suggest that a single network architecture can account for both small and large number processing.
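The architecture described in the preceding abstract can be caricatured as a single layer of units with self-excitation and uniform recurrent inhibition: at steady state, the number of active units tracks the numerosity of the input, while per-unit activity compresses as numerosity grows. A hedged sketch (all parameters are our assumptions, not the paper's fitted values):

```python
import math

def settle(inputs, w_self=1.2, w_inh=0.12, dt=0.1, steps=2000):
    """Euler-integrate a single-layer network with self-excitation
    (w_self) and uniform recurrent inhibition (w_inh) to steady state.

    Each unit obeys  x' = -x + w_self*f(x) - w_inh*sum_{j!=i} f(x_j) + I_i
    with a steep sigmoid output nonlinearity f.
    """
    f = lambda u: 1.0 / (1.0 + math.exp(-10.0 * (u - 0.5)))
    x = [0.0] * len(inputs)
    for _ in range(steps):
        out = [f(xi) for xi in x]          # synchronous output update
        total = sum(out)
        x = [xi + dt * (-xi + w_self * oi - w_inh * (total - oi) + Ii)
             for xi, oi, Ii in zip(x, out, inputs)]
    return x
```

Driving n of the units with unit input activates exactly those n units, and because each active unit receives inhibition from the other n-1, its steady-state level falls as n rises, a compressive response in the spirit of the Fechner-law behavior the abstract reports.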
Kyung Min Chung
2016-05-01
Cytoplasmic Ca2+ actively engages in diverse intracellular processes, from protein synthesis, folding, and trafficking to cell survival and death. Dysregulation of intracellular Ca2+ levels is observed in various neuropathological states, including Alzheimer's and Parkinson's diseases. Ryanodine receptors (RyRs) and IP3 receptors (IP3Rs), the main Ca2+ release channels located in endoplasmic reticulum (ER) membranes, are known to direct various cellular events such as autophagy and apoptosis. Here we investigated the intracellular Ca2+-mediated regulation of survival and death of adult hippocampal neural stem (HCN) cells utilizing an insulin withdrawal model of autophagic cell death. Despite comparable expression levels of RyR and IP3R transcripts in HCN cells in the normal state, the expression levels of RyRs, especially RyR3, were markedly upregulated upon insulin withdrawal. While treatment with the RyR agonist caffeine significantly promoted the autophagic death of insulin-deficient HCN cells, treatment with its inhibitor dantrolene prevented the induction of autophagy following insulin withdrawal. Furthermore, CRISPR/Cas9-mediated knockout of the RyR3 gene abolished autophagic cell death of HCN cells. This study delineates a distinct, RyR3-mediated ER Ca2+ regulation of autophagy and programmed cell death in neural stem cells. Our findings provide novel insights into the critical, yet understudied, mechanisms underlying the regulatory function of ER Ca2+ in neural stem cell biology.
Chen, Xiaoli; Wang, Jun; Mitchell, Elyse; Guo, Jin; Wang, Liwen; Zhang, Yu; Hodge, Jennelle C; Shen, Yiping
2014-08-19
Human endogenous retroviral (HERV) sequences are the remnants of ancient retroviral infection and comprise approximately 8% of the human genome. The high abundance and interspersed nature of homologous HERV sequences make them ideal substrates for genomic rearrangements. A role for HERV sequences in mediating human disease-associated rearrangement has been reported but is likely currently underappreciated. In the present study, two independent de novo 8q13.2-13.3 microdeletion events were identified in patients with clinical features of Branchio-Oto-Renal (BOR) syndrome. Nucleotide-level mapping demonstrated the identical breakpoints, suggesting a recurrent microdeletion including multiple genes such as EYA1, SULF1, and SLCO5A1, which is mediated by HERV1 homologous sequences. These findings raise the potential that HERV sequences may more commonly underlie recombination of dosage sensitive regions associated with recurrent syndromes.
Ang, M. R. C. O.; Gonzalez, R. M.; Castro, P. P. M.
2014-03-01
Rainfall, one of the important elements of the hydrologic cycle, is also the most difficult to model. Thus, accurate rainfall estimation is necessary, especially in localized catchment areas where the variability of rainfall is extremely high. Moreover, early warning of severe rainfall through timely and accurate estimation and forecasting could help prevent disasters from flooding. This paper presents the development of two rainfall estimation models that utilize a NARX-based neural network architecture, namely REIINN 1 and REIINN 2. These REIINN models, or Rainfall Estimation by Information Integration using Neural Networks, were trained using MTSAT cloud-top temperature (CTT) images and rainfall rates from the combined rain gauge and TMPA 3B40RT datasets. Model performance was assessed using two metrics: root mean square error (RMSE) and correlation coefficient (R). REIINN 1 yielded an RMSE of 8.1423 mm/3h and an overall R of 0.74652, while REIINN 2 yielded an RMSE of 5.2303 mm/3h and an overall R of 0.90373. The results, especially those of REIINN 2, are very promising for satellite-based rainfall estimation at a catchment scale. It is believed that model performance and accuracy will greatly improve with a denser and more spatially distributed network of in-situ rainfall measurements to calibrate the model with. The models proved the viability of using remote sensing images, with their good spatial coverage, near-real-time availability, and relatively low acquisition cost, as an alternative source for rainfall estimation to complement existing ground-based measurements.
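A NARX architecture of the kind REIINN builds on regresses the current output on tapped delays of an exogenous input and of past outputs. The sketch below is a minimal linear NARX-style fit on synthetic data; the series, lag orders, and resulting metrics are invented stand-ins, not the REIINN networks or their MTSAT/TMPA training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a cloud-top-temperature-like series (ctt) and a
# rainfall series that cools roughly inversely with it.
t = np.arange(300)
ctt = 240 + 10 * np.sin(t / 15) + rng.normal(0, 1, t.size)
rain = np.maximum(0, 260 - ctt) + rng.normal(0, 0.2, t.size)

def narx_design(u, y, nu=3, ny=2):
    """Design matrix of lagged exogenous inputs u and past outputs y:
    the tapped-delay structure a NARX network learns from."""
    lag = max(nu, ny)
    rows = [np.concatenate([u[k - nu:k], y[k - ny:k]])
            for k in range(lag, len(u))]
    return np.array(rows), y[lag:]

X, target = narx_design(ctt, rain)
X = np.hstack([X, np.ones((X.shape[0], 1))])    # bias column
w, *_ = np.linalg.lstsq(X, target, rcond=None)  # linear NARX-style fit
pred = X @ w

rmse = float(np.sqrt(np.mean((pred - target) ** 2)))
r = float(np.corrcoef(pred, target)[0, 1])
```

In the papers' setting the linear regression would be replaced by a trained neural network, and RMSE and R are computed exactly as above, which is why those two metrics appear in the abstract.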
Shany-Ur, Tal; Lin, Nancy; Rosen, Howard J; Sollberger, Marc; Miller, Bruce L; Rankin, Katherine P
2014-08-01
versus exaggerating deficits, overestimation and underestimation scores were analysed separately, controlling for age, sex, total intracranial volume and extent of actual functional decline. Atrophy related to overestimating one's functioning included bilateral, right greater than left frontal and subcortical regions, including dorsal superior and middle frontal gyri, lateral and medial orbitofrontal gyri, right anterior insula, putamen, thalamus, and caudate, and midbrain and pons. Thus, our patients' tendency to under-represent their functional decline was related to degeneration of domain-general dorsal frontal regions involved in attention, as well as orbitofrontal and subcortical regions likely involved in assigning a reward value to self-related processing and maintaining accurate self-knowledge. The anatomic correlates of underestimation (right rostral anterior cingulate cortex, uncorrected significance level) were distinct from overestimation and had a substantially smaller effect size. This suggests that underestimation or 'tarnishing' may be influenced by non-structural neurobiological and sociocultural factors, and should not be considered to be on a continuum with overestimation or 'polishing' of functional capacity, which appears to be more directly mediated by neural circuit dysfunction.
J. B. Habarulema
2009-05-01
This paper attempts to describe the search for the parameter(s) to represent solar wind effects in Global Positioning System total electron content (GPS TEC) modelling using the technique of neural networks (NNs). A study is carried out by including solar wind velocity (V_{sw}), proton number density (N_{p}), and the B_{z} component of the interplanetary magnetic field (IMF B_{z}) obtained from the Advanced Composition Explorer (ACE) satellite as separate inputs to the NN, each along with day number of the year (DN), hour (HR), a 4-month running mean of the daily sunspot number (R4), and the running mean of the previous eight 3-hourly magnetic A index values (A8). Hourly GPS TEC values derived from a dual frequency receiver located at Sutherland (32.38° S, 20.81° E), South Africa, for 8 years (2000–2007) have been used to train the Elman neural network (ENN), and the result has been used to predict TEC variations for a GPS station located at Cape Town (33.95° S, 18.47° E). Quantitative results indicate that each of the parameters considered may have some degree of influence on GPS TEC at certain periods, although a decrease in prediction accuracy is also observed for some parameters for different days and seasons. It is also evident that there is still a difficulty in predicting TEC values during disturbed conditions. The improvements and degradation in prediction accuracies are both close to the benchmark values, which lends weight to the belief that diurnal, seasonal, solar and magnetic variabilities may be the major determinants of TEC variability.
Zheng, Jialin; Ghorpade, Anuja; Niemann, Douglas; Cotter, Robin L.; Thylin, Michael R.; Epstein, Leon; Swartz, Jennifer M.; Shepard, Robin B.; Liu, Xiaojuan; Nukuna, Adeline; Gendelman, Howard E.
1999-01-01
Chemokine receptors pivotal for human immunodeficiency virus type 1 (HIV-1) infection in lymphocytes and macrophages (CCR3, CCR5, and CXCR4) are expressed on neural cells (microglia, astrocytes, and/or neurons). It is these cells which are damaged during progressive HIV-1 infection of the central nervous system. We theorize that viral coreceptors could effect neural cell damage during HIV-1-associated dementia (HAD) without simultaneously affecting viral replication. To these ends, we studied the ability of diverse viral strains to affect intracellular signaling and apoptosis of neurons, astrocytes, and monocyte-derived macrophages. Inhibition of cyclic AMP, activation of inositol 1,4,5-trisphosphate, and apoptosis were induced by diverse HIV-1 strains, principally in neurons. Virions from T-cell-tropic (T-tropic) strains (MN, IIIB, and Lai) produced the most significant alterations in signaling of neurons and astrocytes. The HIV-1 envelope glycoprotein, gp120, induced markedly less neural damage than purified virions. Macrophage-tropic (M-tropic) strains (ADA, JR-FL, Bal, MS-CSF, and DJV) produced the least neural damage, while 89.6, a dual-tropic HIV-1 strain, elicited intermediate neural cell damage. All T-tropic strain-mediated neuronal impairments were blocked by the CXCR4 antibody, 12G5. In contrast, the M-tropic strains were only partially blocked by 12G5. CXCR4-mediated neuronal apoptosis was confirmed in pure populations of rat cerebellar granule neurons and was blocked by HA1004, an inhibitor of calcium/calmodulin-dependent protein kinase II, protein kinase A, and protein kinase C. Taken together, these results suggest that progeny HIV-1 virions can influence neuronal signal transduction and apoptosis. This process occurs, in part, through CXCR4 and is independent of CD4 binding. T-tropic viruses that traffic in and out of the brain during progressive HIV-1 disease may play an important role in HAD neuropathogenesis. PMID:10482576
Neural crest-mediated bone resorption is a determinant of species-specific jaw length.
Ealba, Erin L; Jheon, Andrew H; Hall, Jane; Curantz, Camille; Butcher, Kristin D; Schneider, Richard A
2015-12-01
Precise control of jaw length during development is crucial for proper form and function. Previously we have shown that in birds, neural crest mesenchyme (NCM) confers species-specific size and shape to the beak by regulating molecular and histological programs for the induction and deposition of cartilage and bone. Here we reveal that a hitherto unrecognized but similarly essential mechanism for establishing jaw length is the ability of NCM to mediate bone resorption. Osteoclasts are considered the predominant cells that resorb bone, although osteocytes have also been shown to participate in this process. In adults, bone resorption is tightly coupled to bone deposition as a means to maintain skeletal homeostasis. Yet, the role and regulation of bone resorption during growth of the embryonic skeleton have remained relatively unexplored. We compare jaw development in short-beaked quail versus long-billed duck and find that quail have substantially higher levels of enzymes expressed by bone-resorbing cells including tartrate-resistant acid phosphatase (TRAP), Matrix metalloproteinase 13 (Mmp13), and Mmp9. Then, we transplant NCM destined to form the jaw skeleton from quail to duck and generate chimeras in which osteocytes arise from quail donor NCM and osteoclasts come exclusively from the duck host. Chimeras develop quail-like jaw skeletons coincident with dramatically elevated expression of TRAP, Mmp13, and Mmp9. To test for a link between bone resorption and jaw length, we block resorption using a bisphosphonate, osteoprotegerin protein, or an MMP13 inhibitor, and this significantly lengthens the jaw. Conversely, activating resorption with RANKL protein shortens the jaw. Finally, we find that higher resorption in quail presages their relatively lower adult jaw bone mineral density (BMD) and that BMD is also NCM-mediated. Thus, our experiments suggest that NCM not only controls bone resorption by its own derivatives but also modulates the activity of mesoderm
Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto
2014-01-01
Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used with both current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have so far been studied mainly at the single-neuron level. To investigate how these synaptic models affect network activity, we compared the single-neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks so defined displayed an excellent and robust match of first-order statistics (average single-neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second-order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlations between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band, and its spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second-order statistics of network dynamics depend strongly on the choice of synaptic model.
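The core distinction between the two synapse models compared above can be seen in a single LIF neuron: a current-based synapse injects a voltage-independent current, while a conductance-based synapse scales with the driving force (E_syn - V). The sketch below uses invented parameters and a constant synaptic activation; it is not the paper's network, only a minimal illustration of why synapses matched at rest behave differently away from rest.

```python
def lif_spike_count(conductance_based, g=0.5, E_syn=0.0, v_rest=-70.0,
                    v_th=-50.0, v_reset=-70.0, tau_m=20.0, dt=0.1, T=200.0):
    """Single LIF neuron driven by a tonically active synapse.
    CUBN-style: current fixed at its value at rest (g * (E_syn - v_rest)).
    COBN-style: current depends on the instantaneous driving force."""
    v = v_rest
    spikes = 0
    for _ in range(int(T / dt)):
        if conductance_based:
            i_syn = g * (E_syn - v)        # driving force shrinks as v rises
        else:
            i_syn = g * (E_syn - v_rest)   # matched at rest, then constant
        v += dt / tau_m * (-(v - v_rest) + i_syn)
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes

cubn_spikes = lif_spike_count(False)
cobn_spikes = lif_spike_count(True)
```

Even though the two synapses inject identical current at the resting potential, the conductance-based synapse loses drive as the membrane depolarizes, so the same neuron fires more slowly: a toy version of why "comparable" COBNs and CUBNs still diverge in their dynamics.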
Miller, Aaron; Jin, Dezhe Z
2013-12-01
Synfire chains are thought to underlie precisely timed sequences of spikes observed in various brain regions and across species. How they are formed is not understood. Here we analyze self-organization of synfire chains through the spike-timing dependent plasticity (STDP) of the synapses, axon remodeling, and potentiation decay of synaptic weights in networks of neurons driven by noisy external inputs and subject to dominant feedback inhibition. Potentiation decay is the gradual, activity-independent reduction of synaptic weights over time. We show that potentiation decay enables a dynamic and statistically stable network connectivity when neurons spike spontaneously. Periodic stimulation of a subset of neurons leads to formation of synfire chains through a random recruitment process, which terminates when the chain connects to itself and forms a loop. We demonstrate that chain length distributions depend on the potentiation decay. Fast potentiation decay leads to long chains with wide distributions, while slow potentiation decay leads to short chains with narrow distributions. We suggest that the potentiation decay, which corresponds to the decay of early long-term potentiation of synapses, is an important synaptic plasticity rule in regulating formation of neural circuitry through STDP.
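The potentiation-decay rule discussed above (activity-independent relaxation of weights layered on top of pair-based STDP) can be sketched for a single synapse. The parameters and the exponential decay form below are invented for illustration; the sketch shows only the weight-level effect of fast versus slow decay, not chain formation.

```python
import numpy as np

def stdp_dw(dt_spike, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: pre-before-post (dt_spike > 0 ms) potentiates,
    post-before-pre depresses, with exponential timing dependence."""
    if dt_spike > 0:
        return a_plus * np.exp(-dt_spike / tau)
    return -a_minus * np.exp(dt_spike / tau)

def evolve_weight(w, pairings, decay_rate, w_min=0.0, w_max=1.0):
    """Apply a sequence of spike-time differences; after each pairing,
    activity-independent potentiation decay pulls w back toward w_min."""
    for dt_spike in pairings:
        w += stdp_dw(dt_spike)
        w -= decay_rate * (w - w_min)   # potentiation decay
        w = min(max(w, w_min), w_max)
    return w

pairings = [5.0] * 100                  # repeated causal (pre->post) pairings
w_slow = evolve_weight(0.2, pairings, decay_rate=0.001)
w_fast = evolve_weight(0.2, pairings, decay_rate=0.05)
```

With slow decay, repeated causal pairings consolidate the weight; with fast decay, the same pairings cannot outrun the relaxation and the weight stays near baseline. The paper's result is about how this trade-off shapes chain recruitment at the network level.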
Matsubara, Takashi; Torikai, Hiroyuki
2016-04-01
Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, field-programmable gate array (FPGA) implementations confirm that the presented network requires lower computational resources.
Wei, Xinyu; Wang, Pengfei; Zhao, Fuyu
2016-08-01
Highlights: • We establish a disperse dynamic model for the AP1000 reactor core. • A digital PID control based on a QDRNN is used to design a decoupling control system. • The decoupling performance is verified and discussed. • The decoupling control system is simulated under load-following operation. Abstract: The control system of the AP1000 reactor core uses the mechanical shim (MSHIM) strategy, which includes a power control subsystem and an axial power distribution control subsystem. To address the strong coupling between the two subsystems, an interlock between them is used, which can only alleviate but not eliminate the coupling. Therefore, the axial offset (AO) sometimes cannot be controlled tightly, and the flexibility of load-following operation is limited. Thus, the decoupling of the original AP1000 reactor core control system is the focus of this paper. First, a two-node disperse dynamic model of the AP1000 reactor core is established for use with PID control. Then, a digital PID control system based on a quasi-diagonal recurrent neural network (QDRNN) is designed to decouple the original system. Finally, the decoupling of the control system is verified under step signals and load-following conditions. The results show that the designed control system decouples the original system as expected and the AO can be controlled much more tightly. Moreover, the flexibility of load following is increased.
Fairbank, Michael; Li, Shuhui; Fu, Xingang; Alonso, Eduardo; Wunsch, Donald
2014-01-01
We present a recurrent neural-network (RNN) controller designed to solve the tracking problem for control systems. We demonstrate that a major difficulty in training any RNN is the problem of exploding gradients, and we propose a solution to this in the case of tracking problems, by introducing a stabilization matrix and by using carefully constrained context units. This solution allows us to achieve consistently lower training errors, and hence allows us to more easily introduce adaptive capabilities. The resulting RNN is one that has been trained off-line to be rapidly adaptive to changing plant conditions and changing tracking targets. The case study we use is a renewable-energy generator application; that of producing an efficient controller for a three-phase grid-connected converter. The controller we produce can cope with the random variation of system parameters and fluctuating grid voltages. It produces tracking control with almost instantaneous response to changing reference states, and virtually zero oscillation. This compares very favorably to the classical proportional-integral (PI) controllers, which we show produce a much slower response and settling time. In addition, the RNN we propose exhibits better learning stability and convergence properties, and can exhibit faster adaptation, than has been achieved with adaptive critic designs.
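The exploding-gradient problem the authors address arises because the Jacobian of the hidden state is multiplied across time steps, so any recurrent matrix with spectral radius above one amplifies gradients exponentially. The sketch below illustrates one generic mitigation, rescaling the recurrent matrix to a fixed spectral radius; this is a stand-in for, not a description of, the paper's stabilization-matrix and constrained-context-unit method.

```python
import numpy as np

rng = np.random.default_rng(1)

def constrain_spectral_radius(W, rho_max=0.9):
    """Rescale W so its spectral radius is at most rho_max: a simple way
    to keep products of recurrent Jacobians from growing exponentially."""
    rho = np.abs(np.linalg.eigvals(W)).max()
    return W if rho <= rho_max else W * (rho_max / rho)

def grad_norm_through_time(W, T=50):
    """Frobenius norm of d h_T / d h_0 for the linear part of an RNN,
    i.e. the T-fold product of the recurrent Jacobian."""
    J = np.eye(W.shape[0])
    for _ in range(T):
        J = W.T @ J
    return float(np.linalg.norm(J))

W_raw = rng.normal(0.0, 0.5, (10, 10))
W_raw *= 1.5 / np.abs(np.linalg.eigvals(W_raw)).max()  # force radius 1.5
W_safe = constrain_spectral_radius(W_raw)              # rescaled to radius 0.9

g_raw = grad_norm_through_time(W_raw)    # grows like 1.5**T: explodes
g_safe = grad_norm_through_time(W_safe)  # shrinks like 0.9**T: bounded
```

Comparing `g_raw` and `g_safe` makes the exponential gap concrete; constraining the recurrent dynamics is what makes off-line training of such controllers tractable.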
Nielsen, Janne; Kulahin, Nikolaj; Walmod, Peter
2008-01-01
Cell adhesion molecules (CAMs) mediate cell-to-cell interactions and interactions between cells and the extracellular matrix (ECM). The neural cell adhesion molecule (NCAM), a prototypic member of the immunoglobulin (Ig) superfamily of CAMs, mediates adhesion through homophilic and heterophilic i...
Lu, I-Cheng; Wu, Che-Wei; Chang, Pi-Ying; Chen, Hsiu-Ya; Tseng, Kuang-Yi; Randolph, Gregory W; Cheng, Kuang-I; Chiang, Feng-Yu
2016-04-01
The use of a neuromuscular blocking agent may affect intraoperative neuromonitoring (IONM) during thyroid surgery. An enhanced neuromuscular-blockade (NMB) recovery protocol was investigated in a porcine model and subsequently clinically applied during human thyroid neural monitoring surgery. Prospective animal and retrospective clinical study. In the animal experiment, 12 piglets were injected with rocuronium 0.6 mg/kg and randomly allocated to receive normal saline, sugammadex 2 mg/kg, or sugammadex 4 mg/kg to compare the recovery of laryngeal electromyography (EMG). In a subsequent clinical application study, 50 patients who underwent thyroidectomy with IONM followed an enhanced NMB recovery protocol: rocuronium 0.6 mg/kg at anesthesia induction and sugammadex 2 mg/kg at the operation start. The train-of-four (TOF) ratio was used for continuous quantitative monitoring of neuromuscular transmission. In our porcine model, it took 49 ± 15, 13.2 ± 5.6, and 4.2 ± 1.5 minutes for the 80% recovery of laryngeal EMG after injection of saline, sugammadex 2 mg/kg, and sugammadex 4 mg/kg, respectively. In subsequent clinical human application, the TOF ratio recovered from 0 to >0.9 within 5 minutes after administration of sugammadex 2 mg/kg at the operation start. All patients had positive and high EMG amplitude at the early stage of the operation, and intubation was without difficulty in 96% of patients. Both porcine modeling and clinical human application demonstrated that sugammadex 2 mg/kg allows effective and rapid restoration of neuromuscular function suppressed by rocuronium. Implementation of this enhanced NMB recovery protocol assures optimal conditions for tracheal intubation as well as IONM in thyroid surgery.
Zhang Yan; Chen Zengqiang; Yang Peng; Yuan Zhuzhi
2004-01-01
A nonlinear proportional-integral-derivative (PID) controller is constructed based on recurrent neural networks. In the control of nonlinear multivariable systems, several such nonlinear PID controllers are adopted in parallel. Under a decoupling cost function, a decoupling control strategy is proposed. The stability condition of the controller is then presented based on Lyapunov theory. Simulation examples are given to show the effectiveness of the proposed decoupling control.
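For reference, the discrete PID law that such neural controllers generalize can be written in a few lines. The plant, gains, and setpoint below are invented for the demo; this is the classical linear PID, not the paper's recurrent-network construction.

```python
class PID:
    """Discrete PID controller (positional form)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt                 # I: accumulated error
        deriv = (err - self.prev_err) / self.dt        # D: error slope
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Demo: drive a first-order plant x' = -x + u toward the setpoint 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):                                  # simulate 20 time units
    u = pid.step(1.0, x)
    x += 0.01 * (-x + u)                               # explicit Euler step
```

The integral term removes the steady-state error, so `x` settles at the setpoint. The paper's contribution is replacing the three fixed gains with a recurrent-network parameterization, with one such controller per loop of the multivariable system plus a decoupling strategy across them.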
Tong, Jing; Wang, Youwei; Lu, Yuanan
2016-03-01
To extend the current understanding of the mercury-mediated cytotoxic effect, five neural cell lines established from different animal species were comparatively analyzed using three different endpoint bioassays: the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (thiazolyl blue, MTT) assay, the neutral red uptake (NRU) assay, and the Coomassie blue (CB) assay. Following a 24-hr exposure to selected concentrations of mercury chloride (HgCl2) and methylmercury (II) chloride (MeHgCl), the cytotoxic effect on test cells was characterized by comparing their 50% inhibition concentration (IC50) values. Experimental results indicated that both these forms of mercury were toxic to all the neural cells, but to very different degrees. The IC50 values of MeHgCl among these cell lines ranged from 1.15±0.22 to 10.31±0.70 μmol/L, while the IC50 values for HgCl2 were much higher, ranging from 6.44±0.36 to 160.97±19.63 μmol/L, indicating the more toxic nature of MeHgCl. The IC50 ratio between HgCl2 and MeHgCl ranged from 1.75 to 96.0, which confirms that organic mercury is much more toxic to these neural cells than inorganic mercury. Among these cell lines, HGST-BR and TriG44, derived from marine sea turtles, showed a significantly higher tolerance to HgCl2 than the three mammalian neural cell lines. Among these neural cells, SK-N-SH represented the most sensitive cells to both chemical forms of mercury.
Casement, Melynda D; Keenan, Kate E; Hipwell, Alison E; Guyer, Amanda E; Forbes, Erika E
2016-01-01
.... Adolescence may be a period during which such disruption is especially problematic given the rise in the incidence of insomnia and ongoing development of neural systems that support reward processing...
Kuriu, Takayuki; Kakimoto, Yuta; Araki, Osamu
2015-09-01
Although recent reports have suggested that synchronous neuronal UP states are mediated by astrocytic activity, the mechanism responsible for this remains unknown. Astrocytic glutamate release synchronously depolarizes adjacent neurons, while synaptic transmissions are blocked. The purpose of this study was to confirm that astrocytic depolarization, propagated through synaptic connections, can lead to synchronous neuronal UP states. We applied astrocytic currents to local neurons in a neural network consisting of model cortical neurons. Our results show that astrocytic depolarization may generate synchronous UP states for hundreds of milliseconds in neurons even if they do not directly receive glutamate release from the activated astrocyte.
De Vadder, F; Plessier, F; Gautier-Stein, A; Mithieux, G
2015-03-01
Intestinal gluconeogenesis (IGN) promotes metabolic benefits through activation of a gut-brain neural axis. However, the local mediator activating gluconeogenic genes in the enterocytes remains unknown. We show that (i) vasoactive intestinal peptide (VIP) signaling through VPAC1 receptor activates the intestinal glucose-6-phosphatase gene in vivo, (ii) the activation of IGN by propionate is counteracted by VPAC1 antagonism, and (iii) VIP-positive intrinsic neurons in the submucosal plexus are increased under the action of propionate. These data support the role of VIP as a local neuromodulator released by intrinsic enteric neurons and responsible for the induction of IGN through a VPAC1 receptor-dependent mechanism in enterocytes.
Weixia Ye; Xueping Huang; Yangyang Sun; Hao Liu; Jin Jiang; Youde Cao
2012-01-01
In the present study, ultrasound-mediated microbubble destruction (UMMD) alone and combined with liposome technology was used as a novel nonviral technique to transfect a Nogo receptor (Nogo-R) shRNA plasmid (pNogo-R shRNA) into neural stem cells (NSCs). Using green fluorescent protein as a reporter gene, transfection efficiency of NSCs was significantly higher in the group transfected with UMMD combined with liposomes compared with that of the group transfected with UMMD or liposomes alone, without affecting cell viability. In addition, Nogo-R mRNA and protein expression was dramatically decreased in the UMMD combined with liposome-mediated group compared with that of other groups after 24 hours of transfection. The UMMD technique combined with liposomes is a noninvasive gene transfer method, which showed minimal effects on cell viability and effectively increased transfer of Nogo-R shRNA into NSCs.
Li, Xiaowei; Tzeng, Stephany Y; Liu, Xiaoyan; Tammia, Markus; Cheng, Yu-Hao; Rolfe, Andrew; Sun, Dong; Zhang, Ning; Green, Jordan J; Wen, Xuejun; Mao, Hai-Quan
2016-04-01
Strategies to enhance survival and direct the differentiation of stem cells in vivo following transplantation into a tissue repair site are critical to realizing the potential of stem cell-based therapies. Here we demonstrated an effective approach to promote neuronal differentiation and maturation of human fetal tissue-derived neural stem cells (hNSCs) in a brain lesion site of a rat traumatic brain injury model, using a biodegradable nanoparticle-mediated transfection method to deliver the key transcription factor neurogenin-2 to hNSCs transplanted with a tailored hyaluronic acid (HA) hydrogel, generating a larger number of more mature neurons engrafted into the host brain tissue than non-transfected cells. The nanoparticle-mediated transcription activation method together with an HA hydrogel delivery matrix provides a translatable approach for stem cell-based regenerative therapy.
Rest-mediated regulation of extracellular matrix is crucial for neural development.
Yuh-Man Sun
Neural development from blastocysts is strictly controlled by intricate transcriptional programmes that initiate the down-regulation of pluripotent genes, Oct4, Nanog and Rex1, in blastocysts, followed by up-regulation of lineage-specific genes as neural development proceeds. Here, we demonstrate that the expression pattern of the transcription factor Rest mirrors those of pluripotent genes during neural development from embryonic stem (ES) cells, and an early abrogation of Rest in ES cells using a combination of gene targeting and RNAi approaches causes defects in this process. Specifically, Rest ablation does not alter ES cell pluripotency, but impedes the production of Nestin(+) neural stem cells, neural progenitor cells and neurons, and results in defective adhesion, a decrease in cell proliferation, an increase in cell death and neuronal phenotypic defects typified by a reduction in migration and neurite elaboration. We also show that these Rest-null phenotypes are due to the dysregulation of its direct or indirect target genes, Lama1, Lamb1, Lamc1 and Lama2, and that these aberrant phenotypes can be rescued by laminins.
Subramanian, Anuradha; Krishnan, Uma Maheswari; Sethuraman, Swaminathan
2009-11-25
Neural tissue repair and regeneration strategies have received a great deal of attention because they directly affect the quality of the patient's life. There are many scientific challenges to regenerating nerve using conventional autologous nerve grafts and with the newly developed therapeutic strategies for the reconstruction of damaged nerves. Recent advancements in nerve regeneration have involved the application of tissue engineering principles, and this has opened a new perspective on neural therapy. The success of neural tissue engineering is mainly based on the regulation of cell behavior and tissue progression through the development of a synthetic scaffold that is analogous to the natural extracellular matrix and can support three-dimensional cell cultures. As the natural extracellular matrix provides an ideal environment for topographical, electrical and chemical cues for the adhesion and proliferation of neural cells, there exists a need to develop a synthetic scaffold that would be a biocompatible, immunologically inert, conducting, biodegradable, and infection-resistant biomaterial to support neurite outgrowth. This review outlines the rationale for effective neural tissue engineering through the use of suitable biomaterials and scaffolding techniques for fabrication of a construct that would allow neurons to adhere, proliferate and eventually form nerves.
Recurrent recurrent gallstone ileus.
Hussain, Z; Ahmed, M S; Alexander, D J; Miller, G V; Chintapatla, S
2010-07-01
We describe the second reported case of three consecutive episodes of gallstone ileus and ask whether recurrent gallstone ileus justifies definitive surgery on the fistula itself or can be safely managed by repeated enterotomies.
Rodrigo Albors, Aida; Tazaki, Akira; Rost, Fabian; Nowoshilow, Sergej; Chara, Osvaldo; Tanaka, Elly M
2015-11-14
Axolotls are uniquely able to mobilize neural stem cells to regenerate all missing regions of the spinal cord. How a neural stem cell under homeostasis converts after injury to a highly regenerative cell remains unknown. Here, we show that during regeneration, axolotl neural stem cells repress neurogenic genes and reactivate a transcriptional program similar to embryonic neuroepithelial cells. This dedifferentiation includes the acquisition of rapid cell cycles, the switch from neurogenic to proliferative divisions, and the re-expression of planar cell polarity (PCP) pathway components. We show that PCP induction is essential to reorient mitotic spindles along the anterior-posterior axis of elongation, and orthogonal to the cell apical-basal axis. Disruption of this property results in premature neurogenesis and halts regeneration. Our findings reveal a key role for PCP in coordinating the morphogenesis of spinal cord outgrowth with the switch from a homeostatic to a regenerative stem cell that restores missing tissue.
Neural circuit changes mediating lasting brain and behavioral response to predator stress.
Adamec, Robert E; Blundell, Jacqueline; Burton, Paul
2005-01-01
This paper reviews recent work which points to critical neural circuitry involved in lasting changes in anxiety-like behavior following unprotected exposure of rats to cats (predator stress). Predator stress may increase anxiety-like behavior in a variety of behavioral tests, including the elevated plus maze, light-dark box, acoustic startle, and social interaction. Studies of neural transmission in two limbic pathways, combined with path and covariance analysis relating physiology to behavior, suggest that long-term potentiation-like changes in one or both of these pathways in the right hemisphere account for stress-induced changes in all behaviors changed by predator stress except the light-dark box and social interaction. Findings will be discussed within the context of what is known about neural substrates activated by predator odor.
Anwar, Mohammad Raffaqat; Andreasen, Christian Maaløv; Lippert, Solvej Kølvraa
2008-01-01
Properly committed neural stem cells constitute a promising source of cells for transplantation in Parkinson's disease, but a protocol for controlled dopaminergic differentiation is not yet available. To establish a setting for identification of secreted neural compounds promoting dopaminergic differentiation, we co-cultured cells from a human neural forebrain-derived stem cell line (hNS1) with rat striatal brain slices. In brief, coronal slices of neonatal rat striatum were cultured on semiporous membrane inserts placed in six-well trays overlying monolayers of hNS1 cells. After 12 days of co-culture, large numbers of tyrosine hydroxylase (TH)-immunoreactive, catecholaminergic cells could be found underneath individual striatal slices. Cell counting revealed that up to 25.3% (average 16.1%) of the total number of cells in these areas were TH-positive, contrasting a few TH-positive cells (
Nishikawa, Saori; Toshima, Tamotsu; Kobayashi, Masao
2015-01-01
This study examined changes in prefrontal oxy-Hb levels measured by NIRS (Near-Infrared Spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (35 male, 18 female) aged 22 to 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G × E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e. rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to the mapping of the influence of genes and behaviour on the neural system. More such attempts should be made in order to clarify the links.
Zou, Runmei; Wang, Shuo; Zhu, Liping; Wu, Lijia; Lin, Ping; Li, Fang; Xie, Zhenwu; Li, Xiaohong; Wang, Cheng
2017-01-01
To evaluate the value of the Calgary score and modified Calgary score in the differential diagnosis between neurally mediated syncope and epilepsy in children, 201 children who had experienced one or more episodes of loss of consciousness and were diagnosed with neurally mediated syncope or epilepsy were enrolled. The Calgary score, modified Calgary score and receiver-operating characteristic curve were used to explore the predictive value in differential diagnosis. There were significant differences in median Calgary score between syncope [-4.00 (-6, 1)] and epilepsy [2 (-3, 5)] (z = -11.63, P epilepsy were 91.46 and 95.80%, suggesting a diagnosis of epilepsy. There were significant differences in median modified Calgary score between syncope [-4.00 (-6, 1)] and epilepsy [3 (-3, 6)] (z = -11.71, P epilepsy. The sensitivity and specificity of the modified Calgary score and Calgary score did not show significant differences (P > 0.05). The Calgary score and modified Calgary score can be used for the differential diagnosis between syncope and epilepsy in children.
Lu Li
Full Text Available The importance of BMP receptor Ia (BMPRIa)-mediated signaling in the development of craniofacial organs, including the tooth and palate, has been well illuminated in several mouse models of loss of function, and by its mutations associated with juvenile polyposis syndrome and facial defects in humans. In this study, we took a gain-of-function approach to further address the role of BMPRIa-mediated signaling in the mesenchymal compartment during tooth and palate development. We generated transgenic mice expressing a constitutively active form of BmprIa (caBmprIa) in cranial neural crest (CNC) cells, which contribute to the dental and palatal mesenchyme. Mice bearing enhanced BMPRIa-mediated signaling in CNC cells exhibit complete cleft palate and delayed odontogenic differentiation. We showed that the cleft palate defect in the transgenic animals is attributable to an altered cell proliferation rate in the anterior palatal mesenchyme and to delayed palatal elevation in the posterior portion associated with ectopic cartilage formation. Despite enhanced activity of BMP signaling in the dental mesenchyme, tooth development and patterning in transgenic mice appeared normal except for delayed odontogenic differentiation. These data support the hypothesis that a finely tuned level of BMPRIa-mediated signaling is essential for normal palate and tooth development.
Hannah Verdin
Full Text Available Genomic disorders are often caused by recurrent copy number variations (CNVs), with nonallelic homologous recombination (NAHR) as the underlying mechanism. Recently, several microhomology-mediated repair mechanisms--such as microhomology-mediated end-joining (MMEJ), fork stalling and template switching (FoSTeS), microhomology-mediated break-induced replication (MMBIR), serial replication slippage (SRS), and break-induced SRS (BISRS)--were described in the etiology of non-recurrent CNVs in human disease. In addition, their formation may be stimulated by genomic architectural features. It is, however, largely unexplored to what extent these mechanisms contribute to rare, locus-specific pathogenic CNVs. Here, fine-mapping of 42 microdeletions of the FOXL2 locus, encompassing FOXL2 (32) or its regulatory domain (10), serves as a model for rare, locus-specific CNVs implicated in genetic disease. These deletions lead to blepharophimosis syndrome (BPES), a developmental condition affecting the eyelids and the ovary. For breakpoint mapping we used targeted array-based comparative genomic hybridization (aCGH), quantitative PCR (qPCR), long-range PCR, and Sanger sequencing of the junction products. Microhomology, ranging from 1 bp to 66 bp, was found in 91.7% of 24 characterized breakpoint junctions, being significantly enriched in comparison with a random control sample. Our results show that microhomology-mediated repair mechanisms underlie at least 50% of these microdeletions. Moreover, genomic architectural features, like sequence motifs, non-B DNA conformations, and repetitive elements, were found in all breakpoint regions. In conclusion, the majority of these microdeletions result from microhomology-mediated mechanisms like MMEJ, FoSTeS, MMBIR, SRS, or BISRS. Moreover, we hypothesize that the genomic architecture might drive their formation by increasing the susceptibility to DNA breakage or promoting replication fork stalling. Finally, our locus-centered study
Sun, Jinqiao, E-mail: jinqiao1977@163.com [Institute of Pediatrics, Children's Hospital of Fudan University (China)]; Sha, Bin [Department of Neonatology, Children's Hospital of Fudan University, 399 Wanyuan Road, Shanghai 201102 (China)]; Zhou, Wenhao, E-mail: zhou_wenhao@yahoo.com.cn [Department of Neonatology, Children's Hospital of Fudan University, 399 Wanyuan Road, Shanghai 201102 (China)]; Yang, Yi [Institute of Pediatrics, Children's Hospital of Fudan University (China)]
2010-03-26
This study investigated the effects of angiogenesis on the proliferation and differentiation of neural stem cells in the premature brain. We observed the changes in neurogenesis that followed the stimulation and inhibition of angiogenesis by altering vascular endothelial growth factor (VEGF) expression in a 3-day-old rat model. VEGF expression was overexpressed by adenovirus transfection and down-regulated by siRNA interference. Using immunofluorescence assays, Western blot analysis, and real-time PCR methods, we observed angiogenesis and the proliferation and differentiation of neural stem cells. Immunofluorescence assays showed that the number of vWF-positive areas peaked at day 7, and they were highest in the VEGF up-regulation group and lowest in the VEGF down-regulation group at every time point. The number of neural stem cells, neurons, astrocytes, and oligodendrocytes in the subventricular zone gradually increased over time in the VEGF up-regulation group. Among the three groups, the number of these cells was highest in the VEGF up-regulation group and lowest in the VEGF down-regulation group at the same time point. Western blot analysis and real-time PCR confirmed these results. These data suggest that angiogenesis may stimulate the proliferation of neural stem cells and differentiation into neurons, astrocytes, and oligodendrocytes in the premature brain.
Brooks, Brian E.; Cooper, Eric E.
2006-01-01
Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…
Neural systems and hormones mediating attraction to infant and child faces.
Luo, Lizhu; Ma, Xiaole; Zheng, Xiaoxiao; Zhao, Weihua; Xu, Lei; Becker, Benjamin; Kendrick, Keith M
2015-01-01
We find infant faces highly attractive as a result of specific features which Konrad Lorenz termed "Kindchenschema" or "baby schema," and this is considered to be an important adaptive trait for promoting protective and caregiving behaviors in adults, thereby increasing the chances of infant survival. This review first examines the behavioral support for this effect and physical and behavioral factors which can influence it. It then provides details of the increasing number of neuroimaging and electrophysiological studies investigating the neural circuitry underlying this baby schema effect in parents and non-parents of both sexes. Next it considers potential hormonal contributions to the baby schema effect in both sexes and the neural effects associated with reduced responses to infant cues in post-partum depression, anxiety and drug taking. Overall the findings reviewed reveal a very extensive neural circuitry involved in our perception of cuteness in infant faces, with enhanced activation compared to adult faces being found in brain regions involved in face perception, attention, emotion, empathy, memory, reward and attachment, theory of mind and also control of motor responses. Both mothers and fathers also show evidence for enhanced responses in these same neural systems when viewing their own as opposed to another child. Furthermore, responses to infant cues in many of these neural systems are reduced in mothers with post-partum depression or anxiety or who have taken addictive drugs throughout pregnancy. In general, reproductively active women tend to rate infant faces as cuter than men do, which may reflect both heightened attention to relevant cues and a stronger activation in their brain reward circuitry. Perception of infant cuteness may also be influenced by reproductive hormones, with the hypothalamic neuropeptide oxytocin being most strongly associated to date with increased attention and attraction to infant cues in both sexes.
Recurrent Spatial Transformer Networks
Sønderby, Søren Kaae; Sønderby, Casper Kaae; Maaløe, Lars;
2015-01-01
We integrate the recently proposed spatial transformer network (SPN) [Jaderberg et al. 2015] into a recurrent neural network (RNN) to form an RNN-SPN model. We use the RNN-SPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single-digit error of 1.5% compared to 2...
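As a rough illustration of the spatial transformer operation that the RNN-SPN applies at each step, here is a minimal numpy sketch of affine grid generation and bilinear sampling; the image size, the `affine_grid`/`bilinear_sample` names and the identity-transform example are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def affine_grid(theta, H, W):
    """Build an (H, W, 2) sampling grid from a 2x3 affine matrix theta.
    Coordinates are normalized to [-1, 1], as in Jaderberg et al. 2015."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W) homogeneous
    grid = theta @ coords                                        # (2, H*W) transformed
    return grid.T.reshape(H, W, 2)                               # (H, W, (x, y))

def bilinear_sample(img, grid):
    """Sample a 2-D image at normalized grid locations with bilinear interpolation."""
    H, W = img.shape
    x = (grid[..., 0] + 1) * (W - 1) / 2    # back to pixel coordinates
    y = (grid[..., 1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 1); x1 = np.clip(x0 + 1, 0, W - 1)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 1); y1 = np.clip(y0 + 1, 0, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

# An identity transform should reproduce the input image.
img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
out = bilinear_sample(img, affine_grid(identity, 4, 4))
```

In the full RNN-SPN, a small network would predict `theta` at each recurrent step so the model attends to one digit of the cluttered sequence at a time.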
Diverse ETS transcription factors mediate FGF signaling in the Ciona anterior neural plate.
Gainous, T Blair; Wagner, Eileen; Levine, Michael
2015-03-15
The ascidian Ciona intestinalis is a marine invertebrate belonging to the sister group of the vertebrates, the tunicates. Its compact genome and simple, experimentally tractable embryos make Ciona well-suited for the study of cell-fate specification in chordates. Tunicate larvae possess a characteristic chordate body plan, and many developmental pathways are conserved between tunicates and vertebrates. Previous studies have shown that FGF signals are essential for neural induction and patterning at sequential steps of Ciona embryogenesis. Here we show that two different ETS family transcription factors, Ets1/2 and Elk1/3/4, have partially redundant activities in the anterior neural plate of gastrulating embryos. Whereas Ets1/2 promotes pigment cell formation in lateral lineages, both Ets1/2 and Elk1/3/4 are involved in the activation of Myt1L in medial lineages and the restriction of Six3/6 expression to the anterior-most regions of the neural tube. We also provide evidence that photoreceptor cells arise from posterior regions of the presumptive sensory vesicle, and do not depend on FGF signaling. Cells previously identified as photoreceptor progenitors instead form ependymal cells and neurons of the larval brain. Our results extend recent findings on FGF-dependent patterning of anterior-posterior compartments in the Ciona central nervous system. Copyright © 2015. Published by Elsevier Inc.
Speech Recognition Model Based on Recurrent Neural Networks
朱小燕; 王昱; 徐伟
2001-01-01
To overcome some weaknesses of the hidden Markov model (HMM) in speech recognition, HMM/NN hybrid systems have been explored by many researchers in recent years. In previous HMM/NN hybrid systems, the neural networks adopted were mostly multilayer perceptrons (MLP). In our system, recurrent neural networks (RNN) were used in place of the MLP as the syllable probability estimator. An RNN is an MLP incorporating feedback that carries the output of some neurons back to other neurons or to themselves. This feedback gives the net the ability to efficiently process the context information of a time sequence, which is especially useful for speech recognition. We applied recurrent neural networks to Mandarin speech recognition, modified the network architecture, and present a corresponding training scheme. Experimental results show that the model handles continuous signals well and performs comparably to the traditional HMM, while the new training strategy both speeds up training and clearly improves classification performance. The following techniques have been adopted in our system. 1. A network with a single layer has been adopted, while the content of the feedback differs from the networks used by previous researchers, i.e., the external output is included in the feedback, not just the internal state output. 2. The training algorithm adopted in our system is the back-propagation through time (BPTT) algorithm. In the common BPTT algorithm, the initial feedback values are set arbitrarily according to experience. This means that the initial feedback is not specific to the problem we are dealing with
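The architectural modification described above, feeding the external output back along with the internal state, can be sketched as a forward pass; the layer sizes, random weights and `forward` helper are illustrative assumptions, not the authors' actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative sizes (not from the paper): 6 input features per frame,
# 8 hidden units, 4 output classes (e.g. syllable categories).
n_in, n_h, n_out = 6, 8, 4

# The hidden layer sees the current input frame plus a feedback vector
# containing BOTH the previous hidden state and the previous external
# (softmax) output -- the modification described in the abstract.
W = rng.normal(0.0, 0.1, (n_h, n_in + n_h + n_out))  # input + feedback -> hidden
V = rng.normal(0.0, 0.1, (n_out, n_h))               # hidden -> output

def forward(frames):
    """Run the recurrent net over a sequence of input frames."""
    h, y = np.zeros(n_h), np.zeros(n_out)
    outputs = []
    for x in frames:
        z = np.concatenate([x, h, y])   # external output is fed back too
        h = np.tanh(W @ z)
        y = softmax(V @ h)
        outputs.append(y)
    return outputs

ys = forward(rng.normal(size=(5, n_in)))  # a 5-frame dummy utterance
```

Training such a net with BPTT would unroll this loop over time and backpropagate through both feedback paths.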
Tonk, Elisa C M; Pennings, Jeroen L A; Piersma, Aldert H
2015-08-01
Developmental toxicity can be caused through a multitude of mechanisms and therefore cannot be captured by a single simple mechanistic paradigm. However, it may be possible to define a selected group of overarching mechanisms that might allow detection of the vast majority of developmental toxicants. Against this background, we have explored the usefulness of retinoic acid-mediated regulation of neural tube and axial patterning as a general mechanism that, when perturbed, may result in manifestations of developmental toxicity covering a large part of the malformations known to occur in experimental animals and in man. Through a literature survey, we have identified key genes in the regulation of retinoic acid homeostasis, as well as marker genes of neural tube and axial patterning, that may be used to detect developmental toxicants in in vitro systems. A retinoic acid-neural tube/axial patterning adverse outcome pathway (RA-NTA AOP) framework was designed. The framework was tested against existing data from flusilazole exposure in the rat whole embryo culture, the zebrafish embryotoxicity test, and the embryonic stem cell test. Flusilazole is known to interact with retinoic acid homeostasis, and induced common and unique NTA marker gene changes in the three test systems. Flusilazole-induced changes were similar in directionality to gene expression responses after retinoic acid exposure. It is suggested that the RA-NTA framework may provide a general tool to define mechanistic pathways and biomarkers of developmental toxicity that may be used in alternative in vitro assays for the detection of embryotoxic compounds.
Marco Schlepütz
Full Text Available The peripheral airway innervation of the lower respiratory tract of mammals is not completely functionally characterized. Recently, we have shown in rats that precision-cut lung slices (PCLS) respond to electric field stimulation (EFS) and provide a useful model to study neural airway responses in distal airways. Since airway responses are known to exhibit considerable species differences, here we examined the neural responses of PCLS prepared from mice, rats, guinea pigs, sheep, marmosets and humans. Peripheral neurons were activated either by EFS or by capsaicin. Bronchoconstriction in response to identical EFS conditions varied between species in magnitude. Frequency-response curves revealed further species-dependent differences in nerve activation in PCLS. Atropine antagonized the EFS-induced bronchoconstriction in human, guinea pig, sheep, rat and marmoset PCLS, showing cholinergic responses. Capsaicin (10 µM) caused bronchoconstriction in human (4 of 7) and guinea pig lungs only, indicating excitatory non-adrenergic non-cholinergic (eNANC) responses. However, this effect was notably smaller in human responders (30 ± 7.1%) than in guinea pig (79 ± 5.1%) PCLS. The transient receptor potential (TRP) channel blockers SKF96365 and ruthenium red antagonized airway contractions after exposure to EFS or capsaicin in guinea pigs. In conclusion, the different species show distinct patterns of nerve-mediated bronchoconstriction. In the most common experimental animals, i.e. in mice and rats, these responses differ considerably from those in humans. On the other hand, guinea pigs and marmoset monkeys mimic human responses well and may thus serve as clinically relevant models to study neural airway responses.
Zhang, Chan; Wu, Jian-Min; Liao, Min; Wang, Jun-Ling; Xu, Chao-Jin
2016-12-01
Simvastatin, a lipophilic, fermentation-derived natural statin, has been reported to treat neurological disorders such as traumatic brain injury, Parkinson's disease (PD) and Alzheimer's disease (AD). Recently, research also indicated that simvastatin could promote regeneration in the dentate gyrus of adult mice via Wnt/β-catenin signaling (Robin et al. in Stem Cell Reports 2:9-17, 2014). However, the effects of simvastatin on neural stem cells (NSCs; from the embryonic day 14.5 (E14.5) SD rat brain) and the mechanisms involved are not fully understood. Here, we investigated the effects of different doses of simvastatin on the survival, proliferation, differentiation, migration, and cell cycle of NSCs, as well as the underlying intracellular signaling pathways. The results showed that simvastatin not only inhibits the proliferation of NSCs but also enhances the rate of βIII-tubulin(+) neuronal differentiation. Additionally, we find that simvastatin could also promote NSC migration and induce cell cycle arrest at M2 phase. All these effects of simvastatin on NSCs were mimicked by an inhibitor of Rho kinase (ROCK) and a specific inhibitor of geranylgeranyl transferase (GGTase). In conclusion, these data indicate that simvastatin could promote neurogenesis of neural stem cells, and these effects were mediated through the ROCK/GGTase pathway.
Recurrent Syncope due to Esophageal Squamous Cell Carcinoma
A. Casini
2011-09-01
Full Text Available Syncope is caused by a wide variety of disorders. Recurrent syncope as a complication of malignancy is uncommon and may be difficult to diagnose and treat. A primary neck carcinoma or metastases spreading in the parapharyngeal and carotid spaces can involve the internal carotid artery and cause neurally mediated syncope with a clinical presentation resembling carotid sinus syndrome. We report the case of a 76-year-old man who suffered from recurrent syncope due to invasion of the right carotid sinus by metastases of a carcinoma of the esophagus, successfully treated by radiotherapy. In such cases, surgery, chemotherapy or radiotherapy can be performed. Because syncope may be an early sign of neck or cervical cancer, the diagnostic approach to syncope in patients with a past history of cancer should include the possibility of neck tumor recurrence or metastasis, and an oncologic workup should be considered.
A CREB-Sirt1-Hes1 Circuitry Mediates Neural Stem Cell Response to Glucose Availability
Salvatore Fusco
2016-02-01
Full Text Available Adult neurogenesis plays increasingly recognized roles in brain homeostasis and repair and is profoundly affected by energy balance and nutrients. We found that the expression of Hes-1 (hairy and enhancer of split 1) is modulated in neural stem and progenitor cells (NSCs) by extracellular glucose through the coordinated action of CREB (cyclic AMP responsive element binding protein) and Sirt-1 (Sirtuin 1), two cellular nutrient sensors. Excess glucose reduced CREB-activated Hes-1 expression and resulted in impaired cell proliferation. CREB-deficient NSCs expanded poorly in vitro and did not respond to glucose availability. Elevated glucose also promoted Sirt-1-dependent repression of the Hes-1 promoter. Conversely, in low glucose, CREB replaced Sirt-1 on the chromatin associated with the Hes-1 promoter, enhancing Hes-1 expression and cell proliferation. Thus, the glucose-regulated antagonism between CREB and Sirt-1 for Hes-1 transcription participates in the metabolic regulation of neurogenesis.
Sex differences in the neural circuit that mediates female sexual receptivity
Flanagan-Cato, Loretta M.
2011-01-01
Female sexual behavior in rodents, typified by the lordosis posture, is hormone-dependent and sex-specific. Ovarian hormones control this behavior via receptors in the hypothalamic ventromedial nucleus (VMH). This review considers the sex differences in the morphology, neurochemistry and neural circuitry of the VMH to gain insights into the mechanisms that control lordosis. The VMH is larger in males compared with females, due to more synaptic connections. Another sex difference is the responsiveness to estradiol, with males exhibiting muted, and in some cases reverse, effects compared with females. The lack of lordosis in males may be explained by differences in synaptic organization or estrogen responsiveness, or both, in the VMH. However, given that damage to other brain regions unmasks lordosis behavior in males, a male-typical VMH is unlikely the main factor that prevents lordosis. In females, key questions remain regarding the mechanisms whereby ovarian hormones modulate VMH function to promote lordosis. PMID:21338620
AgRP Neural Circuits Mediate Adaptive Behaviors in the Starved State
Padilla, Stephanie L.; Qiu, Jian; Soden, Marta E.; Sanz, Elisenda; Nestor, Casey C; Barker, Forrest D.; Quintana, Albert; Zweifel, Larry S.; Rønnekleiv, Oline K.; Kelly, Martin J.; Palmiter, Richard D.
2016-01-01
In the face of starvation, animals will engage in high-risk behaviors that would normally be considered maladaptive. Starving rodents, for example, will forage in areas that are more susceptible to predators and will also modulate aggressive behavior within a territory of limited or depleted nutrients. The neural basis of these adaptive behaviors likely involves circuits that link innate feeding, aggression, and fear. Hypothalamic AgRP neurons are critically important for driving feeding and project axons to brain regions implicated in aggression and fear. Using circuit-mapping techniques, we define a disynaptic network originating from a subset of AgRP neurons that projects to the medial nucleus of the amygdala and then to the principal bed nucleus of the stria terminalis, which plays a role in suppressing territorial aggression and reducing contextual fear. We propose that AgRP neurons serve as a master switch capable of coordinating behavioral decisions relative to internal state and environmental cues. PMID:27019015
Büschges, Ansgar
2005-03-01
It is well established that locomotor patterns result from the interaction between central pattern generating networks in the nervous system, local feedback from sensory neurons about movements and forces generated in the locomotor organs, and coordinating signals from neighboring segments or appendages. This review addresses the issue of how the movements of multi-segmented locomotor organs are coordinated and provides an overview of recent advances in understanding sensory control and the internal organization of central pattern generating networks that operate multi-segmented locomotor organs, such as a walking leg. Findings from the stick insect and the cat are compared and discussed in relation to new findings on the lamprey swimming network. These findings support the notion that common schemes of sensory feedback are used for generating walking and that central neural networks controlling multi-segmented locomotor organs generally encompass multiple central pattern generating networks that correspond with the segmental structure of the locomotor organ.
Katherine Rotker
2016-01-01
Full Text Available Varicocele recurrence is one of the most common complications associated with varicocele repair. A systematic review was performed to evaluate varicocele recurrence rates, anatomic causes of recurrence, and methods of management of recurrent varicoceles. The PubMed database was searched using the keywords "recurrent" and "varicocele" as well as the MeSH criteria "recurrent" and "varicocele." Articles were excluded if they were not in English, represented single case reports, focused solely on subclinical varicocele, or focused solely on a pediatric population (age <18). Rates of recurrence vary with the technique of varicocele repair, from 0% to 35%. The anatomy of recurrence can be defined by venography. Management of varicocele recurrence can be surgical or via embolization.
Tzou, Wen-Shyong; Lo, Ying-Tsang; Pai, Tun-Wen; Hu, Chin-Hwa; Li, Chung-Hao
2014-07-01
Notch signaling controls cell fate decisions and regulates multiple biological processes, such as cell proliferation, differentiation, and apoptosis. Computational modeling of deterministic simulations of Notch signaling has provided important insight into the possible molecular mechanisms that underlie the switch from the undifferentiated stem cell to the differentiated cell. Here, we constructed a stochastic model of Notch signaling containing Hes1, Notch1, RBP-Jk, Mash1, Hes6, and Delta. mRNA and protein were represented as discrete states, and 334 reactions were employed, with each biochemical reaction simulated using a graphics processing unit-accelerated Gillespie scheme. We tuned 40 molecular mechanisms and revealed several potential mediators capable of enabling the switch from cell stemness to differentiation. These effective mediators encompass different aspects of cellular regulation, including the nuclear transport of Hes1, the degradation of mRNA (Hes1 and Notch1) and protein (Notch1), the association between RBP-Jk and the Notch intracellular domain (NICD), and the cleavage efficiency of the NICD. These mechanisms overlap with many modifiers that have only recently been discovered to modulate the Notch signaling output, including microRNA action, ubiquitin-mediated proteolysis, and the competitive binding of the RBP-Jk-DNA complex. Moreover, we identified the degradation of Hes1 mRNA and the nuclear transport of Hes1 as the dominant mechanisms capable of abolishing the cell state transition induced by other molecular mechanisms.
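As a hedged sketch of the simulation machinery described above: the Gillespie scheme that the authors accelerate on a GPU is, at its core, the direct-method stochastic simulation algorithm. The minimal single-threaded version below uses an illustrative birth-death process for one mRNA species; the species name and rate constants are invented for the example and are not taken from the paper's 334-reaction model.

```python
import random

def gillespie(x0, reactions, t_max, seed=0):
    """Minimal direct-method Gillespie stochastic simulation.

    x0        -- dict of initial species counts
    reactions -- list of (propensity_fn, state_change) pairs; propensity_fn(x)
                 returns the reaction's rate, and state_change maps species to
                 the integer increment applied when the reaction fires
    """
    rng = random.Random(seed)
    x, t = dict(x0), 0.0
    trajectory = [(t, dict(x))]
    while t < t_max:
        props = [a(x) for a, _ in reactions]
        total = sum(props)
        if total <= 0:
            break  # no reaction can fire any more
        t += rng.expovariate(total)   # exponential waiting time to next event
        pick = rng.random() * total   # choose a reaction weighted by propensity
        acc = 0.0
        for (_, change), p in zip(reactions, props):
            acc += p
            if pick < acc:
                for species, delta in change.items():
                    x[species] += delta
                break
        trajectory.append((t, dict(x)))
    return trajectory

# Illustrative birth-death process: constant transcription, first-order decay.
traj = gillespie(
    {"mRNA": 0},
    [(lambda x: 2.0, {"mRNA": +1}),               # transcription at rate 2.0
     (lambda x: 0.1 * x["mRNA"], {"mRNA": -1})],  # degradation, 0.1 per copy
    t_max=100.0,
)
```

The per-reaction loop is exactly what GPU implementations parallelize: propensity evaluation and reaction selection are independent across simulation replicates.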
Research on Estimation of Ads Click Rate Based on Recurrent Neural Network
陈巧红; 孙超红; 余仕敏; 贾宇波
2016-01-01
In order to improve the accuracy of ad click-through rate estimation and thus increase the revenue of online advertising, feature extraction and dimensionality reduction were applied to the advertising data. An improved recurrent neural network based on LSTM was then used as the click-through rate estimation model, with stochastic gradient descent as the optimization algorithm and cross entropy as the objective function. Experiments show that, compared with logistic regression, a BP neural network, and a conventional recurrent neural network, the improved LSTM-based recurrent neural network can effectively improve the accuracy of click-through rate estimation. This not only helps advertising service providers develop reasonable pricing strategies, but also helps advertisers place ads effectively, maximizing the revenue of each role in the advertising industry chain.
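The training setup named in the abstract, cross entropy as the objective minimized by stochastic gradient descent, can be illustrated independently of the LSTM architecture. The sketch below fits a single logistic output unit, a stand-in for the network's final layer, on two hypothetical ad features; the feature names and data are invented for the example.

```python
import math
import random

def sgd_logistic(data, lr=0.1, epochs=200, seed=0):
    """Fit p(click) = sigmoid(w.x + b) by SGD on the cross-entropy loss."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        rng.shuffle(data)  # "stochastic": visit examples in random order
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            # d/dz of -[y*log(p) + (1-y)*log(1-p)] is simply (p - y)
            g = p - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Hypothetical features: [user_clicked_this_advertiser_before, ad_shown_at_top]
data = [([1.0, 1.0], 1), ([1.0, 0.0], 1), ([0.0, 1.0], 0), ([0.0, 0.0], 0)]
w, b = sgd_logistic(list(data))

def p_click(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

In the paper's model the scalar `z` would instead be produced by the LSTM's hidden state; the loss and the update rule are unchanged.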
Seymour, Jenessa L; Low, Kathy A; Maclin, Edward L; Chiarelli, Antonio M; Mathewson, Kyle E; Fabiani, Monica; Gratton, Gabriele; Dye, Matthew W G
2017-01-01
Theories of brain plasticity propose that, in the absence of input from the preferred sensory modality, some specialized brain areas may be recruited when processing information from other modalities, which may result in improved performance. The Useful Field of View task has previously been used to demonstrate that early deafness positively impacts peripheral visual attention. The current study sought to determine the neural changes associated with those deafness-related enhancements in visual performance. Based on previous findings, we hypothesized that recruitment of posterior portions of Brodmann area 22, a brain region most commonly associated with auditory processing, would be correlated with peripheral selective attention as measured using the Useful Field of View task. We report data from severe to profoundly deaf adults and normal-hearing controls who performed the Useful Field of View task while cortical activity was recorded using the event-related optical signal. Behavioral performance, obtained in a separate session, showed that deaf subjects had lower thresholds (i.e., better performance) on the Useful Field of View task. The event-related optical data indicated greater activity for the deaf adults than for the normal-hearing controls during the task in the posterior portion of Brodmann area 22 in the right hemisphere. Furthermore, the behavioral thresholds correlated significantly with this neural activity. This work provides further support for the hypothesis that cross-modal plasticity in deaf individuals appears in higher-order auditory cortices, whereas no similar evidence was obtained for primary auditory areas. It is also the only neuroimaging study to date that has linked deaf-related changes in the right temporal lobe to visual task performance outside of the imaging environment. The event-related optical signal is a valuable technique for studying cross-modal plasticity in deaf humans. The non-invasive and relatively quiet characteristics of
Politis, Marios; Wu, Kit; Loane, Clare; Quinn, Niall P; Brooks, David J; Rehncrona, Stig; Bjorklund, Anders; Lindvall, Olle; Piccini, Paola
2010-06-30
Troublesome involuntary movements in the absence of dopaminergic medication, so-called off-medication dyskinesias, are a serious adverse effect of fetal neural grafts that hinders the development of cell-based therapies for Parkinson's disease. The mechanisms underlying these dyskinesias are not well understood, and it is not known whether they are the same as in the dyskinesias induced by l-dopa treatment. Using in vivo brain imaging, we show excessive serotonergic innervation in the grafted striatum of two patients with Parkinson's disease, who had exhibited major motor recovery after transplantation with dopamine-rich fetal mesencephalic tissue but had later developed off-medication dyskinesias. The dyskinesias were markedly attenuated by systemic administration of a serotonin [5-hydroxytryptamine (5-HT)] receptor (5-HT(1A)) agonist, which dampens transmitter release from serotonergic neurons, indicating that the dyskinesias were caused by the serotonergic hyperinnervation. Our observations suggest strategies for avoiding and treating graft-induced dyskinesias that result from cell therapies for Parkinson's disease with fetal tissue or stem cells.
Patient-specific models of microglia-mediated engulfment of synapses and neural progenitors
Sellgren, C M; Sheridan, S D; Gracias, J; Xuan, D; Fu, T; Perlis, R H
2017-01-01
Engulfment of synapses and neural progenitor cells (NPCs) by microglia is critical for the development and maintenance of proper brain circuitry, and has been implicated in neurodevelopmental as well as neurodegenerative disease etiology. We have developed and validated models of these mechanisms by reprogramming microglia-like cells from peripheral blood mononuclear cells, and combining them with NPCs and neurons derived from induced pluripotent stem cells to create patient-specific cellular models of complement-dependent synaptic pruning and elimination of NPCs. The resulting microglia-like cells express appropriate markers and function as primary human microglia, while patient-matched macrophages differ markedly. As a demonstration of disease-relevant application, we studied the role of C4, recently implicated in schizophrenia, in engulfment of synaptic structures by human microglia. The ability to create complete patient-specific cellular models of critical microglial functions utilizing samples taken during a single clinical visit will extend the ability to model central nervous system disease while facilitating high-throughput screening. PMID:27956744
Sex differences in the neural mechanisms mediating addiction: a new synthesis and hypothesis
Becker Jill B
2012-06-01
In this review we propose that there are sex differences in how men and women enter onto the path that can lead to addiction. Males are more likely than females to engage in risky behaviors that include experimenting with drugs of abuse, and susceptible individuals are drawn into the spiral that can eventually lead to addiction. Women and girls are more likely to begin taking drugs as self-medication to reduce stress or alleviate depression. For this reason, women enter the downward spiral further along the path to addiction, and so transition to addiction more rapidly. We propose that this sex difference is due, at least in part, to sex differences in the organization of the neural systems responsible for motivation and addiction. Additionally, we suggest that sex differences in these systems and their functioning are accentuated with addiction. In the current review we discuss the historical, cultural, social and biological bases of sex differences in addiction, with an emphasis on sex differences in the neurotransmitter systems that are implicated.
Curiosity and Cure: Translational Research Strategies for Neural Repair-Mediated Rehabilitation
Dobkin, Bruce H.
2014-01-01
Clinicians who seek interventions for neural repair in patients with paralysis and other impairments may extrapolate the results of cell culture and rodent experiments into the framework of a preclinical study. These experiments, however, must be interpreted within the context of the model and the highly constrained hypothesis and manipulation being tested. Rodent models of repair for stroke and spinal cord injury offer examples of potential pitfalls in the interpretation of results from developmental gene activation, transgenic mice, endogenous neurogenesis, cellular transplantation, axon regeneration and remyelination, dendritic proliferation, activity-dependent adaptations, skills learning, and behavioral testing. Preclinical experiments that inform the design of human trials ideally include a lesion of etiology, volume and location that reflects the human disease; examine changes induced by injury and by repair procedures both near and remote from the lesion; distinguish between reactive molecular and histologic changes versus changes critical to repair cascades; employ explicit training paradigms for the reacquisition of testable skills; correlate morphologic and physiologic measures of repair with behavioral measures of task reacquisition; reproduce key results in more than one laboratory, in different strains or species of rodent, and in a larger mammal; and generalize the results across several disease models, such as axonal regeneration in a stroke and spinal cord injury platform. Collaborations between basic and clinical scientists in the development of translational animal models of injury and repair can propel experiments for ethical bench-to-bedside therapies to augment the rehabilitation of disabled patients. PMID:17514711
The neural mediators of kindness-based meditation: a theoretical model
Jennifer Streiffer Mascaro
2015-02-01
Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another's affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work.
A leg-local neural mechanism mediates the decision to search in stick insects.
Berg, Eva M; Hooper, Scott L; Schmidt, Joachim; Büschges, Ansgar
2015-08-01
In many animals, individual legs can either function independently, as in behaviors such as scratching or searching, or be used in coordinated patterns with other legs, as in walking or climbing. While the control of walking has been extensively investigated, the mechanisms mediating the behavioral choice to activate individual legs independently are poorly understood. We examined this issue in stick insects, in which each leg can independently produce a rhythmic searching motor pattern if it doesn't find a foothold [1-4]. We show here that one non-spiking interneuron, I4, controls searching behavior in individual legs. One I4 is present in each hemi-segment of the three thoracic ganglia [5, 6]. Search-inducing sensory input depolarizes I4. I4 activity was necessary and sufficient to initiate and maintain searching movements. When substrate contact was provided, I4 depolarization no longer induced searching. I4 therefore both integrates search-inducing sensory input and is gated out by other sensory input (substrate contact). Searching thus occurs only when it is behaviorally appropriate. I4 depolarization never elicited stepping. These data show that individual, locally activated neurons can mediate the behavioral choice to use individual legs independently. This mechanism may be particularly important in insects' front legs, which can function independently like vertebrate arms and hands [7]. Similar local command mechanisms that selectively activate the pattern generators controlling repeated functional units such as legs or body segments may be present in other systems.
AKT signaling mediates IGF-I survival actions on otic neural progenitors.
Maria R Aburto
BACKGROUND: Otic neurons and sensory cells derive from common progenitors whose transition into mature cells requires the coordination of cell survival, proliferation and differentiation programmes. Neurotrophic support and survival of post-mitotic otic neurons have been intensively studied, but the bases underlying the regulation of programmed cell death in immature proliferative otic neuroblasts remain poorly understood. The protein kinase AKT acts as a node, playing a critical role in controlling cell survival and cell cycle progression. AKT is activated by trophic factors, including insulin-like growth factor I (IGF-I), through the generation of the lipidic second messenger phosphatidylinositol 3-phosphate by phosphatidylinositol 3-kinase (PI3K). Here we have investigated the role of IGF-dependent activation of the PI3K-AKT pathway in the maintenance of otic neuroblasts. METHODOLOGY/PRINCIPAL FINDINGS: By using a combination of organotypic cultures of chicken (Gallus gallus) otic vesicles and acoustic-vestibular ganglia, Western blotting, immunohistochemistry and in situ hybridization, we show that IGF-I activation of AKT protects neural progenitors from programmed cell death. IGF-I maintains otic neuroblasts in an undifferentiated and proliferative state, which is characterised by the upregulation of the forkhead box M1 (FoxM1) transcription factor. By contrast, our results indicate that post-mitotic p27(Kip)-positive neurons become IGF-I independent as they extend their neuronal processes. Neurons gradually reduce their expression of the Igf1r, while they increase that of the neurotrophin receptor, TrkC. CONCLUSIONS/SIGNIFICANCE: Proliferative otic neuroblasts depend on activation of the PI3K-AKT pathway by IGF-I for survival during the otic neuronal progenitor phase of early inner ear development.
Povlsen, Gro Klitgaard; Berezin, Vladimir; Bock, Elisabeth
2008-01-01
The neural cell adhesion molecule (NCAM) plays important roles in neuronal development, regeneration, and synaptic plasticity. NCAM homophilic binding mediates cell adhesion and induces intracellular signals, in which the fibroblast growth factor receptor plays a prominent role. Recent studies on axon guidance in Drosophila suggest that NCAM also regulates the epidermal growth factor receptor (EGFR) (Molecular and Cellular Neuroscience, 28, 2005, 141). A possible interaction between NCAM and EGFR in mammalian cells has not been investigated. The present study demonstrates for the first time … does not require NCAM-mediated fibroblast growth factor receptor activation.
Mapping the neural systems that mediate the Paced Auditory Serial Addition Task (PASAT).
Lockwood, Alan H; Linn, Richard T; Szymanski, Herman; Coad, Mary Lou; Wack, David S
2004-01-01
The paced auditory serial addition task (PASAT), in which subjects hear a number-string and add the two most-recently heard numbers, is a neuropsychological test sensitive to cerebral dysfunction. We mapped the brain regions activated by the PASAT using positron emission tomography (PET) and 15O-water to measure cerebral blood flow. We parsed the PASAT by mapping sites activated by immediate repetition of numbers and by repetition of the prior number after the presentation of the next number in the series. The PASAT activated dispersed non-contiguous foci in the superior temporal gyri, bifrontal and biparietal sites, the anterior cingulate and bilateral cerebellar sites. These sites are consistent with the elements of the task that include auditory perception and processing, speech production, working memory, and attention. Sites mediating addition were not identified. The extent of the sites activated during the performance of the PASAT accounts for the sensitivity of this test and justifies its use in a variety of seemingly disparate conditions.
Zukor, Hillel; Song, Wei; Liberman, Adrienne; Mui, Jeannie; Vali, Hojatollah; Fillebeen, Carine; Pantopoulos, Kostas; Wu, Ting-Di; Guerquin-Kern, Jean-Luc; Schipper, Hyman M
2009-05-01
Oxidative stress, deposition of non-transferrin iron, and mitochondrial insufficiency occur in the brains of patients with Alzheimer disease (AD) and Parkinson disease (PD). We previously demonstrated that heme oxygenase-1 (HO-1) is up-regulated in AD and PD brain and promotes the accumulation of non-transferrin iron in astroglial mitochondria. Herein, dynamic secondary ion mass spectrometry (SIMS) and other techniques were employed to ascertain (i) the impact of HO-1 over-expression on astroglial mitochondrial morphology in vitro, (ii) the topography of aberrant iron sequestration in astrocytes over-expressing HO-1, and (iii) the role of iron regulatory proteins (IRP) in HO-1-mediated iron deposition. Astroglial hHO-1 over-expression induced cytoplasmic vacuolation, mitochondrial membrane damage, and macroautophagy. HO-1 promoted trapping of redox-active iron and sulfur within many cytopathological profiles without impacting ferroportin, transferrin receptor, ferritin, and IRP2 protein levels or IRP1 activity. Thus, HO-1 activity promotes mitochondrial macroautophagy and sequestration of redox-active iron in astroglia independently of classical iron mobilization pathways. Glial HO-1 may be a rational therapeutic target in AD, PD, and other human CNS conditions characterized by the unregulated deposition of brain iron.
Neural evidence for competition-mediated suppression in the perception of a single object.
Cacciamani, Laura; Scalf, Paige E; Peterson, Mary A
2015-11-01
Multiple objects compete for representation in visual cortex. Competition may also underlie the perception of a single object. Computational models implement object perception as competition between units on opposite sides of a border. The border is assigned to the winning side, which is perceived as an object (or "figure"), whereas the other side is perceived as a shapeless ground. Behavioral experiments suggest that the ground is inhibited to a degree that depends on the extent to which it competed for object status, and that this inhibition is relayed to low-level brain areas. Here, we used fMRI to assess activation for ground regions of task-irrelevant novel silhouettes presented in the left or right visual field (LVF or RVF) while participants performed a difficult task at fixation. Silhouettes were designed so that the insides would win the competition for object status. The outsides (grounds) suggested portions of familiar objects in half of the silhouettes and novel objects in the other half. Because matches to object memories affect the competition, these two types of silhouettes operationalized, respectively, high competition and low competition from the grounds. The results showed that activation corresponding to ground regions was reduced for high- versus low-competition silhouettes in V4, where receptive fields (RFs) are large enough to encompass the familiar objects in the grounds, and in V1/V2, where RFs are much smaller. These results support a theory of object perception involving competition-mediated ground suppression and feedback from higher to lower levels. This pattern of results was observed in the left hemisphere (RVF), but not in the right hemisphere (LVF). One explanation of the lateralized findings is that task-irrelevant silhouettes in the RVF captured attention, allowing us to observe these effects, whereas those in the LVF did not. Experiment 2 provided preliminary behavioral evidence consistent with this possibility.
Ketamine, propofol and the EEG: a neural field analysis of HCN1-mediated interactions
Ingo eBojak
2013-04-01
Ketamine and propofol are two well-known, powerful anesthetic agents, yet at first sight this appears to be their only commonality. Ketamine is a dissociative anesthetic agent whose main mechanism of action is considered to be N-methyl-D-aspartate (NMDA) antagonism, whereas propofol is a general anesthetic agent assumed to primarily potentiate currents gated by γ-aminobutyric acid type A (GABAA) receptors. However, several experimental observations suggest a closer relationship. First, the effect of ketamine on the electroencephalogram (EEG) is markedly changed in the presence of propofol: on its own, ketamine increases theta (4–8 Hz) and decreases alpha (8–13 Hz) oscillations, whereas it induces a significant shift to beta band frequencies (13–30 Hz) in the presence of propofol. Second, both ketamine and propofol cause inhibition of the inward pacemaker current Ih by binding to the corresponding hyperpolarization-activated cyclic nucleotide-gated potassium channel 1 (HCN1) subunit. The resulting effect is a hyperpolarization of the neuron's resting membrane potential. Third, the ability of both ketamine and propofol to induce hypnosis is reduced in HCN1-knockout mice. Here we show that one can theoretically understand the observed spectral changes of the EEG based on HCN1-mediated hyperpolarizations alone, without involving the supposed main mechanisms of action of these drugs through NMDA and GABAA receptors, respectively. On the basis of our successful EEG model we conclude that ketamine and propofol should be antagonistic to each other in their interaction at HCN1 subunits. Such a prediction is in accord with the results of a clinical experiment in which ketamine and propofol were found to interact in an infra-additive manner with respect to the endpoints of hypnosis and immobility.
Takahiro Ishimoto
The aim of the present study is to clarify the functional expression and physiological role in neural progenitor cells (NPCs) of the carnitine/organic cation transporter OCTN1/SLC22A4, which accepts the naturally occurring food-derived antioxidant ergothioneine (ERGO) as a substrate in vivo. Real-time PCR analysis revealed that mRNA expression of OCTN1 was much higher than that of other organic cation transporters in mouse cultured cortical NPCs. Immunocytochemical analysis showed colocalization of OCTN1 with the NPC marker nestin in cultured NPCs and mouse embryonic carcinoma P19 cells differentiated into neural progenitor-like cells (P19-NPCs). These cells exhibited time-dependent [3H]ERGO uptake. These results demonstrate that OCTN1 is functionally expressed in murine NPCs. Cultured NPCs and P19-NPCs formed neurospheres from clusters of proliferating cells in a culture time-dependent manner. Exposure of cultured NPCs to ERGO or other antioxidants (edaravone and ascorbic acid) led to a significant decrease in the area of neurospheres with concomitant elimination of intracellular reactive oxygen species. Transfection of P19-NPCs with small interfering RNA for OCTN1 markedly promoted formation of neurospheres with a concomitant decrease of [3H]ERGO uptake. On the other hand, exposure of cultured NPCs to ERGO markedly increased the number of cells immunoreactive for the neuronal marker βIII-tubulin, but decreased the number immunoreactive for the astroglial marker glial fibrillary acidic protein (GFAP), with concomitant up-regulation of the neuronal differentiation activator gene Math1. Interestingly, edaravone and ascorbic acid did not affect such differentiation of NPCs, in contrast to the case of proliferation. Knockdown of OCTN1 increased the number of cells immunoreactive for GFAP, but decreased the number immunoreactive for βIII-tubulin, with concomitant down-regulation of Math1 in P19-NPCs. Thus, OCTN1-mediated uptake of ERGO in NPCs inhibits
A Generalized Predictive Control Method Using a Recurrent Fuzzy Neural Network
李国勇; 刘鹏
2012-01-01
A recurrent fuzzy neural network (RFNN) is constructed in which a vector adjustment layer enhances the network's ability to process input information. Based on this RFNN, a discrete multi-step fuzzy prediction model of the nonlinear system is established. The model is used to predict the system's output, and the corresponding predictive control law is obtained with an existing predictive control algorithm. Simulation results indicate that the method achieves high control precision as well as a degree of disturbance rejection.
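The predict-then-optimize loop described above can be sketched with a trivial linear one-step model standing in for the RFNN: roll the model forward over a horizon for each candidate input, then apply the input that minimizes the predicted tracking error. The plant, gains, horizon, and candidate grid below are invented for illustration.

```python
def predict(model, y, u):
    """One-step prediction y[k+1] = a*y[k] + b*u[k], a stand-in for the RFNN."""
    a, b = model
    return a * y + b * u

def receding_horizon(model, y, ref, horizon=5, candidates=None):
    """Choose the constant input minimizing the predicted tracking error."""
    if candidates is None:
        candidates = [i * 0.1 for i in range(-20, 21)]
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        yp, cost = y, 0.0
        for _ in range(horizon):
            yp = predict(model, yp, u)   # multi-step prediction with the model
            cost += (ref - yp) ** 2      # accumulated tracking error
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Plant y[k+1] = 0.8*y[k] + 0.5*u[k]; the model is assumed exact for illustration.
model, y, ref = (0.8, 0.5), 0.0, 1.0
for _ in range(30):
    u = receding_horizon(model, y, ref)  # apply only the first input, then re-plan
    y = 0.8 * y + 0.5 * u
```

In the paper, the learned RFNN plays the role of `predict`, and the control law is derived analytically rather than by grid search; the receding-horizon structure is the same.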
Wastewater treatment control method based on recurrent fuzzy neural network
韩改堂; 乔俊飞; 韩红桂
2016-01-01
To address the nonlinear and highly time-varying nature of wastewater treatment processes, a multivariable control method based on a recurrent fuzzy neural network (RFNN) is proposed. The RFNN controller adaptively attains the required control accuracy for the operating variables. It trains the network parameters with an adaptive learning rate and a momentum term added to the conventional BP learning algorithm, which helps the network avoid local optima and improves the control accuracy of the system. Finally, dynamic simulation experiments on the Benchmark Simulation Model no. 1 (BSM1) platform, controlling the dissolved oxygen concentration in the fifth compartment and the nitrate nitrogen concentration in the second compartment, validate the effectiveness of the method. Compared with PID, a feedforward neural network, and a conventional recurrent neural network, the results show that this control method effectively improves the adaptive control precision of the system.
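The parameter update described above, conventional BP gradient descent augmented with a momentum term and an adaptive learning rate, can be illustrated on a stand-in quadratic loss. The specific adaptation rule used here (reject a step that raises the loss, cut the rate, and reset the momentum; otherwise grow the rate slightly) is a common heuristic and is not necessarily the one the authors use.

```python
def loss(w):
    """Stand-in for the network's training loss: a simple quadratic bowl."""
    return (w[0] - 3) ** 2 + (w[1] + 1) ** 2

def grad(w):
    return [2 * (w[0] - 3), 2 * (w[1] + 1)]

w, v, lr = [0.0, 0.0], [0.0, 0.0], 0.05
current = loss(w)
for _ in range(300):
    g = grad(w)
    v_new = [0.9 * vi - lr * gi for vi, gi in zip(v, g)]  # momentum term
    w_new = [wi + vi for wi, vi in zip(w, v_new)]
    trial = loss(w_new)
    if trial > current:
        lr *= 0.7        # bad step: reject it, cut the learning rate,
        v = [0.0, 0.0]   # and reset the momentum
    else:
        w, v, current = w_new, v_new, trial
        lr = min(lr * 1.05, 0.5)  # good step: grow the rate slightly
```

The momentum term smooths the descent direction across iterations, while the rate adaptation keeps steps as large as the loss surface allows; together they are what helps BP escape slow plateaus without diverging.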
Kursad Zorlu
2014-07-01
The aim of this research is to estimate the effect of workplace deviance behavior on organizational citizenship and job satisfaction, and to examine the mediating role of perceived organizational support in the possible relations. The study first provides the hypotheses and literature background, and then presents the research itself, based on a questionnaire administered to employees of the Kirsehir Municipality. Validity and reliability tests were performed successfully, and the artificial neural network method was used for analysis. Alongside the averages and correlation values of the variables, the artificial neural networks were modeled by determining the inputs and outputs. According to the findings, workplace deviance behavior has a negative impact on organizational citizenship and job satisfaction, and perceived organizational support can act as a mediator in eliminating this effect. Considering the use of artificial neural networks as the analysis method and the difficulties in measuring workplace deviance behavior, the findings can be regarded as having a certain level of originality within the management discipline.
Chung, Taemoon; Na, Juri; Kim, Young-Il; Chang, Da-Young; Kim, Young Il; Kim, Hyeonjin; Moon, Ho Eun; Kang, Keon Wook; Lee, Dong Soo; Chung, June-Key; Kim, Sung-Soo; Suh-Kim, Haeyoung; Paek, Sun Ha; Youn, Hyewon
2016-01-01
We investigated a therapeutic strategy for recurrent malignant gliomas using mesenchymal stem cells (MSC), expressing cytosine deaminase (CD), and prodrug 5-Fluorocytosine (5-FC) as a more specific and less toxic option. MSCs are emerging as a novel cell therapeutic agent with a cancer-targeting property, and CD is considered a promising enzyme in cancer gene therapy which can convert non-toxic 5-FC to toxic 5-Fluorouracil (5-FU). Therefore, use of prodrug 5-FC can minimize normal cell toxicity. Analyses of microarrays revealed that targeting DNA damage and its repair is a selectable option for gliomas after the standard chemo/radio-therapy. 5-FU is the most frequently used anti-cancer drug, which induces DNA breaks. Because dihydropyrimidine dehydrogenase (DPD) was reported to be involved in 5-FU metabolism to block DNA damage, we compared the survival rate with 5-FU treatment and the level of DPD expression in 15 different glioma cell lines. DPD-deficient cells showed higher sensitivity to 5-FU, and the regulation of DPD level by either siRNA or overexpression was directly related to the 5-FU sensitivity. For MSC/CD with 5-FC therapy, DPD-deficient cells such as U87MG, GBM28, and GBM37 showed higher sensitivity compared to DPD-high U373 cells. Effective inhibition of tumor growth was also observed in an orthotopic mouse model using DPD-deficient U87MG, indicating that DPD gene expression is indeed closely related to the efficacy of MSC/CD-mediated 5-FC therapy. Our results suggested that DPD can be used as a biomarker for selecting glioma patients who may possibly benefit from this therapy.
Kam, Nadine Wong Shi; Jan, Edward; Kotov, Nicholas A
2009-01-01
One of the key challenges in engineering neural interfaces is to minimize the immune response toward implanted electrodes. One potential approach is to manufacture materials that bear greater structural resemblance to living tissues and to utilize neural stem cells. The unique electrical and mechanical properties of carbon nanotubes make them excellent candidates for neural interfaces, but their adoption hinges on finding approaches for "humanizing" their composites. Here we demonstrated the fabrication of layer-by-layer assembled composites from single-walled carbon nanotubes (SWNTs) and laminin, an essential component of the human extracellular matrix. Laminin-SWNT thin films were found to be conducive to neural stem cell (NSC) differentiation and suitable for their successful excitation. We observed extensive formation of functional neural networks, as indicated by the presence of synaptic connections. Calcium imaging of the NSCs revealed the generation of action potentials upon the application of a lateral current through the SWNT substrate. These results indicate that the protein-SWNT composite can serve as a materials foundation for neural electrodes whose chemical structure is better adapted to long-term integration with neural tissue.
Hjarvard, Stig
2017-01-01
Mediatization research shares media effects studies' ambition of answering the difficult questions with regard to whether and how media matter and influence contemporary culture and society. The two approaches nevertheless differ fundamentally in that mediatization research seeks answers to these general questions by distinguishing between two concepts: mediation and mediatization. The media effects tradition generally considers the effects of the media to be a result of individuals being exposed to media content, i.e. effects are seen as an outcome of mediated communication. Mediatization research is concerned with long-term structural changes involving media, culture, and society, i.e. the influences of the media are understood in relation to how media are implicated in social and cultural changes and how these processes come to create new conditions for human communication and interaction.
Powell, Anna M; Nyirjesy, Paul
2014-10-01
Vulvovaginitis (VV) is one of the most commonly encountered problems for a gynecologist. Many women frequently self-treat with over-the-counter medications and may present to their health-care provider after a treatment failure. Vulvovaginal candidiasis, bacterial vaginosis, and trichomoniasis may occur as discrete or recurrent episodes, and have been associated with significant treatment cost and morbidity. We present an update on diagnostic capabilities and treatment modalities that address recurrent and refractory episodes of VV.
Schulz, Florian; Lutz, David; Rusche, Norman; Bastús, Neus G.; Stieben, Martin; Höltig, Michael; Grüner, Florian; Weller, Horst; Schachner, Melitta; Vossmeyer, Tobias; Loers, Gabriele
2013-10-01
The neural cell adhesion molecule L1 is involved in nervous system development and promotes regeneration in animal models of acute and chronic injury of the adult nervous system. To translate these conducive functions into therapeutic approaches, a 22-mer peptide that encompasses a minimal and functional L1 sequence of the third fibronectin type III domain of murine L1 was identified and conjugated to gold nanoparticles (AuNPs) to obtain constructs that interact homophilically with the extracellular domain of L1 and trigger the cognate beneficial L1-mediated functions. Covalent conjugation was achieved by reacting mixtures of two cysteine-terminated forms of this L1 peptide and thiolated poly(ethylene) glycol (PEG) ligands (~2.1 kDa) with citrate stabilized AuNPs of two different sizes (~14 and 40 nm in diameter). By varying the ratio of the L1 peptide-PEG mixtures, an optimized layer composition was achieved that resulted in the expected homophilic interaction of the AuNPs. These AuNPs were stable as tested over a time period of 30 days in artificial cerebrospinal fluid and interacted with the extracellular domain of L1 on neurons and Schwann cells, as could be shown by using cells from wild-type and L1-deficient mice. In vitro, the L1-derivatized particles promoted neurite outgrowth and survival of neurons from the central and peripheral nervous system and stimulated Schwann cell process formation and proliferation. These observations raise the hope that, in combination with other therapeutic approaches, L1 peptide-functionalized AuNPs may become a useful tool to ameliorate the deficits resulting from acute and chronic injuries of the mammalian nervous system.
Sliding mode control of mobile robots based on recurrent fuzzy-neural network
李艳东; 王宗义; 朱玲; 刘涛
2011-01-01
A control structure is proposed for the trajectory tracking control of nonholonomic mobile robots. It integrates a backstepping kinematic controller with a sliding mode controller based on an Adaptive Dynamic Recurrent Fuzzy Neural Network (ADRFNN). A genetic algorithm is used to optimize the parameters of the kinematic controller, effectively suppressing the excessive initial speed and output torque caused by a large initial posture error. The ADRFNN provides on-line estimation of the uncertain nonlinear dynamics, greatly reducing the uncertainty estimation error. By combining the ADRFNN with an adaptive robust controller, the method not only handles both parametric and non-parametric uncertainties of the mobile robot but also eliminates the input chattering of sliding mode control. The stability and convergence of the control system are proved by Lyapunov theory. Simulation results demonstrate the effectiveness of the proposed method.
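As a toy illustration of the control structure summarized above, the sketch below applies sliding-mode tracking control with an online adaptive estimate of the unknown dynamics to a one-state plant. The scalar adaptive law stands in for the ADRFNN, and the plant, gains, and reference signal are illustrative assumptions, not the paper's design.

```python
import numpy as np

def simulate(steps=2000, dt=0.005, k=2.0, gamma=20.0):
    """Sliding-mode tracking of x_ref(t) = sin(t) for the plant
    x_dot = u + d(t), where d(t) is unknown to the controller."""
    x, d_hat = 0.0, 0.0
    errs = []
    for i in range(steps):
        t = i * dt
        x_ref, x_ref_dot = np.sin(t), np.cos(t)
        d = 0.5 * np.sin(3.0 * t)                 # true (hidden) uncertainty
        s = x - x_ref                             # sliding variable
        # tanh replaces sign() to suppress chattering (boundary layer 0.05)
        u = x_ref_dot - k * np.tanh(s / 0.05) - d_hat
        d_hat += gamma * s * dt                   # adaptive uncertainty estimate
        x += (u + d) * dt                         # Euler step of the plant
        errs.append(abs(s))
    return errs

errs = simulate()
```

The smoothed switching term plays the chattering-suppression role that the ADRFNN-based design achieves in the paper.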
Torreggiani, Sofia; Filocamo, Giovanni; Esposito, Susanna
2016-03-25
Children presenting with recurrent fever may represent a diagnostic challenge. After excluding the most common etiologies, which include the consecutive occurrence of independent uncomplicated infections, a wide range of possible causes are considered. This article summarizes infectious and noninfectious causes of recurrent fever in pediatric patients. We highlight that, when investigating recurrent fever, it is important to consider age at onset, family history, duration of febrile episodes, length of interval between episodes, associated symptoms and response to treatment. Additionally, information regarding travel history and exposure to animals is helpful, especially with regard to infections. With the exclusion of repeated independent uncomplicated infections, many infective causes of recurrent fever are relatively rare in Western countries; therefore, clinicians should be attuned to suggestive case history data. It is important to rule out the possibility of an infectious process or a malignancy, in particular, if steroid therapy is being considered. After excluding an infectious or neoplastic etiology, immune-mediated and autoinflammatory diseases should be taken into consideration. Together with case history data, a careful physical exam during and between febrile episodes may give useful clues and guide laboratory investigations. However, despite a thorough evaluation, a recurrent fever may remain unexplained. A watchful follow-up is thus mandatory because new signs and symptoms may appear over time.
Han, Jun; Ito, Yoshihiro; Yeo, Jae Yong; Sucov, Henry M; Maas, Richard; Chai, Yang
2003-09-01
Neural crest cells are multipotential progenitors that contribute to various cell and tissue types during embryogenesis. Here, we have investigated the molecular and cellular mechanism by which the fate of neural crest cells is regulated during tooth development. Using a two-component genetic system for indelibly marking the progeny of neural crest cells, we provide in vivo evidence of a deficiency of CNC-derived dental mesenchyme in Msx1 null mutant mouse embryos. The deficiency of the CNC results from elevated CDK inhibitor p19(INK4d) activity and the disruption of cell proliferation. Interestingly, in the absence of Msx1, the CNC-derived dental mesenchyme misdifferentiates and possesses properties consistent with a neuronal fate, possibly through a default mechanism. Attenuation of p19(INK4d) in Msx1 null mutant mandibular explants restores mitotic activity in the dental mesenchyme, demonstrating the functional significance of Msx1-mediated p19(INK4d) expression in regulating CNC cell proliferation during odontogenesis. Collectively, our results demonstrate that the homeobox gene Msx1 regulates the fate of CNC cells by controlling the progression of the cell cycle. Genetic mutation of Msx1 may alternatively instruct the fate of these progenitor cells during craniofacial development.
Henry, Brandon Michael; Graves, Matthew J; Vikse, Jens; Sanna, Beatrice; Pękala, Przemysław A; Walocha, Jerzy A; Barczyński, Marcin; Tomaszewski, Krzysztof A
2017-06-01
Recurrent laryngeal nerve (RLN) injury is one of the most common and detrimental complications following thyroidectomy. Intermittent intraoperative nerve monitoring (I-IONM) has been proposed to reduce the prevalence of RLN injury following thyroidectomy and has gained increasing acceptance in recent years. A comprehensive database search was performed, and data from eligible meta-analyses meeting the inclusion criteria were extracted. Transient, permanent, and overall RLN injuries were the primary outcome measures. Quality assessment via AMSTAR, heterogeneity appraisal, and selection of best evidence were performed via a Jadad algorithm. Eight meta-analyses met the inclusion criteria, each including between 6 and 23 original studies. Application of the Jadad algorithm for the selection of best evidence resulted in the choice of Pisanu et al. (Surg Res 188:152-161, 2014). Five out of eight meta-analyses demonstrated a non-significant (p > 0.05) RLN injury reduction with the use of I-IONM versus nerve visualization alone. To date, I-IONM has not achieved a significant level of RLN injury reduction, as shown by the meta-analysis conducted by Pisanu et al. However, the most recent developments of IONM technology, including continuous vagal IONM and the concept of staged thyroidectomy in case of loss of signal on the first side in order to prevent bilateral RLN injury, may provide additional benefits which were out of the scope of this study and need to be assessed in further prospective multicenter trials.
Viness Pillay
2012-10-01
Macroporous polyacrylamide-grafted-chitosan scaffolds for neural tissue engineering were fabricated with varied synthetic and viscosity profiles. A novel approach and mechanism was utilized for polyacrylamide grafting onto chitosan using potassium persulfate (KPS)-mediated degradation of both polymers under a thermally controlled environment. Commercially available high molecular mass polyacrylamide was used instead of the acrylamide monomer for graft copolymerization. This grafting strategy yielded an enhanced grafting efficiency (GE = 92%), grafting ratio (GR = 263%), intrinsic viscosity (IV = 5.231 dL/g) and viscometric average molecular mass (MW = 1.63 × 10⁶ Da) compared with known acrylamide grafting, which has a GE = 83%, GR = 178%, IV = 3.901 dL/g and MW = 1.22 × 10⁶ Da. Image processing analysis of SEM images of the newly grafted neurodurable scaffold was undertaken based on the polymer-pore threshold. Attenuated Total Reflectance-FTIR spectral analyses in conjunction with DSC were used for the characterization and comparison of the newly grafted copolymers. Static Lattice Atomistic Simulations were employed to investigate and elucidate the copolymeric assembly and reaction mechanism by exploring the spatial disposition of chitosan and polyacrylamide with respect to the reaction profile of potassium persulfate. Interestingly, potassium persulfate, a peroxide, was found to play a dual role: initially degrading the polymers ("polymer slicing"), thereby initiating the formation of free radicals, and subsequently leading to synthesis of the high molecular mass polyacrylamide-grafted-chitosan (PAAm-g-CHT) ("polymer complexation"). Furthermore, the applicability of the uniquely grafted scaffold for neural tissue engineering was evaluated via PC12 neuronal cell seeding. The novel PAAm-g-CHT exhibited superior neurocompatibility in terms of cell infiltration owing to the anisotropic porous architecture, high molecular mass mediated robustness
邵辉; 野波健藏
2012-01-01
The model and inverse model of the angular velocity of a hydraulic manipulator and a hydraulic robotic hand are established using recurrent neural networks, providing an effective solution to the modeling and control of dynamic hydraulic systems. The inverse models are used as position controllers for the manipulator and the hand. Experiments show that the models approximate the system dynamics closely and that the position control accuracy satisfies the requirements.
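A minimal sketch of the inverse-model control idea described above, assuming a first-order linear plant in place of the hydraulic dynamics and a linear-in-parameters model in place of the recurrent network; the plant coefficients, learning rate, and step command are invented for illustration.

```python
import numpy as np

# Hypothetical first-order actuator (coefficients unknown to the controller)
A_TRUE, B_TRUE = 0.9, 0.5

def plant(y, u):
    return A_TRUE * y + B_TRUE * u

# 1) Identify a forward model y[k+1] ~ a*y[k] + b*u[k] from random excitation
rng = np.random.default_rng(0)
a_hat = b_hat = 0.0
lr, y = 0.05, 0.0
for _ in range(4000):
    u = rng.uniform(-1.0, 1.0)
    y_next = plant(y, u)
    err = y_next - (a_hat * y + b_hat * u)
    a_hat += lr * err * y          # LMS step on the prediction error
    b_hat += lr * err * u
    y = y_next

# 2) Invert the identified model to obtain a position controller:
#    u[k] = (y_ref - a_hat*y[k]) / b_hat
y, track = 0.0, []
for _ in range(200):
    u = (1.0 - a_hat * y) / b_hat  # step command y_ref = 1.0
    y = plant(y, u)
    track.append(y)
```

The same identify-then-invert pattern is what the paper realizes with recurrent networks, whose memory lets them capture the time-varying hydraulic dynamics that this linear stand-in cannot.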
Isaacs, David; Kesson, Alison; Lester-Smith, David; Chaitow, Jeffrey
2013-03-01
An 11-year-old girl had four episodes of fever in a year, lasting 7-10 days and associated with headache and neck stiffness. She had a long history of recurrent urticaria, usually preceding the fevers. There was also a history of vague pains in her knees and in the small joints of her hands. Her serum C-reactive protein was moderately raised at 41 mg/L (normal <8). Her rheumatologist felt that the association of recurrent fevers lasting 7 or more days with headaches, arthralgia and recurrent urticaria suggested one of the periodic fever syndromes. Genetic testing confirmed that she had a gene mutation consistent with tumour necrosis factor receptor-associated periodic syndrome (TRAPS).
Modular, Hierarchical Learning By Artificial Neural Networks
Baldi, Pierre F.; Toomarian, Nikzad
1996-01-01
A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks that are more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.
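To make the sparsity argument concrete, the toy sketch below builds a block-structured recurrent weight mask with dense recurrence inside each module and only a few feedforward links between consecutive modules; the module count, module size, and link count are arbitrary assumptions, and the point is simply that connection density falls far below that of a fully interconnected network.

```python
import numpy as np

def modular_mask(n_modules=4, size=8, inter=1):
    """Binary connectivity mask: dense recurrence inside each module,
    `inter` sparse feedforward links from module m to module m+1."""
    n = n_modules * size
    W = np.zeros((n, n))
    for m in range(n_modules):
        a = m * size
        W[a:a + size, a:a + size] = 1.0    # intra-module recurrence
        if m + 1 < n_modules:
            for k in range(inter):          # sparse inter-module links
                W[a + size + k, a + k] = 1.0
    return W

W = modular_mask()
density = W.sum() / W.size  # fraction of allowed connections
# A fully interconnected network of the same size would have density 1.0.
```

Fewer allowed connections means fewer weights to store and train, which is the hardware-friendliness the abstract points to.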
Ditlevsen, Dorte K; Køhler, Lene B; Pedersen, Martin Volmer;
2003-01-01
The neural cell adhesion molecule, NCAM, is known to stimulate neurite outgrowth from primary neurones and PC12 cells presumably through signalling pathways involving the fibroblast growth factor receptor (FGFR), protein kinase A (PKA), protein kinase C (PKC), the Ras-mitogen activated protein...
Chen, Bangqian; Wu, Zhixiang; Wang, Jikun; Dong, Jinwei; Guan, Liming; Chen, Junming; Yang, Kai; Xie, Guishui
2015-04-01
Rubber (Hevea brasiliensis) plantations are one of the most important economic forests in tropical areas. Retrieving leaf area index (LAI) and its dynamics by remote sensing is of great significance in ecological study and production management, such as yield prediction and post-hurricane damage evaluation. Thirteen HJ-1A/1B CCD images, which combine the spatial advantage of Landsat TM/ETM+ with the 2-day temporal resolution of MODIS, were used to predict the spatio-temporal LAI of rubber plantations on Hainan Island with a Nonlinear AutoRegressive network with eXogenous inputs (NARX) model. Monthly LAIs measured at 30 stands by LAI-2000 between 2012 and 2013 were used to explore the LAI dynamics and their relationship with spectral bands and seven vegetation indices, and to develop and validate the model. The NARX model, built on the input variables day of year (DOY), four spectral bands and the weighted difference vegetation index (WDVI), achieved good accuracy during model building for the training (N = 202, R2 = 0.98, RMSE = 0.13), validation (N = 43, R2 = 0.93, RMSE = 0.24) and testing (N = 43, R2 = 0.87, RMSE = 0.31) data sets, respectively. The model performed well during field validation (N = 24, R2 = 0.88, RMSE = 0.24), and most of its mapping results showed better agreement (R2 = 0.54-0.58, RMSE = 0.47-0.71) with the field data than the results of the corresponding stepwise regression models (R2 = 0.43-0.51, RMSE = 0.52-0.82). Besides, the LAI statistics from the spatio-temporal LAI maps and their dynamics, which increased dramatically from late March (2.36 ± 0.59) to early May (3.22 ± 0.64) and then slowed down gradually until reaching the maximum in early October (4.21 ± 0.87), were quite consistent with the statistics of the field data. The study demonstrates the feasibility and reliability of retrieving the spatio-temporal LAI of rubber plantations by an artificial neural network (ANN) approach, and provides some insight on the
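The NARX structure used above (past outputs plus exogenous inputs predicting the next output) can be sketched in miniature. Here a linear least-squares NARX regression on an invented synthetic series stands in for the neural NARX model; the coefficients, series length, and input are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
x = rng.uniform(0.0, 1.0, T)            # exogenous input (e.g. a vegetation index)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t]  # synthetic LAI-like series

# NARX regression: y[t] explained by y[t-1] (autoregressive term)
# and x[t] (exogenous term)
Phi = np.column_stack([y[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
```

On this noiseless series the regression recovers the generating coefficients exactly; the neural NARX of the paper replaces the linear map with a learned nonlinear one over the same kind of lagged inputs.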
V. Rezan USLU
2010-01-01
Obtaining an accurate inflation forecast is an important problem, since an accurate forecast leads to better decisions. Various time series techniques have been used in the literature for inflation prediction. Recently, Artificial Neural Networks (ANN) have been preferred for time series prediction problems due to their flexible modeling capacity. An artificial neural network can be applied easily to any time series, since it does not require prior conditions such as a specific linear or curved model pattern, stationarity, or a normal distribution. In this study, predictions were obtained using feedforward and recurrent artificial neural networks for the Consumer Price Index (CPI). A new combined forecast based on ANN is proposed, in which the ANN model predictions employed in the analysis were used as data.
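The combined-forecast idea above, feeding individual model predictions back in as data, can be sketched with a least-squares combiner; the two "model" prediction series below are synthetic stand-ins for the feedforward and recurrent ANN outputs, not real CPI forecasts.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0.0, 6.0, 120))          # stand-in target series
p_ff = truth + rng.normal(0.0, 0.10, truth.size)    # "feedforward ANN" forecast
p_rnn = truth + rng.normal(0.0, 0.05, truth.size)   # "recurrent ANN" forecast

# Combined forecast: fit least-squares weights over the two predictions
# (plus an intercept), treating them as data in the spirit of the paper.
A = np.column_stack([p_ff, p_rnn, np.ones_like(truth)])
w, *_ = np.linalg.lstsq(A, truth, rcond=None)
combined = A @ w

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Because each individual forecast is itself a feasible weighting, the in-sample error of the combination can never exceed that of the better single model; the paper's ANN combiner generalizes this linear weighting.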