WorldWideScience

Sample records for neural dynamic programming

  1. Accurate Natural Trail Detection Using a Combination of a Deep Neural Network and Dynamic Programming.

    Science.gov (United States)

    Adhikari, Shyam Prasad; Yang, Changju; Slot, Krzysztof; Kim, Hyongsuk

    2018-01-10

    This paper presents a vision sensor-based solution to the challenging problem of detecting and following trails in highly unstructured natural environments such as forests, rural areas and mountains, using a combination of a deep neural network and dynamic programming. The deep neural network (DNN) concept has recently emerged as a very effective tool for processing vision sensor signals. A patch-based DNN is trained with supervised data to classify fixed-size image patches into "trail" and "non-trail" categories, and reshaped to a fully convolutional architecture to produce a trail segmentation map for arbitrary-sized input images. As trail and non-trail patches do not exhibit clearly defined shapes or forms, the patch-based classifier is prone to misclassification and produces sub-optimal trail segmentation maps. Dynamic programming is introduced to find an optimal trail on the sub-optimal DNN output map. Experimental results showing accurate trail detection for real-world trail datasets captured with a head-mounted vision system are presented.
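
    The dynamic-programming step can be viewed as a best-path search over the DNN's probability map. The sketch below is illustrative only (the function name, the one-column-per-row shift constraint, and all parameters are assumptions, not the paper's actual formulation): it finds the maximum-score top-to-bottom path through a rows-by-columns trail-probability map.

```python
import numpy as np

def best_trail_path(prob, max_shift=1):
    """Dynamic programming: maximum-score top-to-bottom path through a
    (rows x cols) trail-probability map, allowing the column index to
    shift by at most `max_shift` per row (illustrative sketch)."""
    rows, cols = prob.shape
    score = np.full((rows, cols), -np.inf)
    parent = np.zeros((rows, cols), dtype=int)
    score[0] = prob[0]
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - max_shift), min(cols, c + max_shift + 1)
            p = lo + int(np.argmax(score[r - 1, lo:hi]))  # best predecessor
            score[r, c] = score[r - 1, p] + prob[r, c]
            parent[r, c] = p
    # backtrack from the best column in the last row
    path = [int(np.argmax(score[-1]))]
    for r in range(rows - 1, 0, -1):
        path.append(int(parent[r, path[-1]]))
    return path[::-1]  # one column index per row, top to bottom
```

Even when individual rows of the map are noisy, the accumulated score keeps the recovered path on the globally most trail-like corridor.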

  2. Neural dynamics in reconfigurable silicon.

    Science.gov (United States)

    Basu, A; Ramakrishnan, S; Petre, C; Koziol, S; Brink, S; Hasler, P E

    2010-10-01

    A neuromorphic analog chip is presented that is capable of implementing massively parallel neural computations while retaining the programmability of digital systems. We show measurements from neurons with Hopf bifurcations and integrate-and-fire neurons, excitatory and inhibitory synapses, passive dendrite cables, coupled spiking neurons, and central pattern generators implemented on the chip. This chip provides a platform not only for simulating detailed neuron dynamics but also for using the same dynamics to interface with actual cells in applications such as a dynamic clamp. There are 28 computational analog blocks (CABs), each consisting of ion channels with tunable parameters, synapses, winner-take-all elements, current sources, transconductance amplifiers, and capacitors. There are four other CABs which have programmable bias generators. The programmability is achieved using floating-gate transistors with on-chip programming control. The switch matrix for interconnecting the components in CABs also consists of floating-gate transistors. Emphasis is placed on replicating the detailed dynamics of computational neural models. Massive computational area efficiency is obtained by using the reconfigurable interconnect as synaptic weights, resulting in more than 50 000 possible 9-bit-accurate synapses in 9 mm².

  3. Creative-Dynamics Approach To Neural Intelligence

    Science.gov (United States)

    Zak, Michail A.

    1992-01-01

    Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.

  4. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e., the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in this model corresponded to a Bayesian computation and statistically optimal multiple-information integration, respectively, under a biologically inspired condition. These results were obtained in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state still corresponds to statistically optimal multiple-information integration under biologically inspired conditions, independent of the existence of SFA.
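
    A minimal rate-model sketch of SFA in a ring network (all constants, the rectified-linear rate function, and the coupling profile are illustrative assumptions, not the paper's model): an adaptation variable a tracks the rate r and feeds back subtractively, so a sustained input first drives a peak of activity and then a gradually inhibited plateau.

```python
import numpy as np

def simulate_ring_sfa(n=64, steps=2000, dt=0.1, tau_r=1.0, tau_a=10.0,
                      g=0.5, stim_center=0.0):
    """Euler simulation of a ring firing-rate network with
    spike-frequency adaptation; returns the peak rate over time."""
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # translation-invariant ring coupling (illustrative constants)
    W = (-0.2 + 1.0 * np.cos(theta[:, None] - theta[None, :])) / n
    I_ext = 2.0 * np.exp(-((theta - stim_center) ** 2) / 0.5)  # stimulus bump
    r = np.zeros(n)
    a = np.zeros(n)
    trace = []
    for _ in range(steps):
        drive = W @ r + I_ext - a                         # adaptation subtracts
        r += dt / tau_r * (-r + np.maximum(drive, 0.0))   # rectified-linear rate
        a += dt / tau_a * (-a + g * r)                    # adaptation tracks rate
        trace.append(r.max())
    return np.array(trace)
```

With g = 0 the adaptation channel is switched off and the activity settles at a higher plateau, which makes role (1)-(2) of SFA easy to see in the traces.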

  5. Dynamic decomposition of spatiotemporal neural signals.

    Directory of Open Access Journals (Sweden)

    Luca Ambrogioni

    2017-05-01

    Full Text Available Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals.

  6. Neural-network-observer-based optimal control for unknown nonlinear systems using adaptive dynamic programming

    Science.gov (United States)

    Liu, Derong; Huang, Yuzhu; Wang, Ding; Wei, Qinglai

    2013-09-01

    In this paper, an observer-based optimal control scheme is developed for unknown nonlinear systems using an adaptive dynamic programming (ADP) algorithm. First, a neural-network (NN) observer is designed to estimate the system states. Then, based on the observed states, a neuro-controller is constructed via the ADP method to obtain the optimal control. In this design, two NN structures are used: a three-layer NN is used to construct the observer, which can be applied to systems with higher degrees of nonlinearity and without a priori knowledge of the system dynamics, and a critic NN is employed to approximate the value function. The optimal control law is computed using the critic NN and the observer NN. Uniform ultimate boundedness of the closed-loop system is guaranteed. The actor, critic, and observer structures are all implemented in real time, continuously and simultaneously. Finally, simulation results are presented to demonstrate the effectiveness of the proposed control scheme.
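
    The dynamic-programming core that the critic network approximates can be seen in a scalar linear-quadratic special case, where the value function V(x) = p·x² is iterated exactly instead of with a neural approximator (a simplified illustration under assumed constants, not the paper's nonlinear observer-based scheme):

```python
def scalar_adp(a=0.9, b=0.5, q=1.0, r=1.0, iters=200):
    """Value iteration for V(x) = p*x^2 on the scalar system
    x_next = a*x + b*u with stage cost q*x^2 + r*u^2.  Each sweep is
    one Bellman backup with the minimizing control substituted in."""
    p = 0.0
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)   # resulting feedback gain, u = -k*x
    return p, k
```

The iteration converges to the Riccati fixed point, and the resulting gain stabilizes the closed loop; a critic NN plays the role of p for general nonlinear dynamics.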

  7. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks based on gradient descent has the significant drawback of slow convergence. A Gauss-Newton-based recursive least squares (RLS)-type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS-type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.
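
    The recursive least squares update at the heart of such algorithms processes one sample at a time. A minimal sketch for a linear-in-parameters model (the forgetting factor, the initialization constant, and the function name are illustrative assumptions; the paper applies the idea to recurrent network weights via dynamic error backpropagation):

```python
import numpy as np

def rls_fit(X, y, lam=0.99, delta=100.0):
    """Recursive least squares: one-pass, per-sample parameter updates
    for a linear model y ~ X @ w, with forgetting factor `lam`."""
    n_features = X.shape[1]
    w = np.zeros(n_features)
    P = delta * np.eye(n_features)       # inverse-covariance estimate
    for x, t in zip(X, y):
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector
        e = t - w @ x                    # a priori prediction error
        w = w + k * e
        P = (P - np.outer(k, Px)) / lam  # covariance update
    return w
```

Because each step reuses the accumulated second-order information in P, convergence is far faster than plain gradient descent on the same data.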

  8. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.
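
    The linear-stability analysis around an attractor can be illustrated on a generic rate RNN (a sketch under the assumed dynamics x' = -x + W·tanh(x) + b, not the paper's reservoir-computing setup): locate a fixed point by damped iteration, then check whether the Jacobian's spectrum lies in the left half-plane.

```python
import numpy as np

def analyze_fixed_point(W, b, iters=2000):
    """Locate a fixed point of x' = -x + W @ tanh(x) + b by damped
    iteration, then test its linear stability via the Jacobian."""
    n = W.shape[0]
    x = np.zeros(n)
    for _ in range(iters):
        x = 0.9 * x + 0.1 * (W @ np.tanh(x) + b)    # damped fixed-point iteration
    phi_prime = 1.0 - np.tanh(x) ** 2
    J = -np.eye(n) + W * phi_prime[None, :]         # Jacobian at the fixed point
    stable = np.max(np.linalg.eigvals(J).real) < 0  # all eigenvalues in left half-plane
    return x, stable
```

Stability of this linearization is exactly the kind of criterion the mean-field theory evaluates analytically to predict training success or failure.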

  10. DCS-Neural-Network Program for Aircraft Control and Testing

    Science.gov (United States)

    Jorgensen, Charles C.

    2006-01-01

    A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.

  11. Dynamics of neural cryptography.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
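
    A minimal simulation of the bidirectional synchronization the abstract describes, for two tree parity machines with the Hebbian update rule (the sizes, the update convention, and the step cap are illustrative assumptions, not the parameters analyzed in the paper):

```python
import numpy as np

def tpm_output(W, X):
    """Hidden-unit signs and overall output of a tree parity machine."""
    sigma = np.sign(np.sum(W * X, axis=1))
    sigma[sigma == 0] = -1                 # break ties deterministically
    return sigma, int(np.prod(sigma))

def sync_tpms(K=3, N=20, L=3, max_steps=10000, seed=0):
    """Two tree parity machines (K hidden units, N inputs each,
    weights in [-L, L]) exchange outputs on random public inputs and
    apply the Hebbian rule whenever the outputs agree."""
    rng = np.random.default_rng(seed)
    A = rng.integers(-L, L + 1, (K, N))
    B = rng.integers(-L, L + 1, (K, N))
    for step in range(max_steps):
        X = rng.choice([-1, 1], (K, N))    # public random input
        sa, ta = tpm_output(A, X)
        sb, tb = tpm_output(B, X)
        if ta == tb:                       # update only on agreement
            for W, s in ((A, sa), (B, sb)):
                mask = (s == ta)           # only units matching the output
                W[mask] = np.clip(W[mask] + ta * X[mask], -L, L)
        if np.array_equal(A, B):
            return step                    # fully synchronized
    return -1
```

An attacker who can only listen (unidirectional learning) lacks the mutual attractive steps, which is why, as the abstract notes, synchronization outpaces learning.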

  12. Dynamics of neural cryptography

    International Nuclear Information System (INIS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-01-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible

  14. A dynamic programming approach to missing data estimation using neural networks

    CSIR Research Space (South Africa)

    Nelwamondo, FV

    2013-01-01

    Full Text Available method where dynamic programming is not used. This paper also suggests a different way of formulating a missing data problem such that dynamic programming is applicable to estimate the missing data....

  15. Controlling the dynamics of multi-state neural networks

    International Nuclear Information System (INIS)

    Jin, Tao; Zhao, Hong

    2008-01-01

    In this paper, we first analyze the distribution of local fields (DLF), which is induced by the memory patterns in the Q-Ising model. It is found that the structure of the DLF is closely correlated with the network dynamics and the system performance. However, the design rule adopted in the Q-Ising model, like the other rules adopted for multi-state neural networks with associative memories, cannot be applied to directly control the DLF for a given set of memory patterns, and thus cannot be applied to further study the relationships between the structure of the DLF and the dynamics of the network. We then extend a design rule, which was presented recently for designing binary-state neural networks, to make it suitable for designing general multi-state neural networks. This rule is able to control the structure of the DLF as expected. We show that controlling the DLF not only can affect the dynamic behaviors of the multi-state neural networks for a given set of memory patterns, but also can improve the storage capacity. With the change of the DLF, the network shows very rich dynamic behaviors, such as the 'chaos phase', the 'memory phase', and the 'mixture phase'. These dynamic behaviors are also observed in the binary-state neural networks; therefore, our results imply that they may be the universal behaviors of feedback neural networks.
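
    For the binary-state special case, the local fields induced by stored patterns under Hebbian coupling can be computed directly. The sketch below is a simplified ±1-neuron illustration of the DLF idea, not the Q-Ising design rule of the paper:

```python
import numpy as np

def local_fields(patterns):
    """Local fields h_i^mu = sum_j J_ij xi_j^mu at each stored pattern,
    with Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu and no
    self-coupling.  `patterns` is a (P, N) array of +/-1 entries."""
    P, N = patterns.shape
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)          # remove self-coupling
    return patterns @ J.T             # (P, N): field at neuron i for pattern mu
```

A pattern is dynamically stable when every neuron's field is aligned with its state, i.e. xi_i · h_i > 0; inspecting the histogram of these aligned fields is the binary analogue of inspecting the DLF.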

  16. Advanced models of neural networks nonlinear dynamics and stochasticity in biological neurons

    CERN Document Server

    Rigatos, Gerasimos G

    2015-01-01

    This book provides a complete study on neural structures exhibiting nonlinear and stochastic dynamics, elaborating on neural dynamics by introducing advanced models of neural networks. It overviews the main findings in the modelling of neural dynamics in terms of electrical circuits and examines their stability properties with the use of dynamical systems theory. It is suitable for researchers and postgraduate students engaged with neural networks and dynamical systems theory.

  17. Dynamic Information Encoding With Dynamic Synapses in Neural Adaptation

    Science.gov (United States)

    Li, Luozheng; Mi, Yuanyuan; Zhang, Wenhao; Wang, Da-Hui; Wu, Si

    2018-01-01

    Adaptation refers to the general phenomenon that the neural system dynamically adjusts its response property according to the statistics of external inputs. In response to an invariant stimulation, neuronal firing rates first increase dramatically and then decrease gradually to a low level close to the background activity. This prompts a question: during the adaptation, how does the neural system encode the repeated stimulation with attenuated firing rates? It has been suggested that the neural system may employ a dynamical encoding strategy during the adaptation: the stimulus information is mainly encoded by the strong independent spiking of neurons at the early stage of the adaptation, while the weak but synchronized activity of neurons encodes the stimulus information at the later stage. A previous study demonstrated that short-term facilitation (STF) of electrical synapses, which increases the synchronization between neurons, can provide a mechanism to realize dynamical encoding. In the present study, we further explore whether short-term plasticity (STP) of chemical synapses, an interaction form more common than electrical synapses in the cortex, can support dynamical encoding. We build a large-size network with chemical synapses between neurons. Notably, facilitation of chemical synapses only enhances pair-wise correlations between neurons mildly, but its effect on increasing synchronization of the network can be significant, and hence it can serve as a mechanism to convey the stimulus information. To read out the stimulus information, we consider that a downstream neuron receives balanced excitatory and inhibitory inputs from the network, so that the downstream neuron only responds to synchronized firings of the network. Therefore, the response of the downstream neuron indicates the presence of the repeated stimulation. Overall, our study demonstrates that STP of chemical synapses can serve as a mechanism to realize dynamical neural encoding.
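
    Short-term plasticity of a single chemical synapse is commonly captured by the Tsodyks-Markram model, sketched below (the constants and function name are illustrative assumptions, not the network model of the paper): a facilitation variable u rises with each presynaptic spike while a resource variable x depletes, and the efficacy transmitted per spike is u·x.

```python
import numpy as np

def tsodyks_markram(spike_times, U=0.1, tau_f=1.0, tau_d=0.2, dt=0.001, T=1.0):
    """Euler integration of a Tsodyks-Markram synapse; returns the
    transmitted efficacy u*x for each spike in `spike_times` (seconds)."""
    u, x = U, 1.0
    eff = []
    spikes = set(int(round(t / dt)) for t in spike_times)
    for step in range(int(T / dt)):
        u += dt * (U - u) / tau_f      # facilitation decays back to baseline U
        x += dt * (1.0 - x) / tau_d    # resources recover toward 1
        if step in spikes:
            u += U * (1.0 - u)         # each spike increments facilitation
            eff.append(u * x)          # efficacy actually transmitted
            x -= u * x                 # and consumes resources
    return eff
```

With a slow facilitation constant (tau_f > tau_d, as here) successive spikes in a burst transmit increasing efficacy; swapping the constants makes the synapse depressing instead.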

  18. Neutral Theory and Scale-Free Neural Dynamics

    Science.gov (United States)

    Martinello, Matteo; Hidalgo, Jorge; Maritan, Amos; di Santo, Serena; Plenz, Dietmar; Muñoz, Miguel A.

    2017-10-01

    Neural tissues have been consistently observed to be spontaneously active and to generate highly variable (scale-free distributed) outbursts of activity in vivo and in vitro. Understanding whether these heterogeneous patterns of activity stem from the underlying neural dynamics operating at the edge of a phase transition is a fascinating possibility, as criticality has been argued to entail many possible important functional advantages in biological computing systems. Here, we employ a well-accepted model for neural dynamics to elucidate an alternative scenario in which diverse neuronal avalanches, obeying scaling, can coexist simultaneously, even if the network operates in a regime far from the edge of any phase transition. We show that perturbations to the system state unfold dynamically according to a "neutral drift" (i.e., guided only by stochasticity) with respect to the background of endogenous spontaneous activity, and that such a neutral dynamics—akin to neutral theories of population genetics and of biogeography—implies marginal propagation of perturbations and scale-free distributed causal avalanches. We argue that causal information, not easily accessible to experiments, is essential to elucidate the nature and statistics of neural avalanches, and that neutral dynamics is likely to play an important role in cortical functioning. We discuss the implications of these findings for designing new empirical approaches to shed further light on how the brain processes and stores information.
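
    The marginal ("neutral") propagation of perturbations can be caricatured by a critical branching process, in which each active unit triggers on average exactly one descendant. This is a generic illustration of how scale-free avalanche statistics arise, not the specific model used in the paper:

```python
import numpy as np

def avalanche_sizes(n_avalanches=2000, m=1.0, max_size=10**5, seed=1):
    """Branching process with mean offspring `m`: each generation, the
    `active` units jointly produce Poisson(m * active) descendants.
    At m = 1 (critical/neutral) avalanche sizes are scale-free."""
    rng = np.random.default_rng(seed)
    sizes = []
    for _ in range(n_avalanches):
        active, size = 1, 1
        while active > 0 and size < max_size:   # cap guards against runaways
            active = rng.poisson(m * active)
            size += active
        sizes.append(size)
    return np.array(sizes)
```

Most avalanches die out immediately while a heavy power-law tail of large events survives, which is the coexistence of diverse avalanche sizes the abstract describes.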

  19. The Complexity of Dynamics in Small Neural Circuits.

    Directory of Open Access Journals (Sweden)

    Diego Fasoli

    2016-08-01

    Full Text Available Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network that reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing.

  20. Shaping the learning curve: epigenetic dynamics in neural plasticity

    Directory of Open Access Journals (Sweden)

    Zohar Ziv Bronfman

    2014-07-01

    Full Text Available A key characteristic of learning and neural plasticity is state-dependent acquisition dynamics reflected by the non-linear learning curve that links increase in learning with practice. Here we propose that the manner by which epigenetic states of individual cells change during learning contributes to the shape of the neural and behavioral learning curve. We base our suggestion on recent studies showing that epigenetic mechanisms such as DNA methylation, histone acetylation and RNA-mediated gene regulation are intimately involved in the establishment and maintenance of long-term neural plasticity, reflecting specific learning-histories and influencing future learning. Our model, which is the first to suggest a dynamic molecular account of the shape of the learning curve, leads to several testable predictions regarding the link between epigenetic dynamics at the promoter, gene-network and neural-network levels. This perspective opens up new avenues for therapeutic interventions in neurological pathologies.

  1. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:366–381, 2016 (ACML 2016). Fragments of the abstract indicate that user activity logs are available at an unprecedented scale, yet most approaches to recommender systems are based on rating data. Keywords: Recurrent Neural Network, Recommender System, Neural Language Model, Collaborative Filtering.

  2. EDITORIAL: Special issue on applied neurodynamics: from neural dynamics to neural engineering

    Science.gov (United States)

    Chiel, Hillel J.; Thomas, Peter J.

    2011-12-01

    , the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996). This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949). The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (FitzHugh 1955, Izhikevich and FitzHugh 2006). The FitzHugh-Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin and Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on tools
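
    The FitzHugh-Nagumo system mentioned above is compact enough to integrate directly. The sketch below uses the standard form with common textbook parameter choices (not values taken from this editorial): v' = v - v³/3 - w + I, w' = ε(v + a - bw).

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, T=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Euler integration of the FitzHugh-Nagumo equations, the
    two-dimensional reduction of the Hodgkin-Huxley model.
    Returns the membrane-like variable v over time."""
    v, w = -1.0, -0.5
    vs = []
    for _ in range(int(T / dt)):
        dv = v - v ** 3 / 3 - w + I      # fast (excitable) variable
        dw = eps * (v + a - b * w)       # slow recovery variable
        v += dt * dv
        w += dt * dw
        vs.append(v)
    return np.array(vs)
```

At I = 0 the system sits at a stable rest point on the left branch of the cubic nullcline; a sufficient constant drive moves the fixed point onto the unstable middle branch and produces the sustained relaxation oscillations that phase-plane analysis predicts.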

  3. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
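
    Rumelhart's generalized delta rule, which NNETS implements for back-propagation, can be sketched in a few lines for a two-layer sigmoid network. The network sizes, learning rate, and the training task in the test are illustrative assumptions, not details of NNETS itself:

```python
import numpy as np

def train_backprop(X, y, hidden=8, lr=1.0, epochs=5000, seed=0):
    """Two-layer sigmoid network trained with plain gradient-descent
    backpropagation (the generalized delta rule).  Returns a predict
    function closing over the trained weights."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append a bias input
    W1 = rng.normal(0, 0.5, (Xb.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(Xb @ W1)                          # forward pass
        out = sig(h @ W2)
        d2 = (out - y) * out * (1 - out)          # output-layer deltas
        d1 = (d2 @ W2.T) * h * (1 - h)            # backpropagated hidden deltas
        W2 -= lr * h.T @ d2 / len(X)              # generalized delta rule updates
        W1 -= lr * Xb.T @ d1 / len(X)
    return lambda Z: sig(sig(np.hstack([Z, np.ones((len(Z), 1))]) @ W1) @ W2)
```

Each weight change is proportional to the propagated error delta times the presynaptic activity, which is exactly the rule the NNETS description refers to.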

  4. Identification of Complex Dynamical Systems with Neural Networks (2/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with three-layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical systems in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parallel...

  5. Identification of Complex Dynamical Systems with Neural Networks (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with three-layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical systems in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parallel...

  6. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit is for the brain of having such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  7. ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.

    Science.gov (United States)

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining the neural dynamics of a cellular neural network with the ChainMail mechanism. The proposed method formulates the problem of elastic deformation in terms of cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of the cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the nonlinear deformation and typical mechanical behaviors of soft tissues. The proposed method not only improves ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principles of continuum mechanics in simulating soft tissue deformation.

  8. Discrete Globalised Dual Heuristic Dynamic Programming in Control of the Two-Wheeled Mobile Robot

    Directory of Open Access Journals (Sweden)

    Marcin Szuster

    2014-01-01

    Network-based control systems have been an emerging technology in the control of nonlinear systems over the past few years. This paper focuses on the implementation of the approximate dynamic programming algorithm in the network-based tracking control system of the two-wheeled mobile robot Pioneer 2-DX. The proposed discrete tracking control system consists of the globalised dual heuristic dynamic programming algorithm, the PD controller, the supervisory term, and an additional control signal. The structure of the supervisory term derives from the stability analysis performed using the Lyapunov stability theorem. The globalised dual heuristic dynamic programming algorithm consists of two structures, the actor and the critic, realised in the form of neural networks. The actor generates the suboptimal control law, while the critic evaluates the realised control strategy by approximating the value function from Bellman's equation. The presented discrete tracking control system works online, the neural networks' weight-adaptation process is performed in every iteration step, and no preliminary neural network learning procedure is required. The performance of the proposed control system was verified by a series of computer simulations and experiments performed using the wheeled mobile robot Pioneer 2-DX.
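    The actor-critic adaptation loop described above can be sketched in a few lines. This is a hypothetical scalar plant with one-parameter linear "actor" and "critic", not the paper's GDHP controller for the Pioneer 2-DX; it only illustrates how the temporal-difference signal from Bellman's equation drives both structures online:

```python
import numpy as np

# Hypothetical scalar plant and one-parameter linear "networks" -- an
# illustration of the actor-critic loop, not the GDHP/robot model.
rng = np.random.default_rng(0)

def plant(x, u):
    return 0.9 * x + u

gamma, alpha = 0.95, 0.05          # discount factor, learning rate
w_critic, w_actor = 0.0, 0.0       # critic V(x) ~ w_critic*x^2, actor u = w_actor*x

for episode in range(200):
    x = rng.uniform(-1.0, 1.0)
    for _ in range(20):
        u = w_actor * x                                   # actor: control law
        x_next = plant(x, u)
        cost = x**2 + 0.1 * u**2                          # stage cost
        # temporal-difference error from Bellman's equation
        td = cost + gamma * w_critic * x_next**2 - w_critic * x**2
        w_critic += alpha * td * x**2                     # critic update
        # actor update: descend the estimated future cost w.r.t. the control
        dJ_du = 0.2 * u + gamma * 2.0 * w_critic * x_next
        w_actor -= alpha * dJ_du * x
        x = x_next
```

    As in the abstract, the weights adapt in every iteration step with no preliminary learning phase; for this toy plant the actor gain settles near a stabilising value.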

  9. Approximate Dynamic Programming in Tracking Control of a Robotic Manipulator

    Directory of Open Access Journals (Sweden)

    Marcin Szuster

    2016-02-01

    This article focuses on the implementation of an approximate dynamic programming algorithm in the discrete tracking control system of the three-degrees-of-freedom Scorbot-ER 4pc robotic manipulator. The controlled system belongs to the group of articulated robots, which use rotary joints to access their workspace. The main part of the control system is a dual heuristic dynamic programming algorithm that consists of two structures designed in the form of neural networks: an actor and a critic. The actor generates the suboptimal control law, while the critic approximates the derivative of the value function from Bellman's equation with respect to the state. The remaining elements of the control system are the PD controller, the supervisory term and an additional control signal. The structure of the supervisory term derives from the stability analysis performed using the Lyapunov stability theorem. The control system works online, the neural networks' weight-adaptation procedure is performed in every iteration step, and no preliminary neural network learning process is required. The performance of the control system was verified by a series of computer simulations and experiments performed using the Scorbot-ER 4pc robotic manipulator.

  10. Neural Population Dynamics during Reaching Are Better Explained by a Dynamical System than Representational Tuning.

    Science.gov (United States)

    Michaels, Jonathan A; Dann, Benjamin; Scherberger, Hansjörg

    2016-11-01

    Recent models of movement generation in motor cortex have sought to explain neural activity not as a function of movement parameters, known as representational models, but as a dynamical system acting at the level of the population. Despite evidence supporting this framework, the evaluation of representational models and their integration with dynamical systems is incomplete in the literature. Using a representational velocity-tuning based simulation of center-out reaching, we show that incorporating variable latency offsets between neural activity and kinematics is sufficient to generate rotational dynamics at the level of neural populations, a phenomenon observed in motor cortex. However, we developed a covariance-matched permutation test (CMPT) that reassigns neural data between task conditions independently for each neuron while maintaining overall neuron-to-neuron relationships, revealing that rotations based on the representational model did not uniquely depend on the underlying condition structure. In contrast, rotations based on either a dynamical model or motor cortex data depend on this relationship, providing evidence that the dynamical model more readily explains motor cortex activity. Importantly, implementing a recurrent neural network we demonstrate that both representational tuning properties and rotational dynamics emerge, providing evidence that a dynamical system can reproduce previous findings of representational tuning. Finally, using motor cortex data in combination with the CMPT, we show that results based on small numbers of neurons or conditions should be interpreted cautiously, potentially informing future experimental design. Together, our findings reinforce the view that representational models lack the explanatory power to describe complex aspects of single neuron and population level activity.

  11. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamics systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses the linear-system knowledge to speed up the training process. The technique was tested on different flight/space dynamics models and showed promising results.

  12. Dynamical systems, attractors, and neural circuits.

    Science.gov (United States)

    Miller, Paul

    2016-01-01

    Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic: they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.
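    As a concrete instance of the "simple circuits of two or three cells" mentioned above, here is a sketch of a two-cell mutual-inhibition motif (hypothetical rate model and parameters, not taken from the review) that exhibits bistable winner-take-all dynamics:

```python
import numpy as np

# Two rate neurons with mutual inhibition (hypothetical parameters):
# tau * dr/dt = -r + relu(input - w * r_other)
def simulate(r0, steps=4000, dt=0.01, w=2.0, inp=1.0, tau=1.0):
    r = np.array(r0, dtype=float)
    for _ in range(steps):
        drive = inp - w * r[::-1]          # each cell inhibits the other
        r = r + dt * (-r + np.maximum(drive, 0.0)) / tau
    return r

# a small initial advantage decides which cell wins
winner_0 = simulate([0.6, 0.4])            # cell 0 ends high, cell 1 near zero
winner_1 = simulate([0.4, 0.6])            # roles reversed
```

    The same connectivity pattern thus supports two distinct stable outcomes, echoing the review's point that one circuit can be compatible with multiple functions.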

  13. Convergent dynamics for multistable delayed neural networks

    International Nuclear Information System (INIS)

    Shih, Chih-Wen; Tseng, Jui-Pin

    2008-01-01

    This investigation aims at developing a methodology to establish convergence of dynamics for delayed neural network systems with multiple stable equilibria. The present approach is general and can be applied to several network models. We take the Hopfield-type neural networks with both instantaneous and delayed feedbacks to illustrate the idea. We shall construct the complete dynamical scenario, which comprises exactly 2^n stable equilibria and exactly (3^n − 2^n) unstable equilibria for the n-neuron network. In addition, it is shown that every solution of the system converges to one of the equilibria as time tends to infinity. The approach is based on employing the geometrical structure of the network system. Positively invariant sets and componentwise dynamical properties are derived under the geometrical configuration. An iteration scheme is subsequently designed to confirm the convergence of dynamics for the system. Two examples with numerical simulations are arranged to illustrate the present theory.
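    A minimal numerical illustration of the multistability result, assuming a two-neuron Hopfield-type network with instantaneous feedback only (the paper's delayed term is omitted for brevity): with strong self-excitation the system has 2^2 = 4 coexisting stable equilibria, and trajectories converge to one of them depending on the initial condition.

```python
import numpy as np

# Two-neuron Hopfield-type network, instantaneous feedback only:
# dx/dt = -x + W tanh(x)
W = np.array([[2.0, 0.1],
              [0.1, 2.0]])        # strong self-excitation, weak coupling

def settle(x0, steps=2000, dt=0.05):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + W @ np.tanh(x))
    return x

# different initial conditions reach different stable equilibria
eq_a = settle([0.5, -0.5])        # converges near (+1.8, -1.8)
eq_b = settle([-0.3, -0.2])       # converges near (-2.0, -2.0)
```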

  14. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    Science.gov (United States)

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
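    The state dynamics of such projection-type recurrent networks can be sketched as follows, using a simple convex surrogate objective on a box constraint (a stand-in for illustration, not the paper's pseudoconvex model):

```python
import numpy as np

# Projection-type recurrent network dynamics dx/dt = -x + P(x - grad f(x)),
# Euler-integrated, on a convex stand-in problem:
#   minimise f(x) = (x - 2)^2   subject to   0 <= x <= 1
def project(v, lo=0.0, hi=1.0):    # projection onto the feasible box
    return np.clip(v, lo, hi)

def grad_f(x):
    return 2.0 * (x - 2.0)

x, dt = 0.5, 0.01
for _ in range(2000):
    x += dt * (project(x - grad_f(x)) - x)

# the state converges to the constrained minimiser x* = 1
```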

  15. Neural Network Based Real-time Correction of Transducer Dynamic Errors

    Science.gov (United States)

    Roj, J.

    2013-12-01

    In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity when using the state variables. It is shown that such a real-time correction can be carried out using simple linear perceptrons. Due to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for a modeled second-order transducer. The most important properties of the neural dynamic error correction are discussed, emphasizing its fundamental advantages and disadvantages.
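    The underlying correction idea, stripped of the neural implementation, can be sketched for a first-order transducer tau*dy/dt + y = u: the input is reconstructed by solving the equation with respect to the input quantity. Unlike the paper's network, this sketch assumes the time constant tau is known:

```python
import numpy as np

# First-order transducer tau*dy/dt + y = u; reconstruct u from y.
tau, dt = 0.05, 0.001
t = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * 3 * t)               # true measurand

y = np.zeros_like(t)                         # simulate the sluggish sensor
for k in range(1, len(t)):
    y[k] = y[k-1] + dt * (u[k-1] - y[k-1]) / tau

u_hat = y + tau * np.gradient(y, dt)         # dynamic-error-corrected signal

# u_hat tracks u closely even though y itself is attenuated and lagged
```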

  16. Discriminating lysosomal membrane protein types using dynamic neural network.

    Science.gov (United States)

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology that classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. In this paper, a neural-network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, comprising seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman regression neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e., their output depends not only on the current input but also on previous outputs. The accuracy of the LRN classifier thus comes out to be the highest among all the artificial neural networks considered. The overall accuracy of jackknife cross-validation is 93.2% for the data set. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that the protein sequence representation can better reflect the core features of membrane proteins than the classical AA composition.
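    The dimensionality-reduction step mentioned above can be sketched with synthetic data (made-up dimensions, not the paper's protein features): center the feature matrix and project it onto its leading principal components via SVD.

```python
import numpy as np

# Synthetic feature matrix (100 samples x 400 features, rank 50) standing
# in for the fused protein-sequence features.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50)) @ rng.standard_normal((50, 400))

Xc = X - X.mean(axis=0)                      # centre the features
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 20
Z = Xc @ Vt[:k].T                            # reduced representation
var_kept = (S[:k]**2).sum() / (S**2).sum()   # fraction of variance retained
```

    The reduced matrix `Z` is what would be fed to the downstream classifiers in place of the raw feature vector.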

  17. Intelligent Energy Management Control for Extended Range Electric Vehicles Based on Dynamic Programming and Neural Network

    Directory of Open Access Journals (Sweden)

    Lihe Xi

    2017-11-01

    The extended range electric vehicle (EREV) can store much clean energy from the electric grid when it arrives at the charging station with low battery energy. Consuming minimum gasoline during the trip is a common goal for most energy management controllers. To achieve these objectives, an intelligent energy management controller for the EREV based on dynamic programming and neural networks (IEMC_NN) is proposed. The power demand split ratio between the extender and the battery is optimized by dynamic programming (DP), and the control objectives are presented as a cost function. The online controller is trained by neural networks. Three trained controllers, constituting the controller library in IEMC_NN, are obtained by training on three typical driving-cycle lengths. To determine an appropriate NN controller for different driving distances, the selection module in IEMC_NN is developed based on the remaining battery energy and the driving distance to the charging station. Three simulation conditions are adopted to validate the performance of IEMC_NN: known target driving distance, unknown target driving distance, and a destination changed during the trip. Simulation results under these conditions show that IEMC_NN achieves better fuel economy than the charge-depleting/charge-sustaining (CD/CS) algorithm. More significantly, with known driving distance information, the battery SOC controlled by IEMC_NN just reaches its lower bound as the EREV arrives at the charging station, which remains feasible even when the driver changes the destination during the trip.
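    The offline DP stage of such a controller can be sketched as a backward recursion over a battery state-of-charge grid; all numbers below are invented for illustration and are not the paper's vehicle model:

```python
import numpy as np

# Backward DP over a battery state-of-charge (SOC) grid, splitting each
# step's power demand between extender and battery.  Invented numbers.
P_dem = [20.0, 40.0, 30.0, 50.0, 10.0]       # kW demand per step
dt, cap = 0.1, 10.0                           # step length (h), battery (kWh)
soc_grid = np.linspace(0.2, 0.9, 71)          # feasible SOC range
eng_opts = np.linspace(0.0, 50.0, 26)         # extender power options (kW)

def fuel(p_eng):                              # toy fuel-rate model (L/step)
    return 0.0 if p_eng == 0 else 1.0 + 0.08 * p_eng

V = np.zeros(len(soc_grid))                   # value function (terminal: 0)
for p_d in reversed(P_dem):
    V_new = np.full(len(soc_grid), np.inf)
    for i, soc in enumerate(soc_grid):
        for p_e in eng_opts:
            soc_next = soc - (p_d - p_e) * dt / cap   # battery covers the rest
            if soc_next < 0.2 - 1e-9 or soc_next > 0.9 + 1e-9:
                continue                               # keep SOC in bounds
            cost = fuel(p_e) + np.interp(soc_next, soc_grid, V)
            V_new[i] = min(V_new[i], cost)
    V = V_new

# V[i] is the minimum total fuel from starting SOC soc_grid[i];
# a fuller battery never needs more fuel (V[-1] <= V[0])
```

    The optimal split ratios recovered from such a recursion are the supervised targets on which an online NN controller could then be trained.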

  18. Artificial neural networks for control of a grid-connected rectifier/inverter under disturbance, dynamic and power converter switching conditions.

    Science.gov (United States)

    Li, Shuhui; Fairbank, Michael; Johnson, Cameron; Wunsch, Donald C; Alonso, Eduardo; Proaño, Julio L

    2014-04-01

    Three-phase grid-connected converters are widely used in renewable and electric power system applications. Traditionally, grid-connected converters are controlled with standard decoupled d-q vector control mechanisms. However, recent studies indicate that such mechanisms show limitations in their applicability to dynamic systems. This paper investigates how to mitigate such restrictions using a neural network to control a grid-connected rectifier/inverter. The neural network implements a dynamic programming algorithm and is trained by using back-propagation through time. To enhance performance and stability under disturbance, additional strategies are adopted, including the use of integrals of error signals to the network inputs and the introduction of grid disturbance voltage to the outputs of a well-trained network. The performance of the neural-network controller is studied under typical vector control conditions and compared against conventional vector control methods, which demonstrates that the neural vector control strategy proposed in this paper is effective. Even in dynamic and power converter switching environments, the neural vector controller shows strong ability to trace rapidly changing reference commands, tolerate system disturbances, and satisfy control requirements for a faulted power system.
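    For reference, the d-q frame used by both the conventional and the neural controllers above comes from the Park transform: a balanced three-phase set maps to constant d and q components in the synchronous frame, which is what makes decoupled d-q control possible. A minimal sketch (amplitude-invariant convention assumed):

```python
import numpy as np

# Amplitude-invariant Park transform: balanced three-phase quantities
# become constant d-q components in the synchronous frame.
def park(ia, ib, ic, theta):
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)],
        [-np.sin(theta), -np.sin(theta - 2*np.pi/3), -np.sin(theta + 2*np.pi/3)],
    ])
    return T @ np.array([ia, ib, ic])

w = 2 * np.pi * 50                     # 50 Hz fundamental
t = np.linspace(0.0, 0.04, 200)        # two periods
ia = np.cos(w * t)
ib = np.cos(w * t - 2*np.pi/3)
ic = np.cos(w * t + 2*np.pi/3)
dq = np.array([park(a, b, c, th) for a, b, c, th in zip(ia, ib, ic, w * t)])

# d stays at 1 and q at 0 for this balanced, in-phase set
```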

  19. Neural Computations in a Dynamical System with Multiple Time Scales

    Directory of Open Access Journals (Sweden)

    Yuanyuan Mi

    2016-09-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at single neurons, and short-term facilitation (STF) and depression (STD) at neuronal synapses. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit is for the brain of having such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in their dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  20. Oscillatory phase dynamics in neural entrainment underpin illusory percepts of time.

    Science.gov (United States)

    Herrmann, Björn; Henry, Molly J; Grigutsch, Maren; Obleser, Jonas

    2013-10-02

    Neural oscillatory dynamics are a candidate mechanism to steer perception of time and temporal rate change. While oscillator models of time perception are strongly supported by behavioral evidence, a direct link to neural oscillations and oscillatory entrainment has not yet been provided. In addition, it has thus far remained unaddressed how context-induced illusory percepts of time are coded for in oscillator models of time perception. To investigate these questions, we used magnetoencephalography and examined the neural oscillatory dynamics that underpin pitch-induced illusory percepts of temporal rate change. Human participants listened to frequency-modulated sounds that varied over time in both modulation rate and pitch, and judged the direction of rate change (decrease vs increase). Our results demonstrate distinct neural mechanisms of rate perception: Modulation rate changes directly affected listeners' rate percept as well as the exact frequency of the neural oscillation. However, pitch-induced illusory rate changes were unrelated to the exact frequency of the neural responses. The rate change illusion was instead linked to changes in neural phase patterns, which allowed for single-trial decoding of percepts. That is, illusory underestimations or overestimations of perceived rate change were tightly coupled to increased intertrial phase coherence and changes in cerebro-acoustic phase lag. The results provide insight on how illusory percepts of time are coded for by neural oscillatory dynamics.
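    The intertrial phase coherence measure central to such analyses can be sketched directly: it is the magnitude of the across-trial mean of unit phase vectors at a frequency of interest (synthetic trials below, not the MEG data):

```python
import numpy as np

# Intertrial phase coherence (ITPC): magnitude of the across-trial mean
# of unit phase vectors at one frequency.  Synthetic trials, not MEG data.
rng = np.random.default_rng(1)
fs, f0, n = 200, 4.0, 400                    # Hz, Hz, samples (2 s)
t = np.arange(n) / fs

def itpc(trials, f0, fs):
    k = int(round(f0 * trials.shape[1] / fs))          # DFT bin at f0
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

# 50 phase-locked noisy trials vs. 50 trials with random phase
locked = np.array([np.cos(2*np.pi*f0*t) + 0.5*rng.standard_normal(n)
                   for _ in range(50)])
jittered = np.array([np.cos(2*np.pi*f0*t + rng.uniform(0, 2*np.pi))
                     for _ in range(50)])

# phase-locked trials give ITPC near 1; jittered trials give ITPC near 0
```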

  1. Dynamics of a neural system with a multiscale architecture

    Science.gov (United States)

    Breakspear, Michael; Stam, Cornelis J

    2005-01-01

    The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are ‘slaved’ to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448

  2. Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns.

    Directory of Open Access Journals (Sweden)

    Andrea Maesani

    2015-11-01

    The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs (locomotor bouts) matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior.

  3. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
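    The two-step recipe above (fit the vector field, then close the loop and integrate the model as a recurrent system) can be sketched with a linear least-squares fit standing in for the feedforward network:

```python
import numpy as np

# Step 1: "train" a model on samples of a known system's vector field
# (linear least squares stands in for the feedforward network).
rng = np.random.default_rng(0)
A_true = np.array([[0.0, 1.0],
                   [-1.0, 0.0]])              # harmonic oscillator dx/dt = A x

X = rng.uniform(-1.0, 1.0, size=(200, 2))     # sampled states
V = X @ A_true.T                               # vector-field targets
A_fit = np.linalg.lstsq(X, V, rcond=None)[0].T

# Step 2: recast as a recurrent system and integrate in continuous time
x, dt = np.array([1.0, 0.0]), 0.001
for _ in range(int(2 * np.pi / dt)):           # roll out one period
    x = x + dt * (A_fit @ x)

# the closed-loop rollout replicates the original dynamics:
# after one period the state returns near its starting point
```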

  4. Dynamic Neural State Identification in Deep Brain Local Field Potentials of Neuropathic Pain.

    Science.gov (United States)

    Luo, Huichun; Huang, Yongzhi; Du, Xueying; Zhang, Yunpeng; Green, Alexander L; Aziz, Tipu Z; Wang, Shouyan

    2018-01-01

    In neuropathic pain, the neurophysiological and neuropathological function of the ventro-posterolateral nucleus of the thalamus (VPL) and the periventricular gray/periaqueductal gray area (PVAG) involves multiple frequency oscillations. Moreover, oscillations related to pain perception and modulation change dynamically over time. Fluctuations in these neural oscillations reflect the dynamic neural states of the nucleus. In this study, an approach to classifying the synchronization level was developed to dynamically identify the neural states. An oscillation extraction model based on windowed wavelet packet transform was designed to characterize the activity level of oscillations. The wavelet packet coefficients sparsely represented the activity level of theta and alpha oscillations in local field potentials (LFPs). Then, a state discrimination model was designed to calculate an adaptive threshold to determine the activity level of oscillations. Finally, the neural state was represented by the activity levels of both theta and alpha oscillations. The relationship between neural states and pain relief was further evaluated. The performance of the state identification approach achieved sensitivity and specificity beyond 80% in simulation signals. Neural states of the PVAG and VPL were dynamically identified from LFPs of neuropathic pain patients. The occurrence of neural states based on theta and alpha oscillations were correlated to the degree of pain relief by deep brain stimulation. In the PVAG LFPs, the occurrence of the state with high activity levels of theta oscillations independent of alpha and the state with low-level alpha and high-level theta oscillations were significantly correlated with pain relief by deep brain stimulation. This study provides a reliable approach to identifying the dynamic neural states in LFPs with a low signal-to-noise ratio by using sparse representation based on wavelet packet transform. Furthermore, it may advance closed-loop deep
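    A toy version of the state-identification pipeline, with an FFT band filter and windowed RMS standing in for the windowed wavelet packet transform, and a midpoint threshold standing in for the paper's adaptive one:

```python
import numpy as np

# Synthetic LFP-like signal: a 6 Hz theta burst in the second half, in noise.
rng = np.random.default_rng(2)
fs, n = 250, 5000                              # 250 Hz sampling, 20 s
t = np.arange(n) / fs
sig = np.sin(2*np.pi*6*t) * (t >= 10) + 0.3 * rng.standard_normal(n)

# oscillation activity level: theta band (4-8 Hz) via a brick-wall FFT filter
spec = np.fft.rfft(sig)
freqs = np.fft.rfftfreq(n, 1.0/fs)
spec[(freqs < 4) | (freqs > 8)] = 0
band = np.fft.irfft(spec, n)

# discriminate states: RMS activity per 1-s window against a simple threshold
rms = np.sqrt(np.mean(band.reshape(-1, fs)**2, axis=1))
thr = 0.5 * (rms.min() + rms.max())
state = rms > thr                              # True = high theta activity
```

    The binary `state` sequence flips from low to high exactly when the burst begins, which is the kind of dynamic label the abstract relates to pain relief.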

  5. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI.

    Science.gov (United States)

    Yu, T; Sejnowski, T J; Cauwenberghs, G

    2011-10-01

    We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four-neuron, twelve-synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales, extending beyond 100 ms, which are neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show a transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate a transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike-frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.

  6. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.
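    The general pattern of mapping a constrained problem into an unconstrained one solved by iterative descent can be sketched with a quadratic penalty (the paper instead couples the objective with a feedforward network; this shows only the penalty/descent skeleton on a made-up problem):

```python
import numpy as np

# Quadratic-penalty reformulation solved by plain gradient descent:
#   minimise (x1-1)^2 + (x2-2)^2   subject to   x1 + x2 = 1
target = np.array([1.0, 2.0])
x = np.zeros(2)
mu = 1.0
while mu <= 1024:
    lr = 0.4 / (1.0 + 2.0 * mu)               # step size kept stable as mu grows
    for _ in range(500):
        s = x[0] + x[1] - 1.0                  # constraint violation
        x -= lr * (2.0 * (x - target) + 2.0 * mu * s)
    mu *= 2.0                                   # tighten the penalty

# x approaches the constrained minimiser (0, 1) as mu grows
```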

  7. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.

  8. Development of an accident diagnosis system using a dynamic neural network for nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jong Hyun; Seong, Poong Hyun

    2004-01-01

    In this work, an accident diagnosis system using a dynamic neural network is developed. Many operator support and accident diagnosis systems have been developed to help plant operators quickly identify a problem, perform diagnosis and initiate recovery actions that ensure the safety of the plant. Neural networks have been recognized as a good method for implementing an accident diagnosis system. However, conventional accident diagnosis systems based on neural networks did not sufficiently consider the time factor. If the neural network can be trained over time, more efficient and detailed accident analysis becomes possible. This work therefore proposes a dynamic neural network with features different from those of existing dynamic neural networks, and a simple accident diagnosis system is implemented to validate it. After training of the prototype, several accident diagnoses were performed. The results show that the prototype can detect the accidents correctly with good performance.

  9. Sensitivity analysis of linear programming problem through a recurrent neural network

    Science.gov (United States)

    Das, Raja

    2017-11-01

    In this paper we study a recurrent neural network for solving linear programming problems. An algorithm is presented that achieves optimality in both accuracy and computational effort. We investigate the sensitivity analysis of the linear programming problem through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.
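
    The gradient-flow view of such a recurrent network can be sketched on a toy linear program (a hedged illustration using a penalty formulation and Euler-discretized dynamics, not the paper's exact model; the coefficients are invented for the example). Perturbing the right-hand side b and re-solving gives a crude numerical sensitivity check:

```python
import numpy as np

# Toy LP:  min c.x  s.t.  x0 + x1 <= b,  x >= 0,  with c = (-1, -2).
def solve_lp(b, mu=200.0, lr=0.001, steps=40000):
    c = np.array([-1.0, -2.0])
    x = np.zeros(2)
    for _ in range(steps):
        g = max(0.0, x[0] + x[1] - b)          # inequality-constraint violation
        grad = c + 2.0 * mu * g + 2.0 * mu * np.minimum(x, 0.0)
        x -= lr * grad                         # Euler step of the neural dynamics
    return x

x_star = solve_lp(1.0)   # near (0, 1): all mass on the cheaper coefficient
x_pert = solve_lp(1.1)   # relaxing b moves the optimum to near (0, 1.1)
```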

  10. Standard representation and unified stability analysis for dynamic artificial neural network models.

    Science.gov (United States)

    Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D

    2018-02-01

    An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions. Copyright © 2017. Published by Elsevier Ltd.

  11. A novel recurrent neural network with finite-time convergence for linear programming.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.

  12. Nonlinear identification of process dynamics using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.F.; Chong, K.T.

    1992-01-01

    In this paper the nonlinear identification of process dynamics encountered in nuclear power plant components is addressed, in an input-output sense, using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the model structure to be identified. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard backpropagation learning algorithm is modified, and it is used for the supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The response of a representative steam generator is predicted using a neural network, and it is compared to the response obtained from a sophisticated computer model based on first principles. The transient responses compare well, although further research is warranted to determine the predictive capabilities of these networks during more severe operational transients and accident scenarios.

  13. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions

    Science.gov (United States)

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-01-01

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708

  14. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions.

    Science.gov (United States)

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-07-24

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150-200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300-350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual-motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions.

  15. Hamiltonian-Driven Adaptive Dynamic Programming for Continuous Nonlinear Dynamical Systems.

    Science.gov (United States)

    Yang, Yongliang; Wunsch, Donald; Yin, Yixin

    2017-08-01

    This paper presents a Hamiltonian-driven framework of adaptive dynamic programming (ADP) for continuous-time nonlinear systems, which consists of evaluation of an admissible control, comparison between two different admissible policies with respect to the corresponding performance function, and performance improvement of an admissible control. It is shown that the Hamiltonian can serve as the temporal difference for continuous-time systems. In the Hamiltonian-driven ADP, the critic network is trained to output the value gradient. Then, the inner product between the critic and the system dynamics produces the value derivative. Under some conditions, the minimization of the Hamiltonian functional is equivalent to the value function approximation. An iterative algorithm starting from an arbitrary admissible control is presented for the optimal control approximation, together with its convergence proof. The implementation is accomplished by a neural network approximation. Two simulation studies demonstrate the effectiveness of Hamiltonian-driven ADP.

  16. Adaptive Dynamic Programming for Control Algorithms and Stability

    CERN Document Server

    Zhang, Huaguang; Luo, Yanhong; Wang, Ding

    2013-01-01

    There are many methods of stable controller design for nonlinear systems. In seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming for Control approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive; affine, switched, singularly perturbed and time-delay nonlinear systems are discussed as are the uses of neural networks and techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization and for tracking and games benefit from the incorporation of optimal control methods: • infinite-horizon control for which the difficulty of solving partial differential Hamilton–Jacobi–Bellman equations directly is overcome, and proof provided that the iterative value function updating sequence converges to the infimum of all the value functions obtained by admissible control law sequences; • finite-...

  17. Nonlinear Dynamics and Chaos in Fractional-Order Hopfield Neural Networks with Delay

    Directory of Open Access Journals (Sweden)

    Xia Huang

    2013-01-01

    Full Text Available A fractional-order two-neuron Hopfield neural network with delay is proposed based on the classic well-known Hopfield neural networks, and further, the complex dynamical behaviors of such a network are investigated. A great variety of interesting dynamical phenomena, including single-periodic, multiple-periodic, and chaotic motions, are found to exist. The existence of chaotic attractors is verified by the bifurcation diagram and phase portraits as well.

  18. Linear programming based on neural networks for radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Xingen Wu; Limin Luo

    2000-01-01

    In this paper, we propose a neural network model for linear programming that is designed to optimize radiotherapy treatment planning (RTP). This kind of neural network can be easily implemented by a kind of 'neural' electronic system in order to obtain an optimization solution in real time. We first give an introduction to the RTP problem and construct an unconstrained objective function for the neural network model. We adopt a gradient algorithm to minimize the objective function and design the structure of the neural network for RTP. Compared to traditional linear programming methods, this neural network model can reduce the time needed for convergence, the size of the problem (i.e., the number of variables to be searched) and the number of extra slack and surplus variables needed. We obtained a set of optimized beam weights that result in a better dose distribution as compared to that obtained using the simplex algorithm under the same initial condition. The example presented in this paper shows that this model is feasible in three-dimensional RTP. (author)
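
    The gradient minimization of the beam-weight objective can be sketched as projected gradient descent on a least-squares dose objective (a hypothetical dose matrix D and prescription p, not the paper's data or exact objective function):

```python
import numpy as np

# Hypothetical dose matrix (dose per unit beam weight at 3 points) and prescription.
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
p = np.array([1.0, 2.0, 3.0])
w = np.zeros(2)                           # beam weights
for _ in range(2000):
    grad = 2.0 * D.T @ (D @ w - p)        # gradient of ||D w - p||^2
    w = np.maximum(w - 0.1 * grad, 0.0)   # project onto w >= 0 each step
# Here w converges to about (1, 2), which reproduces the prescription p exactly.
```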

  19. A Neural Network Approach to Fluid Quantity Measurement in Dynamic Environments

    CERN Document Server

    Terzic, Edin; Nagarajah, Romesh; Alamgir, Muhammad

    2012-01-01

    Sloshing causes liquid to fluctuate, making accurate level readings difficult to obtain in dynamic environments. The measurement system described uses a single-tube capacitive sensor to obtain an instantaneous level reading of the fluid surface, thereby accurately determining the fluid quantity in the presence of slosh. A neural network based classification technique has been applied to predict the actual quantity of the fluid contained in a tank under sloshing conditions.   In A neural network approach to fluid quantity measurement in dynamic environments, effects of temperature variations and contamination on the capacitive sensor are discussed, and the authors propose that these effects can also be eliminated with the proposed neural network based classification system. To examine the performance of the classification system, many field trials were carried out on a running vehicle at various tank volume levels that range from 5 L to 50 L. The effectiveness of signal enhancement on the neural network base...

  20. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
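
    The error-correction idea can be illustrated on a toy integrator: learn the per-step error of explicit Euler on dy/dt = -y, whose exact step is y·exp(-h), then apply the learned correction inside the solver (a sketch, not the NASA programs used in the study; a one-parameter linear "network" suffices here because the Euler step error is proportional to y):

```python
import numpy as np

h = 0.1
ys = np.linspace(0.1, 1.0, 50)           # training states
err = ys * np.exp(-h) - ys * (1.0 - h)   # exact step minus Euler step
k = np.sum(err * ys) / np.sum(ys * ys)   # least-squares fit of err ~ k*y

y, y_exact = 1.0, np.exp(-1.0)           # integrate dy/dt = -y from y(0)=1 to t=1
for _ in range(10):
    y = y * (1.0 - h) + k * y            # Euler step plus learned correction
# y now matches exp(-1) far more closely than uncorrected Euler would.
```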

  1. Soft tissue deformation modelling through neural dynamics-based reaction-diffusion mechanics.

    Science.gov (United States)

    Zhang, Jinao; Zhong, Yongmin; Gu, Chengfan

    2018-05-30

    Soft tissue deformation modelling forms the basis of development of surgical simulation, surgical planning and robotic-assisted minimally invasive surgery. This paper presents a new methodology for modelling of soft tissue deformation based on reaction-diffusion mechanics via neural dynamics. The potential energy stored in soft tissues due to a mechanical load to deform tissues away from their rest state is treated as the equivalent transmembrane potential energy, and it is distributed in the tissue masses in the manner of reaction-diffusion propagation of nonlinear electrical waves. The reaction-diffusion propagation of mechanical potential energy and nonrigid mechanics of motion are combined to model soft tissue deformation and its dynamics, both of which are further formulated as the dynamics of cellular neural networks to achieve real-time computational performance. The proposed methodology is implemented with a haptic device for interactive soft tissue deformation with force feedback. Experimental results demonstrate that the proposed methodology exhibits nonlinear force-displacement relationship for nonlinear soft tissue deformation. Homogeneous, anisotropic and heterogeneous soft tissue material properties can be modelled through the inherent physical properties of mass points. Graphical abstract Soft tissue deformation modelling with haptic feedback via neural dynamics-based reaction-diffusion mechanics.

  2. A class of convergent neural network dynamics

    Science.gov (United States)

    Fiedler, Bernold; Gedeon, Tomáš

    1998-01-01

    We consider a class of systems of differential equations in R^n which exhibits convergent dynamics. We find a Lyapunov function and show that every bounded trajectory converges to the set of equilibria. Our result generalizes the results of Cohen and Grossberg (1983) for convergent neural networks. It replaces the symmetry assumption on the matrix of weights by the assumption on the structure of the connections in the neural network. We prove the convergence result also for a large class of Lotka-Volterra systems. These are naturally defined on the closed positive orthant. We show that there are no heteroclinic cycles on the boundary of the positive orthant for the systems in this class.

  3. Dynamic simulation of a steam generator by neural networks

    International Nuclear Information System (INIS)

    Masini, R.; Padovani, E.; Ricotti, M.E.; Zio, E.

    1999-01-01

    Numerical simulation by computers of the dynamic evolution of complex systems and components is a fundamental phase of any modern engineering design activity. This is of particular importance for risk-based design projects, which require that the system behavior be analyzed under several, often extreme, conditions. Traditional methods of simulation typically entail long, iterative processes which lead to large simulation times, often exceeding the real time of the transients. Artificial neural networks (ANNs) may be exploited in this context, their advantages residing mainly in the speed of computation, the capability of generalizing from few examples, the robustness to noisy and partially incomplete data and the capability of performing empirical input-output mapping without complete knowledge of the underlying physics. In this paper we present a novel approach to dynamic simulation by ANNs based on a superposition scheme in which a set of networks are individually trained, each one to respond to a different input forcing function. The dynamic simulation of a steam generator is considered as an example to show the potentialities of this tool and to point out the difficulties and crucial issues which typically arise when attempting to establish an efficient neural network simulator. The structure of the network system is such that, at each time step, a portion of the past evolution of the transient is fed back, which allows a good reproduction of non-linear dynamic behaviors as well. A nice characteristic of the approach is that the modularization of the training substantially reduces its burden and gives this neural simulation tool a nice feature of transportability. (orig.)

  4. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (Tp:I→I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  5. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (Tp:I→I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  6. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (Tp:I→I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
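
    Independently of the RBF-network encoding studied in these records, the single-step operator Tp itself is easy to state in code. A minimal sketch for a small definite program (the clauses are invented for the example) iterates Tp from the empty interpretation up to its least fixed point, the steady state the recurrent network is driven toward:

```python
# A definite logic program as (head, body) pairs; clauses invented for the example.
program = [("b", []), ("c", ["b"]), ("a", ["b", "c"])]

def tp(interp):
    """Tp(I): heads of all clauses whose bodies are true under interpretation I."""
    return {head for head, body in program if all(atom in interp for atom in body)}

# Iterate Tp from the empty interpretation to its least fixed point.
interp = set()
while tp(interp) != interp:
    interp = tp(interp)
# interp is now {"a", "b", "c"}, the least model of the program.
```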

  7. Neural Architectures for Control

    Science.gov (United States)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
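
    The coarse-resolution planning stage can be illustrated with plain dynamic programming over a small cost grid, where the cell costs stand in for the CMAC-estimated transition costs (a sketch with invented costs and moves restricted to right/down for brevity, not the multipass algorithm of the report):

```python
# Invented cell costs; best[i][j] is the cheapest cost of reaching cell (i, j)
# from the top-left corner moving only right or down.
cost = [[1, 3, 1],
        [1, 5, 1],
        [4, 2, 1]]
n, m = len(cost), len(cost[0])
best = [[0] * m for _ in range(n)]
for i in range(n):
    for j in range(m):
        candidates = []
        if i > 0:
            candidates.append(best[i - 1][j])   # arrive from above
        if j > 0:
            candidates.append(best[i][j - 1])   # arrive from the left
        best[i][j] = cost[i][j] + (min(candidates) if candidates else 0)
# best[-1][-1] == 7, the cheapest coarse path (1 -> 3 -> 1 -> 1 -> 1).
```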

  8. Dynamic Pricing in Electronic Commerce Using Neural Network

    Science.gov (United States)

    Ghose, Tapu Kumar; Tran, Thomas T.

    In this paper, we propose an approach where a feed-forward neural network is used for dynamically calculating a competitive price of a product in order to maximize sellers’ revenue. In the approach we consider that, along with product price, other attributes such as product quality, delivery time, after-sales service and seller’s reputation contribute to consumers’ purchase decisions. We show that once the sellers, using their limited prior knowledge, set an initial price for a product, our model adjusts the price automatically with the help of the neural network so that sellers’ revenue is maximized.

  9. A neural network approach to the study of dynamics and structure of molecular systems

    International Nuclear Information System (INIS)

    Getino, C.; Sumpter, B.G.; Noid, D.W.

    1994-01-01

    Neural networks are used to study intramolecular energy flow in molecular systems (tetratomics to macromolecules), developing new techniques for efficient analysis of data obtained from molecular-dynamics and quantum mechanics calculations. Neural networks can map phase space points to intramolecular vibrational energies along a classical trajectory (example of complicated coordinate transformation), producing reasonably accurate values for any region of the multidimensional phase space of a tetratomic molecule. Neural network energy flow predictions are found to significantly enhance the molecular-dynamics method to longer time-scales and extensive averaging of trajectories for macromolecular systems. Pattern recognition abilities of neural networks can be used to discern phase space features. Neural networks can also expand model calculations by interpolation of costly quantum mechanical ab initio data, used to develop semiempirical potential energy functions

  10. Neural-Network Object-Recognition Program

    Science.gov (United States)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  11. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.
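
    The role of pseudoconvexity can be seen on a scalar fractional objective: f(x) = (x^2 + 1)/x on x > 0 is nonconvex but pseudoconvex, so simple gradient dynamics still reach the global minimum x* = 1 with f(x*) = 2 (a one-dimensional sketch, not the portfolio model itself):

```python
# f(x) = (x^2 + 1) / x on x > 0: nonconvex, but pseudoconvex, minimum at x = 1.
x = 5.0
for _ in range(5000):
    grad = 1.0 - 1.0 / (x * x)   # derivative of (x^2 + 1) / x
    x -= 0.01 * grad             # gradient dynamics still find the global minimum
```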

  12. Chaos Synchronization Using Adaptive Dynamic Neural Network Controller with Variable Learning Rates

    Directory of Open Access Journals (Sweden)

    Chih-Hong Kao

    2011-01-01

    Full Text Available This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to approximate an ideal controller online. The DRBF network can create new hidden neurons online if the input data fall outside the coverage of the hidden layer, and prune insignificant hidden neurons online when they become inappropriate. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, the variable learning rates of the parameter adaptation laws are derived based on a discrete-type Lyapunov function to speed up the convergence rate of the tracking error. Finally, simulation results verify that two identical nonlinear chaotic gyros can be synchronized using the proposed ADNNC scheme.

  13. Dynamic Learning from Adaptive Neural Control of Uncertain Robots with Guaranteed Full-State Tracking Precision

    Directory of Open Access Journals (Sweden)

    Min Wang

    2017-01-01

    Full Text Available A dynamic learning method is developed for an uncertain n-link robot with unknown system dynamics, achieving predefined performance attributes on the link angular position and velocity tracking errors. For a known nonsingular initial robotic condition, performance functions and unconstrained transformation errors are employed to prevent the violation of the full-state tracking error constraints. By combining two independent Lyapunov functions and a radial basis function (RBF) neural network (NN) approximator, a novel and simple adaptive neural control scheme is proposed for the dynamics of the unconstrained transformation errors, which guarantees uniform ultimate boundedness of all the signals in the closed-loop system. In the steady-state control process, RBF NNs are verified to satisfy the partial persistent excitation (PE) condition. Subsequently, an appropriate state transformation is adopted to achieve the accurate convergence of neural weight estimates. The corresponding experienced knowledge on unknown robotic dynamics is stored in NNs with constant neural weight values. Using the stored knowledge, a static neural learning controller is developed to improve the full-state tracking performance. A comparative simulation study on a 2-link robot illustrates the effectiveness of the proposed scheme.

  14. Neural network for solving convex quadratic bilevel programming problems.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. An Online Energy Management Control for Hybrid Electric Vehicles Based on Neuro-Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Feiyan Qin

    2018-03-01

    Hybrid electric vehicles (HEVs) are a compromise between traditional vehicles and pure electric vehicles and can be part of the solution to the energy shortage problem. Energy management strategies (EMSs) strongly influence an HEV's fuel economy. In this research, we employ a neuro-dynamic programming (NDP) method to simultaneously optimize fuel economy and battery state of charge (SOC). In this NDP method, the critic network is a multi-resolution wavelet neural network (MRWNN) based on the Meyer wavelet function, and the action network is a conventional wavelet neural network (CWNN) based on the Morlet function. The weights and parameters of both networks are obtained by a backpropagation-type algorithm. The NDP-based EMS has been applied to a parallel HEV and compared with a previously reported NDP EMS and a stochastic dynamic programming-based method. Simulation results under ADVISOR2002 show that the proposed NDP approach achieves better performance than both methods, indicating that the proposed NDP EMS, together with the CWNN and MRWNN, is effective in approximating a nonlinear system.

  16. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Disney, Adam [University of Tennessee (UT)]; Reynolds, John [University of Tennessee (UT)]

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  17. Linking dynamic patterns of neural activity in orbitofrontal cortex with decision making.

    Science.gov (United States)

    Rich, Erin L; Stoll, Frederic M; Rudebeck, Peter H

    2018-04-01

    Humans and animals demonstrate extraordinary flexibility in choice behavior, particularly when deciding based on subjective preferences. We evaluate options on different scales, deliberate, and often change our minds. Little is known about the neural mechanisms that underlie these dynamic aspects of decision-making, although neural activity in orbitofrontal cortex (OFC) likely plays a central role. Recent evidence from studies in macaques shows that attention modulates value responses in OFC, and that ensembles of OFC neurons dynamically signal different options during choices. When contexts change, these ensembles flexibly remap to encode the new task. Determining how these dynamic patterns emerge and relate to choices will inform models of decision-making and OFC function. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. An Artificial Neural Network Based Short-term Dynamic Prediction of Algae Bloom

    Directory of Open Access Journals (Sweden)

    Yao Junyang

    2014-06-01

    This paper proposes a method for short-term prediction of algae blooms based on an artificial neural network. First, principal component analysis is applied to water environmental factors in algae bloom raceway ponds to identify the main factors that influence the formation of algae blooms. Then, a short-term dynamic prediction model based on a neural network is built, with the current chlorophyll_a values as input and the chlorophyll_a values at the next moment as output, to realize short-term dynamic prediction of algae blooms. Simulation results show that the model can effectively realize short-term prediction of algae blooms.
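
    The current-value-in, next-value-out setup described above amounts to turning the chlorophyll_a series into supervised one-step-ahead pairs. A minimal sketch (the readings and lag are illustrative):

```python
import numpy as np

def make_one_step_pairs(series, lag=1):
    """Turn a time series into (input, target) pairs for one-step-ahead prediction:
    inputs are windows of `lag` past values, targets the value at the next moment."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    return X, y

chl = [2.1, 2.3, 2.8, 3.6, 4.9, 6.5]    # illustrative chlorophyll_a readings
X, y = make_one_step_pairs(chl, lag=1)
# X[k] holds the current value, y[k] the value at the next moment;
# these pairs would train the prediction network.
```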

  19. Asymmetrically extremely dilute neural networks with Langevin dynamics and unconventional results

    International Nuclear Information System (INIS)

    Hatchett, J P L; Coolen, A C C

    2004-01-01

    We study graded response attractor neural networks with asymmetrically extremely dilute interactions and Langevin dynamics. We solve our model in the thermodynamic limit using generating functional analysis, and find (in contrast to the binary neurons case) that even in statics, for T > 0 or large α, one cannot eliminate the non-persistent order parameters, atypically for recurrent neural network models. The macroscopic dynamics is driven by the (non-trivial) joint distribution of neurons and fields, rather than just the (Gaussian) field distribution. We calculate phase transition lines and find, as may be expected for this asymmetric model, that there is no spin-glass phase, only recall and paramagnetic phases. We present simulation results in support of our theory.

  20. A new neural network model for solving random interval linear programming problems.

    Science.gov (United States)

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Introduction to dynamic programming

    CERN Document Server

    Cooper, Leon; Rodin, E Y

    1981-01-01

    Introduction to Dynamic Programming provides information pertinent to the fundamental aspects of dynamic programming. This book considers problems that can be quantitatively formulated and deals with mathematical models of situations or phenomena that exist in the real world. Organized into 10 chapters, this book begins with an overview of the fundamental components of any mathematical optimization model. This text then presents the details of the application of dynamic programming to variational problems. Other chapters consider the application of dynamic programming to inventory theory, Mark

  2. Global neural dynamic surface tracking control of strict-feedback systems with application to hypersonic flight vehicle.

    Science.gov (United States)

    Xu, Bin; Yang, Chenguang; Pan, Yongping

    2015-10-01

    This paper studies both indirect and direct global neural control of strict-feedback systems in the presence of unknown dynamics, using the dynamic surface control (DSC) technique in a novel manner. A new switching mechanism is designed to combine an adaptive neural controller in the neural approximation domain, together with the robust controller that pulls the transient states back into the neural approximation domain from the outside. In comparison with the conventional control techniques, which could only achieve semiglobally uniformly ultimately bounded stability, the proposed control scheme guarantees all the signals in the closed-loop system are globally uniformly ultimately bounded, such that the conventional constraints on initial conditions of the neural control system can be relaxed. The simulation studies of hypersonic flight vehicle (HFV) are performed to demonstrate the effectiveness of the proposed global neural DSC design.

  3. Robust fault detection of wind energy conversion systems based on dynamic neural networks.

    Science.gov (United States)

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    The occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior; by comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, using a comprehensive dynamic model that contains both the mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is suggested. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme detects faults promptly and has very low false-alarm and missed-alarm rates.
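
    The residual-comparison step with an adaptive threshold can be sketched as follows. This is a generic illustration, not the paper's detector: the model output is faked with a sine wave, the fault is an injected sensor offset, and the threshold is the running mean of the residual plus k standard deviations over a sliding window:

```python
import numpy as np

def detect_faults(measured, model_out, k=3.0, window=20):
    """Flag samples where |measured - model| exceeds an adaptive threshold:
    running mean + k * running std of the recent residual."""
    residual = np.abs(measured - model_out)
    flags = np.zeros(len(residual), dtype=bool)
    for t in range(len(residual)):
        lo = max(0, t - window)
        if t - lo < 2:
            continue                        # not enough history for a threshold yet
        mu, sigma = residual[lo:t].mean(), residual[lo:t].std()
        flags[t] = residual[t] > mu + k * sigma
    return flags

rng = np.random.default_rng(1)
model_out = np.sin(np.linspace(0, 6, 200))          # stand-in for the neural model output
measured = model_out + 0.01 * rng.standard_normal(200)
measured[150:] += 0.5                                # injected sensor fault after sample 150
flags = detect_faults(measured, model_out)
```

    The adaptive threshold tracks the normal residual level, which is what keeps false alarms low when the modeling error varies with the operating point.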

  4. A Recurrent Neural Network for Nonlinear Fractional Programming

    Directory of Open Access Journals (Sweden)

    Quan-Ju Zhang

    2012-01-01

    This paper presents a novel recurrent time-continuous neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results further demonstrate the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.

  5. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  6. Artificial Neural Networks for Nonlinear Dynamic Response Simulation in Mechanical Systems

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Høgsberg, Jan Becker; Winther, Ole

    2011-01-01

    It is shown how artificial neural networks can be trained to predict the dynamic response of a simple nonlinear structure. Data generated using a nonlinear finite element model of a simplified wind turbine is used to train a one-layer artificial neural network. When trained properly, the network is able to perform accurate response prediction much faster than the corresponding finite element model. Initial results indicate a reduction in CPU time by two orders of magnitude.

  7. A comparison between wavelet based static and dynamic neural network approaches for runoff prediction

    Science.gov (United States)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.; Khan, Mudasser Muneer

    2016-04-01

    In order to predict runoff accurately from a rainfall event, multilayer perceptron neural network models are commonly used in hydrology. Furthermore, wavelet coupled multilayer perceptron neural network (MLPNN) models have also been found superior to simple neural network models that are not coupled with wavelets. However, MLPNN models are considered static, memoryless networks and lack the ability to examine the temporal dimension of data. Recurrent neural network models, on the other hand, have the ability to learn from the preceding conditions of the system and hence are considered dynamic models. This study for the first time explores the potential of wavelet coupled time lagged recurrent neural network (TLRNN) models for runoff prediction using rainfall data. The Discrete Wavelet Transformation (DWT) is employed to decompose the input rainfall data using six of the most commonly used wavelet functions. The performance of the simple and the wavelet coupled static MLPNN models is compared with that of their counterpart dynamic TLRNN models. The study found that the dynamic wavelet coupled TLRNN models can be considered an alternative to the static wavelet MLPNN models. The study also investigated the effect of memory depth on the performance of the static and dynamic neural network models; the memory depth refers to how much past information (lagged data) is required, as it is not known a priori. The db8 wavelet function is found to yield the best results with the static MLPNN models and with the TLRNN models having small memory depths. The performance of the wavelet coupled TLRNN models with large memory depths is found to be insensitive to the selection of the wavelet function, as all wavelet functions have similar performance.
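
    The "wavelet coupled" input preparation splits the rainfall series into approximation and detail sub-signals before they reach the network. The study uses wavelets such as db8; the sketch below uses the simpler Haar wavelet (one decomposition level, no wavelet library) just to show the splitting step:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar discrete wavelet transform: split a signal into
    approximation (low-pass) and detail (high-pass) coefficient halves."""
    x = np.asarray(x, dtype=float)
    assert len(x) % 2 == 0, "length must be even for one Haar level"
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

rain = np.array([3.0, 5.0, 4.0, 8.0, 2.0, 2.0, 7.0, 1.0])  # illustrative rainfall
approx, detail = haar_dwt_level(rain)
# `approx` and `detail` (rather than the raw rainfall) feed the network inputs
```

    The transform is orthonormal, so the two halves carry exactly the energy of the original signal split by frequency band, which is why the decomposition adds information the network can exploit without distorting the input.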

  8. Models of neural dynamics in brain information processing - the developments of 'the decade'

    International Nuclear Information System (INIS)

    Borisyuk, G N; Borisyuk, R M; Kazanovich, Yakov B; Ivanitskii, Genrikh R

    2002-01-01

    Neural network models are discussed that have been developed during the last decade with the purpose of reproducing spatio-temporal patterns of neural activity in different brain structures. The main goal of the modeling was to test hypotheses of synchronization, temporal and phase relations in brain information processing. The models being considered are those of temporal structure of spike sequences, of neural activity dynamics, and oscillatory models of attention and feature integration. (reviews of topical problems)

  9. Context-dependent retrieval of information by neural-network dynamics with continuous attractors.

    Science.gov (United States)

    Tsuboshita, Yukihiro; Okamoto, Hiroshi

    2007-08-01

    Memory retrieval in neural networks has traditionally been described by dynamic systems with discrete attractors. However, recent neurophysiological findings of graded persistent activity suggest that memory retrieval in the brain is more likely to be described by dynamic systems with continuous attractors. To explore what sort of information processing is achieved by continuous-attractor dynamics, keyword extraction from documents by a network of bistable neurons, which gives robust continuous attractors, is examined. Given an associative network of terms, a continuous attractor led by propagation of neuronal activation in this network appears to represent keywords that express underlying meaning of a document encoded in the initial state of the network-activation pattern. A dominant hypothesis in cognitive psychology is that long-term memory is archived in the network structure, which resembles associative networks of terms. Our results suggest that keyword extraction by the neural-network dynamics with continuous attractors might symbolically represent context-dependent retrieval of short-term memory from long-term memory in the brain.
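
    A minimal bistable rate unit (not the paper's exact network) shows the graded-persistent-activity mechanism the abstract invokes: after a transient input, the unit settles at a nonzero fixed point instead of decaying back to rest.

```python
import numpy as np

def simulate_bistable(pulse, T=400, dt=0.05, gain=2.0):
    """Euler-integrate dx/dt = -x + tanh(gain * x) + I(t) for a single rate unit.
    With gain > 1 the unit is bistable: activity persists after the pulse ends."""
    x = 0.0
    for t in range(T):
        I = pulse if t < 100 else 0.0      # transient input, then silence
        x += dt * (-x + np.tanh(gain * x) + I)
    return x

rest = simulate_bistable(pulse=0.0)    # never stimulated: stays at the resting state
held = simulate_bistable(pulse=1.0)    # pulsed: settles at the elevated fixed point
```

    Chaining many such units through an associative network is what lets activation propagate from the document's initial pattern and remain on the terms that the continuous attractor selects as keywords.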

  10. Parameter estimation of breast tumour using dynamic neural network from thermal pattern

    Directory of Open Access Journals (Sweden)

    Elham Saniei

    2016-11-01

    This article presents a new approach for estimating the depth, size, and metabolic heat generation rate of a tumour. For this purpose, the surface temperature distribution of a breast thermal image and a dynamic neural network were used. The research consisted of two steps: forward and inverse. For the forward section, a finite element model was created and the Pennes bio-heat equation was solved to find the surface and depth temperature distributions. Data from the analysis were then used to train the dynamic neural network (DNN) model. Results from the DNN training/testing confirmed those of the finite element model. For the inverse section, the trained neural network was applied to estimate the depth temperature distribution (tumour position) from the surface temperature profile extracted from the thermal image. Finally, the tumour parameters were obtained from the depth temperature distribution. Experimental findings (20 patients) were promising in terms of the model's potential for retrieving tumour parameters.

  11. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Directory of Open Access Journals (Sweden)

    Daniel Durstewitz

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover
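
    The generative model being estimated can be simulated directly. The sketch below uses a common PLRNN form, z_t = A z_{t-1} + W φ(z_{t-1}) + h with φ = ReLU and linear observations x_t = B z_t; all parameter values are illustrative, not fitted:

```python
import numpy as np

def plrnn_simulate(A, W, h, B, z0, T, noise=0.0, rng=None):
    """Simulate a piecewise-linear RNN: z_t = A z_{t-1} + W relu(z_{t-1}) + h (+ noise),
    with linear observations x_t = B z_t."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = np.array(z0, dtype=float)
    Z, X = [], []
    for _ in range(T):
        z = A @ z + W @ np.maximum(z, 0.0) + h + noise * rng.standard_normal(len(z))
        Z.append(z.copy())
        X.append(B @ z)
    return np.array(Z), np.array(X)

d, n = 3, 2                        # latent and observed dimensions (illustrative)
A = 0.8 * np.eye(d)                # stable diagonal linear part
W = 0.1 * np.ones((d, d)) - 0.2 * np.eye(d)   # small piecewise-linear coupling
h = np.array([0.1, -0.05, 0.02])
B = np.ones((n, d)) / d            # linear readout
Z, X = plrnn_simulate(A, W, h, B, z0=np.zeros(d), T=200)
```

    In the paper's setting the EM algorithm runs this model in reverse: given observed `X`, it infers the latent trajectory `Z` and the parameters (A, W, h, B) by maximum likelihood.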

  12. Two-photon imaging and analysis of neural network dynamics

    International Nuclear Information System (INIS)

    Luetcke, Henry; Helmchen, Fritjof

    2011-01-01

    The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.

  13. Two-photon imaging and analysis of neural network dynamics

    Science.gov (United States)

    Lütcke, Henry; Helmchen, Fritjof

    2011-08-01

    The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.

  14. Two-photon imaging and analysis of neural network dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Luetcke, Henry; Helmchen, Fritjof [Brain Research Institute, University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich (Switzerland)

    2011-08-15

    The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.

  15. Gr-GDHP: A New Architecture for Globalized Dual Heuristic Dynamic Programming.

    Science.gov (United States)

    Zhong, Xiangnan; Ni, Zhen; He, Haibo

    2017-10-01

    A goal representation globalized dual heuristic dynamic programming (Gr-GDHP) method is proposed in this paper. A goal neural network is integrated into the traditional GDHP method, providing an internal reinforcement signal and its derivatives to help the control and learning process. It is shown that in the proposed architecture the internal reinforcement signal and its derivatives can adjust themselves online over time rather than being a fixed or predefined function as in the literature. Furthermore, the obtained derivatives can directly contribute to the objective function of the critic network, whose learning process is thus simplified. Numerical simulation studies show the performance of the proposed Gr-GDHP method and compare the results with other existing adaptive dynamic programming designs. We also investigate this method on a ball-and-beam balancing system. Statistical simulation results are presented for both the Gr-GDHP and GDHP methods to demonstrate the improved learning and control performance.

  16. Neural basis for dynamic updating of object representation in visual working memory.

    Science.gov (United States)

    Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun

    2010-02-15

    In the real world, objects have multiple features and change dynamically; thus, object representations must satisfy dynamic updating and feature binding. Previous studies have investigated the neural activity of dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and the dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation in dynamic updating of feature-bound objects, we identified a network during memory maintenance comprised of the inferior precentral sulcus, superior parietal lobule, and middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating the object representation of dynamically moving objects, the inferior precentral sulcus closely cooperates with the so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and those sensitive to feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested. Copyright 2009 Elsevier Inc. All rights reserved.

  17. Forecasting influenza-like illness dynamics for military populations using neural networks and social media.

    Directory of Open Access Journals (Sweden)

    Svitlana Volkova

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data and state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTM) units capable of nowcasting (predicting in "real time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data, thus

  18. Forecasting influenza-like illness dynamics for military populations using neural networks and social media.

    Science.gov (United States)

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D

    2017-01-01

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data and state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTM) units capable of nowcasting (predicting in "real time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data, thus, signals from
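
    The LSTM unit at the core of such models is compact enough to write out. The NumPy forward step below is a generic textbook cell, not the authors' trained architecture; the feature dimension, hidden size, and random weights are placeholders:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM cell step. Wx: (4H, D), Wh: (4H, H), b: (4H,);
    gate order in the stacked weights: input, forget, output, candidate."""
    H = len(h)
    z = Wx @ x + Wh @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:])
    c_new = f * c + i * g           # gated update of the memory cell
    h_new = o * np.tanh(c_new)      # exposed hidden state
    return h_new, c_new

D, H = 5, 8                         # e.g. 5 weekly social-media features, 8 hidden units
rng = np.random.default_rng(0)
Wx = rng.normal(0, 0.1, (4 * H, D))
Wh = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):  # run a 10-step feature sequence through the cell
    h, c = lstm_step(x, h, c, Wx, Wh, b)
# a linear readout on the final h would give the next interval's ILI estimate
```

    The cell state `c` is what carries information across weeks; the gates decide what to keep, which is why LSTMs suit the nowcasting/forecasting setup better than memoryless regression.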

  19. Dynamic neural network-based methods for compensation of nonlinear effects in multimode communication lines

    Science.gov (United States)

    Sidelnikov, O. S.; Redyuk, A. A.; Sygletos, S.

    2017-12-01

    We consider neural network-based schemes of digital signal processing. It is shown that the use of a dynamic neural network-based scheme of signal processing ensures an increase in the optical signal transmission quality in comparison with that provided by other methods for nonlinear distortion compensation.

  20. Complex Dynamical Network Control for Trajectory Tracking Using Delayed Recurrent Neural Networks

    Directory of Open Access Journals (Sweden)

    Jose P. Perez

    2014-01-01

    Full Text Available In this paper, the problem of trajectory tracking is studied. Based on the V-stability and Lyapunov theory, a control law that achieves the global asymptotic stability of the tracking error between a delayed recurrent neural network and a complex dynamical network is obtained. To illustrate the analytic results, we present a tracking simulation of a dynamical network whose nodes comprise one Lorenz dynamical system and three identical Chen dynamical systems.

  1. Dynamics and genetic fuzzy neural network vibration control design of a smart flexible four-bar linkage mechanism

    International Nuclear Information System (INIS)

    Rong Bao; Rui Xiaoting; Tao Ling

    2012-01-01

    In this paper, a dynamic modeling method and an active vibration control scheme for a smart flexible four-bar linkage mechanism featuring piezoelectric actuators and strain gauge sensors are presented. The dynamics of this smart mechanism are described by the Discrete Time Transfer Matrix Method of Multibody System (MS-DTTMM). Then a nonlinear fuzzy neural network control is employed to suppress the vibration of this smart mechanism. For improving the dynamic performance of the fuzzy neural network, a genetic algorithm based on the MS-DTTMM is designed offline to tune the initial parameters of the fuzzy neural network. The MS-DTTMM avoids the global dynamics equations of the system, so the matrices involved remain very small and the computational efficiency of the dynamic analysis and control system optimization can be greatly improved. Formulations of the method as well as a numerical simulation are given to demonstrate the proposed dynamic method and control scheme.

  2. Predicting physical time series using dynamic ridge polynomial neural networks.

    Directory of Open Access Journals (Sweden)

    Dhiya Al-Jumeily

    Full Text Available Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks.

  3. The dynamic brain: from spiking neurons to neural masses and cortical fields.

    Directory of Open Access Journals (Sweden)

    Gustavo Deco

    2008-08-01

    Full Text Available The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain: the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the

  4. Adaptive dynamic inversion robust control for BTT missile based on wavelet neural network

    Science.gov (United States)

    Li, Chuanfeng; Wang, Yongji; Deng, Zhixiang; Wu, Hao

    2009-10-01

    A new nonlinear control strategy incorporating the dynamic inversion method with wavelet neural networks is presented for the nonlinear coupling system of a Bank-to-Turn (BTT) missile in the reentry phase. The basic control law is designed by using the dynamic inversion feedback linearization method, and an online learning wavelet neural network is used to compensate for the inversion error due to aerodynamic parameter errors, modeling imprecision and external disturbance, in view of the time-frequency localization properties of the wavelet transform. Weight adjusting laws are derived according to Lyapunov stability theory, which can guarantee the boundedness of all signals in the whole system. Furthermore, robust stability of the closed-loop system under this tracking law is proved. Finally, the six-degree-of-freedom (6DOF) simulation results have shown that the attitude angles can track the anticipated commands precisely in the presence of external disturbance and parameter uncertainty. This means that the dependence of the dynamic inversion method on the model is reduced and the robustness of the control system is enhanced by using the wavelet neural network (WNN) to reconstruct the inversion error on-line.

  5. Lukasiewicz-Topos Models of Neural Networks, Cell Genome and Interactome Nonlinear Dynamic Models

    CERN Document Server

    Baianu, I C

    2004-01-01

    A categorical and Lukasiewicz-Topos framework for Lukasiewicz Algebraic Logic models of nonlinear dynamics in complex functional systems such as neural networks, genomes and cell interactomes is proposed. Lukasiewicz Algebraic Logic models of genetic networks and signaling pathways in cells are formulated in terms of nonlinear dynamic systems with n-state components that allow for the generalization of previous logical models of both genetic activities and neural networks. An algebraic formulation of variable 'next-state functions' is extended to a Lukasiewicz Topos with an n-valued Lukasiewicz Algebraic Logic subobject classifier description that represents non-random and nonlinear network activities as well as their transformations in developmental processes and carcinogenesis.

  6. Robustness analysis of uncertain dynamical neural networks with multiple time delays.

    Science.gov (United States)

    Senan, Sibel

    2015-10-01

    This paper studies the problem of global robust asymptotic stability of the equilibrium point for the class of dynamical neural networks with multiple time delays with respect to the class of slope-bounded activation functions and in the presence of the uncertainties of system parameters of the considered neural network model. By using an appropriate Lyapunov functional and exploiting the properties of the homeomorphism mapping theorem, we derive a new sufficient condition for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for the class of neural networks with multiple time delays. The obtained stability condition basically relies on testing some relationships imposed on the interconnection matrices of the neural system, which can be easily verified by using some certain properties of matrices. An instructive numerical example is also given to illustrate the applicability of our result and show the advantages of this new condition over the previously reported corresponding results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. DO DYNAMIC NEURAL NETWORKS STAND A BETTER CHANCE IN FRACTIONALLY INTEGRATED PROCESS FORECASTING?

    Directory of Open Access Journals (Sweden)

    Majid Delavari

    2013-04-01

    Full Text Available The main purpose of the present study was to investigate the capabilities of two generations of models: those based on dynamic neural networks (e.g., the Nonlinear Neural network Auto Regressive, or NNAR, model) and regressive models based on the fractional integration approach (the Auto Regressive Fractionally Integrated Moving Average, or ARFIMA, model) in forecasting daily data related to the return index of the Tehran Stock Exchange (TSE). In order to compare these models under similar conditions, Mean Square Error (MSE) and Root Mean Square Error (RMSE) were selected as criteria for the models' simulated out-of-sample forecasting performance. Besides, the fractal markets hypothesis was examined, and according to the findings, a fractal structure was confirmed to exist in the time series under investigation. Another finding of the study was that the dynamic artificial neural network model had the best out-of-sample forecasting performance, based on the criteria introduced for calculating forecasting error, in comparison with the ARFIMA model.

  8. Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances

    International Nuclear Information System (INIS)

    Fiete, Ila R.; Seung, H. Sebastian

    2006-01-01

    We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source.
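    The perturbation principle described above can be caricatured, far from the paper's conductance-based spiking setting, as plain weight perturbation on a static objective: random perturbations are applied and their correlation with the resulting objective fluctuations estimates the gradient. All names and constants below are illustrative:

```python
import numpy as np

def perturbation_gradient(f, w, sigma=1e-3, n_trials=2000, seed=0):
    """Estimate grad f(w) by correlating random perturbations xi
    with the resulting change in the objective (a simplified analog
    of the conductance-perturbation rule in the record above)."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(w)
    f0 = f(w)
    for _ in range(n_trials):
        xi = rng.normal(0, sigma, size=w.shape)
        g += (f(w + xi) - f0) * xi      # fluctuation times perturbation
    return g / (n_trials * sigma**2)

# check against the analytic gradient of a quadratic objective
f = lambda w: np.sum(w**2)
w = np.array([1.0, -2.0, 0.5])
g_est = perturbation_gradient(f, w)
print(g_est)    # close to the analytic gradient 2*w
```

For a quadratic objective the estimate converges to the analytic gradient 2w as the number of trials grows.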

  9. The simplest problem in the collective dynamics of neural networks: is synchrony stable?

    International Nuclear Information System (INIS)

    Timme, Marc; Wolf, Fred

    2008-01-01

    For spiking neural networks we consider the stability problem of global synchrony, arguably the simplest non-trivial collective dynamics in such networks. We find that even this simplest dynamical problem—local stability of synchrony—is non-trivial to solve and requires novel methods for its solution. In particular, the discrete mode of pulsed communication together with the complicated connectivity of neural interaction networks requires a non-standard approach. The dynamics in the vicinity of the synchronous state is determined by a multitude of linear operators, in contrast to a single stability matrix in conventional linear stability theory. This unusual property qualitatively depends on network topology and may be neglected for globally coupled homogeneous networks. For generic networks, however, the number of operators increases exponentially with the size of the network. We present methods to treat this multi-operator problem exactly. First, based on the Gershgorin and Perron–Frobenius theorems, we derive bounds on the eigenvalues that provide important information about the synchronization process but are not sufficient to establish the asymptotic stability or instability of the synchronous state. We then present a complete analysis of asymptotic stability for topologically strongly connected networks using simple graph-theoretical considerations. For inhibitory interactions between dissipative (leaky) oscillatory neurons the synchronous state is stable, independent of the parameters and the network connectivity. These results indicate that pulse-like interactions play a profound role in network dynamical systems, and in particular in the dynamics of biological synchronization, unless the coupling is homogeneous and all-to-all. The concepts introduced here are expected to also facilitate the exact analysis of more complicated dynamical network states, for instance the irregular balanced activity in cortical neural networks.

  10. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, from which simpler analytic descriptions are also developed. The final analytic descriptions provide compact formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  11. Direct heuristic dynamic programming for damping oscillations in a large power system.

    Science.gov (United States)

    Lu, Chao; Si, Jennie; Xie, Xiaorong

    2008-08-01

    This paper applies a neural-network-based approximate dynamic programming method, namely, the direct heuristic dynamic programming (direct HDP), to a large power system stability control problem. The direct HDP is a learning- and approximation-based approach to addressing nonlinear coordinated control under uncertainty. One of the major design parameters, the controller learning objective function, is formulated to directly account for network-wide low-frequency oscillation with the presence of nonlinearity, uncertainty, and coupling effect among system components. Results include a novel learning control structure based on the direct HDP with applications to two power system problems. The first case involves static var compensator supplementary damping control, which is used to provide a comprehensive evaluation of the learning control performance. The second case aims at addressing a difficult complex system challenge by providing a new solution to a large interconnected power network oscillation damping control problem that frequently occurs in the China Southern Power Grid.

  12. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    Science.gov (United States)

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at t to t + 1 by a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded in our neural network model several prototype attractors that correspond to simple motions of the object oriented toward several directions in two-dimensional space. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate of this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to the dynamical structure.

  13. Semi-empirical neural network models of controlled dynamical systems

    Directory of Open Access Journals (Sweden)

    Mihail V. Egorchev

    2017-12-01

    Full Text Available A simulation approach is discussed for maneuverable aircraft motion as a nonlinear controlled dynamical system under multiple and diverse uncertainties, including imperfect knowledge of the simulated plant and its environment. The suggested approach is based on merging theoretical knowledge of the plant with the training tools of the artificial neural network field. The efficiency of this approach is demonstrated using the example of motion modeling and identification of the aerodynamic characteristics of a maneuverable aircraft. A semi-empirical recurrent neural network based model learning algorithm is proposed for the multi-step-ahead prediction problem. This algorithm sequentially states and solves numerical optimization subproblems of increasing complexity, using each solution as the initial guess for the subsequent subproblem. We also consider a procedure for acquiring a representative training set that utilizes multisine control signals.

  14. Data Driven Broiler Weight Forecasting using Dynamic Neural Network Models

    DEFF Research Database (Denmark)

    Johansen, Simon Vestergaard; Bendtsen, Jan Dimon; Riisgaard-Jensen, Martin

    2017-01-01

    In this article, the dynamic influence of environmental broiler house conditions on broiler growth is investigated. Dynamic neural network forecasting models have been trained on farm-scale broiler batch production data from 12 batches from the same house. The model forecasts future broiler weight...... and uses environmental conditions such as heating, ventilation, and temperature along with broiler behavior such as feed and water consumption. Training data and forecasting data are analyzed to explain when the model might fail at generalizing. We present ensemble broiler weight forecasts to day 7, 14, 21...

  15. Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control

    Science.gov (United States)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1997-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed such that, once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or the nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  16. Models of neural dynamics in brain information processing - the developments of 'the decade'

    Energy Technology Data Exchange (ETDEWEB)

    Borisyuk, G N; Borisyuk, R M; Kazanovich, Yakov B [Institute of Mathematical Problems of Biology, Russian Academy of Sciences, Pushchino, Moscow region (Russian Federation); Ivanitskii, Genrikh R [Institute for Theoretical and Experimental Biophysics, Russian Academy of Sciences, Pushchino, Moscow region (Russian Federation)

    2002-10-31

    Neural network models are discussed that have been developed during the last decade with the purpose of reproducing spatio-temporal patterns of neural activity in different brain structures. The main goal of the modeling was to test hypotheses of synchronization, temporal and phase relations in brain information processing. The models being considered are those of temporal structure of spike sequences, of neural activity dynamics, and oscillatory models of attention and feature integration. (reviews of topical problems)

  17. Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.

    Science.gov (United States)

    Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong

    2015-03-01

    This brief considers the problem of neural networks (NNs)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown and nonlinear functions of PMSM drive system and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the designed neural controllers structure is much simpler than some existing results in literature, which can guarantee that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique.

  18. Neural networks dynamic hysteresis model for piezoceramic actuator based on hysteresis operator of first-order differential equation

    International Nuclear Information System (INIS)

    Dang Xuanju; Tan Yonghong

    2005-01-01

    A new neural networks dynamic hysteresis model for a piezoceramic actuator is proposed by combining the Preisach model with diagonal recurrent neural networks. The Preisach model is based on elementary rate-independent operators and is not suitable for modeling a piezoceramic actuator across a wide frequency band because of the rate-dependent hysteresis characteristic of the piezoceramic actuator. The structure of the developed model is based on the structure of the Preisach model, in which the rate-independent relay hysteresis operators (cells) are replaced by rate-dependent hysteresis operators of a first-order differential equation. The diagonal recurrent neural networks, modified by an adjustable factor, can be used to model the hysteresis behavior of the piezoceramic actuator because their structure is similar to the structure of the modified Preisach model. Therefore, the proposed model not only retains the properties of the Preisach model, but can also describe the actuator's dynamic hysteresis behavior. Through the experimental results of both approximation and prediction, the effectiveness of the neural networks dynamic hysteresis model for the piezoceramic actuator is demonstrated.
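    For background, the classical rate-independent Preisach model that the record above starts from is a weighted sum of relay operators, each switching up at a threshold alpha and down at a lower threshold beta. The sketch below shows only this baseline with illustrative thresholds and weights (the paper's rate-dependent first-order operators are not reproduced):

```python
import numpy as np

def preisach(u_seq, alphas, betas, weights):
    """Classical rate-independent Preisach model: a weighted sum of
    relay operators, each switching up at alpha and down at beta.
    The relays hold their state between switchings, which is what
    produces hysteresis."""
    state = -np.ones_like(weights)      # all relays start 'down'
    out = []
    for u in u_seq:
        state = np.where(u >= alphas, 1.0, state)
        state = np.where(u <= betas, -1.0, state)
        out.append(np.sum(weights * state))
    return np.array(out)

# relays with beta < alpha spread over the input range
alphas = np.array([0.2, 0.4, 0.6, 0.8])
betas = alphas - 0.3
weights = np.ones(4) / 4
up = preisach(np.linspace(-1, 1, 50), alphas, betas, weights)
down = preisach(np.concatenate([np.linspace(-1, 1, 50),
                                np.linspace(1, -1, 50)]),
                alphas, betas, weights)
# hysteresis: at the same input value the ascending and descending
# branches give different outputs
```

Because the relays are rate-independent, the loop traced by `up` versus the second half of `down` does not depend on how fast the input sweeps, which is exactly the limitation the paper addresses.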

  19. A dynamic neural field model of temporal order judgments.

    Science.gov (United States)

    Hecht, Lauren N; Spencer, John P; Vecera, Shaun P

    2015-12-01

    Temporal ordering of events is biased, or influenced, by perceptual organization (figure-ground organization) and by spatial attention. For example, within a region assigned figural status or at an attended location, onset events are processed earlier (Lester, Hecht, & Vecera, 2009; Shore, Spence, & Klein, 2001), and offset events are processed for longer durations (Hecht & Vecera, 2011; Rolke, Ulrich, & Bausenhart, 2006). Here, we present an extension of a dynamic field model of change detection (Johnson, Spencer, Luck, & Schöner, 2009; Johnson, Spencer, & Schöner, 2009) that accounts for both the onset and offset performance for figural and attended regions. The model posits that neural populations processing the figure are more active, resulting in a peak of activation that quickly builds toward a detection threshold when the onset of a target is presented. This same enhanced activation for some neural populations is maintained when a present target is removed, creating delays in the perception of the target's offset. We discuss the broader implications of this model, including insights regarding how neural activation can be generated in response to the disappearance of information. (c) 2015 APA, all rights reserved.
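    The onset and offset effects described above can be caricatured with a single leaky accumulator and a fixed detection threshold, a deliberately reduced stand-in for the full dynamic field model (all parameter values are hypothetical): extra baseline drive for a figural/attended region makes activation cross threshold earlier at onset and stay above threshold longer after offset.

```python
def threshold_crossings(drive_boost, theta=0.8, tau=10.0, dt=0.1,
                        t_on=100.0, t_total=200.0):
    """Leaky accumulator u' = (-u + I)/tau driven by a target input
    plus a constant 'figural/attentional' boost while the target is
    present. Returns (onset detection time, offset detection time):
    when u first rises above and then falls back below theta."""
    u = 0.0
    t_detect_on = None
    t_detect_off = None
    for k in range(int(t_total / dt)):
        t = k * dt
        I = (1.0 + drive_boost) if t < t_on else 0.0
        u += dt * (-u + I) / tau
        if t_detect_on is None and u >= theta:
            t_detect_on = t
        if t >= t_on and u < theta and t_detect_off is None:
            t_detect_off = t
    return t_detect_on, t_detect_off

on_fig, off_fig = threshold_crossings(drive_boost=0.3)   # figural/attended
on_gnd, off_gnd = threshold_crossings(drive_boost=0.0)   # ground/unattended
print(on_fig < on_gnd, off_fig > off_gnd)   # earlier onset, later offset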

  20. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    Science.gov (United States)

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in the process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back-propagation used in training the weights is limited to a certain memory length (in the examples we consider, 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.
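    A drastically simplified stand-in for the paper's neural calibration, retaining only the idea of time-dependent parameters updated continuously from a bounded memory (here exponential forgetting in place of a learning network; the return series is synthetic):

```python
import numpy as np

def ewma_calibration(returns, lam=0.94):
    """Exponentially weighted online estimates of the drift and
    variance of a return series: time-dependent parameters that
    adapt as new observations arrive, unlike a single static
    calibration over the whole history."""
    mu, var = returns[0], 0.0
    mus, variances = [], []
    for r in returns:
        mu = lam * mu + (1 - lam) * r               # adaptive drift
        var = lam * var + (1 - lam) * (r - mu)**2   # adaptive variance
        mus.append(mu)
        variances.append(var)
    return np.array(mus), np.array(variances)

rng = np.random.default_rng(1)
# synthetic returns whose drift switches sign halfway through
r = np.concatenate([rng.normal(0.01, 0.01, 500),
                    rng.normal(-0.01, 0.01, 500)])
mus, variances = ewma_calibration(r)
print(mus[499] > 0, mus[-1] < 0)   # the estimate tracks the regime change
```

A static mean over the full history would blur the two regimes together; the adaptive estimate flips sign after the drift changes, which is the behavior the paper's neural calibration generalizes.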

  1. Dynamical networks: Finding, measuring, and tracking neural population activity using network science

    Directory of Open Access Journals (Sweden)

    Mark D. Humphries

    2017-12-01

    Full Text Available Systems neuroscience is in a headlong rush to record from as many neurons at the same time as possible. As the brain computes and codes using neuron populations, it is hoped these data will uncover the fundamentals of neural computation. But with hundreds, thousands, or more simultaneously recorded neurons come the inescapable problems of visualizing, describing, and quantifying their interactions. Here I argue that network science provides a set of scalable, analytical tools that already solve these problems. By treating neurons as nodes and their interactions as links, a single network can visualize and describe an arbitrarily large recording. I show that with this description we can quantify the effects of manipulating a neural circuit, track changes in population dynamics over time, and quantitatively define theoretical concepts of neural populations such as cell assemblies. Using network science as a core part of analyzing population recordings will thus provide both qualitative and quantitative advances to our understanding of neural computation.

  2. Emerging phenomena in neural networks with dynamic synapses and their computational implications

    Directory of Open Access Journals (Sweden)

    Joaquin J. eTorres

    2013-04-01

    Full Text Available In this paper we review our research on the effect and computational role of dynamical synapses in feed-forward and recurrent neural networks. Among other findings, we report the appearance of a new class of dynamical memories which result from the destabilisation of learned memory attractors. This has important consequences for dynamic information processing, allowing the system to sequentially access the information stored in the memories under changing stimuli. Although the storage capacity of stable memories also decreases, our study demonstrated the positive effect of synaptic facilitation in recovering maximum storage capacity and enlarging the capacity of the system for memory recall in noisy conditions. Possibly, the new dynamical behaviour can be associated with the voltage transitions between up and down states observed in cortical areas in the brain. We investigated the conditions under which the permanence times in the up state are power-law distributed, which is a sign of criticality, and concluded that the experimentally observed large variability of permanence times could be explained as the result of noisy dynamic synapses with large recovery times. Finally, we report how short-term synaptic processes can transmit weak signals throughout more than one frequency range in noisy neural networks, displaying a kind of stochastic multi-resonance. This effect is due to competition between activity-dependent synaptic fluctuations (due to dynamic synapses) and the existence of a neuron firing threshold which adapts to the incoming mean synaptic input.

  3. Satisfiability of logic programming based on radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged; Choon, Ong Hong

    2014-01-01

    In this paper, we propose a new technique to test the satisfiability of propositional logic programming and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic in which each clause has exactly three variables. We used the Prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean of the sum squared error function is used to measure the performance of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.

  4. Satisfiability of logic programming based on radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    In this paper, we propose a new technique to test the satisfiability of propositional logic programming and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic with exactly three variables in each clause. We used the Prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean sum squared error function is used to measure the activity of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.
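The record above combines K-means (for the hidden centers and widths) with a separate fit of the output weights. The forward pass of such an RBF network can be sketched in a few lines; all names and parameter values here are illustrative, not taken from the paper:

```python
import math

def rbf_output(x, centers, widths, weights):
    """Forward pass of a radial basis function (RBF) network.

    Each hidden unit applies a Gaussian kernel around its center;
    the network output is a weighted sum of the hidden activations.
    """
    hidden = []
    for c, w in zip(centers, widths):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        hidden.append(math.exp(-dist2 / (2.0 * w ** 2)))
    return sum(wk * hk for wk, hk in zip(weights, hidden))

# In the paper's setup, centers and widths would come from K-means
# clustering of the inputs, and the output weights would then be
# fitted (there, by the Prey-predator algorithm).
centers = [(0.0, 0.0), (1.0, 1.0)]
widths = [0.5, 0.5]
weights = [1.0, -1.0]
y = rbf_output((0.0, 0.0), centers, widths, weights)
```

The split matters: once the hidden layer is fixed by clustering, the output weights enter linearly, which is what makes metaheuristic or least-squares fitting of only the weights tractable.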

  5. SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.

    Science.gov (United States)

    Jimenez-Romero, Cristian; Johnson, Jeffrey

    2017-01-01

    The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-and-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time-consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. The accomplishment of the above-mentioned tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment Netlogo (educational software that simplifies the study and experimentation of complex systems). The engine proposed and implemented in Netlogo for the simulation of a functional model of SNN is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
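The engine in the record above is a simplification of integrate-and-fire (I&F) models. The core of a leaky I&F neuron can be sketched as follows (a generic textbook update, not the Netlogo engine itself; all parameter values are illustrative):

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    Membrane dynamics: dV/dt = (-(V - v_rest) + I) / tau.
    When V crosses v_thresh the neuron spikes and V resets.
    Returns the list of time steps at which spikes occurred.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# Constant suprathreshold input produces regular firing;
# zero input produces no spikes.
spike_times = simulate_lif([1.5] * 100)
```

Everything else the record mentions (STDP, synaptic delay) is layered on top of this basic membrane update.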

  6. Introduction to stochastic dynamic programming

    CERN Document Server

    Ross, Sheldon M; Lukacs, E

    1983-01-01

    Introduction to Stochastic Dynamic Programming presents the basic theory and examines the scope of applications of stochastic dynamic programming. The book begins with a chapter on various finite-stage models, illustrating the wide range of applications of stochastic dynamic programming. Subsequent chapters study infinite-stage models: discounting future returns, minimizing nonnegative costs, maximizing nonnegative returns, and maximizing the long-run average return. Each of these chapters first considers whether an optimal policy need exist-providing counterexamples where appropriate-and the

  7. Artificial Neural Network-Based Early-Age Concrete Strength Monitoring Using Dynamic Response Signals.

    Science.gov (United States)

    Kim, Junkyeong; Lee, Chaggil; Park, Seunghee

    2017-06-07

    Concrete is one of the most common materials used to construct a variety of civil infrastructures. However, since concrete might be susceptible to brittle fracture, it is essential to confirm the strength of concrete at the early-age stage of the curing process to prevent unexpected collapse. To address this issue, this study proposes a novel method to estimate the early-age strength of concrete, by integrating an artificial neural network algorithm with a dynamic response measurement of the concrete material. The dynamic response signals of the concrete, including both electromechanical impedances and guided ultrasonic waves, are obtained from an embedded piezoelectric sensor module. The cross-correlation coefficient of the electromechanical impedance signals and the amplitude of the guided ultrasonic wave signals are selected to quantify the variation in dynamic responses according to the strength of the concrete. Furthermore, an artificial neural network algorithm is used to verify a relationship between the variation in dynamic response signals and concrete strength. The results of an experimental study confirm that the proposed approach can be effectively applied to estimate the strength of concrete material from the early-age stage of the curing process.
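The record above quantifies impedance-signature change with a cross-correlation coefficient. The exact formula is not given in this record, so the standard Pearson form is assumed in this sketch:

```python
def cross_correlation(a, b):
    """Normalised cross-correlation (Pearson) coefficient of two
    equal-length, non-constant signals.

    Returns 1.0 for signals identical up to positive scale and offset;
    the value drops as the signals diverge, which is the property used
    to track changes between successive impedance signatures.
    """
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# A perfectly scaled copy correlates at 1.0; a reversed signal is negative.
cc = cross_correlation([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```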

  8. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  9. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-06

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  10. Dynamics of continuous-time bidirectional associative memory neural networks with impulses and their discrete counterparts

    International Nuclear Information System (INIS)

    Huo Haifeng; Li Wantong

    2009-01-01

    This paper is concerned with the global stability characteristics of a system of equations modelling the dynamics of continuous-time bidirectional associative memory neural networks with impulses. Sufficient conditions which guarantee the existence of a unique equilibrium and its exponential stability are obtained. For the purpose of computation, discrete-time analogues of the corresponding continuous-time bidirectional associative memory neural networks with impulses are also formulated and studied. Our results show that, when some modifications are made and some additional conditions are imposed on the systems, both the continuous-time and discrete-time systems with impulses preserve the dynamics of the networks without impulses; in particular, the convergence characteristics of the networks are preserved by both systems under some restrictions on the impulse effect.

  11. Dynamic neural networking as a basis for plasticity in the control of heart rate.

    Science.gov (United States)

    Kember, G; Armour, J A; Zamir, M

    2013-01-21

    A model is proposed in which the relationship between individual neurons within a neural network changes dynamically, providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for an interplay among the three populations of neurons that alters the order of control of heart rate among them. This interplay among the three levels of control allows different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions and, as such, it has significant clinical implications, because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Resolution enhancement in neural networks with dynamical synapses

    Directory of Open Access Journals (Sweden)

    C. C. Alan Fung

    2013-06-01

    Full Text Available Conventionally, information is represented by spike rates in the neural system. Here, we consider the ability of temporally modulated activities in neuronal networks to carry information beyond spike rates. These temporal modulations, commonly known as population spikes, are due to the presence of synaptic depression in a neuronal network model. We discuss their relevance to an experiment on transparent motions in macaque monkeys by Treue et al. in 2000. They found that if the moving directions of the objects are too close, the firing rate profile is very similar to that for a single direction. When the difference in the moving directions of the objects is large enough, the neuronal system responds in such a way that the network enhances the resolution of the objects' moving directions. In this paper, we propose that this behavior can be reproduced by neural networks with dynamical synapses when there are multiple external inputs. We demonstrate how resolution enhancement can be achieved, and discuss the conditions under which temporally modulated activities are able to enhance information processing performance in general.
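The population spikes discussed above arise from activity-dependent synaptic depression. A common minimal model tracks a fraction of available synaptic resources in the style of Tsodyks and Markram; the paper's exact equations are not given in this record, and all parameter values here are illustrative:

```python
def depressing_synapse(spike_train, dt=1.0, tau_rec=200.0, use_fraction=0.5):
    """Resource model of a depressing synapse.

    Each presynaptic spike transmits use_fraction of the currently
    available resources x and consumes them; x then recovers toward 1
    with time constant tau_rec. Returns the transmitted efficacy at
    each time step.
    """
    x = 1.0
    efficacies = []
    for spiked in spike_train:
        if spiked:
            efficacies.append(use_fraction * x)  # amount transmitted
            x -= use_fraction * x                # resources consumed
        else:
            efficacies.append(0.0)
        x += dt * (1.0 - x) / tau_rec            # slow recovery
    return efficacies

# A burst of spikes yields progressively weaker transmission,
# the fatigue that produces population-spike-like modulation.
eff = depressing_synapse([1, 1, 1, 1] + [0] * 50)
```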

  13. Dynamic Programming Foundations and Principles

    CERN Document Server

    Sniedovich, Moshe

    2010-01-01

    Focusing on the modeling and solution of deterministic multistage decision problems, this book looks at dynamic programming as a problem-solving optimization method. With over 400 useful references, this edition discusses the dynamic programming analysis of a problem, illustrates the rationale behind this analysis, and clarifies the theoretical grounds that justify the rationale. It also explains the meaning and role of the concept of state in dynamic programming, examines the purpose and function of the principle of optimality, and outlines solution strategies for problems defiant of conventi
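The principle of optimality that the book examines can be illustrated on a small multistage shortest-path problem: the optimal cost-to-go at any node depends only on optimal costs downstream, so one backward sweep solves the whole problem. A minimal sketch (the staged-graph encoding is mine, not the book's notation):

```python
def shortest_path_dp(stage_costs):
    """Backward dynamic programming on a multistage graph.

    stage_costs[k][i][j] is the cost of moving from node i in stage k
    to node j in stage k+1. Returns the minimal total cost from node 0
    of the first stage to the final stage. The recursion embodies the
    principle of optimality: each value depends only on optimal
    values one stage downstream.
    """
    n_last = len(stage_costs[-1][0])
    value = [0.0] * n_last  # cost-to-go at the final stage
    for costs in reversed(stage_costs):
        value = [min(c + value[j] for j, c in enumerate(row))
                 for row in costs]
    return value[0]

# Two stages: start -> {A, B} with costs 1 and 4,
# then {A, B} -> goal with costs 2 and 1.
best = shortest_path_dp([
    [[1.0, 4.0]],
    [[2.0], [1.0]],
])  # min(1+2, 4+1) = 3
```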

  14. Music enrichment programs improve the neural encoding of speech in at-risk children.

    Science.gov (United States)

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis

    2014-09-03

    Musicians are often reported to have enhanced neurophysiological functions, especially in the auditory system. Musical training is thought to improve nervous system function by focusing attention on meaningful acoustic cues, and these improvements in auditory processing cascade to language and cognitive skills. Correlational studies have reported musician enhancements in a variety of populations across the life span. In light of these reports, educators are considering the potential for co-curricular music programs to provide auditory-cognitive enrichment to children during critical developmental years. To date, however, no studies have evaluated biological changes following participation in existing, successful music education programs. We used a randomized control design to investigate whether community music participation induces a tangible change in auditory processing. The community music training was a longstanding and successful program that provides free music instruction to children from underserved backgrounds who stand at high risk for learning and social problems. Children who completed 2 years of music training had a stronger neurophysiological distinction of stop consonants, a neural mechanism linked to reading and language skills. One year of training was insufficient to elicit changes in nervous system function; beyond 1 year, however, greater amounts of instrumental music training were associated with larger gains in neural processing. We therefore provide the first direct evidence that community music programs enhance the neural processing of speech in at-risk children, suggesting that active and repeated engagement with sound changes neural function. Copyright © 2014 the authors 0270-6474/14/3411913-06$15.00/0.

  15. Multiplex visibility graphs to investigate recurrent neural network dynamics

    Science.gov (United States)

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-03-01

    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and, typically, is based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
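The horizontal visibility graph used above connects two time points whenever every intermediate value lies strictly below both endpoints. A minimal sketch of the construction (quadratic-time for clarity; faster algorithms exist):

```python
def horizontal_visibility_graph(series):
    """Edge list of the horizontal visibility graph of a time series.

    Nodes i and j (i < j) are connected iff every intermediate value
    is strictly below both endpoints:
        series[k] < min(series[i], series[j])  for all i < k < j.
    Adjacent points are therefore always connected.
    """
    edges = []
    n = len(series)
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

# A peak sees both neighbours, but the neighbours cannot
# see each other through it.
edges = horizontal_visibility_graph([1.0, 3.0, 2.0])
```

In the paper's setting one such graph is built per neuron activation series, and the per-neuron graphs become layers of a multiplex whose topology is then analysed.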

  16. Predictive Modeling of Mechanical Properties of Welded Joints Based on Dynamic Fuzzy RBF Neural Network

    Directory of Open Access Journals (Sweden)

    ZHANG Yongzhi

    2016-10-01

    Full Text Available A dynamic fuzzy RBF neural network model was built to predict the mechanical properties of welded joints, with the aim of overcoming the shortcomings of static neural networks in structural identification, dynamic sample training and the learning algorithm. The structure and parameters of the model are not fixed in advance but are adaptively adjusted during training, making the model suitable for learning from dynamic sample data. The learning algorithm introduces hierarchical learning and a fuzzy rule pruning strategy to accelerate training and make the model more compact. The model was evaluated using TIG welding test data for TC4 titanium alloy of three different thicknesses and different process parameters. The results show that the model has high prediction accuracy, is suitable for predicting the mechanical properties of welded joints, and opens up a new way for on-line control of the welding process.

  17. Simulation of sensory integration dysfunction in autism with dynamic neural fields model

    NARCIS (Netherlands)

    Chonnaparamutt, W.; Barakova, E.I.; Rutkowski, L.; Taseusiewicz, R.

    2008-01-01

    This paper applies the dynamic neural fields model [1,23,7] to the multimodal interaction of sensory cues obtained from a mobile robot, and shows the impact of different temporal aspects of the integration on the precision of movements. We speculate that temporally uncoordinated sensory integration might be

  18. Dynamics of neural networks with continuous attractors

    Science.gov (United States)

    Fung, C. C. Alan; Wong, K. Y. Michael; Wu, Si

    2008-10-01

    We investigate the dynamics of continuous attractor neural networks (CANNs). Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of stationary states. We systematically explore how their neutral stability facilitates the tracking performance of a CANN, which is believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network stationary states in the state space. We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable, and the reaction time to catch up with an abrupt change in stimulus.

  19. Stochastic integer programming by dynamic programming

    NARCIS (Netherlands)

    Lageweg, B.J.; Lenstra, J.K.; Rinnooy Kan, A.H.G.; Stougie, L.; Ermoliev, Yu.; Wets, R.J.B.

    1988-01-01

    Stochastic integer programming is a suitable tool for modeling hierarchical decision situations with combinatorial features. In continuation of our work on the design and analysis of heuristics for such problems, we now try to find optimal solutions. Dynamic programming techniques can be used to

  20. Coordination: Neural, Behavioral and Social Dynamics

    CERN Document Server

    Fuchs, Armin

    2008-01-01

    One of the most striking features of Coordination Dynamics is its interdisciplinary character. The problems we are trying to solve in this field range from behavioral phenomena of interlimb coordination and coordination between stimuli and movements (perception-action tasks), through neural activation patterns that can be observed during these tasks, to clinical applications and social behavior. It is not surprising that close collaboration among scientists from fields as different as psychology, kinesiology, neurology and even physics is imperative to deal with the enormous difficulties we face when we try to understand a system as complex as the human brain. The chapters in this volume are not simply write-ups of the lectures given by the experts at the meeting but are written in a way that gives sufficient introductory information to be comprehensible and useful for all interested scientists and students.

  1. Pseudo dynamic transitional modeling of building heating energy demand using artificial neural network

    NARCIS (Netherlands)

    Paudel, S.; Elmtiri, M.; Kling, W.L.; Corre, le O.; Lacarriere, B.

    2014-01-01

    This paper presents a building heating demand prediction model with occupancy profile and operational heating power level characteristics over a short time horizon (a couple of days) using an artificial neural network. In addition, a novel pseudo dynamic transitional model is introduced, which consider

  2. A novel neural network for multi project programming with limited resources

    International Nuclear Information System (INIS)

    Liping, Z.; Jianhua, W.; Fenfang, Z.; Guojian, H.

    1996-01-01

    This paper discusses the theory of multi-project programming and how to use an Artificial Neural Network model to solve this problem. To obtain a global optimum solution, simulated annealing is used in our scheme. To improve the convergence of the argument matrix during optimization of the target function, the Lagrange operator is replaced with the inverse of temperature in simulated annealing. Combined with the Hopfield network algorithm, this problem is solved speedily and satisfactorily. Experimental results show it is very effective to use an Artificial Neural Network to solve the problem.
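The record above couples a Hopfield network with simulated annealing, with the inverse temperature standing in for the Lagrange multiplier. A generic sketch of simulated annealing on a toy problem (not the paper's Hopfield formulation; all parameter values are illustrative):

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t_start=1.0, t_end=1e-3,
                        cooling=0.95, steps_per_temp=20, seed=0):
    """Minimise `energy` by simulated annealing.

    Downhill moves are always accepted; uphill moves are accepted
    with probability exp(-delta / T). The inverse temperature 1/T
    thus acts like a constraint-penalty weight that tightens as the
    schedule cools, which is the substitution described above.
    """
    rng = random.Random(seed)
    x, t = x0, t_start
    best, best_e = x, energy(x)
    while t > t_end:
        for _ in range(steps_per_temp):
            cand = neighbor(x, rng)
            delta = energy(cand) - energy(x)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                x = cand
                if energy(x) < best_e:
                    best, best_e = x, energy(x)
        t *= cooling  # geometric cooling schedule
    return best

# Toy problem: minimise (x - 2)^2 starting far from the optimum.
result = simulated_annealing(lambda x: (x - 2.0) ** 2,
                             lambda x, rng: x + rng.uniform(-0.5, 0.5),
                             x0=10.0)
```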

  3. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    Science.gov (United States)

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.

  4. MyT1 Counteracts the Neural Progenitor Program to Promote Vertebrate Neurogenesis

    Directory of Open Access Journals (Sweden)

    Francisca F. Vasconcelos

    2016-10-01

    Full Text Available The generation of neurons from neural stem cells requires large-scale changes in gene expression that are controlled to a large extent by proneural transcription factors, such as Ascl1. While recent studies have characterized the differentiation genes activated by proneural factors, less is known about the mechanisms that suppress progenitor cell identity. Here, we show that Ascl1 induces the transcription factor MyT1 while promoting neuronal differentiation. We combined functional studies of MyT1 during neurogenesis with the characterization of its transcriptional program. MyT1 binding is associated with repression of gene transcription in neural progenitor cells. It promotes neuronal differentiation by counteracting the inhibitory activity of Notch signaling at multiple levels, targeting the Notch1 receptor and many of its downstream targets. These include regulators of the neural progenitor program, such as Hes1, Sox2, Id3, and Olig1. Thus, Ascl1 suppresses Notch signaling cell-autonomously via MyT1, coupling neuronal differentiation with repression of the progenitor fate.

  5. Neural Dynamics and Information Representation in Microcircuits of Motor Cortex

    Directory of Open Access Journals (Sweden)

    Yasuhiro Tsubo

    2013-05-01

    Full Text Available The brain has to analyze and respond to external events that can change rapidly from time to time, suggesting that information processing by the brain may be essentially dynamic rather than static. The dynamical features of neural computation are of significant importance in the motor cortex, which governs the process of movement generation and learning. In this paper, we discuss these features based primarily on our recent findings on neural dynamics and information coding in the microcircuit of rat motor cortex. In fact, cortical neurons show a variety of dynamical behavior from rhythmic activity in various frequency bands to highly irregular spike firing. Of particular interest are the similarity and dissimilarity of the neuronal response properties in different layers of motor cortex. By conducting electrophysiological recordings in slice preparation, we report the phase response curves of neurons in different cortical layers to demonstrate their layer-dependent synchronization properties. We then study how motor cortex recruits task-related neurons in different layers for voluntary arm movements by simultaneous juxtacellular and multiunit recordings from behaving rats. The results suggest an interesting difference in the spectrum of functional activity between the superficial and deep layers. Furthermore, the task-related activities recorded from various layers exhibited power law distributions of inter-spike intervals (ISIs), in contrast to a general belief that ISIs obey Poisson or Gamma distributions in cortical neurons. We present a theoretical argument that this power law in in vivo neurons may represent the maximization of the entropy of firing rate with limited energy consumption of spike generation. Though further studies are required to fully clarify the functional implications of this coding principle, it may shed new light on information representations by neurons and circuits in motor cortex.

  6. Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.

    Science.gov (United States)

    Aftab, Muhammad Saleheen; Shafiq, Muhammad

    2015-11-01

    This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. The controller and estimator error convergence and closed-loop system stability analysis is performed by Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Optimization of matrix tablets controlled drug release using Elman dynamic neural networks and decision trees.

    Science.gov (United States)

    Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica

    2012-05-30

    The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using formulation composition, the compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling of drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablet formulations indicate that Elman dynamic neural networks as well as decision trees are capable of accurate predictions of both hydrophilic and lipid matrix tablet dissolution profiles. Elman neural networks were compared to the most frequently used static network, the multi-layered perceptron, and the superiority of Elman networks has been demonstrated. The developed methods allow a simple, yet very precise way of predicting drug release for both hydrophilic and lipid matrix tablets having controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Optimal Control of Complex Systems Based on Improved Dual Heuristic Dynamic Programming Algorithm

    Directory of Open Access Journals (Sweden)

    Hui Li

    2017-01-01

    Full Text Available When applied to the data modeling and optimal control problems of complex systems, the dual heuristic dynamic programming (DHP) technique based on the BP neural network algorithm (BP-DHP) suffers from limited prediction accuracy, slow convergence, poor stability, and so forth. In this paper, a DHP technique based on the Extreme Learning Machine (ELM) algorithm (ELM-DHP) was proposed. By constructing three kinds of network structures, the paper gives the detailed realization process of the DHP technique in the ELM. A controller designed upon the ELM-DHP algorithm controlled a molecular distillation system with complex features, such as multivariability, strong coupling, and nonlinearity. Finally, the effectiveness of the algorithm is verified by a simulation that compares DHP and HDP algorithms based on ELM and BP neural networks. The algorithm can also be applied to solve the data modeling and optimal control problems of similar complex systems.

  9. Neural Progenitors Adopt Specific Identities by Directly Repressing All Alternative Progenitor Transcriptional Programs.

    Science.gov (United States)

    Kutejova, Eva; Sasai, Noriaki; Shah, Ankita; Gouti, Mina; Briscoe, James

    2016-03-21

    In the vertebrate neural tube, a morphogen-induced transcriptional network produces multiple molecularly distinct progenitor domains, each generating different neuronal subtypes. Using an in vitro differentiation system, we defined gene expression signatures of distinct progenitor populations and identified direct gene-regulatory inputs corresponding to locations of specific transcription factor binding. Combined with targeted perturbations of the network, this revealed a mechanism in which a progenitor identity is installed by active repression of the entire transcriptional programs of other neural progenitor fates. In the ventral neural tube, sonic hedgehog (Shh) signaling, together with broadly expressed transcriptional activators, concurrently activates the gene expression programs of several domains. The specific outcome is selected by repressive input provided by Shh-induced transcription factors that act as the key nodes in the network, enabling progenitors to adopt a single definitive identity from several initially permitted options. Together, the data suggest design principles relevant to many developing tissues. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning.

    Science.gov (United States)

    Eser, Jürgen; Zheng, Pengsheng; Triesch, Jochen

    2014-01-01

    Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during its self-organization, developing into a regime where only few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
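The chaoticity measure in the record above, an estimate of the maximum Lyapunov exponent, can be obtained by tracking the growth of a small perturbation and renormalising it at every step. A minimal sketch of the estimator on a one-dimensional map (the SORN itself is far higher-dimensional; this only illustrates the perturbation-growth idea, with illustrative parameters):

```python
import math

def lyapunov_estimate(f, x0, n_steps=2000, eps=1e-8):
    """Estimate the maximum Lyapunov exponent of a 1-D map.

    Evolve a reference and a perturbed trajectory, log the growth of
    their separation at each step, then renormalise the perturbation
    back to size eps so it stays in the linear regime.
    """
    x, y = x0, x0 + eps
    total = 0.0
    for _ in range(n_steps):
        x, y = f(x), f(y)
        d = max(abs(y - x), 1e-15)           # avoid log(0)
        total += math.log(d / eps)
        y = x + eps * (1 if y >= x else -1)  # renormalise separation
    return total / n_steps

logistic_chaotic = lambda x: 4.0 * x * (1.0 - x)  # chaotic regime
logistic_stable = lambda x: 2.5 * x * (1.0 - x)   # settles to a fixed point

lam_chaotic = lyapunov_estimate(logistic_chaotic, 0.3)  # positive
lam_stable = lyapunov_estimate(logistic_stable, 0.3)    # negative
```

A positive exponent (separations amplified) indicates chaos; the "chaos waning" of the record corresponds to this estimate decreasing during self-organization.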

  11. Identification and prediction of dynamic systems using an interactively recurrent self-evolving fuzzy neural network.

    Science.gov (United States)

    Lin, Yang-Yin; Chang, Jyh-Yeong; Lin, Chin-Teng

    2013-02-01

    This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed by external loops and internal feedback, feeding the rule firing strength of each rule to the other rules and to itself. The consequent part in the IRSFNN is of a Takagi-Sugeno-Kang (TSK) or functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of the fuzzy rules to improve the mapping ability. Unlike in a TSK-type fuzzy neural network, the FLNN in the consequent part is a nonlinear function of the input variables. An IRSFNN's learning starts with an empty rule base, and all of the rules are generated and learned online through simultaneous structure and parameter learning. An online clustering algorithm is effective in generating fuzzy rules. The consequent parameters are updated by a variable-dimensional Kalman filter algorithm, while the premise and recurrent parameters are learned through a gradient descent algorithm. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.

  12. The Temporal Derivative of Expected Utility: A Neural Mechanism for Dynamic Decision-making

    Science.gov (United States)

    Zhang, Xian; Hirsch, Joy

    2012-01-01

    Real world tasks involving moving targets, such as driving a vehicle, are performed based on continuous decisions thought to depend upon the temporal derivative of the expected utility (∂V/∂t), where the expected utility (V) is the effective value of a future reward. However, the neural mechanisms that underlie dynamic decision-making are not well understood. This study investigates human neural correlates of both V and ∂V/∂t using fMRI and a novel experimental paradigm based on a pursuit-evasion game optimized to isolate components of dynamic decision processes. Our behavioral data show that players of the pursuit-evasion game adopt an exponential discounting function, supporting expected utility theory. The continuous functions of V and ∂V/∂t were derived from the behavioral data and applied as regressors in the fMRI analysis, enabling temporal resolution that exceeded the sampling rate of image acquisition (hyper-temporal resolution) by taking advantage of numerous trials that provided rich and independent manipulation of those variables. V and ∂V/∂t were each associated with distinct neural activity. Specifically, ∂V/∂t was associated with anterior and posterior cingulate cortices, superior parietal lobule, and ventral pallidum, whereas V was primarily associated with supplementary motor, pre- and postcentral gyri, cerebellum, and thalamus. The association between ∂V/∂t and brain regions previously related to decision-making is consistent with a primary role of the temporal derivative of expected utility in dynamic decision-making. PMID:22963852

  13. The temporal derivative of expected utility: a neural mechanism for dynamic decision-making.

    Science.gov (United States)

    Zhang, Xian; Hirsch, Joy

    2013-01-15

    Real world tasks involving moving targets, such as driving a vehicle, are performed based on continuous decisions thought to depend upon the temporal derivative of the expected utility (∂V/∂t), where the expected utility (V) is the effective value of a future reward. However, the neural mechanisms that underlie dynamic decision-making are not well understood. This study investigates human neural correlates of both V and ∂V/∂t using fMRI and a novel experimental paradigm based on a pursuit-evasion game optimized to isolate components of dynamic decision processes. Our behavioral data show that players of the pursuit-evasion game adopt an exponential discounting function, supporting expected utility theory. The continuous functions of V and ∂V/∂t were derived from the behavioral data and applied as regressors in the fMRI analysis, enabling temporal resolution that exceeded the sampling rate of image acquisition (hyper-temporal resolution) by taking advantage of numerous trials that provided rich and independent manipulation of those variables. V and ∂V/∂t were each associated with distinct neural activity. Specifically, ∂V/∂t was associated with anterior and posterior cingulate cortices, superior parietal lobule, and ventral pallidum, whereas V was primarily associated with supplementary motor, pre- and postcentral gyri, cerebellum, and thalamus. The association between ∂V/∂t and brain regions previously related to decision-making is consistent with a primary role of the temporal derivative of expected utility in dynamic decision-making. Copyright © 2012 Elsevier Inc. All rights reserved.
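
The regressor construction described in the abstract can be illustrated with a small, hypothetical sketch: under exponential discounting, the expected utility of a reward R obtainable at time t_reward is V(t) = R·exp(−k·(t_reward − t)), so its temporal derivative satisfies ∂V/∂t = k·V. The values of R, k, and the time grid below are assumptions for illustration, not the study's parameters.

```python
import numpy as np

R, k, t_reward = 1.0, 0.5, 10.0          # assumed reward size, discount rate, reward time
t = np.linspace(0.0, 10.0, 1001)         # trial time grid (arbitrary units)

V = R * np.exp(-k * (t_reward - t))      # expected utility of the future reward
dVdt = np.gradient(V, t)                 # temporal derivative, the second regressor

# Under exponential discounting the two regressors are proportional,
# dV/dt = k * V, so they are correlated but differ in scale.
```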

  14. Programmed Cell Death and Caspase Functions During Neural Development.

    Science.gov (United States)

    Yamaguchi, Yoshifumi; Miura, Masayuki

    2015-01-01

    Programmed cell death (PCD) is a fundamental component of nervous system development. PCD serves as the mechanism for quantitative matching of the number of projecting neurons and their target cells through direct competition for neurotrophic factors in the vertebrate peripheral nervous system. In addition, PCD plays roles in regulating neural cell numbers, canceling developmental errors or noise, and tissue remodeling processes. These findings are mainly derived from genetic studies that prevent cells from dying by apoptosis, which is a major form of PCD and is executed by activation of evolutionarily conserved cysteine protease caspases. Recent studies suggest that caspase activation can be coordinated in time and space at multiple levels, which might underlie nonapoptotic roles of caspases in neural development in addition to apoptotic roles. © 2015 Elsevier Inc. All rights reserved.

  15. Dynamically Partitionable Autoassociative Networks as a Solution to the Neural Binding Problem

    Directory of Open Access Journals (Sweden)

    Kenneth Jeffrey Hayworth

    2012-09-01

    Full Text Available An outstanding question in theoretical neuroscience is how the brain solves the neural binding problem. In vision, binding can be summarized as the ability to represent that certain properties belong to one object while other properties belong to a different object. I review the binding problem in visual and other domains, and review its simplest proposed solution – the anatomical binding hypothesis. This hypothesis has traditionally been rejected as a true solution because it seems to require a type of one-to-one wiring of neurons that would be impossible in a biological system (as opposed to an engineered system like a computer). I show that this requirement for one-to-one wiring can be loosened by carefully considering how the neural representation is actually put to use by the rest of the brain. This leads to a solution where a symbol is represented not as a particular pattern of neural activation but instead as a piece of a global stable attractor state. I introduce the Dynamically Partitionable AutoAssociative Network (DPAAN) as an implementation of this solution and show how DPAANs can be used in systems which perform perceptual binding and in systems that implement syntax-sensitive rules. Finally I show how the core parts of the cognitive architecture ACT-R can be neurally implemented using a DPAAN as ACT-R's global workspace. Because the DPAAN solution to the binding problem requires only 'flat' neural representations (as opposed to the phase-encoded representations hypothesized in neural synchrony solutions), it is directly compatible with the most well developed neural models of learning, memory, and pattern recognition.
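
The idea of a symbol living inside a global stable attractor state can be illustrated with a minimal autoassociative (Hopfield-style) network. This hedged sketch omits the DPAAN's partitioning machinery entirely and only shows attractor-based pattern completion; the patterns and sizes are illustrative.

```python
import numpy as np

# Two orthogonal +/-1 patterns stored with the standard Hebbian outer-product rule.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]], dtype=float)
N = patterns.shape[1]
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)                 # no self-connections

def recall(x, steps=10):
    """Iterate the network; the state settles into the nearest stored attractor."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0                  # break ties deterministically
    return x

probe = patterns[0].copy()
probe[0] *= -1                           # corrupt one unit of the stored pattern
restored = recall(probe)                 # attractor dynamics restore the full pattern
```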

  16. Dynamics of delay-coupled FitzHugh-Nagumo neural rings

    Science.gov (United States)

    Mao, Xiaochen; Sun, Jianqiao; Li, Shaofan

    2018-01-01

    This paper studies the dynamical behaviors of a pair of FitzHugh-Nagumo neural networks with bidirectional delayed couplings. It presents a detailed analysis of delay-independent and delay-dependent stabilities and the existence of bifurcated oscillations. Illustrative examples are performed to validate the analytical results and to discover interesting phenomena. It is shown that the network exhibits a variety of complicated activities, such as multiple stability switches, the coexistence of periodic and quasi-periodic oscillations, the coexistence of periodic and chaotic orbits, and the coexisting chaotic attractors.

  17. A Neural Network Model of the Structure and Dynamics of Human Personality

    Science.gov (United States)

    Read, Stephen J.; Monroe, Brian M.; Brownstein, Aaron L.; Yang, Yu; Chopra, Gurveen; Miller, Lynn C.

    2010-01-01

    We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two…

  18. Neural pathways in processing of sexual arousal: a dynamic causal modeling study.

    Science.gov (United States)

    Seok, J-W; Park, M-S; Sohn, J-H

    2016-09-01

    Three decades of research have investigated brain processing of visual sexual stimuli with neuroimaging methods. These studies have found that sexually arousing stimuli elicit activity in a broad neural network of cortical and subcortical brain areas known to be associated with cognitive, emotional, motivational, and physiological components. However, it is not completely understood how these neural systems integrate and modulate incoming information. Therefore, we identified cerebral areas whose activations were correlated with sexual arousal using event-related functional magnetic resonance imaging, and used dynamic causal modeling to search for the effective connectivity of the sexual-arousal processing network. Thirteen heterosexual males were scanned while they passively viewed alternating short trials of erotic and neutral pictures on a monitor. We created a subset of seven models based on our results and previous studies and selected a dominant connectivity model. Consequently, we suggest a dynamic causal model of the brain processes mediating the cognitive, emotional, motivational, and physiological factors of human male sexual arousal. These findings have significant implications for the neuropsychology of male sexuality.

  19. Diagonal recurrent neural network based adaptive control of nonlinear dynamical systems using Lyapunov stability criterion.

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P

    2017-03-01

    In this paper, adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as the controller. In example 4, the FCRNN is also investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, on which the proposed controller is applied. The results so obtained show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
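
The structural difference between a DRNN and a fully connected recurrent layer can be sketched as follows: each hidden neuron receives feedback only from itself, so the recurrent weights form a diagonal (here, a vector d) rather than a full matrix. All weights and sizes below are illustrative assumptions, not the paper's trained controller.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 2, 5                       # assumed layer sizes
W_in = rng.normal(size=(n_hid, n_in))    # input weights
d = rng.uniform(-0.5, 0.5, size=n_hid)   # diagonal (self-recurrent) weights
w_out = rng.normal(size=n_hid)           # output weights

def drnn_forward(inputs):
    """Run the DRNN over a sequence: h_i(t) = tanh((W_in u(t))_i + d_i * h_i(t-1))."""
    h = np.zeros(n_hid)
    outputs = []
    for u in inputs:
        h = np.tanh(W_in @ u + d * h)    # elementwise self-feedback only
        outputs.append(w_out @ h)
    return np.array(outputs)

y = drnn_forward([np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])])
```

The self-feedback term gives the hidden units internal memory of the plant's state history while keeping far fewer recurrent parameters than an FCRNN.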

  20. Predicting the topology of dynamic neural networks for the simulation of electronic circuits

    NARCIS (Netherlands)

    Schilders, W.H.A.

    2009-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform

  1. Dynamical Behaviors of Stochastic Reaction-Diffusion Cohen-Grossberg Neural Networks with Delays

    Directory of Open Access Journals (Sweden)

    Li Wan

    2012-01-01

    Full Text Available This paper investigates dynamical behaviors of stochastic Cohen-Grossberg neural networks with delays and reaction diffusion. By employing the Lyapunov method, the Poincaré inequality, and matrix techniques, some sufficient criteria on ultimate boundedness, weak attractors, and asymptotic stability are obtained. Finally, a numerical example is given to illustrate the correctness and effectiveness of our theoretical results.

  2. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It is therefore natural to ask how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its application to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order stability and fractional-order sensitivity characteristics.
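
A common discrete approximation behind fractional-order updates of the kind mentioned above is the Grünwald–Letnikov derivative. The following hedged sketch (not the paper's analog-circuit fractor) computes it on a uniform grid; at α = 1 it reduces to the ordinary first difference, which is a convenient sanity check.

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grünwald–Letnikov approximation of D^alpha f on a uniform grid with step h."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                # recurrence for the GL binomial weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for i in range(n):                   # discrete convolution with all past samples
        out[i] = np.dot(w[:i + 1], f[i::-1]) / h ** alpha
    return out

t = np.linspace(0.0, 1.0, 101)
d1 = gl_fractional_derivative(t, 1.0, t[1] - t[0])      # alpha = 1: slope of f(t) = t
d_half = gl_fractional_derivative(t, 0.5, t[1] - t[0])  # a half-order derivative
```

The nonlocal sum over all past samples is what gives fractional-order dynamics the long-term memory the abstract refers to.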

  3. Integrating the behavioral and neural dynamics of response selection in a dual-task paradigm: a dynamic neural field model of Dux et al. (2009).

    Science.gov (United States)

    Buss, Aaron T; Wifall, Tim; Hazeltine, Eliot; Spencer, John P

    2014-02-01

    People are typically slower when executing two tasks than when only performing a single task. These dual-task costs are initially robust but are reduced with practice. Dux et al. (2009) explored the neural basis of dual-task costs and learning using fMRI. Inferior frontal junction (IFJ) showed a larger hemodynamic response on dual-task trials compared with single-task trials early in learning. As dual-task costs were eliminated, dual-task hemodynamics in IFJ reduced to single-task levels. Dux and colleagues concluded that the reduction of dual-task costs is accomplished through increased efficiency of information processing in IFJ. We present a dynamic field theory of response selection that addresses two questions regarding these results. First, what mechanism leads to the reduction of dual-task costs and associated changes in hemodynamics? We show that a simple Hebbian learning mechanism is able to capture the quantitative details of learning at both the behavioral and neural levels. Second, is efficiency isolated to cognitive control areas such as IFJ, or is it also evident in sensory motor areas? To investigate this, we restrict Hebbian learning to different parts of the neural model. None of the restricted learning models showed the same reductions in dual-task costs as the unrestricted learning model, suggesting that efficiency is distributed across cognitive control and sensory motor processing systems.
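
A minimal illustration of the kind of Hebbian mechanism invoked above: repeated co-activation of a stimulus unit and a response unit strengthens their connection, so the practiced mapping comes to dominate response selection. The network size and learning rate are assumptions for illustration, not the model's actual parameters.

```python
import numpy as np

n_in, n_out = 4, 4                       # assumed numbers of stimulus and response units
eta = 0.1                                # assumed learning rate
W = np.zeros((n_out, n_in))              # stimulus-to-response weights

def trial(W, stimulus_idx, response_idx, eta=eta):
    """One practice trial: Hebbian co-activity strengthens the mapping in place."""
    x = np.zeros(n_in)
    x[stimulus_idx] = 1.0
    y = np.zeros(n_out)
    y[response_idx] = 1.0
    W += eta * np.outer(y, x)            # Hebb rule: dW = eta * y x^T

for _ in range(20):                      # repeated practice on one stimulus-response pair
    trial(W, stimulus_idx=2, response_idx=1)

probe = np.zeros(n_in)
probe[2] = 1.0
winner = int(np.argmax(W @ probe))       # the practiced response dominates selection
```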

  4. A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning.

    Science.gov (United States)

    Kappel, David; Legenstein, Robert; Habenschuss, Stefan; Hsieh, Michael; Maass, Wolfgang

    2018-01-01

    Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous, synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance is reached, these processes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.

  5. Stochastic Nonlinear Evolutional Model of the Large-Scaled Neuronal Population and Dynamic Neural Coding Subject to Stimulation

    International Nuclear Information System (INIS)

    Wang Rubin; Yu Wei

    2005-01-01

    In this paper, we investigate how a population of neuronal oscillators deals with information and how neural coding dynamically evolves when external stimulation acts on it. A numerical computing method is used to describe the evolution of neural coding in three-dimensional space. The numerical results show that only suitable stimulation can change the coupling structure and plasticity of the neurons.

  6. Cell dynamic morphology classification using deep convolutional neural networks.

    Science.gov (United States)

    Li, Heng; Pang, Fengqian; Shi, Yonggang; Liu, Zhiwen

    2018-05-15

    Cell morphology is often used as a proxy measurement of cell status to understand cell physiology. Hence, interpretation of cell dynamic morphology is a meaningful task in biomedical research. Inspired by the recent success of deep learning, we here explore the application of convolutional neural networks (CNNs) to cell dynamic morphology classification, and introduce an innovative strategy for their implementation. Mouse lymphocytes were collected to observe their dynamic morphology, and two datasets were set up to investigate the performance of CNNs. To make the problem tractable for deep learning, the classification task was simplified from video data to image data and then solved by CNNs in a self-taught manner on the generated image data. CNNs were separately evaluated in three implementation scenarios and compared with existing methods. Experimental results demonstrated the potential of CNNs in cell dynamic morphology classification and validated the effectiveness of the proposed strategy: CNNs outperformed the existing methods in classification accuracy, and transfer learning proved to be a promising implementation scheme. © 2018 International Society for Advancement of Cytometry.

  7. Differential Neural Networks for Identification and Filtering in Nonlinear Dynamic Games

    Directory of Open Access Journals (Sweden)

    Emmanuel García

    2014-01-01

    Full Text Available This paper deals with the problem of identifying and filtering a class of continuous-time nonlinear dynamic games (nonlinear differential games) subject to additive and undesired deterministic perturbations. Moreover, the mathematical model of this class is completely unknown with the exception of the control actions of each player, and even though the deterministic noises are known, their power (or their effect) is not. Therefore, two differential neural networks are designed in order to obtain a feedback (perfect state information) pattern for the mentioned class of games. In this way, the stability conditions for two state identification errors and for a filtering error are established, the upper bounds of these errors are obtained, and two new learning laws for each neural network are suggested. Finally, an illustrating example shows the applicability of this approach.

  8. A modified dynamic evolving neural-fuzzy approach to modeling customer satisfaction for affective design.

    Science.gov (United States)

    Kwong, C K; Fung, K Y; Jiang, Huimin; Chan, K Y; Siu, Kin Wai Michael

    2013-01-01

    Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has recently been attempted to model customer satisfaction for affective design, and it has proved effective in dealing with the fuzziness and non-linearity of the modeling as well as in generating explicit customer satisfaction models. However, such an approach has two limitations. First, it is not suitable for modeling problems that involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address these limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to the large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive-clustering-based ANFIS model and the fuzzy c-means-clustering-based ANFIS model in terms of modeling accuracy and computational effort.

  9. A Modified Dynamic Evolving Neural-Fuzzy Approach to Modeling Customer Satisfaction for Affective Design

    Directory of Open Access Journals (Sweden)

    C. K. Kwong

    2013-01-01

    Full Text Available Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has recently been attempted to model customer satisfaction for affective design, and it has proved effective in dealing with the fuzziness and non-linearity of the modeling as well as in generating explicit customer satisfaction models. However, such an approach has two limitations. First, it is not suitable for modeling problems that involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address these limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to the large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive-clustering-based ANFIS model and the fuzzy c-means-clustering-based ANFIS model in terms of modeling accuracy and computational effort.

  10. Optimal system size for complex dynamics in random neural networks near criticality

    Energy Technology Data Exchange (ETDEWEB)

    Wainrib, Gilles, E-mail: wainrib@math.univ-paris13.fr [Laboratoire Analyse Géométrie et Applications, Université Paris XIII, Villetaneuse (France); García del Molino, Luis Carlos, E-mail: garciadelmolino@ijm.univ-paris-diderot.fr [Institute Jacques Monod, Université Paris VII, Paris (France)

    2013-12-15

    In this article, we consider a model of dynamical agents coupled through a random connectivity matrix, as introduced by Sompolinsky et al. [Phys. Rev. Lett. 61(3), 259–262 (1988)] in the context of random neural networks. When system size is infinite, it is known that increasing the disorder parameter induces a phase transition leading to chaotic dynamics. We observe and investigate here a novel phenomenon in the sub-critical regime for finite size systems: the probability of observing complex dynamics is maximal for an intermediate system size when the disorder is close enough to criticality. We give a more general explanation of this type of system size resonance in the framework of extreme values theory for eigenvalues of random matrices.
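
The critical point referred to above can be checked numerically: for an N×N matrix with i.i.d. Gaussian entries of variance g²/N, the circular law puts the spectral radius near g, so g = 1 separates linearly stable from unstable (and potentially chaotic) dynamics. The values of N and g below are illustrative, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(42)
N, g = 400, 0.9                          # sub-critical disorder (g < 1); assumed values
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

# Spectral radius: the largest eigenvalue modulus of the connectivity matrix.
radius = np.max(np.abs(np.linalg.eigvals(J)))
# For large N the circular law gives radius ≈ g; at finite N it fluctuates around
# that value, which is exactly the finite-size regime the abstract studies.
```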

  11. Optimal system size for complex dynamics in random neural networks near criticality

    International Nuclear Information System (INIS)

    Wainrib, Gilles; García del Molino, Luis Carlos

    2013-01-01

    In this article, we consider a model of dynamical agents coupled through a random connectivity matrix, as introduced by Sompolinsky et al. [Phys. Rev. Lett. 61(3), 259–262 (1988)] in the context of random neural networks. When system size is infinite, it is known that increasing the disorder parameter induces a phase transition leading to chaotic dynamics. We observe and investigate here a novel phenomenon in the sub-critical regime for finite size systems: the probability of observing complex dynamics is maximal for an intermediate system size when the disorder is close enough to criticality. We give a more general explanation of this type of system size resonance in the framework of extreme values theory for eigenvalues of random matrices.

  12. SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

    OpenAIRE

    Wang, Linnan; Ye, Jinmian; Zhao, Yiyang; Wu, Wei; Li, Ang; Song, Shuaiwen Leon; Xu, Zenglin; Kraska, Tim

    2018-01-01

    Going deeper and wider in neural architectures improves accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures, or nontrivially dissect a network across multiple GPUs. These distractions keep DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable the network training far be...

  13. Neural network approach to time-dependent dividing surfaces in classical reaction dynamics

    Science.gov (United States)

    Schraft, Philippe; Junginger, Andrej; Feldmaier, Matthias; Bardakcioglu, Robin; Main, Jörg; Wunner, Günter; Hernandez, Rigoberto

    2018-04-01

    In a dynamical system, the transition between reactants and products is typically mediated by an energy barrier whose properties determine the corresponding pathways and rates. The latter is the flux through a dividing surface (DS) between the two corresponding regions, and it is exact only if it is free of recrossings. For time-independent barriers, the DS can be attached to the top of the corresponding saddle point of the potential energy surface, and in time-dependent systems, the DS is a moving object. The precise determination of these direct reaction rates, e.g., using transition state theory, requires the actual construction of a DS for a given saddle geometry, which is in general a demanding methodical and computational task, especially in high-dimensional systems. In this paper, we demonstrate how such time-dependent, global, and recrossing-free DSs can be constructed using neural networks. In our approach, the neural network uses the bath coordinates and time as input, and it is trained in a way that its output provides the position of the DS along the reaction coordinate. An advantage of this procedure is that, once the neural network is trained, the complete information about the dynamical phase space separation is stored in the network's parameters, and a precise distinction between reactants and products can be made for all possible system configurations, all times, and with little computational effort. We demonstrate this general method for two- and three-dimensional systems and explain its straightforward extension to even more degrees of freedom.

  14. Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.

    Science.gov (United States)

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    This paper develops a near-optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDEs) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is then proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for the near-optimal RBN weights is designed such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified using Lyapunov theory. The performance of the proposed controller is confirmed by simulation on an unstable diffusion-reaction process.

  15. Adaptive Neural Output-Feedback Control for a Class of Nonlower Triangular Nonlinear Systems With Unmodeled Dynamics.

    Science.gov (United States)

    Wang, Huanqing; Liu, Peter Xiaoping; Li, Shuai; Wang, Ding

    2017-08-29

    This paper presents the development of an adaptive neural controller for a class of nonlinear systems with unmodeled dynamics and immeasurable states. An observer is designed to estimate the system states. The structural consistency of virtual control signals and the variable-partition technique are combined to overcome the difficulties arising in the nonlower triangular form. An adaptive neural output-feedback controller is developed based on the backstepping technique and the universal approximation property of radial basis function (RBF) neural networks. By using Lyapunov stability analysis, the semiglobal uniform ultimate boundedness of all signals within the closed-loop system is guaranteed. The simulation results show that the controlled system converges quickly and that all the signals are bounded. This paper is novel in at least two aspects: 1) an output-feedback control strategy is developed for a class of nonlower triangular nonlinear systems with unmodeled dynamics, and 2) the nonlinear disturbances and their bounds are functions of all states, which is a more general form than in existing results.

  16. Dynamic Neural Fields as a Step Towards Cognitive Neuromorphic Architectures

    Directory of Open Access Journals (Sweden)

    Yulia Sandamirskaya

    2014-01-01

    Full Text Available Dynamic Field Theory (DFT) is an established framework for modelling embodied cognition. In DFT, elementary cognitive functions such as memory formation, formation of grounded representations, attentional processes, decision making, adaptation, and learning emerge from neuronal dynamics. The basic computational element of this framework is a Dynamic Neural Field (DNF). Under constraints on the time-scale of the dynamics, the DNF is computationally equivalent to a soft winner-take-all (WTA) network, which is considered one of the basic computational units in neuronal processing. Recently, it has been shown how a WTA network may be implemented in neuromorphic hardware, such as an analogue Very Large Scale Integration (VLSI) device. This paper leverages the relationship between DFT and soft WTA networks to systematically revise and integrate established DFT mechanisms that have previously been spread among different architectures. In addition, I identify some novel computational and architectural mechanisms of DFT which may be implemented in neuromorphic VLSI devices using WTA networks as an intermediate computational layer. These specific mechanisms include the stabilization of working memory, the coupling of sensory systems to motor dynamics, intentionality, and autonomous learning. I further demonstrate how all these elements may be integrated into a unified architecture to generate behavior and autonomous learning.
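
    As an illustrative aside (not code from the paper), the soft-WTA selection behaviour of a DNF under a local-excitation/global-inhibition kernel can be sketched in a few lines; all parameter values below are arbitrary choices:

```python
import numpy as np

# 1-D Amari-style field with local excitation and global inhibition.
# Two Gaussian inputs compete; the field settles on the stronger one.
n = 101
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]
h = -5.0                                   # resting level
u = np.full(n, h)

def f(v):                                  # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-v))

# interaction kernel: Gaussian excitation minus constant (global) inhibition
w = 4.0 * np.exp(-0.5 * (x[:, None] - x[None, :])**2) - 1.0
inp = 6.0 * np.exp(-0.5 * (x + 4.0)**2) + 5.5 * np.exp(-0.5 * (x - 4.0)**2)

tau, dt = 10.0, 1.0
for _ in range(500):                       # Euler integration of the field
    u += (dt / tau) * (-u + h + w @ f(u) * dx + inp)

peak_x = x[np.argmax(u)]
print(peak_x, f(u).max())  # activation peak sits at the stronger input
```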

  17. Synchronization of cellular neural networks of neutral type via dynamic feedback controller

    International Nuclear Information System (INIS)

    Park, Ju H.

    2009-01-01

    In this paper, we aim to study global synchronization for neural networks with neutral delay. A dynamic feedback control scheme is proposed to achieve synchronization between the drive network and the response network. By utilizing the Lyapunov function and linear matrix inequalities (LMIs), we derive a simple and efficient criterion for synchronization in terms of LMIs. The feedback controllers can be easily obtained by solving the derived LMIs.

  18. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  19. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm

    NARCIS (Netherlands)

    Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.

    2008-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform

  20. Dynamic cultural influences on neural representations of the self.

    Science.gov (United States)

    Chiao, Joan Y; Harada, Tokiko; Komeda, Hidetsugu; Li, Zhang; Mano, Yoko; Saito, Daisuke; Parrish, Todd B; Sadato, Norihiro; Iidaka, Tetsuya

    2010-01-01

    People living in multicultural environments often encounter situations which require them to acquire different cultural schemas and to switch between these cultural schemas depending on their immediate sociocultural context. Prior behavioral studies show that priming cultural schemas reliably impacts mental processes and behavior underlying self-concept. However, less well understood is whether or not cultural priming affects neurobiological mechanisms underlying the self. Here we examined whether priming cultural values of individualism and collectivism in bicultural individuals affects neural activity in cortical midline structures underlying self-relevant processes using functional magnetic resonance imaging. Biculturals primed with individualistic values showed increased activation within medial prefrontal cortex (MPFC) and posterior cingulate cortex (PCC) during general relative to contextual self-judgments, whereas biculturals primed with collectivistic values showed increased response within MPFC and PCC during contextual relative to general self-judgments. Moreover, degree of cultural priming was positively correlated with degree of MPFC and PCC activity during culturally congruent self-judgments. These findings illustrate the dynamic influence of culture on neural representations underlying the self and, more broadly, suggest a neurobiological basis by which people acculturate to novel environments.

  1. A Neural Network Model to Learn Multiple Tasks under Dynamic Environments

    Science.gov (United States)

    Tsumori, Kenji; Ozawa, Seiichi

    When environments change dynamically for agents, the knowledge acquired in one environment might become useless in the future. In such dynamic environments, agents should be able not only to acquire new knowledge but also to modify old knowledge through learning. However, modifying all previously acquired knowledge is not efficient, because knowledge once acquired may be useful again when a similar environment reappears, and some knowledge can be shared among different environments. To learn efficiently in such environments, we propose a neural network model that consists of the following modules: a resource allocating network, long-term & short-term memory, and an environment change detector. We evaluate the model under a class of dynamic environments where multiple function approximation tasks are given sequentially. The experimental results demonstrate that the proposed model possesses stable incremental learning, accurate environmental change detection, proper association and recall of old knowledge, and efficient knowledge transfer.

  2. Development of a New Aprepitant Liquisolid Formulation with the Aid of Artificial Neural Networks and Genetic Programming.

    Science.gov (United States)

    Barmpalexis, Panagiotis; Grypioti, Agni; Eleftheriadis, Georgios K; Fatouros, Dimitris G

    2018-02-01

    In the present study, liquisolid formulations were developed for improving dissolution profile of aprepitant (APT) in a solid dosage form. Experimental studies were complemented with artificial neural networks and genetic programming. Specifically, the type and concentration of liquid vehicle was evaluated through saturation-solubility studies, while the effect of the amount of viscosity increasing agent (HPMC), the type of wetting (Soluplus® vs. PVP) and solubilizing (Poloxamer®407 vs. Kolliphor®ELP) agents, and the ratio of solid coating (microcrystalline cellulose) to carrier (colloidal silicon dioxide) were evaluated based on in vitro drug release studies. The optimum liquisolid formulation exhibited improved dissolution characteristics compared to the marketed product Emend®. X-ray diffraction (XRD), scanning electron microscopy (SEM) and a novel method combining particle size analysis by dynamic light scattering (DLS) and HPLC, revealed that the increase in dissolution rate of APT in the optimum liquisolid formulation was due to the formation of stable APT nanocrystals. Differential scanning calorimetry (DSC) and attenuated total reflection FTIR spectroscopy (ATR-FTIR) revealed the presence of intermolecular interactions between APT and liquisolid formulation excipients. Multilinear regression analysis (MLR), artificial neural networks (ANNs), and genetic programming (GP) were used to correlate several formulation variables with dissolution profile parameters (Y15min and Y30min) using a full factorial experimental design. Results showed increased correlation efficacy for ANNs and GP (RMSE of 0.151 and 0.273, respectively) compared to MLR (RMSE = 0.413).

  3. Noninvasive fetal QRS detection using an echo state network and dynamic programming.

    Science.gov (United States)

    Lukoševičius, Mantas; Marozas, Vaidotas

    2014-08-01

    We address a classical fetal QRS detection problem from abdominal ECG recordings with a data-driven statistical machine learning approach. Our goal is to have a powerful, yet conceptually clean, solution. There are two novel key components at the heart of our approach: an echo state recurrent neural network that is trained to indicate fetal QRS complexes, and several increasingly sophisticated versions of statistics-based dynamic programming algorithms, which are derived from and rooted in probability theory. We also employ a standard technique for preprocessing and removing maternal ECG complexes from the signals, but do not take this as the main focus of this work. The proposed approach is quite generic and can be extended to other types of signals and annotations. Open-source code is provided.
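
    The echo state network at the core of the approach above can be sketched generically (this is not the authors' code; the task here is one-step prediction of a sine wave rather than QRS indication, and all sizes and scalings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1

# fixed random input and reservoir weights; only the readout is trained
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

u = np.sin(0.2 * np.arange(1000))[:, None]
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u)):
    x = np.tanh(W @ x + W_in @ u[t])             # leakless reservoir update
    states[t] = x

# ridge-regression readout: predict u[t+1] from the state at time t
washout = 100
S, y = states[washout:-1], u[washout + 1:, 0]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
err = np.sqrt(np.mean((S @ W_out - y)**2))
print(err)  # small one-step prediction error
```

    In the paper's pipeline, a readout of this kind emits a QRS-likelihood signal that the dynamic programming stage then decodes into beat positions.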

  4. AASERT: Dynamic Training of Humans and Tutoring Agents

    National Research Council Canada - National Science Library

    Pollack, Jordan B

    2001-01-01

    ... (neural networks, genetic programs, adaptive dynamical systems), we have focused on a framework for learning in which the environment automatically and incrementally becomes more challenging as the learner progresses...

  5. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    Science.gov (United States)

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
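
    The Jacobian-based view described above can be made concrete with a small numerical sketch (illustrative model and parameters, not the paper's exact system): the largest Lyapunov exponent of a random recurrent map is estimated from products of Jacobians, and switches sign as the gain crosses the edge of chaos.

```python
import numpy as np

# Largest Lyapunov exponent of x_{t+1} = tanh(g * W @ x) from Jacobian products.
rng = np.random.default_rng(1)
N = 200
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # random coupling, variance 1/N

def largest_lyapunov(g, T=1000, burn=200):
    x = rng.normal(0.0, 1.0, N)
    v = rng.normal(0.0, 1.0, N)
    v /= np.linalg.norm(v)
    for _ in range(burn):                        # relax onto the attractor
        x = np.tanh(g * W @ x)
    acc = 0.0
    for _ in range(T):
        J = g * (1.0 - np.tanh(g * W @ x)**2)[:, None] * W   # Jacobian at x
        v = J @ v
        nrm = np.linalg.norm(v)
        acc += np.log(nrm)                       # accumulate log growth
        v /= nrm
        x = np.tanh(g * W @ x)
    return acc / T

lam_stable, lam_chaotic = largest_lyapunov(0.5), largest_lyapunov(4.0)
print(lam_stable, lam_chaotic)  # negative at low gain, positive at high gain
```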

  6. An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An incremental time-delay neural network based on synapse growth, which is suitable for dynamic control and learning of autonomous robots, is proposed to improve the learning and retrieving performance of dynamical recurrent associative memory architecture. The model allows steady and continuous establishment of associative memory for spatio-temporal regularities and time series in discrete sequence of inputs. The inserted hidden units can be taken as the long-term memories that expand the capacity of network and sometimes may fade away under certain condition. Preliminary experiment has shown that this incremental network may be a promising approach to endow autonomous robots with the ability of adapting to new data without destroying the learned patterns. The system also benefits from its potential chaos character for emergence.

  7. Effects of neuronal loss in the dynamic model of neural networks

    International Nuclear Information System (INIS)

    Yoon, B-G; Choi, J; Choi, M Y

    2008-01-01

    We study the phase transitions and dynamic behavior of the dynamic model of neural networks, with an emphasis on the effects of neuronal loss due to external stress. In the absence of loss the overall results obtained numerically are found to agree excellently with the theoretical ones. When the external stress is turned on, some neurons may deteriorate and die; such loss of neurons, in general, weakens the memory in the system. As the loss increases beyond a critical value, the order parameter measuring the strength of memory decreases to zero either continuously or discontinuously; namely, the system loses its memory via a second- or a first-order transition, depending on the ratio of the refractory period to the duration of the action potential.
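
    The weakening of the memory order parameter under neuronal loss can be illustrated with a simple Hebbian associative memory (a Hopfield-style toy model, not the paper's dynamic model; all sizes are arbitrary):

```python
import numpy as np

# Hebbian memory with a fraction of neurons silenced, mimicking neuronal loss;
# the retrieval overlap with the stored pattern plays the order-parameter role.
rng = np.random.default_rng(2)
N, P = 500, 5
patterns = rng.choice([-1, 1], (P, N))
W = (patterns.T @ patterns) / N                  # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)

def recall_overlap(loss_fraction):
    """Recall pattern 0 from a 20%-corrupted cue after silencing neurons."""
    dead = rng.random(N) < loss_fraction
    s = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)
    s[dead] = 0                                  # lost neurons stay silent
    for _ in range(20):                          # synchronous sign updates
        s = np.sign(W @ s)
        s[s == 0] = 1
        s[dead] = 0
    return (s * patterns[0]).sum() / N

o_intact, o_damaged = recall_overlap(0.0), recall_overlap(0.6)
print(o_intact, o_damaged)  # memory strength drops as neuronal loss grows
```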

  8. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    Directory of Open Access Journals (Sweden)

    Jan Hahne

    2017-05-01

    Full Text Available Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
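
    The time-continuous rate interactions discussed above reduce, in the simplest instantaneous-coupling case, to an ODE integrated alongside the event-driven machinery. A minimal sketch (a linear two-unit rate network with explicit Euler integration; parameters are illustrative, not from the NEST implementation):

```python
import numpy as np

# Linear rate network: tau * du/dt = -u + W @ u + I_ext,
# integrated with explicit Euler as in a time-driven rate update.
tau, dt = 10.0, 0.1
W = np.array([[0.0, 0.4],
              [0.3, 0.0]])
I_ext = np.array([1.0, 0.5])

u = np.zeros(2)
for _ in range(5000):
    u += (dt / tau) * (-u + W @ u + I_ext)

u_star = np.linalg.solve(np.eye(2) - W, I_ext)   # analytic fixed point
print(u, u_star)  # the integrated state converges to the fixed point
```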

  9. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    Science.gov (United States)

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  10. Neural network modeling of chaotic dynamics in nuclear reactor flows

    International Nuclear Information System (INIS)

    Welstead, S.T.

    1992-01-01

    Neural networks have many scientific applications in areas such as pattern classification and time series prediction. The universal approximation property of these networks, however, can also be exploited to provide researchers with a tool for modeling observed nonlinear phenomena. It has been shown that multilayer feed-forward networks can capture important global nonlinear properties, such as chaotic dynamics, merely by training the network on a finite set of observed data. The network itself then provides a model of the process that generated the data. Characterizations such as the existence and general shape of a strange attractor and the sign of the largest Lyapunov exponent can then be extracted from the neural network model. In this paper, the author applies this idea to data generated from a nonlinear process that is representative of convective flows that can arise in nuclear reactor applications. Such flows play a role in forced convection heat removal from pressurized water reactors and boiling water reactors, and decay heat removal from liquid-metal-cooled reactors, either by natural convection or by thermosyphons.
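
    The idea of learning chaotic dynamics from observed data can be sketched on the logistic map (an illustrative stand-in for the reactor-flow data; the network here uses random hidden features with a least-squares readout rather than full backpropagation, purely to keep the sketch deterministic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Data from the chaotic logistic map x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(1000)
x[0] = 0.3
for t in range(999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# One-hidden-layer model: fixed random tanh features, least-squares readout.
H = 50
W1 = rng.normal(0.0, 5.0, (H, 1))
b1 = rng.normal(0.0, 5.0, H)
feats = np.tanh(x[:-1, None] @ W1.T + b1)
w2, *_ = np.linalg.lstsq(feats, x[1:], rcond=None)

err = np.sqrt(np.mean((feats @ w2 - x[1:])**2))
print(err)  # the network reproduces the map's one-step dynamics closely
```

    Once such a surrogate model fits the one-step dynamics, properties like the attractor shape or the sign of the largest Lyapunov exponent can be read off the model instead of the raw data, as the abstract describes.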

  11. Autonomous dynamics in neural networks: the dHAN concept and associative thought processes

    Science.gov (United States)

    Gros, Claudius

    2007-02-01

    The neural activity of the human brain is dominated by self-sustained activities. External sensory stimuli influence this autonomous activity but they do not drive the brain directly. Most standard artificial neural network models are however input driven and do not show spontaneous activities. It constitutes a challenge to develop organizational principles for controlled, self-sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. An associative thought-process is characterized, within this approach, by a time-series of transient attractors. Each transient state corresponds to a stored information, a memory. The subsequent transient states are characterized by large associative overlaps, which are identical to acquired patterns. Memory states, the acquired patterns, have such a dual functionality. In this approach the self-sustained neural activity has a central functional role. The network acquires a discrimination capability, as external stimuli need to compete with the autonomous activity. Noise in the input is readily filtered out. Hebbian learning of external patterns occurs coinstantaneously with the ongoing associative thought process. The autonomous dynamics needs a long-term working-point optimization which acquires within the dHAN concept a dual functionality: it stabilizes the time development of the associative thought process and limits runaway synaptic growth, which generically occurs otherwise in neural networks with self-induced activities and Hebbian-type learning rules.

  12. An implantable wireless neural interface for recording cortical circuit dynamics in moving primates

    Science.gov (United States)

    Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto

    2013-04-01

    Objective. Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims and those living with severe neuromotor disease. Such systems must be chronically safe, durable and effective. Approach. We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based microelectrode array via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, 200× gain) and multiplexed by a custom application specific integrated circuit, digitized and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 and 3.8 GHz to a receiver 1 m away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7 h continuous operation between recharge via an inductive transcutaneous wireless power link at 2 MHz. Main results. Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance. We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile

  13. Neural network based adaptive control for nonlinear dynamic regimes

    Science.gov (United States)

    Shin, Yoonghyun

    Adaptive control designs using neural networks (NNs) based on dynamic inversion are investigated for aerospace vehicles which are operated at highly nonlinear dynamic regimes. NNs play a key role as the principal element of adaptation to approximately cancel the effect of inversion error, which subsequently improves robustness to parametric uncertainty and unmodeled dynamics in nonlinear regimes. An adaptive control scheme previously named 'composite model reference adaptive control' is further developed so that it can be applied to multi-input multi-output output feedback dynamic inversion. It can have adaptive elements in both the dynamic compensator (linear controller) part and/or in the conventional adaptive controller part, also utilizing state estimation information for NN adaptation. This methodology has more flexibility and thus hopefully greater potential than conventional adaptive designs for adaptive flight control in highly nonlinear flight regimes. The stability of the control system is proved through Lyapunov theorems, and validated with simulations. The control designs in this thesis also include the use of 'pseudo-control hedging' techniques which are introduced to prevent the NNs from attempting to adapt to various actuation nonlinearities such as actuator position and rate saturations. Control allocation is introduced for the case of redundant control effectors including thrust vectoring nozzles. A thorough comparison study of conventional and NN-based adaptive designs for a system under a limit cycle, wing-rock, is included in this research, and the NN-based adaptive control designs demonstrate their performances for two highly maneuverable aerial vehicles, NASA F-15 ACTIVE and FQM-117B unmanned aerial vehicle (UAV), operated under various nonlinearities and uncertainties.

  14. Artificial neural networks for dynamic monitoring of simulated-operating parameters of high temperature gas cooled engineering test reactor (HTTR)

    International Nuclear Information System (INIS)

    Seker, Serhat; Tuerkcan, Erdinc; Ayaz, Emine; Barutcu, Burak

    2003-01-01

    This paper addresses the problem of utilising artificial neural networks (ANNs) for detecting anomalies as well as physical parameters of a nuclear power plant during power operation in real time. Three different types of neural network algorithms were used, namely a feed-forward neural network (back-propagation, BP) and two types of recurrent neural networks (RNN). The data used in this paper were gathered from the simulation of the power operation of Japan's High Temperature Engineering Test Reactor (HTTR). For the wide range of power operation, 56 signals were generated by the reactor dynamic simulation code for several hours of normal power operation at different power ramps between 30 and 100% nominal power. The paper compares the outcomes of the different neural networks and presents the neural network system and the determination of physical parameters from the simulated operating data.

  15. Neural dynamics of the cognitive map in the hippocampus.

    Science.gov (United States)

    Wagatsuma, Hiroaki; Yamaguchi, Yoko

    2007-06-01

    The rodent hippocampus has been thought to represent the spatial environment as a cognitive map. In the classical theory, the cognitive map has been explained as a consequence of the fact that different spatial regions are assigned to different cell populations in the framework of rate coding. Recently, the relation between place cell firing and local field oscillation theta in terms of theta phase precession was experimentally discovered and suggested as a temporal coding mechanism leading to memory formation of behavioral sequences accompanied with asymmetric Hebbian plasticity. The cognitive map theory is apparently outside of the sequence memory view. Therefore, theoretical analysis is necessary to consider the biological neural dynamics for the sequence encoding of the memory of behavioral sequences, providing the cognitive map formation. In this article, we summarize the theoretical neural dynamics of the real-time sequence encoding by theta phase precession, called theta phase coding, and review a series of theoretical models with the theta phase coding that we previously reported. With respect to memory encoding functions, instantaneous memory formation of one-time experience was first demonstrated, and then the ability of integration of memories of behavioral sequences into a network of the cognitive map was shown. In terms of memory retrieval functions, theta phase coding enables the hippocampus to represent the spatial location in the current behavioral context even with ambiguous sensory input when multiple sequences were coded. Finally, for utilization, retrieved temporal sequences in the hippocampus can be available for action selection, through the process of reverting theta rhythm-dependent activities to information in the behavioral time scale. This theoretical approach allows us to investigate how the behavioral sequences are encoded, updated, retrieved and used in the hippocampus, as the real-time interaction with the external environment. 
It may
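
    The theta phase precession underlying the coding scheme above can be illustrated with the classic dual-oscillator toy model (an illustrative sketch, not the authors' model): a cell oscillating slightly faster than the theta rhythm fires at progressively earlier theta phases.

```python
import numpy as np

# Dual-oscillator account of theta phase precession: the cell oscillates at
# 9 Hz against an 8 Hz theta LFP, so each successive spike (taken at the
# peaks of the cell oscillation) lands at an earlier theta phase.
f_theta, f_cell = 8.0, 9.0
spike_times = np.arange(1, 10) / f_cell
theta_phase = (2 * np.pi * f_theta * spike_times) % (2 * np.pi)
print(np.degrees(theta_phase))  # phase advances (precesses) spike by spike
```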

  16. Pathwise dynamic programming

    NARCIS (Netherlands)

    Bender, Christian; Gärtner, Christian; Schweizer, Nikolaus

    2017-01-01

    We present a novel method for deriving tight Monte Carlo confidence intervals for solutions of stochastic dynamic programming equations. Taking some approximate solution to the equation as an input, we construct pathwise recursions with a known bias. Suitably coupling the recursions for lower and
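
    The stochastic dynamic programming equations that such confidence intervals bracket have the Bellman form V_t = max(payoff, discounted expectation of V_{t+1}). A minimal backward-induction sketch for an American put on a binomial tree (standard textbook material, not the paper's Monte Carlo construction; all market parameters are illustrative):

```python
import numpy as np

# Backward dynamic programming for an American put on a binomial tree:
# V_t = max(exercise payoff, discounted expected continuation value).
S0, K, u, r, T = 100.0, 100.0, 1.1, 0.02, 50
d = 1.0 / u
p = (np.exp(r) - d) / (u - d)          # risk-neutral up-probability
disc = np.exp(-r)

S = S0 * u**np.arange(T, -1, -1) * d**np.arange(0, T + 1)   # terminal prices
V = np.maximum(K - S, 0.0)
V_eur = V.copy()                       # European counterpart (no early exercise)
for t in range(T - 1, -1, -1):
    S = S0 * u**np.arange(t, -1, -1) * d**np.arange(0, t + 1)
    cont = disc * (p * V[:-1] + (1 - p) * V[1:])
    V = np.maximum(K - S, cont)        # Bellman step: exercise vs. continue
    V_eur = disc * (p * V_eur[:-1] + (1 - p) * V_eur[1:])

print(V[0], V_eur[0])  # the American value dominates the European value
```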

  17. Neural dynamics during repetitive visual stimulation

    Science.gov (United States)

    Tsoneva, Tsvetomira; Garcia-Molina, Gary; Desain, Peter

    2015-12-01

    Objective. Steady-state visual evoked potentials (SSVEPs), the brain responses to repetitive visual stimulation (RVS), are widely utilized in neuroscience. Their high signal-to-noise ratio and ability to entrain oscillatory brain activity are beneficial for their applications in brain-computer interfaces, investigation of neural processes underlying brain rhythmic activity (steady-state topography) and probing the causal role of brain rhythms in cognition and emotion. This paper aims at analyzing the space and time EEG dynamics in response to RVS at the frequency of stimulation and ongoing rhythms in the delta, theta, alpha, beta, and gamma bands. Approach.We used electroencephalography (EEG) to study the oscillatory brain dynamics during RVS at 10 frequencies in the gamma band (40-60 Hz). We collected an extensive EEG data set from 32 participants and analyzed the RVS evoked and induced responses in the time-frequency domain. Main results. Stable SSVEP over parieto-occipital sites was observed at each of the fundamental frequencies and their harmonics and sub-harmonics. Both the strength and the spatial propagation of the SSVEP response seem sensitive to stimulus frequency. The SSVEP was more localized around the parieto-occipital sites for higher frequencies (>54 Hz) and spread to fronto-central locations for lower frequencies. We observed a strong negative correlation between stimulation frequency and relative power change at that frequency, the first harmonic and the sub-harmonic components over occipital sites. Interestingly, over parietal sites for sub-harmonics a positive correlation of relative power change and stimulation frequency was found. A number of distinct patterns in delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz) and beta (15-30 Hz) bands were also observed. 
The transient response, from 0 to about 300 ms after stimulation onset, was accompanied by increase in delta and theta power over fronto-central and occipital sites, which returned to baseline
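
    The core SSVEP measurement, a spectral peak at the stimulation frequency and its harmonics, can be sketched on synthetic data (an illustrative toy signal, not the study's EEG recordings):

```python
import numpy as np

fs, dur, f_stim = 512.0, 4.0, 45.0
t = np.arange(0.0, dur, 1.0 / fs)
rng = np.random.default_rng(4)

# Synthetic parieto-occipital signal: SSVEP at the stimulation frequency,
# a weaker first harmonic, plus background noise.
eeg = (1.0 * np.sin(2 * np.pi * f_stim * t)
       + 0.4 * np.sin(2 * np.pi * 2 * f_stim * t)
       + 0.5 * rng.normal(size=t.size))

spec = np.abs(np.fft.rfft(eeg))**2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
band = (freqs >= 40.0) & (freqs <= 60.0)        # search the gamma-band range
peak = freqs[band][np.argmax(spec[band])]
print(peak)  # the spectral peak recovers the stimulation frequency
```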

  18. Neutrophil programming dynamics and its disease relevance.

    Science.gov (United States)

    Ran, Taojing; Geng, Shuo; Li, Liwu

    2017-11-01

    Neutrophils are traditionally considered as first responders to infection and provide antimicrobial host defense. However, recent advances indicate that neutrophils are also critically involved in the modulation of host immune environments by dynamically adopting distinct functional states. Functionally diverse neutrophil subsets are increasingly recognized as critical components mediating host pathophysiology. Despite its emerging significance, molecular mechanisms as well as functional relevance of dynamically programmed neutrophils remain to be better defined. The increasing complexity of neutrophil functions may require integrative studies that address programming dynamics of neutrophils and their pathophysiological relevance. This review aims to provide an update on the emerging topics of neutrophil programming dynamics as well as their functional relevance in diseases.

  19. PROPOSAL FOR NEURAL-LINGUISTIC PROGRAMMING (N.L.P.) IN THE ADMINISTRATIVE DEVELOPMENT OF LEADERSHIP SPORTS

    Directory of Open Access Journals (Sweden)

    Khalil Samira

    2010-08-01

    Full Text Available Neural-linguistic programming is an organised method for understanding the construction of the human self and dealing with it through fixed means and styles, so as to decisively affect the processes of perception, thinking, imaging, ideas and feeling, as well as behavior, skills, and the body's physical and mental performance (1). Neural-linguistic programming has a practical nature because it is a group of mechanisms and practical techniques far from conjecture, so it falls within the domain of applying and employing human abilities and potentials (9). Al Fiky (2001) points out that neural-linguistic programming created a favourable environment for helping individuals get rid of their pathological fears and control their negative reactions, thus improving communication with themselves and with others. He shows that it has found its way into many fields of human life because its methods and strategies are used in the sectors of health, education, marketing and administration (2). Modern administration centers on the human element, which represents the most valuable element of administration and has the greatest effect on productivity; with the growing effect of the human element on the efficacy of administrative organizations, the need has increased to treat human resource management as an independent administrative function, one that concerns the human element and on whose efficiency, abilities, experience and zeal for work the efficacy of the administration depends.

  20. The equilibrium of neural firing: A mathematical theory

    Energy Technology Data Exchange (ETDEWEB)

    Lan, Sizhong, E-mail: lsz@fuyunresearch.org [Fuyun Research, Beijing, 100055 (China)

    2014-12-15

    Inspired by statistical thermodynamics, we presume that the neuron system has an equilibrium condition with respect to neural firing. We show that, even with dynamically changing neural connections, it is inevitable for neural firing to evolve to equilibrium. To study the dynamics between neural firing and neural connections, we propose an extended communication system in which the noisy channel has a tendency towards a fixed point, implying that neural connections are always attracted into fixed points such that equilibrium can be reached. The extended communication system and its mathematics could, in turn, be useful in thermodynamics.

  1. Neural Dynamics Associated with Semantic and Episodic Memory for Faces: Evidence from Multiple Frequency Bands

    Science.gov (United States)

    Zion-Golumbic, Elana; Kutas, Marta; Bentin, Shlomo

    2010-01-01

    Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and the retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident…

  2. The influence of mental fatigue and motivation on neural network dynamics; an EEG coherence study

    NARCIS (Netherlands)

    Lorist, Monicque M.; Bezdan, Eniko; Caat, Michael ten; Span, Mark M.; Roerdink, Jos B.T.M.; Maurits, Natasha M.

    2009-01-01

    The purpose of the present study is to examine the effects of mental fatigue and motivation on neural network dynamics activated during task switching. Mental fatigue was induced by 2 h of continuous performance; after which subjects were motivated by using social comparison and monetary reward as

  3. Configuring Airspace Sectors with Approximate Dynamic Programming

    Science.gov (United States)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
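The three algorithm classes compared in this record can be sketched on a toy instance. The configuration space, workload table, and reconfiguration cost below are illustrative inventions, not the paper's data; the rollout here uses the myopic heuristic as its base policy, which is the standard construction:

```python
# Toy sector-configuration problem: pick one configuration per time step,
# minimizing workload cost plus reconfiguration cost (illustrative numbers).
W = [  # W[t][c]: workload cost of configuration c at time step t
    [1, 3, 6], [5, 2, 6], [6, 2, 1], [1, 4, 2], [5, 1, 1], [1, 2, 6],
]
RECONF = 2                # cost of switching configurations between steps
T, C = len(W), 3

def trans(prev, c):
    return 0 if prev is None or prev == c else RECONF

def exact_dp():
    # backward recursion over (time, previous configuration)
    V = [0.0] * C
    for t in range(T - 1, 0, -1):
        V = [min(W[t][c] + trans(p, c) + V[c] for c in range(C))
             for p in range(C)]
    return min(W[0][c] + V[c] for c in range(C))

def heuristic_cost(t, prev):
    # cost of running the myopic heuristic from (t, prev) to the horizon
    total = 0
    for s in range(t, T):
        c = min(range(C), key=lambda c: W[s][c] + trans(prev, c))
        total += W[s][c] + trans(prev, c)
        prev = c
    return total

def myopic():
    return heuristic_cost(0, None)

def rollout():
    # one-step lookahead using the heuristic as the cost-to-go estimate
    prev, total = None, 0
    for t in range(T):
        c = min(range(C),
                key=lambda c: W[t][c] + trans(prev, c) + heuristic_cost(t + 1, c))
        total += W[t][c] + trans(prev, c)
        prev = c
    return total

dp_cost, ro_cost, my_cost = exact_dp(), rollout(), myopic()
```

On this instance the exact DP is cheap; the point of rollouts is that they never cost more than their base heuristic while approaching the DP optimum.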

  4. Dynamic analysis program for frame structure

    International Nuclear Information System (INIS)

    Ando, Kozo; Chiba, Toshio

    1975-01-01

    A general purpose computer program named ISTRAN/FD (IHI STRucture ANalysis / Frame structure, Dynamic analysis) has been developed for dynamic analysis of three-dimensional frame structures. This program has functions of free vibration analysis, seismic response analysis, graphic display by plotter and CRT, etc. This paper introduces ISTRAN/FD; examples of its application are shown for various problems: idealization of the cantilever, dynamic analysis of the main tower of a suspension bridge, three-dimensional vibration in a plate girder bridge, seismic response in a boiler steel structure, and dynamic properties of an underground LNG tank. In this last example, solid elements, in addition to beam elements, are especially used for the analysis. (auth.)

  5. Noninvasive fetal QRS detection using an echo state network and dynamic programming

    International Nuclear Information System (INIS)

    Lukoševičius, Mantas; Marozas, Vaidotas

    2014-01-01

    We address a classical fetal QRS detection problem from abdominal ECG recordings with a data-driven statistical machine learning approach. Our goal is to have a powerful, yet conceptually clean, solution. There are two novel key components at the heart of our approach: an echo state recurrent neural network that is trained to indicate fetal QRS complexes, and several increasingly sophisticated versions of statistics-based dynamic programming algorithms, which are derived from and rooted in probability theory. We also employ a standard technique for preprocessing and removing maternal ECG complexes from the signals, but do not take this as the main focus of this work. The proposed approach is quite generic and can be extended to other types of signals and annotations. Open-source code is provided. (paper)
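The authors' statistics-based dynamic programming is derived from probability theory; a much simpler score-maximizing variant (not their algorithm) conveys the idea of picking a beat sequence that maximizes total detector evidence subject to physiological inter-beat bounds. The detector scores and RR bounds below are synthetic:

```python
# scores[t]: detector evidence that a QRS complex occurs at sample t.
# Choose beat positions maximizing total score, with consecutive beats
# separated by an interval in [rr_min, rr_max] samples.
def pick_beats(scores, rr_min, rr_max):
    n = len(scores)
    best = [float("-inf")] * n   # best[t]: best score of a sequence ending at t
    prev = [-1] * n              # predecessor beat for backtracking
    for t in range(n):
        best[t] = scores[t]      # a sequence may start at t
        for p in range(max(0, t - rr_max), t - rr_min + 1):
            if best[p] + scores[t] > best[t]:
                best[t], prev[t] = best[p] + scores[t], p
    # backtrack from the best ending position
    t = max(range(n), key=lambda i: best[i])
    seq = []
    while t != -1:
        seq.append(t)
        t = prev[t]
    return seq[::-1]

# synthetic evidence with clear peaks at samples 10, 30 and 50
demo_scores = [0.01] * 60
demo_scores[10] = demo_scores[30] = demo_scores[50] = 1.0
beats = pick_beats(demo_scores, 15, 25)
```

The real task additionally handles noisy, quasi-periodic scores and maternal-complex removal, which this sketch omits.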

  6. Artificial intelligence. Application of the Statistical Neural Networks computer program in nuclear medicine

    International Nuclear Information System (INIS)

    Stefaniak, B.; Cholewinski, W.; Tarkowska, A.

    2005-01-01

    Artificial Neural Networks (ANN) may be a tool alternative and complementary to typical statistical analysis. However, in spite of the many ready-to-use computer implementations of various ANN algorithms, artificial intelligence is relatively rarely applied to data processing. In this paper, practical aspects of the scientific application of ANN in medicine using the Statistical Neural Networks computer program were presented. Several steps of data analysis with the above ANN software package were discussed briefly, from the selection of material and its division into groups to the types of results obtained. The typical problems connected with assessing scintigrams by ANN were also described. (author)

  7. Zhang neural network for online solution of time-varying convex quadratic program subject to time-varying linear-equality constraints

    International Nuclear Information System (INIS)

    Zhang Yunong; Li Zhan

    2009-01-01

    In this Letter, by following Zhang et al.'s method, a recurrent neural network (termed as Zhang neural network, ZNN) is developed and analyzed for solving online the time-varying convex quadratic-programming problem subject to time-varying linear-equality constraints. Different from conventional gradient-based neural networks (GNN), such a ZNN model makes full use of the time-derivative information of time-varying coefficient. The resultant ZNN model is theoretically proved to have global exponential convergence to the time-varying theoretical optimal solution of the investigated time-varying convex quadratic program. Computer-simulation results further substantiate the effectiveness, efficiency and novelty of such ZNN model and method.
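The ZNN design prescribes that the KKT residual E(t) = M(t)y(t) - u(t) obey dE/dt = -γE, which uses the time derivatives of the coefficients. A minimal numerical sketch on an invented toy instance (minimize ½‖x‖² subject to a(t)ᵀx = b(t) with a(t) = (cos t, sin t), b(t) = sin t; these choices are illustrative, not from the Letter):

```python
import math

def solve3(M, v):
    # tiny Gauss-Jordan elimination with partial pivoting for a 3x3 system
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(3):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return [A[i][3] / A[i][i] for i in range(3)]

# KKT system M(t) y = u(t) for the toy QP, with y = (x1, x2, lambda)
def M_of(t):  return [[1.0, 0.0, math.cos(t)],
                      [0.0, 1.0, math.sin(t)],
                      [math.cos(t), math.sin(t), 0.0]]
def u_of(t):  return [0.0, 0.0, math.sin(t)]
def Mdot(t):  return [[0.0, 0.0, -math.sin(t)],
                      [0.0, 0.0, math.cos(t)],
                      [-math.sin(t), math.cos(t), 0.0]]
def udot(t):  return [0.0, 0.0, math.cos(t)]

gamma, dt, T = 50.0, 1e-3, 2.0
y = [0.5, -0.5, 0.0]                  # deliberately wrong initial state
t = 0.0
while t < T:
    M, u, Md, ud = M_of(t), u_of(t), Mdot(t), udot(t)
    My  = [sum(M[i][j]  * y[j] for j in range(3)) for i in range(3)]
    Mdy = [sum(Md[i][j] * y[j] for j in range(3)) for i in range(3)]
    # ZNN design formula: M ydot = udot - Mdot y - gamma (M y - u)
    rhs = [ud[i] - Mdy[i] - gamma * (My[i] - u[i]) for i in range(3)]
    ydot = solve3(M, rhs)
    y = [y[i] + dt * ydot[i] for i in range(3)]
    t += dt

# analytic optimum: x*(t) = a(t) b(t), since ||a(t)|| = 1
x_star = [math.cos(T) * math.sin(T), math.sin(T) ** 2]
err = max(abs(y[0] - x_star[0]), abs(y[1] - x_star[1]))
```

A gradient-based network would lag the moving optimum; feeding the time derivatives is what lets the ZNN track it.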

  8. Male veterans with PTSD exhibit aberrant neural dynamics during working memory processing: an MEG study.

    Science.gov (United States)

    McDermott, Timothy J; Badura-Brack, Amy S; Becker, Katherine M; Ryan, Tara J; Khanna, Maya M; Heinrichs-Graham, Elizabeth; Wilson, Tony W

    2016-06-01

    Posttraumatic stress disorder (PTSD) is associated with executive functioning deficits, including disruptions in working memory. In this study, we examined the neural dynamics of working memory processing in veterans with PTSD and a matched healthy control sample using magnetoencephalography (MEG). Our sample of recent combat veterans with PTSD and demographically matched participants without PTSD completed a working memory task during a 306-sensor MEG recording. The MEG data were preprocessed and transformed into the time-frequency domain. Significant oscillatory brain responses were imaged using a beamforming approach to identify spatiotemporal dynamics. Fifty-one men were included in our analyses: 27 combat veterans with PTSD and 24 controls. Across all participants, a dynamic wave of neural activity spread from posterior visual cortices to left frontotemporal regions during encoding, consistent with a verbal working memory task, and was sustained throughout maintenance. Differences related to PTSD emerged during early encoding, with patients exhibiting stronger α oscillatory responses than controls in the right inferior frontal gyrus (IFG). Differences spread to the right supramarginal and temporal cortices during later encoding where, along with the right IFG, they persisted throughout the maintenance period. This study focused on men with combat-related PTSD using a verbal working memory task. Future studies should evaluate women and the impact of various traumatic experiences using diverse tasks. Posttraumatic stress disorder is associated with neurophysiological abnormalities during working memory encoding and maintenance. Veterans with PTSD engaged a bilateral network, including the inferior prefrontal cortices and supramarginal gyri. Right hemispheric neural activity likely reflects compensatory processing, as veterans with PTSD work to maintain accurate performance despite known cognitive deficits associated with the disorder.

  9. Environmental/dynamic mechanical equipment qualification and dynamic electrical equipment qualification program (EDQP)

    International Nuclear Information System (INIS)

    Hunter, J.A.

    1984-01-01

    Equipment qualification research is being conducted to investigate acceptable criteria, requirements, and methodologies for the dynamic (including seismic) and environmental qualification of mechanical equipment and for the dynamic (including seismic) qualification of electrical equipment. The program is organized into three elements: (1) General Research, (2) Environmental Research, and (3) Dynamic Research. This paper presents the highlights of the results to date in these three elements of the program.

  10. Dual Dynamic Programming - DDP

    International Nuclear Information System (INIS)

    Velasquez Bermudez, Jesus M

    1998-01-01

    Objections are presented to the mathematical formulation of the so-called Dual Dynamic Programming (DDP), which is the theoretical basis of several computational models available for the optimal operation of interconnected hydrothermal systems.

  11. Dynamic programming for QFD in PES optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sorrentino, R. [Mediterranean Univ. of Reggio Calabria, Reggio Calabria (Italy). Dept. of Computer Science and Electrical Technology

    2008-07-01

    Quality function deployment (QFD) is a method for linking the needs of the customer with design, development, engineering, manufacturing, and service functions. In the electric power industry, QFD is used to help designers concentrate on the most important technical attributes to develop better electrical services. Most optimization approaches used in QFD analysis have been based on integer or linear programming. These approaches perform well in certain circumstances, but there are problems that hinder their practical use. This paper proposed an approach to optimize Power and Energy Systems (PES). A dynamic programming approach was used along with an extended House of Quality to gather information. Dynamic programming was used to allocate the limited resources to the technical attributes. The approach integrated dynamic programming into the electrical service design process. The dynamic programming approach did not require the full relationship curve between technical attributes and customer satisfaction, or the relationship between technical attributes and cost. It only used a group of discrete points containing information about customer satisfaction, technical attributes, and the cost to find the optimal product design. Therefore, it required less time and resources than other approaches. At the end of the optimization process, the value of each technical attribute, the related cost, and the overall customer satisfaction were obtained at the same time. It was concluded that compared with other optimization methods, the dynamic programming method requires less information and the optimal results are more relevant. 21 refs., 2 tabs., 2 figs.
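The allocation step described above is structurally a multiple-choice knapsack: for each technical attribute, choose one discrete (cost, satisfaction) point so that total cost respects the budget and total satisfaction is maximized. The attribute names, option points, and budget below are illustrative, not taken from the paper:

```python
# For each technical attribute, a list of (cost, satisfaction) design options;
# the (0, 0) option means "leave the attribute at its baseline".
attributes = [
    [(0, 0), (2, 3), (4, 5)],   # e.g. voltage-regulation accuracy (hypothetical)
    [(0, 0), (1, 2), (3, 4)],   # e.g. outage-restoration time (hypothetical)
    [(0, 0), (2, 3)],           # e.g. metering resolution (hypothetical)
]
B = 5                            # total resource budget

# best[b]: max satisfaction achievable with total cost <= b,
# choosing exactly one option per attribute processed so far
best = [0] * (B + 1)
for options in attributes:
    new = [float("-inf")] * (B + 1)
    for b in range(B + 1):
        for c, s in options:
            if c <= b and best[b - c] + s > new[b]:
                new[b] = best[b - c] + s
    best = new

max_satisfaction = best[B]
```

This matches the paper's point that only discrete (cost, satisfaction) points per attribute are needed, not full relationship curves.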

  12. Adaptive online inverse control of a shape memory alloy wire actuator using a dynamic neural network

    Science.gov (United States)

    Mai, Huanhuan; Song, Gangbing; Liao, Xiaofeng

    2013-01-01

    Shape memory alloy (SMA) actuators exhibit severe hysteresis, a nonlinear behavior, which complicates control strategies and limits their applications. This paper presents a new approach to controlling an SMA actuator through an adaptive inverse model based controller that consists of a dynamic neural network (DNN) identifier, a copy dynamic neural network (CDNN) feedforward term and a proportional (P) feedback action. Unlike fixed hysteresis models used in most inverse controllers, the proposed one uses a DNN to identify online the relationship between the applied voltage to the actuator and the displacement (the inverse model). Even without a priori knowledge of the SMA hysteresis and without pre-training, the proposed controller can precisely control the SMA wire actuator in various tracking tasks by identifying online the inverse model of the SMA actuator. Experiments were conducted, and experimental results demonstrated real-time modeling capabilities of DNN and the performance of the adaptive inverse controller.
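The control structure (an online-identified inverse model feeding forward, plus proportional feedback) can be sketched far more simply than the paper's DNN/CDNN scheme. In the sketch below, the SMA actuator is replaced by a memoryless saturation (the real plant is hysteretic) and the DNN by a linear-in-features model with a sign-gradient update; everything numeric is an assumption for illustration only:

```python
import math

def plant(u):
    # stand-in for the SMA voltage-to-displacement map; a memoryless
    # saturation replaces the real hysteretic dynamics to keep the sketch small
    return math.tanh(u)

def features(r):
    return [1.0, r, r ** 3]      # basis for the learned inverse model

w = [0.0, 0.0, 0.0]              # inverse-model weights, learned online
kp, lr = 0.5, 0.05               # proportional gain, adaptation rate
errs, y = [], 0.0
for k in range(4000):
    r = 0.8 * math.sin(0.01 * k)                  # reference displacement
    phi = features(r)
    u_ff = sum(wi * p for wi, p in zip(w, phi))   # feedforward (inverse model)
    u = u_ff + kp * (r - y)                       # plus proportional feedback
    y = plant(u)
    e = r - y
    errs.append(abs(e))
    # sign-gradient update; assumes the plant gain is positive
    w = [wi + lr * e * p for wi, p in zip(w, phi)]

early = sum(errs[:1000]) / 1000    # mean tracking error before adaptation
late = sum(errs[-1000:]) / 1000    # mean tracking error after adaptation
```

As in the paper, no pre-training is used: the inverse model is identified entirely online while the feedback term keeps the loop usable during learning.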

  13. Enhanced Dynamic Model of Pneumatic Muscle Actuator with Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Alexander Hošovský

    2015-01-01

    Full Text Available To make effective use of model-based control system design techniques, one needs a good model which captures the system’s dynamic properties in the range of interest. Here an analytical model of a pneumatic muscle actuator with two pneumatic artificial muscles driving a rotational joint is developed. Use of an analytical model makes it possible to retain the physical interpretation of the model, and the model is validated using open-loop responses. Since it was considered important to design a robust controller based on this model, the effect of a changed moment of inertia (as a representation of an uncertain parameter) was taken into account and compared with the nominal case. To improve the accuracy of the model, these effects are treated as a disturbance modeled using a recurrent (Elman) neural network. The recurrent neural network was preferred over the feedforward type due to its better long-term prediction capabilities, well suited for simulation use of the model. The results confirm that this method improves the model performance (tested for five of the measured variables: joint angle, muscle pressures, and muscle forces) while retaining its physical interpretation.

  14. Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems.

    Science.gov (United States)

    Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S; Agarwal, Dev P

    2015-01-01

    Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed, for a class of nonlinear dynamical systems. In this process, the weight connecting between the instar and outstar, that is, input-hidden and hidden-output layer, respectively, is adjusted by using Fuzzy Competitive Learning (FCL). FCL paradigm adopts the principle of learning, which is used to calculate Best Matched Node (BMN) which is proposed. This strategy offers a robust control of nonlinear dynamical systems. FCPN is compared with the existing network like Dynamic Network (DN) and Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. It envisages that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and multiple input and single output (MISO) and a single input and single output (SISO) gas furnace Box-Jenkins time series data.

  15. Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems

    Directory of Open Access Journals (Sweden)

    Vandana Sakhre

    2015-01-01

    Full Text Available Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed, for a class of nonlinear dynamical systems. In this process, the weight connecting between the instar and outstar, that is, input-hidden and hidden-output layer, respectively, is adjusted by using Fuzzy Competitive Learning (FCL). FCL paradigm adopts the principle of learning, which is used to calculate Best Matched Node (BMN) which is proposed. This strategy offers a robust control of nonlinear dynamical systems. FCPN is compared with the existing network like Dynamic Network (DN) and Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. It envisages that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and multiple input and single output (MISO) and a single input and single output (SISO) gas furnace Box-Jenkins time series data.

  16. Intelligent and robust prediction of short term wind power using genetic programming based ensemble of neural networks

    International Nuclear Information System (INIS)

    Zameer, Aneela; Arshad, Junaid; Khan, Asifullah; Raja, Muhammad Asif Zahoor

    2017-01-01

    Highlights: • Genetic programming based ensemble of neural networks is employed for short term wind power prediction. • Proposed predictor shows resilience against abrupt changes in weather. • Genetic programming evolves nonlinear mapping between meteorological measures and wind-power. • Proposed approach gives mathematical expressions of wind power to its independent variables. • Proposed model shows relatively accurate and steady wind-power prediction performance. - Abstract: The inherent instability of wind power production leads to critical problems for smooth power generation from wind turbines, which then requires an accurate forecast of wind power. In this study, an effective short term wind power prediction methodology is presented, which uses an intelligent ensemble regressor that comprises Artificial Neural Networks and Genetic Programming. In contrast to existing series based combination of wind power predictors, whereby the error or variation in the leading predictor is propagated down the stream to the next predictors, the proposed intelligent ensemble predictor avoids this shortcoming by introducing Genetic Programming based semi-stochastic combination of neural networks. It is observed that the decision of the individual base regressors may vary due to the frequent and inherent fluctuations in the atmospheric conditions and thus meteorological properties. The novelty of the reported work lies in creating ensemble to generate an intelligent, collective and robust decision space and thereby avoiding large errors due to the sensitivity of the individual wind predictors. The proposed ensemble based regressor, Genetic Programming based ensemble of Artificial Neural Networks, has been implemented and tested on data taken from five different wind farms located in Europe. Obtained numerical results of the proposed model in terms of various error measures are compared with the recent artificial intelligence based strategies to demonstrate the

  17. Hybrid Differential Dynamic Programming with Stochastic Search

    Science.gov (United States)

    Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob

    2016-01-01

    Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
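Monotonic basin hopping itself is simple: perturb the incumbent, run the local optimizer, and accept only improvements. In the sketch below, plain gradient descent on an invented multimodal 1-D landscape stands in for the HDDP inner solver; all functions and constants are illustrative:

```python
import math, random

def f(x):
    # multimodal 1-D test landscape: global minimum f(0) = 0,
    # with local minima near the other integers
    return x * x + 3.0 * (1.0 - math.cos(2.0 * math.pi * x))

def local_descent(x, lr=0.01, steps=300):
    # stand-in for the gradient-based inner solver (HDDP in the paper);
    # converges only to the local minimum of the starting basin
    for _ in range(steps):
        g = 2.0 * x + 6.0 * math.pi * math.sin(2.0 * math.pi * x)
        x -= lr * g
    return x

random.seed(0)
best_x = local_descent(3.7)          # gradient method alone: a local minimum
best_f = f(best_x)
for _ in range(200):                 # monotonic basin hopping outer loop
    cand = local_descent(best_x + random.gauss(0.0, 1.0))
    if f(cand) < best_f:             # "monotonic": accept only improvements
        best_x, best_f = cand, f(cand)
```

The outer loop never degrades the incumbent, so MBH inherits the local solver's polish while escaping its basin dependence.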

  18. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.
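The Gödelization step mentioned above has a compact concrete form: a one-sided symbol sequence over an alphabet of size N maps to a point of the unit interval, and the left shift on sequences becomes multiplication by N modulo 1. This is the textbook construction, not the paper's full versatile-shift machinery:

```python
# Goedelization of a one-sided symbol sequence over alphabet {0, ..., N-1}:
# psi(s) = sum_k s_k / N^(k+1) embeds the sequence in [0, 1); the symbolic
# left shift then becomes the interval map x -> N*x mod 1.
N = 3

def godelize(seq):
    return sum(sym / N ** (k + 1) for k, sym in enumerate(seq))

def shift_map(x):
    return (N * x) % 1.0

seq = [2, 0, 1, 1, 2]
x = godelize(seq)
x_shifted = shift_map(x)         # should encode the shifted sequence seq[1:]
```

The paper's nonlinear dynamical automata generalize this idea to two-sided sequences and piecewise-affine maps on a vectorial space.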

  19. Learning and adaptation: neural and behavioural mechanisms behind behaviour change

    Science.gov (United States)

    Lowe, Robert; Sandamirskaya, Yulia

    2018-01-01

    This special issue presents perspectives on learning and adaptation as they apply to a number of cognitive phenomena including pupil dilation in humans and attention in robots, natural language acquisition and production in embodied agents (robots), human-robot game play and social interaction, neural-dynamic modelling of active perception and neural-dynamic modelling of infant development in the Piagetian A-not-B task. The aim of the special issue, through its contributions, is to highlight some of the critical neural-dynamic and behavioural aspects of learning as it grounds adaptive responses in robotic- and neural-dynamic systems.

  20. Joint Chance-Constrained Dynamic Programming

    Science.gov (United States)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by a standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
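The dualization idea can be sketched on a tiny invented MDP: the indicator of ever entering the failure set is priced into the stage costs with a dual multiplier λ, a standard DP optimizes the dualized cost, and an outer root-finder (here plain bisection, standing in for the paper's exponentially convergent method) tunes λ until the risk bound holds. All states, transition probabilities, and costs are illustrative:

```python
T, S = 4, 5          # horizon, states 0..4 (state 0 is the failure set)
DELTA = 0.03         # allowed probability of ever entering the failure set
START = 2

def step(s, a):
    # [(prob, next_state)]; action 1 is faster but can slip backwards
    if a == 0:
        return [(1.0, s)]
    return [(0.8, min(s + 1, S - 1)), (0.2, s - 1)]

def solve(lam):
    # backward DP on the dualized cost: stage cost + lam * 1{enter failure};
    # failure is absorbing, so the penalty is paid once on entry
    V = [[0.0] * S for _ in range(T + 1)]
    policy = [[0] * S for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for s in range(1, S):
            best_q = None
            for a in (0, 1):
                q = (S - 1) - s          # stage cost: distance from target state
                for p, s2 in step(s, a):
                    q += p * (lam if s2 == 0 else V[t + 1][s2])
                if best_q is None or q < best_q:
                    best_q, policy[t][s] = q, a
            V[t][s] = best_q
    return policy

def risk(policy):
    # forward recursion: probability of ever reaching the failure set
    dist, fail = {START: 1.0}, 0.0
    for t in range(T):
        nxt = {}
        for s, pr in dist.items():
            for p, s2 in step(s, policy[t][s]):
                if s2 == 0:
                    fail += pr * p
                else:
                    nxt[s2] = nxt.get(s2, 0.0) + pr * p
        dist = nxt
    return fail

lo, hi = 0.0, 100.0          # bisection on the dual variable lambda
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if risk(solve(mid)) <= DELTA:
        hi = mid
    else:
        lo = mid
final_risk = risk(solve(hi))
```

The unconstrained optimum (λ = 0) violates the 3% risk bound on this instance, while the dualized DP returns a feasible policy.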

  1. Approximate Dynamic Programming Solving the Curses of Dimensionality

    CERN Document Server

    Powell, Warren B

    2011-01-01

    Praise for the First Edition "Finally, a book devoted to dynamic programming and written using the language of operations research (OR)! This beautiful book fills a gap in the libraries of OR specialists and practitioners."-Computing Reviews This new edition showcases a focus on modeling and computation for complex classes of approximate dynamic programming problems Understanding approximate dynamic programming (ADP) is vital in order to develop practical and high-quality solutions to complex industrial problems, particularly when those problems involve making decisions in the presence of unce

  2. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    Science.gov (United States)

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN, at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive window, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.

  3. Adaptive online inverse control of a shape memory alloy wire actuator using a dynamic neural network

    International Nuclear Information System (INIS)

    Mai, Huanhuan; Liao, Xiaofeng; Song, Gangbing

    2013-01-01

    Shape memory alloy (SMA) actuators exhibit severe hysteresis, a nonlinear behavior, which complicates control strategies and limits their applications. This paper presents a new approach to controlling an SMA actuator through an adaptive inverse model based controller that consists of a dynamic neural network (DNN) identifier, a copy dynamic neural network (CDNN) feedforward term and a proportional (P) feedback action. Unlike fixed hysteresis models used in most inverse controllers, the proposed one uses a DNN to identify online the relationship between the applied voltage to the actuator and the displacement (the inverse model). Even without a priori knowledge of the SMA hysteresis and without pre-training, the proposed controller can precisely control the SMA wire actuator in various tracking tasks by identifying online the inverse model of the SMA actuator. Experiments were conducted, and experimental results demonstrated real-time modeling capabilities of DNN and the performance of the adaptive inverse controller. (paper)

  4. Neural correlates of the perception of dynamic versus static facial expressions of emotion.

    Science.gov (United States)

    Kessler, Henrik; Doyen-Waldecker, Cornelia; Hofer, Christian; Hoffmann, Holger; Traue, Harald C; Abler, Birgit

    2011-04-20

    This study investigated brain areas involved in the perception of dynamic facial expressions of emotion. A group of 30 healthy subjects was measured with fMRI when passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Using morphing techniques, all faces were displayed as still images and also dynamically as a film clip with the expressions evolving from neutral to emotional. Irrespective of a specific emotion, dynamic stimuli selectively activated bilateral superior temporal sulcus, visual area V5, fusiform gyrus, thalamus and other frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were only found for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex. Our results confirm previous findings on neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. Differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is coherent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli.

  5. Chaos control of the brushless direct current motor using adaptive dynamic surface control based on neural network with the minimum weights

    International Nuclear Information System (INIS)

    Luo, Shaohua; Wu, Songli; Gao, Ruizhen

    2015-01-01

    This paper investigates chaos control for the brushless DC motor (BLDCM) system by adaptive dynamic surface approach based on neural network with the minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation

  6. Chaos control of the brushless direct current motor using adaptive dynamic surface control based on neural network with the minimum weights.

    Science.gov (United States)

    Luo, Shaohua; Wu, Songli; Gao, Ruizhen

    2015-07-01

    This paper investigates chaos control for the brushless DC motor (BLDCM) system by adaptive dynamic surface approach based on neural network with the minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation.
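The RBF approximation at the heart of this scheme can be illustrated in isolation: a weighted sum of Gaussian basis functions fitted to an unknown nonlinearity. The target function, centers, and fitting procedure below are illustrative stand-ins (the paper adapts the weights online inside the control loop rather than fitting them offline):

```python
import math

def rbf(x, c, width=1.0):
    return math.exp(-((x - c) / width) ** 2)

centers = [-3 + i for i in range(7)]       # Gaussian centers on [-3, 3]

def target(x):
    # stands in for the unknown nonlinearity the RBF network approximates
    return math.sin(x) + 0.5 * x

# fit output weights by least squares: normal equations + Gauss-Jordan solve
xs = [-3 + 0.1 * i for i in range(61)]
n = len(centers)
A = [[sum(rbf(x, ci) * rbf(x, cj) for x in xs) for cj in centers]
     for ci in centers]
b = [sum(rbf(x, ci) * target(x) for x in xs) for ci in centers]

def solve(A, b):
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    k = len(b)
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(k):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [a - f * c for a, c in zip(m[r], m[i])]
    return [m[i][k] / m[i][i] for i in range(k)]

w = solve(A, b)

def net(x):
    return sum(wi * rbf(x, ci) for wi, ci in zip(w, centers))

max_err = max(abs(net(x) - target(x)) for x in xs)
```

Because the output is linear in the weights, such networks admit simple adaptive laws with provable boundedness, which is what the dynamic surface design exploits.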

  7. The application of dynamic programming in production planning

    Science.gov (United States)

    Wu, Run

    2017-05-01

    Nowadays, with the popularity of computers, various industries and fields widely apply computer information technology, which creates a huge demand for a variety of application software. In order to develop software that meets various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand, but also maximizes the benefits while incurring the smallest overhead. As one of the common algorithmic techniques, dynamic programming is used to solve problems that exhibit optimal substructure. When a problem contains a large number of overlapping sub-problems that would otherwise require repetitive calculation, the ordinary recursive method consumes exponential time, whereas dynamic programming can reduce the time complexity of the algorithm to the polynomial level; we can therefore conclude that dynamic programming is very efficient compared to other algorithms, reducing the computational complexity while enriching the computational results. In this paper, we expound the concept, basic elements, properties, core ideas, solving steps and difficulties of the dynamic programming algorithm, and establish a dynamic programming model of the production planning problem.
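
    The reduction from exponential recursion to polynomial time can be illustrated on a toy production-planning instance; all demands, capacities and costs below are invented for illustration. Memoizing on the pair (period, stock) means each state is solved only once.

```python
from functools import lru_cache

# Hypothetical production-planning instance (all numbers invented):
# per-period demand, a production cap, unit production cost, and a
# unit holding cost for stock carried into the next period.
DEMAND = [2, 3, 1, 4]
CAP, C_PROD, C_HOLD = 4, 10, 1

@lru_cache(maxsize=None)
def min_cost(t, stock):
    """Minimum cost to meet demand from period t onward with `stock` on hand."""
    if t == len(DEMAND):
        return 0
    best = float("inf")
    for produce in range(CAP + 1):          # decide this period's production
        available = stock + produce
        if available < DEMAND[t]:
            continue                        # demand must be met
        carry = available - DEMAND[t]
        best = min(best,
                   produce * C_PROD + carry * C_HOLD + min_cost(t + 1, carry))
    return best

print(min_cost(0, 0))                       # optimum is just-in-time production
```

Without the `lru_cache` memoization the same `(t, stock)` states are recomputed exponentially often; with it, the number of solved subproblems is bounded by the table size.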

  8. Convergence dynamics of hybrid bidirectional associative memory neural networks with distributed delays

    International Nuclear Information System (INIS)

    Liao Xiaofeng; Wong, K.-W.; Yang Shizhong

    2003-01-01

    In this Letter, the characteristics of the convergence dynamics of hybrid bidirectional associative memory neural networks with distributed transmission delays are studied. Without assuming the symmetry of synaptic connection weights and the monotonicity and differentiability of activation functions, the Lyapunov functionals are constructed and the generalized Halanay-type inequalities are employed to derive the delay-independent sufficient conditions under which the networks converge exponentially to the equilibria associated with temporally uniform external inputs. Some examples are given to illustrate the correctness of our results

  9. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
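
    The link between stochastic firing and sampling can be sketched with a two-unit toy model; the weights are invented, and plain reversible Gibbs sampling is used here, whereas the paper develops non-reversible chains. Each unit "fires" with its conditional probability, and the empirical firing rate approaches the exact marginal of the target Boltzmann distribution.

```python
import math
import random

random.seed(0)

# Toy Boltzmann distribution over two binary units (weights invented):
# P(z1, z2) proportional to exp(w*z1*z2 + b1*z1 + b2*z2)
w, b1, b2 = 1.0, -0.5, 0.2

def unnorm(z1, z2):
    return math.exp(w * z1 * z2 + b1 * z1 + b2 * z2)

# Exact marginal P(z1 = 1) by enumeration, for comparison.
Z = sum(unnorm(a, b) for a in (0, 1) for b in (0, 1))
exact_p1 = sum(unnorm(1, b) for b in (0, 1)) / Z

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Gibbs sampling: each "neuron" fires with its conditional probability.
z1, z2 = 0, 0
count = 0
STEPS = 20000
for _ in range(STEPS):
    z1 = 1 if random.random() < sigmoid(w * z2 + b1) else 0
    z2 = 1 if random.random() < sigmoid(w * z1 + b2) else 0
    count += z1

print(abs(count / STEPS - exact_p1))    # small sampling error
```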

  10. A Nonlinear Programming and Artificial Neural Network Approach for Optimizing the Performance of a Job Dispatching Rule in a Wafer Fabrication Factory

    Directory of Open Access Journals (Sweden)

    Toly Chen

    2012-01-01

    Full Text Available A nonlinear programming and artificial neural network approach is presented in this study to optimize the performance of a job dispatching rule in a wafer fabrication factory. The proposed methodology fuses two existing rules and constructs a nonlinear programming model to choose the best values of parameters in the two rules by dynamically maximizing the standard deviation of the slack, which has been shown to benefit scheduling performance by several studies. In addition, a more effective approach is also applied to estimate the remaining cycle time of a job, which is empirically shown to be conducive to the scheduling performance. The efficacy of the proposed methodology was validated with a simulated case; evidence was found to support its effectiveness. We also suggested several directions in which it can be exploited in the future.
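
    The parameter-tuning idea can be sketched as follows; the slack values, the multiplicative fusion of the two rules, and the grid search below are invented stand-ins for the paper's nonlinear programming model.

```python
import statistics

# Hypothetical slack of each job under two dispatching rules A and B.
SLACK_A = [3.0, 5.0, 4.0, 6.0, 2.0]
SLACK_B = [4.5, 1.0, 4.2, 0.8, 4.4]

def fused(alpha):
    # Multiplicative fusion of the two rules: slack = a^alpha * b^(1 - alpha)
    return [a ** alpha * b ** (1 - alpha) for a, b in zip(SLACK_A, SLACK_B)]

# Choose the fusion parameter that maximizes the standard deviation of the
# slacks, the objective the paper associates with better scheduling.
best_alpha = max((i / 100 for i in range(101)),
                 key=lambda al: statistics.pstdev(fused(al)))
print(best_alpha)
```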

  11. Connectivity effects in the dynamic model of neural networks

    International Nuclear Information System (INIS)

    Choi, J; Choi, M Y; Yoon, B-G

    2009-01-01

    We study, via extensive Monte Carlo calculations, the effects of connectivity in the dynamic model of neural networks, to observe that the Mattis-state order parameter increases with the number of coupled neurons. Such effects appear more pronounced when the average number of connections is increased by introducing shortcuts in the network. In particular, the power spectra of the order parameter at stationarity are found to exhibit power-law behavior, depending on how the average number of connections is increased. The cluster size distribution of the 'memory-unmatched' sites also follows a power law and possesses strong correlations with the power spectra. It is further observed that the distribution of waiting times for neuron firing fits roughly to a power law, again depending on how neuronal connections are increased

  12. Dynamic neural networks based on-line identification and control of high performance motor drives

    Science.gov (United States)

    Rubaai, Ahmed; Kotaru, Raj

    1995-01-01

    In the automated and high-tech industries of the future, there will be a need for high performance motor drives both in the low-power range and in the high-power range. To meet the very stringent demands of tracking and regulation in the two quadrants of operation, advanced control technologies are of considerable interest and need to be developed. In response, a dynamic learning control architecture is developed with simultaneous on-line identification and control. The feature of the proposed approach, namely the efficient combination of the dual tasks of system identification (learning) and adaptive control of nonlinear motor drives into a single operation, is presented. This approach, therefore, not only adapts to uncertainties of the dynamic parameters of the motor drives but also learns about their inherent nonlinearities. In fact, most of the neural network based adaptive control approaches in use have an identification phase entirely separate from the control phase. Because these approaches separate the identification and control modes, it is not possible to cope with dynamic changes in a controlled process. Extensive simulation studies have been conducted and good performance was observed. The ability of the neuro-controllers to perform efficiently in a noisy environment is also demonstrated. With this initial success, the principal investigator believes that the proposed approach with the suggested neural structure can be used successfully for the control of high performance motor drives. Two identification and control topologies based on the model reference adaptive control technique are used in the present analysis. No prior knowledge of load dynamics is assumed in either topology, while the second topology also assumes no knowledge of the motor parameters.

  13. Quantitative Live Imaging of Human Embryonic Stem Cell Derived Neural Rosettes Reveals Structure-Function Dynamics Coupled to Cortical Development.

    Science.gov (United States)

    Ziv, Omer; Zaritsky, Assaf; Yaffe, Yakey; Mutukula, Naresh; Edri, Reuven; Elkabetz, Yechiel

    2015-10-01

    Neural stem cells (NSCs) are progenitor cells for brain development, where cellular spatial composition (cytoarchitecture) and dynamics are hypothesized to be linked to critical NSC capabilities. However, understanding cytoarchitectural dynamics of this process has been limited by the difficulty of quantitatively imaging brain development in vivo. Here, we study NSC dynamics within Neural Rosettes--highly organized multicellular structures derived from human pluripotent stem cells. Neural rosettes contain NSCs with strong epithelial polarity and are expected to perform apical-basal interkinetic nuclear migration (INM)--a hallmark of cortical radial glial cell development. We developed a quantitative live imaging framework to characterize INM dynamics within rosettes. We first show that the tendency of cells to follow the INM orientation--a phenomenon we refer to as radial organization--is associated with rosette size, presumably via mechanical constraints of the confining structure. Second, early forming rosettes, which are abundant with founder NSCs and correspond to the early proliferative developing cortex, show fast motions and enhanced radial organization. In contrast, later derived rosettes, which are characterized by reduced NSC capacity and elevated numbers of differentiated neurons, and thus correspond to neurogenesis mode in the developing cortex, exhibit slower motions and decreased radial organization. Third, later derived rosettes are characterized by temporal instability in INM measures, in agreement with progressive loss in rosette integrity at later developmental stages. Finally, molecular perturbations of INM by inhibition of actin or non-muscle myosin-II (NMII) reduced INM measures. Our framework enables quantification of cytoarchitecture NSC dynamics and may have implications in functional molecular studies, drug screening, and iPS cell-based platforms for disease modeling.

  14. Quantitative Live Imaging of Human Embryonic Stem Cell Derived Neural Rosettes Reveals Structure-Function Dynamics Coupled to Cortical Development.

    Directory of Open Access Journals (Sweden)

    Omer Ziv

    2015-10-01

    Full Text Available Neural stem cells (NSCs) are progenitor cells for brain development, where cellular spatial composition (cytoarchitecture) and dynamics are hypothesized to be linked to critical NSC capabilities. However, understanding cytoarchitectural dynamics of this process has been limited by the difficulty of quantitatively imaging brain development in vivo. Here, we study NSC dynamics within Neural Rosettes--highly organized multicellular structures derived from human pluripotent stem cells. Neural rosettes contain NSCs with strong epithelial polarity and are expected to perform apical-basal interkinetic nuclear migration (INM)--a hallmark of cortical radial glial cell development. We developed a quantitative live imaging framework to characterize INM dynamics within rosettes. We first show that the tendency of cells to follow the INM orientation--a phenomenon we refer to as radial organization--is associated with rosette size, presumably via mechanical constraints of the confining structure. Second, early forming rosettes, which are abundant with founder NSCs and correspond to the early proliferative developing cortex, show fast motions and enhanced radial organization. In contrast, later derived rosettes, which are characterized by reduced NSC capacity and elevated numbers of differentiated neurons, and thus correspond to neurogenesis mode in the developing cortex, exhibit slower motions and decreased radial organization. Third, later derived rosettes are characterized by temporal instability in INM measures, in agreement with progressive loss in rosette integrity at later developmental stages. Finally, molecular perturbations of INM by inhibition of actin or non-muscle myosin-II (NMII) reduced INM measures. Our framework enables quantification of cytoarchitecture NSC dynamics and may have implications in functional molecular studies, drug screening, and iPS cell-based platforms for disease modeling.

  15. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  16. A neural model for transient identification in dynamic processes with 'don't know' response

    International Nuclear Information System (INIS)

    Mol, Antonio C. de A.; Martinez, Aquilino S.; Schirru, Roberto

    2003-01-01

    This work presents an approach for neural network based transient identification which allows either dynamic identification or a 'don't know' response. The approach uses two 'jump' multilayer neural networks (NN) trained with the backpropagation algorithm. The 'jump' network is used because it is useful for dealing with very complex patterns, which is the case for the space of the state variables during some abnormal events. The first network is responsible for the dynamic identification. This NN uses as input a short set (in a moving time window) of recent measurements of each variable, avoiding the necessity of using starting events. The second network is used to validate the instantaneous identification (from the first net) through the validation of each variable, and is responsible for allowing the system to provide a 'don't know' response. In order to validate the method, a Nuclear Power Plant (NPP) transient identification problem comprising 15 postulated accidents, simulated for a pressurized water reactor (PWR), was proposed. In the validation process, noisy data were considered in order to evaluate the method's robustness. The obtained results reveal the ability of the method to deal with both dynamic identification of transients and a correct 'don't know' response. Another important point studied in this work is that the system has been shown to be independent of a trigger signal indicating the beginning of the transient, thus making it robust in relation to this limitation
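
    The two-stage idea, classify and then validate or refuse, can be sketched without the neural machinery: here a nearest-prototype classifier plus a distance threshold stands in for the two backpropagation networks, and the transient prototypes are invented for illustration.

```python
# Invented prototype signatures of a few transients (normalized signals).
PROTOTYPES = {
    "LOCA":   [0.9, 0.1, 0.4],
    "SGTR":   [0.2, 0.8, 0.5],
    "normal": [0.5, 0.5, 0.5],
}
THRESHOLD = 0.3     # beyond this distance the system refuses to answer

def identify(window):
    """Return the best-matching transient label, or 'don't know'."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    label, d = min(((k, dist(window, p)) for k, p in PROTOTYPES.items()),
                   key=lambda kv: kv[1])
    # The validation stage: answer only if the match is close enough.
    return label if d <= THRESHOLD else "don't know"

print(identify([0.85, 0.15, 0.42]))   # -> LOCA (close to a known prototype)
print(identify([0.0, 0.0, 1.0]))      # -> don't know (unlike anything seen)
```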

  17. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Groer, Christopher S [ORNL; Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
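
    The simplest instance of such a dynamic program is maximum weighted independent set on a tree (treewidth 1), where each vertex contributes two table entries, its best value with the vertex included or excluded; the graph below is illustrative.

```python
from collections import defaultdict

def mwis_tree(n, edges, weight, root=0):
    """Maximum total weight of an independent set in a tree on n vertices."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # incl[v]: best value of v's subtree with v in the set;
    # excl[v]: best value with v excluded.
    incl = [0] * n
    excl = [0] * n

    # Iterative traversal (avoids recursion limits on deep trees).
    order, parent, seen = [], [-1] * n, [False] * n
    stack = [root]
    while stack:
        v = stack.pop()
        seen[v] = True
        order.append(v)
        for u in adj[v]:
            if not seen[u]:
                parent[u] = v
                stack.append(u)
    for v in reversed(order):            # combine children bottom-up
        incl[v] = weight[v]
        for u in adj[v]:
            if u != parent[v]:
                incl[v] += excl[u]                   # children must be excluded
                excl[v] += max(incl[u], excl[u])
    return max(incl[root], excl[root])

# Path 0-1-2-3 with weights 1, 4, 5, 4: the optimum picks {1, 3} for 8.
print(mwis_tree(4, [(0, 1), (1, 2), (2, 3)], [1, 4, 5, 4]))   # -> 8
```

The tree-decomposition DP in the paper generalizes this two-entry table to one entry per independent subset of each decomposition bag, which is where the exponential dependence on width comes from.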

  18. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model has been widely used as the bushing model in vehicle suspension systems, it cannot express the nonlinear characteristics of the bushing in terms of the amplitude and the frequency. An artificial neural network model has been suggested to account for the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model is employed to represent the linear stiffness and damping effects, and the artificial neural network algorithm is adopted to take into account the hysteretic responses. A rubber test was performed to capture the bushing characteristics, where sine excitations with different frequencies and amplitudes were applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to the linear model under several maneuvers
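
    The hybrid structure, a linear stiffness/damping term plus a neural correction for the hysteretic residual, can be sketched as below. The synthetic "measured" force, the assumed linear coefficients, and the fixed random hidden layer (only the output weights are trained, a shortcut for brevity rather than the paper's training procedure) are all invented.

```python
import math
import random

random.seed(1)

# Synthetic "measured" bushing data: a linear part plus a soft nonlinearity
# standing in for hysteresis (all coefficients invented for illustration).
def true_force(x, v):
    return 100.0 * x + 5.0 * v + 20.0 * math.tanh(8.0 * x)

data = [(0.5 * math.sin(0.1 * i), 0.5 * math.cos(0.1 * i)) for i in range(200)]
targets = [true_force(x, v) for x, v in data]

K_LIN, C_LIN = 100.0, 5.0            # linear part, assumed already identified
def linear_force(x, v):
    return K_LIN * x + C_LIN * v

# Tiny network with a fixed random hidden layer; only w2 is trained.
H = 20
W1 = [(random.uniform(-4, 4), random.uniform(-4, 4)) for _ in range(H)]
B1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [0.0] * H

def features(x, v):
    return [math.tanh(wx * x + wv * v + b) for (wx, wv), b in zip(W1, B1)]

def hybrid(x, v):
    return linear_force(x, v) + sum(w * f for w, f in zip(w2, features(x, v)))

def mse():
    return sum((hybrid(x, v) - t) ** 2
               for (x, v), t in zip(data, targets)) / len(data)

err_linear = mse()                   # the network contributes nothing yet
for epoch in range(200):             # plain gradient descent on w2
    for (x, v), t in zip(data, targets):
        e = hybrid(x, v) - t
        f = features(x, v)
        for j in range(H):
            w2[j] -= 0.01 * e * f[j]
err_hybrid = mse()
print(err_linear, err_hybrid)        # the neural correction shrinks the error
```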

  19. The quest for a Quantum Neural Network

    OpenAIRE

    Schuld, M.; Sinayskiy, I.; Petruccione, F.

    2014-01-01

    With the overwhelming success in the field of quantum information in the last decades, the "quest" for a Quantum Neural Network (QNN) model began in order to combine quantum computing with the striking properties of neural computing. This article presents a systematic approach to QNN research, which so far consists of a conglomeration of ideas and proposals. It outlines the challenge of combining the nonlinear, dissipative dynamics of neural computing and the linear, unitary dynamics of quant...

  20. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    International Nuclear Information System (INIS)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-01-01

    In this paper, the generation of multi-clustered structure of self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into clustered structure through the symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure of the burst-based self-organized neural network (BSON) is much shorter than that of the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., higher clustering coefficient and smaller shortest path length than the SSON network. Also, the results of larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that the burst firing can significantly enhance the efficiency of the clustering procedure and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed from its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from the bursting dynamics has high efficiency in information processing

  1. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin, E-mail: xmli@cqu.edu.cn [Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044 (China); College of Automation, Chongqing University, Chongqing 400044 (China)

    2015-11-15

    In this paper, the generation of multi-clustered structure of self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into clustered structure through the symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure of the burst-based self-organized neural network (BSON) is much shorter than that of the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., higher clustering coefficient and smaller shortest path length than the SSON network. Also, the results of larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that the burst firing can significantly enhance the efficiency of the clustering procedure and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed from its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from the bursting dynamics has high efficiency in information processing.
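
    The symmetric spike-timing-dependent plasticity rule used above can be sketched as a weight update that depends only on the absolute pre/post spike-time difference, strengthening synapses between near-coincident neurons and weakening more distant pairings; the amplitudes and time constants below are invented.

```python
import math

# Invented constants for a symmetric STDP window (times in ms).
A_PLUS, A_MINUS = 0.1, 0.05
TAU, CUTOFF = 10.0, 30.0

def symmetric_stdp(dt_ms):
    """Weight update for a pre/post spike-time difference dt (sign ignored)."""
    dt = abs(dt_ms)
    if dt <= CUTOFF:
        # Potentiation near coincidence, depression for larger |dt|.
        return A_PLUS * math.exp(-dt / TAU) - A_MINUS
    return 0.0

print(symmetric_stdp(1.0) > 0)                       # near-coincident: potentiation
print(symmetric_stdp(25.0) < 0)                      # widely separated: depression
print(symmetric_stdp(-1.0) == symmetric_stdp(1.0))   # symmetric in dt
```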

  2. Dynamical principles in neuroscience

    International Nuclear Information System (INIS)

    Rabinovich, Mikhail I.; Varona, Pablo; Selverston, Allen I.; Abarbanel, Henry D. I.

    2006-01-01

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

  3. Dynamical principles in neuroscience

    Science.gov (United States)

    Rabinovich, Mikhail I.; Varona, Pablo; Selverston, Allen I.; Abarbanel, Henry D. I.

    2006-10-01

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

  4. Dynamic electricity pricing—Which programs do consumers prefer?

    International Nuclear Information System (INIS)

    Dütschke, Elisabeth; Paetz, Alexandra-Gwyn

    2013-01-01

    Dynamic pricing is being discussed as one method of demand side management (DSM) which could be crucial for integrating more renewable energy sources into the electricity system. At the same time, there have been very few analyses of consumer preferences in this regard: Which type of pricing program are consumers most likely to choose and why? This paper sheds some light on these issues based on two empirical studies from Germany: (1) A questionnaire study including a conjoint analysis-design and (2) A field experiment with test-residents of a smart home laboratory. The results show that consumers are open to dynamic pricing, but prefer simple programs to complex and highly dynamic ones; smart home technologies including demand automation are seen as a prerequisite for DSM. The study provides some indications that consumers might be more willing to accept more dynamic pricing programs if they have the chance to experience in practice how these can be managed in everyday life. At the same time, the individual and societal advantages of such programs are not obvious to consumers. For this reason, any market roll-out will need to be accompanied by convincing communication and information campaigns to ensure that these advantages are perceived. - Highlights: • Little is known about consumer preferences on dynamic pricing. • Two studies are conducted to analyze this topic. • A survey shows that consumers without experience prefer conventional programs. • Test residents of a smart home were more open to dynamic pricing. • They also prefer well-structured programs

  5. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses

    Directory of Open Access Journals (Sweden)

    Mattia Rigotti

    2010-10-01

    Full Text Available Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.
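
    The core claim, that random nonlinear mixing of stimulus and context makes context-dependent responses linearly decodable, can be sketched on the smallest context-dependent task (respond iff stimulus XOR context); the layer size and the perceptron readout below are illustrative choices, not the paper's model.

```python
import math
import random

random.seed(3)

# XOR of stimulus and context: not linearly separable in the raw inputs.
patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# "Mixed-selective" layer: units with random weights onto stimulus and context.
H = 30
W_RAND = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
          for _ in range(H)]

def expand(stim, ctx):
    return [math.tanh(ws * stim + wc * ctx + b) for ws, wc, b in W_RAND]

# A plain perceptron readout on the expanded representation.
w = [0.0] * H
bias = 0.0
for epoch in range(1000):
    errors = 0
    for (stim, ctx), label in patterns:
        f = expand(stim, ctx)
        out = 1 if sum(wi * fi for wi, fi in zip(w, f)) + bias > 0 else 0
        if out != label:
            errors += 1
            sign = 1 if label == 1 else -1
            w = [wi + 0.1 * sign * fi for wi, fi in zip(w, f)]
            bias += 0.1 * sign
    if errors == 0:
        break

print(errors)   # 0: the linear readout solves the context-dependent task
```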

  6. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866
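
    The basic object of this analysis, the attractor reached by a Boolean recurrent network, can be computed by iterating the dynamics until a state repeats; the weights and thresholds below are invented, and each unit fires iff its weighted input reaches its threshold.

```python
# Invented 3-unit Boolean recurrent network: W[i][j] is the weight from
# unit j to unit i, THETA[i] the firing threshold of unit i.
W = [[0, 1, -1],
     [1, 0, 1],
     [-1, 1, 0]]
THETA = [1, 1, 1]

def step(state):
    return tuple(int(sum(W[i][j] * state[j] for j in range(3)) >= THETA[i])
                 for i in range(3))

def attractor(state):
    """Iterate the dynamics and return the reached cycle as a tuple of states."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state)
    return tuple(trajectory[seen[state]:])   # the periodic part

cycle = attractor((1, 0, 0))
print(len(cycle))   # period of the attractor reached from (1, 0, 0)
```

The paper's complexity measure classifies networks by which sets of states their attractors can realize, which is where the translation to ω-automata acceptance conditions enters.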

  7. Nonlinear Dynamic Surface Control of Chaos in Permanent Magnet Synchronous Motor Based on the Minimum Weights of RBF Neural Network

    Directory of Open Access Journals (Sweden)

    Shaohua Luo

    2014-01-01

    Full Text Available This paper is concerned with the problem of the nonlinear dynamic surface control (DSC) of chaos based on the minimum weights of RBF neural network for the permanent magnet synchronous motor system (PMSM) wherein the unknown parameters, disturbances, and chaos are presented. RBF neural network is used to approximate the nonlinearities and an adaptive law is employed to estimate unknown parameters. Then, a simple and effective controller is designed by introducing dynamic surface control technique on the basis of first-order filters. Asymptotic tracking stability in the sense of uniform ultimate boundedness is achieved in a short time. Finally, the performance of the proposed controller is verified through simulation results.
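The RBF-approximation ingredient described above can be sketched in a few lines. The following Python/NumPy fragment is an illustrative stand-in, not the paper's controller: the target nonlinearity, the Gaussian centers, and the width are all assumptions. With fixed centers, the output weights can be fitted by least squares, which is the usual shortcut:

```python
import numpy as np

# Hypothetical nonlinearity standing in for the unknown dynamics term
# (an assumption for illustration, not the PMSM model from the paper).
def f(x):
    return np.sin(3 * x) + 0.5 * x**2

# Gaussian RBF features with fixed, evenly spaced centers and a shared width
# (both are illustrative design choices).
centers = np.linspace(-2.0, 2.0, 15)
width = 0.4

def phi(x):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# Solve for the output weights by least squares on sampled data.
x_train = np.linspace(-2.0, 2.0, 200)
W, *_ = np.linalg.lstsq(phi(x_train), f(x_train), rcond=None)

# Approximation error on a held-out grid inside the training interval.
x_test = np.linspace(-1.5, 1.5, 50)
err = float(np.max(np.abs(phi(x_test) @ W - f(x_test))))
print(err)  # small approximation error
```

In an adaptive-control setting the weights would instead be updated online by an adaptive law, but the batch least-squares fit above shows why an RBF network with enough centers can stand in for the unknown nonlinearity.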

  8. Nuclear power plant monitoring method by neural network and its application to actual nuclear reactor

    International Nuclear Information System (INIS)

    Nabeshima, Kunihiko; Suzuki, Katsuo; Shinohara, Yoshikuni; Tuerkcan, E.

    1995-11-01

    In this paper, the anomaly detection method for nuclear power plant monitoring and its program are described by using a neural network approach, which is based on the deviation between measured signals and the output signals of a neural network model. The neural network used in this study is a three-layer auto-associative network with 12 inputs/outputs, and the backpropagation algorithm is adopted for learning. Furthermore, to obtain a better dynamical model of the reactor plant, a new learning technique was developed in which the learning process of the present neural network is divided into initial and adaptive learning modes. The test results at the actual nuclear reactor show that the neural network plant monitoring system is successful in detecting in real time the symptoms of small anomalies over a wide power range including reactor start-up, shut-down and stationary operation. (author)
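The deviation-based monitoring idea above can be sketched compactly. In this Python/NumPy illustration, a linear PCA reconstruction stands in for the paper's three-layer auto-associative network, and the 12 simulated "plant signals" are assumptions, not real reactor data; the point is only that an anomaly shows up as a large reconstruction deviation:

```python
import numpy as np

# Deviation-based monitoring sketch: a linear PCA reconstruction stands in for
# the three-layer auto-associative network; all data here are simulated.
rng = np.random.default_rng(0)
mixing = rng.normal(size=(3, 12))              # 3 latent factors drive 12 signals
train = rng.normal(size=(500, 3)) @ mixing     # normal-operation records
train += 0.01 * rng.normal(size=train.shape)   # small sensor noise

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
P = Vt[:3].T @ Vt[:3]                          # projector onto the learned subspace

def deviation(x):
    """Distance between a measured signal vector and its model reconstruction."""
    r = x - mean
    return float(np.linalg.norm(r - r @ P))

normal_sample = rng.normal(size=3) @ mixing    # consistent with normal operation
anomalous = normal_sample.copy()
anomalous[4] += 1.0                            # one drifting sensor channel

print(deviation(normal_sample), deviation(anomalous))
```

A running monitor would compare the deviation against a threshold calibrated on normal-operation data; the anomalous sample produces a clearly larger deviation than the normal one.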

  9. Development and Flight Testing of a Neural Network Based Flight Control System on the NF-15B Aircraft

    Science.gov (United States)

    Bomben, Craig R.; Smolka, James W.; Bosworth, John T.; Williams-Hayes, Peggy S.; Burken, John J.; Larson, Richard R.; Buschbacher, Mark J.; Maliska, Heather A.

    2006-01-01

    The Intelligent Flight Control System (IFCS) project at the NASA Dryden Flight Research Center, Edwards AFB, CA, has been investigating the use of neural network based adaptive control on a unique NF-15B test aircraft. The IFCS neural network is a software processor that stores measured aircraft response information to dynamically alter flight control gains. In 2006, the neural network was engaged and allowed to learn in real time to dynamically alter the aircraft handling qualities characteristics in the presence of actual aerodynamic failure conditions injected into the aircraft through the flight control system. The use of neural network and similar adaptive technologies in the design of highly fault and damage tolerant flight control systems shows promise in making future aircraft far more survivable than current technology allows. This paper will present the results of the IFCS flight test program conducted at the NASA Dryden Flight Research Center in 2006, with emphasis on challenges encountered and lessons learned.

  10. Dynamic Changes in Amygdala Psychophysiological Connectivity Reveal Distinct Neural Networks for Facial Expressions of Basic Emotions.

    Science.gov (United States)

    Diano, Matteo; Tamietto, Marco; Celeghin, Alessia; Weiskrantz, Lawrence; Tatu, Mona-Karina; Bagnis, Arianna; Duca, Sergio; Geminiani, Giuliano; Cauda, Franco; Costa, Tommaso

    2017-03-27

    The quest to characterize the neural signature distinctive of different basic emotions has recently come under renewed scrutiny. Here we investigated whether facial expressions of different basic emotions modulate the functional connectivity of the amygdala with the rest of the brain. To this end, we presented seventeen healthy participants (8 females) with facial expressions of anger, disgust, fear, happiness, sadness and emotional neutrality and analyzed amygdala's psychophysiological interaction (PPI). In fact, PPI can reveal how inter-regional amygdala communications change dynamically depending on perception of various emotional expressions to recruit different brain networks, compared to the functional interactions it entertains during perception of neutral expressions. We found that for each emotion the amygdala recruited a distinctive and spatially distributed set of structures to interact with. These changes in amygdala connectional patterns characterize the dynamic signature prototypical of individual emotion processing, and seemingly represent a neural mechanism that serves to implement the distinctive influence that each emotion exerts on perceptual, cognitive, and motor responses. Besides these differences, all emotions enhanced amygdala functional integration with premotor cortices compared to neutral faces. The present findings thus concur to reconceptualise the structure-function relation between brain and emotion from the traditional one-to-one mapping toward a network-based and dynamic perspective.

  11. Fluid dynamics computer programs for NERVA turbopump

    Science.gov (United States)

    Brunner, J. J.

    1972-01-01

    During the design of the NERVA turbopump, numerous computer programs were developed for the analyses of fluid dynamic problems within the machine. Program descriptions, example cases, users instructions, and listings for the majority of these programs are presented.

  12. Complex dynamics of a delayed discrete neural network of two nonidentical neurons.

    Science.gov (United States)

    Chen, Yuanlong; Huang, Tingwen; Huang, Yu

    2014-03-01

    In this paper, we discover that a delayed discrete Hopfield neural network of two nonidentical neurons with self-connections and no self-connections can demonstrate chaotic behaviors. To this end, we first transform the model, in a novel way, into an equivalent system which has some interesting properties. Then, we identify the chaotic invariant set for this system and show that the dynamics of this system within this set is topologically conjugate to the dynamics of the full shift map with two symbols. This confirms chaos in the sense of Devaney. Our main results generalize the relevant results of Huang and Zou [J. Nonlinear Sci. 15, 291-303 (2005)], Kaslik and Balint [J. Nonlinear Sci. 18, 415-432 (2008)] and Chen et al. [Sci. China Math. 56(9), 1869-1878 (2013)]. We also give some numerical simulations to verify our theoretical results.
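Topological conjugacy to the full shift on two symbols means the system's itineraries behave like binary digit sequences under a left shift, which is exactly why sensitivity to initial conditions follows. A standard textbook illustration of this mechanism (using the doubling map as a stand-in, not the paper's delayed Hopfield model) makes it concrete with exact rational arithmetic:

```python
from fractions import Fraction

# The doubling map x -> 2x mod 1 shifts the binary expansion of x left by one
# digit each step, i.e. it is conjugate to the full shift on two symbols.
def double(x):
    return (2 * x) % 1

x = Fraction(1, 3)            # binary 0.010101... -> a period-2 orbit
y = x + Fraction(1, 2**30)    # perturb only the 30th binary digit

for _ in range(29):
    x, y = double(x), double(y)

print(abs(x - y))             # -> 1/2: the tiny perturbation has grown to 1/2
```

Exact `Fraction` arithmetic is used because floating-point doubling discards low-order bits; the perturbation of size 2^-30 reaches macroscopic size after 29 shifts, the hallmark of Devaney chaos.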

  13. Iterative Adaptive Dynamic Programming for Solving Unknown Nonlinear Zero-Sum Game Based on Online Data.

    Science.gov (United States)

    Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun

    2017-03-01

    H∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics is mostly unknown. Identification of dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and the value function are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.
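One ingredient of such algorithms, iterative policy evaluation with critic weights solved by least squares, can be sketched on a toy problem. The scalar system, fixed policy, and quadratic critic below are illustrative assumptions, not the paper's zero-sum formulation (no disturber is modeled):

```python
import numpy as np

# Toy policy evaluation: scalar system x' = a*x + b*u under fixed policy
# u = -k*x, quadratic stage cost q*x**2 + r*u**2, critic V(x) = p*x**2.
# The critic weight is re-solved by least squares at each iteration.
a, b, k, q, r, gamma = 0.9, 0.5, 0.6, 1.0, 0.1, 0.95

xs = np.linspace(-2.0, 2.0, 101)
phi = (xs**2)[:, None]                  # single quadratic critic feature
p = 0.0                                 # initial critic weight

for _ in range(100):
    # Bellman targets: one-step cost plus discounted current value estimate.
    targets = (q + r * k**2) * xs**2 + gamma * p * ((a - b * k) * xs) ** 2
    p = np.linalg.lstsq(phi, targets, rcond=None)[0][0]

# Analytic fixed point of the Bellman equation for this policy.
p_exact = (q + r * k**2) / (1.0 - gamma * (a - b * k) ** 2)
print(p, p_exact)                       # the iteration converges to p_exact
```

Because the closed-loop factor gamma*(a - b*k)**2 is below one, the iteration is a contraction and the least-squares critic converges to the analytic value, mirroring the uniform-convergence claim in the abstract for this much simpler setting.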

  14. Neural substrates and behavioral profiles of romantic jealousy and its temporal dynamics.

    Science.gov (United States)

    Sun, Yan; Yu, Hongbo; Chen, Jie; Liang, Jie; Lu, Lin; Zhou, Xiaolin; Shi, Jie

    2016-06-07

    Jealousy is not only a way of experiencing love but also a stabilizer of romantic relationships, although morbid romantic jealousy is maladaptive. Being engaged in a formal romantic relationship can tune one's romantic jealousy towards a specific target. To date, little is known about how the human brain processes romantic jealousy. Here, by combining scenario-based imagination and functional MRI, we investigated the behavioral and neural correlates of romantic jealousy and their development across stages (before vs. after being in a formal relationship). Romantic jealousy scenarios elicited activations primarily in the basal ganglia (BG) across stages, and were significantly higher after the relationship was established in both the behavioral rating and BG activation. The intensity of romantic jealousy was related to the intensity of romantic happiness, which mainly correlated with ventral medial prefrontal cortex activation. The increase in jealousy across stages was associated with the tendency for interpersonal aggression. These results bridge the gap between the theoretical conceptualization of romantic jealousy and its neural correlates and shed light on the dynamic changes in jealousy.

  15. Neural dynamics underlying attentional orienting to auditory representations in short-term memory.

    Science.gov (United States)

    Backer, Kristina C; Binns, Malcolm A; Alain, Claude

    2015-01-21

    Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors.

  16. Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.

    Science.gov (United States)

    Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng

    2016-02-01

    This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.

  17. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks.

    Science.gov (United States)

    Yan, Zheng; Wang, Jun

    2014-03-01

    This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed to an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue combined with unmodeled dynamics is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves the computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.

  18. Tracking control of air-breathing hypersonic vehicles with non-affine dynamics via improved neural back-stepping design.

    Science.gov (United States)

    Bu, Xiangwei; He, Guangjun; Wang, Ke

    2018-04-01

    This study considers the design of a new back-stepping control approach for air-breathing hypersonic vehicle (AHV) non-affine models via neural approximation. The AHV's non-affine dynamics is decomposed into velocity subsystem and altitude subsystem to be controlled separately, and robust adaptive tracking control laws are developed using improved back-stepping designs. Neural networks are applied to estimate the unknown non-affine dynamics, which guarantees the addressed controllers with satisfactory robustness against uncertainties. In comparison with the existing control methodologies, the special contributions are that the non-affine issue is handled by constructing two low-pass filters based on model transformations, and virtual controllers are treated as intermediate variables such that they are no longer needed for the back-stepping designs. Lyapunov techniques are employed to show the uniform ultimate boundedness of all closed-loop signals. Finally, simulation results are presented to verify the tracking performance and advantages of the investigated control strategy. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Practical neural network recipes in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed.

  20. Potential Mechanisms and Functions of Intermittent Neural Synchronization

    Directory of Open Access Journals (Sweden)

    Sungwoo Ahn

    2017-05-01

    Full Text Available Neural synchronization is believed to play an important role in different brain functions. Synchrony in cortical and subcortical circuits is frequently variable in time and not perfect. A few long intervals of desynchronized dynamics may be functionally different from many short desynchronized intervals although the average synchrony may be the same. Recent analysis of imperfect synchrony in different neural systems reported one common feature: neural oscillations may go out of synchrony frequently, but primarily for a short time interval. This study explores potential mechanisms and functional advantages of this short-desynchronization dynamics using computational neuroscience techniques. We show that short desynchronizations are exhibited in coupled neurons if their delayed rectifier potassium current has relatively large values of the voltage-dependent activation time-constant. The delayed activation of potassium current is associated with generation of a quickly rising action potential. This “spikiness” is a very general property of neurons. This may explain why very different neural systems exhibit short desynchronization dynamics. We also show how the distribution of desynchronization durations may be independent of the synchronization strength. Finally, we show that short desynchronization dynamics requires weaker synaptic input to reach a pre-set synchrony level. Thus, this dynamics allows for efficient regulation of synchrony and may promote efficient formation of synchronous neural assemblies.

  1. Dynamic programming models and applications

    CERN Document Server

    Denardo, Eric V

    2003-01-01

    Introduction to sequential decision processes covers use of dynamic programming in studying models of resource allocation, methods for approximating solutions of control problems in continuous time, production control, more. 1982 edition.

  2. Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks

    Science.gov (United States)

    Jorgensen, Charles C.

    1997-01-01

    A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNS) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.

  3. Calsyntenins Are Expressed in a Dynamic and Partially Overlapping Manner during Neural Development

    Directory of Open Access Journals (Sweden)

    Gemma de Ramon Francàs

    2017-08-01

    Full Text Available Calsyntenins form a family of linker proteins between distinct populations of vesicles and kinesin motors for axonal transport. They were implicated in synapse formation and synaptic plasticity by findings in worms, mice and humans. These findings were in accordance with the postsynaptic localization of the Calsyntenins in the adult brain. However, they also affect the formation of neural circuits, as loss of Calsyntenin-1 (Clstn1 was shown to interfere with axonal branching and axon guidance. Despite the fact that Calsyntenins were discovered originally in embryonic chicken motoneurons, their distribution in the developing nervous system has not been analyzed in detail so far. Here, we summarize our analysis of the temporal and spatial expression patterns of the cargo-docking proteins Clstn1, Clstn2 and Clstn3 during neural development by comparing the dynamic distribution of their mRNAs by in situ hybridization in the spinal cord, the cerebellum, the retina and the tectum, as well as in the dorsal root ganglia (DRG.

  4. Recovery of Dynamics and Function in Spiking Neural Networks with Closed-Loop Control.

    Science.gov (United States)

    Vlachos, Ioannis; Deniz, Taşkin; Aertsen, Ad; Kumar, Arvind

    2016-02-01

    There is a growing interest in developing novel brain stimulation methods to control disease-related aberrant neural activity and to address basic neuroscience questions. Conventional methods for manipulating brain activity rely on open-loop approaches that usually lead to excessive stimulation and, crucially, do not restore the original computations performed by the network. Thus, they are often accompanied by undesired side-effects. Here, we introduce delayed feedback control (DFC), a conceptually simple but effective method, to control pathological oscillations in spiking neural networks (SNNs). Using mathematical analysis and numerical simulations we show that DFC can restore a wide range of aberrant network dynamics either by suppressing or enhancing synchronous irregular activity. Importantly, DFC, besides steering the system back to a healthy state, also recovers the computations performed by the underlying network. Finally, using our theory we identify the role of single neuron and synapse properties in determining the stability of the closed-loop system.

  5. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.
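The statement that bidirectional connections lose their balance under temporally asymmetric Hebbian learning can be sketched directly. The following Python fragment is an illustrative STDP-like toy (all parameters and the fixed 5 ms timing lead are assumptions, not the paper's model), showing a balanced reciprocal pair becoming unidirectional:

```python
import numpy as np

# Temporally asymmetric Hebbian (STDP-like) toy: neuron A consistently fires
# 5 ms before neuron B, so A->B is potentiated and B->A is depressed.
A_plus, A_minus, tau = 0.05, 0.05, 20.0    # learning rates and time constant (ms)
w_ab = w_ba = 0.5                          # reciprocal weights, initially balanced

for trial in range(100):
    dt = 5.0                               # spike of A precedes spike of B by 5 ms
    w_ab += A_plus * np.exp(-dt / tau)     # pre (A) before post (B): potentiation
    w_ba -= A_minus * np.exp(-dt / tau)    # pre (B) after post (A): depression
    w_ab, w_ba = np.clip([w_ab, w_ba], 0.0, 1.0)

print(w_ab, w_ba)                          # the balanced pair becomes unidirectional
```

The difference w_ab - w_ba is the kind of variable the abstract treats as an Ising-like spin: its sign records which direction of the reciprocal pair survived the competition.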

  6. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Directory of Open Access Journals (Sweden)

    Tomoki Kurikawa

    Full Text Available Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using a simple Hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in

  7. A recurrent neural network for solving bilevel linear programming problem.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.

  8. Dynamic Programming: An Introduction by Example

    Science.gov (United States)

    Zietz, Joachim

    2007-01-01

    The author introduces some basic dynamic programming techniques, using examples, with the help of the computer algebra system "Maple". The emphasis is on building confidence and intuition for the solution of dynamic problems in economics. To integrate the material better, the same examples are used to introduce different techniques. One covers the…
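The kind of example-driven introduction described above translates directly from Maple to any language with memoization. A minimal dynamic-programming illustration in Python (the price table is an assumption chosen for the example, not from the article) is the classic rod-cutting problem:

```python
from functools import lru_cache

# Rod cutting: maximum revenue from cutting a rod of length n, given a price
# for each cut length. Memoization turns the recursive definition into DP.
prices = {1: 1, 2: 5, 3: 8, 4: 9}   # illustrative price table

@lru_cache(maxsize=None)
def best_revenue(n):
    if n == 0:
        return 0
    return max(prices[c] + best_revenue(n - c)
               for c in prices if c <= n)

print(best_revenue(4))  # -> 10 (two cuts of length 2: 5 + 5)
```

The recursion states the Bellman principle (an optimal plan for length n is a first cut plus an optimal plan for the remainder), and the cache ensures each subproblem is solved once.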

  9. Nonlinear dynamics based digital logic and circuits.

    Science.gov (United States)

    Kia, Behnam; Lindner, John F; Ditto, William L

    2015-01-01

    We discuss the role and importance of dynamics in the brain and biological neural networks and argue that dynamics is one of the main missing elements in conventional Boolean logic and circuits. We summarize a simple dynamics based computing method, and categorize different techniques that we have introduced to realize logic, functionality, and programmability. We discuss the role and importance of coupled dynamics in networks of biological excitable cells, and then review our simple coupled dynamics based method for computing. In this paper, for the first time, we show how dynamics can be used and programmed to implement computation in any given base, including but not limited to base two.
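The flavor of dynamics-based logic can be conveyed with a one-map sketch. The constants below are illustrative assumptions, not the circuit from the paper: two logic inputs are encoded into the initial condition of the logistic map, the map is iterated once, and the result is thresholded. With these numbers the gate is NOR; testing against the opposite side of the threshold reprograms the same dynamics into OR:

```python
# Dynamics-based logic sketch: encode inputs into the initial condition of a
# chaotic map, evolve, and threshold. All constants are illustrative.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def nor_gate(i1, i2, x_star=0.2, delta=0.15, threshold=0.75):
    x1 = logistic(x_star + delta * (i1 + i2))  # inputs shift the initial condition
    return int(x1 < threshold)                 # flip to x1 > threshold for OR

print([nor_gate(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [1, 0, 0, 0]
```

Since NOR is functionally complete, a single tunable dynamical element of this kind can in principle realize any Boolean function by reprogramming its offset and threshold, which is the programmability the abstract emphasizes.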

  10. Complex dynamics of a delayed discrete neural network of two nonidentical neurons

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yuanlong [Mathematics Department, GuangDong University of Finance, Guangzhou 510521 (China); Huang, Tingwen [Mathematics Department, Texas A and M University at Qatar, P. O. Box 23874, Doha (Qatar); Huang, Yu, E-mail: stshyu@mail.sysu.edu.cn [Mathematics Department, Sun Yat-Sen University, Guangzhou 510275, People's Republic of China (China)

    2014-03-15

    In this paper, we discover that a delayed discrete Hopfield neural network of two nonidentical neurons with self-connections and no self-connections can demonstrate chaotic behaviors. To this end, we first transform the model, in a novel way, into an equivalent system which has some interesting properties. Then, we identify the chaotic invariant set for this system and show that the dynamics of this system within this set is topologically conjugate to the dynamics of the full shift map with two symbols. This confirms chaos in the sense of Devaney. Our main results generalize the relevant results of Huang and Zou [J. Nonlinear Sci. 15, 291–303 (2005)], Kaslik and Balint [J. Nonlinear Sci. 18, 415–432 (2008)] and Chen et al. [Sci. China Math. 56(9), 1869–1878 (2013)]. We also give some numerical simulations to verify our theoretical results.

  11. Complex dynamics of a delayed discrete neural network of two nonidentical neurons

    International Nuclear Information System (INIS)

    Chen, Yuanlong; Huang, Tingwen; Huang, Yu

    2014-01-01

    In this paper, we discover that a delayed discrete Hopfield neural network of two nonidentical neurons with self-connections and no self-connections can demonstrate chaotic behaviors. To this end, we first transform the model, in a novel way, into an equivalent system which has some interesting properties. Then, we identify the chaotic invariant set for this system and show that the dynamics of this system within this set is topologically conjugate to the dynamics of the full shift map with two symbols. This confirms chaos in the sense of Devaney. Our main results generalize the relevant results of Huang and Zou [J. Nonlinear Sci. 15, 291–303 (2005)], Kaslik and Balint [J. Nonlinear Sci. 18, 415–432 (2008)] and Chen et al. [Sci. China Math. 56(9), 1869–1878 (2013)]. We also give some numerical simulations to verify our theoretical results

  12. Dynamic neural network models of the premotoneuronal circuitry controlling wrist movements in primates.

    Science.gov (United States)

    Maier, M A; Shupe, L E; Fetz, E E

    2005-10-01

    Dynamic recurrent neural networks were derived to simulate neuronal populations generating bidirectional wrist movements in the monkey. The models incorporate anatomical connections of cortical and rubral neurons, muscle afferents, segmental interneurons and motoneurons; they also incorporate the response profiles of four populations of neurons observed in behaving monkeys. The networks were derived by gradient descent algorithms to generate the eight characteristic patterns of motor unit activations observed during alternating flexion-extension wrist movements. The resulting model generated the appropriate input-output transforms and developed connection strengths resembling those in physiological pathways. We found that this network could be further trained to simulate additional tasks, such as experimentally observed reflex responses to limb perturbations that stretched or shortened the active muscles, and scaling of response amplitudes in proportion to inputs. In the final comprehensive network, motor units are driven by the combined activity of cortical, rubral, spinal and afferent units during step tracking and perturbations. The model displayed many emergent properties corresponding to physiological characteristics. The resulting neural network provides a working model of premotoneuronal circuitry and elucidates the neural mechanisms controlling motoneuron activity. It also predicts several features to be experimentally tested, for example the consequences of eliminating inhibitory connections in cortex and red nucleus. It also reveals that co-contraction can be achieved by simultaneous activation of the flexor and extensor circuits without invoking features specific to co-contraction.

  13. DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation.

    Science.gov (United States)

    Sherfey, Jason S; Soplata, Austin E; Ardid, Salva; Roberts, Erik A; Stanley, David A; Pittman-Polletta, Benjamin R; Kopell, Nancy J

    2018-01-01

    DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.

  14. Neural networks for feedback feedforward nonlinear control systems.

    Science.gov (United States)

    Parisini, T; Zoppoli, R

    1994-01-01

    This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.

  15. Identification of nonlinear dynamics in power plant components using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Fernandez, B.; Tsai, W.K.

    1990-01-01

    Advances in digital computer technology have enabled widespread implementation of closed-loop digital control systems in a variety of industries. In some instances, however, the complexity of the plant and the uncertainty associated with the parameters involved in the mathematical modeling narrow the range of applicability of most systematic control system design methodologies. A multiyear project has been initiated to assess the feasibility of artificial neural network (ANN) technology for computerized enhanced diagnostics and control of nuclear power plant components. At this stage of the project, a new methodology, based on backpropagation learning, has been developed for identifying nonlinear dynamic systems from a set of input-output data known as the training set.

  16. The Neural Border: Induction, Specification and Maturation of the territory that generates Neural Crest cells.

    Science.gov (United States)

    Pla, Patrick; Monsoro-Burq, Anne H

    2018-05-28

    The neural crest is induced at the edge between the neural plate and the nonneural ectoderm, in an area called the neural (plate) border, during gastrulation and neurulation. In recent years, many studies have explored how this domain is patterned and how the neural crest is induced within this territory, which also contributes to the prospective dorsal neural tube, the dorsalmost nonneural ectoderm, and placode derivatives in the anterior area. This review highlights the tissue interactions, cell-cell signaling and molecular mechanisms involved in this dynamic spatiotemporal patterning, which results in the induction of the premigratory neural crest. Collectively, these studies allow the construction of a complex neural border and early neural crest gene regulatory network, mostly composed of transcriptional regulations but also, more recently, including novel signaling interactions. Copyright © 2018. Published by Elsevier Inc.

  17. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, Raluca; Pentia, Mircea

    1996-01-01

    This work uses a neural network technique (the Hopfield method) to reconstruct particle tracks starting from a data set obtained with a coordinate detector system placed around a high-energy accelerated-particle interaction region. A learning algorithm for finding the optimal connection of the signal points has been elaborated and tested. We used a single-layer neural network with constraints in order to obtain the particle tracks drawn through the detected signal points. The dynamics of the system is given by the MFT equations, which determine the system's evolution toward a minimum of the energy function. We developed a computing program that has been tested on a large amount of Monte Carlo simulated data. With this program we obtained good results even for a noise/signal ratio of 200. (authors)
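The energy-minimizing dynamics described above can be sketched with a mean-field (MFT) relaxation of a Hopfield network; the weights, bias and temperature below are illustrative assumptions, not the detector-specific costs used by the authors:

```python
import numpy as np

def mft_update(w, bias, s, T=0.5, n_iter=200):
    """Mean-field (MFT) relaxation of a Hopfield network.

    s[i] in (0, 1) is the activation of 'segment' neuron i; the update
    s_i = sigmoid((W s + b)_i / T) drives the state toward a minimum of
    the energy E = -1/2 s^T W s - b^T s. Parameters are illustrative.
    """
    for _ in range(n_iter):
        s = 1.0 / (1.0 + np.exp(-(w @ s + bias) / T))
    return s

def energy(w, bias, s):
    # Hopfield energy function minimized by the relaxation above.
    return -0.5 * s @ w @ s - bias @ s
```

On a toy two-neuron net with mutually excitatory weights, the relaxation settles both units near 1 while the energy decreases.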

  18. A short note on dynamic programming in a band.

    Science.gov (United States)

    Gibrat, Jean-François

    2018-06-15

    Third generation sequencing technologies generate long reads that exhibit high error rates, in particular for insertions and deletions, which are usually the most difficult errors to cope with. The only exact algorithm capable of aligning sequences with insertions and deletions is a dynamic programming algorithm. In this note, for the sake of efficiency, we consider dynamic programming in a band. We show how to choose the band width as a function of the long reads' error rates, thus obtaining an [Formula: see text] algorithm in space and time. We also propose a procedure to decide whether this algorithm, when applied to semi-global alignments, provides the optimal score. We suggest that dynamic programming in a band is well suited to the problem of aligning long reads between themselves and can be used as a core component of methods for obtaining a consensus sequence from the long reads alone. The function implementing the dynamic programming algorithm in a band is available, as a standalone program, at: https://forgemia.inra.fr/jean-francois.gibrat/BAND_DYN_PROG.git.
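As a rough illustration of the idea (not the authors' BAND_DYN_PROG implementation), a banded global-alignment score can be computed by filling only the cells within a diagonal band; the scoring parameters here are arbitrary assumptions:

```python
def banded_align(a, b, band, match=1, mismatch=-1, gap=-1):
    """Global alignment score restricted to the diagonal band |i - j| <= band.

    Only cells inside the band are filled, giving O(n * band) time and
    space instead of O(n * m). Scoring parameters are illustrative.
    """
    n, m = len(a), len(b)
    assert abs(n - m) <= band, "band must cover the final cell"
    prev = {0: 0}                         # row i = 0, within the band
    for j in range(1, min(m, band) + 1):
        prev[j] = prev[j - 1] + gap
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - band), min(m, i + band) + 1):
            best = float("-inf")
            if j - 1 in cur:              # horizontal move: gap in a
                best = max(best, cur[j - 1] + gap)
            if j in prev:                 # vertical move: gap in b
                best = max(best, prev[j] + gap)
            if j - 1 in prev:             # diagonal move: match/mismatch
                s = match if a[i - 1] == b[j - 1] else mismatch
                best = max(best, prev[j - 1] + s)
            cur[j] = best
        prev = cur
    return prev[m]
```

The band half-width must be at least the expected number of insertions/deletions (and at least the length difference), which is what ties it to the reads' error rate.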

  19. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Full Text Available Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left-hemisphere's fronto-temporal language network, and does not require focused attention on the linguistic input.

  20. The Effect of an Enrichment Reading Program on the Cognitive Processes and Neural Structures of Children Having Reading Difficulties

    Directory of Open Access Journals (Sweden)

    Hayriye Gül KURUYER

    2017-06-01

    Full Text Available The main purpose of the current study is to explain the effect of an enrichment reading program on the cognitive processes and neural structures of children experiencing reading difficulties. The current study was carried out in line with a single-subject research method and the between-subjects multiple probe design belonging to this method. This research focuses on a group of eight students with reading difficulties. Within the context of the study, memory capacities, attention spans, reading-related activation and white matter pathways of the students were determined before and after the application of the enrichment reading program. This determination process was carried out in two stages. Neuro-imaging was performed in the first stage and in the second stage the students’ cognitive processes and neural structures were investigated in terms of focusing attention and memory capacities by using the following tools: Stroop Test TBAG Form, Auditory Verbal Digit Span Test-Form B, Cancellation Test and Number Order Learning Test. The results obtained show that the enrichment reading program resulted in an improvement in the reading profiles of the students having reading difficulties in terms of their cognitive processes and neural structures.

  1. ANT Advanced Neural Tool

    Energy Technology Data Exchange (ETDEWEB)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-07-01

    This paper gives a practical introduction to the use of Artificial Neural Networks. Artificial neural nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of neural net, a multilayer perceptron developed in the C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, together with some suggestions about hardware implementations. (Author) 15 refs.

  2. ANT Advanced Neural Tool

    International Nuclear Information System (INIS)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-01-01

    This paper gives a practical introduction to the use of Artificial Neural Networks. Artificial neural nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of neural net, a multilayer perceptron developed in the C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, together with some suggestions about hardware implementations. (Author) 15 refs

  3. Online Recorded Data-Based Composite Neural Control of Strict-Feedback Systems With Application to Hypersonic Flight Dynamics.

    Science.gov (United States)

    Xu, Bin; Yang, Daipeng; Shi, Zhongke; Pan, Yongping; Chen, Badong; Sun, Fuchun

    2017-09-25

    This paper investigates the online recorded data-based composite neural control of uncertain strict-feedback systems using the backstepping framework. In each step of the virtual control design, a neural network (NN) is employed for uncertainty approximation. Most previous designs aim directly at system stability, ignoring how the NN actually works as an approximator. In this paper, to enhance the learning ability, a novel prediction error signal is constructed to provide additional correction information for the NN weight update using online recorded data. In this way, the neural approximation precision is highly improved, and the convergence speed can be faster. Furthermore, the sliding mode differentiator is employed to approximate the derivative of the virtual control signal, so that the complex analysis of the backstepping design can be avoided. The closed-loop stability is rigorously established, and the boundedness of the tracking error can be guaranteed. In simulations of hypersonic flight dynamics, the proposed approach exhibits better tracking performance.

  4. Short-term synaptic plasticity and heterogeneity in neural systems

    Science.gov (United States)

    Mejias, J. F.; Kappen, H. J.; Longtin, A.; Torres, J. J.

    2013-01-01

    We review some recent results on neural dynamics and information processing which arise when considering several biophysical factors of interest, in particular, short-term synaptic plasticity and neural heterogeneity. The inclusion of short-term synaptic plasticity leads to enhanced long-term memory capacities, a higher robustness of memory to noise, and irregularity in the duration of the so-called up cortical states. On the other hand, considering some level of neural heterogeneity in neuron models allows neural systems to optimize information transmission in rate coding and temporal coding, two strategies commonly used by neurons to codify information in many brain areas. In all these studies, analytical approximations can be made to explain the underlying dynamics of these neural systems.
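The short-term synaptic depression discussed above is often described by the Tsodyks-Markram resource model; below is a minimal depression-only sketch, with illustrative parameter values:

```python
import math

def tm_depression(spike_times, U=0.5, tau_rec=0.5):
    """Tsodyks-Markram depressing synapse (resources only).

    x is the fraction of available synaptic resources; each presynaptic
    spike releases U*x (the effective efficacy of that spike), and x
    recovers toward 1 with time constant tau_rec between spikes.
    U and tau_rec are illustrative assumptions.
    """
    x, last_t, release = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            # exponential recovery of resources since the last spike
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        release.append(U * x)   # postsynaptic efficacy of this spike
        x -= U * x              # resources consumed by the spike
        last_t = t
    return release
```

A high-rate spike train produces progressively weaker responses (depression), while a long inter-spike interval lets the efficacy recover.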

  5. Bio-inspired spiking neural network for nonlinear systems control.

    Science.gov (United States)

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNNs) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs make use of temporal spike trains for inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is unsatisfactory or difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. The inherently binary and temporal way in which SNNs codify information facilitates their hardware implementation compared to analog neurons, and these biologically inspired networks often require fewer neurons than other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to control nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed, with particular attention paid to optimizing the structure and size of the network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to train the controller. The efficiency of the proposed network has been verified in two examples of dynamic system control. Simulations show that the proposed SNN-based control exhibits superior performance compared to other approaches based on neural networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.
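A minimal leaky integrate-and-fire neuron, the typical building block of such spiking controllers (the membrane parameters here are illustrative assumptions, not those of the paper):

```python
def lif_spikes(I, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron.

    Integrates dv/dt = (-v + I) / tau with forward Euler; when v crosses
    v_th a spike time is recorded and v is reset. All constants are
    illustrative assumptions.
    """
    v, spikes = 0.0, []
    for k, i_k in enumerate(I):
        v += dt * (-v + i_k) / tau
        if v >= v_th:
            spikes.append(k * dt)   # spike time in seconds
            v = v_reset
    return spikes
```

A constant input above threshold produces a regular spike train whose rate encodes the input strength; a sub-threshold input produces no spikes at all.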

  6. Nonlinear analysis and synthesis of video images using deep dynamic bottleneck neural networks for face recognition.

    Science.gov (United States)

    Moghadam, Saeed Montazeri; Seyyedsalehi, Seyyed Ali

    2018-05-31

    Nonlinear components extracted from the deep structures of bottleneck neural networks exhibit a great ability to express the input space in a low-dimensional manifold. Sharing and combining the components boosts the capability of the neural networks to synthesize and interpolate new and imaginary data. This synthesis is possibly a simple model of imagination in the human brain, where the components are expressed in a nonlinear low-dimensional manifold. The current paper introduces a novel Dynamic Deep Bottleneck Neural Network to analyze and extract three main features of videos regarding the expression of emotions on the face. These main features are identity, emotion and expression intensity, which lie in three different sub-manifolds of one nonlinear general manifold. The proposed model, enjoying the advantages of recurrent networks, was used to analyze the sequence and dynamics of information in videos. Notably, this model also has the potential to synthesize new videos showing variations of one specific emotion on the face of unknown subjects. Experiments on the discrimination and recognition ability of the extracted components showed that the proposed model has an average accuracy of 97.77% in recognizing six prominent emotions (Fear, Surprise, Sadness, Anger, Disgust, and Happiness) and 78.17% accuracy in recognizing intensity. The produced videos revealed variations from neutral to the apex of an emotion on the face of the unfamiliar test subject, with an average similarity of 0.8 to the reference videos on the SSIM scale. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. The Dose-Dependent Effects of Vascular Risk Factors on Dynamic Compensatory Neural Processes in Mild Cognitive Impairment

    Directory of Open Access Journals (Sweden)

    Haifeng Chen

    2018-05-01

    Full Text Available Background/Objectives: Mild cognitive impairment (MCI) has been associated with risk for Alzheimer's disease (AD). Previous investigations have suggested that vascular risk factors (VRFs) are associated with cognitive decline and AD pathogenesis, and that intervention on VRFs may be a possible way to prevent dementia. However, in MCI, little is known about the potential impacts of VRFs on neural networks and their neural substrates, which may be a neuroimaging biomarker of disease progression. Methods: 128 elderly Han Chinese participants (67 MCI subjects and 61 matched normal elderly), with or without VRFs (hypertension, diabetes mellitus, hypercholesterolemia, smoking and alcohol drinking), underwent resting-state functional magnetic resonance imaging (fMRI) and neuropsychological tests. We obtained the default mode network (DMN) to identify alterations in MCI with a varying number of VRFs and analyzed the correlation with behavioral performance. Results: The effects of VRFs on the DMN were primarily in the bilateral dorsolateral prefrontal cortex (DLPFC; i.e., middle frontal gyrus). Normal elderly showed gradually increased functional activity of the DLPFC, while fluctuant activation of the DLPFC was displayed in MCI with a growing number of VRFs. Interestingly, the left DLPFC further displayed a significantly dynamic correlation with executive function as VRF loading varied. An initial level of compensation was observed in normal aging and non-vascular-risk-factor (NVRF) MCI, while these compensatory neural processes were suppressed in One-VRF MCI and subsequently re-aroused in Over-One-VRF MCI. Conclusions: These findings suggest that dose-dependent effects of VRFs on the DLPFC are highlighted in MCI, and that the dynamic compensatory neural processes fluctuating along with variations of VRF loading could play a key role in the progression of MCI.

  8. Adaptive neural network output feedback control for stochastic nonlinear systems with unknown dead-zone and unmodeled dynamics.

    Science.gov (United States)

    Tong, Shaocheng; Wang, Tong; Li, Yongming; Zhang, Huaguang

    2014-06-01

    This paper discusses the problem of adaptive neural network output feedback control for a class of stochastic nonlinear strict-feedback systems. The systems concerned have characteristics such as unknown nonlinear uncertainties, unknown dead-zones and unmodeled dynamics, and lack direct measurements of the state variables. In this paper, neural networks (NNs) are employed to approximate the unknown nonlinear uncertainties, and the dead-zone is represented as a time-varying system with a bounded disturbance. An NN state observer is designed to estimate the unmeasured states. Based on both the backstepping design technique and a stochastic small-gain theorem, a robust adaptive NN output feedback control scheme is developed. It is proved that all the variables involved in the closed-loop system are input-state-practically stable in probability and robust to the unmodeled dynamics. Meanwhile, the observer errors and the output of the system can be regulated to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are provided to illustrate the effectiveness of the proposed approach.

  9. Real-time process optimization based on grey-box neural models

    Directory of Open Access Journals (Sweden)

    F. A. Cubillos

    2007-09-01

    Full Text Available This paper investigates the feasibility of using grey-box neural models (GNM) in Real Time Optimization (RTO). These models are based on a suitable combination of fundamental conservation laws and neural networks, and are used in at least two different ways: to complement available phenomenological knowledge with empirical information, or to reduce the dimensionality of complex rigorous physical models. We have observed that the benefits of using these simple adaptable models are counteracted by some difficulties associated with the solution of the optimization problem. Nonlinear Programming (NLP) algorithms failed to find the global optimum because neural networks can introduce multimodal objective functions. One alternative considered to solve this problem was the use of evolutionary algorithms, such as Genetic Algorithms (GA). Although these algorithms produced better results in terms of finding the appropriate region, they took long periods of time to reach the global optimum. It was found that a combination of genetic and nonlinear programming algorithms can be used to obtain the optimum solution quickly. The proposed approach was applied to the Williams-Otto reactor, considering three different GNM models of increasing complexity. Results demonstrate that the use of GNM models and mixed GA/NLP optimization algorithms is a promising approach for solving dynamic RTO problems.
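The two-stage strategy described above, a global evolutionary search followed by local nonlinear-programming refinement, can be sketched as follows. Here a random population stands in for the GA and a simple coordinate descent stands in for the NLP step, applied to a hypothetical multimodal objective; all constants are assumptions:

```python
import random

def hybrid_optimize(f, bounds, n_pop=200, n_local=200, step=0.05, seed=0):
    """Two-stage search: a coarse global stage (random population here,
    standing in for a GA) locates the right basin, then a local descent
    (standing in for an NLP solver) refines the solution."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Global stage: best individual of a random population.
    best = min((rng.uniform(lo, hi) for _ in range(n_pop)), key=f)
    # Local stage: coordinate descent with a shrinking step size.
    for _ in range(n_local):
        for cand in (best - step, best + step):
            if lo <= cand <= hi and f(cand) < f(best):
                best = cand
        step *= 0.95
    return best
```

On a multimodal one-dimensional objective such as f(x) = (x^2 - 4)^2 + x, pure local descent from a random start can get trapped near x = +2, whereas the population stage reliably finds the deeper basin near x = -2.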

  10. Characterization of the disruption of neural control strategies for dynamic fingertip forces from attractor reconstruction.

    Directory of Open Access Journals (Sweden)

    Lorenzo Peppoloni

    Full Text Available The Strength-Dexterity (SD) test measures the ability of the pulps of the thumb and index finger to compress a compliant and slender spring prone to buckling at low forces (<3 N). We know that factors such as aging and neurodegenerative conditions bring deteriorating physiological changes (e.g., at the level of motor cortex, cerebellum, and basal ganglia), which lead to an overall loss of dexterous ability. However, little is known about how these changes reflect upon the dynamics of the underlying biological system. The spring-hand system exhibits nonlinear dynamical behavior, and here we characterize the dynamical behavior of the phase portraits using attractor reconstruction. Thirty participants performed the SD test: 10 young adults, 10 older adults, and 10 older adults with Parkinson's disease (PD). We used delayed embedding of the applied force to reconstruct its attractor. We characterized the distribution of points of the phase portraits by their density (number of distant points and interquartile range) and geometric features (trajectory length and size). We find that phase portraits from older adults exhibit more distant points (p = 0.028) than young adults, and participants with PD have larger interquartile ranges (p = 0.001), trajectory lengths (p = 0.005), and size (p = 0.003) than their healthy counterparts. The increased size of the phase portraits with healthy aging suggests a change in the dynamical properties of the system, which may represent a weakening of the neural control strategy. In contrast, the distortion of the attractor in PD suggests a fundamental change in the underlying biological system, and disruption of the neural control strategy. This ability to detect differences in the biological mechanisms of dexterity in healthy and pathological aging provides a simple means to assess their disruption in neurodegenerative conditions and justifies further studies to understand the link with the physiological changes.
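The delayed-embedding step used to reconstruct the force attractor can be sketched as a Takens delay embedding; the dimension and lag below are illustrative, not the values chosen in the study:

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Reconstruct a phase-space trajectory from a scalar series.

    Row k of the result is (x[k], x[k+tau], ..., x[k+(dim-1)*tau]).
    dim and tau are illustrative; in practice they are typically chosen
    via false nearest neighbours and mutual-information criteria.
    """
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def trajectory_length(emb):
    # One of the geometric features mentioned above: summed step lengths.
    return float(np.linalg.norm(np.diff(emb, axis=0), axis=1).sum())
```

Embedding a pure sine wave with a suitable lag, for example, recovers the closed loop (limit cycle) of its underlying two-dimensional dynamics.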

  11. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to occur largely through a process of plasticity that is dependent upon the spike-timing activity of connected neurons. In the field of computational neuroscience, much effort has been dedicated to building computational models of neural plasticity that replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models of neural plasticity and to discuss various ideas about its role. Finally, we suggest a few promising research directions, in particular along the line that combines findings in computational neuroscience and systems biology and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
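A common pair-based form of the spike-timing-dependent plasticity mentioned above assigns a weight change from the interval between pre- and postsynaptic spikes; the time constant and amplitudes below are illustrative assumptions:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Pair-based STDP weight change for spike interval dt = t_post - t_pre.

    Potentiation when the presynaptic spike precedes the postsynaptic one
    (dt > 0), depression otherwise; both effects decay exponentially with
    |dt|. Constants are common illustrative values, not from any one model.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

This asymmetric exponential window is the ingredient that lets networks self-organize around causal pre-before-post firing patterns.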

  12. Stochastic control theory dynamic programming principle

    CERN Document Server

    Nisio, Makiko

    2015-01-01

    This book offers a systematic introduction to the optimal stochastic control theory via the dynamic programming principle, which is a powerful tool to analyze control problems. First we consider completely observable control problems with finite horizons. Using a time discretization we construct a nonlinear semigroup related to the dynamic programming principle (DPP), whose generator provides the Hamilton–Jacobi–Bellman (HJB) equation, and we characterize the value function via the nonlinear semigroup, besides the viscosity solution theory. When we control not only the dynamics of a system but also the terminal time of its evolution, control-stopping problems arise. This problem is treated in the same frameworks, via the nonlinear semigroup. Its results are applicable to the American option price problem. Zero-sum two-player time-homogeneous stochastic differential games and viscosity solutions of the Isaacs equations arising from such games are studied via a nonlinear semigroup related to DPP (the min-ma...
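After time discretization, the dynamic programming principle for a completely observable, finite-horizon problem reduces to backward induction; a sketch on a finite state/action space (the HJB equation is its continuous-time limit, and all arrays below are illustrative):

```python
import numpy as np

def finite_horizon_dp(P, cost, terminal, T):
    """Backward induction: V_T = terminal and, for each earlier stage,
    V_t(x) = min_a [ cost(x, a) + E[ V_{t+1}(X') | x, a ] ].

    P[a] is the transition matrix under action a (P[a][x, x']), cost[a]
    the per-stage cost vector; all values are illustrative assumptions.
    """
    n_a = len(P)
    V = np.array(terminal, dtype=float)
    policy = []
    for _ in range(T):
        # Q[a, x] = immediate cost plus expected cost-to-go.
        Q = np.array([cost[a] + P[a] @ V for a in range(n_a)])
        policy.append(Q.argmin(axis=0))
        V = Q.min(axis=0)
    policy.reverse()                     # policy[t] is the stage-t rule
    return V, policy
```

In a two-state toy problem where state 1 carries a large terminal cost, the optimal first-stage rule is to stay in state 0 and switch out of state 1, exactly what the backward recursion discovers.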

  13. Self: an adaptive pressure arising from self-organization, chaotic dynamics, and neural Darwinism.

    Science.gov (United States)

    Bruzzo, Angela Alessia; Vimal, Ram Lakhan Pandey

    2007-12-01

    In this article, we establish a model to delineate the emergence of "self" in the brain making recourse to the theory of chaos. Self is considered as the subjective experience of a subject. As essential ingredients of subjective experiences, our model includes wakefulness, re-entry, attention, memory, and proto-experiences. The stability as stated by chaos theory can potentially describe the non-linear function of "self" as sensitive to initial conditions and can characterize it as underlying order from apparently random signals. Self-similarity is discussed as a latent menace of a pathological confusion between "self" and "others". Our test hypothesis is that (1) consciousness might have emerged and evolved from a primordial potential or proto-experience in matter, such as the physical attractions and repulsions experienced by electrons, and (2) "self" arises from chaotic dynamics, self-organization and selective mechanisms during ontogenesis, while emerging post-ontogenically as an adaptive pressure driven by both volume and synaptic-neural transmission and influencing the functional connectivity of neural nets (structure).

  14. Control of Three-Phase Grid-Connected Microgrids Using Artificial Neural Networks

    OpenAIRE

    Shuhui, L.; Fu, X.; Jaithwa, I.; Alonso, E.; Fairbank, M.; Wunsch, D. C.

    2015-01-01

    A microgrid consists of a variety of inverter-interfaced distributed energy resources (DERs). A key issue is how to control DERs within the microgrid and how to connect them to or disconnect them from the microgrid quickly. This paper presents a strategy for controlling inverter-interfaced DERs within a microgrid using an artificial neural network, which implements a dynamic programming algorithm and is trained with a new Levenberg-Marquardt backpropagation algorithm. Compared to conventional...

  15. Temporal information encoding in dynamic memristive devices

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Wen; Chen, Lin; Du, Chao; Lu, Wei D., E-mail: wluee@eecs.umich.edu [Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2015-11-09

    We show that temporal and frequency information can be effectively encoded in memristive devices with inherent short-term dynamics. Ag/Ag₂S/Pd-based memristive devices with low programming voltage (∼100 mV) were fabricated and tested. At weak programming conditions, the devices exhibit inherent decay due to spontaneous diffusion of the Ag atoms. When the devices were subjected to pulse-train inputs emulating different spiking patterns, the switching probability distribution function diverges from the standard Poisson distribution and evolves according to the input pattern. The experimentally observed switching probability distributions and the associated cumulative probability functions can be well explained using a model accounting for the short-term decay effects. Such devices offer an intriguing opportunity to directly encode neural signals for neural information storage and analysis.
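A toy model of the short-term decay underlying this kind of temporal encoding: each pulse increments an internal state that relaxes between pulses, so higher-frequency trains drive the device further toward switching. The increment and decay constant are assumptions, not fitted device parameters:

```python
import math

def memristor_response(pulse_times, dw=0.2, tau=0.05):
    """Toy short-term-memory memristive device.

    Each programming pulse increments the internal state w (clipped at 1),
    and w decays exponentially with time constant tau between pulses, so
    the final state depends on pulse timing, not just pulse count.
    dw and tau are illustrative assumptions.
    """
    w, last = 0.0, None
    for t in pulse_times:
        if last is not None:
            w *= math.exp(-(t - last) / tau)   # spontaneous relaxation
        w = min(1.0, w + dw)                   # pulse-driven increment
        last = t
    return w
```

With four pulses, a 10 ms spacing leaves the device in a much higher state than a 200 ms spacing, which is the sense in which input frequency is encoded in the device state.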

  16. Modelling of windmill induction generators in dynamic simulation programs

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Knudsen, Hans

    1999-01-01

    For AC networks with large amounts of induction generators - in the case of e.g. windmills - the paper demonstrates a significant discrepancy in the simulated voltage recovery after faults in weak networks when comparing results obtained with dynamic stability programs and transient programs, respectively, with and without a model of the mechanical shaft. The reasons for the discrepancies are explained, and it is shown that the phenomenon is due partly to the presence of DC offset currents in the induction machine stator, and partly to the mechanical shaft system of the wind turbine and the generator rotor. It is shown that it is possible to include a transient model in dynamic stability programs and thus obtain correct results also in dynamic stability programs. A mechanical model of the shaft system has also been included in the generator model.

  17. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the neural network attractive. A neural network is an information processing system modeled on the structure of the dynamic process. It can solve the complex/nonlinear problems quickly once trained by operating on problems using an interconnected number...

  18. Neural control of magnetic suspension systems

    Science.gov (United States)

    Gray, W. Steven

    1993-01-01

    The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controllers designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden layer feedforward networks trained via back propagation and one based on using Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.

  19. Evaluating Dynamic Analysis Techniques for Program Comprehension

    NARCIS (Netherlands)

    Cornelissen, S.G.M.

    2009-01-01

    Program comprehension is an essential part of software development and software maintenance, as software must be sufficiently understood before it can be properly modified. One of the common approaches in getting to understand a program is the study of its execution, also known as dynamic analysis.

  20. Fragility in dynamic networks: application to neural networks in the epileptic cortex.

    Science.gov (United States)

    Sritharan, Duluxan; Sarma, Sridevi V

    2014-10-01

    Epilepsy is a network phenomenon characterized by atypical activity at the neuronal and population levels during seizures, including tonic spiking, increased heterogeneity in spiking rates, and synchronization. The etiology of epilepsy is unclear, but a common theme among proposed mechanisms is that structural connectivity between neurons is altered. It is hypothesized that epilepsy arises not from random changes in connectivity, but from specific structural changes to the most fragile nodes or neurons in the network. In this letter, the minimum energy perturbation on functional connectivity required to destabilize linear networks is derived. Perturbation results are then applied to a probabilistic nonlinear neural network model that operates at a stable fixed point. That is, if a small stimulus is applied to the network, the activation probabilities of each neuron respond transiently but eventually recover to their baseline values. When the perturbed network is destabilized, the activation probabilities shift to larger or smaller values or oscillate when a small stimulus is applied. Finally, the structural modifications to the neural network that achieve the functional perturbation are derived. Simulations of the unperturbed and perturbed networks qualitatively reflect neuronal activity observed in epilepsy patients, suggesting that the changes in network dynamics due to destabilizing perturbations, including the emergence of an unstable manifold or a stable limit cycle, may be indicative of neuronal or population dynamics during seizure. That is, the epileptic cortex is always on the brink of instability and minute changes in the synaptic weights associated with the most fragile node can suddenly destabilize the network to cause seizures. Finally, the theory developed here and its interpretation of epileptic networks enables the design of a straightforward feedback controller that first detects when the network has destabilized and then applies linear state

  1. The Functional Programming Language R and the Paradigm of Dynamic Scientific Programming

    NARCIS (Netherlands)

    Trancón y Widemann, B.; Bolz, C.F.; Grelck, C.; Loidl, H.-W.; Peña, R.

    2013-01-01

    R is an environment and functional programming language for statistical data analysis and visualization. Largely unknown to the functional programming community, it is popular and influential in many empirical sciences. Due to its integrated combination of dynamic and reflective scripting on one

  2. Algorithm for predicting the evolution of series of dynamics of complex systems in solving information problems

    Science.gov (United States)

    Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.

    2018-03-01

    Neural network methods have recently been applied in information systems development and programming to predict series of dynamics. They are more flexible than existing analogues and can take the nonlinearities of a series into account. In this paper, we propose a modified algorithm for predicting series of dynamics, which includes a method for training neural networks and an approach to describing and presenting input data, based on prediction with the multilayer perceptron. To construct the neural network, the values of the series at its extremum points and the corresponding time values, arranged using the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting series of dynamics, or serve as one part of a forecasting system. The efficiency of predicting the evolution of a dynamics series for short-term one-step and long-term multi-step forecasts is compared between the classical multilayer perceptron method and the modified algorithm, using synthetic and real data. The modification minimizes the iterative error that arises when previously predicted values are fed back as inputs to the neural network, and increases the accuracy of the network's iterative prediction.
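The input construction described in the abstract (series values at extremum points plus their time indices, grouped by a sliding window) can be sketched as follows; the function names and the window width are illustrative, not taken from the paper:

```python
import numpy as np

def extremum_points(series):
    """Return (index, value) pairs at the local extrema of a 1-D series."""
    idx = [i for i in range(1, len(series) - 1)
           if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0]
    return [(i, series[i]) for i in idx]

def sliding_windows(pairs, width):
    """Group consecutive extrema into overlapping windows of `width` points;
    each window becomes a flat [t1, v1, t2, v2, ...] feature vector."""
    return [np.array([x for p in pairs[k:k + width] for x in p], dtype=float)
            for k in range(len(pairs) - width + 1)]

series = [0.0, 1.0, 0.5, 2.0, 1.5, 3.0, 0.5]
pairs = extremum_points(series)           # five alternating extrema
windows = sliding_windows(pairs, width=2) # four overlapping feature vectors
```

Each feature vector would then serve as one training input to the perceptron, with the next extremum as the target.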

  3. Implementing a Dynamic Street-Children's Program: Successes and ...

    African Journals Online (AJOL)

    dynamic street children's program in Mzuzu Malawi – using a developmental ... dynamics of parentchild, parent-parent and child-parent-environment; life-events; ... of child and adolescent development, and how they can influence the child's ...

  4. Rich spectrum of neural field dynamics in the presence of short-term synaptic depression

    Science.gov (United States)

    Wang, He; Lam, Kin; Fung, C. C. Alan; Wong, K. Y. Michael; Wu, Si

    2015-09-01

    In continuous attractor neural networks (CANNs), spatially continuous information such as orientation, head direction, and spatial location is represented by Gaussian-like tuning curves that can be displaced continuously in the space of the preferred stimuli of the neurons. We investigate how short-term synaptic depression (STD) can reshape the intrinsic dynamics of the CANN model and its responses to a single static input. In particular, CANNs with STD can support various complex firing patterns and chaotic behaviors. These chaotic behaviors have the potential to encode various stimuli in the neuronal system.

  5. Application of artificial neural networks for predicting the impact of rolling dynamic compaction using dynamic cone penetrometer test results

    Directory of Open Access Journals (Sweden)

    R.A.T.M. Ranasinghe

    2017-04-01

    Rolling dynamic compaction (RDC), which involves the towing of a noncircular module, is now widespread and accepted among many other soil compaction methods. However, to date, there is no accurate method for reliably predicting the densification of soil and the extent of ground improvement by means of RDC. This study presents the application of artificial neural networks (ANNs) for a priori prediction of the effectiveness of RDC. The models are trained with in situ dynamic cone penetration (DCP) test data obtained from previous civil projects associated with the 4-sided impact roller. The predictions from the ANN models are in good agreement with the measured field data, as indicated by a model correlation coefficient of approximately 0.8. It is concluded that the ANN models developed in this study can be successfully employed to provide more accurate predictions of the performance of RDC on a range of soil types.
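The abstract gives no architecture details, but the kind of small regression MLP it describes can be sketched in plain NumPy; the layer sizes, learning rate, and the sine curve standing in for DCP-style data are all assumptions:

```python
import numpy as np

# Tiny 1-8-1 tanh MLP fitted by full-batch gradient descent.
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 40).reshape(-1, 1)   # stand-in predictor
y = np.sin(X)                               # stand-in target

W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                # hidden layer
    out = H @ W2 + b2                       # linear output
    err = out - y                           # residuals
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H ** 2)      # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
r = np.corrcoef(pred.ravel(), y.ravel())[0, 1]  # model correlation coefficient
```

The paper reports a correlation coefficient of roughly 0.8 on field data; the toy model above fits its synthetic curve much more tightly, so this only illustrates the workflow, not the reported accuracy.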

  6. Efficient dynamic optimization of logic programs

    Science.gov (United States)

    Laird, Phil

    1992-01-01

    A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.

  7. Molecular Dynamics Simulations with Quantum Mechanics/Molecular Mechanics and Adaptive Neural Networks.

    Science.gov (United States)

    Shen, Lin; Yang, Weitao

    2018-03-13

    Direct molecular dynamics (MD) simulation with ab initio quantum mechanical and molecular mechanical (QM/MM) methods is very powerful for studying the mechanisms of chemical reactions in a complex environment, but also very time-consuming. The computational cost of QM/MM calculations during MD simulations can be reduced significantly by using semiempirical QM/MM methods, at lower accuracy. To achieve higher accuracy at the ab initio QM/MM level, applying a correction to an existing semiempirical QM/MM model is an attractive idea. Recently, we reported a neural network (NN) method, QM/MM-NN, to predict the potential energy difference between semiempirical and ab initio QM/MM approaches. High-level results can be obtained using the neural network based on semiempirical QM/MM MD simulations, but the lack of direct MD sampling at the ab initio QM/MM level is still a deficiency that limits the applications of QM/MM-NN. In the present paper, we developed a dynamic scheme of QM/MM-NN for direct MD simulations on the NN-predicted potential energy surface to approximate ab initio QM/MM MD. Since some configurations excluded from the database for NN training were encountered during simulations, which may cause difficulties in MD sampling, an adaptive procedure inspired by the selection scheme reported by Behler [Behler, Int. J. Quantum Chem. 2015, 115, 1032; Behler, Angew. Chem., Int. Ed. 2017, 56, 12828] was employed, with some adaptations, to update the NN and carry out MD iteratively. We further applied the adaptive QM/MM-NN MD method to free energy calculation and transition path optimization for chemical reactions in water. The results at the ab initio QM/MM level can be well reproduced using this method after 2-4 iteration cycles. The saving in computational cost is about 2 orders of magnitude. It demonstrates that QM/MM-NN with direct MD simulations has great potential not only for the calculation of thermodynamic properties but also for the characterization of

  8. On the origin of reproducible sequential activity in neural circuits

    Science.gov (United States)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses are essential features of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise, in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
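A minimal numerical illustration of the generalized Lotka-Volterra rate model mentioned above, with a hypothetical three-neuron asymmetric inhibition matrix (values illustrative, not from the paper); with such asymmetry the identity of the most active neuron switches over time, the signature of sequential activity:

```python
import numpy as np

# Generalized Lotka-Volterra firing-rate model with asymmetric inhibition:
#   da_i/dt = a_i * (sigma_i - sum_j rho_ij * a_j)
rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])
sigma = np.ones(3)

a = np.array([0.3, 0.2, 0.1])
dt, steps = 0.01, 20000
winners = []                      # index of the most active neuron per step
for _ in range(steps):
    a = a + dt * a * (sigma - rho @ a)   # forward-Euler update
    winners.append(int(np.argmax(a)))
```

The trajectory visits a sequence of saddle-like states in which each neuron is transiently dominant, a discrete-time shadow of a stable heteroclinic sequence.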

  9. Open quantum generalisation of Hopfield neural networks

    Science.gov (United States)

    Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.

    2018-03-01

    We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.

  10. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks.

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A; Carrillo, Richard R; Ros, Eduardo; Luque, Niceto R

    2017-01-01

    Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under
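The time-driven family described above can be illustrated with a leaky integrate-and-fire neuron; the two-step-size rule below is only a crude stand-in for the paper's bi-fixed-step method, and all parameter values are assumptions:

```python
def lif_time_driven(I, t_end, dt_small=0.01, dt_large=0.1,
                    tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Time-driven leaky integrate-and-fire neuron (units: ms, mV).
    A fine step is used near threshold and a coarse step elsewhere,
    loosely mimicking a bi-fixed-step integration scheme."""
    t, v, spikes = 0.0, v_rest, []
    while t < t_end:
        dt = dt_small if v > v_thresh - 2.0 else dt_large
        v += dt * (-(v - v_rest) + I) / tau   # forward-Euler membrane update
        if v >= v_thresh:
            spikes.append(t)                  # record spike time, then reset
            v = v_reset
        t += dt
    return spikes
```

With a suprathreshold drive (`I=20.0`) the neuron fires regularly; with `I=10.0` the membrane settles below threshold and no spikes occur. An event-driven simulator would instead jump directly between such spike times using precomputed dynamics.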

  11. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. There are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit with Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. Here, "artificial" means that the learning is carried out by computer programs performing calculations. In short, a neural network learns by representative examples. Perhaps the easiest way to describe how neural networks learn is to explain how the human brain functions.
The human brain contains billions of neural cells that are responsible for processing
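The claim that a neural network "learns by representative examples" is easiest to see in the smallest possible case, a single perceptron trained on the AND truth table (a textbook illustration, not from the article):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)    # AND truth table as examples

w = np.zeros(2); b = 0.0; lr = 0.1
for _ in range(20):                        # repeated passes over the examples
    for xi, ti in zip(X, t):
        yi = float(w @ xi + b > 0)         # step-activation output
        w += lr * (ti - yi) * xi           # perceptron weight update
        b += lr * (ti - yi)

pred = (X @ w + b > 0).astype(float)       # learned AND function
```

After a few passes the weights stop changing and the network reproduces every example correctly, which is exactly the "learning from representative examples" the abstract describes.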

  12. Testing Object-Oriented Programs using Dynamic Aspects and Non-Determinism

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2010-01-01

    ... decisions exposing private data. We present an approach that both improves the expressiveness of test cases using non-deterministic choice and reduces design modifications using dynamic aspect-oriented programming techniques. Non-deterministic choice facilitates local definitions of multiple executions without parameterization or generation of tests. It also eases modelling naturally non-deterministic program features like IO or multi-threading in integration tests. Dynamic AOP facilitates powerful design adaptations without exposing test features, keeping the scope of these adaptations local to each test. We also combine non-determinism and dynamic aspects in a new approach to testing multi-threaded programs using co-routines.

  13. Interplay between autophagy and programmed cell death in mammalian neural stem cells

    Directory of Open Access Journals (Sweden)

    Kyung Min Chung

    2013-08-01

    Mammalian neural stem cells (NSCs) are of particular interest because of their role in brain development and function. Recent findings suggest the intimate involvement of programmed cell death (PCD) in the turnover of NSCs. However, the underlying mechanisms of PCD are largely unknown. Although apoptosis is the best-defined form of PCD, accumulating evidence has revealed a wide spectrum of PCD encompassing apoptosis, autophagic cell death (ACD) and necrosis. This mini-review aims to illustrate a unique regulation of PCD in NSCs. The results of our recent studies on autophagic death of adult hippocampal neural stem (HCN) cells are also discussed. HCN cell death following insulin withdrawal clearly provides a reliable model that can be used to analyze the molecular mechanisms of ACD in the larger context of PCD. More research efforts are needed to increase our understanding of the molecular basis of NSC turnover under degenerating conditions, such as aging, stress and neurological diseases. Efforts aimed at protecting and harnessing endogenous NSCs will offer novel opportunities for the development of new therapeutic strategies for neuropathologies. [BMB Reports 2013; 46(8): 383-390]

  14. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    Science.gov (United States)

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.

  15. Handwritten dynamics assessment through convolutional neural networks: An application to Parkinson's disease identification.

    Science.gov (United States)

    Pereira, Clayton R; Pereira, Danilo R; Rosa, Gustavo H; Albuquerque, Victor H C; Weber, Silke A T; Hook, Christian; Papa, João P

    2018-04-16

    Parkinson's disease (PD) is considered a degenerative disorder that affects the motor system, and may cause tremors, micrographia, and freezing of gait. Although PD is related to the lack of dopamine, the process that triggers its development is not yet fully understood. In this work, we introduce convolutional neural networks to learn features from images produced by handwritten dynamics, which capture different information during the individual's assessment. Additionally, we make available a dataset composed of images and signal-based data to foster research related to computer-aided PD diagnosis. The proposed approach was compared against raw data and texture-based descriptors, showing suitable results, mainly in the context of early-stage detection, with accuracies close to 95%. The analysis of handwritten dynamics using deep learning techniques proved useful for automatic Parkinson's disease identification, and can outperform handcrafted features. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Application of cellular neural network (CNN) method to the nuclear reactor dynamics equations

    International Nuclear Information System (INIS)

    Hadad, K.; Piroozmand, A.

    2007-01-01

    This paper describes the application of a multilayer cellular neural network (CNN) to model and solve the nuclear reactor dynamics equations. An equivalent electrical circuit is analyzed and the governing equations of a bare, homogeneous reactor core are modeled via CNN. The validity of the CNN results is checked against a numerical solution of the system of nonlinear governing partial differential equations (PDEs) using MATLAB. Steady-state as well as transient simulations show very good agreement between the two methods. We used our CNN model to simulate the space-time response of different reactivity excursions in a typical nuclear reactor. Online solution of the reactor dynamics equations serves as an aid to reactor operation decision making. The complete algorithm could also be implemented using very large scale integration (VLSI) circuitry. The efficiency of the calculation method makes it useful for small nuclear reactors such as the ones used in space missions.
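The abstract does not reproduce the governing equations; as a stand-in, the standard one-delayed-group point-kinetics model (a textbook simplification of the space-time equations the CNN solves, with assumed parameter values) can be integrated directly:

```python
# One-delayed-group point kinetics for relative power n and precursors C:
#   dn/dt = ((rho - beta)/Lam) * n + lam * C
#   dC/dt = (beta/Lam) * n - lam * C
beta, Lam, lam = 0.0065, 1e-4, 0.08   # assumed: delayed fraction, generation
                                      # time (s), precursor decay constant (1/s)

def point_kinetics(rho, t_end=5.0, dt=1e-3):
    """Forward-Euler integration of a reactivity step `rho` from equilibrium;
    returns the relative power after `t_end` seconds."""
    n = 1.0
    C = beta * n / (Lam * lam)        # equilibrium precursor concentration
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam) * n + lam * C
        dC = (beta / Lam) * n - lam * C
        n += dt * dn
        C += dt * dC
    return n
```

A positive sub-prompt-critical step (e.g. `rho=0.001 < beta`) produces the familiar prompt jump followed by slow growth; a negative step produces a prompt drop, the kind of reactivity excursion behaviour the paper simulates in space and time.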

  17. Fixed-time stability of dynamical systems and fixed-time synchronization of coupled discontinuous neural networks.

    Science.gov (United States)

    Hu, Cheng; Yu, Juan; Chen, Zhanheng; Jiang, Haijun; Huang, Tingwen

    2017-05-01

    In this paper, the fixed-time stability of dynamical systems and the fixed-time synchronization of coupled discontinuous neural networks are investigated under the framework of Filippov solutions. Firstly, by means of proof by contradiction, a theorem on fixed-time stability is established and a high-precision estimate of the settling time is given. It is shown by theoretical proof that the settling-time bound given in this paper is less conservative and more accurate than the classical results. Besides, as an important application, the fixed-time synchronization of coupled neural networks with discontinuous activation functions is studied. By designing a discontinuous control law and using the theory of differential inclusions, some new criteria are derived to ensure the fixed-time synchronization of the addressed coupled networks. Finally, two numerical examples are provided to show the effectiveness and validity of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
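The abstract does not state its stability criterion; for orientation, the classical fixed-time stability lemma (Polyakov-type), which results of this kind typically sharpen, reads: if a positive-definite function $V$ along trajectories satisfies

\[
\dot V(x(t)) \le -a\,V^{p}(x(t)) - b\,V^{q}(x(t)), \qquad a,b>0,\ 0<p<1<q,
\]

then the origin is fixed-time stable, with a settling time bounded independently of the initial state:

\[
T(x_0) \le T_{\max} = \frac{1}{a(1-p)} + \frac{1}{b(q-1)}.
\]

The paper's contribution, per the abstract, is a tighter and more accurate estimate of this settling-time bound.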

  18. FEM-based neural-network approach to nonlinear modeling with application to longitudinal vehicle dynamics control.

    Science.gov (United States)

    Kalkkuhl, J; Hunt, K J; Fritz, H

    1999-01-01

    A finite-element method (FEM)-based neural-network approach to Nonlinear AutoRegressive with eXogenous input (NARX) modeling is presented. The method uses multilinear interpolation functions on C0 rectangular elements. The local and global structure of the resulting model is analyzed. It is shown that the model can be interpreted both as a local model network and as a single-layer feedforward neural network. The main aim is to use the model for nonlinear control design. The proposed FEM NARX description is easily accessible to feedback-linearizing control techniques. Its use with a two-degrees-of-freedom nonlinear internal model controller is discussed. The approach is applied to modeling the nonlinear longitudinal dynamics of an experimental lorry, using measured data. The modeling results are compared with local model network and multilayer perceptron approaches. A nonlinear speed controller was designed based on the identified FEM model. The controller was implemented in a test vehicle, and several experimental results are presented.
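In two dimensions, the multilinear interpolation on C0 rectangular elements mentioned above reduces to bilinear interpolation over grid cells; a sketch with illustrative names and grid:

```python
import numpy as np

def bilinear(grid_x, grid_y, values, x, y):
    """Piecewise-bilinear interpolation (C0 rectangular elements) of nodal
    `values[i, j]` given on the tensor-product grid (grid_x, grid_y)."""
    i = int(np.searchsorted(grid_x, x)) - 1
    j = int(np.searchsorted(grid_y, y)) - 1
    i = min(max(i, 0), len(grid_x) - 2)    # clamp to the grid interior
    j = min(max(j, 0), len(grid_y) - 2)
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    return ((1 - tx) * (1 - ty) * values[i, j]
            + tx * (1 - ty) * values[i + 1, j]
            + (1 - tx) * ty * values[i, j + 1]
            + tx * ty * values[i + 1, j + 1])

# On nodal data sampled from a plane the interpolant is exact.
gx = gy = np.array([0.0, 1.0, 2.0])
V = gx[:, None] + gy[None, :]              # f(x, y) = x + y at the nodes
```

In the FEM NARX model the grid axes would be lagged inputs and outputs, and the nodal values the trainable parameters, which is why the model can be read as a local model network.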

  19. Chaotic system optimal tracking using data-based synchronous method with unknown dynamics and disturbances

    International Nuclear Information System (INIS)

    Song Ruizhuo; Wei Qinglai

    2017-01-01

    We develop an optimal tracking control method for chaotic systems with unknown dynamics and disturbances. The method allows the optimal cost function and the corresponding tracking control to update synchronously. According to the tracking error and the reference dynamics, the augmented system is constructed and the optimal tracking control problem is defined. Policy iteration (PI) is introduced to solve the min-max optimization problem. The off-policy adaptive dynamic programming (ADP) algorithm is then proposed to find the solution of the tracking Hamilton–Jacobi–Isaacs (HJI) equation online, using only measured data and without any knowledge of the system dynamics. A critic neural network (CNN), an action neural network (ANN), and a disturbance neural network (DNN) are used to approximate the cost function, control, and disturbance, respectively. The weights of these networks compose the augmented weight matrix, which is proven to be uniformly ultimately bounded (UUB). The convergence of the tracking error system is also proven. Two examples are given to show the effectiveness of the proposed synchronous solution method for the chaotic system tracking problem.

  20. Neural dynamics of learning sound-action associations.

    Directory of Open Access Journals (Sweden)

    Adam McNamara

    A motor component is a pre-requisite to any communicative act, as one must inherently move to communicate. To learn to make a communicative act, the brain must be able to dynamically associate arbitrary percepts with the neural substrate underlying the pre-requisite motor activity. We aimed to investigate whether brain regions involved in complex gestures (ventral pre-motor cortex, Brodmann Area 44) were involved in mediating association between novel abstract auditory stimuli and novel gestural movements. In a functional magnetic resonance imaging (fMRI) study we asked participants to learn associations between previously unrelated novel sounds and meaningless gestures inside the scanner. We use functional connectivity analysis to eliminate the often present confound of 'strategic covert naming' when dealing with BA44 and to rule out effects of non-specific reductions in signal. Brodmann Area 44, a region incorporating Broca's region, showed strong, bilateral, negative correlation of the BOLD (blood oxygen level dependent) response with learning of sound-action associations during data acquisition. The left inferior parietal lobule (l-IPL), bilateral loci in and around visual area V5, the right orbital frontal gyrus, right hippocampus, left parahippocampus, right head of caudate, right insula and left lingual gyrus also showed decreases in BOLD response with learning. Concurrent with these decreases in BOLD response, increasing connectivity between areas of the imaged network, as well as with the right middle frontal gyrus, with rising learning performance was revealed by a psychophysiological interaction (PPI) analysis. The increasing connectivity therefore occurs within an increasingly energy-efficient network as learning proceeds. The strongest learning-related connectivity between regions was found when analysing BA44 and l-IPL seeds.
The results clearly show that BA44 and l-IPL are dynamically involved in linking gesture and sound, and therefore provide evidence that one of

  1. Modeling the dynamics of the lead bismuth eutectic experimental accelerator driven system by an infinite impulse response locally recurrent neural network

    International Nuclear Information System (INIS)

    Zio, Enrico; Pedroni, Nicola; Broggi, Matteo; Golea, Lucia Roxana

    2009-01-01

    In this paper, an infinite impulse response locally recurrent neural network (IIR-LRNN) is employed for modelling the dynamics of the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS). The network is trained by recursive back-propagation (RBP) and its ability in estimating transients is tested under various conditions. The results demonstrate the robustness of the locally recurrent scheme in the reconstruction of complex nonlinear dynamic relationships

  2. The brain as a dynamic physical system.

    Science.gov (United States)

    McKenna, T M; McMullen, T A; Shlesinger, M F

    1994-06-01

    The brain is a dynamic system that is non-linear at multiple levels of analysis. Characterization of its non-linear dynamics is fundamental to our understanding of brain function. Identifying families of attractors in phase space analysis, an approach which has proven valuable in describing non-linear mechanical and electrical systems, can prove valuable in describing a range of behaviors and associated neural activity including sensory and motor repertoires. Additionally, transitions between attractors may serve as useful descriptors for analysing state changes in neurons and neural ensembles. Recent observations of synchronous neural activity, and the emerging capability to record the spatiotemporal dynamics of neural activity by voltage-sensitive dyes and electrode arrays, provide opportunities for observing the population dynamics of neural ensembles within a dynamic systems context. New developments in the experimental physics of complex systems, such as the control of chaotic systems, selection of attractors, attractor switching and transient states, can be a source of powerful new analytical tools and insights into the dynamics of neural systems.

  3. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adaptation of artificial neural networks to robust fault diagnosis schemes. It presents neural networks-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. A part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, as well as the importance of robustness. The book has tutorial value and can serve as a good starting point for newcomers to this field. The book is also devoted to advanced schemes of description of neural model uncertainty. In particular, the methods of computation of neural networks uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  4. Bellman's GAP--a language and compiler for dynamic programming in sequence analysis.

    Science.gov (United States)

    Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert

    2013-03-01

    Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman's GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. In Bellman's GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive to carefully hand-crafted implementations. This article introduces the Bellman's GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman's GAP as an implementation platform of 'real-world' bioinformatics tools. Bellman's GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics.
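For comparison, here is the kind of hand-written dynamic programming recurrence that Bellman's GAP abstracts away behind tree grammars and evaluation algebras: a minimal global-alignment scorer in the classic Needleman-Wunsch style. This is a standard bioinformatics DP, not code from the article, and the scoring parameters are illustrative.

```python
def align_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the classic Needleman-Wunsch recurrence."""
    n, m = len(a), len(b)
    # D[i][j] = best score for aligning the prefixes a[:i] and b[:j]
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + sub,   # (mis)match
                          D[i - 1][j] + gap,       # gap in b
                          D[i][j - 1] + gap)       # gap in a
    return D[n][m]
```

The explicit subscripts in `D[i][j]` are exactly the error-prone detail that a declarative GAP-L description (grammar plus scoring algebra) lets the programmer avoid.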

  5. Passivation and control of partially known SISO nonlinear systems via dynamic neural networks

    Directory of Open Access Journals (Sweden)

    Reyes-Reyes J.

    2000-01-01

    Full Text Available In this paper, an adaptive technique is suggested to provide the passivity property for a class of partially known SISO nonlinear systems. A simple Dynamic Neural Network (DNN), containing only two neurons and without any hidden layers, is used to identify the unknown nonlinear system. By means of a Lyapunov-like analysis the new learning law for this DNN, guaranteeing both successful identification and passivation effects, is derived. Based on this adaptive DNN model, an adaptive feedback controller, serving a wide class of nonlinear systems with an a priori incomplete model description, is designed. Two typical examples illustrate the effectiveness of the suggested approach.

  6. Large-scale hydropower system optimization using dynamic programming and object-oriented programming: the case of the Northeast China Power Grid.

    Science.gov (United States)

    Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R

    2013-01-01

    This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
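To make the discretization idea concrete, the toy sketch below runs backward dynamic programming for a single reservoir over a handful of discrete storage levels. Everything here is invented for illustration (two stages, three levels, a linear release benefit standing in for the head/flow calculation); the paper's DDDP model handles ten coupled reservoirs and a much finer state space.

```python
def reservoir_dp(T, levels, inflow, price, release_benefit):
    """Backward DP over discretized storage levels for one toy reservoir.

    V[s] holds the best benefit obtainable from the current stage onward
    when starting that stage with storage s.
    """
    V = {s: 0.0 for s in levels}
    policy = []
    for t in reversed(range(T)):
        V_new, pi = {}, {}
        for s in levels:
            best = None
            for s_next in levels:
                release = s + inflow[t] - s_next   # water balance
                if release < 0:                    # cannot release negative water
                    continue
                val = price[t] * release_benefit(release) + V[s_next]
                if best is None or val > best:
                    best, pi[s] = val, s_next
            V_new[s] = best
        V = V_new
        policy.insert(0, pi)
    return V, policy

# Two stages with rising prices: the best plan holds water back in stage 0
# and releases everything at the higher stage-1 price.
V, policy = reservoir_dp(T=2, levels=[0, 1, 2], inflow=[1, 1],
                         price=[1, 2], release_benefit=lambda r: r)
```

The "curse of dimensionality" shows up when the state becomes a tuple of ten storage levels: the inner loops then range over the cross product of all levels, which is what DDDP's restricted search corridor and careful memory management mitigate.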

  7. Rule of Thumb and Dynamic Programming

    NARCIS (Netherlands)

    Lettau, M.; Uhlig, H.F.H.V.S.

    1995-01-01

    This paper studies the relationships between learning about rules of thumb (represented by classifier systems) and dynamic programming. Building on a result about Markovian stochastic approximation algorithms, we characterize all decision functions that can be asymptotically obtained through

  8. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data

    Directory of Open Access Journals (Sweden)

    Evangelos Stromatias

    2017-06-01

    Full Text Available This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.
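A stripped-down version of the histogram-then-SGD idea can be sketched as follows: collapse spike events into per-neuron count histograms and train a plain logistic classifier on them. The spike generator, the two-class structure and all parameters are invented for illustration; the paper operates on the spiking activity of real SNN layers and DVS data, not on this synthetic toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_histogram(events, n_neurons):
    """Collapse (time, neuron_id) spike events into per-neuron counts."""
    h = np.zeros(n_neurons)
    for _, nid in events:
        h[nid] += 1
    return h

def make_sample(k, n_neurons=10, n_spikes=100):
    """Synthetic spikes: class k makes neuron group k fire more often."""
    p = np.full(n_neurons, 1.0)
    p[5 * k:5 * k + 5] = 4.0           # class-dependent firing preference
    p /= p.sum()
    ids = rng.choice(n_neurons, size=n_spikes, p=p)
    return [(t, i) for t, i in enumerate(ids)]

# Train a logistic classifier on normalized histograms with plain SGD.
w, b = np.zeros(10), 0.0
for _ in range(20):
    for k in (0, 1):
        x = spike_histogram(make_sample(k), 10) / 100.0
        z = 1.0 / (1.0 + np.exp(-(w @ x + b)))
        g = z - k                      # gradient of the log-loss w.r.t. the logit
        w -= 0.5 * g * x
        b -= 0.5 * g

def predict(k):
    x = spike_histogram(make_sample(k), 10) / 100.0
    return (1.0 / (1.0 + np.exp(-(w @ x + b)))) > 0.5

# Accuracy on fresh synthetic samples.
test_acc = np.mean([predict(k) == bool(k) for k in (0, 1) for _ in range(20)])
```

The key point carried over from the paper is that the classifier never sees individual spike times, only the accumulated activity immediately preceding it.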

  9. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data.

    Science.gov (United States)

    Stromatias, Evangelos; Soto, Miguel; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabé

    2017-01-01

    This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.

  10. Programming of the appetite-regulating neural network: a link between maternal overnutrition and the programming of obesity?

    Science.gov (United States)

    Mühlhäusler, B S

    2007-01-01

    The concept of a functional foetal "appetite regulatory neural network" is a new and potentially critical one. There is a growing body of evidence showing that the nutritional environment to which the foetus is exposed during prenatal and perinatal development has long-term consequences for the function of the appetite-regulating neural network and therefore the way in which an individual regulates energy balance throughout later life. This is of particular importance in the context of evidence obtained from a wide range of epidemiological studies, which have shown that individuals exposed to an elevated nutrient supply before birth have an increased risk of becoming obese as children and adults. This review summarises the key pieces of experimental evidence, by our group and others, that have contributed to our current understanding of the programming of appetite, and highlights the important questions that are yet to be answered. It is clear that this area of research has the potential to generate, within the next few years, interventions that could begin to alleviate the adverse long-term consequences of being exposed to an elevated nutrient supply before birth.

  11. Programming Unconventional Computers: Dynamics, Development, Self-Reference

    Directory of Open Access Journals (Sweden)

    Susan Stepney

    2012-10-01

    Full Text Available Classical computing has well-established formalisms for specifying, refining, composing, proving, and otherwise reasoning about computations. These formalisms have matured over the past 70 years or so. Unconventional Computing includes the use of novel kinds of substrates–from black holes and quantum effects, through to chemicals, biomolecules, even slime moulds–to perform computations that do not conform to the classical model. Although many of these unconventional substrates can be coerced into performing classical computation, this is not how they “naturally” compute. Our ability to exploit unconventional computing is partly hampered by a lack of corresponding programming formalisms: we need models for building, composing, and reasoning about programs that execute in these substrates. What might, say, a slime mould programming language look like? Here I outline some of the issues and properties of these unconventional substrates that need to be addressed to find “natural” approaches to programming them. Important concepts include embodied real values, processes and dynamical systems, generative systems and their meta-dynamics, and embodied self-reference.

  12. Microsoft Dynamics NAV 7 programming cookbook

    CERN Document Server

    Raul, Rakesh

    2013-01-01

    Written in the style of a cookbook, Microsoft Dynamics NAV 7 Programming Cookbook is full of recipes to help you get the job done. If you are a junior or entry-level NAV developer, the first half of the book is designed primarily for you; you may or may not have any programming experience, and it focuses on the basics of NAV programming. If you are a mid-level NAV developer, you will find these chapters explain how to think outside of the NAV box when building solutions. There are also recipes that senior developers will find useful.

  13. Hybrid Semantics of Stochastic Programs with Dynamic Reconfiguration

    Directory of Open Access Journals (Sweden)

    Alberto Policriti

    2009-10-01

    Full Text Available We begin by reviewing a technique to approximate the dynamics of stochastic programs --written in a stochastic process algebra-- by a hybrid system, suitable to capture a mixed discrete/continuous evolution. In a nutshell, the discrete dynamics is kept stochastic while the continuous evolution is given in terms of ODEs, and the overall technique, therefore, naturally associates a Piecewise Deterministic Markov Process with a stochastic program. The specific contribution in this work consists in an increase of the flexibility of the translation scheme, obtained by allowing a dynamic reconfiguration of the degree of discreteness/continuity of the semantics. We also discuss the relationships of this approach with other hybrid simulation strategies for biochemical systems.

  14. Field-theoretic approach to fluctuation effects in neural networks

    International Nuclear Information System (INIS)

    Buice, Michael A.; Cowan, Jack D.

    2007-01-01

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginzburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.

  15. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network, Richard... ...Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  16. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    Science.gov (United States)

    Deng, Xinyi

    2016-08-01

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such a physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in

  17. Qualitative analysis and control of complex neural networks with delays

    CERN Document Server

    Wang, Zhanshan; Zheng, Chengde

    2016-01-01

    This book focuses on the stability of the dynamical neural system, synchronization of the coupled neural system and their applications in automation control and electrical engineering. The redefined concepts of stability, synchronization and consensus are adopted to provide a better explanation of the complex neural network. Researchers in the fields of dynamical systems, computer science, electrical engineering and mathematics will benefit from the discussions on complex systems. The book will also help readers to better understand the theory behind the control technique and its design.

  18. Simulating the dynamics of the neutron flux in a nuclear reactor by locally recurrent neural networks

    International Nuclear Information System (INIS)

    Cadini, F.; Zio, E.; Pedroni, N.

    2007-01-01

    In this paper, a locally recurrent neural network (LRNN) is employed for approximating the temporal evolution of a nonlinear dynamic system model of a simplified nuclear reactor. To this aim, an infinite impulse response multi-layer perceptron (IIR-MLP) is trained according to a recursive back-propagation (RBP) algorithm. The network nodes contain internal feedback paths and their connections are realized by means of IIR synaptic filters, which provide the LRNN with the necessary system state memory.

  19. A Combination of Central Pattern Generator-based and Reflex-based Neural Networks for Dynamic, Adaptive, Robust Bipedal Locomotion

    DEFF Research Database (Denmark)

    Di Canio, Giuliano; Larsen, Jørgen Christian; Wörgötter, Florentin

    2016-01-01

    Robotic systems inspired by humans have always ignited the curiosity of engineers and scientists. Of many challenges, human locomotion is a very difficult one, where a number of different systems need to interact in order to generate a correct and balanced pattern. To simulate the interaction of these systems, implementations with reflex-based or central pattern generator (CPG)-based controllers have been tested on bipedal robot systems. In this paper we combine the two controller types into a controller that works with both reflex and CPG signals. We use a reflex-based neural network to generate basic walking patterns of a dynamic bipedal walking robot (DACBOT) and then a CPG-based neural network to ensure robust walking behavior.

  20. Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms.

    Science.gov (United States)

    Zhang, Huaguang; Jiang, He; Luo, Chaomin; Xiao, Geyang

    2017-10-01

    In this paper, we investigate the nonzero-sum games for a class of discrete-time (DT) nonlinear systems by using a novel policy iteration (PI) adaptive dynamic programming (ADP) method. The main idea of our proposed PI scheme is to utilize the iterative ADP algorithm to obtain the iterative control policies, which not only ensure the system to achieve stability but also minimize the performance index function for each player. This paper integrates game theory, optimal control theory, and reinforcement learning technique to formulate and handle the DT nonzero-sum games for multiplayer. First, we design three actor-critic algorithms, an offline one and two online ones, for the PI scheme. Subsequently, neural networks are employed to implement these algorithms and the corresponding stability analysis is also provided via the Lyapunov theory. Finally, a numerical simulation example is presented to demonstrate the effectiveness of our proposed approach.
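The evaluate/improve loop that the ADP scheme generalizes can be seen in miniature in classical tabular policy iteration. The sketch below runs it on an invented two-state, two-action, single-player MDP; the paper's contribution is to extend this loop to multiplayer nonzero-sum games on nonlinear systems, with neural networks replacing the exact evaluation step.

```python
import numpy as np

# A toy 2-state, 2-action MDP (made-up numbers, single player).
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.1, 0.9], [0.8, 0.2]],   # action 1
])
R = np.array([[1.0, 0.0],       # R[a, s] immediate reward
              [0.0, 2.0]])
gamma = 0.9

policy = np.zeros(2, dtype=int)          # start: always take action 0
for _ in range(10):
    # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
    P_pi = np.array([P[policy[s], s] for s in range(2)])
    R_pi = np.array([R[policy[s], s] for s in range(2)])
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
    # Policy improvement: greedy one-step lookahead.
    Q = R + gamma * P @ V                # Q[a, s]
    new_policy = Q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break                            # policy is stable, hence optimal
    policy = new_policy
```

Here the policy converges in two iterations: forgo the immediate reward in state 0 only if the discounted lookahead says so (with these numbers it does not), and always take the high-reward action in state 1.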

  1. Dynamics of coupled mode solitons in bursting neural networks

    Science.gov (United States)

    Nfor, N. Oma; Ghomsi, P. Guemkam; Moukam Kakmeni, F. M.

    2018-02-01

    Using an electrically coupled chain of Hindmarsh-Rose neural models, we analytically derived the nonlinearly coupled complex Ginzburg-Landau equations. This is realized by superimposing the lower and upper cutoff modes of wave propagation and by employing the multiple scale expansions in the semidiscrete approximation. We explore the modified Hirota method to analytically obtain the bright-bright pulse soliton solutions of our nonlinearly coupled equations. With these bright solitons as initial conditions of our numerical scheme, and knowing that electrical signals are the basis of information transfer in the nervous system, it is found that prior to collisions at the boundaries of the network, neural information is purely conveyed by bisolitons at the lower cutoff mode. After collision, the bisolitons are completely annihilated and neural information is now relayed by the upper cutoff mode via the propagation of plane waves. It is also shown that the linear gain of the system is inextricably linked to the complex physiological mechanisms of ion mobility, since the speeds and spatial profiles of the coupled nerve impulses vary with the gain. A linear stability analysis performed on the coupled system mainly confirms the instability of plane waves in the neural network, with a glaring example of the transition of weak plane waves into a dark soliton and then static kinks. Numerical simulations have confirmed the annihilation phenomenon subsequent to collision in neural systems. They also showed that the symmetry breaking of the pulse solution of the system leaves static internal modes, sometimes referred to as Goldstone modes, in the network.
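The chain element used above is the standard three-variable Hindmarsh-Rose neuron. As a minimal sketch (a single uncoupled unit with textbook parameters and plain Euler integration, not the paper's coupled chain or its analysis):

```python
import numpy as np

def hindmarsh_rose(T=40000, dt=0.005, I=3.0, r=0.006, s=4.0, x0=-1.6):
    """Euler integration of one Hindmarsh-Rose neuron (standard parameters)."""
    x, y, z = -1.6, 0.0, 0.0
    xs = np.empty(T)
    for t in range(T):
        dx = y + 3.0 * x**2 - x**3 - z + I   # membrane potential (fast)
        dy = 1.0 - 5.0 * x**2 - y            # fast recovery variable
        dz = r * (s * (x - x0) - z)          # slow adaptation -> bursting
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[t] = x
    return xs

trace = hindmarsh_rose()
```

With an applied current of I = 3.0 the slow variable z switches the fast subsystem between spiking and quiescence, producing the bursting dynamics that the electrically coupled chain in the paper builds on.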

  2. Integrated evolutionary computation neural network quality controller for automated systems

    Energy Technology Data Exchange (ETDEWEB)

    Patro, S.; Kolarik, W.J. [Texas Tech Univ., Lubbock, TX (United States). Dept. of Industrial Engineering

    1999-06-01

    With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to identification and control of dynamic processes has been discussed. The limitations of using neural networks for control purposes have been pointed out, and a different technique, evolutionary computation, has been discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods have been presented. A framework for an integrated system, using both neural networks and evolutionary computation, has been proposed to identify the process and then control the product quality, in a dynamic, multivariable system, in real time.

  3. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.
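The localized RBF approximation at the heart of this scheme can be illustrated with a toy fit: Gaussian basis functions centered on a grid approximate an unknown periodic function sampled along an orbit. The target function, grid, width and batch least-squares solve are all invented stand-ins; the paper instead adjusts the weights online with an adaptive update law under a persistence-of-excitation condition.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian RBF features localized on a grid over the trajectory variable.
centers = np.linspace(0, 2 * np.pi, 25)
width = 0.4

def phi(x):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def target(x):
    # A made-up periodic "system dynamics" along the orbit.
    return np.sin(x) + 0.5 * np.cos(2 * x)

# Batch least-squares weights (stand-in for the adaptive update law).
x_train = rng.uniform(0, 2 * np.pi, 200)
W, *_ = np.linalg.lstsq(phi(x_train), target(x_train), rcond=None)

# The approximation is accurate along the sampled orbit (away from the edges).
x_test = np.linspace(0.2, 2 * np.pi - 0.2, 100)
err = np.max(np.abs(phi(x_test) @ W - target(x_test)))
```

The locality of the Gaussians mirrors the paper's point that accurate approximation is obtained in a region along the periodic trajectory: only the centers near visited states contribute, and only their weights need to converge.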

  4. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451
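The symmetric Hopfield picture that this work generalizes (memories as energy minima, retrieval as gradient descent) can be shown in a few lines: a binary Hopfield network with Hebbian weights and asynchronous sign updates, which provably never increase the energy. The patterns and sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Store two random +/-1 patterns with the Hebbian rule (symmetric weights).
N = 64
patterns = rng.choice([-1, 1], size=(2, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def energy(s):
    """Hopfield energy; asynchronous sign updates never increase it."""
    return -0.5 * s @ W @ s

def recall(s, sweeps=5):
    s = s.copy()
    for _ in range(sweeps):
        for i in range(N):             # asynchronous (one neuron at a time)
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 10 bits of pattern 0, then let the dynamics descend the landscape.
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1
recovered = recall(probe)
```

Because W here is symmetric, the dynamics can only run downhill to fixed points; the abstract's point is that realistic asymmetric connectivity adds a non-gradient flux term, which is what sustains oscillations that no symmetric network can produce.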

  5. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments.

  6. Neural - levelset shape detection segmentation of brain tumors in dynamic susceptibility contrast enhanced and diffusion weighted magnetic resonance images

    International Nuclear Information System (INIS)

    Vijayakumar, C.; Bhargava, Sunil; Gharpure, Damayanti Chandrashekhar

    2008-01-01

    A novel Neuro-level set shape detection algorithm is proposed and evaluated for segmentation and grading of brain tumors. The algorithm evaluates vascular and cellular information provided by dynamic susceptibility contrast magnetic resonance images and apparent diffusion coefficient maps. The proposed neural shape detection algorithm is based on the level set algorithm (a shape detection algorithm) and utilizes a neural block to provide the speed image for the level set method. In this study, two different architectures of the level set method have been implemented and their results are compared. The results show that the proposed Neuro-shape detection performs better in differentiating tumor, edema and necrosis in reconstructed images of perfusion and diffusion weighted magnetic resonance images. (author)

  7. A methodology based on dynamic artificial neural network for short-term forecasting of the power output of a PV generator

    International Nuclear Information System (INIS)

    Almonacid, F.; Pérez-Higueras, P.J.; Fernández, Eduardo F.; Hontoria, L.

    2014-01-01

    Highlights: • The output of most renewable energy sources depends on the variability of the weather conditions. • Short-term forecasting is going to be essential for effectively integrating solar energy sources. • A new method based on an artificial neural network to predict the power output of a PV generator one hour ahead is proposed. • The method uses dynamic artificial neural networks to predict global solar irradiance and air temperature. • The methodology developed can be used to estimate the power output of a PV generator within a satisfactory margin of error. - Abstract: One of the problems of some renewable energies is that the output of these kinds of systems is non-dispatchable, depending on the variability of weather conditions, which cannot be predicted and controlled. From this point of view, short-term forecasting is going to be essential for effectively integrating solar energy sources, being a very useful tool for the reliability and stability of the grid and ensuring that an adequate supply is present. In this paper a new methodology for forecasting the output of a PV generator one hour ahead, based on a dynamic artificial neural network, is presented. The results of this study show that the proposed methodology can be used to forecast the power output of PV systems one hour ahead with an acceptable degree of accuracy.
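
    As a hedged illustration of forecasting with a dynamic (time-lagged) neural network: the tiny one-hidden-layer architecture, sine-wave series, and training constants below are invented for the sketch and are far simpler than the paper's irradiance/temperature model, but they show the core idea of predicting the next sample of a series from its recent past.

```python
# Hedged sketch: a tiny time-lagged neural net predicts the next sample of a
# series from its two previous samples. All architecture and training
# constants are illustrative, not the paper's dynamic ANN.
import math

series = [math.sin(0.3 * k) for k in range(200)]

# one hidden layer of 3 tanh units; small deterministic initialization
W1 = [[0.1, -0.2], [0.2, 0.1], [-0.1, 0.15]]
b1 = [0.0, 0.0, 0.0]
W2 = [0.1, -0.1, 0.2]
b2 = 0.0

def forward(x):
    h = [math.tanh(W1[i][0] * x[0] + W1[i][1] * x[1] + b1[i]) for i in range(3)]
    return h, sum(W2[i] * h[i] for i in range(3)) + b2

def epoch(lr=0.05):
    """One pass of stochastic gradient descent; returns mean squared error."""
    global b2
    err = 0.0
    for k in range(2, len(series)):
        x, t = (series[k - 2], series[k - 1]), series[k]
        h, y = forward(x)
        e = y - t
        err += e * e
        for i in range(3):
            gh = e * W2[i] * (1.0 - h[i] * h[i])   # backprop through tanh
            W2[i] -= lr * e * h[i]
            W1[i][0] -= lr * gh * x[0]
            W1[i][1] -= lr * gh * x[1]
            b1[i] -= lr * gh
        b2 -= lr * e
    return err / (len(series) - 2)

errs = [epoch() for _ in range(30)]
print(errs[0] > errs[-1])   # training error should shrink across epochs
```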

  8. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network.

    Science.gov (United States)

    Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang

    2017-01-01

    By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and the robot dynamics into account, is presented and investigated for fault-tolerant motion planning of a redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network.
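
    The QP-via-recurrent-network idea can be sketched in a few lines. The discrete-time projection iteration below handles only bound constraints (the paper's scheme also carries equality constraints), and the matrix H, vector c, bounds, and step size are illustrative choices, not the manipulator's actual QP.

```python
# Hedged sketch: a discrete-time projection "neural network" iteration for
#   min 0.5 x^T H x + c^T x   subject to   lo <= x <= hi.
# Each iteration is one state update of the recurrent network: move against
# the gradient, then project back onto the box. H, c, bounds and the step
# size are illustrative toy choices.

def solve_bound_qp(H, c, lo, hi, step=0.1, iters=2000):
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the quadratic objective: H x + c
        grad = [sum(H[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        # one "neural" state update: gradient step followed by box projection
        x = [min(hi[i], max(lo[i], x[i] - step * grad[i])) for i in range(n)]
    return x

# Example: unconstrained minimum at (1, 2); the box [0,1]^2 clips it to (1, 1).
x_star = solve_bound_qp([[2.0, 0.0], [0.0, 2.0]], [-2.0, -4.0],
                        [0.0, 0.0], [1.0, 1.0])
print([round(v, 4) for v in x_star])  # → [1.0, 1.0]
```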

  9. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    Science.gov (United States)

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.
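
    The simulated-annealing half of such a trainer can be sketched as follows; the toy loss surface, proposal width, and cooling schedule are illustrative choices, not SAGRAD's actual settings (SAGRAD itself is Fortran 77, and would hand the annealed weights to the scaled conjugate gradient stage for refinement).

```python
# Hedged sketch of simulated-annealing weight (re)initialization: anneal a
# weight vector on a nonconvex loss, as a stand-in for the annealing stage
# that precedes conjugate-gradient refinement. All constants are illustrative.
import math, random

def loss(w):
    # toy nonconvex loss with its global minimum near w = (1, -1)
    return (w[0] - 1.0) ** 2 + (w[1] + 1.0) ** 2 + 0.3 * math.sin(5 * w[0]) ** 2

def anneal(w, temp=1.0, cooling=0.995, steps=4000, seed=0):
    rng = random.Random(seed)
    cur, cur_loss = list(w), loss(w)
    best, best_loss = list(cur), cur_loss
    for _ in range(steps):
        cand = [x + rng.gauss(0.0, 0.3) for x in cur]
        cand_loss = loss(cand)
        # Metropolis rule: always accept improvements, sometimes accept uphill
        # moves so the search can escape local minima and flat regions
        if cand_loss < cur_loss or rng.random() < math.exp((cur_loss - cand_loss) / temp):
            cur, cur_loss = cand, cand_loss
            if cur_loss < best_loss:
                best, best_loss = list(cur), cur_loss
        temp *= cooling
    return best, best_loss

w0 = [4.0, 4.0]
w_best, l_best = anneal(w0)
print(l_best < loss(w0))   # annealing should improve on the starting weights
```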

  10. Neural dynamics of feedforward and feedback processing in figure-ground segregation.

    Science.gov (United States)

    Layton, Oliver W; Mingolla, Ennio; Yazdanbakhsh, Arash

    2014-01-01

    Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes is exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. The activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation.

  11. Neural Dynamics of Feedforward and Feedback Processing in Figure-Ground Segregation

    Directory of Open Access Journals (Sweden)

    Oliver W. Layton

    2014-09-01

    Full Text Available Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes is exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. The activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation.

  12. Neural dynamics of feedforward and feedback processing in figure-ground segregation

    Science.gov (United States)

    Layton, Oliver W.; Mingolla, Ennio; Yazdanbakhsh, Arash

    2014-01-01

    Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes is exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. The activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation. PMID:25346703

  13. Effects of Aging on Cortical Neural Dynamics and Local Sleep Homeostasis in Mice.

    Science.gov (United States)

    McKillop, Laura E; Fisher, Simon P; Cui, Nanyi; Peirson, Stuart N; Foster, Russell G; Wafford, Keith A; Vyazovskiy, Vladyslav V

    2018-04-18

    Healthy aging is associated with marked effects on sleep, including its daily amount and architecture, as well as the specific EEG oscillations. Neither the neurophysiological underpinnings nor the biological significance of these changes are understood, and crucially the question remains whether aging is associated with reduced sleep need or a diminished capacity to generate sufficient sleep. Here we tested the hypothesis that aging may affect local cortical networks, disrupting the capacity to generate and sustain sleep oscillations, and with it the local homeostatic response to sleep loss. We performed chronic recordings of cortical neural activity and local field potentials from the motor cortex in young and older male C57BL/6J mice, during spontaneous waking and sleep, as well as during sleep after sleep deprivation. In older animals, we observed an increase in the incidence of non-rapid eye movement sleep local field potential slow waves and their associated neuronal silent (OFF) periods, whereas the overall pattern of state-dependent cortical neuronal firing was generally similar between ages. Furthermore, we observed that the response to sleep deprivation at the level of local cortical network activity was not affected by aging. Our data thus suggest that the local cortical neural dynamics and local sleep homeostatic mechanisms, at least in the motor cortex, are not impaired during healthy senescence in mice. This indicates that powerful protective or compensatory mechanisms may exist to maintain neuronal function stable across the life span, counteracting global changes in sleep amount and architecture. SIGNIFICANCE STATEMENT The biological significance of age-dependent changes in sleep is unknown but may reflect either a diminished sleep need or a reduced capacity to generate deep sleep stages. As aging has been linked to profound disruptions in cortical sleep oscillations and because sleep need is reflected in specific patterns of cortical activity, we

  14. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
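
    A minimal member of this model family is Rulkov's map-based neuron, sketched below with illustrative parameters. A fast-threshold-modulation synapse would switch the input term `inp` when a presynaptic neuron's fast variable crosses a threshold; here a single uncoupled bursting neuron is simulated, which is the building block such networks couple.

```python
# Hedged sketch of a map-based neuron (Rulkov's chaotic map), one common
# choice in this model family; parameters are illustrative. The fast
# variable x spikes and bursts while the slow variable y adapts.

def rulkov_step(x, y, alpha=4.1, mu=0.001, sigma=-1.0, inp=0.0):
    x_new = alpha / (1.0 + x * x) + y + inp   # fast (spiking) subsystem
    y_new = y - mu * (x - sigma)              # slow adaptation subsystem
    return x_new, y_new

def simulate(n=5000):
    x, y = -1.0, -3.0
    xs = []
    for _ in range(n):
        x, y = rulkov_step(x, y)
        xs.append(x)
    return xs

xs = simulate()
spikes = sum(1 for v in xs if v > 0.0)   # crude spike count via a threshold
print(spikes > 0, max(abs(v) for v in xs) < 10.0)
```

    The fast-slow structure is what makes the phase-plane and fast-slow decomposition techniques mentioned in the abstract applicable: freezing y yields a one-dimensional fast map whose bifurcations delimit the bursts.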

  15. Planar multibody dynamics formulation, programming and applications

    CERN Document Server

    Nikravesh, Parviz E

    2007-01-01

    Introduction Multibody Mechanical Systems Types of Analyses Methods of Formulation Computer Programming Application Examples Unit System Remarks Preliminaries Reference Axes Scalars and Vectors Matrices Vector, Array, and Matrix Differentiation Equations and Expressions Remarks Problems Fundamentals of Kinematics A Particle Kinematics of a Rigid Body Definitions Remarks Problems Fundamentals of Dynamics Newton's Laws of Motion Dynamics of a Body Force Elements Applied Forces Reaction Force Remarks Problems Point-Coordinates: Kinematics Multipoint

  16. A neural model for transient identification in dynamic processes with 'don't know' response

    Energy Technology Data Exchange (ETDEWEB)

    Mol, Antonio C. de A. E-mail: mol@ien.gov.br; Martinez, Aquilino S. E-mail: aquilino@lmp.ufrj.br; Schirru, Roberto E-mail: schirru@lmp.ufrj.br

    2003-09-01

    This work presents an approach to neural network based transient identification which allows either dynamic identification or a 'don't know' response. The approach uses two 'jump' multilayer neural networks (NNs) trained with the backpropagation algorithm. The 'jump' network is used because it is well suited to dealing with very complex patterns, which is the case for the space of the state variables during some abnormal events. The first network is responsible for the dynamic identification; it uses as input a short set (in a moving time window) of recent measurements of each variable, avoiding the need for starting events. The second network validates the instantaneous identification (from the first net) through the validation of each variable, and is responsible for allowing the system to provide a 'don't know' response. In order to validate the method, a nuclear power plant (NPP) transient identification problem comprising 15 postulated accidents, simulated for a pressurized water reactor (PWR), was proposed; noisy data were considered in the validation process in order to evaluate the robustness of the method. The results obtained reveal the ability of the method to deal with both dynamic identification of transients and a correct 'don't know' response. Another important point studied in this work is that the system has been shown to be independent of a trigger signal indicating the beginning of the transient, thus making it robust in relation to this limitation.
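
    The two-stage identify-then-validate idea can be sketched with a deliberately simplified stand-in: nearest-centroid scoring replaces the two 'jump' networks, and the transient signatures and rejection threshold below are invented for illustration only.

```python
# Hedged sketch of the identify-then-validate pattern: a classifier picks a
# transient class from a window of measurements, and a validation stage
# vetoes it ("don't know") when the window lies too far from anything seen
# in training. All signatures and thresholds are illustrative toys.

CENTROIDS = {                       # toy per-transient signatures (invented)
    "LOCA":     [0.9, 0.2, 0.1],
    "SGTR":     [0.3, 0.8, 0.4],
    "BLACKOUT": [0.1, 0.1, 0.9],
}

def identify(window, threshold=0.3):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    label, d = min(((k, dist(window, c)) for k, c in CENTROIDS.items()),
                   key=lambda kv: kv[1])
    # validation stage: refuse to answer when even the best match is poor
    return label if d <= threshold else "don't know"

print(identify([0.85, 0.25, 0.12]))   # close to the LOCA signature
print(identify([0.5, 0.5, 0.5]))      # unlike anything trained
```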

  17. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    Science.gov (United States)

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic, and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Two neural networks are used to approximate the iterative value function and to compute the iterative control law, respectively, facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
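
    A hedged, table-based sketch of undiscounted value iteration on a toy discrete-time nonlinear system is given below; a lookup table over a state grid stands in for the paper's neural-network approximators, and the plant, grids, and quadratic stage cost are illustrative. Starting from the zero (positive semi-definite) function, the iterates are monotonically nondecreasing and converge, as the abstract describes for that initialization class.

```python
# Hedged sketch of value iteration for x_{k+1} = 0.8*sin(x_k) + u_k with
# stage cost x^2 + u^2. A lookup table with linear interpolation stands in
# for the neural value-function approximator; everything is illustrative.
import math

XS = [i * 0.1 - 2.0 for i in range(41)]   # state grid on [-2, 2]
US = [i * 0.2 - 1.0 for i in range(11)]   # control grid on [-1, 1]

def step(x, u):
    # toy nonlinear plant, clipped to the tabulated range
    return max(-2.0, min(2.0, 0.8 * math.sin(x) + u))

def interp(V, x):
    # linear interpolation of the tabulated value function
    t = (x + 2.0) / 0.1
    i = min(len(XS) - 2, max(0, int(t)))
    f = t - i
    return V[i] * (1.0 - f) + V[i + 1] * f

def bellman_sweep(V):
    # one value-iteration update: V(x) <- min_u [ x^2 + u^2 + V(f(x,u)) ]
    return [min(x * x + u * u + interp(V, step(x, u)) for u in US) for x in XS]

def value_iteration(sweeps):
    V = [0.0] * len(XS)   # zero (positive semi-definite) initialization
    for _ in range(sweeps):
        V = bellman_sweep(V)
    return V

V = value_iteration(60)
print(round(V[20], 6))    # value at x = 0 stays zero
```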

  18. Statistical Physics of Neural Systems with Nonadditive Dendritic Coupling

    Directory of Open Access Journals (Sweden)

    David Breuer

    2014-03-01

    Full Text Available How neurons process their inputs crucially determines the dynamics of biological and artificial neural networks. In such neural and neural-like systems, synaptic input is typically considered to be merely transmitted linearly or sublinearly by the dendritic compartments. Yet, single-neuron experiments report pronounced supralinear dendritic summation of sufficiently synchronous and spatially close-by inputs. Here, we provide a statistical physics approach to study the impact of such nonadditive dendritic processing on single-neuron responses and the performance of associative-memory tasks in artificial neural networks. First, we compute the effect of random input to a neuron incorporating nonlinear dendrites. This approach is independent of the details of the neuronal dynamics. Second, we use those results to study the impact of dendritic nonlinearities on the network dynamics in a paradigmatic model for associative memory, both numerically and analytically. We find that dendritic nonlinearities maintain network convergence and increase the robustness of memory performance against noise. Interestingly, an intermediate number of dendritic branches is optimal for memory functionality.
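
    The effect of dendritic branch nonlinearities on associative recall can be sketched in a toy Hopfield-style network; the supralinear branch function s·|s|, the network size, and the single stored pattern below are illustrative choices, not the paper's statistical-physics setting.

```python
# Hedged sketch: neurons whose synapses are grouped into dendritic branches,
# each branch summing its inputs and (optionally) amplifying synchronous,
# same-branch drive supralinearly. Stored-pattern recall in a tiny
# Hopfield-style memory; all sizes and patterns are illustrative.
import random

N, B = 24, 4                          # neurons, dendritic branches per neuron

def dendritic_input(weights_row, state, supralinear):
    per_branch = N // B
    total = 0.0
    for b in range(B):
        # each branch sums a disjoint subset of the synapses...
        s = sum(weights_row[j] * state[j]
                for j in range(b * per_branch, (b + 1) * per_branch))
        # ...and a supralinear branch amplifies coherent same-branch drive
        total += s * abs(s) if supralinear else s
    return total

def recall(W, cue, supralinear, steps=30):
    state = list(cue)
    for _ in range(steps):
        state = [1 if dendritic_input(W[i], state, supralinear) >= 0 else -1
                 for i in range(N)]
    return state

rng = random.Random(1)
pattern = [rng.choice([-1, 1]) for _ in range(N)]
W = [[pattern[i] * pattern[j] / N if i != j else 0.0 for j in range(N)]
     for i in range(N)]                       # Hebbian weights, zero diagonal
noisy = list(pattern)
for i in rng.sample(range(N), 3):             # flip 3 bits of the stored pattern
    noisy[i] = -noisy[i]
print(recall(W, noisy, supralinear=True) == pattern)   # noisy cue is cleaned up
```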

  19. Near scale-free dynamics in neural population activity of waking/sleeping rats revealed by multiscale analysis.

    Directory of Open Access Journals (Sweden)

    Leonid A Safonov

    Full Text Available A neuron embedded in an intact brain, unlike an isolated neuron, participates in network activity at various spatial resolutions. Such multiple scale spatial dynamics is potentially reflected in multiple time scales of temporal dynamics. We identify such multiple dynamical time scales of the inter-spike interval (ISI) fluctuations of neurons of waking/sleeping rats by means of multiscale analysis. The time scale of large non-Gaussianity in the ISI fluctuations, measured with the Castaing method, ranges up to several minutes, markedly escaping the low-pass filtering characteristics of neurons. A comparison between neural activity during waking and sleeping reveals that non-Gaussianity is stronger during waking than sleeping throughout the entire range of scales observed. We find a remarkable property of near scale independence of the magnitude correlations as the primary cause of persistent non-Gaussianity. Such scale-invariance of correlations is characteristic of multiplicative cascade processes and raises the possibility of the existence of a scale independent memory preserving mechanism.

  20. Striatal Activity and Reward Relativity: Neural Signals Encoding Dynamic Outcome Valuation.

    Science.gov (United States)

    Webber, Emily S; Mankin, David E; Cromwell, Howard C

    2016-01-01

    The striatum is a key brain region involved in reward processing. Striatal activity has been linked to encoding reward magnitude and integrating diverse reward outcome information. Recent work has supported the involvement of the striatum in the valuation of outcomes. The present work extends this idea by examining striatal activity during dynamic shifts in value that include different levels and directions of magnitude disparity. A novel task was used to produce diverse relative reward effects on a chain of instrumental action. Rats (Rattus norvegicus) were trained to respond to cues associated with specific outcomes varying by food pellet magnitude. Animals were exposed to single-outcome sessions followed by mixed-outcome sessions, and neural activity was compared among identical outcome trials from the different behavioral contexts. Recordings of striatal activity show that neural responses to different task elements reflect incentive contrast as well as other relative effects that involve generalization between outcomes or possible influences of outcome variety. The activity that was most prevalent was linked to food consumption and post-consumption periods. Relative encoding was sensitive to magnitude disparity. A within-session analysis showed strong contrast effects that were dependent upon the outcome received in the immediately preceding trial. Significantly higher numbers of responses linked to relative outcome effects were found in the ventral striatum. Our results support the idea that relative value can incorporate diverse relationships, including comparisons from specific individual outcomes to general behavioral contexts. The striatum hosts these diverse relative processes, possibly enabling both a higher information yield concerning value shifts and greater behavioral flexibility.

  1. Artificial Neural Network Analysis System

    Science.gov (United States)

    2001-02-27

    Contract No. DASG60-00-M-0201 (purchase request: Foot in the Door-01). Title: Artificial Neural Network Analysis System. Company: Atlantic... Author: Powell, Bruce C. Report date: 27-02-2001; period covered: 28-10-2000 to 27-02-2001.

  2. Robustly Fitting and Forecasting Dynamical Data With Electromagnetically Coupled Artificial Neural Network: A Data Compression Method.

    Science.gov (United States)

    Wang, Ziyin; Liu, Mandan; Cheng, Yicheng; Wang, Rubin

    2017-06-01

    In this paper, a dynamical recurrent artificial neural network (ANN) is proposed and studied. Inspired by recent research in neuroscience, we introduce nonsynaptic coupling to form a dynamical component of the network. We mathematically prove that, with adequate neurons provided, this dynamical ANN model is capable of approximating any continuous dynamic system with an arbitrarily small error in a limited time interval. Its extremely concise Jacobian matrix makes the local stability easy to control. We designed this ANN for fitting and forecasting dynamic data and obtained satisfactory results in simulation. The fitting performance is also compared with those of both the classic dynamic ANN and state-of-the-art models. Sufficient trials and the statistical results indicate that our model is superior to those compared. Moreover, we propose a robust approximation problem, which asks the ANN to approximate a cluster of input-output data pairs over large ranges and to forecast the output of the system under previously unseen input. Our model and learning scheme have successfully solved this problem, and through this the approximation becomes much more robust and adaptive to noise, perturbation, and low-order harmonic waves. This approach is in effect an efficient method for compressing massive external data of a dynamic system into the weights of the ANN.

  3. Adaptive Sliding Mode Control of Dynamic Systems Using Double Loop Recurrent Neural Network Structure.

    Science.gov (United States)

    Fei, Juntao; Lu, Cheng

    2018-04-01

    In this paper, an adaptive sliding mode control system using a double loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops, where the firing weights and the output signal calculated in the last step are stored and used as the feedback signals in each feedback loop. Since the new structure combines the advantages of internal-feedback and external-feedback NNs, it can acquire the internal state information while the output signal is also captured; thus the newly designed DLRNN can achieve better approximation performance than regular NNs without feedback loops or regular RNNs with a single feedback loop. The proposed DLRNN structure is employed in an equivalent controller to approximate the unknown nonlinear system dynamics, and the parameters of the DLRNN are updated online by adaptive laws to obtain favorable approximation performance. To investigate the effectiveness of the proposed controller, the designed adaptive sliding mode controller with the DLRNN is applied to a -axis microelectromechanical system gyroscope to control the vibrating dynamics of the proof mass. Simulation results demonstrate that the proposed methodology achieves good tracking, and comparisons of the approximation performance between the radial basis function NN, the RNN, and the DLRNN show that the DLRNN can accurately estimate the unknown dynamics quickly while its internal states remain more stable.

  4. BWR-plant simulator and its neural network companion with programming under the Matlab environment

    International Nuclear Information System (INIS)

    Ghenniwa, Fatma Suleiman

    2008-01-01

    Stand-alone nuclear power plant simulators, as well as building-block based nuclear power simulators, are available from different companies throughout the world. In this work, a review of both types of simulators has been carried out, together with a survey of possible authoring tools for developing such simulators. It was decided, in this research, to develop a prototype simulator based on component building blocks. Furthermore, the authoring tool (Matlab software) was selected for programming; it has all the basic tools required for simulator development, similar to those offered by companies specializing in simulators, such as MMS, APROS and others. Simulations of individual components, as well as of integrated components for power plant simulation, have been demonstrated. A preliminary neural network reactor model, as part of a prepared library of neural network modules, has been used to demonstrate module order shuffling during simulation. The developed component library can be refined and extended for further development. (author)

  5. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
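
    One ingredient of the DNC, content-based read addressing over the external memory matrix, can be sketched as cosine similarity followed by a sharpened softmax; the memory contents, key, and sharpness beta below are illustrative (a full DNC adds write heads, temporal link matrices, and usage-based allocation).

```python
# Hedged sketch of content-based read addressing over an external memory
# matrix: similarity between a key and each memory row is sharpened by a
# softmax, and the read vector is the weighted mix of rows. Memory, key
# and beta are illustrative toy values, not the DNC's learned contents.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1e-8
    nb = math.sqrt(sum(x * x for x in b)) or 1e-8
    return dot / (na * nb)

def content_weights(memory, key, beta=10.0):
    # softmax over scaled similarity: larger beta -> harder attention
    sims = [beta * cosine(row, key) for row in memory]
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    return [e / z for e in exps]

def read(memory, key, beta=10.0):
    w = content_weights(memory, key, beta)
    width = len(memory[0])
    return [sum(w[i] * memory[i][j] for i in range(len(memory)))
            for j in range(width)]

M = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.7, 0.7, 0.0]]
r = read(M, [0.0, 1.0, 0.0])
print([round(v, 2) for v in r])   # read vector dominated by the matching row
```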

  6. Techniques for extracting single-trial activity patterns from large-scale neural recordings

    Science.gov (United States)

    Churchland, Mark M; Yu, Byron M; Sahani, Maneesh; Shenoy, Krishna V

    2008-01-01

    Large, chronically implanted arrays of microelectrodes are an increasingly common tool for recording from primate cortex, and can provide extracellular recordings from many (on the order of 100) neurons. While the desire for cortically based motor prostheses has helped drive their development, such arrays also offer great potential to advance basic neuroscience research. Here we discuss the utility of array recording for the study of neural dynamics. Neural activity often has dynamics beyond that driven directly by the stimulus. While governed by those dynamics, neural responses may nevertheless unfold differently for nominally identical trials, rendering many traditional analysis methods ineffective. We review recent studies, some employing simultaneous recording and some not, indicating that such variability is indeed present both during movement generation and during the preceding premotor computations. In such cases, large-scale simultaneous recordings have the potential to provide an unprecedented view of neural dynamics at the level of single trials. However, this enterprise will depend not only on techniques for simultaneous recording, but also on the use and further development of analysis techniques that can appropriately reduce the dimensionality of the data and allow visualization of single-trial neural behavior. PMID:18093826

  7. A Symbolic and Graphical Computer Representation of Dynamical Systems

    Science.gov (United States)

    Gould, Laurence I.

    2005-04-01

    AUTONO is a Macsyma/Maxima program, designed at the University of Hartford, for solving autonomous systems of differential equations as well as for relating Lagrangians and Hamiltonians to their associated dynamical equations. AUTONO can be used in a number of fields to decipher a variety of complex dynamical systems with ease, producing their Lagrangian and Hamiltonian equations in seconds. These equations can then be incorporated into VisSim, a modeling and simulation program, which yields graphical representations of motion in a given system through easily chosen input parameters. The program, along with the VisSim differential-equations graphical package, allows for resolution and easy understanding of complex problems in a relatively short time, enabling quicker and more advanced computing of dynamical systems on any number of platforms, from a network of sensors on a space probe, to the behavior of neural networks, to the effects of an electromagnetic field on components in a dynamical system. A flowchart of AUTONO, along with some simple applications and VisSim output, will be shown.
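
    As an illustration of the kind of symbolic derivation AUTONO automates, the following SymPy sketch (Python rather than Macsyma/Maxima; the harmonic-oscillator Lagrangian and symbol names are our own toy choices, not taken from the program) derives a dynamical equation from a Lagrangian via the Euler-Lagrange equation:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)

# Lagrangian of a harmonic oscillator: L = T - V (toy example)
L = sp.Rational(1, 2) * m * sp.diff(q, t)**2 - sp.Rational(1, 2) * k * q**2

# Euler-Lagrange equation: d/dt(dL/dq') - dL/dq = 0
eom = sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)
print(sp.simplify(eom))  # the familiar m*q'' + k*q
```

    The same expression could then be exported numerically to a simulator, which is the role VisSim plays in the workflow described above.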

  8. Optimal Operation of Radial Distribution Systems Using Extended Dynamic Programming

    DEFF Research Database (Denmark)

    Lopez, Juan Camilo; Vergara, Pedro P.; Lyra, Christiano

    2018-01-01

    An extended dynamic programming (EDP) approach is developed to optimize the ac steady-state operation of radial electrical distribution systems (EDS). Based on the optimality principle of the recursive Hamilton-Jacobi-Bellman equations, the proposed EDP approach determines the optimal operation of the EDS by setting the values of the controllable variables at each time period. A suitable definition for the stages of the problem makes it possible to represent the optimal ac power flow of radial EDS as a dynamic programming problem, wherein the 'curse of dimensionality' is a minor concern. The approach is illustrated using real-scale systems and comparisons with commercial programming solvers. Finally, generalizations to consider other EDS operation problems are also discussed.
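
    The recursive Bellman principle underlying the EDP approach can be illustrated with a toy backward dynamic program (the states, controls, and quadratic costs below are hypothetical stand-ins, not the paper's EDS formulation):

```python
# Toy backward dynamic programming over T stages, a few discrete states,
# and discrete controls -- hypothetical costs, not the paper's EDS model.
T = 4                      # number of time periods (stages)
states = [0, 1, 2]         # discretized state values
controls = [-1, 0, 1]      # admissible control actions

def step_cost(x, u):       # stage cost (assumed quadratic)
    return x**2 + u**2

def next_state(x, u):      # simple deterministic transition, clipped to [0, 2]
    return max(0, min(2, x + u))

V = {x: 0.0 for x in states}          # terminal value function
policy = []
for t in reversed(range(T)):          # Bellman backward recursion
    newV, act = {}, {}
    for x in states:
        best = min((step_cost(x, u) + V[next_state(x, u)], u) for u in controls)
        newV[x], act[x] = best
    V = newV
    policy.insert(0, act)             # optimal control for each stage/state
print(V)
```

    Each stage's value function is computed only from the next stage's, which is why the stage definition, rather than the full state trajectory, governs the computational burden.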

  9. Optimal Brain Surgeon on Artificial Neural Networks in

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Job, Jonas Hultmann; Klyver, Katrine

    2012-01-01

    It is shown how the procedure known as optimal brain surgeon can be used to trim and optimize artificial neural networks in nonlinear structural dynamics. Besides optimizing the neural network, and thereby minimizing computational cost in simulation, the surgery procedure can also serve as a quick...

  10. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  11. Autonomy in action: linking the act of looking to memory formation in infancy via dynamic neural fields.

    Science.gov (United States)

    Perone, Sammy; Spencer, John P

    2013-01-01

    Looking is a fundamental exploratory behavior by which infants acquire knowledge about the world. In theories of infant habituation, however, looking as an exploratory behavior has been deemphasized relative to the reliable nature with which looking indexes active cognitive processing. We present a new theory that connects looking to the dynamics of memory formation and formally implement this theory in a Dynamic Neural Field model that learns autonomously as it actively looks and looks away from a stimulus. We situate this model in a habituation task and illustrate the mechanisms by which looking, encoding, working memory formation, and long-term memory formation give rise to habituation across multiple stimulus and task contexts. We also illustrate how the act of looking and the temporal dynamics of learning affect each other. Finally, we test a new hypothesis about the sources of developmental differences in looking. Copyright © 2012 Cognitive Science Society, Inc.

  12. Neural-fuzzy control of adept one SCARA

    International Nuclear Information System (INIS)

    Er, M.J.; Toh, B.H.; Toh, B.Y.

    1998-01-01

    This paper presents an Intelligent Control Strategy for the Adept One SCARA (Selective Compliance Assembly Robot Arm). It covers the design and simulation study of a Neural-Fuzzy Controller (NFC) for the SCARA with a view to tracking a predetermined trajectory of motion in the joint space. The SCARA was simulated as a three-axis manipulator with the dynamics of the tool (fourth link) neglected and the mass of the load incorporated into the mass of the third link. The overall performance of the control system under different conditions, namely variation in payload, variations in the coefficients of static, dynamic and viscous friction, and different trajectories, was studied, and comparisons were made with an existing Neural Network Controller and two Computed Torque Controllers. The NFC was shown to be robust and is able to overcome the drawback of the existing Neural Network Controller

  13. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
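
    As a flavor of the kind of routine the toolbox parallelizes, here is a naive CPU-side threshold spike detector in NumPy (a simplified sketch of our own; NPE's actual algorithms, such as EC-PC detection and binary pursuit sorting, are considerably more sophisticated):

```python
import numpy as np

def detect_spikes(signal, n_sigma=4.0):
    """Naive amplitude-threshold spike detection (CPU sketch only)."""
    # Robust noise estimate from the median absolute deviation
    sigma = np.median(np.abs(signal)) / 0.6745
    thresh = n_sigma * sigma
    above = np.abs(signal) > thresh
    # A spike onset is a crossing from below to above threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets

rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 1000)       # synthetic noise-only channel
trace[[200, 600]] += 25.0            # inject two large artificial "spikes"
print(detect_spikes(trace))
```

    On a GPU, the per-sample threshold test and the crossing detection are exactly the kind of embarrassingly parallel and compact operations the toolbox maps onto thousands of channels at once.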

  14. Dynamic neural network of insight: a functional magnetic resonance imaging study on solving Chinese 'chengyu' riddles.

    Directory of Open Access Journals (Sweden)

    Qingbai Zhao

    The key components of insight include breaking mental sets and forming novel, task-related associations. The majority of researchers agree that the anterior cingulate cortex may mediate processes of breaking one's mental set, while the exact neural correlates of forming novel associations are still debatable. In the present study, we used a paradigm of answer selection to explore brain activations of insight by using event-related functional magnetic resonance imaging during solving Chinese 'chengyu' (in Chinese pinyin) riddles. Based on the participant's choice, the trials were classified into the insight and non-insight conditions. Both stimulus-locked and response-locked analyses were conducted to detect the neural activity corresponding to the early and late periods of insight solution, respectively. Our data indicate that the early period of insight solution shows more activation in the middle temporal gyrus, the middle frontal gyrus and the anterior cingulate cortex. These activities might be associated with extensive semantic processing, as well as with detecting and resolving cognitive conflicts. In contrast, the late period of insight solution produced increased activity in the hippocampus and the amygdala, possibly reflecting the forming of novel associations and the concomitant "Aha" feeling. Our study supports the key role of the hippocampus in forming novel associations, and indicates a dynamic neural network during insight solution.

  15. Assimilation of Biophysical Neuronal Dynamics in Neuromorphic VLSI.

    Science.gov (United States)

    Wang, Jun; Breen, Daniel; Akinin, Abraham; Broccard, Frederic; Abarbanel, Henry D I; Cauwenberghs, Gert

    2017-12-01

    Representing the biophysics of neuronal dynamics and behavior offers a principled analysis-by-synthesis approach toward understanding mechanisms of nervous system functions. We report on a set of procedures assimilating and emulating neurobiological data on a neuromorphic very large scale integrated (VLSI) circuit. The analog VLSI chip, NeuroDyn, features 384 digitally programmable parameters specifying 4 generalized Hodgkin-Huxley neurons coupled through 12 conductance-based chemical synapses. The parameters also describe reversal potentials, maximal conductances, and spline regressed kinetic functions for ion channel gating variables. In one set of experiments, we assimilated membrane potential recorded from one of the neurons on the chip to the model structure upon which NeuroDyn was designed using the known current input sequence. We arrived at the programmed parameters except for model errors due to analog imperfections in the chip fabrication. In a related set of experiments, we replicated songbird individual neuron dynamics on NeuroDyn by estimating and configuring parameters extracted using data assimilation from intracellular neural recordings. Faithful emulation of detailed biophysical neural dynamics will enable the use of NeuroDyn as a tool to probe electrical and molecular properties of functional neural circuits. Neuroscience applications include studying the relationship between molecular properties of neurons and the emergence of different spike patterns or different brain behaviors. Clinical applications include studying and predicting effects of neuromodulators or neurodegenerative diseases on ion channel kinetics.

  16. A user's guide to the Flexible Spacecraft Dynamics and Control Program

    Science.gov (United States)

    Fedor, J. V.

    1984-01-01

    A guide to the use of the Flexible Spacecraft Dynamics Program (FSD) is presented covering input requirements, control words, orbit generation, spacecraft description and simulation options, and output definition. The program can be used in dynamics and control analysis as well as in orbit support of deployment and control of spacecraft. The program is applicable to inertially oriented spinning, Earth oriented or gravity gradient stabilized spacecraft. Internal and external environmental effects can be simulated.

  17. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    Science.gov (United States)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There are many studies on gas turbine engine identification via dynamic neural network models. The identification process should minimize errors between the model and the real object, but questions about the processing of neural network training data sets are usually overlooked. This article studies the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signals were used to create training and testing data sets for the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created, one from each type of training data set, and each was tested against all four types of test data sets. The resulting 16 transition processes were compared with the corresponding solutions of the thermodynamic model, and the errors of all networks were compared within each test data set. The comparison shows that the ranges of error values are small; therefore, the influence of the data set type on identification accuracy is low.
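
    The four excitation types can be pictured with a short NumPy sketch (the amplitudes, frequencies, and split point below are illustrative guesses of our own, not the study's actual signals):

```python
import numpy as np

t = np.linspace(0, 10, 1001)                             # 10 s, 10 ms sampling
step  = np.where(t < 5, 0.2, 0.8)                        # abrupt set-point change
slow  = 0.5 + 0.3 * np.sin(2 * np.pi * 0.05 * t)         # slow sweep of the input
fast  = 0.5 + 0.3 * np.sin(2 * np.pi * 2.0 * t)          # fast excitation
mixed = np.concatenate([step[:500], slow[500:]])         # mixture of regimes
```

    A network trained on only one regime may then be evaluated on the other three, which is the cross-testing design described above.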

  18. Distributed cooperative H∞ optimal tracking control of MIMO nonlinear multi-agent systems in strict-feedback form via adaptive dynamic programming

    Science.gov (United States)

    Luy, N. T.

    2018-04-01

    The design of distributed cooperative H∞ optimal controllers for multi-agent systems is a major challenge when the agents' models are uncertain multi-input and multi-output nonlinear systems in strict-feedback form in the presence of external disturbances. In this paper, first, the distributed cooperative H∞ optimal tracking problem is transformed into controlling the cooperative tracking error dynamics in affine form. Second, control schemes and online algorithms are proposed via adaptive dynamic programming (ADP) and the theory of zero-sum differential graphical games. The schemes use only one neural network (NN) for each agent instead of three from ADP to reduce computational complexity as well as avoid choosing initial NN weights for stabilising controllers. It is shown that despite not using knowledge of cooperative internal dynamics, the proposed algorithms not only approximate values to Nash equilibrium but also guarantee all signals, such as the NN weight approximation errors and the cooperative tracking errors in the closed-loop system, to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is shown by simulation results of an application to wheeled mobile multi-robot systems.

  19. Adaptive Control of Nonlinear Discrete-Time Systems by Using OS-ELM Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao-Li Li

    2014-01-01

    As a novel kind of feedforward neural network with a single hidden layer, ELM (extreme learning machine) neural networks are studied for the identification and control of nonlinear dynamic systems, clearly exhibiting their simple structure and fast convergence. In this paper, we are interested in adaptive control of nonlinear dynamic plants using OS-ELM (online sequential extreme learning machine) neural networks. Based on data scope division, the problem that the training process of an ELM neural network is sensitive to the initial training data is also solved: according to the output range of the controlled plant, the data corresponding to this range are used to initialize the ELM. Furthermore, to overcome the drawback of conventional adaptive control when the OS-ELM neural network is used for adaptive control of systems with jumping parameters, the topological structure of the neural network can be adjusted dynamically by a multiple-model switching strategy, and MMAC (multiple model adaptive control) is used to improve the control performance. Simulation results are included to complement the theoretical results.

  20. Dynamics in a delayed-neural network

    International Nuclear Information System (INIS)

    Yuan Yuan

    2007-01-01

    In this paper, we consider a neural network of four identical neurons with time-delayed connections. Some parameter regions are given for global and local stability and for synchronization using the theory of functional differential equations. The root distributions in the corresponding characteristic transcendental equation are analyzed; pitchfork, Hopf and equivariant Hopf bifurcations are investigated by computing the center manifolds and normal forms. Numerical simulations show agreement with the theoretical results

  1. Dynamic Learning Objects to Teach Java Programming Language

    Science.gov (United States)

    Narasimhamurthy, Uma; Al Shawkani, Khuloud

    2010-01-01

    This article describes a model for teaching Java Programming Language through Dynamic Learning Objects. The design of the learning objects was based on effective learning design principles to help students learn the complex topic of Java Programming. Visualization was also used to facilitate the learning of the concepts. (Contains 1 figure and 2…

  2. Robustness analysis of the Zhang neural network for online time-varying quadratic optimization

    International Nuclear Information System (INIS)

    Zhang Yunong; Ruan Gongqin; Li Kene; Yang Yiwen

    2010-01-01

    A general type of recurrent neural network (termed as Zhang neural network, ZNN) has recently been proposed by Zhang et al for the online solution of time-varying quadratic-minimization (QM) and quadratic-programming (QP) problems. Global exponential convergence of the ZNN could be achieved theoretically in an ideal error-free situation. In this paper, with the normal differentiation and dynamics-implementation errors considered, the robustness properties of the ZNN model are investigated for solving these time-varying problems. In addition, linear activation functions and power-sigmoid activation functions could be applied to such a perturbed ZNN model. Both theoretical-analysis and computer-simulation results demonstrate the good ZNN robustness and superior performance for online time-varying QM and QP problem solving, especially when using power-sigmoid activation functions.
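
    A minimal simulation of the ZNN idea with a linear activation function, sketched for a small hypothetical time-varying quadratic minimization (the coefficient matrices, gain, and step size below are our own toy choices, not from the paper):

```python
import numpy as np

gamma, dt, T = 10.0, 1e-3, 2.0        # ZNN gain, Euler step, horizon

def A(t):  # time-varying positive-definite Hessian (toy example)
    return np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])

def b(t):  # time-varying linear term (toy example)
    return np.array([np.sin(t), np.cos(t)])

def ddt(f, t, h=1e-6):  # numerical time derivative of A(t) or b(t)
    return (f(t + h) - f(t - h)) / (2 * h)

x = np.zeros(2)
t = 0.0
while t < T:
    e = A(t) @ x + b(t)               # gradient of the cost: zero at the optimum
    # ZNN design with linear activation: choose xdot so that edot = -gamma * e
    xdot = np.linalg.solve(A(t), -ddt(A, t) @ x - ddt(b, t) - gamma * e)
    x += dt * xdot
    t += dt
print(np.linalg.norm(A(t) @ x + b(t)))  # residual decays toward zero
```

    The derivative terms compensate for the drift of the time-varying problem, which is what distinguishes the ZNN from a plain gradient neural network; replacing the linear activation with a power-sigmoid function is the robustness-enhancing variant discussed in the abstract.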

  3. Enhancing neural-network performance via assortativity

    International Nuclear Information System (INIS)

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-01-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  4. Permutation parity machines for neural synchronization

    International Nuclear Information System (INIS)

    Reyes, O M; Kopitzke, I; Zimmermann, K-H

    2009-01-01

    Synchronization of neural networks has been studied in recent years as an alternative to cryptographic applications such as the realization of symmetric key exchange protocols. This paper presents a first view of the so-called permutation parity machine, an artificial neural network proposed as a binary variant of the tree parity machine. The dynamics of the synchronization process by mutual learning between permutation parity machines is analytically studied and the results are compared with those of tree parity machines. It will turn out that for neural synchronization, permutation parity machines form a viable alternative to tree parity machines

  5. Complex-valued neural networks advances and applications

    CERN Document Server

    Hirose, Akira

    2013-01-01

    Presents the latest advances in complex-valued neural networks by demonstrating the theory in a wide range of applications Complex-valued neural networks is a rapidly developing neural network framework that utilizes complex arithmetic, exhibiting specific characteristics in its learning, self-organizing, and processing dynamics. They are highly suitable for processing complex amplitude, composed of amplitude and phase, which is one of the core concepts in physical systems to deal with electromagnetic, light, sonic/ultrasonic waves as well as quantum waves, namely, electron and

  6. Fast and Cache-Oblivious Dynamic Programming with Local Dependencies

    DEFF Research Database (Denmark)

    Bille, Philip; Stöckel, Morten

    2012-01-01

    ...are widely used in bioinformatics to compare DNA and protein sequences. These problems can all be solved using essentially the same dynamic programming scheme over a two-dimensional matrix, where each entry depends locally on at most 3 neighboring entries. We present a simple, fast, and cache-oblivious algorithm for this type of local dynamic programming suitable for comparing large-scale strings. Our algorithm outperforms the previous state-of-the-art solutions. Surprisingly, our new simple algorithm is competitive with a complicated, optimized, and tuned implementation of the best cache-aware algorithm...
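
    Edit distance is a canonical instance of this scheme: each matrix entry depends only on its three neighbors. A straightforward (non-cache-oblivious) reference implementation, for comparison with the optimized algorithms the paper targets:

```python
def edit_distance(a, b):
    """Standard dynamic program where entry (i, j) depends only on its
    three neighbors (i-1, j), (i, j-1), and (i-1, j-1)."""
    prev = list(range(len(b) + 1))    # row i-1 of the DP matrix
    for i, ca in enumerate(a, 1):
        cur = [i]                     # row i, starting with the border value
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # match/substitution
        prev = cur
    return prev[-1]

print(edit_distance("dynamic", "dynamite"))  # -> 2
```

    Keeping only two rows already reduces memory to O(n); the cache-oblivious algorithm goes further by reordering the traversal of the same matrix to minimize cache misses.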

  7. Self-organizing neural networks for automatic detection and classification of contrast-enhancing lesions in dynamic MR-mammography

    International Nuclear Information System (INIS)

    Vomweg, T.W.; Teifke, A.; Kauczor, H.U.; Achenbach, T.; Rieker, O.; Schreiber, W.G.; Heitmann, K.R.; Beier, T.; Thelen, M.

    2005-01-01

    Purpose: Investigation and statistical evaluation of 'Self-Organizing Maps', a special type of neural network in the field of artificial intelligence, for classifying contrast-enhancing lesions in dynamic MR-mammography. Material and Methods: 176 investigations with histology proven after core biopsy or operation were randomly divided into two groups. Several Self-Organizing Maps were trained on investigations from the first group to detect and classify contrast-enhancing lesions in dynamic MR-mammography. Each single pixel's signal/time curve of all patients within the second group was analyzed by the Self-Organizing Maps. The likelihood of malignancy was visualized by color overlays on the MR images. Finally, the assessment of contrast-enhancing lesions by each network was rated visually and evaluated statistically. Results: A well-balanced neural network achieved a sensitivity of 90.5% and a specificity of 72.2% in predicting malignancy of 88 enhancing lesions. Detailed analysis of false-positive results revealed that every second fibroadenoma showed a 'typical malignant' signal/time curve, leaving no chance to differentiate between fibroadenomas and malignant tissue on the basis of contrast enhancement alone; however, this special group of lesions was represented by a well-defined area of the Self-Organizing Map. Discussion: Self-Organizing Maps are capable of classifying a dynamic signal/time curve as 'typical benign' or 'typical malignant' and can therefore be used as a second opinion. In view of the now known localization on the Self-Organizing Map of fibroadenomas enhancing like malignant tumors, these lesions could be passed on for further analysis by additional post-processing elements (e.g., based on T2-weighted series or morphology analysis) in the future. (orig.)

  8. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models-also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.

  9. Low-Dimensional Models of "Neuro-Glio-Vascular Unit" for Describing Neural Dynamics under Normal and Energy-Starved Conditions.

    Science.gov (United States)

    Chhabria, Karishma; Chakravarthy, V Srinivasa

    2016-01-01

    The motivation of developing simple minimal models for neuro-glio-vascular (NGV) system arises from a recent modeling study elucidating the bidirectional information flow within the NGV system having 89 dynamic equations (1). While this was one of the first attempts at formulating a comprehensive model for neuro-glio-vascular system, it poses severe restrictions in scaling up to network levels. On the contrary, low-dimensional models are convenient devices in simulating large networks that also provide an intuitive understanding of the complex interactions occurring within the NGV system. The key idea underlying the proposed models is to describe the glio-vascular system as a lumped system, which takes neural firing rate as input and returns an "energy" variable (analogous to ATP) as output. To this end, we present two models: biophysical neuro-energy (Model 1 with five variables), comprising KATP channel activity governed by neuronal ATP dynamics, and the dynamic threshold (Model 2 with three variables), depicting the dependence of neural firing threshold on the ATP dynamics. Both the models show different firing regimes, such as continuous spiking, phasic, and tonic bursting depending on the ATP production coefficient, ɛp, and external current. We then demonstrate that in a network comprising such energy-dependent neuron units, ɛp could modulate the local field potential (LFP) frequency and amplitude. Interestingly, low-frequency LFP dominates under low ɛp conditions, which is thought to be reminiscent of seizure-like activity observed in epilepsy. The proposed "neuron-energy" unit may be implemented in building models of NGV networks to simulate data obtained from multimodal neuroimaging systems, such as functional near infrared spectroscopy coupled to electroencephalogram and functional magnetic resonance imaging coupled to electroencephalogram. Such models could also provide a theoretical basis for devising optimal neurorehabilitation strategies, such as

  10. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming of a numerical simulation program for molecular dynamics is carried out step by step using the two-phase method. Within a certain range of computing parameters, parallel performance is obtained by parallelizing at the do-loop level, decomposing the calculation according to do-loop indices across the processors of the vector-parallel computer VPP500 and the scalar-parallel computer Paragon. VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallelization can be reduced to a negligible level by vectorization; the time-consuming parts of the program are then concentrated in fewer sections that can be accelerated by do-loop-level parallelization. This report presents the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)

  11. q-state Potts-glass neural network based on pseudoinverse rule

    International Nuclear Information System (INIS)

    Xiong Daxing; Zhao Hong

    2010-01-01

    We study the q-state Potts-glass neural network with the pseudoinverse (PI) rule. Its performance is investigated and compared with that of the counterpart network with the Hebbian rule instead. We find that there exists a critical point of q, i.e., q_cr = 14, below which the storage capacity and the retrieval quality can be greatly improved by introducing the PI rule. We show that the dynamics of the neural networks constructed with the two learning rules are quite different; however, regardless of the learning rule, in q-state Potts-glass neural networks with q≥3 there is a common novel dynamical phase in which spurious memories are completely suppressed. This property has never been noticed in symmetric feedback neural networks. Freedom from spurious memories implies that multistate Potts-glass neural networks will not be trapped in metastable states, which is a favorable property for their applications.
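
    For the binary (q = 2-like) case, the pseudoinverse rule reduces to a projection onto the stored patterns, which makes them exact fixed points of the retrieval dynamics; a NumPy sketch (network size, loading, and the zero-diagonal convention are arbitrary illustrative choices of our own):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 64, 20                      # neurons, stored patterns (loading p/N ~ 0.31)
X = rng.choice([-1, 1], size=(N, p)).astype(float)   # patterns as columns

# Pseudoinverse (projection) rule: W = X (X^T X)^{-1} X^T = X X^+, so that
# W @ x = x exactly for every stored pattern x, unlike the Hebbian rule
# W = X X^T / N, for which storage only holds approximately at low loading.
W = X @ np.linalg.pinv(X)
np.fill_diagonal(W, 0.0)           # remove self-coupling (common convention)

x = np.sign(W @ X[:, 0])           # one synchronous update from a stored pattern
print(np.array_equal(x, X[:, 0]))  # stored pattern is a fixed point
```

    The Potts-glass version generalizes this to q-state neurons and Potts interactions, but the mechanism — building the coupling matrix from the pattern correlation matrix rather than raw Hebbian overlaps — is the same.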

  12. The Dynamic Geometrisation of Computer Programming

    Science.gov (United States)

    Sinclair, Nathalie; Patterson, Margaret

    2018-01-01

    The goal of this paper is to explore dynamic geometry environments (DGE) as a type of computer programming language. Using projects created by secondary students in one particular DGE, we analyse the extent to which the various aspects of computational thinking--including both ways of doing things and particular concepts--were evident in their…

  13. Quantum optical device accelerating dynamic programming

    OpenAIRE

    Grigoriev, D.; Kazakov, A.; Vakulenko, S.

    2005-01-01

    In this paper we discuss analogue computers based on quantum optical systems that accelerate dynamic programming for some computational problems. These computers, at least in principle, can be realized with actually existing devices. We estimate the acceleration over deterministic computers that can be obtained in this way in solving some NP-hard problems.

  14. Containment control of networked autonomous underwater vehicles: A predictor-based neural DSC design.

    Science.gov (United States)

    Peng, Zhouhua; Wang, Dan; Wang, Wei; Liu, Lu

    2015-11-01

    This paper investigates the containment control problem of networked autonomous underwater vehicles in the presence of model uncertainty and unknown ocean disturbances. A predictor-based neural dynamic surface control design method is presented to develop the distributed adaptive containment controllers, under which the trajectories of follower vehicles nearly converge to the dynamic convex hull spanned by multiple reference trajectories over a directed network. Prediction errors, rather than tracking errors, are used to update the neural adaptation laws, which are independent of the tracking error dynamics, resulting in two time-scales to govern the entire system. The stability property of the closed-loop network is established via Lyapunov analysis, and transient property is quantified in terms of L2 norms of the derivatives of neural weights, which are shown to be smaller than the classical neural dynamic surface control approach. Comparative studies are given to show the substantial improvements of the proposed new method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Dynamic Fault Diagnosis for Semi-Batch Reactor under Closed-Loop Control via Independent Radial Basis Function Neural Network

    OpenAIRE

    Abdelkarim M. Ertiame; D. W. Yu; D. L. Yu; J. B. Gomm

    2015-01-01

    In this paper, a robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process called the Chylla-Haase polymerization reactor while it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in an independent mode to model the process dynamics, and uses the weighted sum-squared prediction error as the residual. The Recursive Orthogonal Least Squares algorithm (ROLS) is emplo...
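    The residual idea can be sketched in a few lines. The toy Python example below (all model parameters, the injected fault, and the detection threshold are hypothetical illustrations, not the paper's Chylla-Haase setup) builds a one-output RBF predictor and flags a fault when the weighted sum-squared prediction error exceeds a threshold:

```python
import math

def rbf_predict(x, centers, widths, weights):
    """One-output RBF network: weighted sum of Gaussian basis functions."""
    return sum(w * math.exp(-((x - c) ** 2) / (2 * s ** 2))
               for c, s, w in zip(centers, widths, weights))

def weighted_sse_residual(measured, predicted, weights):
    """Weighted sum-squared prediction error over a window of samples."""
    return sum(g * (y - yh) ** 2 for g, y, yh in zip(weights, measured, predicted))

# Toy setup (all values hypothetical, for illustration only)
centers, widths, net_w = [0.0, 1.0, 2.0], [0.5, 0.5, 0.5], [1.0, -0.5, 0.8]
inputs = [0.1, 0.9, 1.8, 2.1]
measured = [rbf_predict(x, centers, widths, net_w) for x in inputs]  # fault-free
measured[2] += 0.6                                  # inject a sensor fault
predicted = [rbf_predict(x, centers, widths, net_w) for x in inputs]
residual = weighted_sse_residual(measured, predicted, [1.0] * len(inputs))
print(residual > 0.1)  # True: residual exceeds threshold, fault flagged
```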

  16. An Improved Dynamic Programming Decomposition Approach for Network Revenue Management

    OpenAIRE

    Dan Zhang

    2011-01-01

    We consider a nonlinear nonseparable functional approximation to the value function of a dynamic programming formulation for the network revenue management (RM) problem with customer choice. We propose a simultaneous dynamic programming approach to solve the resulting problem, which is a nonlinear optimization problem with nonlinear constraints. We show that our approximation leads to a tighter upper bound on optimal expected revenue than some known bounds in the literature. Our approach can ...

  17. Decentralized neural control application to robotics

    CERN Document Server

    Garcia-Hernandez, Ramon; Sanchez, Edgar N; Alanis, Alma y; Ruz-Hernandez, Jose A

    2017-01-01

    This book provides a decentralized approach for the identification and control of robotics systems. It also presents recent research in decentralized neural control and includes applications to robotics. Decentralized control is free from difficulties due to complexity in design, debugging, data gathering and storage requirements, making it preferable for interconnected systems. Furthermore, as opposed to the centralized approach, it can be implemented with parallel processors. This approach deals with four decentralized control schemes, which are able to identify the robot dynamics. The training of each neural network is performed on-line using an extended Kalman filter (EKF). The first indirect decentralized control scheme applies the discrete-time block control approach, to formulate a nonlinear sliding manifold. The second direct decentralized neural control scheme is based on the backstepping technique, approximated by a high order neural network. The third control scheme applies a decentralized neural i...

  18. Approximate dynamic programming solving the curses of dimensionality

    CERN Document Server

    Powell, Warren B

    2007-01-01

    Warren B. Powell, PhD, is Professor of Operations Research and Financial Engineering at Princeton University, where he is founder and Director of CASTLE Laboratory, a research unit that works with industrial partners to test new ideas found in operations research. The recipient of the 2004 INFORMS Fellow Award, Dr. Powell has authored over 100 refereed publications on stochastic optimization, approximate dynamic programming, and dynamic resource management.

  19. Spherical harmonics based descriptor for neural network potentials: Structure and dynamics of Au147 nanocluster.

    Science.gov (United States)

    Jindal, Shweta; Chiriki, Siva; Bulusu, Satya S

    2017-05-28

    We propose a highly efficient method for fitting the potential energy surface of a nanocluster using a spherical harmonics based descriptor integrated with an artificial neural network. Our method achieves the accuracy of quantum mechanics and speed of empirical potentials. For large-sized gold clusters (Au147), the computational time for accurate calculation of energy and forces is about 1.7 s, which is faster by several orders of magnitude compared to density functional theory (DFT). This method is used to perform the global minimum optimizations and molecular dynamics simulations for Au147, and it is found that its global minimum is not an icosahedron. The isomer that can be regarded as the global minimum is found to be 4 eV lower in energy than the icosahedron and is confirmed from DFT. The geometry of the obtained global minimum contains 105 atoms on the surface and 42 atoms in the core. A brief study on the fluxionality in Au147 is performed, and it is concluded that Au147 has a dynamic surface, thus opening a new window for studying its reaction dynamics.

  20. Bellman’s GAP—a language and compiler for dynamic programming in sequence analysis

    Science.gov (United States)

    Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert

    2013-01-01

    Motivation: Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman’s GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. Results: In Bellman’s GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive to carefully hand-crafted implementations. This article introduces the Bellman’s GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman’s GAP as an implementation platform of ‘real-world’ bioinformatics tools. Availability: Bellman’s GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics. Contact: robert@techfak.uni-bielefeld.de Supplementary information: Supplementary data are available at Bioinformatics online PMID:23355290
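    The core idea of separating the DP recurrence (the "grammar") from the scoring (the "evaluation algebra") can be sketched outside GAP-L as well. The Python toy below is not GAP-L; the algebra tuples are my own illustration. The same alignment recurrence is evaluated under two different algebras, one computing edit distance and one a similarity score:

```python
def edit_dp(a, b, algebra):
    """Classic alignment DP recurrence; the scoring 'algebra' is passed in
    separately, echoing Bellman's GAP separation of grammar and algebra."""
    match, gap, choice = algebra
    n, m = len(a), len(b)
    T = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        T[i][0] = T[i - 1][0] + gap
    for j in range(1, m + 1):
        T[0][j] = T[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            T[i][j] = choice(T[i - 1][j - 1] + match(a[i - 1], b[j - 1]),
                             T[i - 1][j] + gap,
                             T[i][j - 1] + gap)
    return T[n][m]

# Two algebras over the same recurrence: edit distance vs. similarity score
distance   = (lambda x, y: 0 if x == y else 1, 1, min)
similarity = (lambda x, y: 2 if x == y else -1, -1, max)
print(edit_dp("kitten", "sitting", distance))    # 3
print(edit_dp("kitten", "sitting", similarity))  # 5
```

Only the algebra changes between the two calls; the recurrence is written once, which is the re-use property the abstract describes.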

  1. Proving deadlock freedom of logic programs with dynamic scheduling

    NARCIS (Netherlands)

    E. Marchiori; F. Teusink (Frank)

    1996-01-01

    textabstractIn increasingly many logic programming systems, the Prolog left to right selection rule has been replaced with dynamic selection rules, that select an atom of a query among those satisfying suitable conditions. These conditions describe the form of the arguments of every program

  2. Mitochondrial metabolism in early neural fate and its relevance for neuronal disease modeling.

    Science.gov (United States)

    Lorenz, Carmen; Prigione, Alessandro

    2017-12-01

    Modulation of energy metabolism is emerging as a key aspect associated with cell fate transition. The establishment of a correct metabolic program is particularly relevant for neural cells given their high bioenergetic requirements. Accordingly, diseases of the nervous system commonly involve mitochondrial impairment. Recent studies in animals and in neural derivatives of human pluripotent stem cells (PSCs) highlighted the importance of mitochondrial metabolism for neural fate decisions in health and disease. The mitochondria-based metabolic program of early neurogenesis suggests that PSC-derived neural stem cells (NSCs) may be used for modeling neurological disorders. Understanding how metabolic programming is orchestrated during neural commitment may provide important information for the development of therapies against conditions affecting neural functions, including aging and mitochondrial disorders. Copyright © 2017. Published by Elsevier Ltd.

  3. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator.

    Science.gov (United States)

    Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy

    2013-01-01

    The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.

  4. Dynamic Programming Approaches for the Traveling Salesman Problem with Drone

    OpenAIRE

    Bouman, Paul; Agatz, Niels; Schmidt, Marie

    2017-01-01

    markdownabstractA promising new delivery model involves the use of a delivery truck that collaborates with a drone to make deliveries. Effectively combining a drone and a truck gives rise to a new planning problem that is known as the Traveling Salesman Problem with Drone (TSP-D). This paper presents an exact solution approach for the TSP-D based on dynamic programming and present experimental results of different dynamic programming based heuristics. Our numerical experiments show that our a...
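    As a simplified illustration of the exact dynamic programming idea (plain TSP without the drone, which the TSP-D formulation extends), a Held-Karp bitmask DP can be sketched as follows; the 4-city distance matrix is hypothetical:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp bitmask DP for the plain TSP: dp[(S, j)] is the cheapest
    cost of leaving city 0, visiting the set S, and ending at city j."""
    n = len(dist)
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            bits = 0
            for j in S:
                bits |= 1 << j
            for j in S:
                prev = bits & ~(1 << j)          # S without the last city j
                dp[(bits, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in S if k != j)
    full = (1 << n) - 2                          # every city except the depot
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Symmetric 4-city instance (distances hypothetical)
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(held_karp(dist))  # 80, via the tour 0-1-3-2-0
```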

  5. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. the Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, were investigated with the neural network topology and other parameters held fixed. The early results indicate that this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
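    The genetic-algorithm learning idea can be sketched in stdlib Python. This is not the authors' MATLAB setup; a one-neuron linear model and synthetic data stand in for the exchange-rate network, and all GA hyperparameters are hypothetical:

```python
import random

random.seed(0)

def fitness(w, data):
    """Negative squared error of a one-neuron linear model y = w0*x + w1."""
    return -sum((w[0] * x + w[1] - y) ** 2 for x, y in data)

def evolve(data, pop_size=30, gens=200, sigma=0.3):
    """Truncation selection + uniform crossover + Gaussian mutation."""
    pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        parents = pop[: pop_size // 2]             # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # crossover
            child = [g + random.gauss(0, sigma) for g in child]   # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, data))

# Synthetic target mapping y = 2x + 1 standing in for exchange-rate samples
data = [(x / 10, 2 * x / 10 + 1) for x in range(-10, 11)]
best = evolve(data)
print(best)  # weights close to [2.0, 1.0]
```

Because the parents survive unmutated each generation, the best individual never gets worse, so the search settles near the least-squares weights.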

  6. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    Full Text Available The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion...
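    The counting-only decoding principle can be sketched as follows. This is a deliberately minimal Python model, not the FPGA design; the channel assignment and threshold are hypothetical. Each decoder unit simply counts spikes on its assigned channels and emits an output event when the count crosses a threshold, then resets:

```python
def counting_decoder(spike_trains, channel_mask, threshold):
    """Integrate-and-fire decoding with counting and comparison only:
    count spikes on the masked channels, emit an event at threshold, reset."""
    count, out = 0, []
    for t, frame in enumerate(spike_trains):        # frame: 0/1 per channel
        for ch, spike in enumerate(frame):
            if spike and channel_mask[ch]:
                count += 1                           # counting, no arithmetic
        if count >= threshold:
            out.append(t)                            # output event at time t
            count = 0
    return out

# 4-channel toy input; this decoder unit listens to channels 0 and 2
frames = [[1, 0, 1, 0],
          [0, 1, 0, 0],
          [1, 0, 0, 1],
          [1, 1, 1, 0]]
print(counting_decoder(frames, [1, 0, 1, 0], threshold=3))  # [2]
```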

  7. Sandia Dynamic Materials Program Strategic Plan.

    Energy Technology Data Exchange (ETDEWEB)

    Flicker, Dawn Gustine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benage, John F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Desjarlais, Michael P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knudson, Marcus D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Leifeste, Gordon T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lemke, Raymond W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattsson, Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wise, Jack L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    Materials in nuclear and conventional weapons can reach multi-megabar pressures and temperatures of thousands of degrees on timescales ranging from microseconds to nanoseconds. Understanding the response of complex materials under these conditions is important for designing and assessing changes to nuclear weapons. In the next few decades, a major concern will be evaluating the behavior of aging materials and remanufactured components. The science to enable the program to underwrite decisions quickly and confidently on use, remanufacturing, and replacement of these materials will be critical to NNSA’s new Stockpile Responsiveness Program. Material response is also important for assessing the risks posed by adversaries or proliferants. Dynamic materials research, which refers to the use of high-speed experiments to produce extreme conditions in matter, is an important part of NNSA’s Stockpile Stewardship Program.

  8. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed eSeyed-Allaei

    2015-03-01

    Full Text Available In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experimental observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is the most robust with respect to synaptic weights. In fact, evolution has favored networks with the best dynamic range. I present a phase diagram that shows the dynamic ranges of networks with different parameters. This phase diagram gives an insight into the space of parameters: the excitatory-to-inhibitory ratio, the sparseness of connections, and the synaptic weights. It may serve as a guideline for choosing parameter values in a simulation of a spiking neural network.
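    A minimal sketch of this kind of simulation, assuming simple binary threshold units (the paper's neuron model and parameter sweep are richer): build a random network with 2% connectivity and a 20/80 inhibitory/excitatory split, then record the number of active units per step after an impulse:

```python
import random

random.seed(1)

def build_network(n=500, p=0.02, frac_inh=0.2, w=1.0):
    """Random directed network: each pair connected with probability p;
    20% of neurons inhibitory (weight -w), 80% excitatory (weight +w)."""
    inhibitory = set(random.sample(range(n), int(frac_inh * n)))
    conn = [[] for _ in range(n)]                 # conn[pre] -> [(post, weight)]
    for pre in range(n):
        sign = -w if pre in inhibitory else w
        for post in range(n):
            if pre != post and random.random() < p:
                conn[pre].append((post, sign))
    return conn

def impulse_response(conn, seed_neurons, theta=1.0, steps=20):
    """Drive a few neurons at t=0 and count active threshold units per step."""
    n = len(conn)
    active = set(seed_neurons)
    sizes = []
    for _ in range(steps):
        drive = [0.0] * n
        for pre in active:
            for post, wgt in conn[pre]:
                drive[post] += wgt
        active = {i for i in range(n) if drive[i] >= theta}
        sizes.append(len(active))
    return sizes

conn = build_network()
sizes = impulse_response(conn, seed_neurons=range(20))
print(sizes)
```

Sweeping `p`, `frac_inh`, and `w` over many such networks and measuring the response is the kind of parameter-space exploration the phase diagram summarizes.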

  9. Action Potential Modulation of Neural Spin Networks Suggests Possible Role of Spin

    CERN Document Server

    Hu, H P

    2004-01-01

    In this paper we show that nuclear spin networks in neural membranes are modulated by action potentials through J-coupling, dipolar coupling and chemical shielding tensors and perturbed by microscopically strong and fluctuating internal magnetic fields produced largely by paramagnetic oxygen. We suggest that these spin networks could be involved in brain functions since said modulation inputs information carried by the neural spike trains into them, said perturbation activates various dynamics within them and the combination of the two likely produce stochastic resonance thus synchronizing said dynamics to the neural firings. Although quantum coherence is desirable and may indeed exist, it is not required for these spin networks to serve as the subatomic components for the conventional neural networks.

  10. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.
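    The higher-order associative memory described here can be sketched with a rectified-polynomial interaction function. The following is a minimal stdlib-Python illustration in the spirit of the model, with hypothetical stored patterns; each spin is asynchronously set to whichever sign maximizes the summed interaction with the stored patterns:

```python
def F(z, n):
    """Rectified polynomial interaction: max(z, 0) ** n."""
    return max(z, 0.0) ** n

def recall(patterns, probe, n=3, sweeps=5):
    """Dense associative memory retrieval: asynchronously set each spin to
    the sign that maximizes sum over patterns of F(pattern . state, n)."""
    state = list(probe)
    for _ in range(sweeps):
        for i in range(len(state)):
            scores = {}
            for s in (+1, -1):
                state[i] = s
                scores[s] = sum(F(sum(p * x for p, x in zip(pat, state)), n)
                                for pat in patterns)
            state[i] = +1 if scores[+1] >= scores[-1] else -1
    return state

# Two stored +/-1 patterns; probe is the first pattern with two bits flipped
patterns = [[+1, +1, +1, +1, -1, -1, -1, -1],
            [+1, -1, +1, -1, +1, -1, +1, -1]]
probe = [+1, +1, -1, +1, -1, -1, +1, -1]
print(recall(patterns, probe))  # recovers the first stored pattern
```

Raising the degree `n` sharpens the interaction, which is the knob behind the feature-to-prototype transition the abstract describes.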

  11. Intensive Research Program on Advances in Nonsmooth Dynamics 2016

    CERN Document Server

    Jeffrey, Mike; Lázaro, J; Olm, Josep

    2017-01-01

    This volume contains extended abstracts outlining selected talks and other selected presentations given by participants throughout the "Intensive Research Program on Advances in Nonsmooth Dynamics 2016", held at the Centre de Recerca Matemàtica (CRM) in Barcelona from February 1st to April 29th, 2016. They include brief research articles reporting new results, descriptions of preliminary work or open problems, and outlines of prominent discussion sessions. The articles are all the result of direct collaborations initiated during the research program. The topic is the theory and applications of Nonsmooth Dynamics. This includes systems involving elements of: impacting, switching, on/off control, hybrid discrete-continuous dynamics, jumps in physical properties, and many others. Applications include: electronics, climate modeling, life sciences, mechanics, ecology, and more. Numerous new results are reported concerning the dimensionality and robustness of nonsmooth models, shadowing variables, numbers of limit...

  12. Transient analysis for PWR reactor core using neural networks predictors

    International Nuclear Information System (INIS)

    Gueray, B.S.

    2001-01-01

    In this study, a transient analysis for a Pressurized Water Reactor core has been performed. A lumped-parameter approximation is preferred for that purpose, to describe the reactor core together with the mechanisms which play an important role in dynamic analysis. The dynamic behavior of the reactor core during transients is analyzed considering the transient-initiating events, which are an essential part of Safety Analysis Reports. Several transients are simulated based on the employed core model, and the simulation results are in accord with physical expectations. A neural network is developed to predict the future response of the reactor core in advance. The neural network is trained using the simulation results of a number of representative transients. The structure of the neural network is optimized by proper selection of transfer functions for the neurons. The trained neural network is used to predict future responses following an early observation of the changes in system variables. The estimated behaviour using the neural network is in good agreement with the simulation results for various types of transients. The results of this study indicate that the designed neural network can be used as an estimator of the time-dependent behavior of the reactor core under transient conditions.

  13. Neural Dynamics of Multiple Object Processing in Mild Cognitive Impairment and Alzheimer's Disease: Future Early Diagnostic Biomarkers?

    Science.gov (United States)

    Bagattini, Chiara; Mazza, Veronica; Panizza, Laura; Ferrari, Clarissa; Bonomini, Cristina; Brignani, Debora

    2017-01-01

    The aim of this study was to investigate the behavioral and electrophysiological dynamics of multiple object processing (MOP) in mild cognitive impairment (MCI) and Alzheimer's disease (AD), and to test whether its neural signatures may represent reliable diagnostic biomarkers. Behavioral performance and event-related potentials [N2pc and contralateral delay activity (CDA)] were measured in AD, MCI, and healthy controls during a MOP task, which consisted in enumerating a variable number of targets presented among distractors. AD patients showed an overall decline in accuracy for both small and large target quantities, whereas in MCI patients, only enumeration of large quantities was impaired. N2pc, a neural marker of attentive individuation, was spared in both AD and MCI patients. In contrast, CDA, which indexes visual short term memory abilities, was altered in both groups of patients, with a non-linear pattern of amplitude modulation along the continuum of the disease: a reduction in AD and an increase in MCI. These results indicate that AD pathology shows a progressive decline in MOP, which is associated to the decay of visual short-term memory mechanisms. Crucially, CDA may be considered as a useful neural signature both to distinguish between healthy and pathological aging and to characterize the different stages along the AD continuum, possibly becoming a reliable candidate for an early diagnostic biomarker of AD pathology.

  14. A Mechanistic Neural Field Theory of How Anesthesia Suppresses Consciousness: Synaptic Drive Dynamics, Bifurcations, Attractors, and Partial State Equipartitioning.

    Science.gov (United States)

    Hou, Saing Paul; Haddad, Wassim M; Meskin, Nader; Bailey, James M

    2015-12-01

    With the advances in biochemistry, molecular biology, and neurochemistry there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we use dynamical system theory to develop a mechanistic mean field model for neural activity to study the abrupt transition from consciousness to unconsciousness as the concentration of the anesthetic agent increases. The proposed synaptic drive firing-rate model predicts the conscious-unconscious transition as the applied anesthetic concentration increases, where excitatory neural activity is characterized by a Poincaré-Andronov-Hopf bifurcation with the awake state transitioning to a stable limit cycle and then subsequently to an asymptotically stable unconscious equilibrium state. Furthermore, we address the more general question of synchronization and partial state equipartitioning of neural activity without mean field assumptions. This is done by focusing on a postulated subset of inhibitory neurons that are not themselves connected to other inhibitory neurons. Finally, several numerical experiments are presented to illustrate the different aspects of the proposed theory.

  15. Functional neural networks underlying response inhibition in adolescents and adults.

    Science.gov (United States)

    Stevens, Michael C; Kiehl, Kent A; Pearlson, Godfrey D; Calhoun, Vince D

    2007-07-19

    This study provides the first description of neural network dynamics associated with response inhibition in healthy adolescents and adults. Functional and effective connectivity analyses of whole brain hemodynamic activity elicited during performance of a Go/No-Go task were used to identify functionally integrated neural networks and characterize their causal interactions. Three response inhibition circuits formed a hierarchical, inter-dependent system wherein thalamic modulation of input to premotor cortex by fronto-striatal regions led to response suppression. Adolescents differed from adults in the degree of network engagement, regional fronto-striatal-thalamic connectivity, and network dynamics. We identify and characterize several age-related differences in the function of neural circuits that are associated with behavioral performance changes across adolescent development.

  16. A PSO based Artificial Neural Network approach for short term unit commitment problem

    Directory of Open Access Journals (Sweden)

    AFTAB AHMAD

    2010-10-01

    Full Text Available Unit commitment (UC) is a non-linear, large-scale, complex, mixed-integer combinatorial constrained optimization problem. This paper proposes a new hybrid approach for generating unit commitment schedules using a swarm-intelligence-learning-rule-based neural network. The training data have been generated using dynamic programming for machines without valve-point effects and using a genetic algorithm for machines with valve-point effects. A set of load patterns as inputs and the corresponding unit generation schedules as outputs are used to train the network. The neural network fine-tunes the best results to the desired targets. The proposed approach has been validated for three thermal machines with and without valve-point effects. The results are compared with the approaches available in the literature. The PSO-ANN trained model gives better results, which shows the promise of the proposed methodology.

  17. A Dynamic Programming Approach to Constrained Portfolios

    DEFF Research Database (Denmark)

    Kraft, Holger; Steffensen, Mogens

    2013-01-01

    This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies...

  18. Information transmission and recovery in neural communications channels

    International Nuclear Information System (INIS)

    Eguia, M. C.; Rabinovich, M. I.; Abarbanel, H. D. I.

    2000-01-01

    Biological neural communications channels transport environmental information from sensors through chains of active dynamical neurons to neural centers for decisions and actions to achieve required functions. These kinds of communications channels are able to create information and to transfer information from one time scale to the other because of the intrinsic nonlinear dynamics of the component neurons. We discuss a very simple neural information channel composed of sensory input in the form of a spike train that arrives at a model neuron, then moves through a realistic synapse to a second neuron where the information in the initial sensory signal is read. Our model neurons are four-dimensional generalizations of the Hindmarsh-Rose neuron, and we use a model of chemical synapse derived from first-order kinetics. The four-dimensional model neuron has a rich variety of dynamical behaviors, including periodic bursting, chaotic bursting, continuous spiking, and multistability. We show that, for many of these regimes, the parameters of the chemical synapse can be tuned so that information about the stimulus that is unreadable at the first neuron in the channel can be recovered by the dynamical activity of the synapse and the second neuron. Information creation by nonlinear dynamical systems that allow chaotic oscillations is familiar in their autonomous oscillations. It is associated with the instabilities that lead to positive Lyapunov exponents in their dynamical behavior. Our results indicate how nonlinear neurons acting as input/output systems along a communications channel can recover information apparently "lost" in earlier junctions on the channel. Our measure of information transmission is the average mutual information between elements, and because the channel is active and nonlinear, the average mutual information between the sensory source and the final neuron may be greater than the average mutual information at an earlier neuron in the channel. This ...
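    The classic three-dimensional Hindmarsh-Rose model (the paper uses a four-dimensional generalization) can be integrated with a simple Euler scheme. The parameters below are the standard textbook values (a=1, b=3, c=1, d=5, r=0.006, s=4, x0=-1.6, I=3.25, a bursting regime), and the threshold-crossing spike count is just a crude activity measure:

```python
def hindmarsh_rose(I=3.25, dt=0.005, steps=200000):
    """Euler integration of the 3D Hindmarsh-Rose neuron; returns the number
    of upward crossings of x = 1.0 (a crude spike count)."""
    x, y, z = -1.6, 0.0, 0.0
    spikes = 0
    prev_above = False
    for _ in range(steps):
        dx = y + 3 * x**2 - x**3 - z + I          # membrane potential
        dy = 1 - 5 * x**2 - y                     # fast recovery variable
        dz = 0.006 * (4 * (x + 1.6) - z)          # slow adaptation current
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        above = x > 1.0
        if above and not prev_above:
            spikes += 1
        prev_above = above
    return spikes

n_spikes = hindmarsh_rose()
print(n_spikes)  # many spikes, grouped into bursts by the slow z variable
```

Sweeping `I` moves the model between the quiescent, bursting, and continuous-spiking regimes mentioned in the abstract.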

  19. Augmented Lagrange Programming Neural Network for Localization Using Time-Difference-of-Arrival Measurements.

    Science.gov (United States)

    Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George

    2017-08-15

    A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
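    The TDOA fitting problem itself can be sketched without the LPNN machinery. The toy below (sensor positions and source location are hypothetical, and a coarse-to-fine grid search stands in for the LPNN dynamics, which solve the same fit as a constrained optimization in continuous time) shows why each measurement defines a hyperbola: the model is a range difference relative to a reference sensor:

```python
import math

def tdoa_residuals(p, sensors, rdiffs):
    """Range-difference residuals ||p - s_i|| - ||p - s_0|| - d_i;
    each zero set is a hyperbola with foci s_i and the reference s_0."""
    r0 = math.dist(p, sensors[0])
    return [math.dist(p, s) - r0 - d for s, d in zip(sensors[1:], rdiffs)]

def locate(sensors, rdiffs, span=10.0):
    """Coarse-to-fine grid search on the squared TDOA residuals."""
    cx = cy = span / 2
    step = span / 2
    for _ in range(25):                    # shrink the grid around the best cell
        best = min(((cx + i * step / 5, cy + j * step / 5)
                    for i in range(-5, 6) for j in range(-5, 6)),
                   key=lambda p: sum(e * e
                                     for e in tdoa_residuals(p, sensors, rdiffs)))
        cx, cy = best
        step /= 2
    return cx, cy

sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_p = (3.0, 4.0)
r0 = math.dist(true_p, sensors[0])
rdiffs = [math.dist(true_p, s) - r0 for s in sensors[1:]]  # noise-free TDOAs
x, y = locate(sensors, rdiffs)
print((x, y))  # recovers (3.0, 4.0)
```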

  20. Low-dimensional models of ‘Neuro-glio-vascular unit’ for describing neural dynamics under normal and energy-starved conditions

    Directory of Open Access Journals (Sweden)

    Karishma eChhabria

    2016-03-01

    Full Text Available The motivation of developing simple minimal models for neuro-glio-vascular system arises from a recent modeling study elucidating the bidirectional information flow within the neuro-glio-vascular system having 89 dynamic equations (Chander and Chakravarthy 2012. While this was one of the first attempts at formulating a comprehensive model for neuro-glia-vascular system, it poses severe restrictions in scaling up to network levels. On the contrary, low dimensional models are convenient devices in simulating large networks that also provide an intuitive understanding of the complex interactions occurring within the neuro-glio-vascular system. The key idea underlying the proposed models is to describe the glio-vascular system as a lumped system which takes neural firing rate as input and returns an ‘energy’ variable (analogous to ATP as output. To this end we present two models: Biophysical neuro-energy (Model #1 with 5 variables, comprising of KATP channel activity governed by neuronal ATP dynamics and the Dynamic threshold (Model #2 with 3 variables depicting the dependence of neural firing threshold on the ATP dynamics. Both the models show different firing regimes such as continuous spiking, phasic and tonic bursting depending on the ATP production coefficient, εp and external current. We then demonstrate that in a network comprising of such energy-dependent neuron units, εp could modulate the Local field potential (LFP frequency and amplitude. Interestingly, low frequency LFP dominates under low εp conditions, which is thought to be reminiscent of seizure-like activity observed in epilepsy. The proposed ‘neuron-energy’ unit may be implemented in building models of neuro-glio-vascular networks to simulate data obtained from multimodal neuroimaging systems such as fNIRS-EEG and fMRI-EEG. Such models could also provide a theoretical basis for devising optimal neurorehabilitation strategies such as non-invasive brain stimulation for

  1. Runway Scheduling Using Generalized Dynamic Programming

    Science.gov (United States)

    Montoya, Justin; Wood, Zachary; Rathinam, Sivakumar

    2011-01-01

    A generalized dynamic programming method for finding a set of Pareto optimal solutions for a runway scheduling problem is introduced. The algorithm generates a set of runway flight sequences that are optimal for both runway throughput and delay. Realistic time-based operational constraints are considered, including miles-in-trail separation, runway crossings, and wake vortex separation. The authors also model divergent runway takeoff operations to allow for reduced wake vortex separation. A modeled Dallas/Fort Worth International airport and three baseline heuristics are used to illustrate preliminary benefits of using the generalized dynamic programming method. Simulated traffic levels ranged from 10 aircraft to 30 aircraft, with each test case spanning 15 minutes. The optimal solution shows a 40-70 percent decrease in the expected delay per aircraft over the baseline schedulers. Computational results suggest that the algorithm is promising for real-time application with an average computation time of 4.5 seconds. For even faster computation times, two heuristics are developed. As compared to the optimal, the heuristics are within 5% of the expected delay per aircraft and 1% of the expected number of runway operations per hour, and can be 100x faster.

  2. Modeling of steam generator in nuclear power plant using neural network ensemble

    International Nuclear Information System (INIS)

    Lee, S. K.; Lee, E. C.; Jang, J. W.

    2003-01-01

    Neural networks are now being used to model the steam generator, which is known to be difficult due to its reverse dynamics. However, neural networks are prone to the problem of overfitting. This paper investigates the use of neural network combining methods to model steam generator water level and compares them with a single neural network. The results show that a neural network ensemble is an effective tool which can offer improved generalization, lower dependence on the training set and reduced training time.

  3. Dynamic programming algorithms for biological sequence comparison.

    Science.gov (United States)

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N²)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N²) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
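The O(N²)-time, O(N)-space scoring scheme with a linear gap penalty g = rk can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the scoring values (match, mismatch, per-residue gap cost r) are hypothetical.

```python
def alignment_score(a, b, match=1, mismatch=-1, r=2):
    """Optimal global-alignment score of strings a and b with linear
    gap penalty g = r*k, keeping only one DP row at a time (O(N) space)."""
    # First row: aligning the empty prefix of `a` against j symbols of `b`
    # costs a gap of length j.
    prev = [-r * j for j in range(len(b) + 1)]
    for i, ca in enumerate(a, start=1):
        curr = [-r * i]  # first column: gap of length i
        for j, cb in enumerate(b, start=1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] - r, curr[j - 1] - r))
        prev = curr
    return prev[-1]
```

Recovering the alignment itself (not just its score) in linear space requires the Hirschberg divide-and-conquer refinement; the score alone suffices for the kind of significance evaluation the abstract describes.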

  4. Deflection Prediction of No-Fines Lightweight Concrete Wall Using Neural Network Caused Dynamic Loads

    Directory of Open Access Journals (Sweden)

    Ridho Bayuaji

    2018-04-01

    Full Text Available A no-fines lightweight concrete wall with horizontal reinforcement is an alternative material for wall construction, aimed at improving wall quality against horizontal loads. This study is focused on the application of an artificial neural network (ANN) to predicting the deflection caused by dynamic loads. The ANN method is able to capture the complex interactions among input/output variables in a system without any knowledge of the nature of those interactions and without any explicit assumption about the model form. This paper explains the existing research data, the data selection, and the ANN training and validation process. The results of this research show that the deformation under alternating horizontal loads can be predicted more accurately, simply and quickly.

  5. A stochastic learning algorithm for layered neural networks

    International Nuclear Information System (INIS)

    Bartlett, E.B.; Uhrig, R.E.

    1992-01-01

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.

  6. Robust Adaptive Neural Control of Morphing Aircraft with Prescribed Performance

    OpenAIRE

    Wu, Zhonghua; Lu, Jingchao; Shi, Jingping; Liu, Yang; Zhou, Qing

    2017-01-01

    This study proposes a low-computational composite adaptive neural control scheme for the longitudinal dynamics of a swept-back wing aircraft subject to parameter uncertainties. To efficiently release the constraint often existing in conventional neural designs, whose closed-loop stability analysis always necessitates that neural networks (NNs) be confined in the active regions, a smooth switching function is presented to conquer this issue. By integrating minimal learning parameter (MLP) tech...

  7. An energy management for series hybrid electric vehicle using improved dynamic programming

    Science.gov (United States)

    Peng, Hao; Yang, Yaoquan; Liu, Chunyu

    2018-02-01

    With the increasing number of hybrid electric vehicles (HEV), managing the two energy sources, engine and battery, is more and more important for achieving minimum fuel consumption. This paper first introduces several working modes of the series hybrid electric vehicle (SHEV) and then describes the mathematical models of the main components in an SHEV. On the foundation of this model, dynamic programming is applied on the Matlab platform to distribute energy between the engine and battery, and achieves lower fuel consumption than a traditional control strategy. Besides, a control rule recovering energy in braking profiles is added into the dynamic programming, so a shorter computing time is realized by the improved dynamic programming and algorithmic optimization.
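As a purely illustrative sketch of how dynamic programming can split a power demand between engine and battery, the toy below discretizes battery state of charge (SOC) and assumes fuel cost proportional to engine power. All names and numbers are hypothetical; they are not the paper's vehicle model.

```python
def dp_energy_split(demand, soc_levels, soc0, fuel_per_kw=0.08):
    """Minimize total fuel over a per-step power-demand profile (kW).
    Battery energy is tracked on a coarse SOC grid; the engine supplies
    whatever the battery does not cover at each step."""
    INF = float("inf")
    cost = {soc0: 0.0}  # reachable SOC -> minimal fuel used so far
    for p in demand:
        nxt = {}
        for soc, c in cost.items():
            for soc2 in soc_levels:
                batt = soc - soc2        # energy drawn from the battery
                engine = p - batt        # engine covers the remainder
                if engine < 0:           # engine cannot absorb power here
                    continue
                c2 = c + fuel_per_kw * engine
                if c2 < nxt.get(soc2, INF):
                    nxt[soc2] = c2
        cost = nxt
    return min(cost.values())
```

A real formulation adds SOC bounds, battery and engine efficiency maps, and a terminal-SOC constraint, but the backward/forward sweep over a discretized state grid is the same idea.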

  8. Neural redundancy applied to the parity space for signal validation

    International Nuclear Information System (INIS)

    Mol, Antonio Carlos de Abreu; Pereira, Claudio Marcio Nascimento Abreu; Martinez, Aquilino Senra

    2005-01-01

    The objective of signal validation is to provide more reliable information from the plant sensor data. The method presented in this work introduces the concept of neural redundancy and applies it to the parity space method [1] to overcome an inherent deficiency of that method - the determination of the best estimate of the redundant measures when they are inconsistent. The concept of neural redundancy consists in calculating a redundancy through neural networks based on the time series of the state variable itself. Thus, neural networks, dynamically trained with the time series, estimate the current value of the measure itself, which is used as a referee of the redundant measures in the parity space. For this purpose the neural network should have the capacity to supply the neural redundancy in real time and with a maximum error corresponding to the group deviation. The historical series should be long enough to allow the estimation of the next value during transients, and at the same time it should be optimized to facilitate the retraining of the neural network at each acquisition. In order to have the capacity to reproduce the tendency of the time series even under accident conditions, the dynamic training of the neural network privileges the recent points of the time series. Tests accomplished with simulated data of a nuclear plant demonstrated that this method, applied to the parity space method, improves the signal validation process. (author)

  9. Neural redundancy applied to the parity space for signal validation

    Energy Technology Data Exchange (ETDEWEB)

    Mol, Antonio Carlos de Abreu; Pereira, Claudio Marcio Nascimento Abreu [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)]. E-mail: cmnap@ien.gov.br; Martinez, Aquilino Senra [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia]. E-mail: aquilino@lmp.br

    2005-07-01

    The objective of signal validation is to provide more reliable information from the plant sensor data. The method presented in this work introduces the concept of neural redundancy and applies it to the parity space method [1] to overcome an inherent deficiency of that method - the determination of the best estimate of the redundant measures when they are inconsistent. The concept of neural redundancy consists in calculating a redundancy through neural networks based on the time series of the state variable itself. Thus, neural networks, dynamically trained with the time series, estimate the current value of the measure itself, which is used as a referee of the redundant measures in the parity space. For this purpose the neural network should have the capacity to supply the neural redundancy in real time and with a maximum error corresponding to the group deviation. The historical series should be long enough to allow the estimation of the next value during transients, and at the same time it should be optimized to facilitate the retraining of the neural network at each acquisition. In order to have the capacity to reproduce the tendency of the time series even under accident conditions, the dynamic training of the neural network privileges the recent points of the time series. Tests accomplished with simulated data of a nuclear plant demonstrated that this method, applied to the parity space method, improves the signal validation process. (author)

  10. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  11. Neural substrates underlying the tendency to accept anger-infused ultimatum offers during dynamic social interactions.

    Science.gov (United States)

    Gilam, Gadi; Lin, Tamar; Raz, Gal; Azrielant, Shir; Fruchter, Eyal; Ariely, Dan; Hendler, Talma

    2015-10-15

    In managing our way through interpersonal conflict, anger might be crucial in determining whether the dispute escalates to aggressive behaviors or resolves cooperatively. The Ultimatum Game (UG) is a social decision-making paradigm that provides a framework for studying interpersonal conflict over division of monetary resources. Unfair monetary UG-offers elicit anger and while accepting them engages regulatory processes, rejecting them is regarded as an aggressive retribution. Ventro-medial prefrontal-cortex (vmPFC) activity has been shown to relate to idiosyncratic tendencies in accepting unfair offers possibly through its role in emotion regulation. Nevertheless, standard UG paradigms lack fundamental aspects of real-life social interactions in which one reacts to other people in a response contingent fashion. To uncover the neural substrates underlying the tendency to accept anger-infused ultimatum offers during dynamic social interactions, we incorporated on-line verbal negotiations with an obnoxious partner in a repeated-UG during fMRI scanning. We hypothesized that vmPFC activity will differentiate between individuals with high or low monetary gains accumulated throughout the game and reflect a divergence in the associated emotional experience. We found that as individuals gained more money, they reported less anger but also more positive feelings and had slower sympathetic response. In addition, high-gain individuals had increased vmPFC activity, but also decreased brainstem activity, which possibly reflected the locus coeruleus. During the more angering unfair offers, these individuals had increased dorsal-posterior Insula (dpI) activity which functionally coupled to the medial-thalamus (mT). Finally, both vmPFC activity and dpI-mT connectivity contributed to increased gain, possibly by modulating the ongoing subjective emotional experience. These ecologically valid findings point towards a neural mechanism that might nurture pro-social interactions by

  12. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
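The two plasticity rules named above can be sketched independently of any particular SNN simulator. Below is a minimal, hypothetical parameterization: a pair-based STDP curve (potentiation when the presynaptic spike precedes the postsynaptic one) and an IP step that nudges excitability toward a target firing rate. The constants are illustrative, not those of the paper.

```python
import math

def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Weight change for a spike pair; dt = t_post - t_pre in ms.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def ip_update(excitability, observed_rate, target_rate=5.0, eta=0.01):
    """Intrinsic plasticity: nudge a neuron's excitability toward the
    target firing rate to keep average activity at a moderate level."""
    return excitability + eta * (target_rate - observed_rate)
```

In a simulation loop, `stdp_dw` would be accumulated over all pre/post spike pairs of a synapse, while `ip_update` is applied per neuron at a slower timescale; the interplay between the two is what the paper studies.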

  13. A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs

    Directory of Open Access Journals (Sweden)

    Jooyong Yi

    2013-09-01

    Full Text Available Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising because executing reverse code can restore the previous states of a program without state saving. Two methods that generate reverse code can be found in the literature: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.
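To make the core idea concrete, here is a toy illustration (the statement forms and names are hypothetical; real systems like the one described operate on bytecode or machine code): for a destructive update whose old value is recomputable, a reverse statement is generated instead of saving state.

```python
def reverse_of(stmt):
    """Return a statement that undoes `stmt`, when one exists.
    Expects the toy form '<var> <op> <amount>', e.g. 'x += 3'."""
    var, op, amount = stmt.split()
    inverse = {"+=": "-=", "-=": "+=", "*=": "/=", "/=": "*="}
    if op not in inverse:
        # e.g. plain '=' destroys the old value: fall back to state saving
        raise ValueError("not constructively reversible; must save state")
    return f"{var} {inverse[op]} {amount}"

env = {"x": 10}
trace = ["x += 3", "x *= 2"]
for s in trace:
    exec(s, env)                      # forward execution
reverse = [reverse_of(s) for s in reversed(trace)]
for s in reverse:
    exec(s, env)                      # backtracking: x is restored
```

Note that `*= 0` would also be irreversible, and `/=` restores the value as a float; a real generator must analyze each update (dynamically, in the authors' approach) to decide between reverse code and checkpointing.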

  14. Dynamic indoor thermal comfort model identification based on neural computing PMV index

    International Nuclear Information System (INIS)

    Sahari, K S Mohamed; Jalal, M F Abdul; Homod, R Z; Eng, Y K

    2013-01-01

    This paper focuses on modelling and simulation of building dynamic thermal comfort control for a non-linear HVAC system. Thermal comfort in general refers to temperature and also humidity; in reality, however, temperature or humidity is just one of the factors affecting thermal comfort, not the main measure. Besides, as the HVAC control system has the characteristics of time delay, large inertia and highly nonlinear behaviour, it is difficult to determine the thermal comfort sensation accurately using the traditional Fanger PMV index. Hence, an Artificial Neural Network (ANN) has been introduced due to its ability to approximate any nonlinear mapping. By training an ANN, we can obtain the input-output mapping of the HVAC control system; in other words, we can propose a practical approach to identifying the thermal comfort of a building. Simulations were carried out to validate and verify the proposed method. Results show that the proposed ANN method can track the desired thermal sensation for a specified conditioned space.

  15. How single node dynamics enhances synchronization in neural networks with electrical coupling

    International Nuclear Information System (INIS)

    Bonacini, E.; Burioni, R.; Di Volo, M.; Groppi, M.; Soresina, C.; Vezzani, A.

    2016-01-01

    The stability of the completely synchronous state in neural networks with electrical coupling is analytically investigated applying both the Master Stability Function approach (MSF), developed by Pecora and Carroll (1998), and the Connection Graph Stability method (CGS) proposed by Belykh et al. (2004). The local dynamics is described by Morris–Lecar model for spiking neurons and by Hindmarsh–Rose model in spike, burst, irregular spike and irregular burst regimes. The combined application of both CGS and MSF methods provides an efficient estimate of the synchronization thresholds, namely bounds for the coupling strength ranges in which the synchronous state is stable. In all the considered cases, we observe that high values of coupling strength tend to synchronize the system. Furthermore, we observe a correlation between the single node attractor and the local stability properties given by MSF. The analytical results are compared with numerical simulations on a sample network, with excellent agreement.

  16. Self-supervised dynamical systems

    International Nuclear Information System (INIS)

    Zak, Michail

    2004-01-01

    A new type of dynamical systems which capture the interactions via information flows typical for active multi-agent systems is introduced. The mathematical formalism is based upon coupling the classical dynamical system (with random components caused by uncertainties in initial conditions as well as by Langevin forces) with the corresponding Liouville or the Fokker-Planck equations describing evolution of these uncertainties in terms of probability density. The coupling is implemented by information-based supervising forces which fundamentally change the patterns of probability evolution. It is demonstrated that the probability density can approach prescribed attractors while exhibiting such patterns as shock waves, solitons and chaos in probability space. Applications of these phenomena to information-based neural nets, expectation-based cooperation, self-programmed systems, control chaos using terminal attractors as well as to games with incomplete information, are addressed. A formal similarity between the mathematical structure of the introduced dynamical systems and quantum mechanics is discussed

  17. A dynamic feedforward neural network based on gaussian particle swarm optimization and its application for predictive control.

    Science.gov (United States)

    Han, Min; Fan, Jianchao; Wang, Jun

    2011-09-01

    A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays.

  18. Self-organized critical neural networks

    International Nuclear Information System (INIS)

    Bornholdt, Stefan; Roehl, Torsten

    2003-01-01

    A mechanism for self-organization of the degree of connectivity in model neural networks is studied. Network connectivity is regulated locally on the basis of an order parameter of the global dynamics, which is estimated from an observable at the single-synapse level. This principle is studied in a two-dimensional neural network with randomly wired asymmetric weights. In this class of networks, network connectivity is closely related to a phase transition between ordered and disordered dynamics. A slow topology change is imposed on the network through a local rewiring rule motivated by activity-dependent synaptic development: neighbor neurons whose activity is correlated, on average, develop a new connection, while uncorrelated neighbors tend to disconnect. As a result, robust self-organization of the network towards the order-disorder transition occurs. Convergence is independent of initial conditions, robust against thermal noise, and does not require fine tuning of parameters.

  19. Tracking performance and global stability guaranteed neural control of uncertain hypersonic flight vehicle

    Directory of Open Access Journals (Sweden)

    Tao Teng

    2016-02-01

    Full Text Available In this article, a global adaptive neural dynamic surface control with predefined tracking performance is developed for a class of hypersonic flight vehicles, whose accurate dynamics is hard to obtain. The control scheme developed in this paper overcomes the limitations of neural approximation region by employing a switching mechanism which incorporates an additional robust controller outside the neural approximation region to pull the transient state variables back when they overstep the neural approximation region, such that globally uniformly ultimately bounded stability can be guaranteed. Especially, the developed global adaptive neural control also improves the tracking performance by introducing an error transformation mechanism, such that both transient and steady-state performance can be shaped according to the predefined bounds. Simulation studies on the hypersonic flight vehicle validate that the designed controller has good velocity modulation and velocity stability performance.

  20. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from

  1. Program packages for dynamics systems analysis and design

    International Nuclear Information System (INIS)

    Athani, V.V.

    1976-01-01

    The development of computer program packages for dynamic system analysis and design is reported. The purpose of developing these program packages is to take the burden of writing computer programs off the mind of the system engineer and to enable him to concentrate on his main system analysis and design work. Towards this end, four standard computer program packages have been prepared: (1) TFANA - starting from the system transfer function, this program computes the transient response, frequency response, root locus and stability by the Routh-Hurwitz criterion; (2) TFSYN - classical synthesis using the algebraic method of Shipley; (3) MODANA - starting from the state equations of the system, this program computes the solution of the state equations, controllability, observability and stability; (4) OPTCON - this program obtains solutions of (i) the linear regulator problem, (ii) servomechanism problems and (iii) the pole placement problem. The paper describes these program packages with the help of flowcharts and illustrates their use with examples. (author)

  2. EEG-fMRI Bayesian framework for neural activity estimation: a simulation study

    Science.gov (United States)

    Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo

    2016-12-01

    Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, the joint data analysis can afford a better understanding of the underlying neural activity estimation. In this simulation study we want to show the benefit of the joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity is originated by a given brain area and detected by means of both measurement techniques. We have chosen a resting state neural activity situation to address the worst case in terms of the signal-to-noise ratio. To infer information by EEG and fMRI concurrently we used a tool belonging to the sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvements in neural activity reconstruction with EEG-fMRI simultaneous data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
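As a minimal illustration of the SMC machinery (not the authors' EEG-fMRI model), the bootstrap particle filter below tracks a scalar random-walk state from noisy observations. The model and noise parameters are invented for the example.

```python
import math
import random

def particle_filter(obs, n=500, q=0.5, r=0.5):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2). Returns the posterior-mean estimate
    of the state at each time step."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n)]  # prior particles
    means = []
    for y in obs:
        # 1. propagate each particle through the transition model
        parts = [x + random.gauss(0.0, q) for x in parts]
        # 2. weight by the observation likelihood (Gaussian, up to a constant)
        w = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]
        s = sum(w)
        w = [wi / s for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # 3. multinomial resampling to avoid weight degeneracy
        parts = random.choices(parts, weights=w, k=n)
    return means

random.seed(0)
est = particle_filter([1.0, 1.2, 0.9, 1.1])
```

In the joint EEG-fMRI setting described above, step 2 would combine the likelihoods of both modalities for the same latent neural activity, which is what makes the fusion natural in this framework.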

  3. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  4. Neural Meta-Memes Framework for Combinatorial Optimization

    Science.gov (United States)

    Song, Li Qin; Lim, Meng Hiot; Ong, Yew Soon

    In this paper, we present a Neural Meta-Memes Framework (NMMF) for combinatorial optimization. NMMF is a framework which models basic optimization algorithms as memes and manages them dynamically when solving combinatorial problems. NMMF encompasses neural networks which serve as the overall planner/coordinator to balance the workload between memes. We show the efficacy of the proposed NMMF through empirical study on a class of combinatorial problem, the quadratic assignment problem (QAP).

  5. Modelling of word usage frequency dynamics using artificial neural network

    International Nuclear Information System (INIS)

    Maslennikova, Yu S; Bochkarev, V V; Voloskov, D S

    2014-01-01

    In this paper, a method for modelling word usage frequency time series is proposed. An artificial feedforward neural network was used to predict word usage frequencies. The neural network was trained using the maximum likelihood criterion. The Google Books Ngram corpus was used for the analysis. This database provides a large amount of data on the frequency of specific word forms in 7 languages. Statistical modelling of word usage frequency time series allows finding optimal fitting and filtering algorithms for subsequent lexicographic analysis and verification of frequency trend models.
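
    Since the abstract pairs a feedforward predictor with a maximum likelihood criterion, a minimal illustration is to fit yearly counts by maximizing a Poisson likelihood. The synthetic data, the single-layer (linear) predictor standing in for the network, and all constants below are assumptions for the sketch, not the paper's setup.

```python
import numpy as np

# Synthetic "yearly usage counts" with a gentle trend (standing in for
# Google Books Ngram frequencies).
rng = np.random.default_rng(1)
years = np.arange(60)
counts = rng.poisson(20 + 0.5 * years).astype(float)

lags = 3                                   # predict from 3 previous years
X = np.array([counts[i:i + lags] for i in range(len(counts) - lags)])
t = counts[lags:]

# Predictor lambda = exp(0.01 * X @ w + b); training maximizes the Poisson
# log-likelihood, i.e. minimizes the negative log-likelihood (NLL) below.
w = np.zeros(lags)
b = np.log(t.mean())                       # start at the mean rate
lr_w, lr_b = 1e-5, 1e-4

def nll(w, b):
    lam = np.exp(0.01 * X @ w + b)
    return float(np.sum(lam - t * np.log(lam)))

start = nll(w, b)
for _ in range(500):
    lam = np.exp(0.01 * X @ w + b)
    w -= lr_w * (0.01 * X).T @ (lam - t)   # gradient of the NLL w.r.t. w
    b -= lr_b * np.sum(lam - t)            # gradient of the NLL w.r.t. b
print(f"negative log-likelihood: {start:.1f} -> {nll(w, b):.1f}")
```

    The Poisson NLL is convex in the parameters, so these small gradient steps decrease it monotonically; a hidden layer (as in the paper's network) would make the model nonlinear but keep the same training criterion.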

  6. A study of reactor monitoring method with neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nabeshima, Kunihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The purpose of this study is to investigate a methodology for Nuclear Power Plant (NPP) monitoring with neural networks, which create plant models by learning past normal operation patterns. The concept of this method is to detect the symptoms of small anomalies by monitoring the deviations between the process signals measured from the actual plant and the corresponding output signals from the neural network model, which will not match if abnormal operational patterns are presented to the input of the neural network. An auto-associative network, which has the same outputs as inputs, can detect any kind of anomaly condition by using normal operation data only. Monitoring tests of the feedforward neural network with adaptive learning were performed using a PWR plant simulator, with which many kinds of anomaly conditions can be easily simulated. The adaptively trained feedforward network could follow the actual plant dynamics and the changes of plant condition, and then found most of the anomalies much earlier than the conventional alarm system during steady-state and transient operations. Off-line and on-line test results during one year of operation at an actual NPP (PWR) then showed that the neural network could detect several small anomalies which the operators or the conventional alarm system did not notice. Furthermore, sensitivity analysis suggests that the plant models built by the neural networks are appropriate. Finally, simulation results show that a recurrent neural network with feedback connections could successfully model the slow behavior of the reactor dynamics without adaptive learning. Therefore, a recurrent neural network with adaptive learning would be the best choice for an actual reactor monitoring system. (author)
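
    The monitoring principle — learn normal-operation signal correlations, then flag samples whose auto-associative reconstruction deviates — can be sketched with a linear auto-associative model (PCA) standing in for the neural network. The signals, dimensions, and thresholding rule below are illustrative assumptions, not the study's plant data.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Normal operation": five correlated plant signals driven by two latent factors.
latent = rng.normal(size=(500, 2))
mix = rng.normal(size=(2, 5))
normal = latent @ mix + 0.05 * rng.normal(size=(500, 5))

# Linear auto-associative model: reconstruct each sample from the two
# dominant principal components learned from normal data only.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
V = Vt[:2].T

def residual(x):
    code = (x - mean) @ V                  # encode
    rec = code @ V.T + mean                # decode (reconstruction)
    return float(np.linalg.norm(x - rec))  # deviation from learned patterns

threshold = 1.1 * max(residual(x) for x in normal)

# An anomaly that breaks the learned correlation structure: for the demo,
# perturb a normal sample along a direction outside the learned subspace.
Q, _ = np.linalg.qr(np.hstack([V, rng.normal(size=(5, 3))]))
anomalous = normal[0] + Q[:, 2]
print(residual(normal[0]), residual(anomalous), threshold)
```

    Normal samples reconstruct almost perfectly, so their residuals stay under the threshold, while the anomaly's residual exceeds it — the same deviation-monitoring logic the abstract describes for the auto-associative network.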

  7. Dynamic analysis of stochastic bidirectional associative memory neural networks with delays

    International Nuclear Information System (INIS)

    Zhao Hongyong; Ding Nan

    2007-01-01

    In this paper, a stochastic bidirectional associative memory neural network model with delays is considered. By constructing Lyapunov functionals, and using the stochastic analysis method and inequality techniques, we give some sufficient criteria ensuring almost sure exponential stability, pth moment exponential stability, and mean value exponential stability. The obtained criteria can be used as theoretical guidance to stabilize neural networks in practical applications when stochastic noise is taken into consideration.

  8. Serotonin 2A Receptor Signaling Underlies LSD-induced Alteration of the Neural Response to Dynamic Changes in Music.

    Science.gov (United States)

    Barrett, Frederick S; Preller, Katrin H; Herdener, Marcus; Janata, Petr; Vollenweider, Franz X

    2017-09-28

    Classic psychedelic drugs (serotonin 2A, or 5HT2A, receptor agonists) have notable effects on music listening. In the current report, blood oxygen level-dependent (BOLD) signal was collected during music listening in 25 healthy adults after administration of placebo, lysergic acid diethylamide (LSD), and LSD following pretreatment with the 5HT2A antagonist ketanserin, to investigate the role of 5HT2A receptor signaling in the neural response to the time-varying tonal structure of music. Tonality-tracking analysis of BOLD data revealed that 5HT2A receptor signaling alters the neural response to music in brain regions supporting basic and higher-level musical and auditory processing, and in areas involved in memory, emotion, and self-referential processing. This suggests a critical role of 5HT2A receptor signaling in supporting the neural tracking of dynamic tonal structure in music, as well as in supporting the associated increases in emotionality, connectedness, and meaningfulness in response to music that are commonly observed after the administration of LSD and other psychedelics. Together, these findings inform the neuropsychopharmacology of music perception and cognition, meaningful music listening experiences, and altered perception of music during psychedelic experiences.

  9. Numerical simulation of particle dynamics in storage rings using BETACOOL program

    International Nuclear Information System (INIS)

    Meshkov, I.N.; Pivin, R.V.; Sidorin, A.O.; Smirnov, A.V.; Trubnikov, G.V.

    2006-01-01

    The BETACOOL program, developed by the JINR electron cooling group, is a kit of algorithms based on a common format of input and output files. The program is oriented to the simulation of ion beam dynamics in a storage ring in the presence of cooling and heating effects. The version presented in this report includes three basic algorithms: simulation of the evolution in time of the rms parameters of the ion distribution function, simulation of the distribution function evolution using a Monte-Carlo method, and a tracking algorithm based on the molecular dynamics technique. General processes to be investigated with the program are intrabeam scattering in the ion beam, electron cooling, and interaction with residual gas and with an internal target.

  10. Dynamic methylation and expression of Oct4 in early neural stem cells.

    Science.gov (United States)

    Lee, Shih-Han; Jeyapalan, Jennie N; Appleby, Vanessa; Mohamed Noor, Dzul Azri; Sottile, Virginie; Scotting, Paul J

    2010-09-01

    Neural stem cells are a multipotent population of tissue-specific stem cells with a broad but limited differentiation potential. However, recent studies have shown that over-expression of the pluripotency gene, Oct4, alone is sufficient to initiate a process by which these cells can form 'induced pluripotent stem cells' (iPS cells) with the same broad potential as embryonic stem cells. This led us to examine the expression of Oct4 in endogenous neural stem cells, as data regarding its expression in neural stem cells in vivo are contradictory and incomplete. In this study we have therefore analysed the expression of Oct4 and other genes associated with pluripotency throughout development of the mouse CNS and in neural stem cells grown in vitro. We find that Oct4 is still expressed in the CNS at E8.5, but that this expression declines rapidly until it is undetectable by E15.5. This decline is coincident with the gradual methylation of the Oct4 promoter and proximal enhancer. Immunostaining suggests that the Oct4 protein is predominantly cytoplasmic in location. We also found that neural stem cells from all ages expressed the pluripotency-associated genes Sox2, c-Myc, Klf4 and Nanog. These data provide an explanation for the varying behaviour of cells from the early neuroepithelium at different stages of development. The expression of these genes also provides an indication of why Oct4 alone is sufficient to induce iPS formation in neural stem cells at later stages.

  11. A design philosophy for multi-layer neural networks with applications to robot control

    Science.gov (United States)

    Vadiee, Nader; Jamshidi, MO

    1989-01-01

    A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates faulty sensory information. The proposed self-adaptive processing technique has great promise for integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model will be derived to validate the cited hypotheses. An extensive software program will be developed to simulate a typical pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictive behavior, which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mapping from a multitude of input excitatory classes to an output or decision environment. It can be used for coordinating different sensory inputs and the past experience of a dynamic system, and for actuating signals. The commercial applications of this project can be the creation of special-purpose neuro-computer hardware for spatio-temporal pattern recognition in such areas as air defense systems, e.g., target tracking and recognition. Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.

  12. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
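
    The sigma-modification updating law mentioned in the abstract can be illustrated on a scalar example: a first-order unstable plant is driven to track a stable reference model while the controller parameters adapt with a leakage (sigma) term. The plant, gains, and the purely linear setting are assumptions for the sketch — the paper treats a class of nonlinear plants with a neural-network compensator.

```python
# Scalar model reference adaptive control with a sigma-modification law.
dt, T = 0.001, 40.0
a_p = 1.0                  # unknown, unstable plant pole:  x' = a_p*x + u
x, x_m = 0.0, 0.0
th1, th2 = 0.0, 0.0        # adjustable feedback / feedforward gains
gamma, sigma = 5.0, 0.01   # adaptation gain and leakage coefficient
r = 1.0                    # constant reference input

for _ in range(int(T / dt)):
    u = th1 * x + th2 * r
    e = x - x_m            # tracking error w.r.t. the reference model
    # sigma-modification: the leakage terms keep the parameters bounded
    th1 += dt * (-gamma * e * x - sigma * gamma * th1)
    th2 += dt * (-gamma * e * r - sigma * gamma * th2)
    x += dt * (a_p * x + u)
    x_m += dt * (-2.0 * x_m + 2.0 * r)   # reference model: x_m' = -2 x_m + 2 r

print(f"x = {x:.3f}, x_m = {x_m:.3f}, |e| = {abs(x - x_m):.4f}")
```

    As the abstract notes, the leakage term trades exact convergence for robustness: the tracking error settles into a small neighborhood of zero rather than reaching it exactly.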

  13. Comparison of neural network applications for channel assignment in cellular TDMA networks and dynamically sectored PCS networks

    Science.gov (United States)

    Hortos, William S.

    1997-04-01

    The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield-Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraints and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled for the CAP in emerging mobile networks. The next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled, directional antennas and programmable transceivers. These networks must support the time-varying demands for personal communication services (PCS) that simultaneously carry voice, data and video and, thus, require new dynamic channel assignment (DCA) algorithms. 
This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic to improve the metrics

  14. Fabrication of micropatterned hydrogels for neural culture systems using dynamic mask projection photolithography.

    Science.gov (United States)

    Curley, J Lowry; Jennings, Scott R; Moore, Michael J

    2011-02-11

    Increasingly, patterned cell culture environments are becoming a relevant technique to study cellular characteristics, and many researchers believe in the need for 3D environments to represent in vitro experiments that better mimic in vivo qualities. Studies in fields such as cancer research, neural engineering, cardiac physiology, and cell-matrix interaction have shown cell behavior differs substantially between traditional monolayer cultures and 3D constructs. Hydrogels are used as 3D environments because of their variety, versatility and ability to tailor molecular composition through functionalization. Numerous techniques exist for creation of constructs as cell-supportive matrices, including electrospinning, elastomer stamps, inkjet printing, additive photopatterning, static photomask projection-lithography, and dynamic mask microstereolithography. Unfortunately, these methods involve multiple production steps and/or equipment not readily adaptable to conventional cell and tissue culture methods. The technique employed in this protocol adapts the latter two methods, using a digital micromirror device (DMD) to create dynamic photomasks for crosslinking geometrically specific poly-(ethylene glycol) (PEG) hydrogels, induced through UV-initiated free radical polymerization. The resulting "2.5D" structures provide a constrained 3D environment for neural growth. We employ a dual-hydrogel approach, where PEG serves as a cell-restrictive region supplying structure to an otherwise shapeless but cell-permissive self-assembling gel made from either Puramatrix or agarose. The process is a quick, simple, one-step fabrication which is highly reproducible and easily adapted for use with conventional cell culture methods and substrates. Whole tissue explants, such as embryonic dorsal root ganglia (DRG), can be incorporated into the dual hydrogel constructs for experimental assays such as neurite outgrowth. Additionally, dissociated cells can be encapsulated in the

  15. Optimization of Algorithms Using Extensions of Dynamic Programming

    KAUST Repository

    AbouEisha, Hassan M.

    2017-04-09

    We study and answer questions related to the complexity of various important problems such as: multi-frontal solvers of the hp-adaptive finite element method, sorting, and majority. We advocate the use of dynamic programming as a viable tool to study optimal algorithms for these problems. The main approach used to attack these problems is modeling classes of algorithms that may solve a given problem using a discrete model of computation, then defining cost functions on this discrete structure that reflect different complexity measures of the represented algorithms. As a last step, dynamic programming algorithms are designed and used to optimize those models (algorithms) and to obtain exact results on the complexity of the studied problems. The first part of the thesis presents a novel model of computation (the element partition tree) that represents a class of algorithms for multi-frontal solvers, along with cost functions reflecting various complexity measures such as time and space. It then introduces dynamic programming algorithms for multi-stage and bi-criteria optimization of element partition trees. In addition, it presents results based on optimal element partition trees for famous benchmark meshes, such as meshes with point and edge singularities. New improved heuristics for those benchmark meshes were obtained based on insights from the optimal results found by our algorithms. The second part of the thesis starts by introducing a general problem to which different problems can be reduced, and shows how to use a decision table to model such a problem. We describe how decision trees and decision tests for this table correspond to adaptive and non-adaptive algorithms for the original problem. We present exact bounds on the average time complexity of adaptive algorithms for the eight-element sorting problem. Then bounds on adaptive and non-adaptive algorithms for a variant of the majority problem are introduced. Adaptive algorithms are modeled as decision trees whose depth

  16. Coordinated three-dimensional motion of the head and torso by dynamic neural networks.

    Science.gov (United States)

    Kim, J; Hemami, H

    1998-01-01

    The problem of trajectory tracking control of a three-dimensional (3D) model of the human upper torso and head is considered. The torso and the head are modeled as two rigid bodies connected at one point, and the Newton-Euler method is used to derive the nonlinear differential equations that govern the motion of the system. The two-link system is driven by six pairs of muscle-like actuators that possess physiologically inspired alpha-like and gamma-like inputs, and spindle-like and Golgi tendon organ-like outputs. These outputs are utilized as reflex feedback for stability and stiffness control, in a long-loop feedback for the purpose of estimating the state of the system (somesthesis), and as part of the input to the controller. Ideal delays of different durations are included in the feedforward and feedback paths of the system to emulate such delays encountered in physiological systems. Dynamical neural networks are trained to learn effective control of the desired maneuvers of the system. The feasibility of the controller is demonstrated by computer simulation of the successful execution of the desired maneuvers. This work demonstrates the capabilities of neural circuits in controlling highly nonlinear systems with multiple delays in their feedforward and feedback paths. The ultimate long-range goal of this research is toward understanding the working of the central nervous system in controlling movement. It is an interdisciplinary effort relying on mechanics, biomechanics, neuroscience, system theory, physiology and anatomy, and its short-range relevance to rehabilitation must be noted.

  17. A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology

    Science.gov (United States)

    Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren

    2005-01-01

    A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…

  18. Using deep recurrent neural network for direct beam solar irradiance cloud screening

    Science.gov (United States)

    Chen, Maosi; Davis, John M.; Liu, Chaoshun; Sun, Zhibin; Zempila, Melina Maria; Gao, Wei

    2017-09-01

    Cloud screening is an essential procedure for in-situ calibration and atmospheric properties retrieval from the (UV-)MultiFilter Rotating Shadowband Radiometer [(UV-)MFRSR]. A previous study explored a cloud screening algorithm for direct-beam (UV-)MFRSR voltage measurements based on a stability assumption over a long time period (typically a half day or a whole day). Designing such an algorithm requires in-depth understanding of radiative transfer and delicate data manipulation. Recent rapid developments in deep neural networks and computing hardware have opened a window for modeling complicated end-to-end systems with a standardized strategy. In this study, a multi-layer dynamic bidirectional recurrent neural network is built for determining the cloudiness at each time point, trained with a 17-year dataset and tested with another 1-year dataset. The dataset consists of the daily 3-minute cosine-corrected voltages, airmasses, and the corresponding cloud/clear-sky labels at two stations of the USDA UV-B Monitoring and Research Program. The results show that the optimized neural network model (3 layers, 250 hidden units, and 80 epochs of training) has an overall test accuracy of 97.87% (97.56% for the Oklahoma site and 98.16% for the Hawaii site). Generally, the neural network model grasps the key concept of the original model: using data from the entire day, rather than short nearby measurements, to perform cloud screening. A scrutiny of the logits layer suggests that the neural network model automatically learns a way to calculate a quantity similar to total optical depth and finds an appropriate threshold for cloud screening.
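
    The role of the bidirectional recurrence — letting the decision at each time point see both earlier and later measurements from the same day — can be shown structurally with a minimal NumPy forward pass. Sizes, random weights, and the two-feature input are illustrative; this is the untrained architecture, not the paper's trained model.

```python
import numpy as np

# Each time step's feature vector combines a forward pass (past context) and
# a backward pass (future context), so the per-step cloud/clear decision can
# draw on the whole day's measurements.
rng = np.random.default_rng(3)
T, n_in, n_h = 48, 2, 8            # time steps, inputs (voltage, airmass), hidden units

x = rng.normal(size=(T, n_in))     # one day of normalized measurements
Wf = rng.normal(0, 0.3, (n_h, n_in + n_h))   # forward-cell weights
Wb = rng.normal(0, 0.3, (n_h, n_in + n_h))   # backward-cell weights
Wo = rng.normal(0, 0.3, (2, 2 * n_h))        # logits layer: cloud vs clear

hf = np.zeros((T, n_h))
hb = np.zeros((T, n_h))
h = np.zeros(n_h)
for t in range(T):                 # forward-in-time recurrence
    h = np.tanh(Wf @ np.concatenate([x[t], h]))
    hf[t] = h
h = np.zeros(n_h)
for t in reversed(range(T)):       # backward-in-time recurrence
    h = np.tanh(Wb @ np.concatenate([x[t], h]))
    hb[t] = h

logits = np.concatenate([hf, hb], axis=1) @ Wo.T   # (T, 2) per-step scores
labels = logits.argmax(axis=1)                     # 0 = clear, 1 = cloud
print(logits.shape, labels.shape)
```

    Stacking such layers and training the logits against cloud/clear labels yields the per-time-point classifier described in the abstract.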

  19. Time-lapse imaging of neural development: zebrafish lead the way into the fourth dimension.

    Science.gov (United States)

    Rieger, Sandra; Wang, Fang; Sagasti, Alvaro

    2011-07-01

    Time-lapse imaging is often the only way to appreciate fully the many dynamic cell movements critical to neural development. Zebrafish possess many advantages that make them the best vertebrate model organism for live imaging of dynamic developmental events. This review will discuss technical considerations of time-lapse imaging experiments in zebrafish, describe selected examples of imaging studies in zebrafish that revealed new features or principles of neural development, and consider the promise and challenges of future time-lapse studies of neural development in zebrafish embryos and adults.

  20. An optimal maintenance policy for machine replacement problem using dynamic programming

    OpenAIRE

    Mohsen Sadegh Amalnik; Morteza Pourgharibshahi

    2017-01-01

    In this article, we present an acceptance sampling plan for the machine replacement problem based on a backward dynamic programming model. Discounted dynamic programming is used to solve a two-state machine replacement problem. We design a model for maintenance by considering the quality of the item produced. The purpose of the proposed model is to determine the optimal threshold policy for maintenance over a finite time horizon. We create a decision tree based on a sequential sampling inc...
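
    The backward dynamic programming model can be made concrete with a minimal finite-horizon, two-state example: each period the machine is GOOD or WORN, and one chooses to keep or replace it. All numbers (costs, wear probability, discount factor, horizon) are illustrative assumptions, not the article's data.

```python
# Backward induction for a two-state machine replacement problem.
GOOD, WORN = 0, 1
N = 12                  # planning horizon (periods)
operate = [1.0, 4.0]    # per-period operating cost by state
replace_cost = 5.0      # replacement restores the machine to GOOD
p_wear = 0.3            # chance a GOOD machine is WORN next period
beta = 0.95             # per-period discount factor

V = [0.0, 0.0]          # terminal values: no cost after the horizon
policy = []
for _ in range(N):
    # cost of running this period with a GOOD machine, then acting optimally
    run_good = operate[GOOD] + beta * ((1 - p_wear) * V[GOOD] + p_wear * V[WORN])
    run_worn = operate[WORN] + beta * V[WORN]        # worn machines stay worn
    keep = [run_good, run_worn]
    rep = [replace_cost + run_good, replace_cost + run_good]
    V = [min(keep[s], rep[s]) for s in (GOOD, WORN)]
    policy.append(["keep" if keep[s] <= rep[s] else "replace" for s in (GOOD, WORN)])

# policy[-1] is the optimal decision rule for the first period
print([round(v, 2) for v in V], policy[-1])
```

    The backward sweep produces the threshold structure the article seeks: with these numbers, keep a GOOD machine and replace a WORN one whenever enough periods remain for the replacement cost to pay off.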

  1. Uncertainties in neural network model based on carbon dioxide concentration for occupancy estimation

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Azimil Gani; Rahman, Haolia; Kim, Jung-Kyung; Han, Hwataik [Kookmin University, Seoul (Korea, Republic of)

    2017-05-15

    Demand control ventilation is employed to save energy by adjusting the airflow rate according to the ventilation load of a building. This paper investigates a method for occupancy estimation using a dynamic neural network model based on carbon dioxide concentration in an occupied zone. The method can be applied to most commercial and residential buildings where human effluents are to be ventilated. The indoor simulation program CONTAMW is used to generate indoor CO2 data corresponding to various occupancy schedules and airflow patterns to train the neural network models. Coefficients of variation are obtained depending on the complexities of the physical parameters as well as the system parameters of the neural networks, such as the numbers of hidden neurons and tapped delay lines. We intend to identify the uncertainties caused by the model parameters themselves, by excluding uncertainties in input data inherent in measurement. Our results show that estimation accuracy is highly influenced by the frequency of occupancy variation but not significantly influenced by fluctuation in the airflow rate. Furthermore, we discuss the applicability and validity of the present method, based on passive environmental conditions, for estimating occupancy in a room from the viewpoint of demand control ventilation applications.
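
    The physical reason tapped-delay inputs carry occupancy information is the single-zone CO2 mass balance. The sketch below simulates such a balance and recovers occupancy with a linear tapped-delay regression — a simple stand-in for the paper's dynamic neural network. CONTAMW is not used here, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
dt = 60.0                 # time step [s]
V_room = 150.0            # zone volume [m3]
Q = 0.05                  # ventilation airflow [m3/s]
G = 5.2e-6                # CO2 generation rate per person [m3/s]
C_out = 400.0             # outdoor CO2 concentration [ppm]

# Simulate a day with an hourly-varying occupancy schedule.
n_people = np.repeat(rng.integers(0, 6, 24), 60)   # per-minute occupancy
C = np.empty(n_people.size + 1)
C[0] = C_out
for k, n in enumerate(n_people):
    # single-zone mass balance, concentrations in ppm
    C[k + 1] = C[k] + dt / V_room * (G * n * 1e6 + Q * (C_out - C[k]))

# Tapped-delay regression: occupancy from the current and previous sample.
X = np.column_stack([C[1:], C[:-1], np.ones(n_people.size)])
coef, *_ = np.linalg.lstsq(X, n_people, rcond=None)
estimate = X @ coef
print(f"mean absolute occupancy error: {np.abs(estimate - n_people).mean():.4f}")
```

    Because the noise-free balance is exactly linear in the delayed samples, the regression recovers occupancy almost perfectly; measurement noise and richer airflow dynamics are what motivate the paper's neural network and its uncertainty analysis.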

  2. OpenDx programs for visualization of computational fluid dynamics (CFD) simulations

    International Nuclear Information System (INIS)

    Silva, Marcelo Mariano da

    2008-01-01

    The search for high-performance and low-cost hardware and software solutions has always guided the developments performed at the IEN parallel computing laboratory. In this context, this dissertation describes the development of programs for visualization of computational fluid dynamics (CFD) simulations using the open-source software OpenDx. The programs developed are useful for producing videos and images in two or three dimensions. They are interactive, easy to use, and were designed to serve fluid dynamics researchers. A detailed description of how these programs were developed is given, together with complete instructions on how to use them. The use of OpenDx as a development tool is also introduced. Examples help the reader understand how the programs can be useful for many applications. (author)

  3. Dynamic neural network modeling of HF radar current maps for forecasting oil spill trajectories

    International Nuclear Information System (INIS)

    Tissot, P.; Perez, J.; Kelly, F.J.; Bonner, J.; Michaud, P.

    2001-01-01

    This paper examined the concept of dynamic neural network (NN) modeling for short-term forecasts of coastal high-frequency (HF) radar current maps offshore of Galveston, Texas. HF radar technology is emerging as a viable and affordable way to measure surface currents in real time, and the number of users applying the technology is increasing. A 25 megahertz, two-site Seasonde HF radar system was used to map ocean and bay surface currents along the coast of Texas, where wind and river discharge create complex and rapidly changing current patterns that override the weaker tidal flow component. The HF radar system is particularly useful in this type of setting because its mobility makes it a good marine spill response tool that could provide hourly current maps. This capability helps improve the deployment of response resources. In addition, the NN model recently developed by the Conrad Blucher Institute can be used to forecast water levels during storm events. Forecasted currents are based on time series of current vectors from HF radar plus wind speed, wind direction, and water levels, as well as tidal forecasts. The dynamic NN model was tested to evaluate its performance, and the results were compared with a baseline model which assumes the currents do not change from the time of the forecast up to the forecasted time. The NN model showed improvements over the baseline model for forecast times equal to or greater than 3 hours, but the difference was relatively small. The test demonstrated the ability of the dynamic NN model to link meteorological forcing functions with HF radar current maps. Development of the dynamic NN modeling is still ongoing. 18 refs., 1 tab., 5 figs
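
    The paper's baseline is "persistence": currents are assumed not to change between forecast time and valid time. A sketch of scoring forecasts against that baseline, on a synthetic tidal-like current-speed series rather than HF radar data, with a naive trend extrapolation standing in for the neural network:

```python
import numpy as np

rng = np.random.default_rng(4)

t = np.arange(96.0)                      # hourly samples over four days
speed = 0.3 + 0.1 * np.sin(2 * np.pi * t / 12.4) + 0.02 * rng.normal(size=t.size)

horizon = 3                              # 3-hour-ahead forecast
i = np.arange(1, t.size - horizon)
obs = speed[i + horizon]                 # what actually happened
persistence = speed[i]                   # baseline: "no change" forecast
trend = speed[i] + horizon * (speed[i] - speed[i - 1])   # naive extrapolation

def rmse(forecast):
    return float(np.sqrt(np.mean((forecast - obs) ** 2)))

print(f"persistence RMSE: {rmse(persistence):.3f}, trend RMSE: {rmse(trend):.3f}")
```

    A forecast model is only useful if its RMSE beats the persistence value; this is the comparison the paper reports for its dynamic NN at lead times of 3 hours and beyond.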

  4. Classification of mammographic masses using generalized dynamic fuzzy neural networks

    International Nuclear Information System (INIS)

    Lim, Wei Keat; Er, Meng Joo

    2004-01-01

    In this article, computer-aided classification of mammographic masses using generalized dynamic fuzzy neural networks (GDFNN) is presented. The texture parameters, derived from first-order gradient distribution and gray-level co-occurrence matrices, were computed from the regions of interest. A total of 343 images containing 180 benign masses and 163 malignant masses from the Digital Database for Screening Mammography were analyzed. A fast approach of automatically generating fuzzy rules from training samples was implemented to classify tumors. This work is novel in that it alleviates the problem of requiring a designer to examine all the input-output relationships of a training database in order to obtain the most appropriate structure for the classifier in a conventional computer-aided diagnosis. In this approach, not only the connection weights can be adjusted, but also the structure can be self-adaptive during the learning process. By virtue of the automatic generation of the classifier by the GDFNN learning algorithm, the area under the receiver-operating characteristic curve, A_z, attains 0.868±0.020, which corresponds to a true-positive fraction of 95.0% at a false-positive fraction of 52.8%. The corresponding accuracy is 70.0%, the positive predictive value is 62.0%, and the negative predictive value is 91.4%.
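
    The reported A_z is the area under the ROC curve. A small sketch of how such a figure is computed from classifier scores, using synthetic scores (not the article's mammography data) and the Wilcoxon-Mann-Whitney identity:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic classifier scores: malignant cases score higher on average.
malignant = rng.normal(1.0, 1.0, 163)   # 163 malignant masses, as in the study
benign = rng.normal(0.0, 1.0, 180)      # 180 benign masses

# AUC equals the probability that a randomly chosen malignant case outranks
# a randomly chosen benign one (Wilcoxon-Mann-Whitney statistic).
auc = float(np.mean(malignant[:, None] > benign[None, :]))
print(f"A_z (empirical AUC): {auc:.3f}")
```

    Sweeping a decision threshold over the same scores traces the ROC curve itself, from which operating points such as the quoted 95.0% true-positive / 52.8% false-positive pair are read off.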

  5. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    …dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking…

  6. The modulation of neural gain facilitates a transition between functional segregation and integration in the brain.

    Science.gov (United States)

    Shine, James M; Aburn, Matthew J; Breakspear, Michael; Poldrack, Russell A

    2018-01-29

    Cognitive function relies on a dynamic, context-sensitive balance between functional integration and segregation in the brain. Previous work has proposed that this balance is mediated by global fluctuations in neural gain by projections from ascending neuromodulatory nuclei. To test this hypothesis in silico, we studied the effects of neural gain on network dynamics in a model of large-scale neuronal dynamics. We found that increases in neural gain directed the network through an abrupt dynamical transition, leading to an integrated network topology that was maximal in frontoparietal 'rich club' regions. This gain-mediated transition was also associated with increased topological complexity, as well as increased variability in time-resolved topological structure, further highlighting the potential computational benefits of the gain-mediated network transition. These results support the hypothesis that neural gain modulation has the computational capacity to mediate the balance between integration and segregation in the brain.
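
    The abrupt, gain-mediated transition has a textbook low-dimensional analogue: a recurrent population whose effective feedback gain crosses 1 switches from a single quiescent attractor to nonzero attractors. The one-unit system below illustrates only that mechanism and is an assumption for the sketch, not the paper's large-scale brain-network model.

```python
import numpy as np

# Toy rate unit with self-excitation:  x' = -x + tanh(gain * w * x).
# For gain * w < 1 the origin is the only attractor; for gain * w > 1
# the origin destabilizes and nonzero attractors appear (a pitchfork
# bifurcation), mimicking an abrupt gain-driven regime change.
w = 0.8

def fixed_point(gain, x0=2.0, dt=0.1, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * (-x + np.tanh(gain * w * x))
    return x

low = fixed_point(0.5)    # gain * w = 0.4 < 1: activity decays to zero
high = fixed_point(2.5)   # gain * w = 2.0 > 1: settles on a nonzero attractor
print(round(low, 4), round(high, 4))
```

    In the paper's large-scale model the same kind of gain increase, applied across regions, pushes the whole network through an analogous transition toward the integrated topology described above.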

  7. An algorithm for the solution of dynamic linear programs

    Science.gov (United States)

    Psiaki, Mark L.

    1989-01-01

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code. At the same time, numerical stability is ensured. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor-update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation.
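
    The staircase constraint structure the algorithm exploits can be made concrete by stacking the stage dynamics into one equality system. A minimal sketch (the horizon, matrices, and variable ordering below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Sketch of the staircase equality-constraint matrix of a dynamic linear
# program: stage dynamics x_{t+1} = A x_t + B u_t, with x_0 given, stacked
# into E z = f for z = (u_0, x_1, u_1, x_2, ...). Horizon and matrices are
# illustrative assumptions.
n, m, T = 2, 1, 3
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
E = np.zeros((T * n, T * (n + m)))
for t in range(T):
    r, c = t * n, t * (n + m)
    E[r:r+n, c:c+m] = -B                 # control u_t
    E[r:r+n, c+m:c+m+n] = np.eye(n)      # next state x_{t+1}
    if t > 0:
        E[r:r+n, c-n:c] = -A             # previous state x_t
print((np.abs(E) > 0).astype(int))       # nonzeros form a staircase band
```

    The block-banded pattern printed at the end is what a staircase QL factorization sweeps through, stage by stage, from the last rows and columns.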

  8. NMDA Receptor Signaling Is Important for Neural Tube Formation and for Preventing Antiepileptic Drug-Induced Neural Tube Defects.

    Science.gov (United States)

    Sequerra, Eduardo B; Goyal, Raman; Castro, Patricio A; Levin, Jacqueline B; Borodinsky, Laura N

    2018-05-16

    Failure of neural tube closure leads to neural tube defects (NTDs), which can have serious neurological consequences or be lethal. Use of antiepileptic drugs (AEDs) during pregnancy increases the incidence of NTDs in offspring by unknown mechanisms. Here we show that during Xenopus laevis neural tube formation, neural plate cells exhibit spontaneous calcium dynamics that are partially mediated by glutamate signaling. We demonstrate that NMDA receptors are important for the formation of the neural tube and that the loss of their function induces an increase in neural plate cell proliferation and impairs neural cell migration, which result in NTDs. We present evidence that the AED valproic acid perturbs glutamate signaling, leading to NTDs that are rescued with varied efficacy by preventing DNA synthesis, activating NMDA receptors, or recruiting the NMDA receptor target ERK1/2. These findings may prompt mechanistic identification of AEDs that do not interfere with neural tube formation. SIGNIFICANCE STATEMENT Neural tube defects are one of the most common birth defects. Clinical investigations have determined that the use of antiepileptic drugs during pregnancy increases the incidence of these defects in the offspring by unknown mechanisms. This study discovers that glutamate signaling regulates neural plate cell proliferation and oriented migration and is necessary for neural tube formation. We demonstrate that the widely used antiepileptic drug valproic acid interferes with glutamate signaling and consequently induces neural tube defects, challenging the current hypotheses arguing that they are side effects of this antiepileptic drug that cause the increased incidence of these defects. Understanding the mechanisms of neurotransmitter signaling during neural tube formation may contribute to the identification and development of antiepileptic drugs that are safer during pregnancy. Copyright © 2018 the authors 0270-6474/18/384762-12$15.00/0.

  9. Impact of leakage delay on bifurcation in high-order fractional BAM neural networks.

    Science.gov (United States)

    Huang, Chengdai; Cao, Jinde

    2018-02-01

    The effects of leakage delay on the dynamics of integer-order neural networks have lately received considerable attention. It has been confirmed that fractional-order models more appropriately uncover the dynamical properties of neural networks, but results on fractional neural networks with leakage delay are relatively few. This paper concentrates on the issue of bifurcation for high-order fractional bidirectional associative memory (BAM) neural networks involving leakage delay, and makes a first attempt to tackle the stability and bifurcation of such networks with time delay in the leakage terms. The conditions for the appearance of bifurcation in the proposed systems with leakage delay are first established by adopting the time delay as a bifurcation parameter. Then, the bifurcation criteria for the system without leakage delay are acquired. Comparative analysis reveals that the stability performance of the proposed high-order fractional neural networks is critically weakened by leakage delay, which cannot be overlooked. Numerical examples are ultimately exhibited to attest the efficiency of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Optimal Risk Reduction in the Railway Industry by Using Dynamic Programming

    OpenAIRE

    Michael Todinov; Eberechi Weli

    2013-01-01

    The paper suggests for the first time the use of dynamic programming techniques for optimal risk reduction in the railway industry. It is shown that by using the concept ‘amount of removed risk by a risk reduction option’, the problem related to optimal allocation of a fixed budget to achieve a maximum risk reduction in the railway industry can be reduced to an optimisation problem from dynamic programming. For n risk reduction options and size of the available risk reduction budget B (expres...
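
    The reduction described above — allocating a fixed budget B over n risk reduction options so as to maximize the total amount of removed risk — maps onto a classic 0/1 knapsack recursion solvable by dynamic programming. A minimal sketch with hypothetical costs and risk amounts:

```python
# Sketch of the reduction: each option has a cost and an "amount of removed
# risk"; choosing a subset under budget B is a 0/1 knapsack solved by DP.
# Costs and risk amounts below are hypothetical, not from the paper.
def max_removed_risk(options, budget):
    """options: list of (cost, removed_risk) pairs; budget: integer units."""
    best = [0.0] * (budget + 1)        # best[b] = max risk removed with budget b
    for cost, risk in options:
        for b in range(budget, cost - 1, -1):   # backwards: use each option once
            best[b] = max(best[b], best[b - cost] + risk)
    return best[budget]

options = [(3, 40.0), (2, 25.0), (4, 50.0), (1, 10.0)]
print(max_removed_risk(options, 6))    # prints 75.0 (options with costs 3, 2, 1)
```

    The backwards inner loop ensures each risk reduction option is applied at most once, matching the one-shot nature of safety investments.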

  11. Attention training improves aberrant neural dynamics during working memory processing in veterans with PTSD.

    Science.gov (United States)

    McDermott, Timothy J; Badura-Brack, Amy S; Becker, Katherine M; Ryan, Tara J; Bar-Haim, Yair; Pine, Daniel S; Khanna, Maya M; Heinrichs-Graham, Elizabeth; Wilson, Tony W

    2016-12-01

    Posttraumatic stress disorder (PTSD) is associated with executive functioning deficits, including disruptions in working memory (WM). Recent studies suggest that attention training reduces PTSD symptomatology, but the underlying neural mechanisms are unknown. We used high-density magnetoencephalography (MEG) to evaluate whether attention training modulates brain regions serving WM processing in PTSD. Fourteen veterans with PTSD completed a WM task during a 306-sensor MEG recording before and after 8 sessions of attention training treatment. A matched comparison sample of 12 combat-exposed veterans without PTSD completed the same WM task during a single MEG session. To identify the spatiotemporal dynamics, each group's data were transformed into the time-frequency domain, and significant oscillatory brain responses were imaged using a beamforming approach. All participants exhibited activity in left hemispheric language areas consistent with a verbal WM task. Additionally, veterans with PTSD and combat-exposed healthy controls each exhibited oscillatory responses in right hemispheric homologue regions (e.g., right Broca's area); however, these responses were in opposite directions. Group differences in oscillatory activity emerged in the theta band (4-8 Hz) during encoding and in the alpha band (9-12 Hz) during maintenance and were significant in right prefrontal and right supramarginal and inferior parietal regions. Importantly, following attention training, these significant group differences were reduced or eliminated. This study provides initial evidence that attention training improves aberrant neural activity in brain networks serving WM processing.

  12. The dynamic programming high-order Dynamic Bayesian Networks learning for identifying effective connectivity in human brain from fMRI.

    Science.gov (United States)

    Dang, Shilpa; Chaudhury, Santanu; Lall, Brejesh; Roy, Prasun Kumar

    2017-06-15

    Determination of effective connectivity (EC) among brain regions using fMRI is helpful in understanding the underlying neural mechanisms. Dynamic Bayesian Networks (DBNs) are an appropriate class of probabilistic graphical temporal models that have been used in the past to model EC from fMRI, specifically at order one. High-order DBNs (HO-DBNs) have still not been explored for fMRI data. A fundamental problem in the structure learning of an HO-DBN is the high computational burden and low accuracy of the existing heuristic search techniques used for EC detection from fMRI. In this paper, we propose using the dynamic programming (DP) principle, along with integration of properties of the scoring function, to reduce the search space for structure learning of HO-DBNs and, finally, for identifying EC from fMRI, which to the best of our knowledge has not been done before. The proposed exact search-&-score learning approach, HO-DBN-DP, is an extension of a technique originally devised for learning a BN's structure from static data (Singh and Moore, 2005). The effectiveness of the structure learning is shown on a synthetic fMRI dataset. The algorithm reaches the globally optimal solution in appreciably reduced time-complexity compared with the static counterpart, due to the integration of properties; a proof of optimality is provided. The results demonstrate that HO-DBN-DP is more accurate and faster than the structure-learning algorithms currently used for identifying EC from fMRI. The EC estimated from real data by HO-DBN-DP shows greater consistency with previous literature than the classical Granger Causality method. Hence, the DP algorithm can be employed for reliable EC estimates from experimental fMRI data. Copyright © 2017 Elsevier B.V. All rights reserved.
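
    As simplified background (not the HO-DBN-DP procedure itself): when a DBN's edges run only from earlier time slices into the current one, acyclicity is automatic, so structure learning decomposes into an independent best-parent-set search per variable, which scoring-function properties can then prune. A sketch with a hypothetical score function:

```python
from itertools import combinations

# Simplified background sketch (order-one, inter-slice edges only): because
# parents come from the previous slice, no acyclicity constraint couples the
# nodes, and the best-scoring parent set can be searched per variable.
# `best_parents` and the toy score below are hypothetical illustrations.
def best_parents(i, candidates, score, max_parents=2):
    """Exhaustively score parent sets of size <= max_parents for variable i."""
    best = (score(i, ()), ())
    for k in range(1, max_parents + 1):
        for ps in combinations(candidates, k):
            s = score(i, ps)
            if s > best[0]:
                best = (s, ps)
    return best

# toy score: variable 2 is informative, extra parents are penalized
toy = lambda i, ps: (1.0 if 2 in ps else 0.0) - 0.1 * len(ps)
```

    With this toy score, `best_parents(0, [1, 2, 3], toy)` selects the singleton parent set `(2,)`; the DP and pruning in the paper aim to avoid exactly this kind of exhaustive enumeration at scale.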

  13. Dynamics and spike trains statistics in conductance-based integrate-and-fire neural networks with chemical and electric synapses

    International Nuclear Information System (INIS)

    Cofré, Rodrigo; Cessac, Bruno

    2013-01-01

    We investigate the effect of electric synapses (gap junctions) on collective neuronal dynamics and spike statistics in a conductance-based integrate-and-fire neural network, driven by Brownian noise, where conductances depend upon spike history. We compute explicitly the time evolution operator and show that, given the spike-history of the network and the membrane potentials at a given time, the further dynamical evolution can be written in a closed form. We show that spike train statistics is described by a Gibbs distribution whose potential can be approximated with an explicit formula, when the noise is weak. This potential form encompasses existing models for spike trains statistics analysis such as maximum entropy models or generalized linear models (GLM). We also discuss the different types of correlations: those induced by a shared stimulus and those induced by neurons interactions

  14. Neural network application to aircraft control system design

    Science.gov (United States)

    Troudet, Terry; Garg, Sanjay; Merrill, Walter C.

    1991-01-01

    The feasibility of using artificial neural networks as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research are identified to enhance the practical applicability of neural networks to flight control design.

  16. Data systems and computer science: Neural networks base R/T program overview

    Science.gov (United States)

    Gulati, Sandeep

    1991-01-01

    The research base, in the U.S. and abroad, for the development of neural network technology is discussed. The technical objectives are to develop and demonstrate adaptive, neural information processing concepts. The leveraging of external funding is also discussed.

  17. Modeling and Speed Control of Induction Motor Drives Using Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Jamuna

    2010-08-01

    Speed control of induction motor drives using neural networks is presented. The mathematical model of a single-phase induction motor is developed. A new Simulink model for a neural-network-controlled bidirectional chopper fed single-phase induction motor is proposed. Under normal operation, the true drive parameters are identified in real time and converted into the controller parameters through multilayer forward computation by neural networks. A comparative study has been made between the conventional and neural network controllers. It is observed that the neural-network-controlled drive system has better dynamic performance, reduced overshoot and faster transient response than the conventionally controlled system.

  18. Region stability analysis and tracking control of memristive recurrent neural network.

    Science.gov (United States)

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Firstly, simulations show that a memristive recurrent neural network has more complex dynamics than the traditional recurrent neural network. Then it is derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. At last, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.
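
    The HP memristor referred to here is commonly described by a linear dopant-drift model, which is easy to simulate under a sinusoidal drive. A minimal sketch (all parameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Sketch of the linear dopant-drift HP memristor model under a sinusoidal
# drive. All parameter values are illustrative assumptions.
R_on, R_off = 100.0, 16e3          # fully doped / undoped resistances (ohm)
D, mu = 10e-9, 1e-14               # device thickness (m), dopant mobility (m^2/(V s))
dt, steps = 1e-4, 20000
w = 0.1 * D                        # width of the doped region (state variable)
t = np.arange(steps) * dt
v = np.sin(2 * np.pi * 1.0 * t)    # 1 V, 1 Hz sinusoidal voltage
M = np.empty(steps)
for k in range(steps):
    M[k] = R_on * (w / D) + R_off * (1.0 - w / D)   # memristance
    i = v[k] / M[k]
    w += mu * R_on / D * i * dt                     # linear dopant drift
    w = min(max(w, 0.0), D)                         # hard boundary conditions
```

    Plotting `i` against `v` would show the pinched hysteresis loop characteristic of a memristor; it is this history-dependent conductance that makes the device a candidate synapse.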

  19. Simultaneous multichannel signal transfers via chaos in a recurrent neural network.

    Science.gov (United States)

    Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Furumai, Noriyuki; Nara, Shigetoshi

    2015-05-01

    We propose a neural network model that demonstrates the phenomenon of signal transfer between separated neuron groups via other chaotic neurons that show no apparent correlations with the input signal. The model is a recurrent neural network in which it is supposed that synchronous behavior between small groups of input and output neurons has been learned as fragments of high-dimensional memory patterns, and depletion of neural connections results in chaotic wandering dynamics. Computer experiments show that when a strong oscillatory signal is applied to an input group in the chaotic regime, the signal is successfully transferred to the corresponding output group, although no correlation is observed between the input signal and the intermediary neurons. Signal transfer is also observed when multiple signals are applied simultaneously to separate input groups belonging to different memory attractors. In this sense simultaneous multichannel communications are realized, and the chaotic neural dynamics acts as a signal transfer medium in which the signal appears to be hidden.

  20. Three-dimensional interactive Molecular Dynamics program for the study of defect dynamics in crystals

    Science.gov (United States)

    Patriarca, M.; Kuronen, A.; Robles, M.; Kaski, K.

    2007-01-01

    The study of crystal defects and the complex processes underlying their formation and time evolution has motivated the development of the program ALINE for interactive molecular dynamics experiments. This program couples a molecular dynamics code to a Graphical User Interface and runs on a UNIX-X11 Window System platform with the MOTIF library, which is contained in many standard Linux releases. ALINE is written in C, thus giving the user the possibility to modify the source code, and, at the same time, provides an effective and user-friendly framework for numerical experiments, in which the main parameters can be interactively varied and the system visualized in various ways. We illustrate the main features of the program through some examples of detection and dynamical tracking of point defects, linear defects, and planar defects, such as stacking faults in lattice-mismatched heterostructures. Program summary: Title of program: ALINE. Catalogue identifier: ADYJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYJ_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers for which the program is designed and others on which it has been tested: DEC ALPHA 300, Intel i386-compatible computers, G4 Apple computers. Installations: Laboratory of Computational Engineering, Helsinki University of Technology, Helsinki, Finland. Operating systems under which the program has been tested: True64 UNIX, Linux-i386, Mac OS X 10.3 and 10.4. Programming language used: Standard C and MOTIF libraries. Memory required to execute with typical data: 6 Mbytes, but may be larger depending on the system size. No. of lines in distributed program, including test data, etc.: 16,901. No. of bytes in distributed program, including test data, etc.: 449,559. Distribution format: tar.gz. Nature of physical problem: Some phenomena involving defects take place inside three-dimensional crystals at times which can hardly be predicted.
For this reason they are

  1. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Large numbers of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrain, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs: how can a vehicle's trajectories be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once being fully trained, given current vehicle states, winds, and desired future trajectory, the neural
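
    The idea of learning a vehicle's responses from data can be sketched with a toy fully connected network fitted to synthetic response data (the dynamics, network size, and training settings below are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Toy sketch: a small fully connected network learns a "response map" from
# synthetic data. The response (a made-up linear relation), network size,
# and training settings are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (256, 2))          # columns: current state, command
y = 0.9 * X[:, :1] + 0.1 * X[:, 1:]       # hypothetical "true" next state

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr, losses = 0.5, []
for _ in range(2000):                      # full-batch gradient descent
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    g = 2 * err / len(X)                   # dLoss/dpred
    gW2, gb2 = H.T @ g, g.sum(0)
    gH = (g @ W2.T) * (1 - H ** 2)         # backprop through tanh layer
    gW1, gb1 = X.T @ gH, gH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

    Once such a network fits the observed responses, it can stand in for the unknown manufacturer control loop when propagating trajectories in simulation.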

  2. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.

    Science.gov (United States)

    Goudar, Vishwa; Buonomano, Dean V

    2018-03-14

    Much of the information the brain processes and stores is temporal in nature-a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds-we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.

  3. Neural network models: from biology to many - body phenomenology

    International Nuclear Information System (INIS)

    Clark, J.W.

    1993-01-01

    The current surge of research on the practical side of neural networks and their utility in memory storage/recall, pattern recognition and classification is surveyed in this article. The initial attraction of neural networks as dynamical and statistical systems is also discussed. From the viewpoint of a many-body theorist, the neurons may be thought of as particles, and the weighted connections between the units as the interactions between these particles. Finally, the author suggests that the impressive capabilities of artificial neural networks in pattern recognition and classification may be exploited to solve data-management problems in experimental physics, and that neural networks may lead to radically new theoretical descriptions of physical problems. (A.B.)

  4. Dynamic Power Management for Portable Hybrid Power-Supply Systems Utilizing Approximate Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Jooyoung Park

    2015-05-01

    Recently, the optimization of power flows in portable hybrid power-supply systems (HPSSs has become an important issue with the advent of a variety of mobile systems and hybrid energy technologies. In this paper, a control strategy is considered for dynamically managing power flows in portable HPSSs employing batteries and supercapacitors. Our dynamic power management strategy utilizes the concept of approximate dynamic programming (ADP. ADP methods are important tools in the fields of stochastic control and machine learning, and the utilization of these tools for practical engineering problems is now an active and promising research field. We propose an ADP-based procedure based on optimization under constraints including the iterated Bellman inequalities, which can be solved by convex optimization carried out offline, to find the optimal power management rules for portable HPSSs. The effectiveness of the proposed procedure is tested through dynamic simulations for smartphone workload scenarios, and simulation results show that the proposed strategy can successfully cope with uncertain workload demands.

  5. Transplantation dose alters the dynamics of human neural stem cell engraftment, proliferation and migration after spinal cord injury

    Directory of Open Access Journals (Sweden)

    Katja M. Piltti

    2015-09-01

    The effect of transplantation dose on the spatiotemporal dynamics of human neural stem cell (hNSC engraftment has not been quantitatively evaluated in the central nervous system. We investigated changes over time in engraftment/survival, proliferation, and migration of multipotent human central nervous system-derived neural stem cells (hCNS-SCns transplanted at doses ranging from 10,000 to 500,000 cells in spinal cord injured immunodeficient mice. Transplant dose was inversely correlated with measures of donor cell proliferation at 2 weeks post-transplant (WPT and dose-normalized engraftment at 16 WPT. Critically, mice receiving the highest cell dose exhibited an engraftment plateau, in which the total number of engrafted human cells never exceeded the initial dose. These data suggest that donor cell expansion was inversely regulated by target niche parameters and/or transplantation density. Investigation of the response of donor cells to the host microenvironment should be a key variable in defining target cell dose in pre-clinical models of CNS disease and injury.

  6. Neural network for adapting nuclear power plant control for wide-range operation

    International Nuclear Information System (INIS)

    Ku, C.C.; Lee, K.Y.; Edwards, R.M.

    1991-01-01

    A new concept of using neural networks has been evaluated for optimal control of a nuclear reactor. The neural network uses the architecture of a standard backpropagation network; however, a new dynamic learning algorithm has been developed to capture the underlying system dynamics. The learning algorithm is based on parameter estimation for dynamic systems. The approach is demonstrated on an optimal reactor temperature controller by adjusting the feedback gains for wide-range operation. Application of optimal control to a reactor has been considered for improving temperature response using a robust fifth-order reactor power controller. Conventional gain scheduling can be employed to extend the range of good performance to accommodate large changes in power, where nonlinear characteristics significantly modify the dynamics of the power plant. Gain scheduling is developed based on expected parameter variations, and it may be advantageous to further adapt feedback gains on-line to better match actual plant performance. A neural network approach is used here to adapt the gains to better accommodate plant uncertainties and thereby achieve improved robustness characteristics.
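
    The conventional gain scheduling described above can be sketched as interpolation between gains tuned at fixed operating points; the neural approach then adapts these values on-line. The power levels and gain values below are hypothetical:

```python
import numpy as np

# Sketch of conventional gain scheduling: feedback gains tuned at a few
# operating power levels (hypothetical values) and interpolated in between.
power_pts = np.array([20.0, 50.0, 100.0])   # % rated power
gain_pts = np.array([4.0, 2.5, 1.5])        # hypothetical feedback gains

def scheduled_gain(power):
    """Linearly interpolate the feedback gain at the current power level."""
    return float(np.interp(power, power_pts, gain_pts))
```

    A neural adapter would start from this schedule and nudge the interpolated gain on-line to absorb plant uncertainties, rather than relying solely on the precomputed operating-point table.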

  7. Parametric models to relate spike train and LFP dynamics with neural information processing.

    Science.gov (United States)

    Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan

    2012-01-01

    Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial

  8. Adaptive dynamic programming with applications in optimal control

    CERN Document Server

    Liu, Derong; Wang, Ding; Yang, Xiong; Li, Hongliang

    2017-01-01

    This book covers the most recent developments in adaptive dynamic programming (ADP). The text begins with a thorough background review of ADP making sure that readers are sufficiently familiar with the fundamentals. In the core of the book, the authors address first discrete- and then continuous-time systems. Coverage of discrete-time systems starts with a more general form of value iteration to demonstrate its convergence, optimality, and stability with complete and thorough theoretical analysis. A more realistic form of value iteration is studied where value function approximations are assumed to have finite errors. Adaptive Dynamic Programming also details another avenue of the ADP approach: policy iteration. Both basic and generalized forms of policy-iteration-based ADP are studied with complete and thorough theoretical analysis in terms of convergence, optimality, stability, and error bounds. Among continuous-time systems, the control of affine and nonaffine nonlinear systems is studied using the ADP app...
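
    As background to the value-iteration analysis described above, a minimal sketch on a toy two-state, two-action MDP (the transition probabilities and rewards are made up for illustration; the book treats far more general discrete- and continuous-time systems):

```python
import numpy as np

# Background sketch: value iteration on a toy 2-state, 2-action MDP.
# P[a, s, s'] are transition probabilities, R[s, a] rewards (made-up numbers).
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.1, 0.9], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V = np.zeros(2)
for _ in range(500):
    # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] V[s']
    V = (R + gamma * (P @ V).T).max(axis=1)
```

    Because the backup is a gamma-contraction, the iterates converge geometrically to the fixed point of the Bellman optimality equation; the book's contribution is analyzing such iterations when the value function is only approximated, with finite errors.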

  10. GLOBEC (Global Ocean Ecosystems Dynamics): Northwest Atlantic program

    Science.gov (United States)

    1991-01-01

    The specific objective of the meeting was to plan an experiment in the Northwestern Atlantic to study the marine ecosystem and its role, together with that of climate and physical dynamics, in determining fisheries recruitment. The underlying focus of the GLOBEC initiative is to understand the marine ecosystem as it relates to marine living resources and to understand how fluctuations in these resources are driven by climate change and exploitation. In this sense the goal is a solid scientific program to provide basic information concerning major fisheries stocks and the environment that sustains them. The plan is to attempt to reach this understanding through a multidisciplinary program that brings to bear new techniques as disparate as numerical fluid-dynamic models of ocean circulation, molecular biology, and modern acoustic imaging. The effort will also make use of the massive historical data sets on fisheries and the state of the climate in a coordinated manner.

  10. Feedforward Nonlinear Control Using Neural Gas Network

    OpenAIRE

    Machón-González, Iván; López-García, Hilario

    2017-01-01

    Nonlinear systems control is a main issue in control theory. Many developed applications suffer from a mathematical foundation not as general as the theory of linear systems. This paper proposes a control strategy of nonlinear systems with unknown dynamics by means of a set of local linear models obtained by a supervised neural gas network. The proposed approach takes advantage of the neural gas feature by which the algorithm yields a very robust clustering procedure. The direct model of the ...
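
    The rank-based update that makes neural gas robust can be sketched in a few lines. The 1-D data, unit count, and annealing schedules below are invented for illustration; the paper's supervised variant additionally fits a local linear model per unit, which is omitted here.

```python
import math
import random

def neural_gas(data, n_units=2, epochs=60, eps0=0.5, lam0=1.0, seed=0):
    """Unsupervised neural gas on 1-D data: every unit is updated on every
    sample, weighted by exp(-rank / lambda) of its distance rank. Updating
    all units by rank (not just the winner) is what makes the clustering
    robust to poor initialization."""
    rng = random.Random(seed)
    units = [rng.choice(data) for _ in range(n_units)]
    for epoch in range(epochs):
        frac = epoch / max(epochs - 1, 1)
        eps = eps0 * (0.01 / eps0) ** frac   # anneal learning rate 0.5 -> 0.01
        lam = lam0 * (0.1 / lam0) ** frac    # anneal neighbourhood 1.0 -> 0.1
        for x in data:
            ranked = sorted(range(n_units), key=lambda k: abs(units[k] - x))
            for rank, k in enumerate(ranked):
                units[k] += eps * math.exp(-rank / lam) * (x - units[k])
    return sorted(units)
```

    Even if both units are initialized inside the same cluster, the rank-1 unit is still pulled toward far-away samples early on, so the units separate, one per cluster.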

  11. SORN: a self-organizing recurrent neural network

    Directory of Open Access Journals (Sweden)

    Andreea Lazar

    2009-10-01

    Full Text Available Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.
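
    The interplay of the three local plasticity rules can be sketched with binary threshold neurons. The network size, rates, and learning constants below are invented, and the real SORN separates excitatory and inhibitory populations and has structured input, which this toy version omits.

```python
import random

def run_sorn(n=20, steps=3000, target_rate=0.1, seed=1):
    """Toy recurrent binary network combining three local plasticity rules:
    STDP on recurrent weights, synaptic normalization of incoming weights,
    and intrinsic plasticity (IP) driving every neuron toward target_rate.
    Returns the mean firing rate over the last 1000 steps."""
    rng = random.Random(seed)
    W = [[rng.random() if i != j else 0.0 for j in range(n)] for i in range(n)]
    T = [rng.random() for _ in range(n)]                 # firing thresholds
    x = [1 if rng.random() < target_rate else 0 for _ in range(n)]
    eta_stdp, eta_ip = 0.001, 0.01
    fired = 0
    for step in range(steps):
        drive = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        x_new = [1 if drive[i] > T[i] else 0 for i in range(n)]
        for i in range(n):                               # STDP: pre-before-post
            for j in range(n):
                if i != j:
                    dw = eta_stdp * (x_new[i] * x[j] - x[i] * x_new[j])
                    W[i][j] = max(0.0, W[i][j] + dw)
        for i in range(n):                               # synaptic normalization
            s = sum(W[i])
            if s > 0:
                W[i] = [w / s for w in W[i]]
        for i in range(n):                               # IP: homeostatic thresholds
            T[i] += eta_ip * (x_new[i] - target_rate)
        x = x_new
        if step >= steps - 1000:
            fired += sum(x)
    return fired / (1000 * n)
```

    The homeostatic threshold rule is what keeps the dynamics in a "healthy regime": a neuron that fires too often raises its threshold, a silent one lowers it, so the long-run rate settles near the target.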

  12. Fault diagnosis system of electromagnetic valve using neural network filter

    International Nuclear Information System (INIS)

    Hayashi, Shoji; Odaka, Tomohiro; Kuroiwa, Jousuke; Ogura, Hisakazu

    2008-01-01

    This paper is concerned with the detection of gas leakage faults in electromagnetic valves using a neural network filter. In modern plants, the ability to detect and identify gas leakage faults is becoming increasingly important. The main difficulty in detecting gas leakage faults from sound signals lies in the fact that practical plants are usually very noisy. To overcome this difficulty, a neural network filter is used to eliminate background noise and raise the signal-to-noise ratio of the sound signal. The background noise is modeled as a dynamic system, and an accurate mathematical model of this dynamic system can be established using the neural network filter. The prediction error between predicted and measured values constitutes the output of the filter. If the prediction error is zero, there is no leakage; if the prediction error exceeds a certain value, there is a leakage fault. Through application to practical pneumatic systems, it is verified that the neural network filter is effective in gas leakage detection. (author)
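
    The detection principle, model the background and flag a large prediction error, can be sketched with an adaptive linear (LMS) predictor standing in for the paper's neural network filter; the signal, filter order, and step size are invented for the example.

```python
def prediction_errors(signal, order=4, mu=0.05):
    """Predict each sample from the previous `order` samples with an LMS
    adaptive filter and return the absolute prediction errors. Predictable
    background noise yields small errors; an added leak signal does not,
    so a persistently large error flags the fault."""
    w = [0.0] * order
    errors = []
    for t in range(order, len(signal)):
        past = signal[t - order:t]
        pred = sum(wi * xi for wi, xi in zip(w, past))
        e = signal[t] - pred
        errors.append(abs(e))
        for k in range(order):
            w[k] += mu * e * past[k]          # LMS weight update
    return errors
```

    On a periodic background the error decays toward zero as the predictor locks on; when an unpredictable leak component is superimposed, the error jumps and stays above any threshold fitted to the quiet regime.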

  13. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles.

    Science.gov (United States)

    Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X

    2017-01-01

    Real-time path planning for autonomous underwater vehicles (AUVs) is a very difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem because of its distinct advantages: no learning process is needed and it is easy to implement. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including a heavy computational burden when the environment is very large and a repeated-path problem when obstacles are bigger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In the proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors; the BINN then moves with the AUV, reducing the computational cost. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid big obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computational efficiency of the neural activities. Finally, experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for AUVs efficiently.
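
    The core BINN idea, an activity landscape that peaks at the target, is repelled by obstacles, and is descended greedily, can be illustrated with a simplified 2-D discrete stand-in; the grid, decay factor, and iteration count below are invented, and the paper's shunting dynamics, virtual target, and target attractor are omitted.

```python
def plan_path(grid, start, target, decay=0.9, iters=200):
    """grid: 2-D list, 1 = obstacle, 0 = free. Relax a neural-activity
    landscape (target clamped to peak activity, obstacles clamped negative,
    free cells inherit decayed activity from their best neighbour), then
    hill-climb from start. Returns the list of visited (row, col) cells."""
    R, C = len(grid), len(grid[0])

    def nbrs(r, c):
        return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc) and 0 <= r + dr < R and 0 <= c + dc < C]

    act = [[0.0] * C for _ in range(R)]
    act[target[0]][target[1]] = 1.0
    for _ in range(iters):                       # relax the activity landscape
        new = [[0.0] * C for _ in range(R)]
        for r in range(R):
            for c in range(C):
                if grid[r][c]:
                    new[r][c] = -1.0             # obstacles repel
                else:
                    new[r][c] = max(0.0, decay * max(act[nr][nc]
                                                     for nr, nc in nbrs(r, c)))
        new[target[0]][target[1]] = 1.0          # target clamped to the peak
        act = new
    path, cur = [start], start                   # greedy hill-climb
    while cur != target and len(path) < R * C:
        cur = max(nbrs(*cur), key=lambda p: act[p[0]][p[1]])
        path.append(cur)
    return path
```

    Because activity strictly increases toward the target and obstacle cells stay negative, the greedy climb reaches the target without entering obstacles; the paper's improvement amounts to keeping such a landscape only within the sensor range around the vehicle.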

  14. A dynamically reconfigurable logic cell: from artificial neural networks to quantum-dot cellular automata

    Science.gov (United States)

    Naqvi, Syed Rameez; Akram, Tallha; Iqbal, Saba; Haider, Sajjad Ali; Kamran, Muhammad; Muhammad, Nazeer

    2018-02-01

    Considering the lack of optimization support for Quantum-dot Cellular Automata, we propose a dynamically reconfigurable logic cell capable of implementing various logic operations by means of artificial neural networks. The cell can be reconfigured to any 2-input combinational logic gate by altering the strength of connections, called weights and biases. We demonstrate how these cells may appositely be organized to perform multi-bit arithmetic and logic operations. The proposed work is important in that it gives a standard implementation of an 8-bit arithmetic and logic unit for quantum-dot cellular automata with minimal area and latency overhead. We also compare the proposed design with a few existing arithmetic and logic units, and show that it is more area efficient than any equivalent available in the literature. Furthermore, the design is adaptable to 16, 32, and 64 bit architectures.
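
    The reconfiguration principle, the same cell realizes different gates when loaded with different weights and biases, can be sketched with a single threshold neuron; the weight/bias triples below are illustrative choices, not values from the paper.

```python
def gate(w1, w2, bias):
    """One threshold neuron: output 1 iff w1*a + w2*b + bias > 0."""
    return lambda a, b: 1 if w1 * a + w2 * b + bias > 0 else 0

# Hypothetical weight/bias configurations; "reprogramming" the cell just
# means loading a different triple.
CONFIG = {
    "AND":  (1.0, 1.0, -1.5),
    "OR":   (1.0, 1.0, -0.5),
    "NAND": (-1.0, -1.0, 1.5),
    "NOR":  (-1.0, -1.0, 0.5),
}
```

    A single neuron covers only the linearly separable 2-input gates; XOR requires composing cells, e.g. `XOR(a, b) = AND(OR(a, b), NAND(a, b))`, which is the same kind of composition the paper uses to build its multi-bit ALU.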

  15. Decentralized adaptive neural control for high-order interconnected stochastic nonlinear time-delay systems with unknown system dynamics.

    Science.gov (United States)

    Si, Wenjie; Dong, Xunde; Yang, Feifei

    2018-03-01

    This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome over-parameterization, and neural networks are employed to cope with the difficulties raised by completely unknown system dynamics and stochastic disturbances. Then, an appropriate Lyapunov-Krasovskii functional and the property of hyperbolic tangent functions are used to deal with the unknown unmatched time-delay interactions of high-order large-scale systems for the first time. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed that decreases the number of learning parameters. The actual controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and that the tracking error converges to a small neighborhood of zero. A simulation example is used to further show the validity of the design method. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Effect of the CTL proliferation program on virus dynamics

    DEFF Research Database (Denmark)

    Wodarz, Dominik; Thomsen, Allan Randrup

    2005-01-01

    Experiments have established that CTLs do not require continuous antigenic stimulation for expansion. Instead, responses develop by a process of programmed proliferation which involves approximately 7-10 antigen-independent cell divisions, the generation of effector cells and the differentiation...... virus loads and thus acute symptoms. The reason is that the programmed divisions are independent from antigenic stimulation, and an increase in virus load does not speed up the rate of CTL expansion. We hypothesize that the 7-10 programmed divisions observed in vivo represent an optimal solution...... into memory cells. The effect of this program on the infection dynamics and the advantages gained by the program have, however, not been explored yet. We investigate this with mathematical models. We find that more programmed divisions can make virus clearance more efficient because CTL division continues...
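
    The key mechanism, CTL expansion proceeds for a fixed number of divisions regardless of the current antigen level, can be sketched with a two-variable model; all rates and thresholds below are invented for illustration and are not from the paper.

```python
def simulate(n_divisions, days=30.0, dt=0.01, r=1.5, k=5.0,
             z0=0.001, div_time=0.8):
    """Virus v grows at rate r and is killed at rate k*z by CTLs z, whose
    expansion is *programmed*: z doubles every div_time days for exactly
    n_divisions divisions, independent of the virus load. Returns the final
    virus load and the peak virus load (simple Euler integration)."""
    v, t = 0.01, 0.0
    peak = v
    while t < days:
        z = z0 * 2.0 ** min(t / div_time, n_divisions)   # programmed expansion
        v += dt * (r * v - k * z * v)                    # Euler step
        peak = max(peak, v)
        t += dt
        if v < 1e-12:
            break                                        # virus cleared
    return v, peak
```

    The sketch reproduces the abstract's qualitative point: enough programmed divisions let the effector pool outgrow viral replication and clear the infection, but because the expansion speed is antigen-independent, extra divisions do not suppress the early rise of virus load.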

  17. Hydrogen Detection With a Gas Sensor Array – Processing and Recognition of Dynamic Responses Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Gwiżdż Patryk

    2015-03-01

    Full Text Available An array consisting of four commercial gas sensors with target specifications for hydrocarbons, ammonia, alcohol, and explosive gases has been constructed and tested. The sensors in the array operate in the dynamic mode upon temperature modulation from 350°C to 500°C. Changes in the sensor operating temperature lead to distinct resistance responses affected by the gas type, its concentration and the humidity level. The measurements are performed at various hydrogen (17-3000 ppm, methane (167-3000 ppm and propane (167-3000 ppm concentrations at relative humidity levels of 0-75%RH. The measured dynamic response signals are further processed with the Discrete Fourier Transform. Absolute values of the dc component and the first five harmonics of each sensor are analysed by a feed-forward back-propagation neural network. The ultimate aim of this research is to achieve reliable hydrogen detection despite interference from humidity and residual gases.
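
    The feature-extraction step described above, take the magnitude of the DC component and the first five harmonics of one modulation period, can be sketched with a plain DFT; the synthetic test signal is invented, and in the paper these six numbers per sensor feed the neural network classifier.

```python
import math

def harmonic_features(samples, n_harmonics=5):
    """Return |DC| plus the magnitudes of the first n_harmonics DFT bins of
    one temperature-modulation period. Values are scaled by 1/N, so the DC
    term equals the mean and each harmonic equals half its amplitude."""
    N = len(samples)
    feats = []
    for k in range(n_harmonics + 1):
        re = sum(x * math.cos(2 * math.pi * k * n / N)
                 for n, x in enumerate(samples))
        im = -sum(x * math.sin(2 * math.pi * k * n / N)
                  for n, x in enumerate(samples))
        feats.append(math.hypot(re, im) / N)
    return feats
```

    Four sensors times six magnitudes give a fixed-length 24-dimensional feature vector per measurement cycle, a convenient input size for a feed-forward network.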

  18. Automated Flight Routing Using Stochastic Dynamic Programming

    Science.gov (United States)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

    Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, en route convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified severity level. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight times, and both spend about 1% of travel time crossing congested en route sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
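
    The expected-cost trade-off can be sketched with backward dynamic programming over stage waypoints; the network, flight times, blockage probabilities, and penalty below are invented for illustration and are far simpler than the paper's demand and weather models.

```python
def best_route(stages, penalty=60.0):
    """stages: list of dicts; stages[i][u] = {v: (minutes, p_wx)} gives the
    legs from node u at stage i to node v at stage i+1. A leg's expected
    cost is minutes + p_wx * penalty (expected weather-incursion minutes).
    Assumes the route starts at a node named 'start' in stage 0.
    Returns (expected cost, route) via backward DP over cost-to-go."""
    # terminal nodes (targets of the last stage) have zero cost-to-go
    V = {v: 0.0 for u in stages[-1] for v in stages[-1][u]}
    choice = [{} for _ in stages]
    for i in range(len(stages) - 1, -1, -1):
        newV = {}
        for u, edges in stages[i].items():
            best = min(edges,
                       key=lambda v: edges[v][0] + edges[v][1] * penalty + V[v])
            minutes, p_wx = edges[best]
            newV[u] = minutes + p_wx * penalty + V[best]
            choice[i][u] = best
        V = newV
    route, u = ["start"], "start"        # forward pass reads out the route
    for i in range(len(stages)):
        u = choice[i][u]
        route.append(u)
    return V["start"], route
```

    With these numbers the DP prefers a slightly longer leg over a shorter one that is blocked half the time, which is the qualitative behavior the study reports for stochastic versus deterministic reroutes.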

  19. Noradrenergic modulation of neural erotic stimulus perception.

    Science.gov (United States)

    Graf, Heiko; Wiegers, Maike; Metzger, Coraline Danielle; Walter, Martin; Grön, Georg; Abler, Birgit

    2017-09-01

    We recently investigated neuromodulatory effects of the noradrenergic agent reboxetine and the dopamine receptor affine amisulpride in healthy subjects on dynamic erotic stimulus processing. Whereas amisulpride left sexual functions and neural activations unimpaired, we observed detrimental activations under reboxetine within the caudate nucleus corresponding to motivational components of sexual behavior. However, broadly impaired subjective sexual functioning under reboxetine suggested effects on further neural components. We now investigated the same sample under these two agents with static erotic picture stimulation as an alternative stimulus presentation mode to potentially observe further neural treatment effects of reboxetine. 19 healthy males were investigated under reboxetine, amisulpride and placebo for 7 days each within a double-blind cross-over design. During fMRI, static erotic pictures were presented with preceding anticipation periods. Subjective sexual functions were assessed by a self-reported questionnaire. Neural activations were attenuated within the caudate nucleus, putamen, ventral striatum, the pregenual and anterior midcingulate cortex and in the orbitofrontal cortex under reboxetine. Subjectively diminished sexual arousal under reboxetine was correlated with attenuated neural reactivity within the posterior insula. Again, amisulpride left neural activations along with subjective sexual functioning unimpaired. Neither reboxetine nor amisulpride altered differential neural activations during anticipation of erotic stimuli. Our results verified detrimental effects of noradrenergic agents on neural motivational as well as emotional and autonomic components of sexual behavior. Considering the overlap of neural network alterations with those evoked by serotonergic agents, our results suggest similar neuromodulatory effects of serotonergic and noradrenergic agents on common neural pathways relevant for sexual behavior. Copyright © 2017 Elsevier B.V.

  20. Markdown Optimization via Approximate Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Coşgun

    2013-02-01

    Full Text Available We consider the markdown optimization problem faced by a leading apparel retail chain. Because of substitution among products, the markdown policy of one product affects the sales of other products. Therefore, markdown policies for product groups having a significant cross-price elasticity among each other should be jointly determined. Since the state space of the problem is very large, we use Approximate Dynamic Programming. Finally, we provide insights into how each product's price affects the markdown policy.
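
    On a toy single-product instance the markdown problem can still be solved by exact dynamic programming; the prices, demand curve, horizon, and inventory below are invented. It is the cross-product coupling of many such products, multiplying these state spaces together, that makes the exact approach intractable and motivates the paper's approximate DP.

```python
from functools import lru_cache

def markdown_revenue(prices, demand, T, inventory):
    """Exact DP for one product: maximize revenue over T periods.
    prices: allowed price levels in descending order (markdowns only, so the
    price index can never decrease). demand[price] = units sold per period
    at that price (deterministic toy demand). State: (period, inventory
    left, current price index)."""
    @lru_cache(maxsize=None)
    def V(t, inv, pi):
        if t == T or inv == 0:
            return 0.0
        best = 0.0
        for nxt in range(pi, len(prices)):        # price can only go down
            sold = min(demand[prices[nxt]], inv)
            best = max(best, prices[nxt] * sold + V(t + 1, inv - sold, nxt))
        return best

    return V(0, inventory, 0)
```

    Here holding the high price for two periods and marking down in the last one beats both never marking down and marking down immediately, the kind of timing decision the value function trades off at every state.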