WorldWideScience

Sample records for neural activity model

  1. Associative memory model with spontaneous neural activity

    Science.gov (United States)

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2012-05-01

    We propose a novel associative memory model wherein the neural activity without an input (i.e., spontaneous activity) is modified by an input to generate a target response that is memorized for recall upon the same input. Suitable design of synaptic connections enables the model to memorize input/output (I/O) mappings equaling 70% of the total number of neurons, where the evoked activity distinguishes a target pattern from others. Spontaneous neural activity without an input shows chaotic dynamics but keeps some similarity with evoked activities, as reported in recent experimental studies.

  2. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral nervous systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement to build a fully functional circuit. However, a number of recent experimental results show that neuronal electrical activity also plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are difficult to study experimentally, owing to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole, to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to devise new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks, and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.
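
    The closed-loop principle described above (activity shapes outgrowth, and outgrowth shapes activity) can be illustrated with a toy simulation. The sketch below is not the authors' model: the rate dynamics, the homeostatic set point, and all constants are assumptions made for illustration only.

        # Minimal sketch of activity-dependent neurite outgrowth (illustrative only;
        # the growth rule, rate model, and all constants are assumptions, not the
        # parameters of the cited work).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 30
        pos = rng.uniform(0.0, 1.0, size=(n, 2))   # fixed cell positions in a unit square
        radius = np.full(n, 0.02)                  # initial neurite-field radii
        target_rate = 0.5                          # homeostatic set point (assumption)

        def firing_rates(weights, steps=50):
            """Crude rate dynamics: r <- sigmoid(W r - bias), iterated toward a fixed point."""
            r = np.zeros(len(weights))
            for _ in range(steps):
                r = 1.0 / (1.0 + np.exp(-(weights @ r - 1.0)))
            return r

        for epoch in range(200):
            # Two cells connect when their circular neurite fields overlap.
            dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
            weights = ((radius[:, None] + radius[None, :]) > dist).astype(float)
            np.fill_diagonal(weights, 0.0)

            r = firing_rates(weights)
            # Closed loop: low activity -> outgrowth, high activity -> retraction.
            radius = np.clip(radius + 0.001 * (target_rate - r), 0.0, 0.5)

        print("mean rate:", firing_rates(weights).mean(), "mean degree:", weights.sum(1).mean())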

  3. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
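
    A minimal rate-based sketch of a ring network with spike-frequency adaptation is given below. It is a generic ring model with subtractive adaptation, not the analytically solved model of the paper; the coupling kernel, gain, and time constants are illustrative assumptions.

        # Illustrative ring network with spike-frequency adaptation (SFA).
        # Connectivity is a cosine kernel over preferred directions; SFA is a slow
        # subtractive variable driven by each unit's own rate. Constants are assumptions.
        import numpy as np

        n = 180
        theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
        J = (0.2 + 1.5 * np.cos(theta[:, None] - theta[None, :])) / n   # ring coupling

        def simulate(stim_center, g_sfa, steps=4000, dt=0.1, tau_r=1.0, tau_a=50.0):
            r = np.zeros(n)          # firing rates
            a = np.zeros(n)          # adaptation variable
            stim = 0.5 * np.exp(np.cos(theta - stim_center) - 1.0)      # bump-shaped input
            for _ in range(steps):
                drive = J @ r + stim - g_sfa * a
                r += dt / tau_r * (-r + np.maximum(drive, 0.0))          # rectified rate dynamics
                a += dt / tau_a * (-a + r)                               # slow adaptation
            return r

        for g in (0.0, 0.5):
            r = simulate(stim_center=0.0, g_sfa=g)
            print(f"g_sfa={g}: peak at {np.degrees(theta[np.argmax(r)]):.1f} deg, peak rate {r.max():.3f}")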

  4. Modeling and preparation of activated carbon for methane storage II. Neural network modeling and experimental studies of the activated carbon preparation

    International Nuclear Information System (INIS)

    Namvar-Asl, Mahnaz; Soltanieh, Mohammad; Rashidi, Alimorad

    2008-01-01

    This study describes the preparation of activated carbon (AC) for methane storage. Because a model was needed to correlate the effective preparation parameters with the characteristic parameters of the activated carbon, such a model was developed using neural networks. In a previous study [Namvar-Asl M, Soltanieh M, Rashidi A, Irandoukht A. Modeling and preparation of activated carbon for methane storage: (I) modeling of activated carbon characteristics with neural networks and response surface method. Proceedings of CESEP07, Krakow, Poland; 2007.], the model was designed with MATLAB toolboxes, providing the best response for correlating the characteristic parameters with the methane uptake of the activated carbon. Using this model, the characteristics of the activated carbon were determined for a target methane uptake. After the characteristics were determined, the model presented in this work guided the selection of the effective AC preparation parameters. According to the modeling results, some samples were prepared and their methane storage capacity was measured. The results were compared with the target methane uptake (a specified amount of methane storage). Among the designed models, one yielded a methane storage capacity of 180 v/v. It was finally found that neural network modeling for determining the efficient AC preparation parameters was economically feasible with respect to the measured methane storage capacity. This study could be useful for the development of Adsorbed Natural Gas (ANG) technology.
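
    A small feedforward regression of the kind described (preparation parameters mapped to characteristic parameters) might look as follows. The descriptors, the data, and the network size are placeholders for illustration and are not the MATLAB model of the original study.

        # Sketch of a neural-network correlation between AC preparation parameters and
        # characteristic parameters (illustrative; synthetic data and a fabricated relation,
        # not the study's MATLAB model).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        # Hypothetical preparation parameters: activation temperature (C), time (h), agent/char ratio.
        X = rng.uniform([600, 1, 1], [900, 4, 5], size=(200, 3))
        # Hypothetical characteristics: surface area and micropore volume (fabricated relation).
        y = np.column_stack([
            1.5 * X[:, 0] + 120 * X[:, 1] + 90 * X[:, 2] + rng.normal(0, 30, 200),
            0.0005 * X[:, 0] + 0.05 * X[:, 2] + rng.normal(0, 0.02, 200),
        ])

        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        model.fit(X, y)
        print(model.predict([[800, 2, 3]]))   # predicted [surface area, micropore volume]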

  5. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities is discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning for models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  6. Active Neural Localization

    OpenAIRE

    Chaplot, Devendra Singh; Parisotto, Emilio; Salakhutdinov, Ruslan

    2018-01-01

    Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose "Active Neural Localizer", a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of tradition...

  7. Predicting Neural Activity Patterns Associated with Sentences Using a Neurobiologically Motivated Model of Semantic Representation.

    Science.gov (United States)

    Anderson, Andrew James; Binder, Jeffrey R; Fernandino, Leonardo; Humphries, Colin J; Conant, Lisa L; Aguilar, Mario; Wang, Xixi; Doko, Donias; Raizada, Rajeev D S

    2017-09-01

    We introduce an approach that predicts neural representations of word meanings contained in sentences then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple-regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Modeling long-term human activeness using recurrent neural networks for biometric data.

    Science.gov (United States)

    Kim, Zae Myung; Oh, Hyungrai; Kim, Han-Gyu; Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin

    2017-05-18

    With the invention of fitness trackers, it has become possible to continuously monitor a user's biometric data such as heart rate, number of footsteps taken, and amount of calories burned. This paper names the time series of these three types of biometric data the user's "activeness", and investigates the feasibility of modeling and predicting the long-term activeness of the user. The dataset used in this study consisted of several months of biometric time-series data gathered by seven users independently. Four recurrent neural network (RNN) architectures, as well as a deep neural network and a simple regression model, were proposed to investigate the performance on predicting the activeness of the user under various length-related hyper-parameter settings. In addition, the learned model was tested to predict the time period when the user's activeness falls below a certain threshold. A preliminary experimental result shows that each type of activeness data exhibited a short-term autocorrelation; and among the three types of data, the consumed calories and the number of footsteps were positively correlated, while the heart rate data showed almost no correlation with either of them. It is probably due to this characteristic of the dataset that although the RNN models produced the best results on modeling the user's activeness, the difference was marginal; and other baseline models, especially the linear regression model, performed quite admirably as well. Further experimental results show that it is feasible to predict a user's future activeness with precision; for example, a trained RNN model could predict, with a precision of 84%, when the user would be less active within the next hour given the latest 15 min of his activeness data. This paper defines and investigates the notion of a user's "activeness", and shows that forecasting the long-term activeness of the user is indeed possible. Such information can be utilized by a health-related application to proactively
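
    Because the paper notes that a plain linear regression baseline performed almost as well as the RNNs, a minimal sketch of such an autoregressive baseline on the three activeness channels is given below; the synthetic data, lag length, and low-activity threshold are assumptions.

        # Minimal autoregressive baseline for multichannel "activeness" forecasting
        # (heart rate, steps, calories). Data are synthetic; lags and threshold are assumptions.
        import numpy as np

        rng = np.random.default_rng(2)
        T, lags = 2000, 15                       # e.g. 15 one-minute bins of history
        t = np.arange(T)
        series = np.column_stack([
            70 + 10 * np.sin(2 * np.pi * t / 300) + rng.normal(0, 2, T),   # heart rate
            np.maximum(rng.normal(50, 30, T), 0),                          # steps per bin
            np.maximum(rng.normal(3, 1.5, T), 0),                          # kcal per bin
        ])

        # Lagged design matrix: predict the next bin from the previous `lags` bins.
        X = np.stack([series[i:i + lags].ravel() for i in range(T - lags)])
        Y = series[lags:]
        n_train = int(0.8 * len(X))
        W, *_ = np.linalg.lstsq(X[:n_train], Y[:n_train], rcond=None)
        pred = X[n_train:] @ W

        # Event of interest: "less active in the next bin" = predicted steps below a threshold.
        threshold = 20.0
        hit = pred[:, 1] < threshold
        precision = ((Y[n_train:, 1] < threshold) & hit).sum() / max(hit.sum(), 1)
        print(f"precision of low-activity prediction: {precision:.2f}")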

  9. The effect of the neural activity on topological properties of growing neural networks.

    Science.gov (United States)

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns observed during network development; the creation and deletion of connections continue throughout the whole life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrated the growth process of a neural network from disconnected neurons to a fully connected network. To quantify the influence of the network's activity on its topological properties, we compared it with a randomly grown network that does not depend on network activity. Analysis of the network's connection structure with methods from random graph theory shows that growth in neural networks results in the formation of the well-known "small-world" network structure.

  10. Model for a flexible motor memory based on a self-active recurrent neural network.

    Science.gov (United States)

    Boström, Kim Joris; Wagner, Heiko; Prieske, Markus; de Lussanet, Marc

    2013-10-01

    Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model focused on the aspects of a flexible motor memory for storing elementary movement patterns in the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it is capable of continuously modulating the stored patterns. The modulation consists of an approximately linear interpolation and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", which amounts to the permanent shifting of computational load from higher- to lower-level neural structures and might help to explain why humans are able to execute learned skills in a fluent and flexible manner without the need for attention to the details of the movement. Copyright © 2013 Elsevier B.V. All rights reserved.
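
    The reservoir-computing architecture referred to here can be sketched generically as an echo state network with a ridge-regression readout. The sketch below is not the paper's thousand-neuron self-active network; all sizes, constants, and the toy command/target signals are assumptions.

        # Generic echo state network (reservoir computing) sketch with ridge readout.
        # Sizes, spectral radius, and the toy target pattern are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(3)
        n_res, n_in = 300, 2
        W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.normal(0, 1, (n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

        def run_reservoir(inputs):
            x = np.zeros(n_res)
            states = []
            for u in inputs:
                x = np.tanh(W @ x + W_in @ u)
                states.append(x.copy())
            return np.array(states)

        # Toy task: a static command channel selects the amplitude of a periodic
        # "muscle activation" pattern that the readout must reproduce.
        t = np.arange(2000) * 0.01
        command = np.where(t < 10, 0.5, 1.0)
        inputs = np.column_stack([command, np.ones_like(t)])    # command + bias channel
        target = (command * np.sin(2 * np.pi * t))[:, None]

        S = run_reservoir(inputs)
        ridge = 1e-6
        W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ target)
        print("training MSE:", float(np.mean((S @ W_out - target) ** 2)))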

  11. Model Integrating Fuzzy Argument with Neural Network Enhancing the Performance of Active Queue Management

    Directory of Open Access Journals (Sweden)

    Nguyen Kim Quoc

    2015-08-01

    The control of bottlenecks by active queue management mechanisms at network nodes is essential. In recent years, some researchers have used fuzzy reasoning to improve active queue management mechanisms and enhance network performance. However, approaches based on a fuzzy controller depend heavily on experts, and their parameters cannot be updated in response to changes in the network, so the effectiveness of this mechanism is limited. We therefore propose a model combining a fuzzy controller with a neural network (FNN) to overcome the limitations above. Training the neural network finds optimal parameters for the fuzzy controller so that it adapts well to changes in the network. This improves the operational efficiency of active queue management mechanisms at network nodes.

  12. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line of combining findings in computational neuroscience and systems biology, and their synergistic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. A theory of how active behavior stabilises neural activity: Neural gain modulation by closed-loop environmental feedback.

    Directory of Open Access Journals (Sweden)

    Christopher L Buckley

    2018-01-01

    During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, for perceptual processes. This closed-loop feedback is mediated by brainwide neural circuits but how the presence of feedback signals impacts on the dynamics and function of neurons is not well understood. Here we present a simple theory suggesting that closed-loop feedback between the brain/body/environment can modulate neural gain and, consequently, change endogenous neural fluctuations and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking we show that negative feedback mediated by whisking vibrissa can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this suppression provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. Moreover, this mechanism suggests a novel signal detection mechanism that selectively accentuates active, rather than passive, whisker touch signals. This mechanism is consistent with a predictive coding strategy that is sensitive to the consequences of motor actions rather than the difference between the predicted and actual sensory input. We further support the theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour in a virtual swim simulator. We show, as predicted by this theory, that the degree to which each cell contributes in linking sensory and motor signals well explains how much its neural fluctuations are suppressed by closed-loop optomotor behaviour. More generally we argue that our results

  14. A theory of how active behavior stabilises neural activity: Neural gain modulation by closed-loop environmental feedback.

    Science.gov (United States)

    Buckley, Christopher L; Toyoizumi, Taro

    2018-01-01

    During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, for perceptual processes. This closed-loop feedback is mediated by brainwide neural circuits but how the presence of feedback signals impacts on the dynamics and function of neurons is not well understood. Here we present a simple theory suggesting that closed-loop feedback between the brain/body/environment can modulate neural gain and, consequently, change endogenous neural fluctuations and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking we show that negative feedback mediated by whisking vibrissa can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this suppression provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. Moreover, this mechanism suggests a novel signal detection mechanism that selectively accentuates active, rather than passive, whisker touch signals. This mechanism is consistent with a predictive coding strategy that is sensitive to the consequences of motor actions rather than the difference between the predicted and actual sensory input. We further support the theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour in a virtual swim simulator. We show, as predicted by this theory, that the degree to which each cell contributes in linking sensory and motor signals well explains how much its neural fluctuations are suppressed by closed-loop optomotor behaviour. More generally we argue that our results demonstrate the dependence

  15. The Effects of GABAergic Polarity Changes on Episodic Neural Network Activity in Developing Neural Systems

    Directory of Open Access Journals (Sweden)

    Wilfredo Blanco

    2017-09-01

    Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these and other changes has on the network behavior is hard to know from experimental studies, since they all happen in parallel. One advantage of a computational approach is the ability to study developmental changes in isolation. Here, we examine the effects of the GABAergic synapse polarity change on the spontaneous activity of both a mean-field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the "intermediate neurons." We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral changes result from priming of these neurons. The study illustrates how even arguably the simplest of developmental changes that occur in neural systems can present non-intuitive behaviors. It also makes predictions about neural network behavioral changes

  16. Emergence of gamma motor activity in an artificial neural network model of the corticospinal system.

    Science.gov (United States)

    Grandjean, Bernard; Maier, Marc A

    2017-02-01

    Muscle spindle discharge during active movement is a function of mechanical and neural parameters. Muscle length changes (and their derivatives) represent its primary mechanical, fusimotor drive its neural component. However, neither the action nor the function of fusimotor drive, and in particular of γ-drive, has been clearly established, since γ-motor activity during voluntary, non-locomotor movements remains largely unknown. Here, using a computational approach, we explored whether γ-drive emerges in an artificial neural network model of the corticospinal system linked to a biomechanical antagonist wrist simulator. The wrist simulator included length-sensitive and γ-drive-dependent type Ia and type II muscle spindle activity. Network activity and connectivity were derived by a gradient descent algorithm to generate reciprocal, known target α-motor unit activity during wrist flexion-extension (F/E) movements. Two tasks were simulated: an alternating F/E task and a slow F/E tracking task. Emergence of γ-motor activity in the alternating F/E network was a function of α-motor unit drive: if muscle afferent (together with supraspinal) input was required for driving α-motor units, then γ-drive emerged in the form of α-γ coactivation, as predicted by empirical studies. In the slow F/E tracking network, γ-drive emerged in the form of α-γ dissociation and provided critical, bidirectional muscle afferent activity to the cortical network, containing known bidirectional target units. The model thus demonstrates the complementary aspects of spindle output and hence γ-drive: i) muscle spindle activity as a driving force of α-motor unit activity, and ii) afferent activity providing continuous sensory information, both of which crucially depend on γ-drive.

  17. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Directory of Open Access Journals (Sweden)

    Tomoki Kurikawa

    Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using a simple Hebbian-type learning rule, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in

  18. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
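
    For reference, a stimulus-dependent maximum entropy distribution over binary spike/silence words σ = (σ1, …, σN) is conventionally written as below, with stimulus-dependent single-cell fields h_i(s) (obtained from linear-nonlinear terms) and static pairwise couplings J_ij; the exact parameterization used in the paper may differ in detail.

        % Stimulus-dependent maximum entropy (SDME) form: fields depend on the stimulus s,
        % pairwise couplings are static.
        P(\sigma \mid s) = \frac{1}{Z(s)}
          \exp\!\Big( \sum_{i} h_i(s)\,\sigma_i + \sum_{i<j} J_{ij}\,\sigma_i \sigma_j \Big),
        \qquad
        Z(s) = \sum_{\sigma} \exp\!\Big( \sum_{i} h_i(s)\,\sigma_i + \sum_{i<j} J_{ij}\,\sigma_i \sigma_j \Big)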

  19. Spike Neural Models Part II: Abstract Neural Models

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2018-02-01

    Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNNs), though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use less computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model, which is not biologically realistic but does quickly and easily integrate input to produce spikes. Izhikevich's model is based on the Hodgkin-Huxley model but simplified such that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in an SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness, need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating an SNN.
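
    Both models discussed in the tutorial fit in a few lines of code. The sketch below uses the standard published parameter set for a regular-spiking Izhikevich neuron, while the LIF constants and the input currents are generic assumptions.

        # Izhikevich (regular-spiking parameters) and leaky integrate-and-fire neurons,
        # each driven by a constant input current. LIF constants are generic assumptions.
        import numpy as np

        def izhikevich(I=10.0, T=1000, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
            v, u, spikes = -65.0, -65.0 * b, []
            for t in range(T):
                # Two coupled equations (Izhikevich 2003), forward-Euler integrated.
                v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
                u += dt * a * (b * v - u)
                if v >= 30.0:                 # spike cutoff and reset
                    spikes.append(t)
                    v, u = c, u + d
            return spikes

        def lif(I=1.6, T=1000, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
            v, spikes = v_rest, []
            for t in range(T):
                v += dt / tau * (-(v - v_rest) + I)   # leaky integration of the input
                if v >= v_th:
                    spikes.append(t)
                    v = v_reset
            return spikes

        print("Izhikevich spikes:", len(izhikevich()), "LIF spikes:", len(lif()))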

  20. Models of neural dynamics in brain information processing - the developments of 'the decade'

    International Nuclear Information System (INIS)

    Borisyuk, G N; Borisyuk, R M; Kazanovich, Yakov B; Ivanitskii, Genrikh R

    2002-01-01

    Neural network models are discussed that have been developed during the last decade with the purpose of reproducing spatio-temporal patterns of neural activity in different brain structures. The main goal of the modeling was to test hypotheses of synchronization, temporal and phase relations in brain information processing. The models being considered are those of temporal structure of spike sequences, of neural activity dynamics, and oscillatory models of attention and feature integration. (reviews of topical problems)

  1. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
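
    The two-phase procedure (competitive learning places the RBF centres, then a linear readout is fitted) can be sketched as follows on synthetic descriptor data; the learning-rate schedule, number of centres, and kernel width are assumptions, not the settings reported in the paper.

        # Two-phase QSAR sketch: simple competitive learning (SCL) places RBF centres,
        # then the RBF layer's output weights are fit by least squares. Synthetic data.
        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 5))                       # molecular descriptors (synthetic)
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 300)   # "activity"

        # Phase 1: simple competitive learning -- the winning centre moves toward each sample.
        k = 20
        centres = X[rng.choice(len(X), k, replace=False)].copy()
        for epoch in range(50):
            lr = 0.1 * (1 - epoch / 50)
            for x in X[rng.permutation(len(X))]:
                winner = np.argmin(np.linalg.norm(centres - x, axis=1))
                centres[winner] += lr * (x - centres[winner])

        # Phase 2: RBF design matrix and linear output weights.
        sigma = np.mean(np.linalg.norm(centres[:, None] - centres[None, :], axis=-1)) / 2
        Phi = np.exp(-np.linalg.norm(X[:, None] - centres[None, :], axis=-1) ** 2 / (2 * sigma ** 2))
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        print("training RMSE:", float(np.sqrt(np.mean((Phi @ w - y) ** 2))))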

  2. Altered behavior and neural activity in conspecific cagemates co-housed with mouse models of brain disorders.

    Science.gov (United States)

    Yang, Hyunwoo; Jung, Seungmoon; Seo, Jinsoo; Khalid, Arshi; Yoo, Jung-Seok; Park, Jihyun; Kim, Soyun; Moon, Jangsup; Lee, Soon-Tae; Jung, Keun-Hwa; Chu, Kon; Lee, Sang Kun; Jeon, Daejong

    2016-09-01

    The psychosocial environment is one of the major contributors of social stress. Family members or caregivers who consistently communicate with individuals with brain disorders are considered at risk for physical and mental health deterioration, possibly leading to mental disorders. However, the underlying neural mechanisms of this phenomenon remain poorly understood. To address this, we developed a social stress paradigm in which a mouse model of epilepsy or depression was housed long-term (>4 weeks) with normal conspecifics. We characterized the behavioral phenotypes and electrophysiologically investigated the neural activity of conspecific cagemate mice. The cagemates exhibited deficits in behavioral tasks assessing anxiety, locomotion, learning/memory, and depression-like behavior. Furthermore, they showed severe social impairment in social behavioral tasks involving social interaction or aggression. Strikingly, behavioral dysfunction remained in the cagemates 4 weeks following co-housing cessation with the mouse models. In an electrophysiological study, the cagemates showed an increased number of spikes in medial prefrontal cortex (mPFC) neurons. Our results demonstrate that conspecifics co-housed with mouse models of brain disorders develop chronic behavioral dysfunctions, and suggest a possible association between abnormal mPFC neural activity and their behavioral pathogenesis. These findings contribute to the understanding of the psychosocial and psychiatric symptoms frequently present in families or caregivers of patients with brain disorders. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    Science.gov (United States)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
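
    The connection exploited in the paper can be illustrated in its simplest (purely linear) form: an ARMA/ARX-style relation recovered by least squares on lagged inputs and outputs. A network with a polynomial activation function would add products and powers of the same lagged terms. The simulated system and its coefficients below are assumptions for illustration.

        # Recover ARMA-style parameters from simulated input/output data by least squares
        # on lagged terms (illustrative; true coefficients and orders are assumptions).
        import numpy as np

        rng = np.random.default_rng(5)
        T = 5000
        u = rng.normal(size=T)                       # input signal
        y = np.zeros(T)
        for t in range(2, T):                        # true system: y[t] = 0.6 y[t-1] - 0.2 y[t-2] + 0.5 u[t-1]
            y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1] + 0.05 * rng.normal()

        # Design matrix of lagged outputs and inputs (an ARX structure).
        X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
        target = y[2:]
        theta, *_ = np.linalg.lstsq(X, target, rcond=None)
        print("estimated [a1, a2, b1]:", np.round(theta, 3))   # should be close to [0.6, -0.2, 0.5]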

  4. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    Science.gov (United States)

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity on data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
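
    The central construction (a joint kernel over spike and LFP observations formed as the product of a kernel on each) can be sketched as follows; the Gaussian kernels, their widths, and the synthetic trial data are assumptions and are not necessarily the kernels used in the paper.

        # Tensor-product-kernel sketch: a joint kernel over (spike, LFP) pairs is the product
        # of a kernel on binned spike counts and a kernel on LFP snippets. Synthetic data.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 200
        spikes = rng.poisson(3.0, size=(n, 10))         # 10 bins of spike counts per trial
        lfp = rng.normal(size=(n, 50))                  # 50 LFP samples per trial
        y = spikes.sum(1) * 0.1 + lfp[:, :5].mean(1) + rng.normal(0, 0.1, n)   # toy target

        def gauss_gram(A, width):
            d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * width ** 2))

        K = gauss_gram(spikes.astype(float), 4.0) * gauss_gram(lfp, 5.0)   # tensor-product kernel
        alpha = np.linalg.solve(K + 1e-3 * np.eye(n), y)                   # kernel ridge regression
        pred = K @ alpha
        print("fit correlation:", float(np.corrcoef(pred, y)[0, 1]))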

  5. Analysis of Oscillatory Neural Activity in Series Network Models of Parkinson's Disease During Deep Brain Stimulation.

    Science.gov (United States)

    Davidson, Clare M; de Paor, Annraoi M; Cagnan, Hayriye; Lowery, Madeleine M

    2016-01-01

    Parkinson's disease is a progressive, neurodegenerative disorder, characterized by hallmark motor symptoms. It is associated with pathological, oscillatory neural activity in the basal ganglia. Deep brain stimulation (DBS) is often successfully used to treat medically refractory Parkinson's disease. However, the selection of stimulation parameters is based on qualitative assessment of the patient, which can result in a lengthy tuning period and a suboptimal choice of parameters. This study explores fourth-order, control theory-based models of oscillatory activity in the basal ganglia. Describing function analysis is applied to examine possible mechanisms for the generation of oscillations in interacting nuclei and to investigate the suppression of oscillations with high-frequency stimulation. The theoretical results for the suppression of the oscillatory activity obtained using both the fourth-order model, and a previously described second-order model, are optimized to fit clinically recorded local field potential data obtained from Parkinsonian patients with implanted DBS. Close agreement between the power of oscillations recorded for a range of stimulation amplitudes is observed (R^2 = 0.69-0.99). The results suggest that the behavior of the system and the suppression of pathological neural oscillations with DBS is well described by the macroscopic models presented. The results also demonstrate that in this instance, a second-order model is sufficient to model the clinical data, without the need for added complexity. Describing the system behavior with computationally efficient models could aid in the identification of optimal stimulation parameters for patients in a clinical environment.

  6. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  7. Parametric models to relate spike train and LFP dynamics with neural information processing.

    Science.gov (United States)

    Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan

    2012-01-01

    Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial

  8. A neural model for temporal order judgments and their active recalibration: a common mechanism for space and time?

    Directory of Open Access Journals (Sweden)

    Mingbo eCai

    2012-11-01

    When observers experience a constant delay between their motor actions and sensory feedback, their perception of the temporal order between actions and sensations adapts (Stetson et al., 2006a). We present here a novel neural model that can explain temporal order judgments (TOJs) and their recalibration. Our model employs three ubiquitous features of neural systems: (1) information pooling, (2) opponent processing, and (3) synaptic scaling. Specifically, the model proposes that different populations of neurons encode different delays between motor-sensory events, the outputs of these populations feed into rivaling neural populations (encoding "before" and "after"), and the activity difference between these populations determines the perceptual judgment. As a consequence of synaptic scaling of input weights, motor acts which are consistently followed by delayed sensory feedback will cause the network to recalibrate its point of subjective simultaneity. The structure of our model raises the possibility that recalibration of TOJs is a temporal analogue to the motion aftereffect. In other words, identical neural mechanisms may be used to make perceptual determinations about both space and time. Our model captures behavioral recalibration results for different numbers of adapting trials and different adapting delays. In line with predictions of the model, we additionally demonstrate that temporal recalibration can last through time, in analogy to storage of the motion aftereffect.

  9. A dynamic neural field model of temporal order judgments.

    Science.gov (United States)

    Hecht, Lauren N; Spencer, John P; Vecera, Shaun P

    2015-12-01

    Temporal ordering of events is biased, or influenced, by perceptual organization (figure-ground organization) and by spatial attention. For example, within a region assigned figural status or at an attended location, onset events are processed earlier (Lester, Hecht, & Vecera, 2009; Shore, Spence, & Klein, 2001), and offset events are processed for longer durations (Hecht & Vecera, 2011; Rolke, Ulrich, & Bausenhart, 2006). Here, we present an extension of a dynamic field model of change detection (Johnson, Spencer, Luck, & Schöner, 2009; Johnson, Spencer, & Schöner, 2009) that accounts for both the onset and offset performance for figural and attended regions. The model posits that neural populations processing the figure are more active, resulting in a peak of activation that quickly builds toward a detection threshold when the onset of a target is presented. This same enhanced activation for some neural populations is maintained when a present target is removed, creating delays in the perception of the target's offset. We discuss the broader implications of this model, including insights regarding how neural activation can be generated in response to the disappearance of information. (c) 2015 APA, all rights reserved.
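
    The kind of dynamics behind such a model can be sketched with a one-dimensional Amari-style neural field in which a raised resting level stands in for the figural/attended enhancement. The interaction kernel, thresholds, and the "boost" mechanism below are illustrative assumptions, not the fitted model of the paper.

        # One-dimensional dynamic neural field sketch: local excitation, broader inhibition,
        # and a localized "target onset" input. Constants are assumptions.
        import numpy as np

        n, dt, tau = 200, 1.0, 10.0
        x = np.arange(n)
        dist = np.abs(x[:, None] - x[None, :])
        kernel = 1.2 * np.exp(-dist ** 2 / (2 * 5 ** 2)) - 0.6 * np.exp(-dist ** 2 / (2 * 15 ** 2))

        def f(u):                      # sigmoidal firing-rate function
            return 1.0 / (1.0 + np.exp(-4.0 * u))

        def detection_time(boost, steps=300):
            """`boost` raises the resting level for the figural/attended region (assumption)."""
            h = -2.0 + boost * np.exp(-(x - 100) ** 2 / (2 * 10 ** 2))   # resting level
            u = h.copy()
            onset_time = None
            for t in range(steps):
                stim = 3.0 * np.exp(-(x - 100) ** 2 / (2 * 5 ** 2)) if t >= 50 else 0.0
                u += dt / tau * (-u + h + kernel @ f(u) / n * 10 + stim)
                if onset_time is None and f(u).max() > 0.9:
                    onset_time = t       # step at which the field "detects" the onset
            return onset_time

        print("detection time without boost:", detection_time(0.0))
        print("detection time with figural boost:", detection_time(1.0))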

  10. Computational modeling of neural activities for statistical inference

    CERN Document Server

    Kolossa, Antonio

    2016-01-01

    This authored monograph supplies empirical evidence for the Bayesian brain hypothesis by modeling event-related potentials (ERP) of the human electroencephalogram (EEG) during successive trials in cognitive tasks. The employed observer models are useful to compute probability distributions over observable events and hidden states, depending on which are present in the respective tasks. Bayesian model selection is then used to choose the model which best explains the ERP amplitude fluctuations. Thus, this book constitutes a decisive step towards a better understanding of the neural coding and computing of probabilities following Bayesian rules. The target audience primarily comprises research experts in the field of computational neurosciences, but the book may also be beneficial for graduate students who want to specialize in this field.

  11. Lukasiewicz-Topos Models of Neural Networks, Cell Genome and Interactome Nonlinear Dynamic Models

    CERN Document Server

    Baianu, I C

    2004-01-01

    A categorical and Lukasiewicz-Topos framework for Lukasiewicz Algebraic Logic models of nonlinear dynamics in complex functional systems such as neural networks, genomes and cell interactomes is proposed. Lukasiewicz Algebraic Logic models of genetic networks and signaling pathways in cells are formulated in terms of nonlinear dynamic systems with n-state components that allow for the generalization of previous logical models of both genetic activities and neural networks. An algebraic formulation of variable 'next-state functions' is extended to a Lukasiewicz Topos with an n-valued Lukasiewicz Algebraic Logic subobject classifier description that represents non-random and nonlinear network activities as well as their transformations in developmental processes and carcinogenesis.

  12. Understanding the Implications of Neural Population Activity on Behavior

    Science.gov (United States)

    Briguglio, John

    Learning how neural activity in the brain leads to the behavior we exhibit is one of the fundamental questions in Neuroscience. In this dissertation, several lines of work are presented that use principles of neural coding to understand behavior. In one line of work, we formulate the efficient coding hypothesis in a non-traditional manner in order to test human perceptual sensitivity to complex visual textures. We find a striking agreement between how variable a particular texture signal is and how sensitive humans are to its presence. This reveals that the efficient coding hypothesis is still a guiding principle for neural organization beyond the sensory periphery, and that the nature of cortical constraints differs from the peripheral counterpart. In another line of work, we relate frequency discrimination acuity to neural responses from auditory cortex in mice. It has been previously observed that optogenetic manipulation of auditory cortex, in addition to changing neural responses, evokes changes in behavioral frequency discrimination. We are able to account for changes in frequency discrimination acuity on an individual basis by examining the Fisher information from the neural population with and without optogenetic manipulation. In the third line of work, we address the question of what a neural population should encode given that its inputs are responses from another group of neurons. Drawing inspiration from techniques in machine learning, we train Deep Belief Networks on fake retinal data and show the emergence of Gabor-like filters, reminiscent of responses in primary visual cortex. In the last line of work, we model the state of a cortical excitatory-inhibitory network during complex adaptive stimuli. Using a rate model with Wilson-Cowan dynamics, we demonstrate that simple non-linearities in the signal transferred from inhibitory to excitatory neurons can account for real neural recordings taken from auditory cortex. This work establishes and tests
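
    For the last line of work, the underlying excitatory-inhibitory rate model (Wilson-Cowan dynamics) can be written in a few lines; the parameter values below are generic textbook-style choices, not those fitted to the auditory-cortex recordings.

        # Wilson-Cowan excitatory-inhibitory rate model (generic parameters, not the fitted ones).
        import numpy as np

        def sigmoid(x, a=1.2, theta=2.8):
            return 1.0 / (1.0 + np.exp(-a * (x - theta)))

        def wilson_cowan(P=1.25, Q=0.0, T=500, dt=0.05,
                         w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0, tau_e=1.0, tau_i=1.0):
            E, I = 0.1, 0.05
            traj = []
            for _ in range(int(T / dt)):
                dE = (-E + sigmoid(w_ee * E - w_ei * I + P)) / tau_e   # excitatory population
                dI = (-I + sigmoid(w_ie * E - w_ii * I + Q)) / tau_i   # inhibitory population
                E, I = E + dt * dE, I + dt * dI
                traj.append((E, I))
            return np.array(traj)

        traj = wilson_cowan()
        print("E activity range over the run:", traj[:, 0].min().round(3), "-", traj[:, 0].max().round(3))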

  13. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  14. Neural Activity Patterns in the Human Brain Reflect Tactile Stickiness Perception

    Science.gov (United States)

    Kim, Junsuk; Yeon, Jiwon; Ryu, Jaekyun; Park, Jang-Yeon; Chung, Soon-Cheol; Kim, Sung-Phil

    2017-01-01

    Our previous human fMRI study found brain activations correlated with tactile stickiness perception using the uni-variate general linear model (GLM) (Yeon et al., 2017). Here, we conducted an in-depth investigation on neural correlates of sticky sensations by employing a multivoxel pattern analysis (MVPA) on the same dataset. In particular, we statistically compared multi-variate neural activities in response to the three groups of sticky stimuli: A supra-threshold group including a set of sticky stimuli that evoked vivid sticky perception; an infra-threshold group including another set of sticky stimuli that barely evoked sticky perception; and a sham group including acrylic stimuli with no physically sticky property. Searchlight MVPAs were performed to search for local activity patterns carrying neural information of stickiness perception. Similar to the uni-variate GLM results, significant multi-variate neural activity patterns were identified in postcentral gyrus, subcortical (basal ganglia and thalamus), and insula areas (insula and adjacent areas). Moreover, MVPAs revealed that activity patterns in posterior parietal cortex discriminated the perceptual intensities of stickiness, which was not present in the uni-variate analysis. Next, we applied a principal component analysis (PCA) to the voxel response patterns within identified clusters so as to find low-dimensional neural representations of stickiness intensities. Follow-up clustering analyses clearly showed separate neural grouping configurations between the Supra- and Infra-threshold groups. Interestingly, this neural categorization was in line with the perceptual grouping pattern obtained from the psychophysical data. Our findings thus suggest that different stickiness intensities would elicit distinct neural activity patterns in the human brain and may provide a neural basis for the perception and categorization of tactile stickiness. PMID:28936171

  15. A neural population model incorporating dopaminergic neurotransmission during complex voluntary behaviors.

    Directory of Open Access Journals (Sweden)

    Stefan Fürtinger

    2014-11-01

    Assessing brain activity during complex voluntary motor behaviors that require the recruitment of multiple neural sites is a field of active research. Our current knowledge is primarily based on human brain imaging studies that have clear limitations in terms of temporal and spatial resolution. We developed a physiologically informed non-linear multi-compartment stochastic neural model to simulate functional brain activity coupled with neurotransmitter release during complex voluntary behavior, such as speech production. Due to its state-dependent modulation of neural firing, dopaminergic neurotransmission plays a key role in the organization of functional brain circuits controlling speech and language and thus has been incorporated in our neural population model. A rigorous mathematical proof establishing existence and uniqueness of solutions to the proposed model as well as a computationally efficient strategy to numerically approximate these solutions are presented. Simulated brain activity during the resting state and sentence production was analyzed using functional network connectivity, and graph theoretical techniques were employed to highlight differences between the two conditions. We demonstrate that our model successfully reproduces characteristic changes seen in empirical data between the resting state and speech production, and dopaminergic neurotransmission evokes pronounced changes in modeled functional connectivity by acting on the underlying biological stochastic neural model. Specifically, model and data networks in both speech and rest conditions share task-specific network features: both the simulated and empirical functional connectivity networks show an increase in nodal influence and segregation in speech over the resting state. These commonalities confirm that dopamine is a key neuromodulator of the functional connectome of speech control. Based on reproducible characteristic aspects of empirical data, we suggest a number

  16. Models of neural dynamics in brain information processing - the developments of 'the decade'

    Energy Technology Data Exchange (ETDEWEB)

    Borisyuk, G N; Borisyuk, R M; Kazanovich, Yakov B [Institute of Mathematical Problems of Biology, Russian Academy of Sciences, Pushchino, Moscow region (Russian Federation); Ivanitskii, Genrikh R [Institute for Theoretical and Experimental Biophysics, Russian Academy of Sciences, Pushchino, Moscow region (Russian Federation)

    2002-10-31

    Neural network models are discussed that have been developed during the last decade with the purpose of reproducing spatio-temporal patterns of neural activity in different brain structures. The main goal of the modeling was to test hypotheses of synchronization, temporal and phase relations in brain information processing. The models being considered are those of temporal structure of spike sequences, of neural activity dynamics, and oscillatory models of attention and feature integration. (reviews of topical problems)

  17. Neural activation in stress-related exhaustion

    DEFF Research Database (Denmark)

    Gavelin, Hanna Malmberg; Neely, Anna Stigsdotter; Andersson, Micael

    2017-01-01

    The primary purpose of this study was to investigate the association between burnout and neural activation during working memory processing in patients with stress-related exhaustion. Additionally, we investigated the neural effects of cognitive training as part of stress rehabilitation. Fifty...... association between burnout level and working memory performance was found; however, our findings indicate that frontostriatal neural responses related to working memory were modulated by burnout severity. We suggest that patients with high levels of burnout need to recruit additional cognitive resources...... to uphold task performance. Following cognitive training, increased neural activation was observed during 3-back in working memory-related regions, including the striatum; however, the low sample size limits any firm conclusions....

  18. A neural network model for credit risk evaluation.

    Science.gov (United States)

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.

  19. A model for integrating elementary neural functions into delayed-response behavior.

    Directory of Open Access Journals (Sweden)

    Thomas Gisiger

    2006-04-01

    Full Text Available It is well established that various cortical regions can implement a wide array of neural processes, yet the mechanisms which integrate these processes into behavior-producing, brain-scale activity remain elusive. We propose that an important role in this respect might be played by executive structures controlling the traffic of information between the cortical regions involved. To illustrate this hypothesis, we present a neural network model comprising a set of interconnected structures harboring stimulus-related activity (visual representation, working memory, and planning), and a group of executive units with task-related activity patterns that manage the information flowing between them. The resulting dynamics allows the network to perform the dual task of either retaining an image during a delay (delayed-matching to sample task), or recalling from this image another one that has been associated with it during training (delayed-pair association task). The model reproduces behavioral and electrophysiological data gathered on the inferior temporal and prefrontal cortices of primates performing these same tasks. It also makes predictions on how neural activity coding for the recall of the image associated with the sample emerges and becomes prospective during the training phase. The network dynamics proves to be very stable against perturbations, and it exhibits signs of scale-invariant organization and cooperativity. The present network represents a possible neural implementation for active, top-down, prospective memory retrieval in primates. The model suggests that brain activity leading to performance of cognitive tasks might be organized in modular fashion, simple neural functions becoming integrated into more complex behavior by executive structures harbored in prefrontal cortex and/or basal ganglia.

  20. A model for integrating elementary neural functions into delayed-response behavior.

    Science.gov (United States)

    Gisiger, Thomas; Kerszberg, Michel

    2006-04-01

    It is well established that various cortical regions can implement a wide array of neural processes, yet the mechanisms which integrate these processes into behavior-producing, brain-scale activity remain elusive. We propose that an important role in this respect might be played by executive structures controlling the traffic of information between the cortical regions involved. To illustrate this hypothesis, we present a neural network model comprising a set of interconnected structures harboring stimulus-related activity (visual representation, working memory, and planning), and a group of executive units with task-related activity patterns that manage the information flowing between them. The resulting dynamics allows the network to perform the dual task of either retaining an image during a delay (delayed-matching to sample task), or recalling from this image another one that has been associated with it during training (delayed-pair association task). The model reproduces behavioral and electrophysiological data gathered on the inferior temporal and prefrontal cortices of primates performing these same tasks. It also makes predictions on how neural activity coding for the recall of the image associated with the sample emerges and becomes prospective during the training phase. The network dynamics proves to be very stable against perturbations, and it exhibits signs of scale-invariant organization and cooperativity. The present network represents a possible neural implementation for active, top-down, prospective memory retrieval in primates. The model suggests that brain activity leading to performance of cognitive tasks might be organized in modular fashion, simple neural functions becoming integrated into more complex behavior by executive structures harbored in prefrontal cortex and/or basal ganglia.

  1. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie

    2014-01-01

    Full Text Available A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA), with two-tone and broadband signals as input, show that the proposed behavioral model can reconstruct the CDPA system accurately and capture its memory effect well. Compared with the Volterra-Laguerre (VL), Chebyshev neural network (CNN), and basic Elman neural network (BENN) models, the proposed model performs better.
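
    The core idea of the IENN summarized above, replacing sigmoid hidden units with Chebyshev orthogonal basis functions, can be sketched as follows. The layer sizes, the random weights, and the tanh squashing used to keep inputs in the Chebyshev domain [-1, 1] are illustrative assumptions, not details of the published model.

```python
# Sketch: a hidden layer whose units are expanded with Chebyshev polynomials
# T0..T3 instead of being passed through sigmoids. Weights are random here.
import numpy as np

def chebyshev_basis(x, order=4):
    """Stack T0(x)..T_{order-1}(x) on the last axis (recurrence T_n = 2x*T_{n-1} - T_{n-2})."""
    T = [np.ones_like(x), x]
    while len(T) < order:
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:order], axis=-1)

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 5)[:, None]          # toy input samples
W_in = rng.normal(size=(1, 3))              # input -> 3 hidden units
pre = np.tanh(x @ W_in)                     # keep pre-activations in [-1, 1]
hidden = chebyshev_basis(pre).reshape(x.shape[0], -1)
W_out = rng.normal(size=(hidden.shape[1], 1))
print((hidden @ W_out).ravel())             # network output for the toy inputs
```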

  2. Identifying Emotions on the Basis of Neural Activation.

    Science.gov (United States)

    Kassam, Karim S; Markey, Amanda R; Cherkassky, Vladimir L; Loewenstein, George; Just, Marcel Adam

    2013-01-01

    We attempt to determine the discriminability and organization of neural activation corresponding to the experience of specific emotions. Method actors were asked to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame) while in an fMRI scanner. Using a Gaussian Naïve Bayes pooled variance classifier, we demonstrate the ability to identify specific emotions experienced by an individual at well over chance accuracy on the basis of: 1) neural activation of the same individual in other trials, 2) neural activation of other individuals who experienced similar trials, and 3) neural activation of the same individual to a qualitatively different type of emotion induction. Factor analysis identified valence, arousal, sociality, and lust as dimensions underlying the activation patterns. These results suggest a structure for neural representations of emotion and inform theories of emotional processing.
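
    A compact, hedged sketch of the classification setup described: train a Gaussian Naive Bayes classifier on activation patterns from some trials and test it on held-out trials. Note that scikit-learn's GaussianNB estimates per-class variances rather than the pooled variance used in the study, and the data below are simulated, so this only illustrates the decoding logic.

```python
# Toy emotion decoding with Gaussian Naive Bayes on simulated patterns.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials_per_emotion, n_features, n_emotions = 20, 100, 9
y = np.repeat(np.arange(n_emotions), n_trials_per_emotion)
X = rng.normal(size=(y.size, n_features)) + 0.3 * y[:, None]   # weakly separable toy data

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print("mean decoding accuracy:", scores.mean(), "(chance = 1/9)")
```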

  3. Identifying Emotions on the Basis of Neural Activation.

    Directory of Open Access Journals (Sweden)

    Karim S Kassam

    Full Text Available We attempt to determine the discriminability and organization of neural activation corresponding to the experience of specific emotions. Method actors were asked to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame) while in an fMRI scanner. Using a Gaussian Naïve Bayes pooled variance classifier, we demonstrate the ability to identify specific emotions experienced by an individual at well over chance accuracy on the basis of: 1) neural activation of the same individual in other trials, 2) neural activation of other individuals who experienced similar trials, and 3) neural activation of the same individual to a qualitatively different type of emotion induction. Factor analysis identified valence, arousal, sociality, and lust as dimensions underlying the activation patterns. These results suggest a structure for neural representations of emotion and inform theories of emotional processing.

  4. Race modulates neural activity during imitation

    Science.gov (United States)

    Losin, Elizabeth A. Reynolds; Iacoboni, Marco; Martin, Alia; Cross, Katy A.; Dapretto, Mirella

    2014-01-01

    Imitation plays a central role in the acquisition of culture. People preferentially imitate others who are self-similar, prestigious or successful. Because race can indicate a person's self-similarity or status, race influences whom people imitate. Prior studies of the neural underpinnings of imitation have not considered the effects of race. Here we measured neural activity with fMRI while European American participants imitated meaningless gestures performed by actors of their own race and of two racial outgroups, African American and Chinese American. Participants also passively observed the actions of these actors and their portraits. Frontal, parietal and occipital areas were differentially activated while participants imitated actors of different races. More activity was present when imitating African Americans than the other racial groups, perhaps reflecting participants' reported lack of experience with and negative attitudes towards this group, or the group's lower perceived social status. This pattern of neural activity was not found when participants passively observed the gestures of the actors or simply looked at their faces. Instead, during face-viewing neural responses were overall greater for own-race individuals, consistent with prior race perception studies not involving imitation. Our findings represent a first step in elucidating neural mechanisms involved in cultural learning, a process that influences almost every aspect of our lives but has thus far received little neuroscientific study. PMID:22062193

  5. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons

    Science.gov (United States)

    Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Portes, Jacob P.; Timerman, Dmitriy

    2016-01-01

    Brain hemodynamics serve as a proxy for neural activity in a range of noninvasive neuroimaging techniques including functional magnetic resonance imaging (fMRI). In resting-state fMRI, hemodynamic fluctuations have been found to exhibit patterns of bilateral synchrony, with correlated regions inferred to have functional connectivity. However, the relationship between resting-state hemodynamics and underlying neural activity has not been well established, making the neural underpinnings of functional connectivity networks unclear. In this study, neural activity and hemodynamics were recorded simultaneously over the bilateral cortex of awake and anesthetized Thy1-GCaMP mice using wide-field optical mapping. Neural activity was visualized via selective expression of the calcium-sensitive fluorophore GCaMP in layer 2/3 and 5 excitatory neurons. Characteristic patterns of resting-state hemodynamics were accompanied by more rapidly changing bilateral patterns of resting-state neural activity. Spatiotemporal hemodynamics could be modeled by convolving this neural activity with hemodynamic response functions derived through both deconvolution and gamma-variate fitting. Simultaneous imaging and electrophysiology confirmed that Thy1-GCaMP signals are well-predicted by multiunit activity. Neurovascular coupling between resting-state neural activity and hemodynamics was robust and fast in awake animals, whereas coupling in urethane-anesthetized animals was slower and in some cases included lower-frequency fluctuations. These results indicate that resting-state hemodynamics in the awake and anesthetized brain are coupled to underlying patterns of excitatory neural activity. The patterns of bilaterally-symmetric spontaneous neural activity revealed by wide-field Thy1-GCaMP imaging may depict the neural foundation of functional connectivity networks detected in resting-state fMRI. PMID:27974609
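
    The convolution model mentioned above (neural activity convolved with a hemodynamic response function) can be illustrated with a short sketch. The gamma-variate parameters, sampling rate, and event times below are arbitrary placeholders rather than the fitted values from the study.

```python
# Predict a hemodynamic time course by convolving a toy neural signal with a
# gamma-variate hemodynamic response function (HRF).
import numpy as np
from scipy.stats import gamma

dt = 0.1                                   # seconds per sample
t = np.arange(0, 10, dt)
hrf = gamma.pdf(t, a=3.0, scale=0.6)       # gamma-variate HRF (toy parameters)
hrf /= hrf.sum()

neural = np.zeros(600)
neural[[50, 200, 350]] = 1.0               # toy neural events
predicted_hemo = np.convolve(neural, hrf)[:neural.size]
print("peak predicted response:", predicted_hemo.max())
```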

  6. A cardiac electrical activity model based on a cellular automata system in comparison with neural network model.

    Science.gov (United States)

    Khan, Muhammad Sadiq Ali; Yousuf, Sidrah

    2016-03-01

    Cardiac electrical activity is distributed across the three dimensions of cardiac tissue (the myocardium) and evolves over time. Indicators of heart disease can appear at any time of day, so heart rate, conduction, and the electrical activity of each cardiac cycle should be monitored non-invasively to distinguish regular ("action potential") from irregular ("arrhythmia") rhythms. Many heart conditions can be examined with automaton-based approaches such as cellular automata. This paper describes the different states of cardiac rhythm using a cellular automaton, compares it with a neural network model, and simulates the excitation waves that trigger contraction of the atrial muscle. The formulated model, "States of Automaton Proposed Model for CEA (Cardiac Electrical Activity)", represents three conduction states of cardiac tissue: (i) resting (relaxed and excitable); (ii) ARP (absolutely refractory: excited but unable to excite neighboring cells); and (iii) RRP (relatively refractory: excited and able to excite neighboring cells). The results indicate efficient modeling of the action potential during the pumping of blood in the cardiac cycle with a low computational burden.
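
    The flavor of such a state-based conduction model can be conveyed with a generic excitable-medium cellular automaton. The three states and the update rule below are a simplification for illustration; the paper's specific ARP/RRP transition rules and lattice are not reproduced here.

```python
# Generic excitable-medium cellular automaton: resting cells become excited
# when a 4-neighbor is excited; excited cells become refractory; refractory
# cells return to rest. A simplified stand-in for the model described above.
import numpy as np

RESTING, EXCITED, REFRACTORY = 0, 1, 2
grid = np.zeros((20, 20), dtype=int)
grid[10, 10] = EXCITED                      # initial excitation site

def step(g):
    new = g.copy()
    new[g == EXCITED] = REFRACTORY
    new[g == REFRACTORY] = RESTING
    excited = (g == EXCITED)
    has_excited_neighbor = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
                            np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
    new[(g == RESTING) & has_excited_neighbor] = EXCITED
    return new

for _ in range(8):
    grid = step(grid)
print("excited cells after 8 steps:", int((grid == EXCITED).sum()))
```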

  7. ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.

    Science.gov (United States)

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining neural dynamics of cellular neural network with ChainMail mechanism. The proposed method formulates the problem of elastic deformation into cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the soft tissues' nonlinear deformation and typical mechanical behaviors. The proposed method not only improves ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics to simulate soft tissue deformation.

  8. Modeling fMRI signals can provide insights into neural processing in the cerebral cortex.

    Science.gov (United States)

    Vanni, Simo; Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo

    2015-08-01

    Every stimulus or task activates multiple areas in the mammalian cortex. These distributed activations can be measured with functional magnetic resonance imaging (fMRI), which has the best spatial resolution among the noninvasive brain imaging methods. Unfortunately, the relationship between the fMRI activations and distributed cortical processing has remained unclear, both because the coupling between neural and fMRI activations has remained poorly understood and because fMRI voxels are too large to directly sense the local neural events. To get an idea of the local processing given the macroscopic data, we need models to simulate the neural activity and to provide output that can be compared with fMRI data. Such models can describe neural mechanisms as mathematical functions between input and output in a specific system, with little correspondence to physiological mechanisms. Alternatively, models can be biomimetic, including biological details with straightforward correspondence to experimental data. After careful balancing between complexity, computational efficiency, and realism, a biomimetic simulation should be able to provide insight into how biological structures or functions contribute to actual data processing as well as to promote theory-driven neuroscience experiments. This review analyzes the requirements for validating system-level computational models with fMRI. In particular, we study mesoscopic biomimetic models, which include a limited set of details from real-life networks and enable system-level simulations of neural mass action. In addition, we discuss how recent developments in neurophysiology and biophysics may significantly advance the modelling of fMRI signals. Copyright © 2015 the American Physiological Society.

  9. On the origin of reproducible sequential activity in neural circuits

    Science.gov (United States)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
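
    The generalized Lotka-Volterra rate equations referred to above can be integrated directly. The three-neuron connectivity matrix below is an arbitrary asymmetric example chosen to produce switching between states; the noise term and the high-dimensional setting of the paper are omitted.

```python
# Integrate da_i/dt = a_i * (1 - sum_j rho_ij * a_j) for a small inhibitory
# network with asymmetric connections (toy parameters, no noise).
import numpy as np
from scipy.integrate import solve_ivp

rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])            # asymmetric inhibition

def lotka_volterra(t, a):
    return a * (1.0 - rho @ a)

sol = solve_ivp(lotka_volterra, (0.0, 200.0), [0.6, 0.3, 0.1], max_step=0.1)
print("final rates:", sol.y[:, -1])
```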

  10. Neural activity in the hippocampus during conflict resolution.

    Science.gov (United States)

    Sakimoto, Yuya; Okada, Kana; Hattori, Minoru; Takeda, Kozue; Sakata, Shogo

    2013-01-15

    This study examined configural association theory and conflict resolution models in relation to hippocampal neural activity during positive patterning tasks. According to configural association theory, the hippocampus is important for responses to compound stimuli in positive patterning tasks. In contrast, according to the conflict resolution model, the hippocampus is important for responses to single stimuli in positive patterning tasks. We hypothesized that if configural association theory is applicable, and not the conflict resolution model, the hippocampal theta power should be increased when compound stimuli are presented. If, on the other hand, the conflict resolution model is applicable, but not configural association theory, then the hippocampal theta power should be increased when single stimuli are presented. If both models are valid and applicable in the positive patterning task, we predict that the hippocampal theta power should be increased by presentation of both compound and single stimuli during the positive patterning task. To examine our hypotheses, we measured hippocampal theta power in rats during a positive patterning task. The results showed that hippocampal theta power increased during the presentation of a single stimulus, but did not increase during the presentation of a compound stimulus. This finding suggests that the conflict resolution model is more applicable than the configural association theory for describing neural activity during positive patterning tasks. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Deep Recurrent Neural Networks for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Abdulmajid Murad

    2017-11-01

    Full Text Available Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  12. Deep Recurrent Neural Networks for Human Activity Recognition.

    Science.gov (United States)

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
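
    A minimal sketch of one of the architectures described, a unidirectional LSTM over fixed-length sensor windows, is shown below. The channel count, window length, hidden size, and class count are placeholders, and no training loop is included; this is not the authors' implementation.

```python
# Unidirectional LSTM classifier for windowed accelerometer data (PyTorch).
import torch
import torch.nn as nn

class LSTMActivityClassifier(nn.Module):
    def __init__(self, n_channels=3, hidden_size=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])      # classify from the last time step

model = LSTMActivityClassifier()
dummy_windows = torch.randn(8, 128, 3)     # 8 windows, 128 samples, 3 axes
print(model(dummy_windows).shape)          # torch.Size([8, 6])
```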

  13. Large-scale multielectrode recording and stimulation of neural activity

    International Nuclear Information System (INIS)

    Sher, A.; Chichilnisky, E.J.; Dabrowski, W.; Grillo, A.A.; Grivich, M.; Gunning, D.; Hottowy, P.; Kachiguine, S.; Litke, A.M.; Mathieson, K.; Petrusca, D.

    2007-01-01

    Large circuits of neurons are employed by the brain to encode and process information. How this encoding and processing is carried out is one of the central questions in neuroscience. Since individual neurons communicate with each other through electrical signals (action potentials), the recording of neural activity with arrays of extracellular electrodes is uniquely suited for the investigation of this question. Such recordings provide the combination of the best spatial (individual neurons) and temporal (individual action-potentials) resolutions compared to other large-scale imaging methods. Electrical stimulation of neural activity in turn has two very important applications: it enhances our understanding of neural circuits by allowing active interactions with them, and it is a basis for a large variety of neural prosthetic devices. Until recently, the state-of-the-art in neural activity recording systems consisted of several dozen electrodes with inter-electrode spacing ranging from tens to hundreds of microns. Using silicon microstrip detector expertise acquired in the field of high-energy physics, we created a unique neural activity readout and stimulation framework that consists of high-density electrode arrays, multi-channel custom-designed integrated circuits, a data acquisition system, and data-processing software. Using this framework we developed a number of neural readout and stimulation systems: (1) a 512-electrode system for recording the simultaneous activity of as many as hundreds of neurons, (2) a 61-electrode system for electrical stimulation and readout of neural activity in retinas and brain-tissue slices, and (3) a system with telemetry capabilities for recording neural activity in the intact brain of awake, naturally behaving animals. We will report on these systems, their various applications to the field of neurobiology, and novel scientific results obtained with some of them. We will also outline future directions

  14. Can Neural Activity Propagate by Endogenous Electrical Field?

    Science.gov (United States)

    Qiu, Chen; Shivacharan, Rajat S.; Zhang, Mingming

    2015-01-01

    It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2–6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mouse hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5–5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. SIGNIFICANCE STATEMENT Neural activity (waves or spikes) can propagate using well documented mechanisms such as synaptic transmission, gap junctions, or diffusion. However, the purpose of this paper is to provide an explanation for experimental data showing that neural signals can propagate by means other than synaptic transmission.

  15. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    Science.gov (United States)

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation result showed that the periodicity of the network energy distribution was positively correlated with the number of neurons and the coupling strength, but negatively correlated with the signal transmission delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in the power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation result of a structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.
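
    For reference, the single-neuron Hodgkin-Huxley dynamics underlying the network described above can be integrated with a few lines of explicit Euler stepping. Standard squid-axon parameters are used; the full coupled network and the energy (power-consumption) analysis of the study are not reproduced here.

```python
# Euler integration of one Hodgkin-Huxley neuron under constant current drive.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                # mV

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T, I_ext = 0.01, 100.0, 10.0                # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes = 0
for _ in range(int(T / dt)):
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V_new = V + dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V < 0.0 <= V_new:                        # count upward zero crossings as spikes
        spikes += 1
    V = V_new
print("spikes in 100 ms:", spikes)
```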

  16. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    Science.gov (United States)

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.
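
    The claim above about logistic/softmax units computing exact posteriors is easy to verify numerically: a softmax whose net inputs are set to log(prior) + log(likelihood) returns the Bayesian posterior. The prior and likelihood numbers below are made up purely for the check.

```python
# Numeric check: softmax over log(prior) + log(likelihood) equals Bayes' rule.
import numpy as np

priors = np.array([0.7, 0.2, 0.1])              # P(hypothesis)
likelihoods = np.array([0.05, 0.40, 0.30])      # P(data | hypothesis)

posterior = priors * likelihoods                # Bayes' rule (unnormalized)
posterior /= posterior.sum()

net_input = np.log(priors) + np.log(likelihoods)
softmax = np.exp(net_input) / np.exp(net_input).sum()

print(np.allclose(posterior, softmax))          # True
```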

  17. Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity

    Science.gov (United States)

    Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.

    2016-10-01

    Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
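
    The decoding strategy described, combining frame-wise phoneme likelihoods with language-model transition probabilities in a Viterbi search, can be sketched in a few lines. The phoneme inventory, likelihood values, and flat bigram transitions below are invented placeholders, not the NSR system's models.

```python
# Toy Viterbi decoding over three frames and three phoneme states.
import numpy as np

phonemes = ["h", "ae", "t"]                            # toy inventory
log_like = np.log(np.array([[0.7, 0.2, 0.1],           # frame 1
                            [0.2, 0.6, 0.2],           # frame 2
                            [0.1, 0.2, 0.7]]))         # frame 3
log_trans = np.log(np.full((3, 3), 1.0 / 3.0))         # flat bigram language model

n_frames, n_states = log_like.shape
delta = np.zeros((n_frames, n_states))                 # best path score per state
psi = np.zeros((n_frames, n_states), dtype=int)        # backpointers
delta[0] = log_like[0]
for t in range(1, n_frames):
    scores = delta[t - 1][:, None] + log_trans         # (previous state, current state)
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + log_like[t]

path = [int(delta[-1].argmax())]
for t in range(n_frames - 1, 0, -1):
    path.append(int(psi[t][path[-1]]))
print([phonemes[i] for i in reversed(path)])           # most likely phoneme sequence
```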

  18. Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations

    OpenAIRE

    Harradon, Michael; Druce, Jeff; Ruttenberg, Brian

    2018-01-01

    Deep neural networks are complex and opaque. As they enter application in a variety of important and safety critical domains, users seek methods to explain their output predictions. We develop an approach to explaining deep neural networks by constructing causal models on salient concepts contained in a CNN. We develop methods to extract salient concepts throughout a target network by using autoencoders trained to extract human-understandable representations of network activations. We then bu...

  19. Death and rebirth of neural activity in sparse inhibitory networks

    Science.gov (United States)

    Angulo-Garcia, David; Luccioli, Stefano; Olmi, Simona; Torcini, Alessandro

    2017-05-01

    Inhibition is a key aspect of neural dynamics playing a fundamental role for the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategical for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.

  20. Modeling Broadband Microwave Structures by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Otevrel

    2004-06-01

    Full Text Available The paper describes the exploitation of feed-forward neural networks and recurrent neural networks for replacing full-wave numerical models of microwave structures in complex microwave design tools. Building a neural model, attention is turned to the modeling accuracy and to the efficiency of building a model. Dealing with the accuracy, we describe a method of increasing it by successively completing a training set. Neural models are mutually compared in order to highlight their advantages and disadvantages. As a reference model for comparisons, approximations based on standard cubic splines are used. Neural models are used to replace both the time-domain numeric models and the frequency-domain ones.

  1. Artificial Neural Networks for Reducing Computational Effort in Active Truncated Model Testing of Mooring Lines

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker

    2015-01-01

    simultaneously, this method is very demanding in terms of numerical efficiency and computational power. Therefore, this method has not yet proved to be feasible. It has recently been shown how a hybrid method combining classical numerical models and artificial neural networks (ANN) can provide a dramatic...... prior to the experiment and with a properly trained ANN it is no problem to obtain accurate simulations much faster than real time-without any need for large computational capacity. The present study demonstrates how this hybrid method can be applied to the active truncated experiments yielding a system...

  2. Evaluation of neural networks to identify types of activity using accelerometers

    NARCIS (Netherlands)

    Vries, S.I. de; Garre, F.G.; Engbers, L.H.; Hildebrandt, V.H.; Buuren, S. van

    2011-01-01

    Purpose: To develop and evaluate two artificial neural network (ANN) models based on single-sensor accelerometer data and an ANN model based on the data of two accelerometers for the identification of types of physical activity in adults. Methods: Forty-nine subjects (21 men and 28 women; age range

  3. Exponential stability of Cohen-Grossberg neural networks with a general class of activation functions

    International Nuclear Information System (INIS)

    Wan Anhua; Wang Miansen; Peng Jigen; Qiao Hong

    2006-01-01

    In this Letter, the dynamics of the Cohen-Grossberg neural network model are investigated. The activation functions are only assumed to be Lipschitz continuous, which provides a much wider application domain for neural networks than previous results. By means of the extended nonlinear measure approach, new and relaxed sufficient conditions for the existence, uniqueness and global exponential stability of the equilibrium of the neural networks are obtained. Moreover, an estimate for the exponential convergence rate of the neural networks is precisely characterized. Our results improve on existing ones.

  4. Theories of Person Perception Predict Patterns of Neural Activity During Mentalizing.

    Science.gov (United States)

    Thornton, Mark A; Mitchell, Jason P

    2017-08-22

    Social life requires making inferences about other people. What information do perceivers spontaneously draw upon to make such inferences? Here, we test 4 major theories of person perception, and 1 synthetic theory that combines their features, to determine whether the dimensions of such theories can serve as bases for describing patterns of neural activity during mentalizing. While undergoing functional magnetic resonance imaging, participants made social judgments about well-known public figures. Patterns of brain activity were then predicted using feature encoding models that represented target people's positions on theoretical dimensions such as warmth and competence. All 5 theories of person perception proved highly accurate at reconstructing activity patterns, indicating that each could describe the informational basis of mentalizing. Cross-validation indicated that the theories robustly generalized across both targets and participants. The synthetic theory consistently attained the best performance (approximately two-thirds of noise ceiling accuracy), indicating that, in combination, the theories considered here can account for much of the neural representation of other people. Moreover, encoding models trained on the present data could reconstruct patterns of activity associated with mental state representations in independent data, suggesting the use of a common neural code to represent others' traits and states. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
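
    The encoding-model logic described above, predicting activity patterns from theory-derived dimensions and testing generalization, can be sketched with simulated data. The dimension count, regularization, and target set below are assumptions for illustration only.

```python
# Toy feature encoding model: ridge-regress a simulated response onto
# theory-derived dimensions and evaluate cross-validated prediction accuracy.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_targets, n_dims, n_voxels = 60, 5, 300
features = rng.normal(size=(n_targets, n_dims))          # e.g. warmth, competence, ...
true_weights = rng.normal(size=(n_dims, n_voxels))
activity = features @ true_weights + rng.normal(scale=0.5, size=(n_targets, n_voxels))

scores = cross_val_score(Ridge(alpha=1.0), features, activity[:, 0], cv=5, scoring="r2")
print("cross-validated R^2 for one simulated voxel:", scores.mean())
```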

  5. Probabilistic models for neural populations that naturally capture global coupling and criticality.

    Science.gov (United States)

    Humplik, Jan; Tkačik, Gašper

    2017-09-01

    Advances in multi-unit recordings pave the way for statistical modeling of activity patterns in large neural populations. Recent studies have shown that the summed activity of all neurons strongly shapes the population response. A separate recent finding has been that neural populations also exhibit criticality, an anomalously large dynamic range for the probabilities of different population activity patterns. Motivated by these two observations, we introduce a class of probabilistic models which takes into account the prior knowledge that the neural population could be globally coupled and close to critical. These models consist of an energy function which parametrizes interactions between small groups of neurons, and an arbitrary positive, strictly increasing, and twice differentiable function which maps the energy of a population pattern to its probability. We show that: 1) augmenting a pairwise Ising model with a nonlinearity yields an accurate description of the activity of retinal ganglion cells which outperforms previous models based on the summed activity of neurons; 2) prior knowledge that the population is critical translates to prior expectations about the shape of the nonlinearity; 3) the nonlinearity admits an interpretation in terms of a continuous latent variable globally coupling the system whose distribution we can infer from data. Our method is independent of the underlying system's state space; hence, it can be applied to other systems such as natural scenes or amino acid sequences of proteins which are also known to exhibit criticality.
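
    The model class described, a pairwise energy passed through a strictly increasing nonlinearity, can be written down compactly for a tiny population where the normalization can be computed by brute force. The fields, couplings, and the particular nonlinearity V below are invented for illustration; V(E) = E recovers the ordinary pairwise maximum entropy model.

```python
# Pairwise (Ising-like) energy plus a strictly increasing nonlinearity V,
# with P(s) proportional to exp(-V(E(s))), normalized over all 2^n patterns.
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n = 5
h = rng.normal(scale=0.1, size=n)                     # fields
J = np.triu(rng.normal(scale=0.1, size=(n, n)), 1)
J = J + J.T                                           # symmetric couplings, zero diagonal

def energy(s):
    return -(h @ s) - 0.5 * s @ J @ s

def V(E):
    return E + 0.05 * E**2 * np.tanh(E)               # strictly increasing (toy choice)

patterns = np.array(list(product([0, 1], repeat=n)), dtype=float)
log_p = np.array([-V(energy(s)) for s in patterns])
p = np.exp(log_p - log_p.max())
p /= p.sum()
print("normalization check:", p.sum(), "max pattern probability:", p.max())
```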

  6. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models, also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.

  7. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model has been widely used for the bushing model in vehicle suspension systems, it could not express the nonlinear characteristics of the bushing in terms of amplitude and frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. A linear model was employed to represent linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, where sine excitation with different frequencies and amplitudes was applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to those of the linear model under several maneuvers.

  8. The relationship between structural and functional connectivity: graph theoretical analysis of an EEG neural mass model

    NARCIS (Netherlands)

    Ponten, S.C.; Daffertshofer, A.; Hillebrand, A.; Stam, C.J.

    2010-01-01

    We investigated the relationship between structural network properties and both synchronization strength and functional characteristics in a combined neural mass and graph theoretical model of the electroencephalogram (EEG). Thirty-two neural mass models (NMMs), each representing the lump activity

  9. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits

    Science.gov (United States)

    2018-01-01

    Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. PMID:29408930

  10. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2018-02-01

    Full Text Available Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures (recurrent connections, shared feed-forward projections, and shared gain fluctuations) on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.

  11. A Biophysical Neural Model To Describe Spatial Visual Attention

    International Nuclear Information System (INIS)

    Hugues, Etienne; Jose, Jorge V.

    2008-01-01

    Visual scenes contain enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in a specialized visual area known as V4 when the animal pays attention directly towards a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations in the absence of attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory inputs, at levels as high as those found in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  12. Windowed active sampling for reliable neural learning

    NARCIS (Netherlands)

    Barakova, E.I; Spaanenburg, L

    The composition of the example set has a major impact on the quality of neural learning. The popular approach is focused on extensive pre-processing to bridge the representation gap between process measurement and neural presentation. In contrast, windowed active sampling attempts to solve these

  13. Performance of Deep and Shallow Neural Networks, the Universal Approximation Theorem, Activity Cliffs, and QSAR.

    Science.gov (United States)

    Winkler, David A; Le, Tu C

    2017-01-01

    Neural networks have generated valuable Quantitative Structure-Activity/Property Relationships (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on Deep Learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition, and are now being applied to QSAR and QSPR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the Kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how DNNs may ameliorate or remove troublesome "activity cliffs" in QSAR data sets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Neural network models of categorical perception.

    Science.gov (United States)

    Damper, R I; Harnad, S R

    2000-05-01

    Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. In 1977, Macmillan, Kaplan, and Creelman introduced the use of signal detection theory to CP studies. Anderson and colleagues simultaneously proposed the first neural model for CP, yet this line of research has been less well explored. In this paper, we assess the ability of neural-network models of CP to predict the psychophysical performance of real observers with speech sounds and artificial/novel stimuli. We show that a variety of neural mechanisms are capable of generating the characteristics of CP. Hence, CP may not be a special mode of perception but an emergent property of any sufficiently powerful general learning system.

  15. Task-dependent modulation of oscillatory neural activity during movements

    DEFF Research Database (Denmark)

    Herz, D. M.; Christensen, M. S.; Reck, C.

    2011-01-01

    Neural oscillations in different frequency bands have been observed in a range of sensorimotor tasks and have been linked to coupling of spatially distinct neurons. The goal of this study was to detect a general motor network that is activated during phasic and tonic movements and to study the task-dependent modulation of frequency coupling within this network. To this end we recorded 122-multichannel EEG in 13 healthy subjects while they performed three simple motor tasks. EEG data source modeling using individual MR images was carried out with a multiple source beamformer approach. A bilateral motor network...... connectivity was strongest between central and cerebellar regions. Our results show that neural coupling within motor networks is modulated in distinct frequency bands depending on the motor task. They provide evidence that dynamic causal modeling in combination with EEG source analysis is a valuable tool......

  16. Neural network modeling of associative memory: Beyond the Hopfield model

    Science.gov (United States)

    Dasgupta, Chandan

    1992-07-01

    A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.

  17. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building up computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or with an SNN obtained by STDP alone. The noticeable improvement of the proposed method is due to the competition among neurons being better reflected in the developed SNN model, as well as to the more effective encoding and processing of relevant dynamic information by its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
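
    A minimal sketch of the two plasticity rules named above, with illustrative parameter values (assumptions, not taken from the paper): an exponential STDP window for synaptic weight changes, and an intrinsic-plasticity-style threshold adjustment that pushes each neuron toward a target firing rate.

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the paper)
A_plus, A_minus = 0.01, 0.012     # STDP amplitudes
tau_plus = tau_minus = 20.0       # STDP time constants (ms)
eta_ip, target_rate = 0.001, 5.0  # intrinsic-plasticity learning rate and target rate (Hz)

def stdp_dw(dt):
    """Weight change for a pre->post spike pair separated by dt = t_post - t_pre (ms)."""
    if dt >= 0:                                   # pre before post: potentiation
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)      # post before pre: depression

def ip_update(threshold, observed_rate):
    """Raise the firing threshold of an over-active neuron, lower it for an under-active one."""
    return threshold + eta_ip * (observed_rate - target_rate)

# Example: a causal pair (pre 5 ms before post) strengthens the synapse, an anti-causal
# pair weakens it; a neuron firing above its target rate becomes harder to excite.
print(stdp_dw(5.0), stdp_dw(-5.0))
print(ip_update(1.0, observed_rate=8.0))
```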

  18. How fast can we learn maximum entropy models of neural populations?

    Energy Technology Data Exchange (ETDEWEB)

    Ganmor, Elad; Schneidman, Elad [Department of Neuroscience, Weizmann Institute of Science, Rehovot 76100 (Israel); Segev, Ronen, E-mail: elad.ganmor@weizmann.ac.i, E-mail: elad.schneidman@weizmann.ac.i [Department of Life Sciences and Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel)

    2009-12-01

    Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for finding the pairwise model can be accurately estimated within seconds. Furthermore, while higher order statistics may, in theory, improve model accuracy, they are, in practice, harmful for times of up to 20 minutes due to sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.
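
    The pairwise maximum entropy model referred to here can be made concrete for a small population by matching firing rates and pairwise correlations exactly. The following sketch (synthetic binary activity, exact enumeration over all 2^N states, so toy scale only) fits the fields h_i and couplings J_ij by simple gradient ascent.

```python
import numpy as np
from itertools import product

def fit_pairwise_maxent(patterns, n_iter=2000, lr=0.1):
    """Fit h_i and J_ij of P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)
    so that the model matches the empirical means and pairwise correlations."""
    N = patterns.shape[1]
    emp_mean = patterns.mean(axis=0)
    emp_corr = (patterns.T @ patterns) / len(patterns)
    states = np.array(list(product([0, 1], repeat=N)), dtype=float)   # all 2^N patterns
    h = np.zeros(N)
    J = np.zeros((N, N))
    for _ in range(n_iter):
        energies = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(energies - energies.max())
        p /= p.sum()
        model_mean = p @ states
        model_corr = states.T @ (states * p[:, None])
        h += lr * (emp_mean - model_mean)                 # match firing rates
        dJ = lr * (emp_corr - model_corr)                 # match pairwise correlations
        np.fill_diagonal(dJ, 0.0)
        J += dJ
    return h, J

# Example with synthetic binary activity patterns for 5 "neurons"
rng = np.random.default_rng(0)
data = (rng.random((5000, 5)) < 0.2).astype(float)
h, J = fit_pairwise_maxent(data)
print(h.round(2))
```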

  19. How fast can we learn maximum entropy models of neural populations?

    International Nuclear Information System (INIS)

    Ganmor, Elad; Schneidman, Elad; Segev, Ronen

    2009-01-01

    Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for finding the pairwise model can be accurately estimated within seconds. Furthermore, while higher order statistics may, in theory, improve model accuracy, they are, in practice, harmful for times of up to 20 minutes due to sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.

  20. Strategies influence neural activity for feedback learning across child and adolescent development.

    Science.gov (United States)

    Peters, Sabine; Koolschijn, P Cédric M P; Crone, Eveline A; Van Duijvenvoorde, Anna C K; Raijmakers, Maartje E J

    2014-09-01

    Learning from feedback is an important aspect of executive functioning that shows profound improvements during childhood and adolescence. This is accompanied by neural changes in the feedback-learning network, which includes pre-supplementary motor area (pre-SMA)/anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC), superior parietal cortex (SPC), and the basal ganglia. However, there can be considerable differences in performance within age ranges that are ascribed to differences in strategy use. This is problematic for traditional approaches to analyzing developmental data, in which age groups are assumed to be homogeneous in strategy use. In this study, we used latent variable models to investigate whether underlying strategy groups could be detected for a feedback-learning task and whether there were differences in neural activation patterns between strategies. In a sample of 268 participants between ages 8 and 25 years, we observed four underlying strategy groups, which cut across age groups and varied in the optimality of executive functioning. These strategy groups also differed in neural activity during learning; in particular, the most optimally performing group showed more activity in DLPFC, SPC and pre-SMA/ACC compared to the other groups. However, age differences remained an important contributor to neural activation, even when correcting for strategy. These findings contribute to the debate on age versus performance as predictors of neural development, and highlight the importance of studying individual differences in strategy use when studying development. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Dynamic neural network models of the premotoneuronal circuitry controlling wrist movements in primates.

    Science.gov (United States)

    Maier, M A; Shupe, L E; Fetz, E E

    2005-10-01

    Dynamic recurrent neural networks were derived to simulate neuronal populations generating bidirectional wrist movements in the monkey. The models incorporate anatomical connections of cortical and rubral neurons, muscle afferents, segmental interneurons and motoneurons; they also incorporate the response profiles of four populations of neurons observed in behaving monkeys. The networks were derived by gradient descent algorithms to generate the eight characteristic patterns of motor unit activations observed during alternating flexion-extension wrist movements. The resulting model generated the appropriate input-output transforms and developed connection strengths resembling those in physiological pathways. We found that this network could be further trained to simulate additional tasks, such as experimentally observed reflex responses to limb perturbations that stretched or shortened the active muscles, and scaling of response amplitudes in proportion to inputs. In the final comprehensive network, motor units are driven by the combined activity of cortical, rubral, spinal and afferent units during step tracking and perturbations. The model displayed many emergent properties corresponding to physiological characteristics. The resulting neural network provides a working model of premotoneuronal circuitry and elucidates the neural mechanisms controlling motoneuron activity. It also predicts several features to be experimentally tested, for example the consequences of eliminating inhibitory connections in cortex and red nucleus. It also reveals that co-contraction can be achieved by simultaneous activation of the flexor and extensor circuits without invoking features specific to co-contraction.

  2. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
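
    A minimal sketch of the kind of rate-based recurrent network with separate excitatory and inhibitory populations discussed above (random, untrained connectivity and illustrative parameter values; this is not the trained RNN of the paper, only the basic simulation structure).

```python
import numpy as np

rng = np.random.default_rng(0)
n_exc, n_inh = 80, 20
N = n_exc + n_inh

# Random connectivity obeying Dale's law: excitatory columns positive, inhibitory negative
W = np.abs(rng.normal(scale=1.0 / np.sqrt(N), size=(N, N)))
W[:, n_exc:] *= -4.0              # inhibitory units project negative weights (illustrative gain)

tau, dt, T = 20.0, 1.0, 1000      # membrane time constant (ms), step (ms), duration (steps)
x = rng.normal(scale=0.1, size=N)
rates = np.zeros((T, N))
for t in range(T):
    r = np.tanh(x)                            # firing-rate nonlinearity
    inp = 0.5 if t < 50 else 0.0              # brief input pulse triggers the trajectory
    x += dt / tau * (-x + W @ r + inp)
    rates[t] = r

print("peak rate of unit 0:", rates[:, 0].max().round(3))
```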

  3. Correlation of neural activity with behavioral kinematics reveals distinct sensory encoding and evidence accumulation processes during active tactile sensing.

    Science.gov (United States)

    Delis, Ioannis; Dmochowski, Jacek P; Sajda, Paul; Wang, Qi

    2018-03-23

    Many real-world decisions rely on active sensing, a dynamic process for directing our sensors (e.g. eyes or fingers) across a stimulus to maximize information gain. Though ecologically pervasive, limited work has focused on identifying neural correlates of the active sensing process. In tactile perception, we often make decisions about an object/surface by actively exploring its shape/texture. Here we investigate the neural correlates of active tactile decision-making by simultaneously measuring electroencephalography (EEG) and finger kinematics while subjects interrogated a haptic surface to make perceptual judgments. Since sensorimotor behavior underlies decision formation in active sensing tasks, we hypothesized that the neural correlates of decision-related processes would be detectable by relating active sensing to neural activity. Novel brain-behavior correlation analysis revealed that three distinct EEG components, localizing to right-lateralized occipital cortex (LOC), middle frontal gyrus (MFG), and supplementary motor area (SMA), respectively, were coupled with active sensing as their activity significantly correlated with finger kinematics. To probe the functional role of these components, we fit their single-trial-couplings to decision-making performance using a hierarchical-drift-diffusion-model (HDDM), revealing that the LOC modulated the encoding of the tactile stimulus whereas the MFG predicted the rate of information integration towards a choice. Interestingly, the MFG disappeared from components uncovered from control subjects performing active sensing but not required to make perceptual decisions. By uncovering the neural correlates of distinct stimulus encoding and evidence accumulation processes, this study delineated, for the first time, the functional role of cortical areas in active tactile decision-making. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: a few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction; implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models; the MPC algorithms based on neural multi-models (inspired by the idea of predictive control); the MPC algorithms with neural approximation with no on-line linearization; the MPC algorithms with guaranteed stability and robustness; and cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...
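
    As a generic illustration of MPC built on a neural process model (this is not one of the book's algorithms, only a sketch of the idea), the code below optimizes a short control sequence by gradient descent through a small, untrained neural model and applies only the first move, in receding-horizon fashion.

```python
import torch

torch.manual_seed(0)

# A small neural process model y_{k+1} = f(y_k, u_k); untrained here, standing in for an
# identified plant model.
model = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

def mpc_step(y0, setpoint, horizon=10, iters=200, lr=0.05, lam=0.01):
    """Choose a control sequence minimizing tracking error plus a control-effort penalty."""
    u = torch.zeros(horizon, requires_grad=True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        y, cost = y0, 0.0
        for k in range(horizon):
            y = model(torch.stack([y.squeeze(), u[k]]))          # roll the model forward
            cost = cost + (y - setpoint) ** 2 + lam * u[k] ** 2
        cost.backward()
        opt.step()
    return u.detach()[0]   # receding horizon: apply only the first control move

print(mpc_step(torch.tensor([0.0]), setpoint=torch.tensor(1.0)))
```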

  5. Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.

    Science.gov (United States)

    Kok, Peter; de Lange, Floris P

    2014-07-07

    An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Proposal of a model of mammalian neural induction

    Science.gov (United States)

    Levine, Ariel J.; Brivanlou, Ali H.

    2009-01-01

    How does the vertebrate embryo make a nervous system? This complex question has been at the center of developmental biology for many years. The earliest step in this process – the induction of neural tissue – is intimately linked to patterning of the entire early embryo, and the molecular and embryological basis of these processes is beginning to emerge. Here, we analyze classic and cutting-edge findings on neural induction in the mouse. We find that data from genetics, tissue explants, tissue grafting, and molecular marker expression support a coherent framework for mammalian neural induction. In this model, the gastrula organizer of the mouse embryo inhibits BMP signaling to allow neural tissue to form as a default fate – in the absence of instructive signals. The first neural tissue induced is anterior and subsequent neural tissue is posteriorized to form the midbrain, hindbrain, and spinal cord. The anterior visceral endoderm protects the pre-specified anterior neural fate from similar posteriorization, allowing formation of the forebrain. This model is very similar to the default model of neural induction in the frog, thus bridging the evolutionary gap between amphibians and mammals. PMID:17585896

  7. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several kinds of quantum noise under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as a memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  8. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  9. EEG-fMRI Bayesian framework for neural activity estimation: a simulation study

    Science.gov (United States)

    Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo

    2016-12-01

    Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, joint data analysis can afford a better estimation of the underlying neural activity. In this simulation study we aim to show the benefit of joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI estimation of the neural activity time course. The neural activity originates in a given brain area and is detected by both measurement techniques. We chose a resting-state neural activity situation to address the worst case in terms of signal-to-noise ratio. To infer information from EEG and fMRI concurrently we used a tool belonging to the family of sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvement in neural activity reconstruction afforded by simultaneous EEG-fMRI data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
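
    A bootstrap particle filter of the kind mentioned above can be sketched on a toy state-space model. Here a single AR(1) latent activity is observed through two noisy channels standing in, very loosely, for the two modalities; all dynamics, observation models, and noise levels are assumptions, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-activity model: slow AR(1) state observed through two noisy channels.
T, n_particles = 200, 1000
true_x, obs1, obs2 = np.zeros(T), np.zeros(T), np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.95 * true_x[t - 1] + rng.normal(scale=0.1)
    obs1[t] = true_x[t] + rng.normal(scale=0.3)          # "fast" channel
    obs2[t] = 0.5 * true_x[t] + rng.normal(scale=0.2)    # "slow/scaled" channel

particles = rng.normal(scale=0.5, size=n_particles)
estimate = np.zeros(T)
for t in range(1, T):
    particles = 0.95 * particles + rng.normal(scale=0.1, size=n_particles)   # propagate
    log_w = (-0.5 * ((obs1[t] - particles) / 0.3) ** 2
             - 0.5 * ((obs2[t] - 0.5 * particles) / 0.2) ** 2)                # weight by both channels
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)                      # resample
    particles = particles[idx]
    estimate[t] = particles.mean()

print("RMSE:", np.sqrt(np.mean((estimate - true_x) ** 2)).round(3))
```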

  10. Fast neutron spectra determination by threshold activation detectors using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Koohi-Fayegh, R.; Setayeshi, S.; Ghiassi-Nejad, M.

    2004-01-01

    A neural network method was used for fast neutron spectrum unfolding in spectrometry with threshold activation detectors. The input layer of the neural networks consisted of 11 neurons for the specific activities of neutron-induced nuclear reaction products, while the output layers were fast neutron spectra subdivided into 6, 8, 10, 12, 15 and 20 energy bins. Neural network training was performed with 437 fast neutron spectra and the corresponding threshold activation detector readings. The trained neural networks were applied to unfold 50 spectra that were not in the training sets, and the results were compared with the real spectra and with spectra unfolded by SANDII. The best results belong to the 10-energy-bin spectra. The neural network was also trained on detector readings with 5% uncertainty, and the response of the trained neural network to detector readings with 5%, 10%, 15%, 20%, 25% and 50% uncertainty was compared with the real spectra. The neural network algorithm, in comparison with other unfolding methods, is very fast, requires neither a detector response matrix nor any prior information about the spectra, and its outputs have low sensitivity to uncertainty in the activity measurements. The results show that the neural network algorithm is useful when a fast response with reasonable accuracy is required
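
    The unfolding setup described above amounts to a regression from 11 measured activities to a binned spectrum. The sketch below uses a random response matrix and synthetic spectra as stand-ins for the real training data (assumptions for illustration only) to show the structure of such a network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the unfolding problem: 11 detector activities -> 10 energy-bin spectrum.
n_train, n_bins, n_detectors = 437, 10, 11
R = rng.random((n_detectors, n_bins))                    # hypothetical detector response matrix
spectra = rng.random((n_train, n_bins))                  # synthetic training spectra
activities = spectra @ R.T * (1 + 0.05 * rng.normal(size=(n_train, n_detectors)))  # 5% noise

net = MLPRegressor(hidden_layer_sizes=(30,), max_iter=5000, random_state=0)
net.fit(activities, spectra)                             # learn the inverse mapping directly

test_spectrum = rng.random(n_bins)
test_activity = (test_spectrum @ R.T).reshape(1, -1)
print("unfolded spectrum:", net.predict(test_activity).round(2))
```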

  11. Neural network-based semi-active control strategy for structural vibration mitigation with magnetorheological damper

    DEFF Research Database (Denmark)

    Bhowmik, Subrata

    2011-01-01

    This paper presents a neural network based semi-active control method for a rotary type magnetorheological (MR) damper. The characteristics of the MR damper are described by the classic Bouc-Wen model, and the performance of the proposed control method is evaluated in terms of a base-excited shear...... to determine the damper current based on the derived optimal damper force. For that reason an inverse MR damper model is also designed based on the neural network identification of the particular rotary MR damper. The performance of the proposed controller is compared to that of an optimal pure viscous damper...

  12. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data ...

  13. Population-wide distributions of neural activity during perceptual decision-making

    Science.gov (United States)

    Machens, Christian

    2018-01-01

    Cortical activity involves large populations of neurons, even when it is limited to functionally coherent areas. Electrophysiological recordings, on the other hand, involve comparatively small neural ensembles, even when modern-day techniques are used. Here we review results which have started to fill the gap between these two scales of inquiry, by shedding light on the statistical distributions of activity in large populations of cells. We put our main focus on data recorded in awake animals that perform simple decision-making tasks and consider statistical distributions of activity throughout cortex, across sensory, associative, and motor areas. We transversally review the complexity of these distributions, from distributions of firing rates and metrics of spike-train structure, through distributions of tuning to stimuli or actions and of choice signals, and finally the dynamical evolution of neural population activity and the distributions of (pairwise) neural interactions. This approach reveals shared patterns of statistical organization across cortex, including: (i) long-tailed distributions of activity, where quasi-silence seems to be the rule for a majority of neurons, and which are barely distinguishable between spontaneous and active states; (ii) distributions of tuning parameters for sensory (and motor) variables, which show an extensive extrapolation and fragmentation of their representations in the periphery; and (iii) population-wide dynamics that reveal rotations of internal representations over time, whose traces can be found both in stimulus-driven and internally generated activity. We discuss how these insights are leading us away from the notion of discrete classes of cells, and are acting as powerful constraints on theories and models of cortical organization and population coding. PMID:23123501

  14. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  15. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost nonlinear neural networks provide us with improved reconstruction and prediction capabilities for the near wall velocity fields. Based on these results advantages and drawbacks of both approaches are discussed with an outlook toward the development of near wall models for turbulence modeling and control

  16. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  17. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    Full Text Available A short-term load forecasting model based on quantum Elman neural networks was constructed in this paper. Quantum computation and the Elman feedback mechanism were integrated into the quantum Elman neural networks. Quantum computation can effectively improve the approximation capability and the information processing ability of the neural networks. Quantum Elman neural networks have not only the feedforward connection but also the feedback connection. The feedback connection between the hidden nodes and the context nodes belongs to the state feedback in the internal system, which forms a specific dynamic memory performance. Phase space reconstruction theory is the theoretical basis of constructing the forecasting model. The training samples are formed by means of the K-nearest neighbor approach. Through the example simulation, the testing results show that the model based on quantum Elman neural networks is better than the model based on the quantum feedforward neural network, the model based on the conventional Elman neural network, and the model based on the conventional feedforward neural network. So the proposed model can effectively improve the prediction accuracy. The research in this paper lays a theoretical foundation for the practical engineering application of the short-term load forecasting model based on quantum Elman neural networks.

  18. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While many research problems need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering the fact that manufacturers are not willing to share their control systems. This work proposes a neural network approach for modeling a small unmanned vehicle's trajectory without knowing its control system and bypassing exhaustive efforts for aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used the trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise because the trained neural network can predict 4D trajectories accurately, with prediction errors of less than 2.0 meters in both temporal and spatial dimensions.

  19. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    A large number of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrains, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in the low altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs. How can a vehicle's trajectories be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once being fully trained, given current vehicle states, winds, and desired future trajectory, the neural

  20. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  1. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural...... network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high speed performance, but the neural network cannot be used to extract...... knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model....

  2. Modeling polyvinyl chloride Plasma Modification by Neural Networks

    Science.gov (United States)

    Wang, Changquan

    2018-03-01

    A neural network model was constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using uniform design. Discharge voltage, discharge gas gap and treatment time were used as the neural network input layer parameters. The measured values of the contact angle were used as the output layer parameters. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based upon the neural networks. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted value is very close to the actual test value. The prediction model obtained here is useful for discharge plasma surface modification analysis.

  3. Forecasting Flare Activity Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Hernandez, T.

    2017-12-01

    Current operational flare forecasting relies on human morphological analysis of active regions and the persistence of solar flare activity through time (i.e. that the Sun will continue to do what it is doing right now: flaring or remaining calm). In this talk we present the results of applying deep Convolutional Neural Networks (CNNs) to the problem of solar flare forecasting. CNNs operate by training a set of tunable spatial filters that, in combination with neural layer interconnectivity, allow CNNs to automatically identify significant spatial structures predictive for classification and regression problems. We will start by discussing the applicability and success rate of the approach, the advantages it has over non-automated forecasts, and how mining our trained neural network provides a fresh look into the mechanisms behind magnetic energy storage and release.
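
    A minimal sketch of a CNN classifier of the kind described above (the architecture, image size, and labels are assumptions, not the network used in the talk): two convolution/pooling stages followed by a linear read-out that classifies a magnetogram-like patch as flaring or quiet.

```python
import torch
import torch.nn as nn

class FlareCNN(nn.Module):
    """Toy CNN: single-channel 64x64 "magnetogram" patch -> flaring / quiet logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FlareCNN()
dummy_batch = torch.randn(8, 1, 64, 64)                      # 8 fake 64x64 patches
logits = model(dummy_batch)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))   # random labels
loss.backward()
print(logits.shape, float(loss))
```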

  4. Neural activation toward erotic stimuli in homosexual and heterosexual males.

    Science.gov (United States)

    Kagerer, Sabine; Klucken, Tim; Wehrum, Sina; Zimmermann, Mark; Schienle, Anne; Walter, Bertram; Vaitl, Dieter; Stark, Rudolf

    2011-11-01

    Studies investigating sexual arousal exist, yet there are diverging findings on the underlying neural mechanisms with regard to sexual orientation. Moreover, sexual arousal effects have often been confounded with general arousal effects. Hence, it is still unclear which structures underlie the sexual arousal response in homosexual and heterosexual men. Neural activity and subjective responses were investigated in order to disentangle sexual from general arousal. Considering sexual orientation, differential and conjoint neural activations were of interest. The functional magnetic resonance imaging (fMRI) study focused on the neural networks involved in the processing of sexual stimuli in 21 male participants (11 homosexual, 10 heterosexual). Both groups viewed pictures with erotic content as well as aversive and neutral stimuli. The erotic pictures were subdivided into three categories (most sexually arousing, least sexually arousing, and rest) based on the individual subjective ratings of each participant. Blood oxygen level-dependent responses measured by fMRI and subjective ratings. A conjunction analysis revealed conjoint neural activation related to sexual arousal in thalamus, hypothalamus, occipital cortex, and nucleus accumbens. Increased insula, amygdala, and anterior cingulate gyrus activation could be linked to general arousal. Group differences emerged neither when viewing the most sexually arousing pictures compared with highly arousing aversive pictures nor compared with neutral pictures. Results suggest that a widespread neural network is activated by highly sexually arousing visual stimuli. A partly distinct network of structures underlies sexual and general arousal effects. The processing of preferred, highly sexually arousing stimuli recruited similar structures in homosexual and heterosexual males. © 2011 International Society for Sexual Medicine.

  5. A model of microsaccade-related neural responses induced by short-term depression in thalamocortical synapses

    Directory of Open Access Journals (Sweden)

    Wujie eYuan

    2013-04-01

    Full Text Available Microsaccades during fixation have been suggested to counteract visual fading. Recent experiments have also observed microsaccade-related neural responses from cellular record, scalp electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI). The underlying mechanism, however, is not yet understood and highly debated. It has been proposed that the neural activity of primary visual cortex (V1) is a crucial component for counteracting visual adaptation. In this paper, we use computational modeling to investigate how short-term depression (STD) in thalamocortical synapses might affect the neural responses of V1 in the presence of microsaccades. Our model not only gives a possible synaptic explanation for microsaccades in counteracting visual fading, but also reproduces several features in experimental findings. These modeling results suggest that STD in thalamocortical synapses plays an important role in microsaccade-related neural responses and the model may be useful for further investigation of behavioral properties and functional roles of microsaccades.

  6. A model of microsaccade-related neural responses induced by short-term depression in thalamocortical synapses

    Science.gov (United States)

    Yuan, Wu-Jie; Dimigen, Olaf; Sommer, Werner; Zhou, Changsong

    2013-01-01

    Microsaccades during fixation have been suggested to counteract visual fading. Recent experiments have also observed microsaccade-related neural responses from cellular record, scalp electroencephalogram (EEG), and functional magnetic resonance imaging (fMRI). The underlying mechanism, however, is not yet understood and highly debated. It has been proposed that the neural activity of primary visual cortex (V1) is a crucial component for counteracting visual adaptation. In this paper, we use computational modeling to investigate how short-term depression (STD) in thalamocortical synapses might affect the neural responses of V1 in the presence of microsaccades. Our model not only gives a possible synaptic explanation for microsaccades in counteracting visual fading, but also reproduces several features in experimental findings. These modeling results suggest that STD in thalamocortical synapses plays an important role in microsaccade-related neural responses and the model may be useful for further investigation of behavioral properties and functional roles of microsaccades. PMID:23630494
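
    The short-term depression mechanism invoked above can be sketched with a Tsodyks-Markram-style resource variable; the parameter values below are illustrative assumptions, not fitted thalamocortical values.

```python
import numpy as np

def std_synapse(spike_times, tau_rec=800.0, U=0.5, dt=1.0, T=2000):
    """Short-term depression: each presynaptic spike uses a fraction U of the available
    resources x, which recover toward 1 with time constant tau_rec (ms)."""
    steps = int(T / dt)
    x, efficacies = 1.0, []
    spikes = set(int(t / dt) for t in spike_times)
    for step in range(steps):
        x += dt * (1.0 - x) / tau_rec           # recovery toward full resources
        if step in spikes:
            efficacies.append(U * x)            # effective synaptic drive of this spike
            x -= U * x                          # depletion by the spike
    return efficacies

# A regular 20 Hz train depresses; a pause (mimicking the interval before a
# microsaccade-driven burst) lets the synapse recover and respond strongly again.
train = list(range(0, 1000, 50)) + [1800]
print([round(e, 3) for e in std_synapse(train)])
```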

  7. Activity in part of the neural correlates of consciousness reflects integration.

    Science.gov (United States)

    Eriksson, Johan

    2017-10-01

    Integration is commonly viewed as a key process for generating conscious experiences. Accordingly, there should be increased activity within the neural correlates of consciousness when demands on integration increase. We used fMRI and "informational masking" to isolate the neural correlates of consciousness and measured how the associated brain activity changed as a function of required integration. Integration was manipulated by comparing the experience of hearing simple reoccurring tones to hearing harmonic tone triplets. The neural correlates of auditory consciousness included superior temporal gyrus, lateral and medial frontal regions, cerebellum, and also parietal cortex. Critically, only activity in left parietal cortex increased significantly as a function of increasing demands on integration. We conclude that integration can explain part of the neural activity associated with the generation of conscious experiences, but that much of the associated brain activity apparently reflects other processes. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. An Activity for Demonstrating the Concept of a Neural Circuit

    Science.gov (United States)

    Kreiner, David S.

    2012-01-01

    College students in two sections of a general psychology course participated in a demonstration of a simple neural circuit. The activity was based on a neural circuit that Jeffress proposed for localizing sounds. Students in one section responded to a questionnaire prior to participating in the activity, while students in the other section…

  9. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision made in prediction is very important, because prediction helps to know the forex value at a certain time in the future and thus can reduce the risk of loss. The purpose of this research is to predict the forex business using a neural network model with per-1-minute time series data in order to determine the prediction accuracy, so that the risk of running a forex business can be reduced. The research method in this study comprises data collection followed by training, learning, and testing using a neural network. After evaluation, the results of this research show that applying the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk in running a forex business. Keywords: prediction, forex, neural network.

  10. Hysteretic recurrent neural networks: a tool for modeling hysteretic materials and systems

    International Nuclear Information System (INIS)

    Veeramani, Arun S; Crews, John H; Buckner, Gregory D

    2009-01-01

    This paper introduces a novel recurrent neural network, the hysteretic recurrent neural network (HRNN), that is ideally suited to modeling hysteretic materials and systems. This network incorporates a hysteretic neuron consisting of conjoined sigmoid activation functions. Although similar hysteretic neurons have been explored previously, the HRNN is unique in its utilization of simple recurrence to 'self-select' relevant activation functions. Furthermore, training is facilitated by placing the network weights on the output side, allowing standard backpropagation of error training algorithms to be used. We present two- and three-phase versions of the HRNN for modeling hysteretic materials with distinct phases. These models are experimentally validated using data collected from shape memory alloys and ferromagnetic materials. The results demonstrate the HRNN's ability to accurately generalize hysteretic behavior with a relatively small number of neurons. Additional benefits lie in the network's ability to identify statistical information concerning the macroscopic material by analyzing the weights of the individual neurons
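
    A toy version of the hysteretic neuron described above, assuming one concrete way of conjoining two shifted sigmoids and using the previous output as the recurrence that selects the active branch; the parameters are illustrative, not those of the HRNN.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hysteretic_neuron(inputs, width=2.0, gain=4.0):
    """Toy hysteretic neuron: two shifted sigmoid branches (one effective when the unit is
    "off", one when it is "on"); simple recurrence on the previous output selects the branch."""
    y_prev, outputs = 0.0, []
    for u in inputs:
        rising = sigmoid(gain * (u - width / 2))      # branch used when the unit is "off"
        falling = sigmoid(gain * (u + width / 2))     # branch used when the unit is "on"
        y_prev = (1 - y_prev) * rising + y_prev * falling
        outputs.append(y_prev)
    return outputs

# Sweep the input up and then back down: the up- and down-sweeps trace different curves,
# i.e. the unit exhibits a hysteresis loop.
sweep = list(np.linspace(-3, 3, 30)) + list(np.linspace(3, -3, 30))
out = hysteretic_neuron(sweep)
print(round(out[15], 3), round(out[44], 3))   # same input value, different output per branch
```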

  11. Feed forward neural networks modeling for K-P interactions

    International Nuclear Information System (INIS)

    El-Bakry, M.Y.

    2003-01-01

    Artificial intelligence techniques involving neural networks have become vital modeling tools where model dynamics are difficult to track with conventional techniques. This paper makes use of feed-forward neural networks (FFNN) to model the charged multiplicity distribution of K-P interactions at high energies. The FFNN was trained using experimental data for the multiplicity distributions at different lab momenta. Results of the FFNN model were compared to those generated using the parton two-fireball model and to the experimental data. The proposed FFNN model showed a good fit to the experimental data. The performance of the neural network model was also tested outside the training space and was found to be in good agreement with the experimental data

  12. What are the odds? The neural correlates of active choice during gambling

    Directory of Open Access Journals (Sweden)

    Bettina eStuder

    2012-04-01

    Full Text Available Gambling is a widespread recreational activity and requires pitting the values of potential wins and losses against their probability of occurrence. Neuropsychological research showed that betting behavior on laboratory gambling tasks is highly sensitive to focal lesions to the ventromedial prefrontal cortex (vmPFC) and insula. In the current study, we assessed the neural basis of betting choices in healthy participants, using functional magnetic resonance imaging of the Roulette Betting Task. In half of the trials participants actively chose their bets; in the other half the computer dictated the bet size. Our results highlight the impact of volitional choice upon the neural substrates of gambling: Neural activity in a distributed network - including key structures of the reward circuitry (midbrain, striatum) - was higher during active compared to computer-dictated bet selection. In line with neuropsychological data, the anterior insula and vmPFC were more activated during self-directed bet selection, and responses in these areas were differentially modulated by the odds of winning in the two choice conditions. In addition, responses in the vmPFC and ventral striatum were modulated by the bet size. Convergent with electrophysiological research in macaques, our results further implicate the inferior parietal cortex (IPC) in the processing of the likelihood of potential outcomes: Neural responses in the IPC bilaterally reflected the probability of winning during bet selection. Moreover, the IPC was particularly sensitive to the odds of winning in the active choice condition, where this information was used to guide bet selection. Our results indicate a neglected role of the IPC in human decision-making under risk and help to integrate neuropsychological data of risk-taking following vmPFC and insula damage with models of choice derived from human neuroimaging and monkey electrophysiology.

  13. Neural network tagging in a toy model

    International Nuclear Information System (INIS)

    Milek, Marko; Patel, Popat

    1999-01-01

    The purpose of this study is a comparison of the Artificial Neural Network approach to HEP analysis against traditional methods. The toy model used in this analysis consists of two types of particles defined by four generic properties. A number of 'events' were created according to the model using standard Monte Carlo techniques. Several fully connected, feed-forward, multi-layered Artificial Neural Networks were trained to tag the model events. The performance of each network was compared to the standard analysis mechanisms and significant improvement was observed

  14. An approach to the interpretation of backpropagation neural network models in QSAR studies.

    Science.gov (United States)

    Baskin, I I; Ait, A O; Halberstam, N M; Palyulin, V A; Zefirov, N S

    2002-03-01

    An approach to the interpretation of backpropagation neural network models for quantitative structure-activity and structure-property relationships (QSAR/QSPR) studies is proposed. The method is based on analyzing the first and second moments of the distribution of the values of the first and second partial derivatives of neural network outputs with respect to inputs, calculated at data points. The use of such statistics makes it possible not only to obtain essentially the same characteristics as for traditional "interpretable" statistical methods, such as linear regression analysis, but also to reveal important additional information regarding the non-linear character of QSAR/QSPR relationships. The approach is illustrated by an example of interpreting a backpropagation neural network model for predicting the position of the long-wave absorption band of cyanine dyes.
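
    The interpretation procedure described above reduces, in essence, to evaluating the first and second partial derivatives of the network output with respect to each input at the data points and summarizing their distributions. A sketch with a random stand-in network and data (not the authors' model) is given below.

```python
import torch

torch.manual_seed(0)

# Random stand-in for a trained QSAR network and its descriptor matrix.
net = torch.nn.Sequential(torch.nn.Linear(5, 10), torch.nn.Tanh(), torch.nn.Linear(10, 1))
X = torch.randn(200, 5)

first, second = [], []
for x in X:
    x = x.clone().requires_grad_(True)
    y = net(x)
    g = torch.autograd.grad(y, x, create_graph=True)[0]            # first partial derivatives
    h = torch.stack([torch.autograd.grad(g[i], x, retain_graph=True)[0][i]
                     for i in range(5)])                            # diagonal second derivatives
    first.append(g.detach())
    second.append(h.detach())

first = torch.stack(first)
second = torch.stack(second)
print("mean dy/dx_i:", first.mean(dim=0))      # analogue of linear-regression coefficients
print("mean d2y/dx_i2:", second.mean(dim=0))   # indication of non-linearity per descriptor
```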

  15. Delta Learning Rule for the Active Sites Model

    OpenAIRE

    Lingashetty, Krishna Chaithanya

    2010-01-01

    This paper reports the results on methods of comparing the memory retrieval capacity of the Hebbian neural network which implements the B-Matrix approach, by using the Widrow-Hoff rule of learning. We then, extend the recently proposed Active Sites model by developing a delta rule to increase memory capacity. Also, this paper extends the binary neural network to a multi-level (non-binary) neural network.

  16. Modelling collective cell migration of neural crest.

    Science.gov (United States)

    Szabó, András; Mayor, Roberto

    2016-10-01

    Collective cell migration has emerged in the recent decade as an important phenomenon in cell and developmental biology and can be defined as the coordinated and cooperative movement of groups of cells. Most studies concentrate on tightly connected epithelial tissues, even though collective migration does not require a constant physical contact. Movement of mesenchymal cells is more independent, making their emergent collective behaviour less intuitive and therefore lending importance to computational modelling. Here we focus on such modelling efforts that aim to understand the collective migration of neural crest cells, a mesenchymal embryonic population that migrates large distances as a group during early vertebrate development. By comparing different models of neural crest migration, we emphasize the similarity and complementary nature of these approaches and suggest a future direction for the field. The principles derived from neural crest modelling could aid understanding the collective migration of other mesenchymal cell types. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. SPR imaging combined with cyclic voltammetry for the detection of neural activity

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-03-01

    Full Text Available Surface plasmon resonance (SPR) detects changes in refractive index at a metal-dielectric interface. In this study, SPR imaging (SPRi) combined with cyclic voltammetry (CV) was applied to detect neural activity in isolated bullfrog sciatic nerves. The neural activities induced by chemical and electrical stimulation led to an SPR response, and the activities were recorded in real time. The activities of different parts of the sciatic nerve were recorded and compared. The results demonstrated that SPR imaging combined with CV is a powerful tool for the investigation of neural activity.

  18. The effects of noise on binocular rivalry waves: a stochastic neural field model

    International Nuclear Information System (INIS)

    Webber, Matthew A; Bressloff, Paul C

    2013-01-01

    We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction–diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave. (paper)

  19. State-dependent, bidirectional modulation of neural network activity by endocannabinoids.

    Science.gov (United States)

    Piet, Richard; Garenne, André; Farrugia, Fanny; Le Masson, Gwendal; Marsicano, Giovanni; Chavis, Pascale; Manzoni, Olivier J

    2011-11-16

    The endocannabinoid (eCB) system and the cannabinoid CB1 receptor (CB1R) play key roles in the modulation of brain functions. Although actions of eCBs and CB1Rs are well described at the synaptic level, little is known of their modulation of neural activity at the network level. Using microelectrode arrays, we have examined the role of CB1R activation in the modulation of the electrical activity of rat and mouse cortical neural networks in vitro. We find that exogenous activation of CB1Rs expressed on glutamatergic neurons decreases the spontaneous activity of cortical neural networks. Moreover, we observe that the net effect of the CB1R antagonist AM251 inversely correlates with the initial level of activity in the network: blocking CB1Rs increases network activity when basal network activity is low, whereas it depresses spontaneous activity when its initial level is high. Our results reveal a complex role of CB1Rs in shaping spontaneous network activity, and suggest that the outcome of endogenous neuromodulation on network function might be state dependent.

  20. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    Full Text Available One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and surrounded by a local circularly confined network (“island”) of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of the neurophobic, polyimide-coated substrate with a triblock copolymer enhances the neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV), which is two weeks later than in unpatterned networks.

  1. Modeling Distillation Column Using ARX Model Structure and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Reza Pirmoradi

    2012-04-01

    Full Text Available Distillation is a complex and highly nonlinear industrial process. In general it is not always possible to obtain accurate first principles models for high-purity distillation columns. On the other hand the development of first principles models is usually time consuming and expensive. To overcome these problems, empirical models such as neural networks can be used. One major drawback of empirical models is that the prediction is valid only inside the data domain that is sufficiently covered by measurement data. Modeling distillation columns by means of neural networks has been reported in the literature using recursive networks. Recursive networks are suitable for modeling purposes, but such models suffer from high complexity and high computational cost. The objective of this paper is to propose a simple and reliable model for the distillation column. The proposed model uses feed forward neural networks, which results in a simple model with fewer parameters and faster training time. Simulation results demonstrate that predictions of the proposed model in all regions are close to the outputs of the dynamic model and the error is negligible. This implies that the model is reliable in all regions.
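
    As a rough illustration of the kind of model described here, a feed-forward network used as a one-step-ahead (ARX-style) predictor, the following Python sketch trains a small network on a made-up plant. The lag structure, layer sizes and the synthetic "plant" function are assumptions for demonstration only, not the authors' column model.

      import numpy as np

      # Illustrative sketch: a feed-forward network used as a one-step-ahead predictor
      # in ARX fashion. The regressor collects past outputs and inputs; the network
      # maps it to the next output. All settings below are assumptions.

      rng = np.random.default_rng(0)

      def plant(y1, y2, u1):
          # hypothetical nonlinear plant standing in for the distillation column
          return 0.6 * y1 - 0.1 * y2 + 0.3 * np.tanh(u1)

      # build (regressor, next-output) pairs from a simulated run
      u = rng.uniform(-1, 1, 500)
      y = np.zeros(500)
      for k in range(2, 500):
          y[k] = plant(y[k - 1], y[k - 2], u[k - 1])
      X = np.stack([y[1:-1], y[:-2], u[1:-1]], axis=1)   # [y(k-1), y(k-2), u(k-1)]
      target = y[2:]

      # one-hidden-layer feed-forward net trained by plain gradient descent
      W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
      W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
      lr = 0.05
      for epoch in range(2000):
          h = np.tanh(X @ W1 + b1)
          err = (h @ W2 + b2)[:, 0] - target
          g2 = err[:, None] / len(target)                # output-layer gradient of the MSE
          g1 = (g2 @ W2.T) * (1 - h ** 2)                # backpropagated to the hidden layer
          W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(0)
          W1 -= lr * X.T @ g1; b1 -= lr * g1.sum(0)

      print("final one-step-ahead MSE:", float(np.mean(err ** 2)))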

  2. Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network.

    Directory of Open Access Journals (Sweden)

    Christoph Hartmann

    2015-12-01

    Full Text Available Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can arise through learning in deterministic networks.

  3. Neural pathways in processing of sexual arousal: a dynamic causal modeling study.

    Science.gov (United States)

    Seok, J-W; Park, M-S; Sohn, J-H

    2016-09-01

    Three decades of research have investigated brain processing of visual sexual stimuli with neuroimaging methods. These researchers have found that sexual arousal stimuli elicit activity in a broad neural network of cortical and subcortical brain areas that are known to be associated with cognitive, emotional, motivational and physiological components. However, it is not completely understood how these neural systems integrate and modulate incoming information. Therefore, we identified cerebral areas whose activations were correlated with sexual arousal using event-related functional magnetic resonance imaging and used dynamic causal modeling to search for the effective connectivity of the sexual arousal processing network. Thirteen heterosexual males were scanned while they passively viewed alternating short trials of erotic and neutral pictures on a monitor. We created a subset of seven models based on our results and previous studies and selected a dominant connectivity model. Consequently, we suggest a dynamic causal model of the brain processes mediating the cognitive, emotional, motivational and physiological factors of human male sexual arousal. These findings have significant implications for the neuropsychology of male sexuality.

  4. Acquiring neural signals for developing a perception and cognition model

    Science.gov (United States)

    Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert

    2012-06-01

    The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding context-based modeling, provides us with a theoretical basis for using a bottom-up approach for automating the management of large amounts of information in ways directly useful for human operators. However, integration of human intelligence into a game theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cup acquires brainwaves from the skin surface of the scalp. The humanoid robot has 20 degrees of freedom (DOFs); 12 DOFs located on the hips, knees, and ankles for humanoid robot walking, 6 DOFs on the shoulders and arms for arm motion, and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated to the robot-walking behaviors.

  5. Explaining neural signals in human visual cortex with an associative learning model.

    Science.gov (United States)

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

  6. Active Engine Mounting Control Algorithm Using Neural Network

    Directory of Open Access Journals (Sweden)

    Fadly Jashi Darsivan

    2009-01-01

    Full Text Available This paper proposes the application of a neural network as a controller to isolate engine vibration in an active engine mounting system. It has been shown that the NARMA-L2 neurocontroller has the ability to reject disturbances from a plant. The disturbances are assumed to be impulse and sinusoidal disturbances induced by the engine. The performance of the neural network controller is compared with conventional PD and PID controllers tuned using Ziegler-Nichols rules. The simulation results show that the neural network controller isolates the engine vibration better than the conventional controllers.

  7. Neural activity when people solve verbal problems with insight.

    Directory of Open Access Journals (Sweden)

    Mark Jung-Beeman

    2004-04-01

    Full Text Available People sometimes solve problems with a unique process called insight, accompanied by an "Aha!" experience. It has long been unclear whether different cognitive and neural processes lead to insight versus noninsight solutions, or if solutions differ only in subsequent subjective feeling. Recent behavioral studies indicate distinct patterns of performance and suggest differential hemispheric involvement for insight and noninsight solutions. Subjects solved verbal problems, and after each correct solution indicated whether they solved with or without insight. We observed two objective neural correlates of insight. Functional magnetic resonance imaging (Experiment 1) revealed increased activity in the right hemisphere anterior superior temporal gyrus for insight relative to noninsight solutions. The same region was active during initial solving efforts. Scalp electroencephalogram recordings (Experiment 2) revealed a sudden burst of high-frequency (gamma-band) neural activity in the same area beginning 0.3 s prior to insight solutions. This right anterior temporal area is associated with making connections across distantly related information during comprehension. Although all problem solving relies on a largely shared cortical network, the sudden flash of insight occurs when solvers engage distinct neural and cognitive processes that allow them to see connections that previously eluded them.

  8. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
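
    The general idea of reading network activity as MCMC sampling can be illustrated with the simplest textbook case, Gibbs sampling in a small Boltzmann machine, even though the paper itself argues that Gibbs sampling is inconsistent with spiking dynamics and develops a non-reversible chain instead. The hedged sketch below (with arbitrary weights) only shows how a sequence of network states forms samples from a target distribution.

      import numpy as np

      # Minimal illustration of "network activity as sampling": Gibbs sampling in a
      # small Boltzmann machine. The sequence of binary unit states over time forms
      # samples from a Boltzmann distribution. This is NOT the paper's spiking-neuron
      # sampler; weights and sizes are arbitrary.

      rng = np.random.default_rng(1)
      n = 5
      W = rng.normal(0, 1, (n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
      b = rng.normal(0, 0.5, n)

      def gibbs_chain(steps):
          s = rng.integers(0, 2, n).astype(float)
          samples = []
          for _ in range(steps):
              for i in range(n):                    # update each unit given the others
                  p_on = 1 / (1 + np.exp(-(W[i] @ s + b[i])))
                  s[i] = 1.0 if rng.random() < p_on else 0.0
              samples.append(s.copy())
          return np.array(samples)

      samples = gibbs_chain(20000)
      print("empirical mean activity per unit:", samples[5000:].mean(axis=0))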

  9. Weak correlations between hemodynamic signals and ongoing neural activity during the resting state

    Science.gov (United States)

    Winder, Aaron T.; Echagarruga, Christina; Zhang, Qingguang; Drew, Patrick J.

    2017-01-01

    Spontaneous fluctuations in hemodynamic signals in the absence of a task or overt stimulation are used to infer neural activity. We tested this coupling by simultaneously measuring neural activity and changes in cerebral blood volume (CBV) in the somatosensory cortex of awake, head-fixed mice during periods of true rest, and during whisker stimulation and volitional whisking. Here we show that neurovascular coupling was similar across states, and large spontaneous CBV changes in the absence of sensory input were driven by volitional whisker and body movements. Hemodynamic signals during periods of rest were weakly correlated with neural activity. Spontaneous fluctuations in CBV and vessel diameter persisted when local neural spiking and glutamatergic input was blocked, and during blockade of noradrenergic receptors, suggesting a non-neuronal origin for spontaneous CBV fluctuations. Spontaneous hemodynamic signals reflect a combination of behavior, local neural activity, and putatively non-neural processes. PMID:29184204

  10. Weather forecasting based on hybrid neural model

    Science.gov (United States)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-11-01

    Making deductions and predictions about the weather has been a challenge throughout human history. Accurate meteorological forecasts help to foresee and handle problems well in time. Different strategies based on various machine learning techniques have been investigated in reported forecasting systems. The current research treats weather as a major challenge for machine data mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model is designed for precise forecasting given the particular demands of weather-prediction frameworks. The study concentrates on data representing Saudi Arabian weather. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for measuring forecast accuracy. Individually, the MLP's forecasting results are better than the RBF's; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than either individual network. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
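
    A minimal sketch of the hybrid idea, with entirely made-up data: a radial-basis-function network and a second model are fitted to the same records and their outputs are combined by averaging. For brevity an already-fitted linear model stands in for the trained MLP; the centres, widths and toy features are assumptions, not the study's Saudi Arabian data.

      import numpy as np

      # Hybrid-prediction sketch: an RBF network (Gaussian features, least-squares
      # output weights) combined with a second model by simple averaging. A linear
      # model stands in for the MLP here purely to keep the sketch short.

      rng = np.random.default_rng(6)
      X = rng.uniform(0, 1, (200, 3))                       # e.g. dew point, temperature, humidity (scaled)
      y = (X[:, 0] + 0.5 * np.sin(4 * X[:, 1]) + X[:, 2] > 1.2).astype(float)  # 1 = rainy

      # RBF network: random centres, Gaussian features, output weights by least squares
      centres = X[rng.choice(len(X), 15, replace=False)]
      width = 0.3
      Phi = np.exp(-((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1) / (2 * width ** 2))
      w_rbf, *_ = np.linalg.lstsq(Phi, y, rcond=None)

      # stand-in for the trained MLP: a fitted linear model on the raw features
      w_lin, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

      def hybrid_predict(Xq):
          phi = np.exp(-((Xq[:, None, :] - centres[None, :, :]) ** 2).sum(-1) / (2 * width ** 2))
          rbf_out = phi @ w_rbf
          lin_out = np.c_[Xq, np.ones(len(Xq))] @ w_lin
          return (rbf_out + lin_out) / 2                    # hybrid = average of the two experts

      pred = hybrid_predict(X) > 0.5
      print("training accuracy of the hybrid:", (pred == (y == 1)).mean())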

  11. Stability of a neural network model with small-world connections

    International Nuclear Information System (INIS)

    Li Chunguang; Chen Guanrong

    2003-01-01

    Small-world networks are highly clustered networks with small distances among the nodes. There are many biological neural networks that present this kind of connection. There are no special weightings in the connections of most existing small-world network models. However, this kind of simply connected model cannot characterize biological neural networks, in which there are different weights in synaptic connections. In this paper, we present a neural network model with weighted small-world connections and further investigate the stability of this model

  12. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    Science.gov (United States)

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Transport energy demand modeling of South Korea using artificial neural network

    International Nuclear Information System (INIS)

    Geem, Zong Woo

    2011-01-01

    Artificial neural network models were developed to forecast South Korea's transport energy demand. Various independent variables, such as GDP, population, oil price, number of vehicle registrations, and passenger transport amount, were considered and several good models (Model 1 with GDP, population, and passenger transport amount; Model 2 with GDP, number of vehicle registrations, and passenger transport amount; and Model 3 with oil price, number of vehicle registrations, and passenger transport amount) were selected by comparing with multiple linear regression models. Although certain regression models obtained better R-squared values than the neural network models, this does not guarantee that the former are better than the latter, because the root mean squared errors of the former were much worse than those of the latter. Also, certain regression models showed structural weaknesses based on their P-values. Instead, neural network models produced more robust results. Forecasted results using the neural network models show that South Korea will consume around 37 MTOE of transport energy in 2025. - Highlights: → Transport energy demand of South Korea was forecasted using artificial neural networks. → Various variables (GDP, population, oil price, number of registrations, etc.) were considered. → Results of the artificial neural networks were compared with those of multiple linear regression.

  14. Neural model of gene regulatory network: a survey on supportive meta-heuristics.

    Science.gov (United States)

    Biswas, Surama; Acharyya, Sriyankar

    2016-06-01

    Gene regulatory network (GRN) is produced as a result of regulatory interactions between different genes through their coded proteins in cellular context. Having immense importance in disease detection and drug discovery, GRN has been modelled through various mathematical and computational schemes and reported in survey articles. Neural and neuro-fuzzy models have been the focus of attention in bioinformatics. Predominant use of meta-heuristic algorithms in training neural models has proved its excellence. Considering these facts, this paper is organized to survey neural modelling schemes of GRN and the efficacy of meta-heuristic algorithms towards parameter learning (i.e. weighting connections) within the model. This survey paper renders two different structure-related approaches to infer GRN, namely the global structure approach and the substructure approach. It also describes two neural modelling schemes, artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRN have been reviewed here.

  15. Numeral eddy current sensor modelling based on genetic neural network

    International Nuclear Information System (INIS)

    Yu Along

    2008-01-01

    This paper presents a method for modelling the numeral eddy current sensor based on a genetic neural network, in order to address its nonlinearity. The principle and algorithms of the genetic neural network are introduced. In this method, the nonlinear model parameters of the numeral eddy current sensor are optimized by the genetic neural network (GNN) according to measurement data. The method thus retains both the global searching ability of the genetic algorithm and the good local searching ability of the neural network. The nonlinear model has the advantages of strong robustness, on-line modelling and high precision. The maximum nonlinearity error can be reduced to 0.037% by using the GNN; by comparison, the maximum nonlinearity error is 0.075% using the least-squares method.
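
    A sketch of the underlying idea (not the authors' implementation): a genetic algorithm searches the weights of a small one-hidden-layer network that maps raw sensor readings to corrected values, i.e. it learns the inverse of the sensor nonlinearity. The synthetic sensor curve, network size and GA settings below are all assumptions.

      import numpy as np

      # Genetic-algorithm training of a tiny network for sensor linearization.
      # Individuals are flat weight vectors; fitness is negative calibration MSE.

      rng = np.random.default_rng(2)

      x = np.linspace(0, 1, 200)                      # true displacement
      reading = x + 0.15 * np.sin(3 * np.pi * x)      # hypothetical nonlinear sensor output

      HIDDEN = 6
      N_W = HIDDEN + HIDDEN + HIDDEN + 1              # W1, b1, W2, b2 for a 1-6-1 network

      def net(w, inp):
          W1 = w[:HIDDEN].reshape(1, HIDDEN)
          b1 = w[HIDDEN:2 * HIDDEN]
          W2 = w[2 * HIDDEN:3 * HIDDEN].reshape(HIDDEN, 1)
          b2 = w[-1]
          return (np.tanh(inp[:, None] @ W1 + b1) @ W2)[:, 0] + b2

      def fitness(w):
          return -np.mean((net(w, reading) - x) ** 2)  # negative MSE (higher is better)

      pop = rng.normal(0, 1, (60, N_W))
      for gen in range(300):
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[::-1][:20]]  # truncation selection
          children = []
          while len(children) < len(pop) - len(parents):
              pa, pb = parents[rng.integers(0, 20, 2)]
              mask = rng.random(N_W) < 0.5              # uniform crossover
              children.append(np.where(mask, pa, pb) + rng.normal(0, 0.05, N_W))  # mutation
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print("residual nonlinearity (MSE):", -fitness(best))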

  16. The effects of noise on binocular rivalry waves: a stochastic neural field model

    KAUST Repository

    Webber, Matthew A

    2013-03-12

    We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave. © 2013 IOP Publishing Ltd and SISSA Medialab srl.

  17. Energy efficiency optimisation for distillation column using artificial neural network models

    International Nuclear Information System (INIS)

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort in model development and the large computation effort associated with mechanistic model computation. This issue can be addressed by using neural network models which can be quickly developed from process operation data. The computation time in neural network model evaluation is very short making them ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying products qualities constraints. Applications to binary systems of methanol-water and benzene-toluene separations culminate in a reduction of utility consumption of 8.2% and 28.2% respectively. Application to multi-component separation columns also demonstrate the effectiveness of the proposed method with a 32.4% improvement in the exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural network offers improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption by 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
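
    The bootstrap aggregation step can be sketched as follows: several single-hidden-layer networks are trained on bootstrap resamples of the operating data and their predictions are averaged. The toy "process" and all sizes below are assumptions; the actual models in the paper map simulated column operating data to exergy efficiency and product compositions.

      import numpy as np

      # Bootstrap-aggregated ("bagged") neural networks: each net sees a different
      # resample of the data; the ensemble prediction is the mean of the members.

      rng = np.random.default_rng(3)

      X = rng.uniform(-1, 1, (300, 2))                       # two operating variables
      y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)

      def train_net(Xb, yb, hidden=10, lr=0.05, epochs=1500):
          W1 = rng.normal(0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
          W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
          for _ in range(epochs):
              h = np.tanh(Xb @ W1 + b1)
              err = (h @ W2 + b2)[:, 0] - yb
              g2 = err[:, None] / len(yb)
              g1 = (g2 @ W2.T) * (1 - h ** 2)
              W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(0)
              W1 -= lr * Xb.T @ g1; b1 -= lr * g1.sum(0)
          return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2)[:, 0]

      # bag of networks, each trained on a bootstrap resample
      nets = []
      for _ in range(10):
          idx = rng.integers(0, len(X), len(X))
          nets.append(train_net(X[idx], y[idx]))

      Xtest = rng.uniform(-1, 1, (50, 2))
      pred = np.mean([net(Xtest) for net in nets], axis=0)   # aggregated prediction
      print(pred[:5])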

  18. Application of neural networks to seismic active control

    International Nuclear Information System (INIS)

    Tang, Yu.

    1995-01-01

    An exploratory study on seismic active control using an artificial neural network (ANN) is presented in which a single-degree-of-freedom (SDF) structural system is controlled by a trained neural network. A feed-forward neural network and the backpropagation training method are used in the study. In backpropagation training, the learning rate is determined by ensuring the decrease of the error function at each training cycle. The training patterns for the neural net are generated randomly. Then, the trained ANN is used to compute the control force according to the control algorithm. The control strategy proposed herein is to apply the control force at every time step to destroy the build-up of the system response. The ground motions considered in the simulations are the N21E and N69W components of the Lake Hughes No. 12 record that occurred in the San Fernando Valley in California on February 9, 1971. Significant reduction of the structural response by one order of magnitude is observed. Also, it is shown that the proposed control strategy has the ability to reduce the peak that occurs during the first few cycles of the time history. These promising results assert the potential of applying ANNs to active structural control under seismic loads.

  19. Neural network based model of an industrial oil ...

    African Journals Online (AJOL)

    eobe

    Keywords: Neural Network Model, Regression, Mean Square Error, PID controller. A two-layer feed-forward neural network, implemented in Matlab, was used to carry out simulation studies of the model.

  20. Self-reported empathy and neural activity during action imitation and observation in schizophrenia.

    Science.gov (United States)

    Horan, William P; Iacoboni, Marco; Cross, Katy A; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K; Green, Michael F

    2014-01-01

    Although social cognitive impairments are key determinants of functional outcome in schizophrenia, their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and their correlates with self-reported empathy. 23 schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, or simply observed finger movements and facial emotional expressions. Between-group activation differences, as well as relationships between activation and self-reported empathy, were evaluated. Both patients and controls similarly activated neural systems previously associated with these tasks. We found no significant between-group differences in task-related activations. There were, however, between-group differences in the correlation between self-reported empathy and right inferior frontal (pars opercularis) activity during observation of facial emotional expressions. As in previous studies, controls demonstrated a positive association between brain activity and empathy scores. In contrast, the pattern in the patient group reflected a negative association between brain activity and empathy. Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self-perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest that patients show a disjunction between automatic neural responses to low level social cues and higher level, integrative social cognitive processes involved in self-perceived empathy.

  1. The fiber-optic imaging and manipulation of neural activity during animal behavior.

    Science.gov (United States)

    Miyamoto, Daisuke; Murayama, Masanori

    2016-02-01

    Recent progress with optogenetic probes for imaging and manipulating neural activity has further increased the relevance of fiber-optic systems for neural circuitry research. Optical fibers, which bi-directionally transmit light between separate sites (even at a distance of several meters), can be used for either optical imaging or manipulating neural activity relevant to behavioral circuitry mechanisms. The method's flexibility and the specifications of the light structure are well suited for following the behavior of freely moving animals. Furthermore, thin optical fibers allow researchers to monitor neural activity from not only the cortical surface but also deep brain regions, including the hippocampus and amygdala. Such regions are difficult to target with two-photon microscopes. Optogenetic manipulation of neural activity with an optical fiber has the advantage of being selective for both cell-types and projections as compared to conventional electrophysiological brain tissue stimulation. It is difficult to extract any data regarding changes in neural activity solely from a fiber-optic manipulation device; however, the readout of data is made possible by combining manipulation with electrophysiological recording, or the simultaneous application of optical imaging and manipulation using a bundle-fiber. The present review introduces recent progress in fiber-optic imaging and manipulation methods, while also discussing fiber-optic system designs that are suitable for a given experimental protocol. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  2. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  3. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants-heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  4. Model of Cholera Forecasting Using Artificial Neural Network in Chabahar City, Iran

    Directory of Open Access Journals (Sweden)

    Zahra Pezeshki

    2016-02-01

    Full Text Available Background: Cholera remains an endemic health issue in Iran despite a decrease in incidence. Since forecasting epidemic diseases supports appropriate preventive actions against disease spread, different forecasting methods, including artificial neural networks, have been developed to study the parameters involved in the incidence and spread of epidemic diseases such as cholera. Objectives: In this study, cholera in the rural area of Chabahar, Iran was investigated to obtain a proper forecasting model. Materials and Methods: Cholera data were gathered from 465 villages, of which 104 reported cholera during the ten-year study period. Logistic regression modeling and bivariate correlation were used to determine risk factors and to arrive at a possible predictive model. A one-hidden-layer perceptron neural network with a backpropagation training algorithm and the sigmoid activation function was trained and tested on the two groups of infected and non-infected villages after preprocessing. The ROC diagram was used to determine the validity of the predictions. The study variables included climate conditions and geographical parameters. Results: After determining the significant variables of cholera incidence, the described artificial neural network model was capable of forecasting cholera events among the villages of the test group with accuracy up to 80%. The highest accuracy was achieved when the model was trained with the variables that were significant in the statistical analysis, indicating that the two methods confirm each other's results. Conclusions: Application of artificial neural networks assists in forecasting cholera so that protective measures can be adopted. For a more accurate prediction, comprehensive information is required, including data on hygienic, social and demographic parameters.
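
    A hedged sketch of the modelling recipe described here: a one-hidden-layer perceptron with sigmoid units, trained by backpropagation and scored with the area under the ROC curve. Synthetic village-level features stand in for the study's climate and geography data; all settings are assumptions.

      import numpy as np

      # One-hidden-layer sigmoid classifier trained by backpropagation, evaluated
      # with the ROC area computed from the rank-sum (Mann-Whitney) formulation.

      rng = np.random.default_rng(4)
      n = 400
      X = rng.normal(0, 1, (n, 3))                         # e.g. rainfall, temperature, elevation
      p_true = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 2])))
      y = (rng.random(n) < p_true).astype(float)           # 1 = village reported cholera

      def sigmoid(z):
          return 1 / (1 + np.exp(-z))

      W1 = rng.normal(0, 0.5, (3, 6)); b1 = np.zeros(6)
      W2 = rng.normal(0, 0.5, (6, 1)); b2 = np.zeros(1)
      lr = 0.1
      for _ in range(3000):
          h = sigmoid(X @ W1 + b1)
          p = sigmoid(h @ W2 + b2)[:, 0]
          g_out = (p - y)[:, None] / n                     # cross-entropy gradient at the output
          g_hid = (g_out @ W2.T) * h * (1 - h)
          W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
          W1 -= lr * X.T @ g_hid; b1 -= lr * g_hid.sum(0)

      # ROC area via the rank-sum formulation
      scores = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)[:, 0]
      ranks = scores.argsort().argsort() + 1
      n_pos, n_neg = y.sum(), (1 - y).sum()
      auc = (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
      print("training-set ROC AUC:", round(float(auc), 3))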

  5. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  6. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment.

    Science.gov (United States)

    Berkes, Pietro; Orbán, Gergo; Lengyel, Máté; Fiser, József

    2011-01-07

    The brain maintains internal models of its environment to interpret sensory inputs and to prepare actions. Although behavioral studies have demonstrated that these internal models are optimally adapted to the statistics of the environment, the neural underpinning of this adaptation is unknown. Using a Bayesian model of sensory cortical processing, we related stimulus-evoked and spontaneous neural activities to inferences and prior expectations in an internal model and predicted that they should match if the model is statistically optimal. To test this prediction, we analyzed visual cortical activity of awake ferrets during development. Similarity between spontaneous and evoked activities increased with age and was specific to responses evoked by natural scenes. This demonstrates the progressive adaptation of internal models to the statistics of natural stimuli at the neural level.

  7. Neural Activity Reveals Preferences Without Choices

    Science.gov (United States)

    Smith, Alec; Bernheim, B. Douglas; Camerer, Colin

    2014-01-01

    We investigate the feasibility of inferring the choices people would make (if given the opportunity) based on their neural responses to the pertinent prospects when they are not engaged in actual decision making. The ability to make such inferences is of potential value when choice data are unavailable, or limited in ways that render standard methods of estimating choice mappings problematic. We formulate prediction models relating choices to “non-choice” neural responses and use them to predict out-of-sample choices for new items and for new groups of individuals. The predictions are sufficiently accurate to establish the feasibility of our approach. PMID:25729468

  8. A neural network model of ventriloquism effect and aftereffect.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with the visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.

  9. Adaptive control using neural networks and approximate models.

    Science.gov (United States)

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
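
    The practical appeal of models that are linear in the control input can be shown with the NARMA-L2 form y(k+1) = f(phi) + g(phi)*u(k): the control follows algebraically rather than from a nonlinear optimization. In the sketch below, f and g are arbitrary stand-ins (in practice they would be two trained networks), and the plant is assumed to coincide with the model.

      import numpy as np

      # NARMA-L2-style control sketch: because u(k) enters the model linearly, the
      # control driving the one-step prediction to the reference is obtained in
      # closed form. f and g below are hypothetical stand-ins for trained networks.

      def f(phi):                        # assumed "free response" part
          return 0.7 * phi[0] - 0.2 * phi[1] + 0.1 * np.tanh(phi[2])

      def g(phi):                        # assumed input gain (kept away from zero)
          return 1.0 + 0.3 * np.tanh(phi[0])

      def narma_l2_control(phi, r):
          """Control input that makes the one-step-ahead prediction equal the reference."""
          return (r - f(phi)) / g(phi)

      # closed-loop simulation against the model itself
      y = [0.0, 0.0]; u_prev = 0.0
      reference = 1.0
      for k in range(20):
          phi = np.array([y[-1], y[-2], u_prev])
          u = narma_l2_control(phi, reference)
          y.append(f(phi) + g(phi) * u)  # plant assumed identical to the model here
          u_prev = u
      print("closed-loop output (last 5 steps):", [round(v, 3) for v in y[-5:]])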

  10. Advanced models of neural networks nonlinear dynamics and stochasticity in biological neurons

    CERN Document Server

    Rigatos, Gerasimos G

    2015-01-01

    This book provides a complete study on neural structures exhibiting nonlinear and stochastic dynamics, elaborating on neural dynamics by introducing advanced models of neural networks. It overviews the main findings in the modelling of neural dynamics in terms of electrical circuits and examines their stability properties with the use of dynamical systems theory. It is suitable for researchers and postgraduate students engaged with neural networks and dynamical systems theory.

  11. Spike neural models (part I): The Hodgkin-Huxley model

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2017-05-01

    Full Text Available Artificial neural networks, or ANNs, have grown a lot since their inception back in the 1940s. But no matter the changes, one of the most important components of neural networks is still the node, which represents the neuron. Within spiking neural networks, the node is especially important because it contains the functions and properties of neurons that are necessary for their network. One important aspect of neurons is the ionic flow which produces action potentials, or spikes. Forces of diffusion and electrostatic pressure work together with the physical properties of the cell to move ions around, changing the cell membrane potential, which ultimately produces the action potential. This tutorial reviews the Hodgkin-Huxley model and shows how it simulates the ionic flow of the giant squid axon via four differential equations. The model is implemented in Matlab using Euler's method to approximate the differential equations. By using Euler's method, an extra parameter is created, the time step. This new parameter needs to be carefully considered or the results of the node may be impaired.
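
    For reference, here is a compact Euler-method implementation of the four Hodgkin-Huxley equations in Python (the tutorial itself uses Matlab). The parameters are the standard squid-axon values; the injected current step is an arbitrary choice for demonstration.

      import numpy as np

      # Hodgkin-Huxley membrane model integrated with the Euler method.
      # Standard squid-axon parameters; units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2.

      def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
      def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
      def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
      def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))
      def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
      def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)

      C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
      E_Na, E_K, E_L = 50.0, -77.0, -54.387

      dt, T = 0.01, 50.0                                    # the Euler time step matters!
      steps = int(T / dt)
      V = -65.0
      m = alpha_m(V) / (alpha_m(V) + beta_m(V))             # start gates at steady state
      h = alpha_h(V) / (alpha_h(V) + beta_h(V))
      n = alpha_n(V) / (alpha_n(V) + beta_n(V))

      trace = []
      for k in range(steps):
          I_ext = 10.0 if k * dt > 5.0 else 0.0             # current step at t = 5 ms
          I_ion = (g_Na * m ** 3 * h * (V - E_Na)
                   + g_K * n ** 4 * (V - E_K)
                   + g_L * (V - E_L))
          V += dt * (I_ext - I_ion) / C_m                   # Euler update of the four ODEs
          m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
          h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
          n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
          trace.append(V)

      print("peak membrane potential (mV):", round(max(trace), 1))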

  12. Simultaneous surface and depth neural activity recording with graphene transistor-based dual-modality probes.

    Science.gov (United States)

    Du, Mingde; Xu, Xianchen; Yang, Long; Guo, Yichuan; Guan, Shouliang; Shi, Jidong; Wang, Jinfen; Fang, Ying

    2018-05-15

    Subdural surface and penetrating depth probes are widely applied to record neural activities from the cortical surface and intracortical locations of the brain, respectively. Simultaneous surface and depth neural activity recording is essential to understand the linkage between the two modalities. Here, we develop flexible dual-modality neural probes based on graphene transistors. The neural probes exhibit stable electrical performance even under 90° bending because of the excellent mechanical properties of graphene, and thus allow multi-site recording from the subdural surface of rat cortex. In addition, finite element analysis was carried out to investigate the mechanical interactions between probe and cortex tissue during intracortical implantation. Based on the simulation results, a sharp tip angle of π/6 was chosen to facilitate tissue penetration of the neural probes. Accordingly, the graphene transistor-based dual-modality neural probes have been successfully applied for simultaneous surface and depth recording of epileptiform activity of rat brain in vivo. Our results show that graphene transistor-based dual-modality neural probes can serve as a facile and versatile tool to study tempo-spatial patterns of neural activities. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error for the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  14. Neural markers of loss aversion in resting-state brain activity.

    Science.gov (United States)

    Canessa, Nicola; Crespi, Chiara; Baud-Bovy, Gabriel; Dodich, Alessandra; Falini, Andrea; Antonellis, Giulia; Cappa, Stefano F

    2017-02-01

    Neural responses in striatal, limbic and somatosensory brain regions track individual differences in loss aversion, i.e. the higher sensitivity to potential losses compared with equivalent gains in decision-making under risk. The engagement of structures involved in the processing of aversive stimuli and experiences raises a further question, i.e. whether the tendency to avoid losses rather than acquire gains represents a transient fearful overreaction elicited by choice-related information, or rather a stable component of one's own preference function, reflecting a specific pattern of neural activity. We tested the latter hypothesis by assessing in 57 healthy human subjects whether the relationship between behavioral and neural loss aversion holds at rest, i.e. when the BOLD signal is collected during 5 minutes of cross-fixation in the absence of an explicit task. Within the resting-state networks highlighted by a spatial group Independent Component Analysis (gICA), we found a significant correlation between strength of activity and behavioral loss aversion in the left ventral striatum and right posterior insula/supramarginal gyrus, i.e. the very same regions displaying a pattern of neural loss aversion during explicit choices. Cross-study analyses confirmed that this correlation holds when voxels identified by gICA are used as regions of interest in task-related activity and vice versa. These results suggest that the individual degree of (neural) loss aversion represents a stable dimension of decision-making, which is reflected in specific metrics of intrinsic brain activity at rest, possibly modulating cortical excitability at choice. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Models of neural networks temporal aspects of coding and information processing in biological systems

    CERN Document Server

    Hemmen, J; Schulten, Klaus

    1994-01-01

    Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregation.

  16. Plasmodium berghei ANKA: erythropoietin activates neural stem cells in an experimental cerebral malaria model

    DEFF Research Database (Denmark)

    Core, Andrew; Hempel, Casper; Kurtzhals, Jørgen A L

    2011-01-01

    We investigated whether EPO's neuroprotective effects include activation of endogenous neural stem cells (NSC). By using immunohistochemical markers of different NSC maturation stages, we show that EPO increased the number of nestin(+) cells in the dentate gyrus and in the sub-ventricular zone of the lateral ventricles.

  17. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model

    Science.gov (United States)

    Panda, Priyadarshini; Srinivasa, Narayan

    2018-01-01

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature action/movements enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962

  18. Stability of a neural predictive controller scheme on a neural model

    DEFF Research Database (Denmark)

    Luther, Jim Benjamin; Sørensen, Paul Haase

    2009-01-01

    In previous works presenting various forms of neural-network-based predictive controllers, the main emphasis has been on the implementation aspects, i.e. the development of a robust optimization algorithm for the controller, which will be able to perform in real time. However, the stability issue has not been addressed specifically for these controllers. On the other hand, a number of results concerning the stability of receding horizon controllers on a nonlinear system exist. In this paper we present a proof of stability for a predictive controller controlling a neural network model... The resulting controller is tested on a nonlinear pneumatic servo system.

  19. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
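
    As a rough illustration of the instantaneous-linearization idea described above, the sketch below linearizes a toy one-step neural plant model around the current operating point and solves the resulting one-step quadratic problem in closed form; the network weights, cost weights, horizon of one step, and input bounds are placeholders, not the paper's formulation.

```python
# Sketch: instantaneous linearization of a neural plant model for predictive control.
# A one-step quadratic cost is minimized for the control move, then clipped to bounds.
import numpy as np

def nn_plant(y_prev, u_prev, W1, b1, W2, b2):
    """Hypothetical one-step-ahead neural model y_k = f(y_{k-1}, u_{k-1})."""
    h = np.tanh(W1 @ np.array([y_prev, u_prev]) + b1)
    return float(W2 @ h + b2)

rng = np.random.default_rng(8)
W1, b1 = rng.normal(0, 0.5, (6, 2)), rng.normal(0, 0.1, 6)
W2, b2 = rng.normal(0, 0.5, (1, 6)), rng.normal(0, 0.1, 1)

def control_move(y_now, u_now, y_ref, u_min=-1.0, u_max=1.0, lam=0.1, eps=1e-4):
    # Finite-difference linearization around the operating point: y ~ y0 + b*(u - u_now)
    y0 = nn_plant(y_now, u_now, W1, b1, W2, b2)
    b = (nn_plant(y_now, u_now + eps, W1, b1, W2, b2) - y0) / eps
    # Minimize (y_ref - y)^2 + lam*du^2 -> closed-form quadratic solution, then clip
    du = b * (y_ref - y0) / (b ** 2 + lam)
    return float(np.clip(u_now + du, u_min, u_max))

print(control_move(y_now=0.2, u_now=0.0, y_ref=1.0))
```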

  20. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.

  1. Discriminative training of self-structuring hidden control neural models

    DEFF Research Database (Denmark)

    Sørensen, Helge Bjarup Dissing; Hartmann, Uwe; Hunnerup, Preben

    1995-01-01

    This paper presents a new training algorithm for self-structuring hidden control neural (SHC) models. The SHC models were trained non-discriminatively for speech recognition applications. Better recognition performance can generally be achieved, if discriminative training is applied instead. Thus we developed a discriminative training algorithm for SHC models, where each SHC model for a specific speech pattern is trained with utterances of the pattern to be recognized and with other utterances. The discriminative training of SHC neural models has been tested on the TIDIGITS database...

  2. Modeling of steam generator in nuclear power plant using neural network ensemble

    International Nuclear Information System (INIS)

    Lee, S. K.; Lee, E. C.; Jang, J. W.

    2003-01-01

    Neural networks are now being used in modeling the steam generator, which is known to be difficult to model due to its reverse dynamics. However, neural networks are prone to the problem of overfitting. This paper investigates the use of neural network combining methods to model steam generator water level and compares them with a single neural network. The results show that the neural network ensemble is an effective tool which can offer improved generalization, lower dependence on the training set and reduced training time.
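
    A minimal sketch of the combining idea: several networks are trained on bootstrap resamples and their predictions are averaged. The data, network sizes, and combination rule below are illustrative assumptions, not the authors' setup.

```python
# Illustrative neural network ensemble for regression; synthetic data stands in
# for steam-generator water-level records.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))             # placeholder process variables
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] - X[:, 2] ** 2 + rng.normal(0, 0.05, 500)

members = []
for seed in range(5):
    idx = rng.integers(0, len(X), len(X))          # bootstrap resample
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
    net.fit(X[idx], y[idx])
    members.append(net)

def ensemble_predict(X_new):
    """Average the member predictions (simple unweighted combination)."""
    return np.mean([m.predict(X_new) for m in members], axis=0)

print(ensemble_predict(X[:5]))
```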

  3. A neural network model of lateralization during letter identification.

    Science.gov (United States)

    Shevtsova, N; Reggia, J A

    1999-03-01

    The causes of cerebral lateralization of cognitive and other functions are currently not well understood. To investigate one aspect of function lateralization, a bihemispheric neural network model for a simple visual identification task was developed that has two parallel interacting paths of information processing. The model is based on commonly accepted concepts concerning neural connectivity, activity dynamics, and synaptic plasticity. A combination of both unsupervised (Hebbian) and supervised (Widrow-Hoff) learning rules is used to train the model to identify a small set of letters presented as input stimuli in the left visual hemifield, in the central position, and in the right visual hemifield. Each visual hemifield projects onto the contralateral hemisphere, and the two hemispheres interact via a simulated corpus callosum. The contribution of each individual hemisphere to the process of input stimuli identification was studied for a variety of underlying asymmetries. The results indicate that multiple asymmetries may cause lateralization. Lateralization occurred toward the side having larger size, higher excitability, or higher learning rate parameters. It appeared more intensively with strong inhibitory callosal connections, supporting the hypothesis that the corpus callosum plays a functionally inhibitory role. The model demonstrates clearly the dependence of lateralization on different hemisphere parameters and suggests that computational models can be useful in better understanding the mechanisms underlying emergence of lateralization.
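
    The combination of unsupervised and supervised rules can be illustrated with a single weight update that mixes a Hebbian term and a Widrow-Hoff (delta) term; the layer sizes and learning rates below are arbitrary, and this is not the bihemispheric model itself.

```python
# Sketch of a combined Hebbian + Widrow-Hoff weight update.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 8, 4
W = rng.normal(0, 0.1, size=(n_out, n_in))
eta_hebb, eta_wh = 0.01, 0.05              # illustrative learning rates

def update(W, x, target):
    y = np.tanh(W @ x)                               # unit activations
    dW_hebb = eta_hebb * np.outer(y, x)              # Hebbian term: co-activity
    dW_wh = eta_wh * np.outer(target - y, x)         # Widrow-Hoff term: error-driven
    return W + dW_hebb + dW_wh

x = rng.normal(size=n_in)
t = np.array([1.0, -1.0, 1.0, -1.0])
W = update(W, x, t)
```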

  4. Activity patterns of cultured neural networks on micro electrode arrays

    NARCIS (Netherlands)

    Rutten, Wim; van Pelt, J.

    2001-01-01

    A hybrid neuro-electronic interface is a cell-cultured micro electrode array, acting as a neural information transducer for stimulation and/or recording of neural activity in the brain or the spinal cord (ventral motor region or dorsal sensory region). It consists of an array of micro electrodes on

  5. Modeling of methane emissions using artificial neural network approach

    Directory of Open Access Journals (Sweden)

    Stamenković Lidija J.

    2015-01-01

    Full Text Available The aim of this study was to develop a model for forecasting CH4 emissions at the national level, using Artificial Neural Networks (ANN) with broadly available sustainability, economic and industrial indicators as their inputs. ANN modeling was performed using two different types of architecture: a Backpropagation Neural Network (BPNN) and a General Regression Neural Network (GRNN). A conventional multiple linear regression (MLR) model was also developed in order to compare model performance and assess which model provides the best results. ANN and MLR models were developed and tested using the same annual data for 20 European countries. The ANN model demonstrated very good performance, significantly better than the MLR model. It was shown that a forecast of CH4 emissions at the national level using the ANN model can be made successfully and accurately for a future period of up to two years, thereby opening the possibility to apply such a modeling technique which can be used to support the implementation of sustainable development strategies and environmental management policies. [Project of the Ministry of Science of the Republic of Serbia, No. 172007]
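
    The kind of ANN-versus-MLR comparison described here can be sketched with scikit-learn on synthetic indicator data; the indicators, target relation, and network size are placeholders, not the study's data or configuration.

```python
# Sketch comparing a multilayer perceptron with multiple linear regression on
# synthetic annual indicator data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 6))               # placeholder indicators
y = 2 * X[:, 0] + np.exp(X[:, 1]) - 3 * X[:, 2] * X[:, 3] + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

print("MLR R2:", r2_score(y_te, mlr.predict(X_te)))
print("ANN R2:", r2_score(y_te, ann.predict(X_te)))
```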

  6. Adaptive Neural-Sliding Mode Control of Active Suspension System for Camera Stabilization

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-01-01

    Full Text Available The camera always suffers from image instability on the moving vehicle due to the unintentional vibrations caused by road roughness. This paper presents a novel adaptive neural network-based sliding mode control strategy to stabilize the captured image area of the camera. The purpose is to suppress the vertical displacement of the sprung mass through the application of an active suspension system. Since the active suspension system has nonlinear and time-varying characteristics, an adaptive neural network (ANN) is proposed to make the controller robust against systematic uncertainties, which relaxes the model-based requirement of sliding mode control, and the weighting matrix is adjusted online according to a Lyapunov function. The control system consists of two loops. The outer loop is a position controller designed with a sliding mode strategy, while the PID controller in the inner loop tracks the desired force. The closed-loop stability and asymptotic convergence performance can be guaranteed on the basis of the Lyapunov stability theory. Finally, the simulation results show that the employed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.

  7. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
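
    A sketch of a sigma-modification-type parameter update for an RBF compensator, of the general form used in such adaptive laws; the centres, gains, time step, and the way the tracking error is obtained are illustrative assumptions, not the paper's design.

```python
# Sigma-modification update law: W_dot = gamma * (e * phi(x) - sigma * W).
import numpy as np

centers = np.linspace(-2, 2, 9)          # RBF centres (placeholder)
width = 0.5

def phi(x):
    """Gaussian RBF regressor vector."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

gamma, sigma, dt = 5.0, 0.1, 0.001
W = np.zeros_like(centers)

def adapt(W, x, e):
    """One Euler step of the sigma-modification law (e: tracking error)."""
    W_dot = gamma * (e * phi(x) - sigma * W)
    return W + dt * W_dot

W = adapt(W, x=0.3, e=0.05)              # example adaptation step
```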

  8. A Quantum Implementation Model for Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ammar Daskin

    2018-02-01

    Full Text Available The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speedups over the conventional algorithms for the eigenvalue-related problems. Combining the quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow–Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms. Quanta 2018; 7: 7–18.

  9. Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language

    Science.gov (United States)

    Tanadi, Theo

    2018-03-01

    Part-of-speech tagging (POS tagging) is an important part of natural language processing. Many methods have been used for this task, including neural networks. This paper models a neural network that attempts to do POS tagging. A time series neural network is modelled to solve the problems that a basic neural network faces when attempting to do POS tagging. In order to enable the neural network to take text data as input, the text data is first clustered using Brown Clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tag, suffix, and affix of previous words are also fed to the neural network.

  10. Similar patterns of neural activity predict memory function during encoding and retrieval.

    Science.gov (United States)

    Kragel, James E; Ezzyat, Youssef; Sperling, Michael R; Gorniak, Richard; Worrell, Gregory A; Berry, Brent M; Inman, Cory; Lin, Jui-Jui; Davis, Kathryn A; Das, Sandhitsu R; Stein, Joel M; Jobst, Barbara C; Zaghloul, Kareem A; Sheth, Sameer A; Rizzuto, Daniel S; Kahana, Michael J

    2017-07-15

    Neural networks that span the medial temporal lobe (MTL), prefrontal cortex, and posterior cortical regions are essential to episodic memory function in humans. Encoding and retrieval are supported by the engagement of both distinct neural pathways across the cortex and common structures within the medial temporal lobes. However, the degree to which memory performance can be determined by neural processing that is common to encoding and retrieval remains to be determined. To identify neural signatures of successful memory function, we administered a delayed free-recall task to 187 neurosurgical patients implanted with subdural or intraparenchymal depth electrodes. We developed multivariate classifiers to identify patterns of spectral power across the brain that independently predicted successful episodic encoding and retrieval. During encoding and retrieval, patterns of increased high frequency activity in prefrontal, MTL, and inferior parietal cortices, accompanied by widespread decreases in low frequency power across the brain predicted successful memory function. Using a cross-decoding approach, we demonstrate the ability to predict memory function across distinct phases of the free-recall task. Furthermore, we demonstrate that classifiers that combine information from both encoding and retrieval states can outperform task-independent models. These findings suggest that the engagement of a core memory network during either encoding or retrieval shapes the ability to remember the past, despite distinct neural interactions that facilitate encoding and retrieval. Copyright © 2017 Elsevier Inc. All rights reserved.
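
    A toy version of such a multivariate classifier is regularized logistic regression on spectral-power features predicting recall success; the features and labels below are synthetic placeholders, not patient recordings, and the feature layout is only assumed for illustration.

```python
# Sketch: logistic-regression classifier on (placeholder) spectral power features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_events, n_features = 400, 64            # e.g. electrodes x frequency bands, flattened
X = rng.normal(size=(n_events, n_features))
w_true = rng.normal(size=n_features)
y = (X @ w_true + rng.normal(0, 2, n_events)) > 0   # recalled vs. not recalled

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```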

  11. Generalized activity equations for spiking neural network dynamics

    Directory of Open Access Journals (Sweden)

    Michael A Buice

    2013-11-01

    Full Text Available Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales - the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.

  12. The gamma model : a new neural network for temporal processing

    NARCIS (Netherlands)

    Vries, de B.

    1992-01-01

    In this paper we develop the gamma neural model, a new neural net architecture for processing of temporal patterns. Time varying patterns are normally segmented into a sequence of static patterns that are successively presented to a neural net. In the approach presented here segmentation is avoided.

  13. SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.

    Science.gov (United States)

    Jimenez-Romero, Cristian; Johnson, Jeffrey

    2017-01-01

    The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-and-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. The accomplishment of the above-mentioned tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and the programming environment Netlogo (educational software that simplifies the study and experimentation of complex systems). The engine proposed and implemented in Netlogo for the simulation of a functional model of SNN is a simplification of integrate and fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
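
    For orientation, a stand-alone leaky integrate-and-fire neuron of the simplified kind such engines build on can be written in a few lines; the constants are arbitrary, and this is not the SpikingLab/NetLogo code.

```python
# Minimal leaky integrate-and-fire neuron driven by a constant input current.
dt, T = 0.1, 200.0                  # ms
tau_m, v_rest, v_thresh, v_reset = 10.0, -70.0, -54.0, -70.0
R_m, I_ext = 10.0, 1.8              # membrane resistance (MOhm), input current (nA)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    dv = (-(v - v_rest) + R_m * I_ext) / tau_m
    v += dt * dv
    if v >= v_thresh:               # threshold crossing -> emit spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {T} ms")
```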

  14. Modeling and Speed Control of Induction Motor Drives Using Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Jamuna

    2010-08-01

    Full Text Available Speed control of induction motor drives using neural networks is presented. The mathematical model of a single-phase induction motor is developed. A new Simulink model for a neural-network-controlled, bidirectional chopper-fed single-phase induction motor is proposed. Under normal operation, the true drive parameters are identified in real time and converted into the controller parameters through multilayer forward computation by neural networks. A comparative study has been made between the conventional and neural network controllers. It is observed that the neural-network-controlled drive system has better dynamic performance, reduced overshoot and faster transient response than the conventionally controlled system.

  15. Modeling the electrode-neuron interface of cochlear implants: effects of neural survival, electrode placement, and the partial tripolar configuration.

    Science.gov (United States)

    Goldwyn, Joshua H; Bierer, Steven M; Bierer, Julie Arenberg

    2010-09-01

    The partial tripolar electrode configuration is a relatively novel stimulation strategy that can generate more spatially focused electric fields than the commonly used monopolar configuration. Focused stimulation strategies should improve spectral resolution in cochlear implant users, but may also be more sensitive to local irregularities in the electrode-neuron interface. In this study, we develop a practical computer model of cochlear implant stimulation that can simulate neural activation in a simplified cochlear geometry and we relate the resulting patterns of neural activity to basic psychophysical measures. We examine how two types of local irregularities in the electrode-neuron interface, variations in spiral ganglion nerve density and electrode position within the scala tympani, affect the simulated neural activation patterns and how these patterns change with electrode configuration. The model shows that higher partial tripolar fractions activate more spatially restricted populations of neurons at all current levels and require higher current levels to excite a given number of neurons. We find that threshold levels are more sensitive at high partial tripolar fractions to both types of irregularities, but these effects are not independent. In particular, at close electrode-neuron distances, activation is typically more spatially localized which leads to a greater influence of neural dead regions. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  16. A model of interval timing by neural integration.

    Science.gov (United States)

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
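
    The integration-to-bound idea can be illustrated by accumulating the difference of pooled excitatory and inhibitory Poisson counts until a threshold is crossed; the first-passage time then plays the role of the timed interval. The rates and threshold below are illustrative, not fitted to any behavioral data.

```python
# Sketch: Poisson integration to a bound as a toy interval-timing mechanism.
import numpy as np

rng = np.random.default_rng(4)
dt = 0.001                                 # s
rate_exc, rate_inh = 1200.0, 1000.0        # pooled rates (Hz); slight net upward drift
threshold = 100.0

def one_trial():
    x, t = 0.0, 0.0
    while x < threshold:
        x += rng.poisson(rate_exc * dt) - rng.poisson(rate_inh * dt)
        t += dt
    return t

times = np.array([one_trial() for _ in range(500)])
print("mean timed interval:", times.mean(), "s; CV:", times.std() / times.mean())
```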

  17. A neural network model of ventriloquism effect and aftereffect.

    Directory of Open Access Journals (Sweden)

    Elisa Magosso

    Full Text Available Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: (i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; (ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; (iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.

  18. Altered Neural Activity Associated with Mindfulness during Nociception: A Systematic Review of Functional MRI

    Directory of Open Access Journals (Sweden)

    Elena Bilevicius

    2016-04-01

    Full Text Available Objective: To assess the neural activity associated with mindfulness-based alterations of pain perception. Methods: The Cochrane Central, EMBASE, Ovid Medline, PsycINFO, Scopus, and Web of Science databases were searched on 2 February 2016. Titles, abstracts, and full-text articles were independently screened by two reviewers. Data were independently extracted from records that included topics of functional neuroimaging, pain, and mindfulness interventions. Results: The literature search produced 946 total records, of which five met the inclusion criteria. Records reported pain in terms of anticipation (n = 2), unpleasantness (n = 5), and intensity (n = 5), and how mindfulness conditions altered the neural activity during noxious stimulation accordingly. Conclusions: Although the studies were inconsistent in relating pain components to neural activity, in general, mindfulness was able to reduce pain anticipation and unpleasantness ratings, as well as alter the corresponding neural activity. The major neural underpinnings of mindfulness-based pain reduction consisted of altered activity in the anterior cingulate cortex, insula, and dorsolateral prefrontal cortex.

  19. Altered Neural Activity Associated with Mindfulness during Nociception: A Systematic Review of Functional MRI.

    Science.gov (United States)

    Bilevicius, Elena; Kolesar, Tiffany A; Kornelsen, Jennifer

    2016-04-19

    To assess the neural activity associated with mindfulness-based alterations of pain perception. The Cochrane Central, EMBASE, Ovid Medline, PsycINFO, Scopus, and Web of Science databases were searched on 2 February 2016. Titles, abstracts, and full-text articles were independently screened by two reviewers. Data were independently extracted from records that included topics of functional neuroimaging, pain, and mindfulness interventions. The literature search produced 946 total records, of which five met the inclusion criteria. Records reported pain in terms of anticipation (n = 2), unpleasantness (n = 5), and intensity (n = 5), and how mindfulness conditions altered the neural activity during noxious stimulation accordingly. Although the studies were inconsistent in relating pain components to neural activity, in general, mindfulness was able to reduce pain anticipation and unpleasantness ratings, as well as alter the corresponding neural activity. The major neural underpinnings of mindfulness-based pain reduction consisted of altered activity in the anterior cingulate cortex, insula, and dorsolateral prefrontal cortex.

  20. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation.

    Science.gov (United States)

    Sameiro-Barbosa, Catia M; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system.

  1. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation

    Science.gov (United States)

    Sameiro-Barbosa, Catia M.; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system. PMID:27559306

  2. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    Science.gov (United States)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There have been many studies on gas turbine engine identification using dynamic neural network models. The identification process should minimize errors between the model and the real object, yet questions about the processing of the training data sets for such neural networks are usually overlooked. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signal - step, fast, slow and mixed - were used to create the training and testing data sets of the dynamic neural network models. Four dynamic neural networks were created from these types of training data sets, and each neural network was tested against the four types of test data set. As a result, 16 transition processes from the four neural networks and four test data sets were compared against the corresponding solutions of the thermodynamic model, and the error-value ranges for each test data set were determined. These ranges are shown to be small; therefore the influence of the data set type on identification accuracy is low.

  3. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique will free the neural network designer from guessing the size and structure for the required neural network model and will help to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations without the need for training. Nonlinear flight dynamic systems can be easily modeled by training their linearized models keeping the same network structure. The training is fast, as it uses the linear system knowledge to speed up the training process. The technique is tested on different flight/space dynamic models and showed promising results.
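
    The spirit of finding network weights by solving linear equations can be illustrated by fixing a hidden layer and solving for the output weights by least squares (an extreme-learning-machine-style simplification, with synthetic "dynamics" data; this is not the paper's construction).

```python
# Sketch: fit the output layer of a single-hidden-layer network by a linear solve.
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(300, 4))           # placeholder state/input samples
y = X @ np.array([0.5, -1.2, 0.3, 2.0])         # placeholder linear-dynamics target

W_h = rng.normal(size=(4, 20))                  # fixed random hidden layer
b_h = rng.normal(size=20)
H = np.tanh(X @ W_h + b_h)                      # hidden activations
H1 = np.hstack([H, np.ones((len(H), 1))])       # append bias column
w_out, *_ = np.linalg.lstsq(H1, y, rcond=None)  # output weights via least squares

print("max abs error:", np.abs(H1 @ w_out - y).max())
```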

  4. Optimal Hierarchical Modular Topologies for Producing Limited Sustained Activation of Neural Networks

    OpenAIRE

    Kaiser, Marcus; Hilgetag, Claus C.

    2010-01-01

    An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable regimes of network activation, typically arising from a limited parameter range. In this range of limited sustained activity (LSA), the activity of neural populations in the network persists between the extremes of either quickly dying out or activating the whole network. Hierarchical modular networks were previously found to show...

  5. Linking dynamic patterns of neural activity in orbitofrontal cortex with decision making.

    Science.gov (United States)

    Rich, Erin L; Stoll, Frederic M; Rudebeck, Peter H

    2018-04-01

    Humans and animals demonstrate extraordinary flexibility in choice behavior, particularly when deciding based on subjective preferences. We evaluate options on different scales, deliberate, and often change our minds. Little is known about the neural mechanisms that underlie these dynamic aspects of decision-making, although neural activity in orbitofrontal cortex (OFC) likely plays a central role. Recent evidence from studies in macaques shows that attention modulates value responses in OFC, and that ensembles of OFC neurons dynamically signal different options during choices. When contexts change, these ensembles flexibly remap to encode the new task. Determining how these dynamic patterns emerge and relate to choices will inform models of decision-making and OFC function. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Modeling of quasistatic magnetic hysteresis with feed-forward neural networks

    International Nuclear Information System (INIS)

    Makaveev, Dimitre; Dupre, Luc; De Wulf, Marc; Melkebeek, Jan

    2001-01-01

    A modeling technique for rate-independent (quasistatic) scalar magnetic hysteresis is presented, using neural networks. Based on the theory of dynamic systems and the wiping-out and congruency properties of the classical scalar Preisach hysteresis model, the choice of a feed-forward neural network model is motivated. The neural network input parameters at each time step are the corresponding magnetic field strength and memory state, thereby assuring accurate prediction of the change of magnetic induction. For rate-independent hysteresis, the current memory state can be determined by the last extreme magnetic field strength and induction values, kept in memory. The choice of a network training set is motivated and the performance of the network is illustrated for a test set not used during training. Very accurate prediction of both major and minor hysteresis loops is observed, proving that the neural network technique is suitable for hysteresis modeling. Copyright © 2001 American Institute of Physics.

  7. Modeling the Constitutive Relationship of Al–0.62Mg–0.73Si Alloy Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Ying Han

    2017-03-01

    Full Text Available In this work, the hot deformation behavior of 6A02 aluminum alloy was investigated by isothermal compression tests conducted in the temperature range of 683–783 K and strain-rate range of 0.001–1 s⁻¹. According to the obtained true stress–true strain curves, the constitutive relationship of the alloy was revealed by establishing the Arrhenius-type constitutive model and back-propagation (BP) neural network model. It is found that the flow characteristic of 6A02 aluminum alloy is closely related to deformation temperature and strain rate, and the true stress decreases with increasing temperatures and decreasing strain rates. The hot deformation activation energy is calculated to be 168.916 kJ mol⁻¹. The BP neural network model with one hidden layer and 20 neurons in the hidden layer is developed. The accuracy in prediction of the Arrhenius-type constitutive model and BP neural network model is evaluated by using statistical analysis methods. It is demonstrated that the BP neural network model has better performance in predicting the flow stress.

  8. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    Science.gov (United States)

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between

  9. SCYNet. Testing supersymmetric models at the LHC with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip; Belkner, Sebastian; Hamer, Matthias [Universitaet Bonn, Bonn (Germany); Dercks, Daniel [Universitaet Hamburg, Hamburg (Germany); Keller, Tim; Kraemer, Michael; Sarrazin, Bjoern; Schuette-Engel, Jan; Tattersall, Jamie [RWTH Aachen University, Institute for Theoretical Particle Physics and Cosmology, Aachen (Germany)

    2017-10-15

    SCYNet (SUSY Calculating Yield Net) is a tool for testing supersymmetric models against LHC data. It uses neural network regression for a fast evaluation of the profile likelihood ratio. Two neural network approaches have been developed: one network has been trained using the parameters of the 11-dimensional phenomenological Minimal Supersymmetric Standard Model (pMSSM-11) as an input and evaluates the corresponding profile likelihood ratio within milliseconds. It can thus be used in global pMSSM-11 fits without time penalty. In the second approach, the neural network has been trained using model-independent signature-related objects, such as energies and particle multiplicities, which were estimated from the parameters of a given new physics model. (orig.)

  10. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis

  11. Neural network modelling of antifungal activity of a series of oxazole derivatives based on in silico pharmacokinetic parameters

    Directory of Open Access Journals (Sweden)

    Kovačević Strahinja Z.

    2013-01-01

    Full Text Available In the present paper, the antifungal activity of a series of benzoxazole and oxazolo[4,5-b]pyridine derivatives was evaluated against Candida albicans by using quantitative structure-activity relationship chemometric methodology with an artificial neural network (ANN) regression approach. In vitro antifungal activity of the tested compounds was presented by minimum inhibitory concentration expressed as log(1/cMIC). In silico pharmacokinetic parameters related to absorption, distribution, metabolism and excretion (ADME) were calculated for all studied compounds by using PreADMET software. A feedforward back-propagation ANN with a gradient descent learning algorithm was applied for modelling of the relationship between ADME descriptors (blood-brain barrier penetration, plasma protein binding, Madin-Darby cell permeability and Caco-2 cell permeability) and experimental log(1/cMIC) values. A 4-6-1 ANN was developed with the optimum momentum and learning rates of 0.3 and 0.05, respectively. An excellent correlation between experimental antifungal activity and values predicted by the ANN was obtained with a correlation coefficient of 0.9536. [Project of the Ministry of Science of the Republic of Serbia, No. 172012 and No. 172014]
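
    A 4-6-1 feed-forward network with the quoted momentum (0.3) and learning rate (0.05) can be reproduced schematically with scikit-learn; the descriptor values and activities below are synthetic placeholders, not the published data set.

```python
# Sketch of a 4-6-1 feed-forward network trained by gradient descent with momentum.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(60, 4))     # placeholder ADME descriptors
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] * X[:, 3] + rng.normal(0, 0.05, 60)

net = MLPRegressor(hidden_layer_sizes=(6,), solver="sgd",
                   learning_rate_init=0.05, momentum=0.3,
                   max_iter=5000, random_state=0)
net.fit(X, y)
print("training R^2:", net.score(X, y))
```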

  12. A customizable stochastic state point process filter (SSPPF) for neural spiking activity.

    Science.gov (United States)

    Xin, Yao; Li, Will X Y; Min, Biao; Han, Yan; Cheung, Ray C C

    2013-01-01

    The Stochastic State Point Process Filter (SSPPF) is effective for adaptive signal processing. In particular, it has been successfully applied to neural signal coding/decoding in recent years. Recent work has proven its efficiency in non-parametric coefficient tracking when modeling the mammalian nervous system. However, the existing SSPPF has only been realized on commercial software platforms, which limits its computational capability. In this paper, the first hardware architecture of the SSPPF has been designed and successfully implemented on a field-programmable gate array (FPGA), providing a more efficient means for coefficient tracking in a well-established generalized Laguerre-Volterra model for mammalian hippocampal spiking activity research. By exploring the intrinsic parallelism of the FPGA, the proposed architecture is able to process matrices or vectors of arbitrary size, and is efficiently scalable. Experimental results show its superior performance compared to the software implementation, while maintaining the numerical precision. This architecture can also be potentially utilized in the future hippocampal cognitive neural prosthesis design.

  13. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  14. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ordóñez

    2016-01-01

    Full Text Available Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation.

  15. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.

  16. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
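
    A minimal convolutional-plus-LSTM architecture in the spirit of this framework can be sketched in PyTorch; the channel counts, layer sizes, and window length below are placeholders rather than the published configuration.

```python
# Sketch: 1D convolutions for feature extraction followed by an LSTM over time.
import torch
import torch.nn as nn

class ConvLSTMHAR(nn.Module):
    def __init__(self, n_channels=9, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, num_layers=2, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        z = self.conv(x)                   # (batch, 64, time)
        z = z.transpose(1, 2)              # (batch, time, 64) for the LSTM
        out, _ = self.lstm(z)
        return self.head(out[:, -1])       # classify from the last time step

model = ConvLSTMHAR()
dummy = torch.randn(8, 9, 128)             # 8 windows, 9 sensor channels, 128 samples
print(model(dummy).shape)                   # torch.Size([8, 6])
```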

  17. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  18. Neural Network Based Model of an Industrial Oil-Fired Boiler System ...

    African Journals Online (AJOL)

    A two-layer feed-forward neural network with Hyperbolic tangent sigmoid ... The neural network model when subjected to test, using the validation input data; ... Proportional Integral Derivative (PID) Controller is used to control the neural ...

  19. Cognon Neural Model Software Verification and Hardware Implementation Design

    Science.gov (United States)

    Haro Negre, Pau

    Little is known yet about how the brain can recognize arbitrary sensory patterns within milliseconds using neural spikes to communicate information between neurons. In a typical brain there are several layers of neurons, with each neuron axon connecting to ~10⁴ synapses of neurons in an adjacent layer. The information necessary for cognition is contained in these synapses, which strengthen during the learning phase in response to newly presented spike patterns. Continuing on the model proposed in "Models for Neural Spike Computation and Cognition" by David H. Staelin and Carl H. Staelin, this study seeks to understand cognition from an information theoretic perspective and develop potential models for artificial implementation of cognition based on neuronal models. To do so we focus on the mathematical properties and limitations of spike-based cognition consistent with existing neurological observations. We validate the cognon model through software simulation and develop concepts for an optical hardware implementation of a network of artificial neural cognons.

  20. Computational neural network regression model for Host based Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Gautam

    2016-09-01

    Full Text Available The current scenario of gathering and storing information in secure systems is challenging due to increasing cyber-attacks. There exist computational neural network techniques designed for intrusion detection systems, which provide security either to a single machine or to all machines in a network. In this paper, we have used two types of computational neural network models, namely, a Generalized Regression Neural Network (GRNN) model and a Multilayer Perceptron Neural Network (MPNN) model, for a Host-based Intrusion Detection System using log files generated by a single personal computer. The simulation results show the correctly classified percentage of the normal and abnormal (intrusion) classes using a confusion matrix. On the basis of the results and discussion, we found that the Host-based Intrusion System Model (HISM) significantly improved the detection accuracy while retaining a minimum false alarm rate.
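
    The GRNN prediction itself is a kernel-weighted average of training targets (Nadaraya-Watson form) and can be written compactly; the log-file feature extraction is not shown here, and the numeric features and labels below are placeholders.

```python
# Sketch of a GRNN prediction: Gaussian-kernel-weighted average of training targets.
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to training points
    w = np.exp(-d2 / (2 * sigma ** 2))            # kernel weights
    return np.sum(w * y_train) / (np.sum(w) + 1e-12)

rng = np.random.default_rng(6)
X_train = rng.normal(size=(100, 5))               # placeholder log-file features
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(float)   # 1 = intrusion, 0 = normal
x_new = rng.normal(size=5)
print("intrusion score:", grnn_predict(X_train, y_train, x_new))
```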

  1. Using c-Jun to identify fear extinction learning-specific patterns of neural activity that are affected by single prolonged stress.

    Science.gov (United States)

    Knox, Dayan; Stanfield, Briana R; Staib, Jennifer M; David, Nina P; DePietro, Thomas; Chamness, Marisa; Schneider, Elizabeth K; Keller, Samantha M; Lawless, Caroline

    2018-04-02

    Neural circuits via which stress leads to disruptions in fear extinction are often explored in animal stress models. Using the single prolonged stress (SPS) model of posttraumatic stress disorder and the immediate early gene (IEG) c-Fos as a measure of neural activity, we previously identified patterns of neural activity through which SPS disrupts extinction retention. However, none of these stress effects were specific to fear or extinction learning and memory. C-Jun is another IEG that is sometimes regulated in a different manner to c-Fos and could be used to identify emotional learning/memory specific patterns of neural activity that are sensitive to SPS. Animals were either fear conditioned (CS-fear) or presented with CSs only (CS-only) then subjected to extinction training and testing. C-Jun was then assayed within neural substrates critical for extinction memory. Inhibited c-Jun levels in the hippocampus (Hipp) and enhanced functional connectivity between the ventromedial prefrontal cortex (vmPFC) and basolateral amygdala (BLA) during extinction training were disrupted by SPS in the CS-fear group only. As a result, these effects were specific to emotional learning/memory. SPS also disrupted inhibited Hipp c-Jun levels, enhanced BLA c-Jun levels, and altered functional connectivity among the vmPFC, BLA, and Hipp during extinction testing in SPS rats in the CS-fear and CS-only groups. As a result, these effects were not specific to emotional learning/memory. Our findings suggest that SPS disrupts neural activity specific to extinction memory, but may also disrupt the retention of fear extinction by mechanisms that do not involve emotional learning/memory. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Combining BMI stimulation and mathematical modeling for acute stroke recovery and neural repair

    Directory of Open Access Journals (Sweden)

    Sara L Gonzalez Andino

    2011-07-01

    Full Text Available Rehabilitation is a neural plasticity-exploiting approach that forces undamaged neural circuits to undertake the functionality of other circuits damaged by stroke. It aims at partial restoration of neural function through circuit remodeling rather than through regeneration of the damaged circuits. The core hypothesis of the present paper is that, in stroke, Brain Machine Interfaces can be designed to target neural repair instead of rehabilitation. To support this hypothesis we first review existing evidence on the role of endogenous or externally applied electric fields in all processes involved in CNS repair. We then describe our own results to illustrate the neuroprotective and neuroregenerative effects of BMI electrical stimulation on sensory deprivation-related degenerative processes of the CNS. Finally, we discuss three of the crucial issues involved in the design of neural repair-oriented BMIs: when to stimulate, where to stimulate, and the particularly important but unsolved issue of how to stimulate. We argue that optimal parameters for the electrical stimulation can be determined by studying and modeling the dynamics of the electric fields that naturally emerge in the central and peripheral nervous system during spontaneous healing in both experimental animals and human patients. We conclude that a closed-loop BMI that defines the optimal stimulation parameters from a priori developed experimental models of the dynamics of spontaneous repair and the on-line monitoring of neural activity might place BMIs as an alternative or complement to stem-cell transplantation or pharmacological approaches, which are intensively pursued nowadays.

  3. Self-reported empathy and neural activity during action imitation and observation in schizophrenia

    Directory of Open Access Journals (Sweden)

    William P. Horan

    2014-01-01

    Conclusions: Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self-perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest that patients show a disjunction between automatic neural responses to low-level social cues and higher-level, integrative social cognitive processes involved in self-perceived empathy.

  4. Typology of nonlinear activity waves in a layered neural continuum.

    Science.gov (United States)

    Koch, Paul; Leisman, Gerry

    2006-04-01

    Neural tissue, a medium containing electro-chemical energy, can amplify small increments in cellular activity. The growing disturbance, measured as the fraction of active cells, manifests as propagating waves. In a layered geometry with a time delay in synaptic signals between the layers, the delay is instrumental in determining the amplified wavelengths. The growth of the waves is limited by the finite number of neural cells in a given region of the continuum. As wave growth saturates, the resulting activity patterns in space and time show a variety of forms, ranging from regular monochromatic waves to highly irregular mixtures of different spatial frequencies. The type of wave configuration is determined by a number of parameters, including alertness and synaptic conditioning as well as delay. For all cases studied, using numerical solution of the nonlinear Wilson-Cowan (1973) equations, there is an interval in delay in which the wave mixing occurs. As delay increases through this interval, during a series of consecutive waves propagating through a continuum region, the activity within that region changes from a single-frequency to a multiple-frequency pattern and back again. The diverse spatio-temporal patterns give a more concrete form to several metaphors advanced over the years to attempt an explanation of cognitive phenomena: Activity waves embody the "holographic memory" (Pribram, 1991); wave mixing provides a plausible cause of the competition called "neural Darwinism" (Edelman, 1988); finally the consecutive generation of growing neural waves can explain the discontinuousness of "psychological time" (Stroud, 1955).
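
    The layered, delay-coupled dynamics described above can be conveyed with a toy calculation. The sketch below (assumed coupling constants, response functions and delay, not the parameters of the paper) integrates two Wilson-Cowan excitatory/inhibitory populations coupled through a delayed excitatory signal, using forward Euler with a history buffer.

```python
# Minimal sketch (assumed parameters, not the paper's continuum model): two
# Wilson-Cowan excitatory/inhibitory populations representing adjacent
# layers, coupled by a delayed excitatory signal.
import numpy as np

def S(x, slope, thresh):            # logistic response function
    return 1.0 / (1.0 + np.exp(-slope * (x - thresh)))

dt, T = 0.05, 200.0
delay = 4.0                                   # inter-layer delay (assumed)
steps, d = int(T / dt), int(delay / dt)
E = np.zeros((2, steps)); I = np.zeros((2, steps))
E[0, 0] = 0.2                                 # small disturbance in layer 0

c1, c2, c3, c4, P, w = 16.0, 12.0, 15.0, 3.0, 1.0, 2.0   # assumed couplings
for t in range(1, steps):
    Ed = E[:, t - 1 - d] if t - 1 > d else np.zeros(2)   # delayed activity
    for k in range(2):
        xE = c1 * E[k, t-1] - c2 * I[k, t-1] + P + w * Ed[1 - k]
        xI = c3 * E[k, t-1] - c4 * I[k, t-1]
        E[k, t] = E[k, t-1] + dt * (-E[k, t-1] + (1 - E[k, t-1]) * S(xE, 1.3, 4.0))
        I[k, t] = I[k, t-1] + dt * (-I[k, t-1] + (1 - I[k, t-1]) * S(xI, 2.0, 3.7))

print("layer-0 activity range in the last half:",
      E[0, steps // 2:].min(), E[0, steps // 2:].max())
```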

  5. Nonlinear adaptive inverse control via the unified model neural network

    Science.gov (United States)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control scheme via a unified model neural network. In order to overcome nonsystematic design and long training times in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method requires less training time to obtain an inverse model. Finally, we apply this proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.

  6. A Simple Quantum Neural Net with a Periodic Activation Function

    OpenAIRE

    Daskin, Ammar

    2018-01-01

    In this paper, we propose a simple neural net that requires only $O(n \log_2 k)$ qubits and $O(nk)$ quantum gates: Here, $n$ is the number of input parameters, and $k$ is the number of weights applied to these parameters in the proposed neural net. We describe the network in terms of a quantum circuit, and then draw its equivalent classical neural net which involves $O(k^n)$ nodes in the hidden layer. Then, we show that the network uses a periodic activation function of cosine values o...

  7. Temporal-pattern learning in neural models

    CERN Document Server

    Genís, Carme Torras

    1985-01-01

    While the ability of animals to learn rhythms is an unquestionable fact, the underlying neurophysiological mechanisms are still no more than conjectures. This monograph explores the requirements of such mechanisms, reviews those previously proposed and postulates a new one based on a direct electric coding of stimulation frequencies. Experimental support for the option taken is provided both at the single neuron and neural network levels. More specifically, the material presented divides naturally into four parts: a description of the experimental and theoretical framework where this work becomes meaningful (Chapter 2), a detailed specification of the pacemaker neuron model proposed together with its validation through simulation (Chapter 3), an analytic study of the behavior of this model when submitted to rhythmic stimulation (Chapter 4) and a description of the neural network model proposed for learning, together with an analysis of the simulation results obtained when varying several factors r...

  8. A model of stimulus-specific neural assemblies in the insect antennal lobe.

    Directory of Open Access Journals (Sweden)

    Dominique Martinez

    2008-08-01

    Full Text Available It has been proposed that synchronized neural assemblies in the antennal lobe of insects encode the identity of olfactory stimuli. In response to an odor, some projection neurons exhibit synchronous firing, phase-locked to the oscillations of the field potential, whereas others do not. Experimental data indicate that neural synchronization and field oscillations are induced by fast GABA(A)-type inhibition, but it remains unclear how desynchronization occurs. We hypothesize that slow inhibition plays a key role in desynchronizing projection neurons. Because synaptic noise is believed to be the dominant factor that limits neuronal reliability, we consider a computational model of the antennal lobe in which a population of oscillatory neurons interact through unreliable GABA(A) and GABA(B) inhibitory synapses. From theoretical analysis and extensive computer simulations, we show that transmission failures at slow GABA(B) synapses make the neural response unpredictable. Depending on the balance between GABA(A) and GABA(B) inputs, particular neurons may either synchronize or desynchronize. These findings suggest a wiring scheme that triggers stimulus-specific synchronized assemblies. Inhibitory connections are set by Hebbian learning and selectively activated by stimulus patterns to form a spiking associative memory whose storage capacity is comparable to that of classical binary-coded models. We conclude that fast inhibition acts in concert with slow inhibition to reformat the glomerular input into odor-specific synchronized neural assemblies.
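
    The role of unreliable transmission can be conveyed with a much simpler toy model than the one in the paper. The sketch below (all parameters assumed) drives a leaky integrator with periodic inhibitory pulses that fail independently with some probability, and shows the resulting trial-to-trial variability of the response.

```python
# Minimal sketch (assumptions, not the paper's model): a leaky integrator
# receiving slow periodic inhibitory pulses that fail with probability
# p_fail; the across-trial spread of the output illustrates how unreliable
# synapses make single-trial responses unpredictable.
import numpy as np

rng = np.random.default_rng(1)
dt, T, tau = 1.0, 500.0, 20.0        # ms
pulse_times = np.arange(0.0, T, 50.0)  # slow inhibitory input every 50 ms
p_fail, w = 0.5, -0.8                # 50 % transmission failures (assumed)

def run_trial():
    v, trace = 0.2, []
    for t in np.arange(0.0, T, dt):
        if np.any(np.isclose(t, pulse_times)) and rng.random() > p_fail:
            v += w                                 # inhibitory kick arrives
        v += dt * (-v / tau + 0.05)                # leak plus constant drive
        trace.append(v)
    return np.array(trace)

trials = np.array([run_trial() for _ in range(20)])
print("across-trial std of the final value:", trials[:, -1].std())
```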

  9. An artificial neural network model for periodic trajectory generation

    Science.gov (United States)

    Shankar, S.; Gander, R. E.; Wood, H. C.

    A neural network model based on biological systems was developed for potential robotic application. The model consists of three interconnected layers of artificial neurons or units: an input layer subdivided into state and plan units, an output layer, and a hidden layer between the two outer layers which serves to implement nonlinear mappings between the input and output activation vectors. Weighted connections are created between the three layers, and learning is effected by modifying these weights. Feedback connections between the output and the input state serve to make the network operate as a finite state machine. The activation vector of the plan units of the input layer emulates the supraspinal commands in biological central pattern generators in that different plan activation vectors correspond to different sequences or trajectories being recalled, even with different frequencies. Three trajectories were chosen for implementation, and learning was accomplished in 10,000 trials. The fault tolerant behavior, adaptiveness, and phase maintenance of the implemented network are discussed.

  10. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    International Nuclear Information System (INIS)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-01-01

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
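
    The key architectural idea, a multiplicative output layer that combines network-predicted coefficients with an invariant tensor basis, can be sketched compactly. The example below uses random, untrained weights and a random basis purely to show the forward pass; the sizes (5 invariants, 10 basis tensors) follow the usual tensor-basis formulation, but everything else is an assumption.

```python
# Minimal sketch (not the authors' architecture): the multiplicative layer
# of a tensor-basis network.  A small MLP maps scalar invariants to
# coefficients g_n, and the predicted anisotropy is the g_n-weighted sum of
# basis tensors, so an invariant basis yields an invariant output.
import numpy as np

rng = np.random.default_rng(0)
n_invariants, n_basis = 5, 10
lam = rng.normal(size=n_invariants)            # scalar invariants (assumed)
T = rng.normal(size=(n_basis, 3, 3))           # tensor basis (assumed)

# one-hidden-layer MLP with random, untrained weights, for illustration only
W1, b1 = rng.normal(size=(n_invariants, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, n_basis)), np.zeros(n_basis)
g = np.tanh(lam @ W1 + b1) @ W2 + b2           # coefficients g_n(lambda)

b_pred = np.tensordot(g, T, axes=(0, 0))       # sum_n g_n T^(n)
print(b_pred.shape)                            # (3, 3) predicted anisotropy tensor
```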

  11. Modelling and Prediction of Photovoltaic Power Output Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Aminmohammad Saberian

    2014-01-01

    Full Text Available This paper presents a solar power modelling method using artificial neural networks (ANNs). Two neural network structures, namely, the general regression neural network (GRNN) and the feedforward back propagation (FFBP) network, have been used to model a photovoltaic panel output power and approximate the generated power. Both neural networks have four inputs and one output. The inputs are maximum temperature, minimum temperature, mean temperature, and irradiance; the output is the power. The data used in this paper cover the period from January 1, 2006, until December 31, 2010. The five years of data were split into two parts: 2006–2008 and 2009–2010; the first part was used for training and the second part was used for testing the neural networks. A mathematical equation is used to estimate the generated power. In the end, both of these networks have shown good modelling performance; however, FFBP has shown better performance compared with GRNN.
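
    A toy analogue of the FFBP branch of this comparison is sketched below using scikit-learn; the synthetic weather-to-power relationship, the network size, and the train/test split are assumptions, not the paper's data.

```python
# Minimal sketch (synthetic data): a feedforward back-propagation network
# with the four inputs named above (max/min/mean temperature and irradiance)
# regressing photovoltaic output power.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
tmax, tmin = rng.uniform(25, 40, n), rng.uniform(10, 25, n)      # deg C
tmean, irr = (tmax + tmin) / 2, rng.uniform(100, 1000, n)        # W/m^2
power = 0.15 * irr * (1 - 0.004 * (tmean - 25)) + rng.normal(0, 5, n)

X = np.column_stack([tmax, tmin, tmean, irr])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.4, random_state=0)
ffbp = make_pipeline(StandardScaler(),
                     MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                  random_state=0))
ffbp.fit(X_tr, y_tr)
print("held-out R^2 (toy analogue):", ffbp.score(X_te, y_te))
```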

  12. Neural field model of memory-guided search.

    Science.gov (United States)

    Kilpatrick, Zachary P; Poll, Daniel B

    2017-12-01

    Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
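
    The position-encoding layer described above rests on a standard bump-attractor mechanism, which a short simulation can illustrate. The sketch below (an Amari-type ring field with an assumed kernel and parameters, not the authors' model) applies a brief cue and shows that a self-sustained bump of activity remains at the cued position after the input is removed; a velocity input would then propagate this bump.

```python
# Minimal sketch (assumed kernel and parameters): an Amari-type ring neural
# field whose bump attractor stores a position after a transient cue.
import numpy as np

N = 256
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]
dist = np.angle(np.exp(1j * (x[:, None] - x[None, :])))   # ring distance
W = np.exp(-dist**2 / 0.3) - 0.3                           # local excitation, broad inhibition

h, tau, dt = -0.2, 1.0, 0.05
u = np.full(N, h)                                          # resting field
theta0 = 1.0                                               # cued position
for step in range(4000):
    cue = 0.5 * np.exp(-(x - theta0)**2 / 0.05) if step < 400 else 0.0
    rate = (u > 0).astype(float)                           # Heaviside firing rate
    u += dt / tau * (-u + (W @ rate) * dx + h + cue)
print("bump persists, centred near", x[np.argmax(u)])      # ~theta0 after cue offset
```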

  14. Modulation of Neural Activity during Guided Viewing of Visual Art.

    Science.gov (United States)

    Herrera-Arcos, Guillermo; Tamez-Duque, Jesús; Acosta-De-Anda, Elsa Y; Kwan-Loo, Kevin; de-Alba, Mayra; Tamez-Duque, Ulises; Contreras-Vidal, Jose L; Soto, Rogelio

    2017-01-01

    Mobile Brain-Body Imaging (MoBI) technology was deployed to record multi-modal data from 209 participants to examine the brain's response to artistic stimuli at the Museo de Arte Contemporáneo (MARCO) in Monterrey, México. EEG signals were recorded as the subjects walked through the exhibit in guided groups of 6-8 people. Moreover, guided groups were either provided with an explanation of each art piece (Guided-E), or given no explanation (Guided-NE). The study was performed using portable Muse (InteraXon, Inc, Toronto, ON, Canada) headbands with four dry electrodes located at AF7, AF8, TP9, and TP10. Each participant performed a baseline (BL) control condition devoid of artistic stimuli and selected his/her favorite piece of art (FP) during the guided tour. In this study, we report data related to participants' demographic information and aesthetic preference as well as effects of art viewing on neural activity (EEG) in a select subgroup of 18-30 year-old subjects (Nc = 25) that generated high-quality EEG signals, on both BL and FP conditions. Dependencies on gender, sensor placement, and presence or absence of art explanation were also analyzed. After denoising, clustering of spectral EEG models was used to identify neural patterns associated with BL and FP conditions. Results indicate statistically significant suppression of beta band frequencies (15-25 Hz) in the prefrontal electrodes (AF7 and AF8) during appreciation of subjects' favorite painting, compared to the BL condition, which was significantly different from EEG responses to non-favorite paintings (NFP). No significant differences in brain activity in relation to the presence or absence of explanation during exhibit tours were found. Moreover, a frontal to posterior asymmetry in neural activity was observed, for both BL and FP conditions. These findings provide new information about frequency-related effects of preferred art viewing in brain activity, and support the view that art appreciation is

  15. A case study to estimate costs using Neural Networks and regression based models

    Directory of Open Access Journals (Sweden)

    Nadia Bhuiyan

    2012-07-01

    Full Text Available Bombardier Aerospace's high-performance aircraft and services set the utmost standard for the aerospace industry. A case study in collaboration with Bombardier Aerospace was conducted in order to estimate the target cost of a landing gear. More precisely, the study uses both a parametric model and neural network models to estimate the cost of main landing gears, a major aircraft commodity. A comparative analysis between the parametric-based model and the neural network models is carried out in order to determine the most accurate method to predict the cost of a main landing gear. Several trials are presented for the design and use of the neural network model. The analysis for the case under study shows the flexibility in the design of the neural network model. Furthermore, the performance of the neural network model is deemed superior to the parametric models for this case study.

  16. Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.

    Science.gov (United States)

    Li, Shuai; Li, Yangming

    2013-10-28

    The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, the computational burden increases sharply as the sampling period decreases and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recurrent neural network recently proposed by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. The advancements in complex-valued neural networks make it natural to extend the existing real-valued ZNN for solving the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
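
    The ZNN design principle, forcing the residual E(t) = A(t)X + XB(t) - C(t) to obey dE/dt = -gamma*phi(E), can be sketched for a small real-valued example. The coefficient matrices, the exponent of the sign-bi-power-like activation, and the integration scheme below are assumptions for illustration, not the paper's complex-valued network.

```python
# Minimal sketch (assumed coefficients): ZNN-style integration for the
# time-varying Sylvester equation A(t) X + X B(t) = C(t).
import numpy as np

I2 = np.eye(2)
A = lambda t: 3 * I2 + 0.5 * np.array([[np.sin(t), 0], [0, np.cos(t)]])
B = lambda t: 2 * I2 + 0.5 * np.array([[np.cos(t), 0], [0, np.sin(t)]])
C = lambda t: np.array([[np.sin(t), 1.0], [0.5, np.cos(t)]])

def phi(E, r=0.5):                     # sign-bi-power activation (assumed r)
    return np.sign(E) * (np.abs(E) ** r + np.abs(E) ** (1 / r)) / 2

dt, gamma, eps = 1e-3, 10.0, 1e-6
X = np.zeros((2, 2))
for k in range(5000):
    t = k * dt
    At, Bt, Ct = A(t), B(t), C(t)
    E = At @ X + X @ Bt - Ct
    dA = (A(t + eps) - At) / eps       # finite-difference time derivatives
    dB = (B(t + eps) - Bt) / eps
    dC = (C(t + eps) - Ct) / eps
    rhs = -dA @ X - X @ dB + dC - gamma * phi(E)
    M = np.kron(I2, At) + np.kron(Bt.T, I2)        # A Xdot + Xdot B = rhs
    Xdot = np.linalg.solve(M, rhs.flatten(order="F")).reshape(2, 2, order="F")
    X += dt * Xdot

t_end = 5000 * dt
res = np.linalg.norm(A(t_end) @ X + X @ B(t_end) - C(t_end))
print("residual ||A X + X B - C|| at t =", t_end, ":", res)
```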

  17. Assessing neural activity related to decision-making through flexible odds ratio curves and their derivatives.

    Science.gov (United States)

    Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Pardo-Vazquez, Jose L; Leboran, Victor; Molenberghs, Geert; Faes, Christel; Acuña, Carlos

    2011-06-30

    It is well established that neural activity is stochastically modulated over time. Therefore, direct comparisons across experimental conditions and determination of change points or maximum firing rates are not straightforward. This study sought to compare temporal firing probability curves that may vary across groups defined by different experimental conditions. Odds-ratio (OR) curves were used as a measure of comparison, and the main goal was to provide a global test to detect significant differences between such curves through the study of their derivatives. An algorithm is proposed that enables ORs based on generalized additive models, including factor-by-curve-type interactions, to be flexibly estimated. Bootstrap methods were used to draw inferences from the derivative curves, and binning techniques were applied to speed up computation in the estimation and testing processes. A simulation study was conducted to assess the validity of these bootstrap-based tests. This methodology was applied to study premotor ventral cortex neural activity associated with decision-making. The proposed statistical procedures proved very useful in revealing the neural activity correlates of decision-making in a visual discrimination task. Copyright © 2011 John Wiley & Sons, Ltd.

  18. A continuous-time neural model for sequential action.

    Science.gov (United States)

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Neural activity predicts attitude change in cognitive dissonance.

    Science.gov (United States)

    van Veen, Vincent; Krug, Marie K; Schooler, Jonathan W; Carter, Cameron S

    2009-11-01

    When our actions conflict with our prior attitudes, we often change our attitudes to be more consistent with our actions. This phenomenon, known as cognitive dissonance, is considered to be one of the most influential theories in psychology. However, the neural basis of this phenomenon is unknown. Using a Solomon four-group design, we scanned participants with functional MRI while they argued that the uncomfortable scanner environment was nevertheless a pleasant experience. We found that cognitive dissonance engaged the dorsal anterior cingulate cortex and anterior insula; furthermore, we found that the activation of these regions tightly predicted participants' subsequent attitude change. These effects were not observed in a control group. Our findings elucidate the neural representation of cognitive dissonance, and support the role of the anterior cingulate cortex in detecting cognitive conflict and the neural prediction of attitude change.

  20. The effects of gratitude expression on neural activity.

    Science.gov (United States)

    Kini, Prathik; Wong, Joel; McInnis, Sydney; Gabana, Nicole; Brown, Joshua W

    2016-03-01

    Gratitude is a common aspect of social interaction, yet relatively little is known about the neural bases of gratitude expression, nor how gratitude expression may lead to longer-term effects on brain activity. To address these twin issues, we recruited subjects who coincidentally were entering psychotherapy for depression and/or anxiety. One group participated in a gratitude writing intervention, which required them to write letters expressing gratitude. The therapy-as-usual control group did not perform a writing intervention. After three months, subjects performed a "Pay It Forward" task in the fMRI scanner. In the task, subjects were repeatedly endowed with a monetary gift and then asked to pass it on to a charitable cause to the extent they felt grateful for the gift. Operationalizing gratitude as monetary gifts allowed us to engage the subjects and quantify the gratitude expression for subsequent analyses. We measured brain activity and found regions where activity correlated with self-reported gratitude experience during the task, even including related constructs such as guilt motivation and desire to help as statistical controls. These were mostly distinct from brain regions activated by empathy or theory of mind. Also, our between groups cross-sectional study found that a simple gratitude writing intervention was associated with significantly greater and lasting neural sensitivity to gratitude - subjects who participated in gratitude letter writing showed both behavioral increases in gratitude and significantly greater neural modulation by gratitude in the medial prefrontal cortex three months later. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Modelling a variable valve timing spark ignition engine using different neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beham, M. [BMW AG, Munich (Germany); Yu, D.L. [John Moores University, Liverpool (United Kingdom). Control Systems Research Group

    2004-10-01

    In this paper different neural networks (NN) are compared for modelling a variable valve timing spark-ignition (VVT SI) engine. The overall system is divided for each output into five neural multi-input single output (MISO) subsystems. Three kinds of NN, multilayer Perceptron (MLP), pseudo-linear radial basis function (PLRBF), and local linear model tree (LOLIMOT) networks, are used to model each subsystem. Real data were collected when the engine was under different operating conditions and these data are used in training and validation of the developed neural models. The obtained models are finally tested in a real-time online model configuration on the test bench. The neural models run independently of the engine in parallel mode. The model outputs are compared with process output and compared among different models. These models performed well and can be used in the model-based engine control and optimization, and for hardware in the loop systems. (author)

  2. Coupling Strength and System Size Induce Firing Activity of Globally Coupled Neural Network

    International Nuclear Information System (INIS)

    Wei Duqu; Luo Xiaoshu; Zou Yanli

    2008-01-01

    We investigate how the firing activity of a globally coupled neural network depends on the coupling strength C and system size N. Network elements are described by space-clamped FitzHugh-Nagumo (SCFHN) neurons with parameter values at which no firing activity occurs. It is found that, for a given appropriate coupling strength, there is an intermediate range of system size in which the firing activity of the globally coupled SCFHN neural network is induced and enhanced. On the other hand, for a given intermediate system size, there exists an optimal value of the coupling strength such that the intensity of firing activity reaches its maximum. These phenomena imply that the coupling strength and system size play a vital role in the firing activity of neural networks.
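
    A toy version of this setup is sketched below: N globally coupled space-clamped FitzHugh-Nagumo units, each deterministically quiescent, simulated for several system sizes at a fixed coupling strength. The weak noise term, the parameter values, and the mean-field threshold-crossing count used as a firing measure are assumptions for illustration, not the paper's protocol.

```python
# Minimal sketch (assumed noise and parameters): globally coupled FitzHugh-
# Nagumo units; counting upward threshold crossings of the mean field gives
# a crude measure of coupling-induced firing for different system sizes.
import numpy as np

def simulate(N, C, T=1000.0, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    a, b, eps, sigma = 0.7, 0.8, 0.08, 0.3      # quiescent without noise
    v = rng.normal(-1.2, 0.1, N)
    w = rng.normal(-0.6, 0.1, N)
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        vm = v.mean()
        dv = v - v**3 / 3 - w + C * (vm - v)
        dw = eps * (v + a - b * w)
        v += dt * dv + np.sqrt(dt) * sigma * rng.normal(size=N)
        w += dt * dw
        if vm > 1.0 and not above:
            spikes += 1                         # upward crossing of the mean field
        above = vm > 1.0
    return spikes

for N in (5, 50, 500):
    print("N =", N, "mean-field spikes:", simulate(N, C=0.1))
```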

  3. Unloading arm movement modeling using neural networks for a rotary hearth furnace

    Directory of Open Access Journals (Sweden)

    Iulia Inoan

    2011-12-01

    Full Text Available Neural networks are applied in many fields of engineering and nowadays have a wide range of applications. Neural networks are very useful for modeling dynamic processes whose mathematical models are hard to obtain, or processes that cannot be modeled using mathematical equations. This paper describes the modeling of the unloading arm movement of a rotary hearth furnace using neural networks with the back-propagation algorithm. In this case the designed network was trained using simulation results from a previously calculated mathematical model.

  4. Improving quantitative structure-activity relationship models using Artificial Neural Networks trained with dropout.

    Science.gov (United States)

    Mendenhall, Jeffrey; Meiler, Jens

    2016-02-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition and a large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46 % over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
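
    Dropout itself is easy to demonstrate. The sketch below trains a small PyTorch classifier with a dropout layer on synthetic descriptor vectors; the descriptor dimensionality, labels, dropout rate, and network size are all assumptions, not the benchmark data or architecture of the paper.

```python
# Minimal sketch (synthetic data): a QSAR-style feed-forward classifier with
# dropout applied to the hidden layer.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(2000, 128)                        # 128 hypothetical descriptors
y = (X[:, :4].sum(dim=1) > 0).float()             # toy "active"/"inactive" label

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Dropout(p=0.25),                           # dropout rate is an assumption
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

model.train()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

model.eval()                                      # disables dropout at inference
with torch.no_grad():
    acc = ((model(X).squeeze(1) > 0) == (y > 0.5)).float().mean()
print("training accuracy:", float(acc))
```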

  5. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning, are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology.
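
    The feature-to-prototype shift can be illustrated with a few lines of numpy. The sketch below applies the rectified-polynomial activation F_n(x) = max(x, 0)^n to the overlaps between a corrupted query and stored random patterns; the data and the choice of powers are assumptions made purely for illustration.

```python
# Minimal sketch (toy data): rectified-polynomial activations applied to the
# overlaps between a query and stored memories.  As the power n grows, the
# largest overlap dominates the hidden-layer response, illustrating the
# feature-to-prototype shift described above.
import numpy as np

rng = np.random.default_rng(0)
K, D = 10, 100
memories = np.sign(rng.normal(size=(K, D)))          # stored +/-1 patterns
query = memories[0].copy()
query[:30] *= -1                                     # corrupted version of memory 0

overlaps = memories @ query / D                      # in [-1, 1]
for n in (1, 3, 10):
    F = np.clip(overlaps, 0, None) ** n
    share = F / F.sum()
    print(f"n={n:2d}  share of memory 0 in the response: {share[0]:.2f}")
```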

  6. Neural Networks for Modeling and Control of Particle Accelerators

    Science.gov (United States)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  7. Modelling the permeability of polymers: a neural network approach

    NARCIS (Netherlands)

    Wessling, Matthias; Mulder, M.H.V.; Bos, A.; Bos, A.; van der Linden, M.K.T.; Bos, M.; van der Linden, W.E.

    1994-01-01

    In this short communication, the prediction of the permeability of carbon dioxide through different polymers using a neural network is studied. A neural network is a numeric-mathematical construction that can model complex non-linear relationships. Here it is used to correlate the IR spectrum of a

  8. Modelling innovation performance of European regions using multi-output neural networks.

    Science.gov (United States)

    Hajek, Petr; Henriques, Roberto

    2017-01-01

    Regional innovation performance is an important indicator for decision-making regarding the implementation of policies intended to support innovation. However, patterns in regional innovation structures are becoming increasingly diverse, complex and nonlinear. To address these issues, this study aims to develop a model based on a multi-output neural network. Both intra- and inter-regional determinants of innovation performance are empirically investigated using data from the 4th and 5th Community Innovation Surveys of NUTS 2 (Nomenclature of Territorial Units for Statistics) regions. The results suggest that specific innovation strategies must be developed based on the current state of input attributes in the region. Thus, it is possible to develop appropriate strategies and targeted interventions to improve regional innovation performance. We demonstrate that support of entrepreneurship is an effective instrument of innovation policy. We also provide empirical support that both business and government R&D activity have a sigmoidal effect, implying that the most effective R&D support should be directed to regions with below-average and average R&D activity. We further show that the multi-output neural network outperforms traditional statistical and machine learning regression models. In general, therefore, it seems that the proposed model can effectively reflect both the multiple-output nature of innovation performance and the interdependency of the output attributes.

  9. Modelling innovation performance of European regions using multi-output neural networks.

    Directory of Open Access Journals (Sweden)

    Petr Hajek

    Full Text Available Regional innovation performance is an important indicator for decision-making regarding the implementation of policies intended to support innovation. However, patterns in regional innovation structures are becoming increasingly diverse, complex and nonlinear. To address these issues, this study aims to develop a model based on a multi-output neural network. Both intra- and inter-regional determinants of innovation performance are empirically investigated using data from the 4th and 5th Community Innovation Surveys of NUTS 2 (Nomenclature of Territorial Units for Statistics) regions. The results suggest that specific innovation strategies must be developed based on the current state of input attributes in the region. Thus, it is possible to develop appropriate strategies and targeted interventions to improve regional innovation performance. We demonstrate that support of entrepreneurship is an effective instrument of innovation policy. We also provide empirical support that both business and government R&D activity have a sigmoidal effect, implying that the most effective R&D support should be directed to regions with below-average and average R&D activity. We further show that the multi-output neural network outperforms traditional statistical and machine learning regression models. In general, therefore, it seems that the proposed model can effectively reflect both the multiple-output nature of innovation performance and the interdependency of the output attributes.
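
    As a minimal illustration of the multi-output idea (not the authors' data or architecture), the sketch below fits a single scikit-learn MLP to two interdependent synthetic output indicators at once; all variables and functional forms are assumptions.

```python
# Minimal sketch (synthetic data): a single network predicting several
# interdependent output indicators simultaneously.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 6))                       # regional input attributes
Y = np.column_stack([
    2 * X[:, 0] + X[:, 1] ** 2,                      # e.g. one innovation indicator
    np.tanh(3 * X[:, 2]) + 0.5 * X[:, 0],            # e.g. a correlated second one
])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
net.fit(X_tr, Y_tr)                                  # multi-output fit
print("held-out R^2 (both outputs):", net.score(X_te, Y_te))
```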

  10. Daily rainfall-runoff modelling by neural networks in semi-arid zone ...

    African Journals Online (AJOL)

    This research work will allow checking efficiency of formal neural networks for flows' modelling of wadi Ouahrane's basin from rainfall-runoff relation which is non-linear. Two models of neural networks were optimized through supervised learning and compared in order to achieve this goal, the first model with input rain, and ...

  11. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  12. Patterns recognition of electric brain activity using artificial neural networks

    Science.gov (United States)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    An approach is presented for the recognition of various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG by using an artificial neural network approach. We construct an artificial neural network based on the Perceptron architecture; after learning, it reliably identified cube recognition processes, for example, left- or right-oriented Necker cubes with different intensities of their edges, demonstrating its effectiveness for pattern recognition of the experimental EEG.

  13. DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation.

    Science.gov (United States)

    Sherfey, Jason S; Soplata, Austin E; Ardid, Salva; Roberts, Erik A; Stanley, David A; Pittman-Polletta, Benjamin R; Kopell, Nancy J

    2018-01-01

    DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.

  14. Forecasting macroeconomic variables using neural network models and three automated model selection techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2016-01-01

    When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. To alleviate the problem, White (2006) presented a solution (QuickNet) that conv...

  15. Statistical mechanics of attractor neural network models with synaptic depression

    International Nuclear Information System (INIS)

    Igarashi, Yasuhiko; Oizumi, Masafumi; Otsubo, Yosuke; Nagata, Kenji; Okada, Masato

    2009-01-01

    Synaptic depression is known to control gain for presynaptic inputs. Since cortical neurons receive thousands of presynaptic inputs, and their outputs are fed into thousands of other neurons, synaptic depression should influence the macroscopic properties of neural networks. We employ simple neural network models to explore the macroscopic effects of synaptic depression. Systems with synaptic depression cannot be analyzed with the conventional equilibrium statistical-mechanical approach because of the asymmetry of their connections. Thus, we first propose a microscopic dynamical mean field theory. Next, we derive macroscopic steady-state equations and discuss the stability of the steady states for various types of neural network models.

  16. Modeling and optimization by particle swarm embedded neural network for adsorption of zinc (II) by palm kernel shell based activated carbon from aqueous environment.

    Science.gov (United States)

    Karri, Rama Rao; Sahu, J N

    2018-01-15

    Zn (II) is one of the common heavy metal pollutants found in industrial effluents. Removal of pollutants from industrial effluents can be accomplished by various techniques, of which adsorption has been found to be an efficient method. Application of adsorption is limited by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm oil kernel shell based agricultural waste is examined for its efficiency in removing Zn (II) from waste water and aqueous solution. The influence of independent process variables such as initial concentration, pH, residence time, activated carbon (AC) dosage and process temperature on the removal of Zn (II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the design-of-experiments matrix, 50 experimental runs are performed with each process variable within its experimental range. The optimal values of the process variables for achieving maximum removal efficiency are studied using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model, consisting of first-order and second-order regression terms, is developed using analysis of variance and the RSM-CCD framework. Particle swarm optimization, a meta-heuristic optimization method, is embedded in the ANN architecture to optimize the search space of the neural network. The optimized trained neural network describes the testing data and validation data well, with R² equal to 0.9106 and 0.9279, respectively. The outcomes indicate the superiority of the ANN-PSO based model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.
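
    One common way to combine PSO with an ANN, using the swarm to search the network's weight space directly, is sketched below on synthetic adsorption-like data. The 5-4-1 network size, the PSO hyperparameters, and the synthetic removal-efficiency function are all assumptions; the paper's actual hybridization may differ.

```python
# Minimal sketch (synthetic data): global-best PSO searching the weights of
# a tiny one-hidden-layer network mapping five process variables to removal
# efficiency.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 5))                    # synthetic design matrix
y = 60 + 30 * X[:, 1] - 20 * X[:, 0] * X[:, 2]   # synthetic removal efficiency

def unpack(w):                                   # 5-4-1 network parameters
    return w[:20].reshape(5, 4), w[20:24], w[24:28].reshape(4, 1), w[28]

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

n_p, dim = 30, 29
pos = rng.normal(scale=1.0, size=(n_p, dim)); vel = np.zeros((n_p, dim))
pbest = pos.copy(); pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n_p, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("training MSE of the PSO-trained network:", mse(gbest))
```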

  17. Exploring Neural Network Models with Hierarchical Memories and Their Use in Modeling Biological Systems

    Science.gov (United States)

    Pusuluri, Sai Teja

    Energy landscapes are often used as metaphors for phenomena in biology, social sciences and finance. Different methods have been implemented in the past for the construction of energy landscapes. Neural network models based on spin glass physics provide an excellent mathematical framework for the construction of energy landscapes. This framework uses a minimal number of parameters and constructs the landscape using data from the actual phenomena. In the past neural network models were used to mimic the storage and retrieval process of memories (patterns) in the brain. With advances in the field now, these models are being used in machine learning, deep learning and modeling of complex phenomena. Most of the past literature focuses on increasing the storage capacity and stability of stored patterns in the network but does not study these models from a modeling perspective or an energy landscape perspective. This dissertation focuses on neural network models both from a modeling perspective and from an energy landscape perspective. I firstly show how the cellular interconversion phenomenon can be modeled as a transition between attractor states on an epigenetic landscape constructed using neural network models. The model allows the identification of a reaction coordinate of cellular interconversion by analyzing experimental and simulation time course data. Monte Carlo simulations of the model show that the initial phase of cellular interconversion is a Poisson process and the later phase of cellular interconversion is a deterministic process. Secondly, I explore the static features of landscapes generated using neural network models, such as sizes of basins of attraction and densities of metastable states. The simulation results show that the static landscape features are strongly dependent on the correlation strength and correlation structure between patterns. Using different hierarchical structures of the correlation between patterns affects the landscape features

  18. Hypothetical neural mechanism that may play a role in mental rotation: an attractor neural network model.

    Science.gov (United States)

    Benusková, L; Estok, S

    1998-11-01

    We propose an attractor neural network (ANN) model that performs rotation-invariant pattern recognition in such a way that it can account for a neural mechanism being involved in the image transformation accompanying the experience of mental rotation. We compared the performance of our ANN model with the results of the chronometric psychophysical experiments of Cooper and Shepard (Cooper L A and Shepard R N 1973 Visual Information Processing (New York: Academic) pp 204-7) on discrimination of alphanumeric characters presented in various angular departures from their canonical upright position. Comparing the times required for pattern retrieval in its canonical upright position with the reaction times of human subjects, we found agreement in that (i) retrieval times for clockwise and anticlockwise departures of the same angular magnitude (up to 180 degrees) were not different, (ii) retrieval times increased with departure from upright and (iii) increased more sharply as departure from upright approached 180 degrees. The rotation-invariant retrieval of the activity pattern has been accomplished by means of the modified algorithm of Dotsenko (Dotsenko V S 1988 J. Phys. A: Math. Gen. 21 L783-7) proposed for translation-, rotation- and size-invariant pattern recognition, which uses relaxation of neuronal firing thresholds to guide the evolution of the ANN in state space towards the desired memory attractor. The dynamics of neuronal relaxation has been modified for storage and retrieval of low-activity patterns and the original gradient optimization of threshold dynamics has been replaced with optimization by simulated annealing.
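
    The attractor-retrieval core of such a model (without the threshold-relaxation mechanism that provides rotation invariance in the paper) is sketched below: a corrupted cue relaxes to the stored upright pattern under standard Hopfield dynamics. All sizes and the corruption level are assumptions.

```python
# Minimal sketch (standard Hopfield dynamics, not the paper's rotation-
# invariant scheme): retrieval of a stored pattern from a corrupted cue.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5
patterns = np.sign(rng.normal(size=(P, N)))            # stored +/-1 patterns
W = (patterns.T @ patterns) / N                         # Hebbian weights
np.fill_diagonal(W, 0.0)

state = patterns[0].copy()
flip = rng.choice(N, size=40, replace=False)            # corrupt 20 % of bits
state[flip] *= -1

for _ in range(10):                                     # synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with the stored pattern:", float(patterns[0] @ state) / N)
```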

  19. Modeling of the pyruvate production with Escherichia coli: comparison of mechanistic and neural networks-based models.

    Science.gov (United States)

    Zelić, B; Bolf, N; Vasić-Racki, D

    2006-06-01

    Three different models: the unstructured mechanistic black-box model, the input-output neural network-based model and the externally recurrent neural network model were used to describe the pyruvate production process from glucose and acetate using the genetically modified Escherichia coli YYC202 ldhA::Kan strain. The experimental data were used from the recently described batch and fed-batch experiments [ Zelić B, Study of the process development for Escherichia coli-based pyruvate production. PhD Thesis, University of Zagreb, Faculty of Chemical Engineering and Technology, Zagreb, Croatia, July 2003. (In English); Zelić et al. Bioproc Biosyst Eng 26:249-258 (2004); Zelić et al. Eng Life Sci 3:299-305 (2003); Zelić et al Biotechnol Bioeng 85:638-646 (2004)]. The neural networks were built out of the experimental data obtained in the fed-batch pyruvate production experiments with the constant glucose feed rate. The model validation was performed using the experimental results obtained from the batch and fed-batch pyruvate production experiments with the constant acetate feed rate. Dynamics of the substrate and product concentration changes was estimated using two neural network-based models for biomass and pyruvate. It was shown that neural networks could be used for the modeling of complex microbial fermentation processes, even in conditions in which mechanistic unstructured models cannot be applied.

  20. Circuit Models and Experimental Noise Measurements of Micropipette Amplifiers for Extracellular Neural Recordings from Live Animals

    Directory of Open Access Journals (Sweden)

    Chang Hao Chen

    2014-01-01

    Full Text Available Glass micropipettes are widely used to record neural activity from single neurons or clusters of neurons extracellularly in live animals. However, to date, there has been no comprehensive study of noise in extracellular recordings with glass micropipettes. The purpose of this work was to assess various noise sources that affect extracellular recordings and to create model systems in which novel micropipette neural amplifier designs can be tested. An equivalent circuit of the glass micropipette and the noise model of this circuit, which accurately describe the various noise sources involved in extracellular recordings, have been developed. Measurement schemes using dead brain tissue as well as extracellular recordings from neurons in the inferior colliculus, an auditory brain nucleus of an anesthetized gerbil, were used to characterize noise performance and amplification efficacy of the proposed micropipette neural amplifier. According to our model, the major noise sources which influence the signal to noise ratio are the intrinsic noise of the neural amplifier and the thermal noise from distributed pipette resistance. These two types of noise were calculated and measured and were shown to be the dominating sources of background noise for in vivo experiments.
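
    The thermal-noise contribution identified above follows directly from the Johnson-Nyquist formula, v_rms = sqrt(4 k_B T R B). The sketch below evaluates it for an assumed 10 MΩ pipette resistance and 10 kHz bandwidth; these values are illustrative, not the paper's measurements.

```python
# Minimal sketch (assumed values): Johnson-Nyquist thermal noise of the
# distributed pipette resistance, one of the dominant noise sources above.
import numpy as np

k_B = 1.380649e-23        # J/K
T = 310.0                 # K, roughly body temperature
R = 10e6                  # ohm, assumed pipette resistance
B = 10e3                  # Hz, assumed recording bandwidth

v_rms = np.sqrt(4 * k_B * T * R * B)
print(f"thermal noise: {v_rms * 1e6:.1f} uV rms")   # about 41 uV rms
```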

  1. Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling

    Directory of Open Access Journals (Sweden)

    Miguel Aguilera

    2016-09-01

    Full Text Available The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioural metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioural preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioural flexibility with an equivalent model from the point of view of 'internalist neuroscience'. A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioural patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling

  2. Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling.

    Science.gov (United States)

    Aguilera, Miguel; Bedia, Manuel G; Barandiaran, Xabier E

    2016-01-01

    The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioral metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioral preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioral flexibility with an equivalent model from the point of view of "internalist neuroscience." A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioral patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling with the world. We

  3. Fuzzy Entropy: Axiomatic Definition and Neural Networks Model

    Institute of Scientific and Technical Information of China (English)

    QING Ming; CAO Yue; HUANG Tian-min

    2004-01-01

    The measure of uncertainty is adopted as a measure of information. The measures of fuzziness are known as fuzzy information measures. The measure of a quantity of fuzzy information gained from a fuzzy set or fuzzy system is known as fuzzy entropy. Fuzzy entropy has been the focus of study by many researchers in various fields. In this paper, firstly, the axiomatic definition of fuzzy entropy is discussed. Then, a neural network model of fuzzy entropy is proposed, based on the computing capability of neural networks. In the end, two examples are discussed to show the efficiency of the model.
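
    The record above does not spell out the axioms or the network architecture; as a point of reference only, one standard fuzzy entropy measure (De Luca and Termini's logarithmic entropy) can be sketched as follows, with the membership values chosen purely for illustration.

```python
import numpy as np

def fuzzy_entropy(mu, eps=1e-12):
    """De Luca-Termini fuzzy entropy of a fuzzy set given its membership values.

    H = -sum(mu*log(mu) + (1-mu)*log(1-mu)); maximal when every mu = 0.5
    (maximal fuzziness) and zero when every mu is 0 or 1 (a crisp set).
    """
    mu = np.clip(np.asarray(mu, dtype=float), eps, 1.0 - eps)
    return float(-np.sum(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu)))

# Illustrative membership values: a fairly crisp set vs. a maximally fuzzy one.
print(fuzzy_entropy([0.05, 0.90, 0.95, 0.10]))  # small entropy
print(fuzzy_entropy([0.50, 0.50, 0.50, 0.50]))  # maximal entropy for 4 elements
```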

  4. Mouse neuroblastoma cell-based model and the effect of epileptic events on calcium oscillations and neural spikes

    Science.gov (United States)

    Kim, Suhwan; Jung, Unsang; Baek, Juyoung; Lee, Sangwon; Jung, Woonggyu; Kim, Jeehyun; Kang, Shinwon

    2013-01-01

    Recently, mouse neuroblastoma cells have been considered an attractive model for the study of human neurological and prion diseases, and they have been used intensively as a model system in different areas. For example, the differentiation of neuro2a (N2A) cells, receptor-mediated ion currents, and glutamate-induced physiological responses have been actively investigated with these cells. These mouse neuroblastoma N2A cells are of interest because they grow faster than other cells of neural origin and have a number of other advantages. Here, the calcium oscillations and neural spikes of mouse neuroblastoma N2A cells under epileptic conditions are evaluated. Based on our observations of neural spikes in these cells with our proposed imaging modality, we reported that they can be an important model in epileptic activity studies. We concluded that mouse neuroblastoma N2A cells produce epileptic spikes in vitro in the same way as those produced by neurons or astrocytes. This evidence suggests that increased levels of neurotransmitter release, due to the enhancement of free calcium by 4-aminopyridine, cause the mouse neuroblastoma N2A cells to produce epileptic spikes and calcium oscillations.

  5. Active voltammetric microsensors with neural signal processing.

    Energy Technology Data Exchange (ETDEWEB)

    Vogt, M. C.

    1998-12-11

    Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow interest detectors and wide spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical 'signatures' from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can

  6. Active voltammetric microsensors with neural signal processing

    Science.gov (United States)

    Vogt, Michael C.; Skubal, Laura R.

    1999-02-01

    Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow interest detectors and wide spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical 'signatures' from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can detect, recognize, and

  7. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Given the known disconnection theory, geriatric depression could be a useful model for studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data was collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  8. Cognitive-affective neural plasticity following active-controlled mindfulness intervention

    DEFF Research Database (Denmark)

    Allen, Micah Galen

    Mindfulness meditation is a set of attention-based, regulatory and self-inquiry training regimes. Although the impact of mindfulness meditation training (MT) on self-regulation is well established, the neural mechanisms supporting such plasticity are poorly understood. MT is thought to act through...... prefrontal cortex (mPFC), and right anterior insula during negative valence processing. Our findings highlight the importance of active control in MT research, indicate unique neural mechanisms for progressive stages of mindfulness training, and suggest that optimal application of MT may differ depending...

  9. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    OpenAIRE

    Francisco Javier Ordóñez; Daniel Roggen

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we pro...

  10. A Pruning Neural Network Model in Credit Classification Analysis

    Directory of Open Access Journals (Sweden)

    Yajiao Tang

    2018-01-01

    Full Text Available Nowadays, credit classification models are widely applied because they can help financial decision-makers to handle credit classification issues. Among them, artificial neural networks (ANNs) have been widely accepted as convincing methods in the credit industry. In this paper, we propose a pruning neural network (PNN) and apply it to solve the credit classification problem using the well-known Australian and Japanese credit datasets. The model is inspired by the synaptic nonlinearity of a dendritic tree in a biological neural model, and it is trained by an error back-propagation algorithm. The model is capable of realizing a neuronal pruning function by removing superfluous synapses and useless dendrites, forming a tidy dendritic morphology at the end of learning. Furthermore, we utilize logic circuits (LCs) to simulate the dendritic structures successfully, which allows the PNN to be implemented effectively in hardware. The statistical results of our experiments verify that the PNN obtains superior performance in comparison with other classical algorithms in terms of accuracy and computational efficiency.
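
    The record does not give the pruning rule itself; the following is only a generic sketch of magnitude-based synaptic pruning after back-propagation training (a toy dataset stands in for the Australian/Japanese credit sets), not the paper's dendritic-tree formulation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy stand-in for a credit dataset (the real benchmark sets are not bundled here).
X, y = make_classification(n_samples=600, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("accuracy before pruning:", net.score(X_te, y_te))

# Magnitude-based pruning: zero out the weakest 70% of synapses in each layer.
for W in net.coefs_:
    cutoff = np.quantile(np.abs(W), 0.70)
    W[np.abs(W) < cutoff] = 0.0
print("accuracy after pruning :", net.score(X_te, y_te))
```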

  11. Comparison of Multiple Linear Regressions and Neural Networks based QSAR models for the design of new antitubercular compounds.

    Science.gov (United States)

    Ventura, Cristina; Latino, Diogo A R S; Martins, Filomena

    2013-01-01

    The performance of two QSAR methodologies, namely Multiple Linear Regressions (MLR) and Neural Networks (NN), towards the modeling and prediction of antitubercular activity was evaluated and compared. A data set of 173 potentially active compounds belonging to the hydrazide family and represented by 96 descriptors was analyzed. Models were built with Multiple Linear Regressions (MLR), single Feed-Forward Neural Networks (FFNNs), ensembles of FFNNs and Associative Neural Networks (AsNNs) using four different data sets and different types of descriptors. The predictive ability of the different techniques used was assessed and discussed on the basis of different validation criteria, and results show in general a better performance of AsNNs in terms of learning ability and prediction of antitubercular behaviors when compared with all other methods. MLR has, however, the advantage of pinpointing the most relevant molecular characteristics responsible for the behavior of these compounds against Mycobacterium tuberculosis. The best results for the larger data set (94 compounds in training set and 18 in test set) were obtained with AsNNs using seven descriptors (R² of 0.874 and RMSE of 0.437 against R² of 0.845 and RMSE of 0.472 in MLRs, for test set). Counter-Propagation Neural Networks (CPNNs) were trained with the same data sets and descriptors. From the scrutiny of the weight levels in each CPNN and the information retrieved from MLRs, a rational design of potentially active compounds was attempted. Two new compounds were synthesized and tested against M. tuberculosis showing an activity close to that predicted by the majority of the models. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
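
    The abstract reports R² and RMSE on a held-out test set; as a reminder of how such test-set statistics are computed (the numbers below are random placeholders, not the paper's data), a minimal sketch:

```python
import numpy as np

def r2_and_rmse(y_true, y_pred):
    """Coefficient of determination and root mean square error for a test set."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, rmse

# Hypothetical predicted vs. measured activities for an 18-compound test set.
rng = np.random.default_rng(1)
measured = rng.normal(0.0, 1.0, 18)
predicted = measured + rng.normal(0.0, 0.4, 18)
print(r2_and_rmse(measured, predicted))
```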

  12. Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns.

    Directory of Open Access Journals (Sweden)

    Andrea Maesani

    2015-11-01

    Full Text Available The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs (locomotor bouts) matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior.

  13. Modeling of surface dust concentrations using neural networks and kriging

    Science.gov (United States)

    Buevich, Alexander G.; Medvedev, Alexander N.; Sergeev, Alexander P.; Tarasov, Dmitry A.; Shichkin, Andrey V.; Sergeeva, Marina V.; Atanasova, T. B.

    2016-12-01

    Creating models which are able to accurately predict the distribution of pollutants based on a limited set of input data is an important task in environmental studies. In this paper, two neural approaches (multilayer perceptron (MLP) and generalized regression neural network (GRNN)) and two geostatistical approaches (kriging and cokriging) are used for modeling and forecasting of dust concentrations in snow cover. The area of study is under the influence of dust emissions from a copper quarry and several industrial companies. A comparison of the mentioned approaches is conducted. Three indices are used as indicators of the models' accuracy: the mean absolute error (MAE), root mean square error (RMSE) and relative root mean square error (RRMSE). Models based on artificial neural networks (ANN) have shown better accuracy. When considering all indices, the most precise model was the GRNN, which uses the coordinates of the sampling points and the distance to the probable emission source as input parameters. The results of this work confirm that a trained ANN may be a more suitable tool for modeling dust concentrations in snow cover.
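
    A GRNN is, in essence, a Gaussian-kernel-weighted average of the training targets (the Nadaraya-Watson form); a minimal sketch under that reading follows, with synthetic coordinates, a hand-picked smoothing width, and a made-up distance-to-source relationship standing in for the real survey data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized regression neural network: a Gaussian-kernel weighted
    average of the training targets (Nadaraya-Watson form)."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        preds.append(np.sum(w * y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

# Synthetic sampling points: (x, y) coordinates plus distance to the emission source.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 3))
y = 100.0 / (1.0 + X[:, 2]) + rng.normal(0, 1, 50)  # dust falls off with distance
print(grnn_predict(X, y, X[:5], sigma=1.0))
```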

  14. High baseline activity in inferior temporal cortex improves neural and behavioral discriminability during visual categorization

    Science.gov (United States)

    Emadi, Nazli; Rajimehr, Reza; Esteky, Hossein

    2014-01-01

    Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from the single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically we found that a low-frequency (baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and eventually a higher behavioral performance. PMID:25404900

  15. Differences in Neural Activation as a Function of Risk-taking Task Parameters

    Directory of Open Access Journals (Sweden)

    Eliza Congdon

    2013-09-01

    Full Text Available Despite evidence supporting a relationship between impulsivity and naturalistic risk-taking, the relationship of impulsivity with laboratory-based measures of risky decision-making remains unclear. One factor contributing to this gap in our understanding is the degree to which different risky decision-making tasks vary in their details. We conducted an fMRI investigation of the Angling Risk Task (ART), which is an improved behavioral measure of risky decision-making. In order to examine whether the observed pattern of neural activation was specific to the ART or generalizable, we also examined correlates of the Balloon Analogue Risk Taking (BART) task in the same sample of 23 healthy adults. Exploratory analyses were conducted to examine the relationship between neural activation, performance, impulsivity and self-reported risk-taking. While activation in a valuation network was associated with reward tracking during the ART but not the BART, increased fronto-cingulate activation was seen during risky choice trials in the BART as compared to the ART. Thus, neural activation during risky decision-making trials differed between the two tasks, and this observation was likely driven by differences in task parameters, namely the absence vs. presence of ambiguity and/or stationary vs. increasing probability of loss on the ART and BART, respectively. Exploratory association analyses suggest that sensitivity of neural response to the magnitude of potential reward during the ART was associated with a suboptimal performance strategy, higher scores on a scale of dysfunctional impulsivity and a greater likelihood of engaging in risky behaviors, while this pattern was not seen for the BART. Our results suggest that the ART is decomposable and associated with distinct patterns of neural activation; this represents a preliminary step towards characterizing a behavioral measure of risky decision-making that may support a better understanding of naturalistic risk-taking.

  16. Comparing Models GRM, Refraction Tomography and Neural Network to Analyze Shallow Landslide

    Directory of Open Access Journals (Sweden)

    Armstrong F. Sompotan

    2011-11-01

    Full Text Available Detailed investigations of landslides are essential to understand fundamental landslide mechanisms. The seismic refraction method has been proven a useful geophysical tool for investigating shallow landslides. The objective of this study is to introduce a new workflow using neural networks for analyzing seismic refraction data and to compare the result with established methods, namely the general reciprocal method (GRM) and refraction tomography. The GRM is effective when the velocity structure is relatively simple and refractors are gently dipping. Refraction tomography is capable of modeling the complex velocity structures of landslides. Neural networks show particular promise in applications where numerical methods are time-consuming and complicated. Neural networks seem to have the ability to establish a relationship between an input and output space for mapping seismic velocity. Therefore, we made a preliminary attempt to evaluate the applicability of neural networks for determining the velocity and elevation of synthetic subsurface models corresponding to arrival times. The training and testing of the neural network were successfully accomplished using the synthetic data. Furthermore, we evaluated the neural network using observed data. The result of the evaluation indicates that the neural network can compute velocity and elevation corresponding to arrival times. The similarity of those models shows the success of the neural network as a new alternative in seismic refraction data interpretation.

  17. Microglia modulate hippocampal neural precursor activity in response to exercise and aging.

    Science.gov (United States)

    Vukovic, Jana; Colditz, Michael J; Blackmore, Daniel G; Ruitenberg, Marc J; Bartlett, Perry F

    2012-05-09

    Exercise has been shown to positively augment adult hippocampal neurogenesis; however, the cellular and molecular pathways mediating this effect remain largely unknown. Previous studies have suggested that microglia may have the ability to differentially instruct neurogenesis in the adult brain. Here, we used transgenic Csf1r-GFP mice to investigate whether hippocampal microglia directly influence the activation of neural precursor cells. Our results revealed that an exercise-induced increase in neural precursor cell activity was mediated via endogenous microglia and abolished when these cells were selectively removed from hippocampal cultures. Conversely, microglia from the hippocampi of animals that had exercised were able to activate latent neural precursor cells when added to neurosphere preparations from sedentary mice. We also investigated the role of CX(3)CL1, a chemokine that is known to provide a more neuroprotective microglial phenotype. Intraparenchymal infusion of a blocking antibody against the CX(3)CL1 receptor, CX(3)CR1, but not control IgG, dramatically reduced the neurosphere formation frequency in mice that had exercised. While an increase in soluble CX(3)CL1 was observed following running, reduced levels of this chemokine were found in the aged brain. Lower levels of CX(3)CL1 with advancing age correlated with the natural decline in neural precursor cell activity, a state that could be partially alleviated through removal of microglia. These findings provide the first direct evidence that endogenous microglia can exert a dual and opposing influence on neural precursor cell activity within the hippocampus, and that signaling through the CX(3)CL1-CX(3)CR1 axis critically contributes toward this process.

  18. The reliability of nonlinear least-squares algorithm for data analysis of neural response activity during sinusoidal rotational stimulation in semicircular canal neurons.

    Science.gov (United States)

    Ren, Pengyu; Li, Bowen; Dong, Shiyao; Chen, Lin; Zhang, Yuelin

    2018-01-01

    Although many mathematical methods have been used to analyze neural activity under sinusoidal stimulation within the linear response range of the vestibular system, the reliability of these methods has not been reported, especially in the nonlinear response range. Here we chose a nonlinear least-squares algorithm (NLSA) with a sinusoidal model to analyze the neural response of semicircular canal neurons (SCNs) during sinusoidal rotational stimulation (SRS) over a nonlinear response range. Our aim was to acquire a reliable mathematical method for data analysis under SRS in the vestibular system. Our data indicated that the reliability of this method over an entire SCN population was quite satisfactory. However, the reliability was strongly negatively dependent on the neural discharge regularity. In addition, the stimulation parameters were vital factors influencing the reliability: the frequency had a significant negative effect, whereas the amplitude had a conspicuous positive effect. Thus, NLSA with a sinusoidal model proved to be a reliable mathematical tool for analyzing neural response activity under SRS in the vestibular system, and it is more suitable for stimulation with low frequency but high amplitude, suggesting that the method can be used in the nonlinear response range. This approach removes a restriction on neural activity analysis in the nonlinear response range and provides a solid foundation for future studies of the nonlinear response range in the vestibular system.
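
    The record does not list the fitted parameters, but a nonlinear least-squares fit of a sinusoidal rate model to firing-rate data is straightforward to sketch; the stimulus frequency, noise level, and data below are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoidal_rate(t, baseline, gain, phase, freq=0.5):
    """Firing-rate model for sinusoidal rotation at a fixed stimulus frequency (Hz)."""
    return baseline + gain * np.sin(2 * np.pi * freq * t + phase)

# Synthetic response of a canal neuron to 0.5 Hz rotation, with additive noise.
t = np.linspace(0, 10, 500)
true_rate = sinusoidal_rate(t, baseline=40.0, gain=15.0, phase=0.6)
observed = true_rate + np.random.default_rng(2).normal(0, 4, t.size)

# Fit baseline, gain and phase (freq kept fixed at its default value).
params, _ = curve_fit(sinusoidal_rate, t, observed, p0=[30.0, 10.0, 0.0])
print("baseline, gain, phase =", params)
```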

  19. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Ammar Daskin

    2018-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flatting the eigenvalues. In infinity, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the pha...

  20. Modelling the nonlinearity of piezoelectric actuators in active ...

    African Journals Online (AJOL)

    Piezoelectric actuators have great capabilities as elements of intelligent structures for active vibration cancellation. One problem with this type of actuator is its nonlinear behaviour. In active vibration control systems, it is important to have an accurate model of the control branch. This paper demonstrates the ability of neural ...

  1. Synaptic model for spontaneous activity in developing networks

    DEFF Research Database (Denmark)

    Lerchner, Alexander; Rinzel, J.

    2005-01-01

    Spontaneous rhythmic activity occurs in many developing neural networks. The activity in these hyperexcitable networks is comprised of recurring "episodes" consisting of "cycles" of high activity that alternate with "silent phases" with little or no activity. We introduce a new model of synaptic dynamics that takes into account that only a fraction of the vesicles stored in a synaptic terminal is readily available for release. We show that our model can reproduce spontaneous rhythmic activity with the same general features as observed in experiments, including a positive correlation between...

  2. Electrocardiogram (ECG) Signal Modeling and Noise Reduction Using Hopfield Neural Networks

    Directory of Open Access Journals (Sweden)

    F. Bagheri

    2013-02-01

    Full Text Available The Electrocardiogram (ECG) signal is one of the diagnostic approaches to detect heart disease. In this study the Hopfield Neural Network (HNN) is applied and proposed for ECG signal modeling and noise reduction. The Hopfield Neural Network (HNN) is a recurrent neural network that stores the information in a dynamic stable pattern. This algorithm retrieves a pattern stored in memory in response to the presentation of an incomplete or noisy version of that pattern. Computer simulation results show that this method can successfully model the ECG signal and remove high-frequency noise.
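
    The ECG-specific encoding is not described in the record; the sketch below shows only the generic Hopfield mechanics the abstract relies on: Hebbian storage of binary patterns and recall from a noisy cue.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_hopfield(patterns):
    """Hebbian outer-product rule over +/-1 patterns; zero self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / patterns.shape[0]

def recall(W, state, steps=20):
    """Synchronous updates until the state settles into a stored attractor."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

stored = rng.choice([-1, 1], size=(3, 64))                   # three template patterns
noisy = stored[0] * rng.choice([1, -1], 64, p=[0.9, 0.1])    # cue with 10% of bits flipped
print("bits recovered:", np.sum(recall(train_hopfield(stored), noisy) == stored[0]))
```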

  3. The BDNF Val66Met Polymorphism Influences Reading Ability and Patterns of Neural Activation in Children.

    Directory of Open Access Journals (Sweden)

    Kaja K Jasińska

    Full Text Available Understanding how genes impact the brain's functional activation for learning and cognition during development remains limited. We asked whether a common genetic variant in the BDNF gene (the Val66Met polymorphism) modulates neural activation in the young brain during a critical period for the emergence and maturation of the neural circuitry for reading. In animal models, the bdnf variation has been shown to be associated with the structure and function of the developing brain and in humans it has been associated with multiple aspects of cognition, particularly memory, which are relevant for the development of skilled reading. Yet, little is known about the impact of the Val66Met polymorphism on functional brain activation in development, either in animal models or in humans. Here, we examined whether the BDNF Val66Met polymorphism (dbSNP rs6265) is associated with children's (age 6-10) neural activation patterns during a reading task (n = 81) using functional magnetic resonance imaging (fMRI), genotyping, and standardized behavioral assessments of cognitive and reading development. Children homozygous for the Val allele at the SNP rs6265 of the BDNF gene outperformed Met allele carriers on reading comprehension and phonological memory, tasks that have a strong memory component. Consistent with these behavioral findings, Met allele carriers showed greater activation in reading-related brain regions including the fusiform gyrus, the left inferior frontal gyrus and left superior temporal gyrus as well as greater activation in the hippocampus during a word and pseudoword reading task. Increased engagement of memory and spoken language regions for Met allele carriers relative to Val/Val homozygotes during reading suggests that Met carriers have to exert greater effort required to retrieve phonological codes.

  4. The BDNF Val66Met Polymorphism Influences Reading Ability and Patterns of Neural Activation in Children.

    Science.gov (United States)

    Jasińska, Kaja K; Molfese, Peter J; Kornilov, Sergey A; Mencl, W Einar; Frost, Stephen J; Lee, Maria; Pugh, Kenneth R; Grigorenko, Elena L; Landi, Nicole

    2016-01-01

    Understanding how genes impact the brain's functional activation for learning and cognition during development remains limited. We asked whether a common genetic variant in the BDNF gene (the Val66Met polymorphism) modulates neural activation in the young brain during a critical period for the emergence and maturation of the neural circuitry for reading. In animal models, the bdnf variation has been shown to be associated with the structure and function of the developing brain and in humans it has been associated with multiple aspects of cognition, particularly memory, which are relevant for the development of skilled reading. Yet, little is known about the impact of the Val66Met polymorphism on functional brain activation in development, either in animal models or in humans. Here, we examined whether the BDNF Val66Met polymorphism (dbSNP rs6265) is associated with children's (age 6-10) neural activation patterns during a reading task (n = 81) using functional magnetic resonance imaging (fMRI), genotyping, and standardized behavioral assessments of cognitive and reading development. Children homozygous for the Val allele at the SNP rs6265 of the BDNF gene outperformed Met allele carriers on reading comprehension and phonological memory, tasks that have a strong memory component. Consistent with these behavioral findings, Met allele carriers showed greater activation in reading-related brain regions including the fusiform gyrus, the left inferior frontal gyrus and left superior temporal gyrus as well as greater activation in the hippocampus during a word and pseudoword reading task. Increased engagement of memory and spoken language regions for Met allele carriers relative to Val/Val homozygotes during reading suggests that Met carriers have to exert greater effort required to retrieve phonological codes.

  5. Recurrent Neural Network Model for Constructive Peptide Design.

    Science.gov (United States)

    Müller, Alex T; Hiss, Jan A; Schneider, Gisbert

    2018-02-26

    We present a generative long short-term memory (LSTM) recurrent neural network (RNN) for combinatorial de novo peptide design. RNN models capture patterns in sequential data and generate new data instances from the learned context. Amino acid sequences represent a suitable input for these machine-learning models. Generative models trained on peptide sequences could therefore facilitate the design of bespoke peptide libraries. We trained RNNs with LSTM units on pattern recognition of helical antimicrobial peptides and used the resulting model for de novo sequence generation. Of these sequences, 82% were predicted to be active antimicrobial peptides compared to 65% of randomly sampled sequences with the same amino acid distribution as the training set. The generated sequences also lie closer to the training data than manually designed amphipathic helices. The results of this study showcase the ability of LSTM RNNs to construct new amino acid sequences within the applicability domain of the model and motivate their prospective application to peptide and protein design without the need for the exhaustive enumeration of sequence libraries.
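
    A character-level LSTM over the 20-letter amino-acid alphabet captures the gist of such a generative model; the skeleton below (PyTorch, with arbitrary layer sizes and sampling temperature, shown untrained) is an illustration rather than the published architecture.

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
stoi = {a: i for i, a in enumerate(AMINO_ACIDS)}

class PeptideLSTM(nn.Module):
    def __init__(self, vocab=20, embed=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

def sample(model, start="G", length=18, temperature=1.0):
    """Autoregressively sample one peptide sequence from the model."""
    model.eval()
    seq = [stoi[start]]
    state = None
    x = torch.tensor([[seq[-1]]])
    with torch.no_grad():
        for _ in range(length - 1):
            logits, state = model(x, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            seq.append(nxt)
            x = torch.tensor([[nxt]])
    return "".join(AMINO_ACIDS[i] for i in seq)

model = PeptideLSTM()  # untrained here; training would minimize next-residue cross-entropy
print(sample(model))
```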

  6. Information content of neural networks with self-control and variable activity

    International Nuclear Information System (INIS)

    Bolle, D.; Amari, S.I.; Dominguez Carreta, D.R.C.; Massolo, G.

    2001-01-01

    A self-control mechanism for the dynamics of neural networks with variable activity is discussed using a recursive scheme for the time evolution of the local field. It is based upon the introduction of a self-adapting time-dependent threshold as a function of both the neural and pattern activity in the network. This mechanism leads to an improvement of the information content of the network as well as an increase of the storage capacity and the basins of attraction. Different architectures are considered and the results are compared with numerical simulations

  7. Nonlinear signal processing using neural networks: Prediction and system modelling

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A.; Farber, R.

    1987-06-01

    The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global, approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
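
    The flavour of the prediction experiment can be reproduced with any small feed-forward regressor: embed a chaotic series and learn the one-step-ahead map. The sketch below uses the logistic map and scikit-learn's MLPRegressor as stand-ins for the original series and hand-rolled backpropagation code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Chaotic logistic-map series x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(600)
x[0] = 0.3
for t in range(599):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Embed the series: predict x_{t+1} from the previous 4 values.
lag = 4
X = np.array([x[i:i + lag] for i in range(len(x) - lag)])
y = x[lag:]
X_tr, y_tr, X_te, y_te = X[:400], y[:400], X[400:], y[400:]

net = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                   max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("test RMSE:", np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2)))
```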

  8. Field-theoretic approach to fluctuation effects in neural networks

    International Nuclear Information System (INIS)

    Buice, Michael A.; Cowan, Jack D.

    2007-01-01

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginzburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.

  9. Neural activity in the hippocampus predicts individual visual short-term memory capacity.

    Science.gov (United States)

    von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter

    2013-07-01

    Although the hippocampus had been traditionally thought to be exclusively involved in long-term memory, recent studies raised controversial explanations why hippocampal activity emerged during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period  =  900 ms; set size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K-formula), two performance groups were formed (high and low performers). Within whole brain analyses, we found a robust main effect of "set size" in the posterior parietal cortex (PPC). In line with a "set size × group" interaction in the hippocampus, a subsequent Finite Impulse Response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: Low performers (mean capacity  =  3.63) elicited increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity  =  5.19) showed an incremental activity increase with larger set size (maximal activation at set size six). Our data demonstrated that performance-related neural activity in the hippocampus emerged below capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
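
    Cowan's K is conventionally computed from hit and false-alarm rates in a change-detection design; the exact scoring used in the study is not given in the record, so the formula and numbers below are purely illustrative.

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K estimate of visual short-term memory capacity:
    K = N * (H - FA), where N is the number of items in the display."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers: at set size 6, 80% hits and 15% false alarms.
print(cowans_k(6, 0.80, 0.15))  # ~3.9 items
```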

  10. Adaptive control using a hybrid-neural model: application to a polymerisation reactor

    Directory of Open Access Journals (Sweden)

    Cubillos F.

    2001-01-01

    Full Text Available This work presents the use of a hybrid-neural model for predictive control of a plug flow polymerisation reactor. The hybrid-neural model (HNM) is based on fundamental conservation laws associated with a neural network (NN) used to model the uncertain parameters. By simulations, the performance of this approach was studied for a peroxide-initiated styrene tubular reactor. The HNM was synthesised for a CSTR reactor with a radial basis function neural net (RBFN) used to estimate the reaction rates recursively. The adaptive HNM was incorporated in two model predictive control strategies, a direct synthesis scheme and an optimum steady state scheme. Tests for servo and regulator control showed excellent behaviour following different setpoint variations, and rejecting perturbations. The good generalisation and training capacities of hybrid models, associated with the simplicity and robustness characteristics of the MPC formulations, make an attractive combination for the control of a polymerisation reactor.

  11. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    The nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS) which is a substitute for oil fuel, based on physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of parent coal are 3 indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction result than the traditional polynomial regression equation. The BP neural network model with 3 input factors of HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error of 0.40%, which is much lower than that of 1.15% given by the traditional polynomial regression equation. (author)

  12. Neural Networks Method in modeling of the financial company’s performance

    Directory of Open Access Journals (Sweden)

    I. P. Kurochkina

    2017-01-01

    with a range of quantitative parameters: conditional-ideal, real, the worst. The authors' factor selection algorithm complements the developed model. Based on the output of the neural network, a management report on the financial performance of the company is generated. During the research, the following methods were used: a systems approach to the classification of financial result factors, factor analysis, and mathematical modeling in the development of the corresponding neural model. The research builds on a body of theoretical and empirical work by domestic and foreign authors. Actual data from a real economic entity were used to verify the research results. The advantage of the model is its ability to track changes in the input data and indicators online and to build quality forecasts for future periods using different combinations of the full set of factors. The proposed factor analysis instrument has been tested on the activities of real companies. The identified factors can support growth in financial results, enhance the visualization of business processes, and increase the probability of rational management decisions.

  13. A neural model of motion processing and visual navigation by cortical area MST.

    Science.gov (United States)

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  14. MODELLING OF CONCENTRATION LIMITS BASED ON NEURAL NETWORKS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available We study a model for forecasting concentration limits based on neural network technology, together with software for the implementation of these models. The efficiency of the system is demonstrated on experimental material.

  15. A Possible Neural Representation of Mathematical Group Structures.

    Science.gov (United States)

    Pomi, Andrés

    2016-09-01

    Every cognitive activity has a neural representation in the brain. When humans deal with abstract mathematical structures, for instance finite groups, certain patterns of activity are occurring in the brain that constitute their neural representation. A formal neurocognitive theory must account for all the activities developed by our brain and provide a possible neural representation for them. Associative memories are neural network models that have a good chance of achieving a universal representation of cognitive phenomena. In this work, we present a possible neural representation of mathematical group structures based on associative memory models that store finite groups through their Cayley graphs. A context-dependent associative memory stores the transitions between elements of the group when multiplied by each generator of a given presentation of the group. Under a convenient choice of the vector basis mapping the elements of the group in the neural activity, the input of a vector corresponding to a generator of the group collapses the context-dependent rectangular matrix into a virtual square permutation matrix that is the matrix representation of the generator. This neural representation corresponds to the regular representation of the group, in which to each element is assigned a permutation matrix. This action of the generator on the memory matrix can also be seen as the dissection of the corresponding monochromatic subgraph of the Cayley graph of the group, and the adjacency matrix of this subgraph is the permutation matrix corresponding to the generator.
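
    The collapse of a generator onto a permutation matrix can be made concrete for a small cyclic group: each monochromatic subgraph of the Cayley graph, written as a 0/1 matrix, is exactly the regular-representation matrix of that generator. A minimal sketch (for Z_4, not an example taken from the paper):

```python
import numpy as np

def regular_representation(n, generator=1):
    """Permutation matrix of a generator acting on Z_n by addition:
    column g maps to row (g + generator) mod n, i.e. one monochromatic
    subgraph of the Cayley graph written as a 0/1 adjacency matrix."""
    P = np.zeros((n, n), dtype=int)
    for g in range(n):
        P[(g + generator) % n, g] = 1
    return P

P = regular_representation(4)  # generator of the cyclic group Z_4
print(P)
# The generator has order 4, so its permutation matrix satisfies P^4 = identity.
print(np.array_equal(np.linalg.matrix_power(P, 4), np.eye(4, dtype=int)))
```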

  16. Artificial Neural Network L* from different magnetospheric field models

    Science.gov (United States)

    Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.

    2011-12-01

    The third adiabatic invariant L* plays an important role in modeling and understanding the radiation belt dynamics. The popular way to numerically obtain the L* value follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique, which can compute the L* value in microseconds without losing much accuracy: artificial neural networks. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information needed to trace the drift shell is required. A series of currently popular empirical magnetic field models are applied to create the L* data pool using 1 million data samples which are randomly selected within a solar cycle and within the global magnetosphere. The networks, trained from the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters valid within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and therefore can significantly improve the L* calculation. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle-long studies of radiation belt processes. However, neural networks trained from different magnetic field models can result in different L* values, which could cause misinterpretation of radiation belt dynamics, such as where the source of the radiation belt charged particles is and which mechanism is dominant in accelerating the particles. This calls for caution in choosing a magnetospheric field model for the L* calculation.

  17. Evidence-Based Systematic Review: Effects of Neuromuscular Electrical Stimulation on Swallowing and Neural Activation

    Science.gov (United States)

    Clark, Heather; Lazarus, Cathy; Arvedson, Joan; Schooling, Tracy; Frymark, Tobi

    2009-01-01

    Purpose: To systematically review the literature examining the effects of neuromuscular electrical stimulation (NMES) on swallowing and neural activation. The review was conducted as part of a series examining the effects of oral motor exercises (OMEs) on speech, swallowing, and neural activation. Method: A systematic search was conducted to…

  18. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Daskin, Ammar

    2016-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow-Hoff formula, do not change the eigenvectors of the weight matrix while flatting the eigenvalues. In infinity, these iterative formulas result in terms formed by the principal components of the weight matrix: i.e., the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the pha...

  19. Isolating Discriminant Neural Activity in the Presence of Eye Movements and Concurrent Task Demands

    Directory of Open Access Journals (Sweden)

    Jon Touryan

    2017-07-01

    Full Text Available A growing number of studies use the combination of eye-tracking and electroencephalographic (EEG) measures to explore the neural processes that underlie visual perception. In these studies, fixation-related potentials (FRPs) are commonly used to quantify early and late stages of visual processing that follow the onset of each fixation. However, FRPs reflect a mixture of bottom-up (sensory-driven) and top-down (goal-directed) processes, in addition to eye movement artifacts and unrelated neural activity. At present there is little consensus on how to separate this evoked response into its constituent elements. In this study we sought to isolate the neural sources of target detection in the presence of eye movements and over a range of concurrent task demands. Here, participants were asked to identify visual targets (Ts) amongst a grid of distractor stimuli (Ls), while simultaneously performing an auditory N-back task. To identify the discriminant activity, we used independent components analysis (ICA) for the separation of EEG into neural and non-neural sources. We then further separated the neural sources, using a modified measure-projection approach, into six regions of interest (ROIs): occipital, fusiform, temporal, parietal, cingulate, and frontal cortices. Using activity from these ROIs, we identified target from non-target fixations in all participants at a level similar to other state-of-the-art classification techniques. Importantly, we isolated the time course and spectral features of this discriminant activity in each ROI. In addition, we were able to quantify the effect of cognitive load on both fixation-locked potential and classification performance across regions. Together, our results show the utility of a measure-projection approach for separating task-relevant neural activity into meaningful ROIs within more complex contexts that include eye movements.

  20. Modeling and prediction of Turkey's electricity consumption using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Kavaklioglu, Kadir; Ozturk, Harun Kemal; Canyurt, Olcay Ersel; Ceylan, Halim

    2009-01-01

    Artificial Neural Networks are proposed to model and predict electricity consumption of Turkey. Multi layer perceptron with backpropagation training algorithm is used as the neural network topology. Tangent-sigmoid and pure-linear transfer functions are selected in the hidden and output layer processing elements, respectively. These input-output network models are a result of relationships that exist among electricity consumption and several other socioeconomic variables. Electricity consumption is modeled as a function of economic indicators such as population, gross national product, imports and exports. It is also modeled using export-import ratio and time input only. Performance comparison among different models is made based on absolute and percentage mean square error. Electricity consumption of Turkey is predicted until 2027 using data from 1975 to 2006 along with other economic indicators. The results show that electricity consumption can be modeled using Artificial Neural Networks, and the models can be used to predict future electricity consumption. (author)

  1. Neural-network analysis of irradiation hardening in low-activation steels

    Energy Technology Data Exchange (ETDEWEB)

    Kemp, R. [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ, UK (United Kingdom)]. E-mail: rk237@cam.ac.uk; Cottrell, G.A. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB, UK (United Kingdom); Bhadeshia, H.K.D.H. [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ, UK (United Kingdom); Odette, G.R. [Department of Mechanical and Environmental Engineering and Department of Materials, University of California Santa Barbara, Santa Barbara, CA 93106 (United States); Yamamoto, T. [Department of Mechanical and Environmental Engineering and Department of Materials, University of California Santa Barbara, Santa Barbara, CA 93106 (United States); Kishimoto, H. [Department of Mechanical and Environmental Engineering and Department of Materials, University of California Santa Barbara, Santa Barbara, CA 93106 (United States)

    2006-02-01

    An artificial neural network has been used to model the irradiation hardening of low-activation ferritic/martensitic steels. The data used to create the model span a range of displacement damage of 0-90 dpa, within a temperature range of 273-973 K and contain 1800 points. The trained model has been able to capture the non-linear dependence of yield strength on the chemical composition and irradiation parameters. The ability of the model to generalise on unseen data has been tested and regions within the input domain that are sparsely populated have been identified. These are the regions where future experiments could be focused. It is shown that this method of analysis, because of its ability to capture complex relationships between the many variables, could help in the design of maximally informative experiments on materials in future irradiation test facilities. This will accelerate the acquisition of the key missing knowledge to assist the materials choices in a future fusion power plant.

  2. A neural network model of the relativistic electron flux at geosynchronous orbit

    International Nuclear Information System (INIS)

    Koons, H.C.; Gorney, D.J.

    1991-01-01

    A neural network has been developed to model the temporal variations of relativistic (>3 MeV) electrons at geosynchronous orbit based on model inputs consisting of 10 consecutive days of the daily sum of the planetary magnetic index ΣKp. The neural network consists of three layers of neurons, containing 10 neurons in the input layer, 6 neurons in a hidden layer, and 1 output neuron. The output is a prediction of the daily-averaged electron flux for the tenth day. The neural network was trained using 62 days of data from July 1, 1984, through August 31, 1984, from the SEE spectrometer on the geosynchronous spacecraft 1982-019. The performance of the model was measured by comparing model outputs with measured fluxes over a 6-year period from April 19, 1982, to June 4, 1988. For the entire data set the rms logarithmic error of the neural network is 0.76, and the average logarithmic error is 0.58. The neural network is essentially zero biased, and for accumulation intervals of 3 days or longer the average logarithmic error is less than 0.1. The neural network provides results that are significantly more accurate than those from linear prediction filters. The model has been used to simulate conditions which are rarely observed in nature, such as long periods of quiet (ΣKp = 0) and ideal impulses. It has also been used to make reasonably accurate day-ahead forecasts of the relativistic electron flux at geosynchronous orbit
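
    The described architecture (10 daily ΣKp inputs, 6 hidden neurons, 1 flux output) is easy to mirror with a generic MLP regressor; the data below are synthetic stand-ins, not the SEE spectrometer measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Synthetic stand-ins for daily-summed Kp and log10 electron flux.
days = 400
kp_sum = rng.uniform(0, 60, days)
log_flux = (1.0 + 0.03 * np.convolve(kp_sum, np.ones(10) / 10, mode="same")
            + rng.normal(0, 0.1, days))

# 10 consecutive days of summed Kp predict the flux on the 10th day (a 10-6-1 net).
X = np.array([kp_sum[i:i + 10] for i in range(days - 10)])
y = log_flux[9:days - 1]
model = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                     max_iter=5000, random_state=0).fit(X[:300], y[:300])
print("held-out R^2:", model.score(X[300:], y[300:]))
```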

  3. Cognitive and neural correlates of depression-like behaviour in socially defeated mice: an animal model of depression with cognitive dysfunction.

    Science.gov (United States)

    Yu, Tao; Guo, Ming; Garza, Jacob; Rendon, Samantha; Sun, Xue-Li; Zhang, Wei; Lu, Xin-Yun

    2011-04-01

    Human depression is associated with cognitive deficits. It is critical to have valid animal models in order to investigate mechanisms and treatment strategies for these associated conditions. The goal of this study was to determine the association of cognitive dysfunction with depression-like behaviour in an animal model of depression and investigate the neural circuits underlying the behaviour. Mice that were exposed to social defeat for 14 d developed depression-like behaviour, i.e. anhedonia and social avoidance as indicated by reduced sucrose preference and decreased social interaction. The assessment of cognitive performance of defeated mice demonstrated impaired working memory in the T-maze continuous alternation task and enhanced fear memory in the contextual and cued fear-conditioning tests. In contrast, reference learning and memory in the Morris water maze test were intact in defeated mice. Neuronal activation following chronic social defeat was investigated by c-fos in situ hybridization. Defeated mice exhibited preferential neural activity in the prefrontal cortex, cingulate cortex, hippocampal formation, septum, amygdala, and hypothalamic nuclei. Taken together, our results suggest that the chronic social defeat mouse model could serve as a valid animal model to study depression with cognitive impairments. The patterns of neuronal activation provide a neural basis for social defeat-induced changes in behaviour.

  4. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions.

    Science.gov (United States)

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage.

  5. Neural network versus classical time series forecasting models

    Science.gov (United States)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    The artificial neural network (ANN) has advantages in time series forecasting because of its potential to solve complex forecasting problems. This is because the ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network was compared with that of a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used for data preprocessing.
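
    A rough sketch of the workflow described above (Box-Cox preprocessing, lagged inputs to an ANN, and the three accuracy measures); this is not the authors' code, and the price series, lag length, and network size are illustrative assumptions.

```python
# Illustrative sketch of Box-Cox preprocessing + ANN forecasting (assumed details).
import numpy as np
from scipy.stats import boxcox
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags=12):
    """Build (X, y) pairs from past values of a univariate series."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(0)
prices = np.abs(rng.standard_normal(500).cumsum()) + 100.0   # placeholder for gold prices
transformed, lam = boxcox(prices)                             # Box-Cox preprocessing

X, y = make_lagged(transformed)
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

# Forecast accuracy measures named in the abstract
err = y[split:] - pred
mad = np.mean(np.abs(err))
rmse = np.sqrt(np.mean(err ** 2))
mape = 100 * np.mean(np.abs(err / y[split:]))
print(f"MAD={mad:.3f}  RMSE={rmse:.3f}  MAPE={mape:.2f}%")
```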

  6. Validating neural-network refinements of nuclear mass models

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2018-01-01

    Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r -process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms≃400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
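
    A schematic illustration of the refinement idea (not the authors' Bayesian implementation): a small network is trained on the residuals between a base "bare" mass model and experiment, and its prediction is then added back to the base model. The base-model formula, features, data, and network size below are placeholders.

```python
# Sketch of neural-network refinement of a base mass model (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor

def bare_model(Z, N):
    """Placeholder for a 'bare' mass model such as Duflo-Zuker (not the real formula)."""
    A = Z + N
    return 15.6 * A - 17.2 * A ** (2 / 3) - 0.71 * Z * (Z - 1) / A ** (1 / 3)

# Hypothetical training set: (Z, N) and stand-in "experimental" binding energies
rng = np.random.default_rng(0)
Z = rng.integers(20, 100, size=400)
N = rng.integers(20, 150, size=400)
exp_binding = bare_model(Z, N) + rng.normal(0.0, 2.0, size=Z.size)

# Train the network on residuals (experiment - bare model), then add them back
X = np.column_stack([Z, N])
residuals = exp_binding - bare_model(Z, N)
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0).fit(X, residuals)

refined = bare_model(Z, N) + net.predict(X)
rms = np.sqrt(np.mean((refined - exp_binding) ** 2))
print(f"rms deviation of refined model: {rms:.3f} MeV (toy numbers)")
```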

  7. Modelling of word usage frequency dynamics using artificial neural network

    International Nuclear Information System (INIS)

    Maslennikova, Yu S; Bochkarev, V V; Voloskov, D S

    2014-01-01

    In this paper a method for modelling word usage frequency time series is proposed. An artificial feedforward neural network was used to predict word usage frequencies. The neural network was trained using the maximum likelihood criterion. The Google Books Ngram corpus was used for the analysis. This database provides a large amount of data on the frequency of specific word forms for 7 languages. Statistical modelling of word usage frequency time series allows an optimal fitting and filtering algorithm to be found for subsequent lexicographic analysis and verification of frequency trend models

  8. Neural Activity During The Formation Of A Giant Auditory Synapse

    NARCIS (Netherlands)

    M.C. Sierksma (Martijn)

    2018-01-01

    The formation of synapses is a critical step in the development of the brain. During this developmental stage neural activity propagates across the brain from synapse to synapse. This activity is thought to instruct the precise, topological connectivity found in the sensory central

  9. Mouse neuroblastoma cell based model and the effect of epileptic events on calcium oscillations and neural spikes

    Science.gov (United States)

    Kim, Suhwan; Baek, Juyeong; Jung, Unsang; Lee, Sangwon; Jung, Woonggyu; Kim, Jeehyun; Kang, Shinwon

    2013-05-01

    Mouse neuroblastoma cells have recently been considered an attractive model for the study of human neurological and prion diseases, and are intensively used as a model system in different areas. Among those areas, differentiation of neuro2a (N2A) cells, receptor-mediated ion currents, and glutamate-induced physiological responses are actively investigated. The interest in mouse neuroblastoma N2A cells stems from their faster growth rate compared with other cells of neural origin, along with a few other advantages. This study evaluated calcium oscillations and neural spike recordings of mouse neuroblastoma N2A cells under an epileptic condition. Based on our observation of neural spikes in mouse N2A cells with our proposed imaging modality, we report that mouse neuroblastoma N2A cells can be an important model for studies of epileptic activity. It is concluded that mouse neuroblastoma N2A cells produce epileptic spikes in vitro in the same way as neurons or astrocytes. This evidence supports an increased and strong level of neurotransmitter release through enhancement of free calcium using 4-aminopyridine, which causes the mouse neuroblastoma N2A cells to produce epileptic spikes and calcium oscillations.

  10. A new neural network model for solving random interval linear programming problems.

    Science.gov (United States)

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
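
    For readers unfamiliar with "neural networks" in the neurodynamic-optimization sense used above, the following generic sketch shows the idea on a trivial convex problem: a continuous-time dynamical system (here a projected gradient flow, integrated with Euler steps) whose stable equilibrium is the optimum. This is an illustration of the general technique only, not the authors' formulation for the second order cone problem.

```python
# Generic sketch of a "neural network" (continuous-time dynamical system) that
# converges to the solution of a convex problem; not the authors' model.
import numpy as np

# Toy convex problem: minimize 0.5*||x - c||^2 subject to x >= 0
c = np.array([1.0, -2.0, 3.0])

def grad(x):
    return x - c

def project(x):
    return np.maximum(x, 0.0)          # projection onto the feasible set

x = np.zeros(3)
dt = 0.01
for _ in range(5000):                  # Euler integration of dx/dt = -x + P(x - grad f(x))
    x = x + dt * (-x + project(x - grad(x)))

print("equilibrium (approx. optimum):", x)   # expected ~ [1, 0, 3]
```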

  11. DAILY RAINFALL-RUNOFF MODELLING BY NEURAL NETWORKS ...

    African Journals Online (AJOL)

    K. Benzineb, M. Remaoun

    2016-09-01

    Sep 1, 2016 ... The hydrologic behaviour modelling of ... Ouahrane's basin from the rainfall-runoff relation, which is non-linear ... will allow checking the efficiency of formal neural networks for flow simulation in a semi-arid zone.

  12. Stacked Heterogeneous Neural Networks for Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Florin Leon

    2010-01-01

    Full Text Available A hybrid model for time series forecasting is proposed. It is a stacked neural network, containing one standard multilayer perceptron with bipolar sigmoid activation functions and another with an exponential activation function in the output layer. As shown by the case studies, the proposed stacked hybrid neural model performs well on a variety of benchmark time series. The combination of weights of the two stack components that leads to optimal performance is also studied.
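
    A rough sketch of the stacking idea: two small networks with different activations are trained on the same series and their outputs are combined with a weight chosen on a validation split. The combination rule, network sizes, and the use of scikit-learn's built-in tanh/relu activations (in place of the bipolar sigmoid and exponential units described in the abstract) are assumptions for illustration.

```python
# Sketch of stacking two heterogeneous networks for one-step-ahead forecasting.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged(series, n_lags=8):
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

t = np.linspace(0, 20, 400)
series = np.sin(t) + 0.1 * np.random.randn(t.size)     # toy benchmark series
X, y = lagged(series)
split = int(0.7 * len(X))

# Two stack components with different activation functions
net_tanh = MLPRegressor(hidden_layer_sizes=(12,), activation="tanh",
                        max_iter=3000, random_state=0).fit(X[:split], y[:split])
net_relu = MLPRegressor(hidden_layer_sizes=(12,), activation="relu",
                        max_iter=3000, random_state=1).fit(X[:split], y[:split])

# Choose the combination weight that minimizes validation error
p1, p2 = net_tanh.predict(X[split:]), net_relu.predict(X[split:])
weights = np.linspace(0, 1, 101)
errors = [np.mean((w * p1 + (1 - w) * p2 - y[split:]) ** 2) for w in weights]
best_w = weights[int(np.argmin(errors))]
print(f"best combination weight for the tanh component: {best_w:.2f}")
```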

  13. Re-evaluation of the AASHTO-flexible pavement design equation with neural network modeling.

    Science.gov (United States)

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model provides coefficients from which the actual load values can be obtained using the AASHTO design values. Thus, those design traffic values that might result in deterioration can be calculated more accurately using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the Strategic Highway Research Program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single-axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can represent the load values data more accurately than the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based, nonparametric models of pavement performance.

  14. Neural principles of memory and a neural theory of analogical insight

    Science.gov (United States)

    Lawson, David I.; Lawson, Anton E.

    1993-12-01

    Grossberg's principles of neural modeling are reviewed and extended to provide a neural level theory to explain how analogies greatly increase the rate of learning and can, in fact, make learning and retention possible. In terms of memory, the key point is that the mind is able to recognize and recall when it is able to match sensory input from new objects, events, or situations with past memory records of similar objects, events, or situations. When a match occurs, an adaptive resonance is set up in which the synaptic strengths of neurons are increased; thus a long term record of the new input is formed in memory. Systems of neurons called outstars and instars are presumably the underlying units that enable this to occur. Analogies can greatly facilitate learning and retention because they activate the outstars (i.e., the cells that are sampling the to-be-learned pattern) and cause the neural activity to grow exponentially by forming feedback loops. This increased activity insures the boost in synaptic strengths of neurons, thus causing storage and retention in long-term memory (i.e., learning).

  15. GH mediates exercise-dependent activation of SVZ neural precursor cells in aged mice.

    Directory of Open Access Journals (Sweden)

    Daniel G Blackmore

    Full Text Available Here we demonstrate, both in vivo and in vitro, that growth hormone (GH) mediates precursor cell activation in the subventricular zone (SVZ) of the aged (12-month-old) brain following exercise, and that GH signaling stimulates precursor activation to a similar extent to exercise. Our results reveal that both addition of GH in culture and direct intracerebroventricular infusion of GH stimulate neural precursor cells in the aged brain. In contrast, no increase in neurosphere numbers was observed in GH receptor null animals following exercise. Continuous infusion of a GH antagonist into the lateral ventricle of wild-type animals completely abolished the exercise-induced increase in neural precursor cell number. Given that the aged brain does not recover well after injury, we investigated the direct effect of exercise and GH on neural precursor cell activation following irradiation. This revealed that physical exercise as well as infusion of GH promoted repopulation of neural precursor cells in irradiated aged animals. Conversely, infusion of a GH antagonist during exercise prevented recovery of precursor cells in the SVZ following irradiation.

  16. GH Mediates Exercise-Dependent Activation of SVZ Neural Precursor Cells in Aged Mice

    Science.gov (United States)

    Blackmore, Daniel G.; Vukovic, Jana; Waters, Michael J.; Bartlett, Perry F.

    2012-01-01

    Here we demonstrate, both in vivo and in vitro, that growth hormone (GH) mediates precursor cell activation in the subventricular zone (SVZ) of the aged (12-month-old) brain following exercise, and that GH signaling stimulates precursor activation to a similar extent to exercise. Our results reveal that both addition of GH in culture and direct intracerebroventricular infusion of GH stimulate neural precursor cells in the aged brain. In contrast, no increase in neurosphere numbers was observed in GH receptor null animals following exercise. Continuous infusion of a GH antagonist into the lateral ventricle of wild-type animals completely abolished the exercise-induced increase in neural precursor cell number. Given that the aged brain does not recover well after injury, we investigated the direct effect of exercise and GH on neural precursor cell activation following irradiation. This revealed that physical exercise as well as infusion of GH promoted repopulation of neural precursor cells in irradiated aged animals. Conversely, infusion of a GH antagonist during exercise prevented recovery of precursor cells in the SVZ following irradiation. PMID:23209615

  17. Use of artificial neural networks for transport energy demand modeling

    International Nuclear Information System (INIS)

    Murat, Yetis Sazi; Ceylan, Halim

    2006-01-01

    The paper illustrates an artificial neural network (ANN) approach based on supervised neural networks for transport energy demand forecasting using socio-economic and transport-related indicators. The ANN transport energy demand model is developed. The actual forecast is obtained using a feed-forward neural network, trained with the back-propagation algorithm. In order to investigate the influence of socio-economic indicators on the transport energy demand, the ANN is analyzed based on gross national product (GNP), population and the total annual average veh-km along with historical energy data available from 1970 to 2001. Model validation is performed by comparing model predictions with energy data in the testing period. The projections are made with two scenarios. It is found that the ANN reflects the fluctuation in historical data for both dependent and independent variables. The results obtained bear out the suitability of the adopted methodology for the transport energy-forecasting problem
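
    A compact sketch of the kind of feed-forward model described above; the indicator values, scaling, network size, and train/test split are invented for illustration and do not reproduce the paper's data or scenarios.

```python
# Sketch of a feedforward ANN mapping socio-economic indicators to transport
# energy demand; all numbers below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
years = 32                                   # e.g. 1970-2001 as in the abstract
gnp = np.linspace(50, 400, years) * (1 + 0.05 * rng.standard_normal(years))
population = np.linspace(35, 70, years)      # millions, placeholder
veh_km = np.linspace(10, 120, years)         # billion vehicle-km, placeholder
energy = 0.01 * gnp + 0.2 * population + 0.3 * veh_km + rng.normal(0, 1, years)

X = np.column_stack([gnp, population, veh_km])
X_scaled = StandardScaler().fit_transform(X)

train = slice(0, 26)                         # hold out the last years for validation
test = slice(26, years)
model = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
model.fit(X_scaled[train], energy[train])
print("test R^2:", round(model.score(X_scaled[test], energy[test]), 3))
```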

  18. Purification of human induced pluripotent stem cell-derived neural precursors using magnetic activated cell sorting.

    Science.gov (United States)

    Rodrigues, Gonçalo M C; Fernandes, Tiago G; Rodrigues, Carlos A V; Cabral, Joaquim M S; Diogo, Maria Margarida

    2015-01-01

    Neural precursor (NP) cells derived from human induced pluripotent stem cells (hiPSCs), and their neuronal progeny, will play an important role in disease modeling, drug screening tests, central nervous system development studies, and may even become valuable for regenerative medicine treatments. Nonetheless, it is challenging to obtain homogeneous and synchronously differentiated NP populations from hiPSCs, and after neural commitment many pluripotent stem cells remain in the differentiated cultures. Here, we describe an efficient and simple protocol to differentiate hiPSC-derived NPs in 12 days, and we include a final purification stage where Tra-1-60+ pluripotent stem cells (PSCs) are removed using magnetic activated cell sorting (MACS), leaving the NP population nearly free of PSCs.

  19. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    Directory of Open Access Journals (Sweden)

    Christian Nowke

    2018-06-01

    Full Text Available Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases—the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable interactive exploration of parameter spaces, support a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, because the visualization is interactive while the simulation is being computed.

  20. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

    Full Text Available The aims of the present study were: to identify the factors which are able to explain the performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons) and to assess the neural network models' precision in predicting the performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamics, hydrostatic and bioenergetics characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network was used (multilayer perceptron with three neurons in a single hidden layer). The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach in the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports

  1. Effect of short-term escitalopram treatment on neural activation during emotional processing.

    Science.gov (United States)

    Maron, Eduard; Wall, Matt; Norbury, Ray; Godlewska, Beata; Terbeck, Sylvia; Cowen, Philip; Matthews, Paul; Nutt, David J

    2016-01-01

    Recent functional magnetic resonance (fMRI) imaging studies have revealed that subchronic medication with escitalopram leads to significant reduction in both amygdala and medial frontal gyrus reactivity during processing of emotional faces, suggesting that escitalopram may have a distinguishable modulatory effect on neural activation as compared with other serotonin-selective antidepressants. In this fMRI study we aimed to explore whether short-term medication with escitalopram in healthy volunteers is associated with reduced neural response to emotional processing, and whether this effect is predicted by drug plasma concentration. The neural response to fearful and happy faces was measured before and on day 7 of treatment with escitalopram (10mg) in 15 healthy volunteers and compared with those in a control unmedicated group (n=14). Significantly reduced activation to fearful, but not to happy facial expressions was observed in the bilateral amygdala, cingulate and right medial frontal gyrus following escitalopram medication. This effect was not correlated with plasma drug concentration. In accordance with previous data, we showed that escitalopram exerts its rapid direct effect on emotional processing via attenuation of neural activation in pathways involving medial frontal gyrus and amygdala, an effect that seems to be distinguishable from that of other SSRIs. © The Author(s) 2015.

  2. Neural Activations of Guided Imagery and Music in Negative Emotional Processing: A Functional MRI Study.

    Science.gov (United States)

    Lee, Sang Eun; Han, Yeji; Park, HyunWook

    2016-01-01

    The Bonny Method of Guided Imagery and Music uses music and imagery to access and explore personal emotions associated with episodic memories. Understanding the neural mechanism of guided imagery and music (GIM) as combined stimuli for emotional processing informs clinical application. We performed functional magnetic resonance imaging (fMRI) to demonstrate neural mechanisms of GIM for negative emotional processing when personal episodic memory is recalled and re-experienced through GIM processes. Twenty-four healthy volunteers participated in the study, which used classical music and verbal instruction stimuli to evoke negative emotions. To analyze the neural mechanism, activated regions associated with negative emotional and episodic memory processing were extracted by conducting volume analyses for the contrast between GIM and guided imagery (GI) or music (M). The GIM stimuli showed increased activation over the M-only stimuli in five neural regions associated with negative emotional and episodic memory processing, including the left amygdala, left anterior cingulate gyrus, left insula, bilateral culmen, and left angular gyrus (AG). Compared with GI alone, GIM showed increased activation in three regions associated with episodic memory processing in the emotional context, including the right posterior cingulate gyrus, bilateral parahippocampal gyrus, and AG. No neural regions related to negative emotional and episodic memory processing showed more activation for M and GI than for GIM. As a combined multimodal stimulus, GIM may increase neural activations related to negative emotions and episodic memory processing. Findings suggest a neural basis for GIM with personal episodic memories affecting cortical and subcortical structures and functions. © the American Music Therapy Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
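
    The following is an extended-Kalman-style sketch of the sequential weight and covariance update idea discussed above, shown on a trivial linear model to keep it short; it illustrates recursive Bayesian estimation in general, not the authors' recursive Levenberg-Marquardt formulation for recurrent networks, and all quantities are invented.

```python
# Extended-Kalman-style recursive estimation of model weights (illustrative).
# A linear model stands in for the recurrent network to keep the sketch short.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.7, -0.3])
n_steps = 200

w = np.zeros(2)                 # weight estimate
P = np.eye(2) * 10.0            # weight covariance (uncertainty)
R = 0.1                         # assumed observation-noise variance

for t in range(n_steps):
    x = rng.standard_normal(2)              # input at this time step
    y = true_w @ x + rng.normal(0, np.sqrt(R))

    H = x                                    # Jacobian of the output w.r.t. weights
    S = H @ P @ H + R                        # innovation variance
    K = P @ H / S                            # Kalman gain
    w = w + K * (y - w @ x)                  # sequential weight update
    P = P - np.outer(K, H @ P)               # covariance update

print("estimated weights:", np.round(w, 3), "true:", true_w)
```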

  4. Characterization of K-complexes and slow wave activity in a neural mass model.

    Directory of Open Access Journals (Sweden)

    Arne Weigenand

    2014-11-01

    Full Text Available NREM sleep is characterized by two hallmarks, namely K-complexes (KCs) during sleep stage N2 and cortical slow oscillations (SOs) during sleep stage N3. While the underlying dynamics on the neuronal level is well known and can be easily measured, the resulting behavior on the macroscopic population level remains unclear. On the basis of an extended neural mass model of the cortex, we suggest a new interpretation of the mechanisms responsible for the generation of KCs and SOs. As the cortex transitions from wake to deep sleep, in our model it approaches an oscillatory regime via a Hopf bifurcation. Importantly, there is a canard phenomenon arising from a homoclinic bifurcation, whose orbit determines the shape of large amplitude SOs. A KC corresponds to a single excursion along the homoclinic orbit, while SOs are noise-driven oscillations around a stable focus. The model generates both time series and spectra that strikingly resemble real electroencephalogram data and points out possible differences between the different stages of natural sleep.

  5. High baseline activity in inferior temporal cortex improves neural and behavioral discriminability during visual categorization

    Directory of Open Access Journals (Sweden)

    Nazli eEmadi

    2014-11-01

    Full Text Available Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically we found that a low-frequency (< 8 Hz) oscillation in the spike train, prior to and phase-locked to the stimulus onset, was correlated with increased gamma power and neuronal baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and, eventually, higher behavioral performance.

  6. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:366–381, 2016 (ACML 2016). Collaborative Recurrent Neural Networks for Dynamic Recommender Systems. ... Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating ... Keywords: Recurrent Neural Network, Recommender System, Neural Language Model, Collaborative Filtering.

  7. Application of neural networks to software quality modeling of a very large telecommunications system.

    Science.gov (United States)

    Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J

    1997-01-01

    Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
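
    A schematic of the modeling setup described above (design measures reduced to principal components, then a small network classifying modules as fault-prone); the synthetic data, the number of retained components, and the class threshold are placeholders, not values from the EMERALD study.

```python
# Sketch: principal components of software design measures feeding a small
# neural-network classifier of fault-prone modules (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_modules = 1000
design_measures = rng.standard_normal((n_modules, 9))          # nine design attributes
risk = design_measures[:, :3].sum(axis=1) + rng.normal(0, 1, n_modules)
fault_prone = (risk > 1.0).astype(int)                          # class membership

components = PCA(n_components=4).fit_transform(design_measures)
X_tr, X_te, y_tr, y_te = train_test_split(components, fault_prone,
                                           test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("classification accuracy on held-out modules:", round(clf.score(X_te, y_te), 3))
```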

  8. Neural Network Models for Free Radical Polymerization of Methyl Methacrylate

    International Nuclear Information System (INIS)

    Curteanu, S.; Leon, F.; Galea, D.

    2003-01-01

    In this paper, a neural network modeling of the batch bulk methyl methacrylate polymerization is performed. To obtain conversion, number and weight average molecular weights, three neural networks were built. Each was a multilayer perceptron with one or two hidden layers. The choice of network topology, i.e. the number of hidden layers and the number of neurons in these layers, was based on achieving a compromise between precision and complexity. Thus, it was intended to have an error as small as possible at the end of back-propagation training phases, while using a network with reduced complexity. The performances of the networks were evaluated by comparing network predictions with training data, validation data (which were not used for training), and with the results of a mechanistic model. The accurate predictions of neural networks for monomer conversion, number average molecular weight and weight average molecular weight prove that this modeling methodology gives a good representation and generalization of the batch bulk methyl methacrylate polymerization. (author)

  9. Activity-dependent modulation of neural circuit synaptic connectivity

    Directory of Open Access Journals (Sweden)

    Charles R Tessier

    2009-07-01

    Full Text Available In many nervous systems, the establishment of neural circuits is known to proceed via a two-stage process: 1) early, activity-independent wiring to produce a rough map characterized by excessive synaptic connections, and 2) subsequent, use-dependent pruning to eliminate inappropriate connections and reinforce maintained synapses. In invertebrates, however, evidence of the activity-dependent phase of synaptic refinement has been elusive, and the dogma has long been that invertebrate circuits are “hard-wired” in a purely activity-independent manner. This conclusion has been challenged recently through the use of new transgenic tools employed in the powerful Drosophila system, which have allowed unprecedented temporal control and single neuron imaging resolution. These recent studies reveal that activity-dependent mechanisms are indeed required to refine circuit maps in Drosophila during precise, restricted windows of late-phase development. Such mechanisms of circuit refinement may be key to understanding a number of human neurological diseases, including developmental disorders such as Fragile X syndrome (FXS) and autism, which are hypothesized to result from defects in synaptic connectivity and activity-dependent circuit function. This review focuses on our current understanding of activity-dependent synaptic connectivity in Drosophila, primarily through analyzing the role of the fragile X mental retardation protein (FMRP) in the Drosophila FXS disease model. The particular emphasis of this review is on the expanding array of new genetically-encoded tools that are allowing cellular events and molecular players to be dissected with ever greater precision and detail.

  10. Acute stress evokes sexually dimorphic, stressor-specific patterns of neural activation across multiple limbic brain regions in adult rats.

    Science.gov (United States)

    Sood, Ankit; Chaudhari, Karina; Vaidya, Vidita A

    2018-03-01

    Stress enhances the risk for psychiatric disorders such as anxiety and depression. Stress responses vary across sex and may underlie the heightened vulnerability to psychopathology in females. Here, we examined the influence of acute immobilization stress (AIS) and a two-day short-term forced swim stress (FS) on neural activation in multiple cortical and subcortical brain regions, implicated as targets of stress and in the regulation of neuroendocrine stress responses, in male and female rats using Fos as a neural activity marker. AIS evoked a sex-dependent pattern of neural activation within the cingulate and infralimbic subdivisions of the medial prefrontal cortex (mPFC), lateral septum (LS), habenula, and hippocampal subfields. The degree of neural activation in the mPFC, LS, and habenula was higher in males. Female rats exhibited reduced Fos positive cell numbers in the dentate gyrus hippocampal subfield, an effect not observed in males. We addressed whether the sexually dimorphic neural activation pattern noted following AIS was also observed with the short-term stress of FS. In the paraventricular nucleus of the hypothalamus and the amygdala, FS similar to AIS resulted in robust increases in neural activation in both sexes. The pattern of neural activation evoked by FS was distinct across sexes, with a heightened neural activation noted in the prelimbic mPFC subdivision and hippocampal subfields in females and differed from the pattern noted with AIS. This indicates that the sex differences in neural activation patterns observed within stress-responsive brain regions are dependent on the nature of stressor experience.

  11. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.
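
    A toy sketch of the input/output structure described above (time histories of geomagnetic and solar wind drivers mapped to plasma density at a given L shell and local time); the feature construction, history length, network shape, and synthetic "density" are assumptions, not the NURD-derived data set.

```python
# Sketch: mapping a time history of geomagnetic/solar-wind drivers to plasma
# density at a given location (L, MLT). All data below are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n_samples, history = 5000, 24                      # 24-hour driver history (assumed)

kp_history = rng.uniform(0, 9, (n_samples, history))
sw_speed = rng.uniform(300, 700, (n_samples, 1))   # solar wind speed, km/s
L_shell = rng.uniform(2, 6, (n_samples, 1))
mlt = rng.uniform(0, 24, (n_samples, 1))

X = np.hstack([kp_history, sw_speed, L_shell, mlt])
# Synthetic "density": denser at low L, eroded after high Kp
log_density = (3.5 - 0.5 * L_shell[:, 0]
               - 0.05 * kp_history.mean(axis=1)
               + rng.normal(0, 0.1, n_samples))

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X[:4000], log_density[:4000])
print("validation R^2:", round(model.score(X[4000:], log_density[4000:]), 3))
```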

  12. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  13. Artificial neural network modelling approach for a biomass gasification process in fixed bed gasifiers

    International Nuclear Information System (INIS)

    Mikulandrić, Robert; Lončar, Dražen; Böhning, Dorith; Böhme, Rene; Beckmann, Michael

    2014-01-01

    Highlights: • 2 Different equilibrium models are developed and their performance is analysed. • Neural network prediction models for 2 different fixed bed gasifier types are developed. • The influence of different input parameters on neural network model performance is analysed. • Methodology for neural network model development for different gasifier types is described. • Neural network models are verified for various operating conditions based on measured data. - Abstract: The number of small and middle-scale biomass gasification combined heat and power plants, as well as syngas production plants, has increased significantly in the last decade, mostly due to extensive incentives. However, existing issues regarding syngas quality, process efficiency, emissions and environmental standards are preventing biomass gasification technology from becoming more economically viable. To address these issues, special attention is given to the development of mathematical models which can be used for process analysis or plant control purposes. The presented paper analyses the potential of neural networks to predict process parameters with high speed and accuracy. After a related literature review and measurement data analysis, different modelling approaches for process parameter prediction that can be used for on-line process control were developed and their performance was analysed. Neural network models showed good capability to predict biomass gasification process parameters with reasonable accuracy and speed. Measurement data for the model development, verification and performance analysis were derived from a biomass gasification plant operated by the Technical University Dresden

  14. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  15. Using a virtual cortical module implementing a neural field model to modulate brain rhythms in Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Julien Modolo

    2010-06-01

    Full Text Available We propose a new method for selective modulation of cortical rhythms based on neural field theory, in which the activity of a cortical area is extensively monitored using a two-dimensional microelectrode array. The example of Parkinson's disease illustrates the proposed method, in which a neural field model is assumed to accurately describe experimentally recorded activity. In addition, we propose a new closed-loop stimulation signal that is both space- and time-dependent. This method is especially designed to specifically modulate a targeted brain rhythm, without interfering with other rhythms. A new class of neuroprosthetic devices is also proposed, in which the multielectrode array is seen as an artificial neural network interacting with biological tissue. Such a bio-inspired approach may provide a solution to optimize interactions between the stimulation device and the cortex aiming to attenuate or augment specific cortical rhythms. The next step will be to validate this new approach experimentally in patients with Parkinson's disease.

  16. Enhancing neural activity to drive respiratory plasticity following cervical spinal cord injury

    Science.gov (United States)

    Hormigo, Kristiina M.; Zholudeva, Lyandysha V.; Spruance, Victoria M.; Marchenko, Vitaliy; Cote, Marie-Pascale; Vinit, Stephane; Giszter, Simon; Bezdudnaya, Tatiana; Lane, Michael A.

    2016-01-01

    Cervical spinal cord injury (SCI) results in permanent life-altering sensorimotor deficits, among which impaired breathing is one of the most devastating and life-threatening. While clinical and experimental research has revealed that some spontaneous respiratory improvement (functional plasticity) can occur post-SCI, the extent of the recovery is limited and significant deficits persist. Thus, increasing effort is being made to develop therapies that harness and enhance this neuroplastic potential to optimize long-term recovery of breathing in injured individuals. One strategy with demonstrated therapeutic potential is the use of treatments that increase neural and muscular activity (e.g. locomotor training, neural and muscular stimulation) and promote plasticity. With a focus on respiratory function post-SCI, this review will discuss advances in the use of neural interfacing strategies and activity-based treatments, and highlights some recent results from our own research. PMID:27582085

  17. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.
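
    A simplified sketch of the pipeline outlined above (windowed accelerometer samples fed to a small network, followed by an output filter); the window length, synthetic signals, and majority-vote filter are assumptions for illustration, not the paper's IDNN architectures.

```python
# Sketch: classifying accelerometer windows (digging vs. non-digging) with a
# small network and smoothing the decisions with a majority-vote output filter.
import numpy as np
from collections import deque
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
win = 50                                           # samples per window (assumed)

def make_windows(n, digging):
    """Synthetic 3-axis accelerometer windows; digging adds a slow oscillation."""
    t = np.arange(win)
    base = rng.normal(0, 0.2, (n, 3 * win))
    if digging:
        base[:, :win] += 0.8 * np.sin(2 * np.pi * t / win)   # leg-motion proxy
    return base

X = np.vstack([make_windows(300, True), make_windows(300, False)])
y = np.array([1] * 300 + [0] * 300)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)

# Output filter: majority vote over the last 5 window-level decisions
recent = deque(maxlen=5)
for segment in make_windows(10, True):
    recent.append(clf.predict(segment.reshape(1, -1))[0])
    smoothed = int(sum(recent) > len(recent) / 2)
    print("window label:", recent[-1], "filtered label:", smoothed)
```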

  18. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    Full Text Available The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion
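
    A toy illustration of the counting-only decoding idea described above: each output unit acts as an integrate-and-fire style counter that increments when selected input channels spike and emits a decision when a threshold count is reached. The connectivity, threshold, and spike source are invented for illustration and are not the published architecture.

```python
# Toy counting-based decoder in the spirit of an integrate-and-fire emulator:
# only increments, comparisons and resets are used (no weighted arithmetic).
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_outputs, threshold = 32, 2, 20

# Binary connectivity: which input channels each output counter listens to
connections = rng.random((n_outputs, n_inputs)) < 0.3
counters = np.zeros(n_outputs, dtype=int)

for t in range(200):
    spikes = rng.random(n_inputs) < 0.1            # stand-in for detected spikes
    for out in range(n_outputs):
        # Count spikes on connected channels (counting only, no multiplications)
        counters[out] += int(np.count_nonzero(spikes & connections[out]))
        if counters[out] >= threshold:
            print(f"t={t}: output unit {out} fires (decoded event)")
            counters[out] = 0                       # reset after the decision
```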

  19. Neural modeling of prefrontal executive function

    Energy Technology Data Exchange (ETDEWEB)

    Levine, D.S. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    Brain executive function is based in a distributed system whereby prefrontal cortex is interconnected with other cortical and subcortical loci. Executive function is divided roughly into three interacting parts: affective guidance of responses; linkage among working memory representations; and forming complex behavioral schemata. Neural network models of each of these parts are reviewed and fit into a preliminary theoretical framework.

  20. Intermittent reductions in respiratory neural activity elicit spinal TNF-α-independent, atypical PKC-dependent inactivity-induced phrenic motor facilitation.

    Science.gov (United States)

    Baertsch, Nathan A; Baker-Herman, Tracy L

    2015-04-15

    In many neural networks, mechanisms of compensatory plasticity respond to prolonged reductions in neural activity by increasing cellular excitability or synaptic strength. In the respiratory control system, a prolonged reduction in synaptic inputs to the phrenic motor pool elicits a TNF-α- and atypical PKC-dependent form of spinal plasticity known as inactivity-induced phrenic motor facilitation (iPMF). Although iPMF may be elicited by a prolonged reduction in respiratory neural activity, iPMF is more efficiently induced when reduced respiratory neural activity (neural apnea) occurs intermittently. Mechanisms giving rise to iPMF following intermittent neural apnea are unknown. The purpose of this study was to test the hypothesis that iPMF following intermittent reductions in respiratory neural activity requires spinal TNF-α and aPKC. Phrenic motor output was recorded in anesthetized and ventilated rats exposed to brief intermittent (5, ∼1.25 min), brief sustained (∼6.25 min), or prolonged sustained (30 min) neural apnea. iPMF was elicited following brief intermittent and prolonged sustained neural apnea, but not following brief sustained neural apnea. Unlike iPMF following prolonged neural apnea, spinal TNF-α was not required to initiate iPMF during intermittent neural apnea; however, aPKC was still required for its stabilization. These results suggest that different patterns of respiratory neural activity induce iPMF through distinct cellular mechanisms but ultimately converge on a similar downstream pathway. Understanding the diverse cellular mechanisms that give rise to inactivity-induced respiratory plasticity may lead to development of novel therapeutic strategies to treat devastating respiratory control disorders when endogenous compensatory mechanisms fail. Copyright © 2015 the American Physiological Society.

  1. Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.

    Science.gov (United States)

    Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel

    2016-02-01

    Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.
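
    A very reduced sketch of the kind of lumped circuit the abstract describes: a photocurrent pulse charging an electrode capacitance in parallel with a Faradaic resistance, plus a shunt resistor for discharge between pulses. The component values, pulse timing, and the constant-capacitance simplification are assumptions (the paper models the capacitance as voltage-dependent).

```python
# Reduced lumped-circuit sketch of a photovoltaic stimulation pixel:
# photocurrent source -> electrode capacitance || Faradaic resistance || shunt.
import numpy as np

dt = 1e-6                      # time step, s
C = 100e-9                     # electrode capacitance, F (placeholder, held constant)
R_far = 1e6                    # Faradaic resistance, ohm (placeholder)
R_shunt = 50e3                 # shunt resistor for inter-pulse discharge (placeholder)
I_photo = 1e-6                 # photocurrent during the light pulse, A

t = np.arange(0, 20e-3, dt)
light_on = (t % 5e-3) < 1e-3   # 1 ms pulses at 200 Hz (illustrative)

v = np.zeros_like(t)
for i in range(1, len(t)):
    i_src = I_photo if light_on[i] else 0.0
    # dV/dt = (I_photo - V/R_far - V/R_shunt) / C, integrated with Euler steps
    dv = (i_src - v[i - 1] / R_far - v[i - 1] / R_shunt) / C * dt
    v[i] = v[i - 1] + dv

print(f"peak electrode voltage: {v.max() * 1e3:.2f} mV")
```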

  2. Real-time process optimization based on grey-box neural models

    Directory of Open Access Journals (Sweden)

    F. A. Cubillos

    2007-09-01

    Full Text Available This paper investigates the feasibility of using grey-box neural models (GNM) in Real Time Optimization (RTO). These models are based on a suitable combination of fundamental conservation laws and neural networks, being used in at least two different ways: to complement available phenomenological knowledge with empirical information, or to reduce dimensionality of complex rigorous physical models. We have observed that the benefits of using these simple adaptable models are counteracted by some difficulties associated with the solution of the optimization problem. Nonlinear Programming (NLP) algorithms failed in finding the global optimum due to the fact that neural networks can introduce multimodal objective functions. One alternative considered to solve this problem was the use of some kind of evolutionary algorithms, like Genetic Algorithms (GA). Although these algorithms produced better results in terms of finding the appropriate region, they took long periods of time to reach the global optimum. It was found that a combination of genetic and nonlinear programming algorithms can be used to obtain the optimum solution quickly. The proposed approach was applied to the Williams-Otto reactor, considering three different GNM models of increasing complexity. Results demonstrated that the use of GNM models and mixed GA/NLP optimization algorithms is a promising approach for solving dynamic RTO problems.
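
    A compact sketch of the hybrid strategy described above (a coarse global search to locate the promising region, followed by a local gradient-based NLP refinement), applied to a multimodal toy objective rather than the Williams-Otto reactor; the random-sampling stage stands in for a full genetic algorithm.

```python
# Hybrid global + local optimization sketch: a coarse random/evolutionary search
# locates the promising region, then a gradient-based NLP solver refines it.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Multimodal toy surrogate (stands in for a grey-box neural model)."""
    return (x[0] ** 2 + x[1] ** 2) / 10 + np.sin(3 * x[0]) * np.cos(3 * x[1])

rng = np.random.default_rng(0)

# Stage 1: coarse global search (random sampling in place of a full GA)
candidates = rng.uniform(-3, 3, size=(500, 2))
best = candidates[np.argmin([objective(c) for c in candidates])]

# Stage 2: local NLP refinement starting from the best region found
result = minimize(objective, best, method="L-BFGS-B", bounds=[(-3, 3), (-3, 3)])
print("refined optimum:", np.round(result.x, 3), "value:", round(result.fun, 4))
```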

  3. Gait pattern recognition in cerebral palsy patients using neural network modelling

    International Nuclear Information System (INIS)

    Muhammad, J.; Gibbs, S.; Abboud, R.; Anand, S.

    2015-01-01

    Interpretation of gait data obtained from modern 3D gait analysis is a challenging and time-consuming task. The aim of this study was to create neural network models which can recognise pre-treatment, post-treatment and normal gait patterns. A neural network is a method that works on the principle of learning from experience and then uses the obtained knowledge to predict the unknown. Methods: Twenty-eight patients with cerebral palsy were recruited as subjects, whose gait was analysed before and after treatment. A group of twenty-six normal subjects also participated in this study as a control group. All subjects' gait was analysed using Vicon Nexus to obtain the gait parameters and the kinetic and kinematic parameters of the hip, knee and ankle joints in three planes for both limbs. The gait data were used as input to create neural network models. A total of approximately 300 trials were split into 70% and 30% to train and test the models, respectively. Different models were built using different parameters. The gait was categorised into three patterns, i.e., normal, pre-treatment and post-treatment. Result: The results showed that the models using all parameters, or using the joint angles and moments, could predict the gait patterns with approximately 95% accuracy. Some of the models, e.g., those using joint power and moments, had a lower recognition rate of approximately 70-90%. Conclusion: A neural network model can be used in clinical practice to recognise the gait pattern of cerebral palsy patients. (author)

  4. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behaviour, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  5. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with the fault correction process assumed to be a delayed process. On the other hand, the artificial neural network model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are developed with respect to a real data set.

  6. Global Neural Pattern Similarity as a Common Basis for Categorization and Recognition Memory

    Science.gov (United States)

    Xue, Gui; Love, Bradley C.; Preston, Alison R.; Poldrack, Russell A.

    2014-01-01

    Familiarity, or memory strength, is a central construct in models of cognition. In previous categorization and long-term memory research, correlations have been found between psychological measures of memory strength and activation in the medial temporal lobes (MTLs), which suggests a common neural locus for memory strength. However, activation alone is insufficient for determining whether the same mechanisms underlie neural function across domains. Guided by mathematical models of categorization and long-term memory, we develop a theory and a method to test whether memory strength arises from the global similarity among neural representations. In human subjects, we find significant correlations between global similarity among activation patterns in the MTLs and both subsequent memory confidence in a recognition memory task and model-based measures of memory strength in a category learning task. Our work bridges formal cognitive theories and neuroscientific models by illustrating that the same global similarity computations underlie processing in multiple cognitive domains. Moreover, by establishing a link between neural similarity and psychological memory strength, our findings suggest that there may be an isomorphism between psychological and neural representational spaces that can be exploited to test cognitive theories at both the neural and behavioral levels. PMID:24872552

  7. Role of SDF1/CXCR4 Interaction in Experimental Hemiplegic Models with Neural Cell Transplantation

    Directory of Open Access Journals (Sweden)

    Noboru Suzuki

    2012-02-01

    Full Text Available Much attention has been focused on neural cell transplantation because of its promising clinical applications. We have reported that embryonic stem (ES) cell-derived neural stem/progenitor cell transplantation significantly improved motor functions in a hemiplegic mouse model. It is important to understand the molecular mechanisms governing neural regeneration of the damaged motor cortex after the transplantation. Recent investigations disclosed that chemokines participated in the regulation of migration and maturation of neural cell grafts. In this review, we summarize the involvement of inflammatory chemokines including stromal cell-derived factor 1 (SDF1) in neural regeneration after ES cell-derived neural stem/progenitor cell transplantation in mouse stroke models.

  8. Topological probability and connection strength induced activity in complex neural networks

    International Nuclear Information System (INIS)

    Du-Qu, Wei; Bo, Zhang; Dong-Yuan, Qiu; Xiao-Shu, Luo

    2010-01-01

    Recent experimental evidence suggests that some brain activities can be assigned to small-world networks. In this work, we investigate how the topological probability p and connection strength C affect the activities of discrete neural networks with small-world (SW) connections. Network elements are described by two-dimensional map neurons (2DMNs) with parameter values at which no activity occurs. It is found that when p is too small or too large there are no active neurons in the network, regardless of the connection strength; for a given appropriate connection strength, there is an intermediate range of topological probability where the activity of the 2DMN network is induced and enhanced. On the other hand, for a given intermediate topological probability level, there exists an optimal value of connection strength such that the frequency of activity reaches its maximum. The possible mechanism behind the action of topological probability and connection strength is addressed based on the bifurcation method. Furthermore, the effects of noise and transmission delay on the activity of the neural network are also studied. (general)
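    A small illustrative sketch of the setup described above, assuming a Watts-Strogatz small-world topology (via networkx) and using the Rulkov two-dimensional map as a stand-in neuron model; the paper's exact 2DMN equations, parameter values and coupling scheme are not reproduced here.

```python
import numpy as np
import networkx as nx

# Small-world coupling topology (Watts-Strogatz) with rewiring probability p
N, k, p, C = 100, 4, 0.2, 0.05   # neurons, nearest neighbours, topological probability, coupling strength
A = nx.to_numpy_array(nx.watts_strogatz_graph(N, k, p, seed=1))

# Stand-in two-dimensional map neuron (Rulkov map); the paper's 2DMN equations may differ.
alpha, sigma, mu = 4.3, 0.1, 0.001
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 0.0, N)
y = np.full(N, -2.9)

active_counts = []
for _ in range(5000):
    coupling = C * (A @ x - A.sum(axis=1) * x)      # diffusive coupling over the network
    x_new = alpha / (1.0 + x ** 2) + y + coupling
    y = y - mu * (x + 1.0) + mu * sigma
    x = x_new
    active_counts.append(int(np.sum(x > 0.0)))      # crude activity measure per time step

print("mean number of active neurons per step:", np.mean(active_counts))
```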

  9. Social power and approach-related neural activity.

    Science.gov (United States)

    Boksem, Maarten A S; Smolders, Ruud; De Cremer, David

    2012-06-01

    It has been argued that power activates a general tendency to approach whereas powerlessness activates a tendency to inhibit. The assumption is that elevated power involves reward-rich environments and freedom and, as a consequence, triggers an approach-related motivational orientation and attention to rewards. In contrast, reduced power is associated with increased threat, punishment and social constraint and thereby activates inhibition-related motivation. Moreover, approach motivation has been found to be associated with increased relative left-sided frontal brain activity, while withdrawal motivation has been associated with increased right-sided activation. We measured EEG activity while subjects engaged in a task priming either high or low social power. Results show that high social power is indeed associated with greater left-frontal brain activity compared to low social power, providing the first neural evidence for the theory that high power is associated with approach-related motivation. We propose a framework accounting for differences in both approach motivation and goal-directed behaviour associated with different levels of power.

  10. Accurate prediction of the dew points of acidic combustion gases by using an artificial neural network model

    International Nuclear Information System (INIS)

    ZareNezhad, Bahman; Aminian, Ali

    2011-01-01

    This paper presents a new approach based on an artificial neural network (ANN) model for predicting the acid dew points of combustion gases in process and power plants. The most important acidic combustion gases, namely SO3, SO2, NO2, HCl and HBr, are considered in this investigation. The proposed network is trained using the Levenberg-Marquardt back-propagation algorithm, and the hyperbolic tangent sigmoid activation function is applied to calculate the output values of the neurons of the hidden layer. According to the network's training, validation and testing results, a three-layer neural network with nine neurons in the hidden layer is selected as the best architecture for accurate prediction of the acidic combustion gases' dew points over wide ranges of acid and moisture concentrations. The proposed neural network model can have significant application in predicting the condensation temperatures of different acid gases to mitigate corrosion problems in stacks, pollution control devices and energy recovery systems.
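    A minimal sketch of a comparable regression setup: a three-layer network with nine tanh hidden neurons mapping acid and moisture concentrations to a dew point. The training data below are synthetic placeholders, and scikit-learn's Adam optimiser is used because Levenberg-Marquardt training is not available in that library.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training set: acid and moisture mole fractions -> acid dew point (degrees C)
rng = np.random.default_rng(0)
X = rng.uniform([1e-6, 0.02], [1e-3, 0.20], size=(500, 2))
y = 120 + 30 * np.log10(X[:, 0] * 1e6) + 80 * X[:, 1]       # synthetic stand-in correlation

# Three-layer network with nine tanh neurons in the hidden layer, as selected in the paper
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(9,), activation='tanh',
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print("predicted dew point:", model.predict([[1e-4, 0.10]]))
```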

  11. Artificial Neural Network Modelling of the Energy Content of Municipal Solid Wastes in Northern Nigeria

    Directory of Open Access Journals (Sweden)

    M. B. Oumarou

    2017-12-01

    Full Text Available The study presents an application of an artificial neural network model using the back-propagation learning algorithm to predict the actual calorific value of municipal solid waste in major cities of the northern part of Nigeria with high population densities and intense industrial activities. These cities are: Kano, Damaturu, Dutse, Bauchi, Birnin Kebbi, Gusau, Maiduguri, Katsina and Sokoto. Experimental data on the energy content and the physical characterization of the municipal solid waste (wood, grass, metal, plastic, food remnants, leaves, glass and paper) serve as the input parameters. Comparative studies were made using the developed model, the experimental results and a correlation earlier developed by the authors to predict the energy content. In predicting the actual calorific value, the maximum error was 0.94% for the artificial neural network model and 5.20% for the statistical correlation. The network with eight neurons in the hidden layer, with R2 = 0.96881, results in a stable and optimal network. This study showed that the artificial neural network approach can successfully be used for energy content predictions from municipal solid wastes in Northern Nigeria and other areas with a similar waste stream and composition.

  12. Computational Models and Emergent Properties of Respiratory Neural Networks

    Science.gov (United States)

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  13. Comparing Neural Networks and ARMA Models in Artificial Stock Market

    Czech Academy of Sciences Publication Activity Database

    Krtek, Jiří; Vošvrda, Miloslav

    2011-01-01

    Roč. 18, č. 28 (2011), s. 53-65 ISSN 1212-074X R&D Projects: GA ČR GD402/09/H045 Institutional research plan: CEZ:AV0Z10750506 Keywords : neural networks * vector ARMA * artificial market Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2011/E/krtek-comparing neural networks and arma models in artificial stock market.pdf

  14. Abnormal neural activities of directional brain networks in patients with long-term bilateral hearing loss.

    Science.gov (United States)

    Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu

    2017-10-13

    The objective of the study is to provide some implications for the rehabilitation of hearing impairment by investigating changes in neural activities of directional brain networks in patients with long-term bilateral hearing loss. Firstly, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss, and 10 subjects with normal hearing), and these tests revealed significant differences between the deaf group and the controls. Then we constructed an individual-specific virtual brain based on functional magnetic resonance data of the participants by utilizing effective connectivity and multivariate regression methods. We exerted a stimulating signal on the primary auditory cortices of the virtual brain and observed the brain region activations. We found that patients with long-term bilateral hearing loss presented weaker brain region activations in the auditory and language networks, but enhanced neural activities in the default mode network as compared with normally hearing subjects. In particular, the right cerebral hemisphere presented more changes than the left. Additionally, weaker neural activities in the primary auditory cortices were also strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interactional circuits among activated brain regions, and these interregional causal interactions implied that abnormal neural activities of the directional brain networks in the deaf patients impacted cognitive function.

  15. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models that occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  16. Implications of the dependence of neuronal activity on neural network states for the design of brain-machine interfaces

    Directory of Open Access Journals (Sweden)

    Stefano ePanzeri

    2016-04-01

    Full Text Available Brain-machine interfaces (BMIs) can improve the quality of life of patients with sensory and motor disabilities by both decoding motor intentions expressed by neural activity, and by encoding artificially sensed information into patterns of neural activity elicited by causal interventions on the neural tissue. Yet, current BMIs can exchange relatively small amounts of information with the brain. This problem has proved difficult to overcome by simply increasing the number of recording or stimulating electrodes, because trial-to-trial variability of neural activity partly arises from intrinsic factors (collectively known as the network state) that include ongoing spontaneous activity and neuromodulation, and so is shared among neurons. Here we review recent progress in characterizing the state dependence of neural responses, and in particular of how neural responses depend on endogenous slow fluctuations of network excitability. We then elaborate on how this knowledge may be used to increase the amount of information that BMIs exchange with brains. Knowledge of network state can be used to fine-tune the stimulation pattern that should reliably elicit a target neural response used to encode information in the brain, and to discount part of the trial-by-trial variability of neural responses, so that they can be decoded more accurately.

  17. An Adaptive Neural Mechanism with a Lizard Ear Model for Binaural Acoustic Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2016-01-01

    expensive algorithms. We present a novel bioinspired solution to acoustic tracking that uses only two microphones. The system is based on a neural mechanism coupled with a model of the peripheral auditory system of lizards. The peripheral auditory model provides sound direction information which the neural...

  18. Bidirectional neural interface: Closed-loop feedback control for hybrid neural systems.

    Science.gov (United States)

    Chou, Zane; Lim, Jeffrey; Brown, Sophie; Keller, Melissa; Bugbee, Joseph; Broccard, Frédéric D; Khraiche, Massoud L; Silva, Gabriel A; Cauwenberghs, Gert

    2015-01-01

    Closed-loop neural prostheses enable bidirectional communication between the biological and artificial components of a hybrid system. However, a major challenge in this field is the limited understanding of how these components, the two separate neural networks, interact with each other. In this paper, we propose an in vitro model of a closed-loop system that allows for easy experimental testing and modification of both biological and artificial network parameters. The interface closes the system loop in real time by stimulating each network based on recorded activity of the other network, within preset parameters. As a proof of concept we demonstrate that the bidirectional interface is able to establish and control network properties, such as synchrony, in a hybrid system of two neural networks significantly more effectively than the same system without the interface or with unidirectional alternatives. This success holds promise for the application of closed-loop systems in neural prostheses, brain-machine interfaces, and drug testing.

  19. Intranasal oxytocin reduces social perception in women: Neural activation and individual variation.

    Science.gov (United States)

    Hecht, Erin E; Robins, Diana L; Gautam, Pritam; King, Tricia Z

    2017-02-15

    Most intranasal oxytocin research to date has been carried out in men, but recent studies indicate that females' responses can differ substantially from males'. This randomized, double-blind, placebo-controlled study involved an all-female sample of 28 women not using hormonal contraception. Participants viewed animations of geometric shapes depicting either random movement or social interactions such as playing, chasing, or fighting. Probe questions asked whether any shapes were "friends" or "not friends." Social videos were preceded by cues to attend to either social relationships or physical size changes. All subjects received intranasal placebo spray at scan 1. While the experimenter was not blinded to nasal spray contents at Scan 1, the participants were. Scan 2 followed a randomized, double-blind design. At scan 2, half received a second placebo dose while the other half received 24 IU of intranasal oxytocin. We measured neural responses to these animations at baseline, as well as the change in neural activity induced by oxytocin. Oxytocin reduced activation in early visual cortex and dorsal-stream motion processing regions for the social > size contrast, indicating reduced activity related to social attention. Oxytocin also reduced endorsements that shapes were "friends" or "not friends," and this significantly correlated with reduction in neural activation. Furthermore, participants who perceived fewer social relationships at baseline were more likely to show oxytocin-induced increases in a broad network of regions involved in social perception and social cognition, suggesting that lower social processing at baseline may predict more positive neural responses to oxytocin. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Distinct Neural Activity Associated with Focused-Attention Meditation and Loving-Kindness Meditation

    Science.gov (United States)

    Lee, Tatia M. C.; Leung, Mei-Kei; Hou, Wai-Kai; Tang, Joey C. Y.; Yin, Jing; So, Kwok-Fai; Lee, Chack-Fan; Chan, Chetwyn C. H.

    2012-01-01

    This study examined the dissociable neural effects of ānāpānasati (focused-attention meditation, FAM) and mettā (loving-kindness meditation, LKM) on BOLD signals during cognitive (continuous performance test, CPT) and affective (emotion-processing task, EPT, in which participants viewed affective pictures) processing. Twenty-two male Chinese expert meditators (11 FAM experts, 11 LKM experts) and 22 male Chinese novice meditators (11 FAM novices, 11 LKM novices) had their brain activity monitored by a 3T MRI scanner while performing the cognitive and affective tasks in both meditation and baseline states. We examined the interaction between state (meditation vs. baseline) and expertise (expert vs. novice) separately during LKM and FAM, using a conjunction approach to reveal common regions sensitive to the expert meditative state. Additionally, exclusive masking techniques revealed distinct interactions between state and group during LKM and FAM. Specifically, we demonstrated that the practice of FAM was associated with expertise-related behavioral improvements and neural activation differences in attention task performance. However, the effect of state LKM meditation did not carry over to attention task performance. On the other hand, both FAM and LKM practice appeared to affect the neural responses to affective pictures. For viewing sad faces, the regions activated for FAM practitioners were consistent with attention-related processing; whereas responses of LKM experts to sad pictures were more in line with differentiating emotional contagion from compassion/emotional regulation processes. Our findings provide the first report of distinct neural activity associated with forms of meditation during sustained attention and emotion processing. PMID:22905090

  1. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    Science.gov (United States)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.

  2. Spatially Compact Neural Clusters in the Dorsal Striatum Encode Locomotion Relevant Information.

    Science.gov (United States)

    Barbera, Giovanni; Liang, Bo; Zhang, Lifeng; Gerfen, Charles R; Culurciello, Eugenio; Chen, Rong; Li, Yun; Lin, Da-Ting

    2016-10-05

    An influential striatal model postulates that neural activities in the striatal direct and indirect pathways promote and inhibit movement, respectively. Normal behavior requires coordinated activity in the direct pathway to facilitate intended locomotion and in the indirect pathway to inhibit unwanted locomotion. In this striatal model, neuronal population activity is assumed to encode locomotion-relevant information. Here, we propose a novel encoding mechanism for the dorsal striatum. We identified spatially compact neural clusters in both the direct and indirect pathways. Detailed characterization revealed similar cluster organization between the direct and indirect pathways, and cluster activities from both pathways were correlated with mouse locomotion velocities. Using machine-learning algorithms, cluster activities could be used to decode locomotion-relevant behavioral states and locomotion velocity. We propose that neural clusters in the dorsal striatum encode locomotion-relevant information and that coordinated activities of direct and indirect pathway neural clusters are required for normal striatum-controlled behavior. VIDEO ABSTRACT. Published by Elsevier Inc.
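    A small illustration of the decoding idea described above: a linear regressor maps cluster activities to locomotion velocity. The data are synthetic placeholders and the regressor is a generic choice; the study's machine-learning pipeline and recorded signals are not specified here.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: activity of 15 neural clusters over 2000 imaging frames,
# with simultaneously recorded locomotion velocity (cm/s).
rng = np.random.default_rng(7)
cluster_activity = rng.poisson(2.0, size=(2000, 15)).astype(float)
true_weights = rng.normal(size=15)
velocity = cluster_activity @ true_weights + rng.normal(scale=0.5, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    cluster_activity, velocity, test_size=0.25, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
print("velocity decoding R^2 on held-out frames:", decoder.score(X_test, y_test))
```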

  3. ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.

    Science.gov (United States)

    Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng

    2017-08-30

    While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.

  4. Sustained Activity in Hierarchical Modular Neural Networks: Self-Organized Criticality and Oscillations

    Science.gov (United States)

    Wang, Sheng-Jun; Hilgetag, Claus C.; Zhou, Changsong

    2010-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps, to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information processing.
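    A simplified sketch of how a hierarchical modular connectivity matrix of the kind studied above can be generated: connections are dense within the smallest modules and become progressively sparser at coarser levels. Module counts, densities and the decay ratio are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def hierarchical_modular_network(n_neurons=512, levels=3, p_intra=0.3, ratio=0.25, seed=0):
    """Random connectivity organised into nested modules: dense within the finest
    modules, progressively sparser at each coarser hierarchical level."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n_neurons, n_neurons), dtype=int)
    for level in range(levels):
        n_modules = 2 ** (levels - 1 - level)    # fewer, larger modules at coarser levels
        size = n_neurons // n_modules
        p = p_intra * ratio ** level
        for m in range(n_modules):
            block = slice(m * size, (m + 1) * size)
            mask = (rng.random((size, size)) < p).astype(int)
            A[block, block] |= mask
    np.fill_diagonal(A, 0)
    return A

A = hierarchical_modular_network()
print("overall connection density:", A.mean())
```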

  5. Orphan nuclear receptor TLX activates Wnt/β-catenin signalling to stimulate neural stem cell proliferation and self-renewal

    Science.gov (United States)

    Qu, Qiuhao; Sun, Guoqiang; Li, Wenwu; Yang, Su; Ye, Peng; Zhao, Chunnian; Yu, Ruth T.; Gage, Fred H.; Evans, Ronald M.; Shi, Yanhong

    2010-01-01

    The nuclear receptor TLX (also known as NR2E1) is essential for adult neural stem cell self-renewal; however, the molecular mechanisms involved remain elusive. Here we show that TLX activates the canonical Wnt/β-catenin pathway in adult mouse neural stem cells. Furthermore, we demonstrate that Wnt/β-catenin signalling is important in the proliferation and self-renewal of adult neural stem cells in the presence of epidermal growth factor and fibroblast growth factor. Wnt7a and active β-catenin promote neural stem cell self-renewal, whereas the deletion of Wnt7a or the lentiviral transduction of axin, a β-catenin inhibitor, led to decreased cell proliferation in adult neurogenic areas. Lentiviral transduction of active β-catenin led to increased numbers of type B neural stem cells in the subventricular zone of adult brains, whereas deletion of Wnt7a or TLX resulted in decreased numbers of neural stem cells retaining bromodeoxyuridine label in the adult brain. Both Wnt7a and active β-catenin significantly rescued a TLX (also known as Nr2e1) short interfering RNA-induced deficiency in neural stem cell proliferation. Lentiviral transduction of an active β-catenin increased cell proliferation in neurogenic areas of TLX-null adult brains markedly. These results strongly support the hypothesis that TLX acts through the Wnt/β-catenin pathway to regulate neural stem cell proliferation and self-renewal. Moreover, this study suggests that neural stem cells can promote their own self-renewal by secreting signalling molecules that act in an autocrine/paracrine mode. PMID:20010817

  6. Orphan nuclear receptor TLX activates Wnt/beta-catenin signalling to stimulate neural stem cell proliferation and self-renewal.

    Science.gov (United States)

    Qu, Qiuhao; Sun, Guoqiang; Li, Wenwu; Yang, Su; Ye, Peng; Zhao, Chunnian; Yu, Ruth T; Gage, Fred H; Evans, Ronald M; Shi, Yanhong

    2010-01-01

    The nuclear receptor TLX (also known as NR2E1) is essential for adult neural stem cell self-renewal; however, the molecular mechanisms involved remain elusive. Here we show that TLX activates the canonical Wnt/beta-catenin pathway in adult mouse neural stem cells. Furthermore, we demonstrate that Wnt/beta-catenin signalling is important in the proliferation and self-renewal of adult neural stem cells in the presence of epidermal growth factor and fibroblast growth factor. Wnt7a and active beta-catenin promote neural stem cell self-renewal, whereas the deletion of Wnt7a or the lentiviral transduction of axin, a beta-catenin inhibitor, led to decreased cell proliferation in adult neurogenic areas. Lentiviral transduction of active beta-catenin led to increased numbers of type B neural stem cells in the subventricular zone of adult brains, whereas deletion of Wnt7a or TLX resulted in decreased numbers of neural stem cells retaining bromodeoxyuridine label in the adult brain. Both Wnt7a and active beta-catenin significantly rescued a TLX (also known as Nr2e1) short interfering RNA-induced deficiency in neural stem cell proliferation. Lentiviral transduction of an active beta-catenin increased cell proliferation in neurogenic areas of TLX-null adult brains markedly. These results strongly support the hypothesis that TLX acts through the Wnt/beta-catenin pathway to regulate neural stem cell proliferation and self-renewal. Moreover, this study suggests that neural stem cells can promote their own self-renewal by secreting signalling molecules that act in an autocrine/paracrine mode.

  7. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward

    Science.gov (United States)

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios

    2014-01-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. PMID:25008408

  8. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward.

    Science.gov (United States)

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios; Musallam, Sam

    2014-10-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. Copyright © 2014 the American Physiological Society.

  9. HIV lipodystrophy case definition using artificial neural network modelling

    DEFF Research Database (Denmark)

    Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew

    2003-01-01

    OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently better performance than logistic regression in terms of specificity, overall accuracy and area under...
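    A compact sketch of the comparison described above, using scikit-learn with a synthetic stand-in for the lipodystrophy dataset (41 clinical variables, 504 training and 284 validation subjects); the actual variables and network configuration are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical stand-in dataset: 788 subjects x 41 clinical variables
X, y = make_classification(n_samples=788, n_features=41, n_informative=10, random_state=0)
X_train, y_train = X[:504], y[:504]    # training set
X_valid, y_valid = X[504:], y[504:]    # independent validation set

logit = LogisticRegression(max_iter=5000).fit(X_train, y_train)
nnet = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0).fit(X_train, y_train)

for name, model in [("logistic regression", logit), ("neural network", nnet)]:
    auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
    print(f"{name}: validation AUC = {auc:.3f}")
```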

  10. Endogenous testosterone levels are associated with neural activity in men with schizophrenia during facial emotion processing.

    Science.gov (United States)

    Ji, Ellen; Weickert, Cynthia Shannon; Lenroot, Rhoshel; Catts, Stanley V; Vercammen, Ans; White, Christopher; Gur, Raquel E; Weickert, Thomas W

    2015-06-01

    Growing evidence suggests that testosterone may play a role in the pathophysiology of schizophrenia given that testosterone has been linked to cognition and negative symptoms in schizophrenia. Here, we determine the extent to which serum testosterone levels are related to neural activity in affective processing circuitry in men with schizophrenia. Functional magnetic resonance imaging was used to measure blood-oxygen-level-dependent signal changes as 32 healthy controls and 26 people with schizophrenia performed a facial emotion identification task. Whole brain analyses were performed to determine regions of differential activity between groups during processing of angry versus non-threatening faces. A follow-up ROI analysis using a regression model in a subset of 16 healthy men and 16 men with schizophrenia was used to determine the extent to which serum testosterone levels were related to neural activity. Healthy controls displayed significantly greater activation than people with schizophrenia in the left inferior frontal gyrus (IFG). There was no significant difference in circulating testosterone levels between healthy men and men with schizophrenia. Regression analyses between activation in the IFG and circulating testosterone levels revealed a significant positive correlation in men with schizophrenia (r=.63, p=.01) and no significant relationship in healthy men. This study provides the first evidence that circulating serum testosterone levels are related to IFG activation during emotion face processing in men with schizophrenia but not in healthy men, which suggests that testosterone levels modulate neural processes relevant to facial emotion processing that may interfere with social functioning in men with schizophrenia. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  11. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera always suffers from image instability on a moving vehicle due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach, mixed with linear quadratic regulator control, for a quarter-car active suspension system to stabilize the captured image area of the camera. An active suspension system provides extra force through the actuator, which allows it to suppress vertical vibration of the sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function neural network is proposed to construct the map between the state error and the compensation component, which can correct the optimal state-feedback control law. The weight matrix of the radial basis function neural network is adaptively tuned online. Then, closed-loop stability and asymptotic convergence performance are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
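    A minimal sketch of the control structure described above, not the authors' implementation: a radial basis function network adds a compensation term to a fixed LQR state-feedback law. The gains, centres, widths and the simplified weight-update rule below are hypothetical placeholders.

```python
import numpy as np

K_lqr = np.array([8500.0, 1200.0, -3000.0, -450.0])      # hypothetical LQR state-feedback gains
centers = np.random.default_rng(0).normal(size=(9, 4))   # RBF centres over the state space
width = 2.0
W = np.zeros(9)                                           # RBF output weights, tuned online
eta = 0.05                                                # adaptation rate

def rbf(x):
    """Gaussian radial basis functions evaluated at state x."""
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * width ** 2))

def control(x, state_error):
    """u = optimal state feedback + adaptive RBF compensation (simplified update)."""
    global W
    phi = rbf(x)
    W += eta * state_error * phi          # gradient-style weight adaptation
    return -K_lqr @ x + W @ phi

x = np.array([0.01, 0.0, 0.02, -0.1])     # [sprung displ., sprung vel., unsprung displ., unsprung vel.]
print("actuator force:", control(x, state_error=0.005))
```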

  12. Differentiation between non-neural and neural contributors to ankle joint stiffness in cerebral palsy.

    Science.gov (United States)

    de Gooijer-van de Groep, Karin L; de Vlugt, Erwin; de Groot, Jurriaan H; van der Heijden-Maessen, Hélène C M; Wielheesen, Dennis H M; van Wijlen-Hempel, Rietje M S; Arendzen, J Hans; Meskers, Carel G M

    2013-07-23

    Spastic paresis in cerebral palsy (CP) is characterized by increased joint stiffness that may be of neural origin, i.e. improper muscle activation caused by e.g. hyperreflexia, or non-neural origin, i.e. altered tissue viscoelastic properties (clinically: "spasticity" vs. "contracture"). Differentiation between these components is hard to achieve by common manual tests. We applied an assessment instrument to obtain quantitative measures of neural and non-neural contributions to ankle joint stiffness in CP. Twenty-three adolescents with CP and eleven healthy subjects were seated with their foot fixated to an electrically powered single-axis footplate. Passive ramp-and-hold rotations were applied over the full ankle range of motion (RoM) at low and high velocities. Subject-specific tissue stiffness, viscosity and reflexive torque were estimated from ankle angle, torque and triceps surae EMG activity using a neuromuscular model. In CP, triceps surae reflexive torque was on average 5.7 times larger (p = .002) and tissue stiffness 2.1 times larger (p = .018) compared to controls. High tissue stiffness was associated with reduced RoM.

  13. Modeling of Activated Sludge Process Using Sequential Adaptive Neuro-fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Mahsa Vajedi

    2014-10-01

    Full Text Available In this study, an adaptive neuro-fuzzy inference system (ANFIS) has been applied to model the activated sludge wastewater treatment process of the Mobin petrochemical company. The correlation coefficients between the input variables and the output variable were calculated to determine the input with the highest influence on the output (the quality of the outlet flow) in order to compare three neuro-fuzzy structures with different numbers of parameters. The predictions of the neuro-fuzzy models were compared with those of multilayer artificial neural network models with a similar structure. The comparison indicated that both methods resulted in flexible, robust and effective models for the activated sludge system. Moreover, the root mean square errors for the neuro-fuzzy and neural network models were 5.14 and 6.59, respectively, which means the former is the superior method.

  14. Constitutively active Notch1 converts cranial neural crest-derived frontonasal mesenchyme to perivascular cells in vivo

    Directory of Open Access Journals (Sweden)

    Sophie R. Miller

    2017-03-01

    Full Text Available Perivascular/mural cells originate from either the mesoderm or the cranial neural crest. Regardless of their origin, Notch signalling is necessary for their formation. Furthermore, in both chicken and mouse, constitutive Notch1 activation (via expression of the Notch1 intracellular domain) is sufficient in vivo to convert trunk mesoderm-derived somite cells to perivascular cells, at the expense of skeletal muscle. In experiments originally designed to investigate the effect of premature Notch1 activation on the development of neural crest-derived olfactory ensheathing glial cells (OECs), we used in ovo electroporation to insert a tetracycline-inducible NotchΔE construct (encoding a constitutively active mutant of mouse Notch1) into the genome of chicken cranial neural crest cell precursors, and activated NotchΔE expression by doxycycline injection at embryonic day 4. NotchΔE-targeted cells formed perivascular cells within the frontonasal mesenchyme, and expressed a perivascular marker on the olfactory nerve. Hence, constitutively activating Notch1 is sufficient in vivo to drive not only somite cells, but also neural crest-derived frontonasal mesenchyme and perhaps developing OECs, to a perivascular cell fate. These results also highlight the plasticity of neural crest-derived mesenchyme and glia.

  15. Accurate estimation of CO2 adsorption on activated carbon with multi-layer feed-forward neural network (MLFNN) algorithm

    Directory of Open Access Journals (Sweden)

    Alireza Rostami

    2018-03-01

    Full Text Available Global warming due to the greenhouse effect has been considered a serious problem for many years around the world. Among the different gases which cause the greenhouse effect, carbon dioxide is of particular concern because of the quantities entering the surrounding atmosphere. CO2 capture and separation, especially by adsorption, is therefore one of the most interesting approaches because of the low equipment cost, ease of operation, simplicity of design, and low energy consumption. In this study, experimental results are presented for the adsorption equilibria of carbon dioxide on activated carbon. The adsorption equilibrium data for carbon dioxide were predicted with two commonly used isotherm models in order to compare with the multi-layer feed-forward neural network (MLFNN) algorithm over a wide range of partial pressure. As a result, the ANN-based algorithm shows much better efficiency and accuracy than the Sips and Langmuir isotherms. In addition, the applicability of the Sips and Langmuir models is limited to isothermal conditions, whereas the ANN-based algorithm is not restricted to constant-temperature conditions. Consequently, it is proved that the MLFNN algorithm is a promising model for calculation of CO2 adsorption density on activated carbon. Keywords: Global warming, CO2 adsorption, Activated carbon, Multi-layer feed-forward neural network algorithm, Statistical quality measures
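    A short sketch of the comparison described above, fitting the Langmuir isotherm q = q_max·b·P/(1 + b·P) and a small feed-forward network to the same equilibrium data; the data below are synthetic placeholders and the network size is an assumption, not the architecture reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.neural_network import MLPRegressor

# Hypothetical CO2-on-activated-carbon equilibrium data: pressure (bar) vs adsorbed amount (mmol/g)
P = np.linspace(0.1, 10, 30)
q_obs = 8.0 * 0.6 * P / (1 + 0.6 * P) + np.random.default_rng(1).normal(scale=0.05, size=P.size)

# Langmuir isotherm: q = q_max * b * P / (1 + b * P)
def langmuir(P, q_max, b):
    return q_max * b * P / (1 + b * P)

(q_max, b), _ = curve_fit(langmuir, P, q_obs, p0=[5.0, 0.5])

# Multi-layer feed-forward network fitted to the same data
mlfnn = MLPRegressor(hidden_layer_sizes=(8,), activation='tanh', max_iter=10000, random_state=0)
mlfnn.fit(P.reshape(-1, 1), q_obs)

for name, pred in [("Langmuir", langmuir(P, q_max, b)), ("MLFNN", mlfnn.predict(P.reshape(-1, 1)))]:
    print(name, "RMSE:", np.sqrt(np.mean((pred - q_obs) ** 2)))
```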

  16. Cognitive emotion regulation in children: Reappraisal of emotional faces modulates neural source activity in a frontoparietal network.

    Science.gov (United States)

    Wessing, Ida; Rehbein, Maimu A; Romer, Georg; Achtergarde, Sandra; Dobel, Christian; Zwitserlood, Pienie; Fürniss, Tilman; Junghöfer, Markus

    2015-06-01

    Emotion regulation has an important role in child development and psychopathology. Reappraisal as cognitive regulation technique can be used effectively by children. Moreover, an ERP component known to reflect emotional processing called late positive potential (LPP) can be modulated by children using reappraisal and this modulation is also related to children's emotional adjustment. The present study seeks to elucidate the neural generators of such LPP effects. To this end, children aged 8-14 years reappraised emotional faces, while neural activity in an LPP time window was estimated using magnetoencephalography-based source localization. Additionally, neural activity was correlated with two indexes of emotional adjustment and age. Reappraisal reduced activity in the left dorsolateral prefrontal cortex during down-regulation and enhanced activity in the right parietal cortex during up-regulation. Activity in the visual cortex decreased with increasing age, more adaptive emotion regulation and less anxiety. Results demonstrate that reappraisal changed activity within a frontoparietal network in children. Decreasing activity in the visual cortex with increasing age is suggested to reflect neural maturation. A similar decrease with adaptive emotion regulation and less anxiety implies that better emotional adjustment may be associated with an advance in neural maturation. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  17. Studying the Relationship between High-Latitude Geomagnetic Activity and Parameters of Interplanetary Magnetic Clouds with the Use of Artificial Neural Networks

    Science.gov (United States)

    Barkhatov, N. A.; Revunov, S. E.; Vorobjev, V. G.; Yagodkina, O. I.

    2018-03-01

    The cause-and-effect relations between the dynamics of high-latitude geomagnetic activity (in terms of the AL index) and the type of magnetic cloud in the solar wind are studied with the use of artificial neural networks. A recurrent neural network model has been created based on the search for the optimal physically coupled input and output parameters characterizing the action of a plasma flux belonging to a certain magnetic cloud type on the magnetosphere. It has been shown that, with IMF components as input parameters of the neural networks and allowing for a 90-min prehistory, it is possible to retrieve the AL sequence with an accuracy of up to 80%. The successful retrieval of the AL dynamics from the data used indicates the presence of a close nonlinear connection between the AL index and cloud parameters. The created neural network models can be applied with high efficiency to retrieve the AL index, both in periods of isolated magnetospheric substorms and in periods of interaction between the Earth's magnetosphere and magnetic clouds of different types. The developed model of AL index retrieval can be used to detect magnetic clouds.
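    A rough sketch of how a 90-minute IMF prehistory can be fed to a network to retrieve the AL index. This uses a feed-forward regressor with explicit lagged inputs as a stand-in for the recurrent architecture used in the study, and the IMF/AL series below are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical 1-min series: IMF Bz (nT) and the AL index (nT)
rng = np.random.default_rng(3)
bz = rng.normal(scale=3.0, size=5000)
al = np.convolve(np.minimum(bz, 0), np.ones(60), mode="same") * 20 + rng.normal(scale=30, size=5000)

lag = 90                                                      # 90-min IMF prehistory per sample
X = np.array([bz[t - lag:t] for t in range(lag, len(bz))])
y = al[lag:]

split = 4000
model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=1000, random_state=0)
model.fit(X[:split], y[:split])
print("held-out R^2 for AL retrieval:", model.score(X[split:], y[split:]))
```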

  18. Teaching methodology for modeling reference evapotranspiration with artificial neural networks

    OpenAIRE

    Martí, Pau; Pulido Calvo, Inmaculada; Gutiérrez Estrada, Juan Carlos

    2015-01-01

    [EN] Artificial neural networks are a robust alternative to conventional models for estimating different targets in irrigation engineering, among others, reference evapotranspiration, a key variable for estimating crop water requirements. This paper presents a didactic methodology for introducing students to the application of artificial neural networks for reference evapotranspiration estimation using MATLAB. Apart from learning a specific application of this software wi...

  19. Chronic mild stress impairs latent inhibition and induces region-specific neural activation in CHL1-deficient mice, a mouse model of schizophrenia.

    Science.gov (United States)

    Buhusi, Mona; Obray, Daniel; Guercio, Bret; Bartlett, Mitchell J; Buhusi, Catalin V

    2017-08-30

    Schizophrenia is a neurodevelopmental disorder characterized by abnormal processing of information and attentional deficits. Schizophrenia has a high genetic component but is precipitated by environmental factors, as proposed by the 'two-hit' theory of schizophrenia. Here we compared latent inhibition as a measure of learning and attention, in CHL1-deficient mice, an animal model of schizophrenia, and their wild-type littermates, under no-stress and chronic mild stress conditions. All unstressed mice as well as the stressed wild-type mice showed latent inhibition. In contrast, CHL1-deficient mice did not show latent inhibition after exposure to chronic stress. Differences in neuronal activation (c-Fos-positive cell counts) were noted in brain regions associated with latent inhibition: Neuronal activation in the prelimbic/infralimbic cortices and the nucleus accumbens shell was affected solely by stress. Neuronal activation in basolateral amygdala and ventral hippocampus was affected independently by stress and genotype. Most importantly, neural activation in nucleus accumbens core was affected by the interaction between stress and genotype. These results provide strong support for a 'two-hit' (genes x environment) effect on latent inhibition in CHL1-deficient mice, and identify CHL1-deficient mice as a model of schizophrenia-like learning and attention impairments. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Modelling of solar energy potential in Nigeria using an artificial neural network model

    International Nuclear Information System (INIS)

    Fadare, D.A.

    2009-01-01

    In this study, an artificial neural network (ANN) based model for prediction of solar energy potential in Nigeria (lat. 4-14°N, long. 2-15°E) was developed. Standard multilayered, feed-forward, back-propagation neural networks with different architectures were designed using the neural network toolbox for MATLAB. Geographical and meteorological data of 195 cities in Nigeria for a period of 10 years (1983-1993) from the NASA geo-satellite database were used for training and testing the network. Meteorological and geographical data (latitude, longitude, altitude, month, mean sunshine duration, mean temperature, and relative humidity) were used as inputs to the network, while the solar radiation intensity was used as the output of the network. The results show that the correlation coefficients between the ANN predictions and actual mean monthly global solar radiation intensities for training and testing datasets were higher than 90%, thus suggesting a high reliability of the model for evaluation of solar radiation in locations where solar radiation data are not available. The predicted solar radiation values from the model were given in the form of monthly maps. The monthly mean solar radiation potential in the northern and southern regions ranged from 7.01-5.62 to 5.43-3.54 kWh/m²/day, respectively. A graphical user interface (GUI) was developed for the application of the model. The model can be used easily for estimation of solar radiation for preliminary design of solar applications.

  1. The neural basis of financial risk taking.

    Science.gov (United States)

    Kuhnen, Camelia M; Knutson, Brian

    2005-09-01

    Investors systematically deviate from rationality when making financial decisions, yet the mechanisms responsible for these deviations have not been identified. Using event-related fMRI, we examined whether anticipatory neural activity would predict optimal and suboptimal choices in a financial decision-making task. We characterized two types of deviations from the optimal investment strategy of a rational risk-neutral agent as risk-seeking mistakes and risk-aversion mistakes. Nucleus accumbens activation preceded risky choices as well as risk-seeking mistakes, while anterior insula activation preceded riskless choices as well as risk-aversion mistakes. These findings suggest that distinct neural circuits linked to anticipatory affect promote different types of financial choices and indicate that excessive activation of these circuits may lead to investing mistakes. Thus, consideration of anticipatory neural mechanisms may add predictive power to the rational actor model of economic decision making.

  2. Refrigerant flow through electronic expansion valve: Experiment and neural network modeling

    International Nuclear Information System (INIS)

    Cao, Xiang; Li, Ze-Yu; Shao, Liang-Liang; Zhang, Chun-Lu

    2016-01-01

    Highlights: • Experimental data from different sources were used in comparison of EEV models. • Artificial neural network in EEV modeling is superior to literature correlations. • Artificial neural network with 4-4-1 structure and S function is recommended. • Artificial neural network is flexible for EEV mass flow rate and opening prediction. - Abstract: Electronic expansion valve (EEV) plays a crucial role in controlling the refrigerant mass flow rate of refrigeration or heat pump systems for energy savings. However, complexities in the two-phase throttling process and geometry make accurate modeling of EEV flow characteristics difficult. This paper developed an artificial neural network (ANN) model using refrigerant inlet and outlet pressures, inlet subcooling and EEV opening as ANN inputs, and refrigerant mass flow rate as the ANN output. Both linear and nonlinear transfer functions in the hidden layer were used and compared to each other. Experimental data from multiple sources, including in-house experiments of one EEV with R410A, were used for ANN training and testing. In addition, literature correlations were compared with the ANN as well. Results showed that the ANN model with a nonlinear transfer function worked well in all cases and was much more accurate than the literature correlations. In all cases, the nonlinear ANN predicted refrigerant mass flow rates within ±0.4% average relative deviation (A.D.) and 2.7% standard deviation (S.D.), and it predicted the EEV opening at 0.1% A.D. and 2.1% S.D.
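    A minimal sketch of the recommended 4-4-1 network with a sigmoid ("S") hidden layer: four inputs (inlet pressure, outlet pressure, inlet subcooling, EEV opening), four hidden neurons, one mass-flow output. The weights below are random placeholders; in practice they are fitted to measured EEV data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 4)), np.zeros(4)   # input layer -> 4 sigmoid hidden neurons
W2, b2 = rng.normal(size=4), 0.0                # hidden layer -> mass flow rate output

def eev_mass_flow(inlet_p, outlet_p, subcooling, opening):
    """Inputs follow the abstract: inlet/outlet pressures, inlet subcooling, EEV opening."""
    x = np.array([inlet_p, outlet_p, subcooling, opening])
    return W2 @ sigmoid(W1 @ x + b1) + b2

print("predicted mass flow (arbitrary units):", eev_mass_flow(2.8, 1.1, 5.0, 0.6))
```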

  3. Determination of daily solar ultraviolet radiation using statistical models and artificial neural networks

    Directory of Open Access Journals (Sweden)

    F. J. Barbero

    2006-09-01

    Full Text Available In this study, two different methodologies are used to develop two models for estimating daily solar UV radiation. The first is based on traditional statistical techniques whereas the second is based on artificial neural network methods. Both models use daily solar global broadband radiation as the only measured input. The statistical model is derived from a relationship between the daily UV and global clearness indices, modulated by the relative optical air mass. The inputs to the neural network model were determined from a large number of radiometric and atmospheric parameters using the automatic relevance determination method, although only the daily solar global irradiation, daily global clearness index and relative optical air mass were shown to be the optimal input variables. Both the statistical and neural network models were developed using data measured at Almería (Spain), a semiarid and coastal climate, and tested against data from Table Mountain (Golden, CO, USA), a mountainous and dry environment. Results show that the statistical model performs adequately at both sites for all weather conditions, especially when only snow-free days at Golden were considered (RMSE=4.6%, MBE=–0.1%). The neural network based model provides the best overall estimates at the site where it was trained, but performs inadequately at the Golden site when snow-covered days are included (RMSE=6.5%, MBE=–3.0%). This result confirms that the neural network model does not respond adequately over those ranges of the input parameters that were not used for its development.

  4. Modeling Markov Switching ARMA-GARCH Neural Networks Models and an Application to Forecasting Stock Returns

    Directory of Open Access Journals (Sweden)

    Melike Bildirici

    2014-01-01

    Full Text Available The study has two aims. The first aim is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties to MS-GARCH processes. The second purpose of the study is to augment the MS-GARCH type models with artificial neural networks to benefit from the universal approximation properties to achieve improved forecasting accuracy. Therefore, the proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are further augmented with MLP, Recurrent NN, and Hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are utilized for modeling the daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray’s MS-GARCH model provided promising results, while the best results are obtained for their neural network based counterparts. Further, among the models analyzed, the models based on the Hybrid-MLP and Recurrent-NN, the MS-ARMA-FIAPGARCH-HybridMLP, and MS-ARMA-FIAPGARCH-RNN provided the best forecast performances over the baseline single regime GARCH models and further, over the Gray’s MS-GARCH model. Therefore, the models are promising for various economic applications.

  5. Neural plasticity and its initiating conditions in tinnitus.

    Science.gov (United States)

    Roberts, L E

    2018-03-01

    Deafferentation caused by cochlear pathology (which can be hidden from the audiogram) activates forms of neural plasticity in auditory pathways, generating tinnitus and its associated conditions including hyperacusis. This article discusses tinnitus mechanisms and suggests how these mechanisms may relate to those involved in normal auditory information processing. Research findings from animal models of tinnitus and from electromagnetic imaging of tinnitus patients are reviewed which pertain to the role of deafferentation and neural plasticity in tinnitus and hyperacusis. Auditory neurons compensate for deafferentation by increasing their input/output functions (gain) at multiple levels of the auditory system. Forms of homeostatic plasticity are believed to be responsible for this neural change, which increases the spontaneous and driven activity of neurons in central auditory structures in animals expressing behavioral evidence of tinnitus. Another tinnitus correlate, increased neural synchrony among the affected neurons, is forged by spike-timing-dependent neural plasticity in auditory pathways. Slow oscillations generated by bursting thalamic neurons verified in tinnitus animals appear to modulate neural plasticity in the cortex, integrating tinnitus neural activity with information in brain regions supporting memory, emotion, and consciousness which exhibit increased metabolic activity in tinnitus patients. The latter process may be induced by transient auditory events in normal processing but it persists in tinnitus, driven by phantom signals from the auditory pathway. Several tinnitus therapies attempt to suppress tinnitus through plasticity, but repeated sessions will likely be needed to prevent tinnitus activity from returning owing to deafferentation as its initiating condition.

  6. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can 'learn', automatically, complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence technique in the modeling of industrial processes.

  7. Hybrid Neural Network Approach Based Tool for the Modelling of Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Antonino Laudani

    2015-01-01

    Full Text Available A hybrid neural network approach based tool for identifying the photovoltaic one-diode model is presented. The generalization capabilities of neural networks are used together with the robustness of the reduced form of the one-diode model. Indeed, from the studies performed by the authors and the works present in the literature, it was found that a direct computation of the five parameters via a multiple-input multiple-output neural network is a very difficult task. The reduced form consists of a series of explicit formulae that support the neural network, which, in our case, is aimed at predicting just two of the five parameters identifying the model; the other three parameters are computed by the reduced form. The present hybrid approach is efficient from the computational cost point of view and accurate in the estimation of the five parameters. It constitutes a complete and extremely easy tool suitable to be implemented in a microcontroller-based architecture. Validations are made on about 10000 PV panels belonging to the California Energy Commission database.
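
    For reference, the sketch below evaluates the single-diode equation that the five parameters (Iph, I0, n, Rs, Rsh) define, using a damped fixed-point iteration to handle the implicit current; it is not the authors' hybrid tool or reduced form, and the parameter values are illustrative placeholders rather than fitted ones.

```python
# Hedged sketch of the photovoltaic one-diode model: panel current at voltage V
# solves I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh.
import numpy as np

k_B, q = 1.380649e-23, 1.602176634e-19      # Boltzmann constant, electron charge

def one_diode_current(V, Iph, I0, n, Rs, Rsh, Ns=60, T=298.15, iters=200):
    Vt = k_B * T / q                        # thermal voltage
    I = Iph                                 # initial guess: photo-current
    for _ in range(iters):                  # damped fixed-point iteration
        I_new = (Iph - I0 * (np.exp((V + I * Rs) / (n * Ns * Vt)) - 1.0)
                 - (V + I * Rs) / Rsh)
        I = 0.5 * I + 0.5 * I_new
    return I

# Placeholder parameters for a hypothetical 60-cell module:
print(one_diode_current(V=30.0, Iph=8.2, I0=1e-9, n=1.1, Rs=0.35, Rsh=300.0))
```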

  8. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.
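
    As background for readers unfamiliar with the building block, the sketch below shows a plain radial basis function (RBF) network forward pass; the self-organizing structure, recurrence and NMPC optimization of the record are not reproduced, and the centers, widths and weights are arbitrary illustrative values.

```python
# Hedged sketch of a fixed-structure RBF network forward pass (not the SR-RBF).
import numpy as np

def rbf_forward(x, centers, widths, weights, bias=0.0):
    # hidden layer: Gaussian units phi_j = exp(-||x - c_j||^2 / (2 * s_j^2))
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    # output layer: weighted sum of hidden activations plus a bias
    return float(weights @ phi + bias)

centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, -0.5]])   # illustrative centers
widths = np.array([0.8, 0.8, 0.6])
weights = np.array([0.4, -0.2, 0.7])
print(rbf_forward(np.array([0.3, 0.1]), centers, widths, weights))
```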

  9. The neural basis of the bystander effect--the influence of group size on neural activity when witnessing an emergency.

    Science.gov (United States)

    Hortensius, Ruud; de Gelder, Beatrice

    2014-06-01

    Naturalistic observation and experimental studies in humans and other primates show that observing an individual in need automatically triggers helping behavior. The aim of the present study is to clarify the neurofunctional basis of social influences on individual helping behavior. We investigate whether when participants witness an emergency, while performing an unrelated color-naming task in an fMRI scanner, the number of bystanders present at the emergency influences neural activity in regions related to action preparation. The results show a decrease in activity with the increase in group size in the left pre- and postcentral gyri and left medial frontal gyrus. In contrast, regions related to visual perception and attention show an increase in activity. These results demonstrate the neural mechanisms of social influence on automatic action preparation that is at the core of helping behavior when witnessing an emergency. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    Science.gov (United States)

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
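
    The record above concerns recursive-least-squares training of neural controllers; as a simpler point of reference only, the sketch below implements the classic linear filtered-x LMS update, the ancestor of the filtered-x approach mentioned there. The secondary-path model, signals and step size are synthetic assumptions, not from the paper.

```python
# Hedged sketch: linear filtered-x LMS for single-channel active noise control.
import numpy as np

rng = np.random.default_rng(0)
N, L = 5000, 16                                  # samples, controller taps
s = np.array([1.0, 0.5, 0.25])                   # assumed secondary-path response
x = rng.normal(size=N)                           # reference signal
d = np.convolve(x, [0.0, 0.3, -0.2, 0.1])[:N]    # primary disturbance at sensor

xf = np.convolve(x, s)[:N]                       # reference filtered by secondary path
w = np.zeros(L)                                  # adaptive controller weights
y_hist = np.zeros(len(s))                        # recent controller outputs
mu, e = 0.01, np.zeros(N)

for n in range(L, N):
    x_vec = x[n - L + 1:n + 1][::-1]             # most recent sample first
    y = w @ x_vec                                # controller (anti-noise) output
    y_hist = np.r_[y, y_hist[:-1]]
    e[n] = d[n] - s @ y_hist                     # residual measured at error sensor
    w += mu * e[n] * xf[n - L + 1:n + 1][::-1]   # filtered-x LMS weight update

print("mean |e|, early:", np.abs(e[L:1000]).mean(), "late:", np.abs(e[-1000:]).mean())
```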

  11. TOUCHING MOMENTS: DESIRE MODULATES THE NEURAL ANTICIPATION OF ACTIVE ROMANTIC CARESS

    Directory of Open Access Journals (Sweden)

    Sjoerd J.H. Ebisch

    2014-02-01

    Full Text Available A romantic caress is a basic expression of affiliative behavior and a primary reinforcer. Given its inherent affective valence, its performance also implies the prediction of reward values. For example, touching a person for whom one has strong passionate feelings is likely motivated by a strong desire for physical contact and associated with the anticipation of hedonic experiences. The present study aims at investigating how the anticipatory neural processes of active romantic caress are modulated by the intensity of the desire for affective contact, as reflected by passionate feelings for the other. Functional magnetic resonance imaging was performed in romantically involved partners using a paradigm that allowed us to isolate the specific anticipatory representations of active romantic caress, compared with a control caress, while testing for the relationship between neural activity and measures of feelings of passionate love for the other. The results demonstrated that right posterior insula activity in anticipation of romantic caress significantly co-varied with the intensity of desire for union with the other. This effect was independent of the sensory-affective properties of the performed touch, such as its pleasantness. Furthermore, functional connectivity analysis showed that the same posterior insula cluster interacted with brain regions related to sensory-motor functions as well as to the processing and anticipation of reward. The findings provide insight into the neural substrate mediating between the desire for and the performance of romantic caress. In particular, we propose that anticipatory activity patterns in the posterior insula may modulate subsequent sensory-affective processing of skin-to-skin contact.

  12. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations

    Directory of Open Access Journals (Sweden)

    Sheng-Jun Wang

    2011-06-01

    Full Text Available Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. They are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality. We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. It was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We find that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and self-organized criticality, which are not present in the respective random networks. The underlying mechanism is that each dense module cannot sustain activity on its own, but displays self-organized criticality in the presence of weak perturbations. The hierarchical modular networks provide the coupling among subsystems with self-organized criticality. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of the critical state and the predictability and timing of oscillations for efficient

  13. Neural network model for proton-proton collision at high energy

    International Nuclear Information System (INIS)

    El-Bakry, M.Y.; El-Metwally, K.A.

    2003-01-01

    Developments in artificial intelligence (AI) techniques and their applications to physics have made it feasible to develop and implement new modeling techniques for high-energy interactions. In particular, AI techniques of artificial neural networks (ANN) have recently been used to design and implement more effective models. The primary purpose of this paper is to model the proton-proton (p-p) collision using the ANN technique. Following a review of the conventional techniques and an introduction to the neural network, the paper presents simulation test results using a p-p based ANN model trained with experimental data. The p-p based ANN model calculates the multiplicity distribution of charged particles and the inelastic cross section of the p-p collision at high energies. The results amply demonstrate the feasibility of such a new technique in extracting the collision features and prove its effectiveness.

  14. Learning and adaptation: neural and behavioural mechanisms behind behaviour change

    Science.gov (United States)

    Lowe, Robert; Sandamirskaya, Yulia

    2018-01-01

    This special issue presents perspectives on learning and adaptation as they apply to a number of cognitive phenomena including pupil dilation in humans and attention in robots, natural language acquisition and production in embodied agents (robots), human-robot game play and social interaction, neural-dynamic modelling of active perception and neural-dynamic modelling of infant development in the Piagetian A-not-B task. The aim of the special issue, through its contributions, is to highlight some of the critical neural-dynamic and behavioural aspects of learning as it grounds adaptive responses in robotic- and neural-dynamic systems.

  15. Beyond excitation/inhibition imbalance in multidimensional models of neural circuit changes in brain disorders.

    Science.gov (United States)

    O'Donnell, Cian; Gonçalves, J Tiago; Portera-Cailliau, Carlos; Sejnowski, Terrence J

    2017-10-11

    A leading theory holds that neurodevelopmental brain disorders arise from imbalances in excitatory and inhibitory (E/I) brain circuitry. However, it is unclear whether this one-dimensional model is rich enough to capture the multiple neural circuit alterations underlying brain disorders. Here, we combined computational simulations with analysis of in vivo two-photon Ca²⁺ imaging data from somatosensory cortex of Fmr1 knock-out (KO) mice, a model of Fragile-X Syndrome, to test the E/I imbalance theory. We found that: (1) The E/I imbalance model cannot account for joint alterations in the observed neural firing rates and correlations; (2) Neural circuit function is vastly more sensitive to changes in some cellular components over others; (3) The direction of circuit alterations in Fmr1 KO mice changes across development. These findings suggest that the basic E/I imbalance model should be updated to higher dimensional models that can better capture the multidimensional computational functions of neural circuits.

  16. Social exclusion in middle childhood: rejection events, slow-wave neural activity, and ostracism distress.

    Science.gov (United States)

    Crowley, Michael J; Wu, Jia; Molfese, Peter J; Mayes, Linda C

    2010-01-01

    This study examined neural activity with event-related potentials (ERPs) in middle childhood during a computer-simulated ball-toss game, Cyberball. After experiencing fair play initially, children were ultimately excluded by the other players. We focused specifically on “not my turn” events within fair play and rejection events within social exclusion. Dense-array ERPs revealed that rejection events are perceived rapidly. Condition differences (“not my turn” vs. rejection) were evident in a posterior ERP peaking at 420 ms, consistent with a larger P3 effect for rejection events, indicating that in middle childhood rejection events are differentiated in <500 ms. Condition differences were evident for slow-wave activity (500-900 ms) in the medial frontal cortical region and the posterior occipital-parietal region, with rejection events more negative frontally and more positive posteriorly. Distress from the rejection experience was associated with a more negative frontal slow wave and a larger late positive slow wave, but only for rejection events. Source modeling with GeoSource software suggested that slow-wave neural activity in cortical regions previously identified in functional imaging studies of ostracism, including subgenual cortex, ventral anterior cingulate cortex, and insula, was greater for rejection events vs. “not my turn” events. © 2010 Psychology Press

  17. Neural decoding of visual imagery during sleep.

    Science.gov (United States)

    Horikawa, T; Tamaki, M; Miyawaki, Y; Kamitani, Y

    2013-05-03

    Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.

  18. Relation of obesity to neural activation in response to food commercials.

    Science.gov (United States)

    Gearhardt, Ashley N; Yokum, Sonja; Stice, Eric; Harris, Jennifer L; Brownell, Kelly D

    2014-07-01

    Adolescents view thousands of food commercials annually, but the neural response to food advertising and its association with obesity is largely unknown. This study is the first to examine how neural response to food commercials differs from other stimuli (e.g. non-food commercials and television shows) and to explore how this response may differ by weight status. The blood oxygen level-dependent functional magnetic resonance imaging activation was measured in 30 adolescents ranging from lean to obese in response to food and non-food commercials embedded in a television show. Adolescents exhibited greater activation in regions implicated in visual processing (e.g. occipital gyrus), attention (e.g. parietal lobes), cognition (e.g. temporal gyrus and posterior cerebellar lobe), movement (e.g. anterior cerebellar cortex), somatosensory response (e.g. postcentral gyrus) and reward [e.g. orbitofrontal cortex and anterior cingulate cortex (ACC)] during food commercials. Obese participants exhibited less activation during food relative to non-food commercials in neural regions implicated in visual processing (e.g. cuneus), attention (e.g. posterior cerebellar lobe), reward (e.g. ventromedial prefrontal cortex and ACC) and salience detection (e.g. precuneus). Obese participants did exhibit greater activation in a region implicated in semantic control (e.g. medial temporal gyrus). These findings may inform current policy debates regarding the impact of food advertising to minors. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  19. Standard representation and unified stability analysis for dynamic artificial neural network models.

    Science.gov (United States)

    Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D

    2018-02-01

    An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions. Copyright © 2017. Published by Elsevier Ltd.

  20. Local TEC Modelling and Forecasting using Neural Networks

    Science.gov (United States)

    Tebabal, A.; Radicella, S. M.; Nigussie, M.; Damtie, B.; Nava, B.; Yizengaw, E.

    2017-12-01

    Modelling the Earth's ionospheric characteristics is a focal task for the ionospheric community in order to mitigate ionospheric effects on radio communication and satellite navigation technologies. However, several aspects of modelling are still challenging, for example, the storm-time characteristics. This paper presents modelling efforts of TEC taking into account solar and geomagnetic activity, time of the day and day of the year, using the neural network (NN) modelling technique. The NNs have been designed with GPS-TEC data measured at low- and mid-latitude GPS stations. The training was conducted using the data obtained for the period from 2011 to 2014. The model prediction accuracy was evaluated using data of year 2015. The model results show that the diurnal and seasonal trends of GPS-TEC are well reproduced by the model for the two stations. The seasonal characteristics of GPS-TEC are compared with the NN and NeQuick 2 model predictions, where the latter is driven by the monthly average value of solar flux. It is found that the NN model performs better than the corresponding NeQuick 2 model for the low-latitude region. For the mid-latitude station, both the NN and NeQuick 2 models reproduce the average characteristics of TEC variability quite successfully. An attempt at a one-day-ahead forecast of TEC at the two locations has been made by introducing the previous day's solar flux and geomagnetic index values as drivers. The results show that a reasonable day-ahead forecast of local TEC can be achieved.

  1. Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.

    Science.gov (United States)

    Poon, C S

    1991-01-01

    A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.

  2. Decision Making under Uncertainty: A Neural Model based on Partially Observable Markov Decision Processes

    Directory of Open Access Journals (Sweden)

    Rajesh P N Rao

    2010-11-01

    Full Text Available A fundamental problem faced by animals is learning to select actions based on noisy sensory information and incomplete knowledge of the world. It has been suggested that the brain engages in Bayesian inference during perception but how such probabilistic representations are used to select actions has remained unclear. Here we propose a neural model of action selection and decision making based on the theory of partially observable Markov decision processes (POMDPs). Actions are selected based not on a single optimal estimate of state but on the posterior distribution over states (the belief state). We show how such a model provides a unified framework for explaining experimental results in decision making that involve both information gathering and overt actions. The model utilizes temporal difference (TD) learning for maximizing expected reward. The resulting neural architecture posits an active role for the neocortex in belief computation while ascribing a role to the basal ganglia in belief representation, value computation, and action selection. When applied to the random dots motion discrimination task, model neurons representing belief exhibit responses similar to those of LIP neurons in primate neocortex. The appropriate threshold for switching from information gathering to overt actions emerges naturally during reward maximization. Additionally, the time course of reward prediction error in the model shares similarities with dopaminergic responses in the basal ganglia during the random dots task. For tasks with a deadline, the model learns a decision making strategy that changes with elapsed time, predicting a collapsing decision threshold consistent with some experimental studies. The model provides a new framework for understanding neural decision making and suggests an important role for interactions between the neocortex and the basal ganglia in learning the mapping between probabilistic sensory representations and actions that maximize
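
    To make the two core ingredients of this account concrete, the sketch below pairs a Bayesian belief-state update with action values computed over the belief, including a crude collapsing commitment threshold; the two-state world, observation model and threshold schedule are illustrative assumptions, not the model in the record.

```python
# Hedged sketch: belief update and belief-based action values for a tiny POMDP.
import numpy as np

states = ["left", "right"]
T = np.eye(2)                                  # hidden state is static here
O = np.array([[0.6, 0.4],                      # O[s, o] = P(observation o | state s)
              [0.4, 0.6]])

def belief_update(b, obs):
    # Bayes rule: posterior proportional to likelihood times propagated prior
    b_post = O[:, obs] * (T.T @ b)
    return b_post / b_post.sum()

R_commit = np.array([[+1.0, -1.0],             # reward of committing left/right per true state
                     [-1.0, +1.0]])
rng = np.random.default_rng(0)
b = np.array([0.5, 0.5])
for t in range(50):
    q_commit = R_commit @ b                    # expected reward of each overt action
    if q_commit.max() > 0.9 - 0.05 * t:        # illustrative collapsing threshold
        print(f"t={t}: commit {states[int(q_commit.argmax())]}, belief={b.round(3)}")
        break
    obs = rng.choice(2, p=O[0, :])             # sample an observation (true state: left)
    b = belief_update(b, obs)                  # keep gathering information
```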

  3. Morphogens, modeling and patterning the neural tube: an interview with James Briscoe.

    Science.gov (United States)

    Briscoe, James

    2015-01-20

    James Briscoe has a BSc in Microbiology and Virology (from the University of Warwick, UK) and a PhD in Molecular and Cellular Biology (from the Imperial Cancer Research Fund, London, now Cancer Research UK). He started working on the development of the neural tube in the lab of Tom Jessel as a postdoctoral fellow, establishing that there was graded sonic hedgehog signaling in the ventral neural tube. He is currently a group leader and Head of Division in Developmental Biology at the MRC National Institute for Medical Research (which will become part of the Francis Crick Institute in April 2015). He is working to understand the molecular and cellular mechanisms of graded signaling in the vertebrate neural tube. We interviewed him about the development of ideas on morphogenetic gradients and his own work on modeling the development of the neural tube for our series on modeling in biology.

  4. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  5. Dynamics of a modified Hindmarsh-Rose neural model with random perturbations: Moment analysis and firing activities

    Science.gov (United States)

    Mondal, Argha; Upadhyay, Ranjit Kumar

    2017-11-01

    In this paper, an attempt has been made to understand the activity of the mean membrane voltage and the subsidiary system variables through moment equations (i.e., mean, variance and covariances) under a noisy environment. We consider a biophysically plausible modified Hindmarsh-Rose (H-R) neural system injected with an applied current and exhibiting spiking-bursting phenomena. The effects of the predominant parameters on the dynamical behavior of the modified H-R system are investigated. Numerically, it exhibits period-doubling, period-halving bifurcations and chaos. Further, the nonlinear system has been analyzed for its first- and second-order moments with additive stochastic perturbations. It has been solved using a fourth-order Runge-Kutta method, and the noisy systems by an Euler scheme. It has been demonstrated that the firing properties of neurons that evoke an action potential in a certain parameter space of the large exact system can be estimated using an approximated model. Strong stimulation can cause an increase or decrease in the firing patterns. For a fixed set of parameter values, the firing behavior and the dynamical differences of the collective variables of the large exact and approximated systems are investigated.
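
    For orientation, the sketch below integrates the classic (unmodified) Hindmarsh-Rose neuron with a fourth-order Runge-Kutta step and, for the noisy case, an Euler-Maruyama step, mirroring the numerical scheme named in the record; the parameter values are standard textbook ones, not those of the modified model or its moment equations.

```python
# Hedged sketch: classic Hindmarsh-Rose neuron, deterministic (RK4) and noisy (Euler).
import numpy as np

def hr_rhs(u, I=3.25, a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x0=-1.6):
    x, y, z = u
    return np.array([y - a * x**3 + b * x**2 - z + I,   # membrane potential
                     c - d * x**2 - y,                   # fast recovery variable
                     r * (s * (x - x0) - z)])            # slow adaptation current

def rk4_step(u, dt):
    k1 = hr_rhs(u)
    k2 = hr_rhs(u + 0.5 * dt * k1)
    k3 = hr_rhs(u + 0.5 * dt * k2)
    k4 = hr_rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def euler_maruyama_step(u, dt, sigma, rng):
    noise = np.array([sigma, 0.0, 0.0]) * rng.normal(size=3)   # noise on voltage only
    return u + dt * hr_rhs(u) + np.sqrt(dt) * noise

rng = np.random.default_rng(0)
u_det = u_sto = np.array([-1.5, 0.0, 3.0])
for _ in range(20000):
    u_det = rk4_step(u_det, 0.01)
    u_sto = euler_maruyama_step(u_sto, 0.01, sigma=0.1, rng=rng)
print("deterministic:", u_det.round(3), "noisy:", u_sto.round(3))
```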

  6. Soft tissue deformation modelling through neural dynamics-based reaction-diffusion mechanics.

    Science.gov (United States)

    Zhang, Jinao; Zhong, Yongmin; Gu, Chengfan

    2018-05-30

    Soft tissue deformation modelling forms the basis of development of surgical simulation, surgical planning and robotic-assisted minimally invasive surgery. This paper presents a new methodology for modelling of soft tissue deformation based on reaction-diffusion mechanics via neural dynamics. The potential energy stored in soft tissues due to a mechanical load to deform tissues away from their rest state is treated as the equivalent transmembrane potential energy, and it is distributed in the tissue masses in the manner of reaction-diffusion propagation of nonlinear electrical waves. The reaction-diffusion propagation of mechanical potential energy and nonrigid mechanics of motion are combined to model soft tissue deformation and its dynamics, both of which are further formulated as the dynamics of cellular neural networks to achieve real-time computational performance. The proposed methodology is implemented with a haptic device for interactive soft tissue deformation with force feedback. Experimental results demonstrate that the proposed methodology exhibits nonlinear force-displacement relationship for nonlinear soft tissue deformation. Homogeneous, anisotropic and heterogeneous soft tissue material properties can be modelled through the inherent physical properties of mass points. Graphical abstract Soft tissue deformation modelling with haptic feedback via neural dynamics-based reaction-diffusion mechanics.

  7. Multivariate Analysis and Modeling of Sediment Pollution Using Neural Network Models and Geostatistics

    Science.gov (United States)

    Golay, Jean; Kanevski, Mikhaïl

    2013-04-01

    The present research deals with the exploration and modeling of a complex dataset of 200 measurement points of sediment pollution by heavy metals in Lake Geneva. The fundamental idea was to use multivariate Artificial Neural Networks (ANN) along with geostatistical models and tools in order to improve the accuracy and the interpretability of data modeling. The results obtained with ANN were compared to those of traditional geostatistical algorithms like ordinary (co)kriging and (co)kriging with an external drift. Exploratory data analysis highlighted a great variety of relationships (i.e. linear, non-linear, independence) between the 11 variables of the dataset (i.e. Cadmium, Mercury, Zinc, Copper, Titanium, Chromium, Vanadium and Nickel as well as the spatial coordinates of the measurement points and their depth). Then, exploratory spatial data analysis (i.e. anisotropic variography, local spatial correlations and moving window statistics) was carried out. It was shown that the different phenomena to be modeled were characterized by high spatial anisotropies, complex spatial correlation structures and heteroscedasticity. A feature selection procedure based on General Regression Neural Networks (GRNN) was also applied to create subsets of variables enabling improved predictions during the modeling phase. The basic modeling was conducted using a Multilayer Perceptron (MLP), which is a workhorse of ANNs. MLP models are robust and highly flexible tools which can incorporate, in a nonlinear manner, different kinds of high-dimensional information. In the present research, the input layer was made of either two neurons (spatial coordinates) or three neurons (when depth as auxiliary information could possibly capture an underlying trend) and the output layer was composed of one (univariate MLP) to eight neurons corresponding to the heavy metals of the dataset (multivariate MLP). MLP models with three input neurons can be referred to as Artificial Neural Networks with EXternal

  8. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.
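
    DataHigh itself is a Matlab GUI, but the underlying idea can be sketched briefly: reduce population activity to a latent space and inspect 2-d projections of it. The synthetic "population", the use of PCA and the rotation-angle projection below are illustrative assumptions, not the toolbox's own methods.

```python
# Hedged sketch: latent trajectories from synthetic population activity and one
# rotated 2-d projection of three latent dimensions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_neurons, n_timepoints, n_latents = 40, 100, 5
t = np.linspace(0, 2 * np.pi, n_timepoints)
latents = np.stack([np.sin((k + 1) * t) for k in range(n_latents)], axis=1)
loading = rng.normal(size=(n_latents, n_neurons))
activity = latents @ loading + 0.1 * rng.normal(size=(n_timepoints, n_neurons))

z = PCA(n_components=n_latents).fit_transform(activity)    # latent trajectories

def project_2d(z, theta, dims=(0, 1, 2)):
    """One 2-d projection of three latent dims, rotated by angle theta."""
    u = np.array([np.cos(theta), 0.0, np.sin(theta)])       # first projection axis
    v = np.array([0.0, 1.0, 0.0])                           # second projection axis
    return z[:, list(dims)] @ np.stack([u, v], axis=1)

print("projected trajectory shape:", project_2d(z, theta=0.3).shape)   # (100, 2)
```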

  9. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    Science.gov (United States)

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  10. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250

  11. Generalized Net Model of the Cognitive and Neural Algorithm for Adaptive Resonance Theory 1

    Directory of Open Access Journals (Sweden)

    Todor Petkov

    2013-12-01

    Full Text Available Artificial neural networks are inspired by biological properties of human and animal brains. One type of neural network is called ART [4]. The abbreviation ART stands for Adaptive Resonance Theory, which was invented by Stephen Grossberg in 1976 [5]. ART represents a family of neural networks. It is a cognitive and neural theory that describes how the brain autonomously learns to categorize, recognize and predict objects and events in a changing world. In this paper we introduce a GN model that represents the ART1 neural network learning algorithm [1]. The purpose of this model is to explain when an input vector will be clustered into one of the nodes of the network or rejected by all of them. It can also be used for explanation and optimization of the ART1 learning algorithm.
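
    A minimal sketch of ART1 fast learning, the algorithm the generalized-net model describes, is given below: an input is clustered into the first category that passes the vigilance test, or assigned a new node if every category is reset. The vigilance value and the sample binary patterns are arbitrary illustrative choices.

```python
# Hedged sketch of ART1 fast learning with binary inputs.
import numpy as np

def art1(inputs, rho=0.6, alpha=0.001):
    categories, labels = [], []                    # top-down templates and assignments
    for x in inputs:
        x = np.asarray(x, dtype=int)
        order = sorted(range(len(categories)), key=lambda j: -(
            np.minimum(x, categories[j]).sum() / (alpha + categories[j].sum())))
        for j in order:                            # search in order of choice function
            match = np.minimum(x, categories[j]).sum() / x.sum()
            if match >= rho:                       # vigilance test passed: resonance
                categories[j] = np.minimum(x, categories[j])   # fast learning update
                labels.append(j)
                break
        else:                                      # every node reset: create a new one
            categories.append(x.copy())
            labels.append(len(categories) - 1)
    return labels, categories

patterns = [[1, 1, 0, 0, 0], [1, 1, 1, 0, 0], [0, 0, 0, 1, 1], [0, 0, 1, 1, 1]]
labels, cats = art1(patterns, rho=0.6)
print("cluster assignment:", labels)               # e.g. [0, 0, 1, 1]
```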

  12. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  13. Periodicity and global exponential stability of generalized Cohen-Grossberg neural networks with discontinuous activations and mixed delays.

    Science.gov (United States)

    Wang, Dongshu; Huang, Lihong

    2014-03-01

    In this paper, we investigate the periodic dynamical behaviors for a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides, time-varying and distributed delays. By means of retarded differential inclusions theory and the fixed point theorem of multi-valued maps, the existence of periodic solutions for the neural networks is obtained. After that, we derive some sufficient conditions for the global exponential stability and convergence of the neural networks, in terms of nonsmooth analysis theory with generalized Lyapunov approach. Without assuming the boundedness (or the growth condition) and monotonicity of the discontinuous neuron activation functions, our results will also be valid. Moreover, our results extend previous works not only on discrete time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on discrete time-varying and distributed delayed neural networks with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Time Multiplexed Active Neural Probe with 1356 Parallel Recording Sites

    Directory of Open Access Journals (Sweden)

    Bogdan C. Raducanu

    2017-10-01

    Full Text Available We present a high electrode density and high channel count CMOS (complementary metal-oxide-semiconductor) active neural probe containing 1344 neuron-sized recording pixels (20 µm × 20 µm) and 12 reference pixels (20 µm × 80 µm), densely packed on a 50 µm thick, 100 µm wide, and 8 mm long shank. The active electrodes or pixels consist of dedicated in-situ circuits for signal source amplification, which are located directly under each electrode. The probe supports the simultaneous recording of all 1356 electrodes with sufficient signal-to-noise ratio for typical neuroscience applications. For enhanced performance, further noise reduction can be achieved while using half of the electrodes (678). Both of these numbers considerably surpass the state-of-the-art active neural probes in both electrode count and number of recording channels. The measured input-referred noise in the action potential band is 12.4 µVrms, while using 678 electrodes, with just 3 µW power dissipation per pixel and 45 µW per read-out channel (including data transmission).

  15. Neural activation to monetary reward is associated with amphetamine reward sensitivity.

    Science.gov (United States)

    Crane, Natania A; Gorka, Stephanie M; Weafer, Jessica; Langenecker, Scott A; de Wit, Harriet; Phan, K Luan

    2018-03-14

    One known risk factor for drug use and abuse is sensitivity to rewarding effects of drugs. It is not known whether this risk factor extends to sensitivity to non-drug rewards. In this study with healthy young adults, we examined the association between sensitivity to the subjective rewarding effects of amphetamine and a neural indicator of anticipation of monetary reward. We hypothesized that greater euphorigenic response to amphetamine would be associated with greater neural activation to anticipation of monetary reward (Win > Loss). Healthy participants (N = 61) completed four laboratory sessions in which they received d-amphetamine (20 mg) and placebo in alternating order, providing self-report measures of euphoria and stimulation at regular intervals. At a separate visit 1-3 weeks later, participants completed the guessing reward task (GRT) during fMRI in a drug-free state. Participants reporting greater euphoria after amphetamine also exhibited greater neural activation during monetary reward anticipation in mesolimbic reward regions, including the bilateral caudate and putamen. This is the first study to show a relationship between neural correlates of monetary reward and sensitivity to the subjective rewarding effects of amphetamine in humans. These findings support growing evidence that sensitivity to reward in general is a risk factor for drug use and abuse, and suggest that sensitivity of drug-induced euphoria may reflect a general sensitivity to rewards. This may be an index of vulnerability for drug use or abuse.

  16. Hierarchical modeling of molecular energies using a deep neural network

    Science.gov (United States)

    Lubbers, Nicholas; Smith, Justin S.; Barros, Kipton

    2018-06-01

    We introduce the Hierarchically Interacting Particle Neural Network (HIP-NN) to model molecular properties from datasets of quantum calculations. Inspired by a many-body expansion, HIP-NN decomposes properties, such as energy, as a sum over hierarchical terms. These terms are generated from a neural network—a composition of many nonlinear transformations—acting on a representation of the molecule. HIP-NN achieves the state-of-the-art performance on a dataset of 131k ground state organic molecules and predicts energies with 0.26 kcal/mol mean absolute error. With minimal tuning, our model is also competitive on a dataset of molecular dynamics trajectories. In addition to enabling accurate energy predictions, the hierarchical structure of HIP-NN helps to identify regions of model uncertainty.

  17. Healthy human CSF promotes glial differentiation of hESC-derived neural cells while retaining spontaneous activity in existing neuronal networks

    Directory of Open Access Journals (Sweden)

    Heikki Kiiski

    2013-05-01

    The possibilities of human pluripotent stem cell-derived neural cells, from a basic research tool to a treatment option in regenerative medicine, have been well recognized. These cells also offer an interesting tool for in vitro models of neuronal networks to be used for drug screening and neurotoxicological studies and for patient/disease-specific in vitro models. Here, aiming to develop a reductionistic in vitro human neuronal network model, we tested whether human embryonic stem cell (hESC)-derived neural cells could be cultured in human cerebrospinal fluid (CSF) in order to better mimic the in vivo conditions. Our results showed that CSF altered the differentiation of hESC-derived neural cells towards glial cells at the expense of neuronal differentiation. The proliferation rate was reduced in CSF cultures. However, even though the use of CSF as the culture medium altered the glial vs. neuronal differentiation rate, the pre-existing spontaneous activity of the neuronal networks persisted throughout the study. These results suggest that it is possible to develop fully human cell- and culture-based environments that can be further modified for various in vitro modeling purposes.

  18. Modeling of an industrial process of pleuromutilin fermentation using feed-forward neural networks

    Directory of Open Access Journals (Sweden)

    L. Khaouane

    2013-03-01

    Full Text Available This work investigates the use of artificial neural networks in modeling an industrial fermentation process of Pleuromutilin produced by Pleurotus mutilus in a fed-batch mode. Three feed-forward neural network models characterized by a similar structure (five neurons in the input layer, one hidden layer and one neuron in the output layer) are constructed and optimized with the aim of predicting the evolution of three main bioprocess variables: biomass, substrate and product. Results show a good fit between the predicted and experimental values for each model (the root mean squared errors were 0.4624%, 0.1234 g/L and 0.0016 mg/g, respectively). Furthermore, the comparison between the optimized models and the unstructured kinetic models in terms of simulation results shows that the neural network models gave more significant results. These results encourage further studies to integrate the mathematical formulae extracted from these models into an industrial control loop of the process.

  19. Increased Neural Activation during Picture Encoding and Retrieval in 60-Year-Olds Compared to 20-Year-Olds

    Science.gov (United States)

    Burgmans, S.; van Boxtel, M. P. J.; Vuurman, E. F. P. M.; Evers, E. A. T.; Jolles, J.

    2010-01-01

    Brain aging has been associated with both reduced and increased neural activity during task execution. The purpose of the present study was to investigate whether increased neural activation during memory encoding and retrieval is already present at the age of 60 as well as to obtain more insight into the mechanism behind increased activity.…

  20. Model for neural signaling leap statistics

    International Nuclear Information System (INIS)

    Chevrollier, Martine; Oria, Marcos

    2011-01-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed qualitative change between Normal statistics (with T = 37.5 °C, awaken regime) and Lévy statistics (T = 35.5 °C, sleeping period), characterized by rare events of long range connections.
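
    Since this record (and its duplicate below) rests on the thermodynamic (Nernst) potential, the short sketch below evaluates that potential for potassium at the two temperatures mentioned; the ion concentrations are typical textbook values, not ones used in the paper.

```python
# Hedged sketch: Nernst potential E = (R*T)/(z*F) * ln(c_out / c_in).
import numpy as np

R, F = 8.314462618, 96485.33212          # gas constant, Faraday constant

def nernst(c_out, c_in, z=1, T_celsius=37.5):
    T = T_celsius + 273.15
    return (R * T) / (z * F) * np.log(c_out / c_in)   # volts

for T_c in (37.5, 35.5):
    E = nernst(c_out=5.0, c_in=140.0, z=1, T_celsius=T_c)   # K+ in mM (placeholder)
    print(f"E_K at {T_c} °C: {E * 1e3:.2f} mV")
```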

  1. Model for neural signaling leap statistics

    Science.gov (United States)

    Chevrollier, Martine; Oriá, Marcos

    2011-03-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed qualitative change between Normal statistics (with T = 37.5°C, awaken regime) and Lévy statistics (T = 35.5°C, sleeping period), characterized by rare events of long range connections.

  2. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    Science.gov (United States)

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. Copyright © 2015 the authors 0270-6474/15/357256-08$15.00/0.

  3. Convolutional Neural Networks for Human Activity Recognition Using Body-Worn Sensors

    Directory of Open Access Journals (Sweden)

    Fernando Moya Rueda

    2018-05-01

Full Text Available Human activity recognition (HAR) is a classification task for recognizing human movements. Methods of HAR are of great interest as they have become tools for measuring occurrences and durations of human actions, which are the basis of smart assistive technologies and manual process analysis. Recently, deep neural networks have been deployed for HAR in the context of activities of daily living using multichannel time series. These time series are acquired from body-worn devices, which are composed of different types of sensors. The deep architectures process these measurements to find basic and complex features in human corporal movements and to classify them into a set of human actions. As the devices are worn at different parts of the human body, we propose a novel deep neural network for HAR. This network handles sequence measurements from different body-worn devices separately. An evaluation of the architecture is performed on three datasets, the Opportunity, Pamap2, and an industrial dataset, outperforming the state of the art. In addition, different network configurations are also evaluated. We find that applying convolutions per sensor channel and per body-worn device improves the capabilities of convolutional neural networks (CNNs).
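
    A minimal PyTorch sketch of the per-sensor-channel convolution idea is given below; the channel count, window length, and layer sizes are placeholder assumptions, not the authors' architecture. Grouped 1D convolutions give each sensor channel its own temporal filters before the features are merged for classification.

```python
# Minimal PyTorch sketch (assumed sizes, not the authors' architecture):
# per-channel temporal convolutions via grouped Conv1d, then a classifier.
import torch
import torch.nn as nn

class PerChannelHAR(nn.Module):
    def __init__(self, n_channels=9, n_classes=5, window=100):
        super().__init__()
        # groups=n_channels -> each sensor channel gets its own filters
        self.per_channel = nn.Sequential(
            nn.Conv1d(n_channels, n_channels * 4, kernel_size=5,
                      padding=2, groups=n_channels),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * 4 * (window // 2), 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.per_channel(x))

model = PerChannelHAR()
dummy = torch.randn(8, 9, 100)         # batch of 8 windows, 9 channels, 100 samples
print(model(dummy).shape)              # -> torch.Size([8, 5])
```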

  4. Chitosan derived co-spheroids of neural stem cells and mesenchymal stem cells for neural regeneration.

    Science.gov (United States)

    Han, Hao-Wei; Hsu, Shan-Hui

    2017-10-01

Chitosan has been considered a candidate biomaterial for neural applications. Effective treatments for neurodegeneration or injury to the central nervous system (CNS) are still lacking. Adult neural stem cells (NSCs) represent a promising cell source to treat CNS diseases, but they are limited in number. Here, we developed core-shell spheroids of NSCs (shell) and mesenchymal stem cells (MSCs, core) by co-culturing the cells on a chitosan surface. The NSCs in chitosan-derived co-spheroids displayed a higher survival rate than those in NSC homo-spheroids. The direct interaction of NSCs with MSCs in the co-spheroids increased the Notch activity and differentiation tendency of NSCs. Meanwhile, the differentiation potential of MSCs in chitosan-derived co-spheroids was significantly enhanced toward neural lineages. Furthermore, NSC homo-spheroids and NSC/MSC co-spheroids derived on chitosan were evaluated for their in vivo efficacy in embryonic and adult zebrafish brain injury models. The locomotion activity of zebrafish receiving chitosan-derived NSC homo-spheroids or NSC/MSC co-spheroids was partially rescued in both models. Meanwhile, a higher survival rate was observed in the group of adult zebrafish implanted with chitosan-derived NSC/MSC co-spheroids as compared to NSC homo-spheroids. These findings indicate that chitosan may provide an extracellular matrix-like environment that drives the interaction and morphological assembly between NSCs and MSCs and promotes their neural differentiation capacities, which can be used for neural regeneration. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. QSAR models for prediction study of HIV protease inhibitors using support vector machines, neural networks and multiple linear regression

    Directory of Open Access Journals (Sweden)

    Rachid Darnag

    2017-02-01

Full Text Available Support vector machines (SVMs) represent one of the most promising machine learning (ML) tools that can be applied to develop predictive quantitative structure–activity relationship (QSAR) models using molecular descriptors. Multiple linear regression (MLR) and artificial neural networks (ANNs) were also utilized to construct quantitative linear and nonlinear models to compare with the results obtained by SVM. The prediction results are in good agreement with the experimental values of HIV activity; moreover, the results reveal the superiority of SVM over the MLR and ANN models. The contribution of each descriptor to the structure–activity relationship was evaluated.
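
    The model comparison described above can be sketched with scikit-learn as follows. Random numbers stand in for the molecular descriptors and measured activities, and the hyperparameters are illustrative, not those of the study.

```python
# Sketch of the SVM / MLR / ANN comparison described above, using scikit-learn.
# Random numbers stand in for molecular descriptors and HIV activities.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10))                              # 80 compounds x 10 descriptors (dummy)
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=80)    # dummy activity values

models = {
    "SVM (RBF)": SVR(C=10.0, gamma="scale"),
    "MLR": LinearRegression(),
    "ANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:10s} mean cross-validated R^2 = {r2:.3f}")
```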

  6. Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model.

    Science.gov (United States)

    Wichary, Szymon; Smolen, Tomasz

    2016-01-01

    In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals.
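
    A toy numerical sketch of the three bottom-up steps named in the abstract follows. The cue validities, the gain (arousal) function, and the numbers are illustrative assumptions, not the model's published equations.

```python
# Toy sketch of the three BUMSS steps named above: (1) cue weight computation,
# (2) gain modulation, (3) weighted additive evaluation of alternatives.
# Cue validities, the gain function and the arousal parameter are assumptions.
import numpy as np

cue_validities = np.array([0.9, 0.7, 0.6, 0.55])      # assumed cue validities
alternatives = np.array([[1, 0, 1, 1],                # cue values per alternative
                         [1, 1, 0, 0],
                         [0, 1, 1, 0]], dtype=float)

def decision_values(arousal):
    # (1) cue weight computation from validities
    weights = cue_validities - 0.5
    # (2) gain modulation: higher arousal sharpens the weight profile,
    #     pushing behavior from weighted-additive toward take-the-best
    gained = weights ** arousal
    gained /= gained.sum()
    # (3) weighted additive evaluation of alternatives
    return alternatives @ gained

print("low arousal :", np.round(decision_values(arousal=1.0), 3))
print("high arousal:", np.round(decision_values(arousal=5.0), 3))
```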

  7. Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model

    Science.gov (United States)

    Wichary, Szymon; Smolen, Tomasz

    2016-01-01

    In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals. PMID:27877103

  8. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANN, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise and it does not depend on sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.
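
    A minimal sketch in the spirit of the estimator described above is given below, with an ordinary MLP standing in for the NARX network (an acknowledged simplification). It learns to recover sparse neural events from windows of a synthetic BOLD-like signal; the exponential kernel is a crude stand-in for the hemodynamic response, not the balloon model used with real fMRI data.

```python
# Simplified stand-in for the NARX-style estimator described above (not the
# authors' network): an MLP recovers sparse neural events from a window of
# BOLD samples. Because the hemodynamic response lags neural activity, the
# window extends forward in time from the sample being estimated.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, win = 2000, 20
neural = (rng.random(n) < 0.05).astype(float)      # sparse neural events
hrf = np.exp(-np.arange(win) / 5.0)                # crude response kernel (assumed)
bold = np.convolve(neural, hrf)[:n] + 0.05 * rng.normal(size=n)

# windows of BOLD covering the response to the event at time t
X = np.stack([bold[t:t + win] for t in range(n - win)])
y = neural[:n - win]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])
print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))
```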

  9. Statistical modelling of neural networks in γ-spectrometry applications

    International Nuclear Information System (INIS)

    Vigneron, V.; Martinez, J.M.; Morel, J.; Lepy, M.C.

    1995-01-01

Layered neural networks, a class of models based on neural computation, are applied to the measurement of uranium enrichment, i.e. the isotope ratio 235U/(235U + 236U + 238U). The usual methods consider a limited number of γ-ray and X-ray peaks and require previously calibrated instrumentation for each sample. In practice, however, the source-detector geometry conditions differ critically between measurements, so one way of improving the conventional methods is to reduce the region of interest: this is possible by focusing on the Kα X-ray region, where the three elementary components are present. Real data are used to study the performance of the neural networks. Training is done with a maximum likelihood method to measure the 235U and 238U quantities in infinitely thick samples. (authors). 18 refs., 6 figs., 3 tabs

  10. Dynamic decomposition of spatiotemporal neural signals.

    Directory of Open Access Journals (Sweden)

    Luca Ambrogioni

    2017-05-01

    Full Text Available Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals.

  11. Modeling the dynamics of human brain activity with recurrent neural networks

    NARCIS (Netherlands)

    Güçlü, U.; Gerven, M.A.J. van

    2017-01-01

    Encoding models are used for predicting brain activity in response to sensory stimuli with the objective of elucidating how sensory information is represented in the brain. Encoding models typically comprise a nonlinear transformation of stimuli to features (feature model) and a linear convolution

  12. A direct comparison of appetitive and aversive anticipation: Overlapping and distinct neural activation.

    Science.gov (United States)

    Sege, Christopher T; Bradley, Margaret M; Weymar, Mathias; Lang, Peter J

    2017-05-30

    fMRI studies of reward find increased neural activity in ventral striatum and medial prefrontal cortex (mPFC), whereas other regions, including the dorsolateral prefrontal cortex (dlPFC), anterior cingulate cortex (ACC), and anterior insula, are activated when anticipating aversive exposure. Although these data suggest differential activation during anticipation of pleasant or of unpleasant exposure, they also arise in the context of different paradigms (e.g., preparation for reward vs. threat of shock) and participants. To determine overlapping and unique regions active during emotional anticipation, we compared neural activity during anticipation of pleasant or unpleasant exposure in the same participants. Cues signalled the upcoming presentation of erotic/romantic, violent, or everyday pictures while BOLD activity during the 9-s anticipatory period was measured using fMRI. Ventral striatum and a ventral mPFC subregion were activated when anticipating pleasant, but not unpleasant or neutral, pictures, whereas activation in other regions was enhanced when anticipating appetitive or aversive scenes. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Modal demultiplexing properties of tapered and nanostructured optical fibers for in vivo optogenetic control of neural activity.

    Science.gov (United States)

    Pisanello, Marco; Della Patria, Andrea; Sileo, Leonardo; Sabatini, Bernardo L; De Vittorio, Massimo; Pisanello, Ferruccio

    2015-10-01

Optogenetic approaches to manipulate neural activity have revolutionized the ability of neuroscientists to uncover the functional connectivity underlying brain function. At the same time, the increasing complexity of in vivo optogenetic experiments has increased the demand for new techniques to precisely deliver light into the brain, in particular to illuminate selected portions of the neural tissue. Tapered and nanopatterned gold-coated optical fibers were recently proposed as minimally invasive multipoint light delivery devices, allowing for site-selective optogenetic stimulation in the mammalian brain [Pisanello et al., Neuron 82, 1245 (2014)]. Here we demonstrate that the working principle behind these devices is based on the mode-selective photonic properties of the fiber taper. Using analytical and ray-tracing models that account for the finite conductance of the metal coating, we show that single or multiple optical windows located at specific taper sections outcouple only specific subsets of the guided modes injected into the fiber.

  14. Comparison of different artificial neural network architectures in modeling of Chlorella sp. flocculation.

    Science.gov (United States)

    Zenooz, Alireza Moosavi; Ashtiani, Farzin Zokaee; Ranjbar, Reza; Nikbakht, Fatemeh; Bolouri, Oberon

    2017-07-03

Biodiesel production from microalgae feedstock should be performed after growth and harvesting of the cells, and the most feasible method for harvesting and dewatering of microalgae is flocculation. Flocculation modeling can be used for evaluation and prediction of its performance under different influencing parameters. However, modeling of flocculation in microalgae is not simple and had not yet been performed under all experimental conditions, mostly due to the different behaviors of microalgae cells during the process under different flocculation conditions. In the current study, the modeling of microalgae flocculation is studied with different neural network architectures. The microalga Chlorella sp. was flocculated with ferric chloride under different conditions, and the experimental data were then modeled using artificial neural networks. Neural network architectures of the multilayer perceptron (MLP) and radial basis function types failed to predict the targets successfully, whereas modeling was effective with an ensemble architecture of MLP networks. A comparison between the performance of the ensemble and each individual network demonstrates the ability of the ensemble architecture in microalgae flocculation modeling.
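
    The ensemble idea can be sketched as below: several MLPs are trained with different random initializations and their predictions averaged, then compared against a single network. Random data stand in for the flocculation measurements, and the network sizes are assumptions.

```python
# Sketch of an MLP ensemble versus a single MLP. Random data stand in for the
# flocculation measurements (e.g. dose, pH, mixing time -> efficiency).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 3))                       # dummy process conditions
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=120)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

members = [
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=seed).fit(X_tr, y_tr)
    for seed in range(5)
]
ensemble_pred = np.mean([m.predict(X_te) for m in members], axis=0)

single_rmse = np.sqrt(np.mean((members[0].predict(X_te) - y_te) ** 2))
ensemble_rmse = np.sqrt(np.mean((ensemble_pred - y_te) ** 2))
print(f"single MLP RMSE: {single_rmse:.3f}   ensemble RMSE: {ensemble_rmse:.3f}")
```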

  15. Parasympathetic neural activity accounts for the lowering of exercise heart rate at high altitude

    DEFF Research Database (Denmark)

    Boushel, Robert Christopher; Calbet, J A; Rådegran, G

    2001-01-01

In chronic hypoxia, both heart rate (HR) and cardiac output (Q) are reduced during exercise. The role of parasympathetic neural activity in lowering HR is unresolved, and its influence on Q and oxygen transport at high altitude has never been studied.

  16. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    such as the neural network model is not appropriate if the data is generated by a linear mechanism. Hence, it might be appropriate to test the null of linearity prior to building a nonlinear model. We investigate whether this kind of pretesting improves the forecast accuracy compared to the case where...

  17. Forecasting solar proton event with artificial neural network

    Science.gov (United States)

    Gong, J.; Wang, J.; Xue, B.; Liu, S.; Zou, Z.

Solar proton events (SPEs), relatively rare but more frequent around solar maximum, can create hazardous conditions for spacecraft. An SPE is always accompanied by a flare, which is therefore also called a proton flare. To produce such an eruptive event, a large amount of energy must be accumulated within the active region. We can therefore investigate the character of an active region and its evolutionary trend, together with other indicators such as centimetric radio emission and the soft X-ray background, to evaluate the potential for an SPE in the chosen area. In order to capture the precursors of SPEs hidden behind the observed active-region parameters, we employed AI techniques. A fully connected neural network was chosen for this task. After constructing the network, we trained it with 13 parameters that characterize active regions and their evolutionary trend. More than 80 sets of event parameters were used to teach the neural network to identify whether an active region had SPE potential. We then tested the model on a database of SPE and non-SPE cases that were not used for training. The results showed that 75% of the model's decisions were correct.
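
    The classifier idea described above can be sketched as follows: 13 active-region features in, a proton-event / no-event decision out. Random numbers stand in for the real event database, and the hidden-layer size is an assumption.

```python
# Sketch of a fully connected SPE classifier: 13 active-region parameters in,
# an event / no-event decision out. Dummy data replace the real database.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))                 # 13 active-region parameters (dummy)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("fraction of correct test decisions:", round(clf.score(X_test, y_test), 2))
```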

  18. Animal models for studying neural crest development: is the mouse different?

    Science.gov (United States)

    Barriga, Elias H; Trainor, Paul A; Bronner, Marianne; Mayor, Roberto

    2015-05-01

    The neural crest is a uniquely vertebrate cell type and has been well studied in a number of model systems. Zebrafish, Xenopus and chick embryos largely show consistent requirements for specific genes in early steps of neural crest development. By contrast, knockouts of homologous genes in the mouse often do not exhibit comparable early neural crest phenotypes. In this Spotlight article, we discuss these species-specific differences, suggest possible explanations for the divergent phenotypes in mouse and urge the community to consider these issues and the need for further research in complementary systems. © 2015. Published by The Company of Biologists Ltd.

  19. Dispositional Mindfulness and Depressive Symptomatology: Correlations with Limbic and Self-Referential Neural Activity during Rest

    Science.gov (United States)

    Way, Baldwin M.; Creswell, J. David; Eisenberger, Naomi I.; Lieberman, Matthew D.

    2010-01-01

    To better understand the relationship between mindfulness and depression, we studied normal young adults (n=27) who completed measures of dispositional mindfulness and depressive symptomatology, which were then correlated with: a) Rest: resting neural activity during passive viewing of a fixation cross, relative to a simple goal-directed task (shape-matching); and b) Reactivity: neural reactivity during viewing of negative emotional faces, relative to the same shape-matching task. Dispositional mindfulness was negatively correlated with resting activity in self-referential processing areas, while depressive symptomatology was positively correlated with resting activity in similar areas. In addition, dispositional mindfulness was negatively correlated with resting activity in the amygdala, bilaterally, while depressive symptomatology was positively correlated with activity in the right amygdala. Similarly, when viewing emotional faces, amygdala reactivity was positively correlated with depressive symptomatology and negatively correlated with dispositional mindfulness, an effect that was largely attributable to differences in resting activity. These findings indicate that mindfulness is associated with intrinsic neural activity and that changes in resting amygdala activity could be a potential mechanism by which mindfulness-based depression treatments elicit therapeutic improvement. PMID:20141298

  20. Data acquisition in modeling using neural networks and decision trees

    Directory of Open Access Journals (Sweden)

    R. Sika

    2011-04-01

Full Text Available The paper presents a comparison of selected models from the areas of artificial neural networks and decision trees in relation to the actual conditions of foundry processes. The work contains short descriptions of the algorithms used, their purpose, and the method of data preparation, which is the domain of Data Mining systems. The first part concerns data acquisition carried out in a selected iron foundry, indicating the problems to be solved in the context of casting process modeling. The second part compares the selected algorithms: a decision tree and an artificial neural network, namely CART (Classification And Regression Trees) and BP (Backpropagation) in MLP (Multilayer Perceptron) networks. The aim of the paper is to illustrate the selection of data for modeling, along with its cleaning and reduction, for example due to overly strong correlation between some of the recorded process parameters. It is also shown what results can be obtained using two different approaches: first, modeling with available commercial software such as Statistica; second, modeling step by step in an Excel spreadsheet based on the same algorithm (BP-MLP). The discrepancy between the results obtained from these two approaches originates from a priori assumptions. The Statistica universal software package, when used without awareness of the relations between technological parameters, i.e. without the user having foundry experience and without ranking particular parameters on the basis of the acquisition, cannot give a credible basis for predicting casting quality. A decisive influence of the data acquisition method has also been clearly indicated; acquisition should be conducted according to repeatable measurement and control procedures. This paper is based on about 250 records of actual data for one assortment over a 6-month period, of which only 12 data sets were complete (including two that were used for validation of the neural network) and useful for creating a model. It is definitely too

  1. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning.

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Kwan Chan, Pak; Tin, Chung

    2018-02-01

    Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  2. Deep neural networks for direct, featureless learning through observation: The case of two-dimensional spin models

    Science.gov (United States)

    Mills, Kyle; Tamblyn, Isaac

    2018-03-01

We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4×4 Ising model. Using its success at this task, we motivate the study of the larger 8×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
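
    The training-data side of this task can be sketched as below: the nearest-neighbor Ising energy (periodic boundaries, J = 1) is computed for random 8×8 spin configurations. These (configuration, energy) pairs are what a convolutional network would be trained on; the network itself is omitted here.

```python
# Sketch: generate nearest-neighbor Ising energies (periodic boundaries, J = 1)
# for random 8x8 configurations, as training targets for a CNN regressor.
import numpy as np

def ising_energy(spins):
    # sum over right and down neighbors once each to avoid double counting
    return -(np.sum(spins * np.roll(spins, 1, axis=0)) +
             np.sum(spins * np.roll(spins, 1, axis=1)))

rng = np.random.default_rng(0)
configs = rng.choice([-1, 1], size=(1000, 8, 8))
energies = np.array([ising_energy(c) for c in configs])
print("energy range:", energies.min(), "to", energies.max())   # bounded by +/-128 for 8x8
```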

  3. Novel Adaptive Forward Neural MIMO NARX Model for the Identification of Industrial 3-DOF Robot Arm Kinematics

    Directory of Open Access Journals (Sweden)

    Ho Pham Huy Anh

    2012-10-01

    Full Text Available In this paper, a novel forward adaptive neural MIMO NARX model is used for modelling and identifying the forward kinematics of an industrial 3-DOF robot arm system. The nonlinear features of the forward kinematics of the industrial robot arm drive are thoroughly modelled based on the forward adaptive neural NARX model-based identification process using experimental input-output training data. This paper proposes a novel use of a back propagation (BP algorithm to generate the forward neural MIMO NARX (FNMN model for the forward kinematics of the industrial 3-DOF robot arm. The results show that the proposed adaptive neural NARX model trained by a Back Propagation learning algorithm yields outstanding performance and perfect accuracy.

  4. Semi-empirical neural network models of controlled dynamical systems

    Directory of Open Access Journals (Sweden)

    Mihail V. Egorchev

    2017-12-01

    Full Text Available A simulation approach is discussed for maneuverable aircraft motion as nonlinear controlled dynamical system under multiple and diverse uncertainties including knowledge imperfection concerning simulated plant and its environment exposure. The suggested approach is based on a merging of theoretical knowledge for the plant with training tools of artificial neural network field. The efficiency of this approach is demonstrated using the example of motion modeling and the identification of the aerodynamic characteristics of a maneuverable aircraft. A semi-empirical recurrent neural network based model learning algorithm is proposed for multi-step ahead prediction problem. This algorithm sequentially states and solves numerical optimization subproblems of increasing complexity, using each solution as initial guess for subsequent subproblem. We also consider a procedure for representative training set acquisition that utilizes multisine control signals.

  5. Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines

    Directory of Open Access Journals (Sweden)

Poramate Manoonpong

    2013-02-01

Full Text Available Living creatures, like walking animals, have found fascinating solutions to the problem of locomotion control. Their movements give an impression of elegance, being versatile, energy-efficient, and adaptable. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches, including machine learning algorithms, classical engineering control techniques, and biologically inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to depend largely not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to different degrees in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback, while internal models are used for sensory prediction and state estimation. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns, including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during the stance phase, stepping on or hitting an obstacle during the swing phase, and leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way of developing robust and adaptable machines.

  6. Self-reported empathy and neural activity during action imitation and observation in schizophrenia

    OpenAIRE

    Horan, William P.; Iacoboni, Marco; Cross, Katy A.; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K.; Green, Michael F.

    2014-01-01

Introduction: Although social cognitive impairments are key determinants of functional outcome in schizophrenia, their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and its correlates with self-reported empathy. Methods: 23 schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, o...

  7. Secondary clarifier hybrid model calibration in full scale pulp and paper activated sludge wastewater treatment

    Energy Technology Data Exchange (ETDEWEB)

    Sreckovic, G.; Hall, E.R. [British Columbia Univ., Dept. of Civil Engineering, Vancouver, BC (Canada); Thibault, J. [Laval Univ., Dept. of Chemical Engineering, Ste-Foy, PQ (Canada); Savic, D. [Exeter Univ., School of Engineering, Exeter (United Kingdom)

    1999-05-01

The issue of proper model calibration techniques applied to mechanistic mathematical models of activated sludge systems was discussed. Such calibrations are complex because of the non-linearity and multi-modal objective functions of the process. This paper presents a hybrid model which was developed using two techniques to model and calibrate the secondary clarifier part of an activated sludge system. Genetic algorithms were used to successfully calibrate the settler mechanistic model, and neural networks were used to reduce the error between the mechanistic model output and real-world data. Results of the modelling study show that the long-term response of a one-dimensional settler mechanistic model, calibrated by genetic algorithms and compared to full-scale plant data, can be improved by coupling the calibrated mechanistic model to a black-box model, such as a neural network. 11 refs., 2 figs.

  8. The Synapse Project: Engagement in mentally challenging activities enhances neural efficiency.

    Science.gov (United States)

    McDonough, Ian M; Haber, Sara; Bischof, Gérard N; Park, Denise C

    2015-01-01

    Correlational and limited experimental evidence suggests that an engaged lifestyle is associated with the maintenance of cognitive vitality in old age. However, the mechanisms underlying these engagement effects are poorly understood. We hypothesized that mental effort underlies engagement effects and used fMRI to examine the impact of high-challenge activities (digital photography and quilting) compared with low-challenge activities (socializing or performing low-challenge cognitive tasks) on neural function at pretest, posttest, and one year after the engagement program. In the scanner, participants performed a semantic-classification task with two levels of difficulty to assess the modulation of brain activity in response to task demands. The High-Challenge group, but not the Low-Challenge group, showed increased modulation of brain activity in medial frontal, lateral temporal, and parietal cortex-regions associated with attention and semantic processing-some of which were maintained a year later. This increased modulation stemmed from decreases in brain activity during the easy condition for the High-Challenge group and was associated with time committed to the program, age, and cognition. Sustained engagement in cognitively demanding activities facilitated cognition by increasing neural efficiency. Mentally-challenging activities may be neuroprotective and an important element to maintaining a healthy brain into late adulthood.

  9. Model for neural signaling leap statistics

    Energy Technology Data Exchange (ETDEWEB)

Chevrollier, Martine; Oria, Marcos, E-mail: oria@otica.ufpb.br [Laboratorio de Fisica Atomica e Lasers, Departamento de Fisica, Universidade Federal da Paraíba, Caixa Postal 5086, 58051-900 Joao Pessoa, Paraiba (Brazil)

    2011-03-01

We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (with T = 37.5 °C, awake regime) and Lévy statistics (T = 35.5 °C, sleeping period), the latter characterized by rare events of long-range connections.

  10. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    Directory of Open Access Journals (Sweden)

    Mehrshad Salmasi

    2012-07-01

Full Text Available Active noise control is based on destructive interference between the primary noise and the noise generated by the secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks in active cancellation of sound noise is evaluated. For this purpose, feedforward and recurrent neural networks are designed and trained. After training, the performance of the feedforward and recurrent networks in noise attenuation is compared. We use an Elman network as the recurrent neural network. For the simulations, noise signals from the SPIB database are used. In order to compare the networks appropriately, equal numbers of layers and neurons are considered for the networks. Moreover, the training and test samples are identical. Simulation results show that both feedforward and recurrent neural networks perform well in noise cancellation. As can be seen, the noise attenuation ability of the recurrent neural network is better than that of the feedforward network.

  11. Hand Posture Prediction Using Neural Networks within a Biomechanical Model

    Directory of Open Access Journals (Sweden)

    Marta C. Mora

    2012-10-01

    Full Text Available This paper proposes the use of artificial neural networks (ANNs in the framework of a biomechanical hand model for grasping. ANNs enhance the model capabilities as they substitute estimated data for the experimental inputs required by the grasping algorithm used. These inputs are the tentative grasping posture and the most open posture during grasping. As a consequence, more realistic grasping postures are predicted by the grasping algorithm, along with the contact information required by the dynamic biomechanical model (contact points and normals. Several neural network architectures are tested and compared in terms of prediction errors, leading to encouraging results. The performance of the overall proposal is also shown through simulation, where a grasping experiment is replicated and compared to the real grasping data collected by a data glove device.

  12. Chimera States in Neural Oscillators

    Science.gov (United States)

    Bahar, Sonya; Glaze, Tera

    2014-03-01

    Chimera states have recently been explored both theoretically and experimentally, in various coupled nonlinear oscillators, ranging from phase-oscillator models to coupled chemical reactions. In a chimera state, both coherent and incoherent (or synchronized and desynchronized) states occur simultaneously in populations of identical oscillators. We investigate chimera behavior in a population of neural oscillators using the Huber-Braun model, a Hodgkin-Huxley-like model originally developed to characterize the temperature-dependent bursting behavior of mammalian cold receptors. One population of neurons is allowed to synchronize, with each neuron receiving input from all the others in its group (global within-group coupling). Subsequently, a second population of identical neurons is placed under an identical global within-group coupling, and the two populations are also coupled to each other (between-group coupling). For certain values of the coupling constants, the neurons in the two populations exhibit radically different synchronization behavior. We will discuss the range of chimera activity in the model, and discuss its implications for actual neural activity, such as unihemispheric sleep.

  13. Mitochondrial metabolism in early neural fate and its relevance for neuronal disease modeling.

    Science.gov (United States)

    Lorenz, Carmen; Prigione, Alessandro

    2017-12-01

    Modulation of energy metabolism is emerging as a key aspect associated with cell fate transition. The establishment of a correct metabolic program is particularly relevant for neural cells given their high bioenergetic requirements. Accordingly, diseases of the nervous system commonly involve mitochondrial impairment. Recent studies in animals and in neural derivatives of human pluripotent stem cells (PSCs) highlighted the importance of mitochondrial metabolism for neural fate decisions in health and disease. The mitochondria-based metabolic program of early neurogenesis suggests that PSC-derived neural stem cells (NSCs) may be used for modeling neurological disorders. Understanding how metabolic programming is orchestrated during neural commitment may provide important information for the development of therapies against conditions affecting neural functions, including aging and mitochondrial disorders. Copyright © 2017. Published by Elsevier Ltd.

  14. Investigation and modeling on protective textiles using artificial neural networks for defense applications

    International Nuclear Information System (INIS)

    Ramaiah, Gurumurthy B.; Chennaiah, Radhalakshmi Y.; Satyanarayanarao, Gurumurthy K.

    2010-01-01

Kevlar 29 is a class of Kevlar fiber used for protective applications, primarily by the military and law enforcement agencies for bullet-resistant vests; armors reinforced with Kevlar 29 multilayer fabrics offer 25-40% better fragmentation resistance and provide a better fit with greater comfort. The objective of this study is to investigate and develop an artificial neural network model for analyzing the performance of ballistic fabrics made from Kevlar 29 single-layer fabrics, using their material properties as inputs. Data from fragment simulating projectile (FSP) ballistic penetration measurements at 244 m/s have been used to demonstrate the modeling aspects of artificial neural networks. The neural network models demonstrated in this paper are based on the back propagation (BP) algorithm built into the MATLAB 7.1 software, which is used for studies in science, technology and engineering. In the present research, comparisons are also made between the measured values of the samples selected for building the neural network model and the network predictions. The analysis of the results for the network-predicted and experimental samples used in this study showed good agreement.

  15. Techniques for extracting single-trial activity patterns from large-scale neural recordings

    Science.gov (United States)

    Churchland, Mark M; Yu, Byron M; Sahani, Maneesh; Shenoy, Krishna V

    2008-01-01

    Summary Large, chronically-implanted arrays of microelectrodes are an increasingly common tool for recording from primate cortex, and can provide extracellular recordings from many (order of 100) neurons. While the desire for cortically-based motor prostheses has helped drive their development, such arrays also offer great potential to advance basic neuroscience research. Here we discuss the utility of array recording for the study of neural dynamics. Neural activity often has dynamics beyond that driven directly by the stimulus. While governed by those dynamics, neural responses may nevertheless unfold differently for nominally identical trials, rendering many traditional analysis methods ineffective. We review recent studies – some employing simultaneous recording, some not – indicating that such variability is indeed present both during movement generation, and during the preceding premotor computations. In such cases, large-scale simultaneous recordings have the potential to provide an unprecedented view of neural dynamics at the level of single trials. However, this enterprise will depend not only on techniques for simultaneous recording, but also on the use and further development of analysis techniques that can appropriately reduce the dimensionality of the data, and allow visualization of single-trial neural behavior. PMID:18093826
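
    As a rough illustration of the dimensionality-reduction step discussed above, the sketch below uses PCA as a simple stand-in for the more elaborate single-trial methods: simulated trial-by-trial population activity is projected onto a low-dimensional space to obtain single-trial neural trajectories. The data are synthetic, not array recordings.

```python
# Illustrative sketch: PCA as a stand-in for single-trial dimensionality
# reduction. Simulated spike counts are projected onto two dimensions to give
# one low-dimensional trajectory per trial.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_trials, n_neurons, n_bins = 20, 100, 50
t = np.linspace(0, 1, n_bins)
latent = np.stack([np.sin(2 * np.pi * (t + 0.05 * rng.normal())) for _ in range(n_trials)])
loading = rng.normal(size=n_neurons)
rates = 5 + 3 * latent[:, None, :] * loading[None, :, None]   # trials x neurons x time
counts = rng.poisson(np.clip(rates, 0.1, None))

pca = PCA(n_components=2)
flat = counts.transpose(0, 2, 1).reshape(-1, n_neurons)       # (trials*time) x neurons
trajectories = pca.fit_transform(flat).reshape(n_trials, n_bins, 2)
print("single-trial trajectory array:", trajectories.shape)   # (20, 50, 2)
```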

  16. Neural Networks in Modelling Maintenance Unit Load Status

    Directory of Open Access Journals (Sweden)

    Anđelko Vojvoda

    2002-03-01

    Full Text Available This paper deals with a way of applying a neural networkfor describing se1vice station load in a maintenance unit. Dataacquired by measuring the workload of single stations in amaintenance unit were used in the process of training the neuralnetwork in order to create a model of the obse1ved system.The model developed in this way enables us to make more accuratepredictions over critical overload. Modelling was realisedby developing and using m-functions of the Matlab software.

  17. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure; weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
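
    The idea described above can be sketched as follows (not the authors' network): a small feed-forward network is trained to map sampled biexponential curves directly to their parameters, as an alternative to iterative weighted nonlinear regression. The parameter ranges, noise level, and sampling grid are assumptions.

```python
# Sketch: train an MLP to map noisy biexponential curves to their parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)                       # assumed sampling times

def biexponential(params):
    a1, k1, a2, k2 = params.T
    return (a1[:, None] * np.exp(-np.outer(k1, t)) +
            a2[:, None] * np.exp(-np.outer(k2, t)))

n = 5000
params = np.column_stack([rng.uniform(0.5, 2.0, n),    # a1
                          rng.uniform(0.5, 2.0, n),    # k1 (fast rate)
                          rng.uniform(0.5, 2.0, n),    # a2
                          rng.uniform(0.01, 0.3, n)])  # k2 (slow rate)
curves = biexponential(params) + 0.01 * rng.normal(size=(n, t.size))

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(curves[:4000], params[:4000])
print("held-out R^2 on parameters:", round(net.score(curves[4000:], params[4000:]), 3))
```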

  18. Geometry of neural networks and models with singularities

    International Nuclear Information System (INIS)

    Fukumizu, Kenji

    2001-01-01

    This paper discusses maximum likelihood estimation with unidentifiability of parameters. Unidentifiability is formulated as a conic singularity of the model. It is known that the likelihood ratio may have unusually large order in unidentifiable cases. A sufficient condition for such large order is given and applied to neural networks

  19. A simple method for estimating the entropy of neural activity

    International Nuclear Information System (INIS)

    Berry II, Michael J; Tkačik, Gašper; Dubuis, Julien; Marre, Olivier; Da Silveira, Rava Azeredo

    2013-01-01

    The number of possible activity patterns in a population of neurons grows exponentially with the size of the population. Typical experiments explore only a tiny fraction of the large space of possible activity patterns in the case of populations with more than 10 or 20 neurons. It is thus impossible, in this undersampled regime, to estimate the probabilities with which most of the activity patterns occur. As a result, the corresponding entropy—which is a measure of the computational power of the neural population—cannot be estimated directly. We propose a simple scheme for estimating the entropy in the undersampled regime, which bounds its value from both below and above. The lower bound is the usual ‘naive’ entropy of the experimental frequencies. The upper bound results from a hybrid approximation of the entropy which makes use of the naive estimate, a maximum entropy fit, and a coverage adjustment. We apply our simple scheme to artificial data, in order to check their accuracy; we also compare its performance to those of several previously defined entropy estimators. We then apply it to actual measurements of neural activity in populations with up to 100 cells. Finally, we discuss the similarities and differences between the proposed simple estimation scheme and various earlier methods. (paper)
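
    The "naive" plug-in estimate mentioned above (the lower bound) can be sketched as below: pattern probabilities are estimated from observed frequencies of binary population words and their entropy is computed. The maximum-entropy upper bound is not implemented here, and the simulated spiking is independent, which real data are not.

```python
# Sketch of the naive (plug-in) entropy of binary population activity patterns.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_neurons, n_samples = 10, 5000
patterns = (rng.random((n_samples, n_neurons)) < 0.1).astype(int)   # sparse spiking

counts = Counter(map(tuple, patterns))
p = np.array(list(counts.values()), dtype=float) / n_samples
naive_entropy_bits = -np.sum(p * np.log2(p))
print(f"naive (plug-in) entropy: {naive_entropy_bits:.2f} bits "
      f"(at most {n_neurons} bits for {n_neurons} binary neurons)")
```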

  20. Modified multiblock partial least squares path modeling algorithm with backpropagation neural networks approach

    Science.gov (United States)

    Yuniarto, Budi; Kurniawan, Robert

    2017-03-01

PLS Path Modeling (PLS-PM) differs from covariance-based SEM in that PLS-PM uses an approach based on variance or components; therefore, PLS-PM is also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a method in PLS regression which can be used in PLS Path Modeling, known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a Back Propagation Neural Network approach. The result is that the MBPLS-PM algorithm can be modified using the Back Propagation Neural Network approach to replace the iterative process in the backward and forward steps used to obtain the matrix t and the matrix u. With this modification, the model parameters obtained are not significantly different from the model parameters obtained by the original MBPLS-PM algorithm.

  1. Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.

    Science.gov (United States)

    Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose

    2018-02-22

Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking the traditional approach based on n-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to more strongly influence the translation quality. Computational issues were solved by using a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and n-gram-based systems, showing that the integrated approach seems more promising for n-gram-based systems, even with non-full-quality NNLMs.
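
    A toy sketch of the memoization idea is given below: the softmax normalization constant is cached per context, so repeated contexts during decoding do not recompute the full output layer, while scoring a single word remains cheap. The tiny "language model" is a random matrix, not an actual NNLM, and the caching scheme is an illustrative assumption rather than the paper's exact procedure.

```python
# Toy sketch of memoizing softmax normalization constants per context.
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 1000, 32
W = rng.normal(size=(hidden, vocab))            # dummy output layer

_z_cache = {}

def log_prob(context_key, context_vec, word_id):
    """log P(word | context) with a memoized normalization constant."""
    if context_key not in _z_cache:             # full output layer only once per context
        _z_cache[context_key] = np.log(np.sum(np.exp(context_vec @ W)))
    score = context_vec @ W[:, word_id]         # single-word score is cheap
    return score - _z_cache[context_key]

ctx = rng.normal(size=hidden)
print(log_prob("the cat", ctx, 42))
print(log_prob("the cat", ctx, 7))              # reuses the cached constant
print("cached contexts:", len(_z_cache))
```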

  2. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.

    Science.gov (United States)

    Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng

    2017-01-01

With the impact of global internationalization, the tourism economy has also developed rapidly. Increasing interest in more advanced forecasting methods leads us to innovate in forecasting. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to remove the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. This also shows that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient.
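
    The hybrid strategy can be sketched as below, with two acknowledged substitutions to keep the example self-contained: a simple linear-trend-plus-seasonal-means fit stands in for SARIMA, and an ordinary MLP stands in for the dendritic neural network trained on the residuals.

```python
# Sketch of a "statistical model + neural network on residuals" hybrid.
# Trend + seasonal means stand in for SARIMA; an MLP stands in for the
# dendritic network. The monthly series is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, season = 120, 12                        # ten "years" of monthly arrivals (dummy)
t = np.arange(n)
y = 100 + 0.8 * t + 15 * np.sin(2 * np.pi * t / season) + 5 * rng.normal(size=n)

# Step 1: linear trend + seasonal means (stand-in for the SARIMA component).
trend = np.polyval(np.polyfit(t, y, 1), t)
seasonal = np.array([np.mean((y - trend)[i::season]) for i in range(season)])[t % season]
linear_part = trend + seasonal
residual = y - linear_part

# Step 2: MLP learns short-term structure in the residuals from lagged values.
lags = 3
X = np.stack([residual[i:i + lags] for i in range(n - lags)])
target = residual[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
mlp.fit(X, target)

# Combined one-step-ahead fit = linear/seasonal part + predicted residual.
hybrid_fit = linear_part[lags:] + mlp.predict(X)
rmse = np.sqrt(np.mean((y[lags:] - hybrid_fit) ** 2))
print("in-sample RMSE of hybrid fit:", round(rmse, 2))
```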

  3. Mining Gene Regulatory Networks by Neural Modeling of Expression Time-Series.

    Science.gov (United States)

    Rubiolo, Mariano; Milone, Diego H; Stegmayer, Georgina

    2015-01-01

    Discovering gene regulatory networks from data is one of the most studied topics in recent years. Neural networks can be successfully used to infer an underlying gene network by modeling expression profiles as times series. This work proposes a novel method based on a pool of neural networks for obtaining a gene regulatory network from a gene expression dataset. They are used for modeling each possible interaction between pairs of genes in the dataset, and a set of mining rules is applied to accurately detect the subjacent relations among genes. The results obtained on artificial and real datasets confirm the method effectiveness for discovering regulatory networks from a proper modeling of the temporal dynamics of gene expression profiles.

  4. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    Science.gov (United States)

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  5. SOME QUESTIONS OF THE GRID AND NEURAL NETWORK MODELING OF AIRPORT AVIATION SECURITY CONTROL TASKS

    Directory of Open Access Journals (Sweden)

    N. Elisov Lev

    2017-01-01

Full Text Available The authors' original problem-solving approach to aviation security management in civil aviation, applying parallel computation methods and neural computers, is considered in this work. The formulation of secure-environment modeling problems for grid models and with the use of neural networks is presented. The subject area of this article is airport activity in the field of civil aviation, considered in the context of aviation security, defined as the state of protection of aviation against unlawful interference. The key issue in this subject area is providing aviation safety at an acceptable level. In this case, managing the airport security level becomes one of the main objectives of aviation security. In modern systems, aviation security management is organizational and regulatory, and can no longer keep up with changing requirements, which are increasingly complex and determined by external and internal environmental factors associated with a set of potential threats to airport activity. Optimal control requires the most accurate identification of management parameters and their quantitative assessment. In their latest works, the authors examine the possibility of applying mathematical methods to the modeling of security management processes and procedures. Parallel computing methods and neural network computing for modeling airport security control processes are examined in this work. It is shown that practical application of these methods is possible together with a decision support system in which the decision maker plays the leading role.

  6. Rejuvenation of MPTP-induced human neural precursor cell senescence by activating autophagy

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Liang [East Hospital, Tongji University School of Medicine, Shanghai (China); Dong, Chuanming [East Hospital, Tongji University School of Medicine, Shanghai (China); Department of Anatomy and Neurobiology, The Jiangsu Key Laboratory of Neuroregeneration, Nantong University, Nantong (China); Sun, Chenxi; Ma, Rongjie; Yang, Danjing [East Hospital, Tongji University School of Medicine, Shanghai (China); Zhu, Hongwen, E-mail: hongwen_zhu@hotmail.com [Tianjin Hospital, Tianjin Academy of Integrative Medicine, Tianjin (China); Xu, Jun, E-mail: xunymc2000@yahoo.com [East Hospital, Tongji University School of Medicine, Shanghai (China)

    2015-08-21

    Aging of neural stem cell, which can affect brain homeostasis, may be caused by many cellular mechanisms. Autophagy dysfunction was found in aged and neurodegenerative brains. However, little is known about the relationship between autophagy and human neural stem cell (hNSC) aging. The present study used 1-methyl-4-phenyl-1, 2, 3, 6-tetrahydropyridine (MPTP) to treat neural precursor cells (NPCs) derived from human embryonic stem cell (hESC) line H9 and investigate related molecular mechanisms involved in this process. MPTP-treated NPCs were found to undergo premature senescence [determined by increased senescence-associated-β-galactosidase (SA-β-gal) activity, elevated intracellular reactive oxygen species level, and decreased proliferation] and were associated with impaired autophagy. Additionally, the cellular senescence phenotypes were manifested at the molecular level by a significant increase in p21 and p53 expression, a decrease in SOD2 expression, and a decrease in expression of some key autophagy-related genes such as Atg5, Atg7, Atg12, and Beclin 1. Furthermore, we found that the senescence-like phenotype of MPTP-treated hNPCs was rejuvenated through treatment with a well-known autophagy enhancer rapamycin, which was blocked by suppression of essential autophagy gene Beclin 1. Taken together, these findings reveal the critical role of autophagy in the process of hNSC aging, and this process can be reversed by activating autophagy. - Highlights: • We successfully establish hESC-derived neural precursor cells. • MPTP treatment induced senescence-like state in hESC-derived NPCs. • MPTP treatment induced impaired autophagy of hESC-derived NPCs. • MPTP-induced hESC-derived NPC senescence was rejuvenated by activating autophagy.

  7. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.
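
    The abstract does not give the cell models used, so the sketch below shows only the kind of per-step update a real-time SNN simulation performs: a generic leaky integrate-and-fire population advanced once per time step. All constants (time step, membrane parameters, input currents) are assumptions for illustration, not the published cerebellar model.

        import numpy as np

        # Generic leaky integrate-and-fire (LIF) update, the kind of unit a real-time
        # SNN simulation advances once per time step; all constants are assumed.
        def lif_step(v, i_syn, dt=1e-3, tau=20e-3, v_rest=-65e-3, v_thresh=-50e-3, v_reset=-70e-3, r_m=1e8):
            v = v + dt / tau * (-(v - v_rest) + r_m * i_syn)   # leaky integration of synaptic current
            spiked = v >= v_thresh
            v = np.where(spiked, v_reset, v)                   # reset neurons that crossed threshold
            return v, spiked

        v = np.full(10000, -65e-3)                             # ~10 000 neurons, as in the described model
        rng = np.random.default_rng(0)
        for _ in range(100):                                   # 100 ms of simulated time at dt = 1 ms
            v, spikes = lif_step(v, i_syn=rng.normal(1.5e-10, 5e-11, v.size))
        print("spikes in last step:", int(spikes.sum()))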

  8. Cyclosporin A-Mediated Activation of Endogenous Neural Precursor Cells Promotes Cognitive Recovery in a Mouse Model of Stroke

    Directory of Open Access Journals (Sweden)

    Labeeba Nusrat

    2018-04-01

    Full Text Available Cognitive dysfunction following stroke significantly impacts quality of life and functional independence; yet, despite the prevalence and negative impact of cognitive deficits, post-stroke interventions almost exclusively target motor impairments. As a result, current treatment options are limited in their ability to promote post-stroke cognitive recovery. Cyclosporin A (CsA) has been previously shown to improve post-stroke functional recovery of sensorimotor deficits. Interestingly, CsA is a commonly used immunosuppressant and also acts directly on endogenous neural precursor cells (NPCs) in the neurogenic regions of the brain (the periventricular region and the dentate gyrus). The immunosuppressive and NPC activation effects are mediated by calcineurin-dependent and calcineurin-independent pathways, respectively. To develop a cognitive stroke model, focal bilateral lesions were induced in the medial prefrontal cortex (mPFC) of adult mice using endothelin-1. First, we characterized this stroke model in the acute and chronic phase, using problem-solving and memory-based cognitive tests. mPFC stroke resulted in early and persistent deficits in short-term memory, problem-solving and behavioral flexibility, without affecting anxiety. Second, we investigated the effects of acute and chronic CsA treatment on NPC activation, neuroprotection, and tissue damage. Acute CsA administration post-stroke increased the size of the NPC pool. There was no effect on neurodegeneration or lesion volume. Lastly, we looked at the effects of chronic CsA treatment on cognitive recovery. Long-term CsA administration promoted NPC migration toward the lesion site and rescued cognitive deficits to control levels. This study demonstrates that CsA treatment activates the NPC population, promotes migration of NPCs to the site of injury, and leads to improved cognitive recovery following long-term treatment.

  9. Neural Models for the Broadside-Coupled V-Shaped Microshield Coplanar Waveguides

    Science.gov (United States)

    Guney, K.; Yildiz, C.; Kaya, S.; Turkmen, M.

    2006-09-01

    This article presents a new approach based on multilayered perceptron neural networks (MLPNNs) to calculate the odd- and even-mode characteristic impedances and effective permittivities of the broadside-coupled V-shaped microshield coplanar waveguides (BC-VSMCPWs). Six learning algorithms, Bayesian regularization (BR), Levenberg-Marquardt (LM), quasi-Newton (QN), scaled conjugate gradient (SCG), resilient propagation (RP), and conjugate gradient of Fletcher-Powell (CGF), are used to train the MLPNNs. The neural results are in very good agreement with the results reported elsewhere. When the performances of the neural models are compared with each other, the best and worst results are obtained from the MLPNNs trained by the BR and CGF algorithms, respectively.

  10. Escherichia coli growth modeling using neural network | Shamsudin ...

    African Journals Online (AJOL)

    technique that has the ability to predict with efficient and good performance. Using NARX, a highly accurate model was developed to predict the growth of Escherichia coli (E. coli) based on pH water parameter. The multiparameter portable sensor and spectrophotometer data were used to build and train the neural network.
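
    A NARX model of this kind regresses the next output value on lagged outputs and lagged exogenous inputs (here pH). The sketch below uses a small multilayer perceptron and a synthetic growth curve; the lag order, network size and data are assumptions, not the published setup.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # NARX-style sketch: predict growth y(t) from lagged growth values and lagged
        # exogenous pH readings. Synthetic data and lag orders are assumptions.
        rng = np.random.default_rng(1)
        t = np.arange(300)
        ph = 7.0 + 0.3 * np.sin(t / 20.0) + rng.normal(0, 0.02, t.size)
        y = 1.0 / (1.0 + np.exp(-(t - 150) / 30.0)) + 0.05 * (ph - 7.0)   # toy growth curve driven by pH

        lags = 3
        rows = []
        for t0 in range(lags, len(y)):
            rows.append(np.concatenate([y[t0 - lags:t0], ph[t0 - lags:t0]]))   # lagged outputs + lagged inputs
        X = np.array(rows)
        target = y[lags:]

        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, target)
        print("one-step-ahead fit R^2:", round(model.score(X, target), 3))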

  11. Introducing Artificial Neural Networks through a Spreadsheet Model

    Science.gov (United States)

    Rienzo, Thomas F.; Athappilly, Kuriakose K.

    2012-01-01

    Business students taking data mining classes are often introduced to artificial neural networks (ANN) through point and click navigation exercises in application software. Even if correct outcomes are obtained, students frequently do not obtain a thorough understanding of ANN processes. This spreadsheet model was created to illuminate the roles of…

  12. Dynamic Neural State Identification in Deep Brain Local Field Potentials of Neuropathic Pain.

    Science.gov (United States)

    Luo, Huichun; Huang, Yongzhi; Du, Xueying; Zhang, Yunpeng; Green, Alexander L; Aziz, Tipu Z; Wang, Shouyan

    2018-01-01

    In neuropathic pain, the neurophysiological and neuropathological function of the ventro-posterolateral nucleus of the thalamus (VPL) and the periventricular gray/periaqueductal gray area (PVAG) involves multiple frequency oscillations. Moreover, oscillations related to pain perception and modulation change dynamically over time. Fluctuations in these neural oscillations reflect the dynamic neural states of the nucleus. In this study, an approach to classifying the synchronization level was developed to dynamically identify the neural states. An oscillation extraction model based on windowed wavelet packet transform was designed to characterize the activity level of oscillations. The wavelet packet coefficients sparsely represented the activity level of theta and alpha oscillations in local field potentials (LFPs). Then, a state discrimination model was designed to calculate an adaptive threshold to determine the activity level of oscillations. Finally, the neural state was represented by the activity levels of both theta and alpha oscillations. The relationship between neural states and pain relief was further evaluated. The performance of the state identification approach achieved sensitivity and specificity beyond 80% in simulation signals. Neural states of the PVAG and VPL were dynamically identified from LFPs of neuropathic pain patients. The occurrence of neural states based on theta and alpha oscillations were correlated to the degree of pain relief by deep brain stimulation. In the PVAG LFPs, the occurrence of the state with high activity levels of theta oscillations independent of alpha and the state with low-level alpha and high-level theta oscillations were significantly correlated with pain relief by deep brain stimulation. This study provides a reliable approach to identifying the dynamic neural states in LFPs with a low signal-to-noise ratio by using sparse representation based on wavelet packet transform. Furthermore, it may advance closed-loop deep
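
    A minimal sketch of the oscillation-extraction step, assuming a PyWavelets wavelet packet decomposition with an energy-based activity measure; the wavelet, decomposition depth, band index and threshold rule below are illustrative assumptions rather than the published model.

        import numpy as np
        import pywt

        # Sketch: quantify the activity level of a low-frequency oscillation in a windowed
        # signal via the wavelet packet transform. Wavelet, depth and threshold are assumed.
        fs = 250.0                                   # assumed sampling rate (Hz)
        t = np.arange(0, 2.0, 1.0 / fs)
        lfp = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

        wp = pywt.WaveletPacket(data=lfp, wavelet='db4', mode='symmetric', maxlevel=5)
        nodes = wp.get_level(5, order='freq')        # 32 leaf nodes ordered by frequency
        band_hz = fs / 2 / len(nodes)                # ~3.9 Hz covered by each leaf node
        energies = np.array([np.sum(n.data ** 2) for n in nodes])

        theta_idx = 1                                # leaf covering roughly the theta range
        threshold = np.median(energies)              # simplified adaptive threshold
        print(f"leaf {theta_idx} (~{theta_idx * band_hz:.1f}-{(theta_idx + 1) * band_hz:.1f} Hz) "
              f"energy {energies[theta_idx]:.2f} -> state: "
              f"{'high' if energies[theta_idx] > threshold else 'low'}")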

  13. Artificial neural network modelling in heavy ion collisions

    International Nuclear Information System (INIS)

    El-dahshan, E.; Radi, A.; El-Bakry, M.Y.; El Mashad, M.

    2008-01-01

    The neural network (NN) model and the parton two-fireball model (PTFM) have been used to study the pseudo-rapidity distribution of the shower particles for 12C, 16O, 28Si and 32S on nuclear emulsion. The trained NN shows a better fit to the experimental data than the PTFM calculations. The NN is then used to predict the distributions that are not present in the training set and matches them effectively. The NN simulation results demonstrate the strong modeling capability of neural networks in heavy ion collisions.

  14. Nondestructive pavement evaluation using ILLI-PAVE based artificial neural network models.

    Science.gov (United States)

    2008-09-01

    The overall objective in this research project is to develop advanced pavement structural analysis models for more accurate solutions with fast computation schemes. Soft computing and modeling approaches, specifically the Artificial Neural Network (A...

  15. Artificial neural network models for biomass gasification in fluidized bed gasifiers

    DEFF Research Database (Denmark)

    Puig Arnavat, Maria; Hernández, J. Alfredo; Bruno, Joan Carles

    2013-01-01

    Artificial neural networks (ANNs) have been applied for modeling biomass gasification process in fluidized bed reactors. Two architectures of ANNs models are presented; one for circulating fluidized bed gasifiers (CFB) and the other for bubbling fluidized bed gasifiers (BFB). Both models determine...

  16. Neural computing thermal comfort index for HVAC systems

    International Nuclear Information System (INIS)

    Atthajariyakul, S.; Leephakpreeda, T.

    2005-01-01

    The primary purpose of a heating, ventilating and air conditioning (HVAC) system within a building is to make occupants comfortable. Without real time determination of human thermal comfort, it is not feasible for the HVAC system to yield controlled conditions of the air for human comfort all the time. This paper presents a practical approach to determine human thermal comfort quantitatively via neural computing. The neural network model allows real time determination of the thermal comfort index, where it is not practical to compute the conventional predicted mean vote (PMV) index itself in real time. The feed forward neural network model is proposed as an explicit function of the relation of the PMV index to accessible variables, i.e. the air temperature, wet bulb temperature, globe temperature, air velocity, clothing insulation and human activity. An experiment in an air conditioned office room was done to demonstrate the effectiveness of the proposed methodology. The results show good agreement between the thermal comfort index calculated from the neural network model in real time and those calculated from the conventional PMV model
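
    A minimal sketch of the idea of a fast neural surrogate for a comfort index computed from the six listed inputs. The target below is a toy comfort score, not Fanger's PMV equation, and all ranges and coefficients are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Sketch: learn a fast surrogate of a comfort index from the six inputs named in
        # the abstract. The synthetic target is a stand-in, not the actual PMV equation.
        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.uniform(18, 32, n),    # air temperature (C)
            rng.uniform(15, 28, n),    # wet bulb temperature (C)
            rng.uniform(18, 34, n),    # globe temperature (C)
            rng.uniform(0.0, 1.0, n),  # air velocity (m/s)
            rng.uniform(0.3, 1.5, n),  # clothing insulation (clo)
            rng.uniform(0.8, 2.0, n),  # metabolic rate / activity (met)
        ])
        y = 0.3 * (X[:, 0] - 24) - 1.5 * X[:, 3] + 0.8 * (X[:, 4] - 0.7)   # toy comfort score

        surrogate = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=3000, random_state=0).fit(X, y)
        print("surrogate R^2 on training data:", round(surrogate.score(X, y), 3))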

  17. Prediction of activity coefficients at infinite dilution for organic solutes in ionic liquids by artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nami, Faezeh [Department of Chemistry, Shahid Beheshti University, G.C., Evin-Tehran 1983963113 (Iran, Islamic Republic of); Deyhimi, Farzad, E-mail: f-deyhimi@sbu.ac.i [Department of Chemistry, Shahid Beheshti University, G.C., Evin-Tehran 1983963113 (Iran, Islamic Republic of)

    2011-01-15

    To our knowledge, this work illustrates for the first time the ability of artificial neural network (ANN) to predict activity coefficients at infinite dilution for organic solutes in ionic liquids (ILs). Activity coefficient at infinite dilution (γ∞) is a useful parameter which can be used for the selection of effective solvent in the separation processes. Using a multi-layer feed-forward network with Levenberg-Marquardt optimization algorithm, the resulting ANN model generated activity coefficient at infinite dilution data over a temperature range of 298 to 363 K. The unavailable input data concerning softness (S) of organic compounds (solutes) and dipole moment (μ) of ionic liquids were calculated using GAMESS suites of quantum chemistry programs. The resulting ANN model and its validation are based on the investigation of up to 24 structurally different organic compounds (alkanes, alkenes, alkynes, cycloalkanes, aromatics, and alcohols) in 16 common imidazolium-based ionic liquids, at different temperatures within the range of 298 to 363 K (i.e. a total number of 914 γ∞Solute data points). The results show a satisfactory agreement between the predicted ANN and experimental data, where the root mean square error (RMSE) and the determination coefficient (R²) of the designed neural network were found to be 0.103, 0.996 for training data and 0.128, 0.994 for testing data, respectively.

  18. Prediction of activity coefficients at infinite dilution for organic solutes in ionic liquids by artificial neural network

    International Nuclear Information System (INIS)

    Nami, Faezeh; Deyhimi, Farzad

    2011-01-01

    To our knowledge, this work illustrates for the first time the ability of artificial neural network (ANN) to predict activity coefficients at infinite dilution for organic solutes in ionic liquids (ILs). Activity coefficient at infinite dilution (γ ∞ ) is a useful parameter which can be used for the selection of effective solvent in the separation processes. Using a multi-layer feed-forward network with Levenberg-Marquardt optimization algorithm, the resulting ANN model generated activity coefficient at infinite dilution data over a temperature range of 298 to 363 K. The unavailable input data concerning softness (S) of organic compounds (solutes) and dipole moment (μ) of ionic liquids were calculated using GAMESS suites of quantum chemistry programs. The resulting ANN model and its validation are based on the investigation of up to 24 structurally different organic compounds (alkanes, alkenes, alkynes, cycloalkanes, aromatics, and alcohols) in 16 common imidazolium-based ionic liquids, at different temperatures within the range of 298 to 363 K (i.e. a total number of 914 γ Solute ∞ for each IL data point). The results show a satisfactory agreement between the predicted ANN and experimental data, where, the root mean square error (RMSE) and the determination coefficient (R 2 ) of the designed neural network were found to be 0.103, 0.996 for training data and 0.128, 0.994 for testing data, respectively.

  19. A neural approach for the numerical modeling of two-dimensional magnetic hysteresis

    International Nuclear Information System (INIS)

    Cardelli, E.; Faba, A.; Laudani, A.; Riganti Fulginei, F.; Salvini, A.

    2015-01-01

    This paper deals with a neural network approach to model magnetic hysteresis at macro-magnetic scale. Such approach to the problem seems promising in order to couple the numerical treatment of magnetic hysteresis to FEM numerical solvers of the Maxwell's equations in time domain, as in case of the non-linear dynamic analysis of electrical machines, and other similar devices, making possible a full computer simulation in a reasonable time. The neural system proposed consists of four inputs representing the magnetic field and the magnetic inductions components at each time step and it is trained by 2-d measurements performed on the magnetic material to be modeled. The magnetic induction B is assumed as entry point and the output of the neural system returns the predicted value of the field H at the same time step. A suitable partitioning of the neural system, described in the paper, makes the computing process rather fast. Validations with experimental tests and simulations for non-symmetric and minor loops are presented

  20. COMPARISON OF SUPPORT VECTOR MACHINE (SVM) AND NEURAL NETWORK MODELS TO DETERMINE THE HIGHEST STOCK PRICE PREDICTION ACCURACY

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2017-09-01

    Full Text Available There are many types of investment for making money, one of which is shares. Shares are securities traded by companies in the global capital markets. The stock exchange, also called the stock market, is essentially the activity of buying and selling these investments. To avoid losses in investing, a predictive analysis model with high accuracy is needed, supported by large and accurate data sets. Correct analysis techniques can reduce the risk for investors. Many models are used to analyze and predict stock price movements; in this study the researchers used a neural network (NN) model and a support vector machine (SVM) model. Based on this background, the problem can be formulated as follows: an algorithm that can predict stock prices is needed, together with a high accuracy rate obtained by enlarging the data set used for prediction; the two algorithms are investigated so that it can be concluded which one yields the highest, most accurate prediction. The purpose of this study was therefore to compare the Neural Network and Support Vector Machine algorithms and determine which gives the highest stock price forecasting accuracy, as judged by the RMSE error value. Using share values from the Hong Kong stock index recorded from 20 July 2016 at 16:26 until 15 September 2016 at 17:40, as many as 729 data sets at 5-minute intervals, the two models were taken through training, learning and then testing; the result is that the neural network model achieved a prediction accuracy of 0.503 +/- 0.009 (micro 503) while using the support vector machine model

  1. Neural activity, neural connectivity, and the processing of emotionally valenced information in older adults: links with life satisfaction.

    Science.gov (United States)

    Waldinger, Robert J; Kensinger, Elizabeth A; Schulz, Marc S

    2011-09-01

    This study examines whether differences in late-life well-being are linked to how older adults encode emotionally valenced information. Using fMRI with 39 older adults varying in life satisfaction, we examined how viewing positive and negative images would affect activation and connectivity of an emotion-processing network. Participants engaged most regions within this network more robustly for positive than for negative images, but within the PFC this effect was moderated by life satisfaction, with individuals higher in satisfaction showing lower levels of activity during the processing of positive images. Participants high in satisfaction showed stronger correlations among network regions-particularly between the amygdala and other emotion processing regions-when viewing positive, as compared with negative, images. Participants low in satisfaction showed no valence effect. Findings suggest that late-life satisfaction is linked with how emotion-processing regions are engaged and connected during processing of valenced information. This first demonstration of a link between neural recruitment and late-life well-being suggests that differences in neural network activation and connectivity may account for the preferential encoding of positive information seen in some older adults.

  2. A hybrid model based on neural networks for biomedical relation extraction.

    Science.gov (United States)

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Zhang, Shaowu; Sun, Yuanyuan; Yang, Liang

    2018-05-01

    Biomedical relation extraction can automatically extract high-quality biomedical relations from biomedical texts, which is a vital step for the mining of biomedical knowledge hidden in the literature. Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are two major neural network models for biomedical relation extraction. Neural network-based methods for biomedical relation extraction typically focus on the sentence sequence and employ RNNs or CNNs to learn the latent features from sentence sequences separately. However, RNNs and CNNs have their own advantages for biomedical relation extraction. Combining RNNs and CNNs may improve biomedical relation extraction. In this paper, we present a hybrid model for the extraction of biomedical relations that combines RNNs and CNNs. First, the shortest dependency path (SDP) is generated based on the dependency graph of the candidate sentence. To make full use of the SDP, we divide the SDP into a dependency word sequence and a relation sequence. Then, RNNs and CNNs are employed to automatically learn the features from the sentence sequence and the dependency sequences, respectively. Finally, the output features of the RNNs and CNNs are combined to detect and extract biomedical relations. We evaluate our hybrid model using five public protein-protein interaction (PPI) corpora and a drug-drug interaction (DDI) corpus. The experimental results suggest that the advantages of RNNs and CNNs in biomedical relation extraction are complementary. Combining RNNs and CNNs can effectively boost biomedical relation extraction performance. Copyright © 2018 Elsevier Inc. All rights reserved.
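
    The general fusion idea can be sketched as follows (sizes, vocabulary and the fusion step are illustrative assumptions, not the authors' architecture): an LSTM encodes the sentence sequence, a one-dimensional CNN encodes a dependency-path sequence, and the concatenated feature vectors feed a relation classifier.

        import torch
        import torch.nn as nn

        # Sketch of the RNN+CNN fusion idea: an LSTM over the sentence sequence, a 1-D CNN
        # over a dependency-path sequence, concatenated features for classification.
        class HybridRelationModel(nn.Module):
            def __init__(self, vocab=5000, emb=100, hidden=128, n_filters=64, n_classes=2):
                super().__init__()
                self.embed = nn.Embedding(vocab, emb)
                self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
                self.conv = nn.Conv1d(emb, n_filters, kernel_size=3, padding=1)
                self.classifier = nn.Linear(2 * hidden + n_filters, n_classes)

            def forward(self, sentence_ids, dep_path_ids):
                _, (h, _) = self.lstm(self.embed(sentence_ids))          # sentence features (RNN)
                rnn_feat = torch.cat([h[0], h[1]], dim=-1)               # concat fwd/bwd final states
                c = self.conv(self.embed(dep_path_ids).transpose(1, 2))  # dependency features (CNN)
                cnn_feat = torch.max(torch.relu(c), dim=2).values        # max-pool over positions
                return self.classifier(torch.cat([rnn_feat, cnn_feat], dim=-1))

        model = HybridRelationModel()
        logits = model(torch.randint(0, 5000, (4, 30)), torch.randint(0, 5000, (4, 12)))
        print(logits.shape)   # torch.Size([4, 2])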

  3. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  4. Activational and effort-related aspects of motivation: neural mechanisms and implications for psychopathology

    Science.gov (United States)

    Yohn, Samantha E.; López-Cruz, Laura; San Miguel, Noemí; Correa, Mercè

    2016-01-01

    Abstract Motivation has been defined as the process that allows organisms to regulate their internal and external environment, and control the probability, proximity and availability of stimuli. As such, motivation is a complex process that is critical for survival, which involves multiple behavioural functions mediated by a number of interacting neural circuits. Classical theories of motivation suggest that there are both directional and activational aspects of motivation, and activational aspects (i.e. speed and vigour of both the instigation and persistence of behaviour) are critical for enabling organisms to overcome work-related obstacles or constraints that separate them from significant stimuli. The present review discusses the role of brain dopamine and related circuits in behavioural activation, exertion of effort in instrumental behaviour, and effort-related decision-making, based upon both animal and human studies. Impairments in behavioural activation and effort-related aspects of motivation are associated with psychiatric symptoms such as anergia, fatigue, lassitude and psychomotor retardation, which cross multiple pathologies, including depression, schizophrenia, and Parkinson’s disease. Therefore, this review also attempts to provide an interdisciplinary approach that integrates findings from basic behavioural neuroscience, behavioural economics, clinical neuropsychology, psychiatry, and neurology, to provide a coherent framework for future research and theory in this critical field. Although dopamine systems are a critical part of the brain circuitry regulating behavioural activation, exertion of effort, and effort-related decision-making, mesolimbic dopamine is only one part of a distributed circuitry that includes multiple neurotransmitters and brain areas. Overall, there is a striking similarity between the brain areas involved in behavioural activation and effort-related processes in rodents and in humans. Animal models of effort

  5. Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing

    2015-05-01

    This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions that ensure that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others are unstable. The importance of the derived results is that it reveals that the discontinuous neural networks can have greater storage capacity than the continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located in not only saturated regions, but also unsaturated regions, due to the non-monotonic structure of discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
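
    The flavor of the result can be illustrated in one dimension. In the sketch below, the dynamics dx/dt = -x + f(x) with a non-monotonic piecewise linear f has five equilibria, three of them stable; the breakpoints and slopes are assumptions chosen for the demonstration and are not the paper's conditions.

        import numpy as np

        # Illustration of the 5-equilibria / 3-stable-equilibria phenomenon in one dimension.
        def f(x):
            x = np.asarray(x, dtype=float)
            return np.piecewise(
                x,
                [x < -1.5, (x >= -1.5) & (x < -0.5), (x >= -0.5) & (x < 0.5), (x >= 0.5) & (x < 1.5), x >= 1.5],
                [-2.0, lambda v: 4 * v + 4, lambda v: -2 * v + 1, lambda v: 4 * v - 2, 4.0])

        def settle(x0, steps=5000, dt=0.01):
            x = x0
            for _ in range(steps):
                x = x + dt * (-x + f(x))     # dx/dt = -x + f(x)
            return float(x)

        for x0 in (-3.0, 0.0, 1.0, 3.0):
            print(f"x(0) = {x0:+.1f}  ->  equilibrium ~ {settle(x0): .3f}")
        # Stable equilibria near -2, 1/3 and 4; the two crossings on the steep rising
        # segments (~ -1.33 and ~ 0.67) are unstable and are never reached.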

  6. Model-Based Fault Diagnosis in Electric Drive Inverters Using Artificial Neural Network

    National Research Council Canada - National Science Library

    Masrur, Abul; Chen, ZhiHang; Zhang, Baifang; Jia, Hongbin; Murphey, Yi-Lu

    2006-01-01

    .... A normal model and various faulted models of the inverter-motor combination were developed, and voltages and current signals were generated from those models to train an artificial neural network for fault diagnosis...

  7. Artificial neural network modeling of jatropha oil fueled diesel engine for emission predictions

    Directory of Open Access Journals (Sweden)

    Ganapathy Thirunavukkarasu

    2009-01-01

    Full Text Available This paper deals with artificial neural network modeling of diesel engine fueled with jatropha oil to predict the unburned hydrocarbons, smoke, and NOx emissions. The experimental data from the literature have been used as the data base for the proposed neural network model development. For training the networks, the injection timing, injector opening pressure, plunger diameter, and engine load are used as the input layer. The outputs are hydrocarbons, smoke, and NOx emissions. The feed forward back propagation learning algorithms with two hidden layers are used in the networks. For each output a different network is developed with required topology. The artificial neural network models for hydrocarbons, smoke, and NOx emissions gave R2 values of 0.9976, 0.9976, and 0.9984 and mean percent errors of smaller than 2.7603, 4.9524, and 3.1136, respectively, for training data sets, while the R2 values of 0.9904, 0.9904, and 0.9942, and mean percent errors of smaller than 6.5557, 6.1072, and 4.4682, respectively, for testing data sets. The best linear fit of regression to the artificial neural network models of hydrocarbons, smoke, and NOx emissions gave the correlation coefficient values of 0.98, 0.995, and 0.997, respectively.

  8. Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches

    Science.gov (United States)

    Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang

    2016-01-01

    Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574312

  9. Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches.

    Science.gov (United States)

    Ma, Ying; Shaik, Mohammed A; Kim, Sharon H; Kozberg, Mariel G; Thibodeaux, David N; Zhao, Hanzhi T; Yu, Hang; Hillman, Elizabeth M C

    2016-10-05

    Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results.This article is part of the themed issue 'Interpreting BOLD: a dialogue between cognitive and cellular neuroscience'. © 2016 The Authors.

  10. GABA and Gap Junctions in the Development of Synchronized Activity in Human Pluripotent Stem Cell-Derived Neural Networks

    Directory of Open Access Journals (Sweden)

    Meeri Eeva-Liisa Mäkinen

    2018-03-01

    Full Text Available The electrical activity of the brain arises from single neurons communicating with each other. However, how single neurons interact during early development to give rise to neural network activity remains poorly understood. We studied the emergence of synchronous neural activity in human pluripotent stem cell (hPSC)-derived neural networks simultaneously on a single-neuron level and network level. The contribution of gamma-aminobutyric acid (GABA) and gap junctions to the development of synchronous activity in hPSC-derived neural networks was studied with GABA agonist and antagonist and by blocking gap junctional communication, respectively. We characterized the dynamics of the network-wide synchrony in hPSC-derived neural networks with high spatial resolution (calcium imaging) and temporal resolution microelectrode array (MEA). We found that the emergence of synchrony correlates with a decrease in very strong GABA excitation. However, the synchronous network was found to consist of a heterogeneous mixture of synchronously active cells with variable responses to GABA, GABA agonists and gap junction blockers. Furthermore, we show how single-cell distributions give rise to the network effect of GABA, GABA agonists and gap junction blockers. Finally, based on our observations, we suggest that the earliest form of synchronous neuronal activity depends on gap junctions and a decrease in GABA induced depolarization but not on GABAA mediated signaling.

  11. GABA and Gap Junctions in the Development of Synchronized Activity in Human Pluripotent Stem Cell-Derived Neural Networks

    Science.gov (United States)

    Mäkinen, Meeri Eeva-Liisa; Ylä-Outinen, Laura; Narkilahti, Susanna

    2018-01-01

    The electrical activity of the brain arises from single neurons communicating with each other. However, how single neurons interact during early development to give rise to neural network activity remains poorly understood. We studied the emergence of synchronous neural activity in human pluripotent stem cell (hPSC)-derived neural networks simultaneously on a single-neuron level and network level. The contribution of gamma-aminobutyric acid (GABA) and gap junctions to the development of synchronous activity in hPSC-derived neural networks was studied with GABA agonist and antagonist and by blocking gap junctional communication, respectively. We characterized the dynamics of the network-wide synchrony in hPSC-derived neural networks with high spatial resolution (calcium imaging) and temporal resolution microelectrode array (MEA). We found that the emergence of synchrony correlates with a decrease in very strong GABA excitation. However, the synchronous network was found to consist of a heterogeneous mixture of synchronously active cells with variable responses to GABA, GABA agonists and gap junction blockers. Furthermore, we show how single-cell distributions give rise to the network effect of GABA, GABA agonists and gap junction blockers. Finally, based on our observations, we suggest that the earliest form of synchronous neuronal activity depends on gap junctions and a decrease in GABA induced depolarization but not on GABAA mediated signaling. PMID:29559893

  12. Enhanced Dynamic Model of Pneumatic Muscle Actuator with Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Alexander Hošovský

    2015-01-01

    Full Text Available To make effective use of model-based control system design techniques, one needs a good model which captures system’s dynamic properties in the range of interest. Here an analytical model of pneumatic muscle actuator with two pneumatic artificial muscles driving a rotational joint is developed. Use of analytical model makes it possible to retain the physical interpretation of the model and the model is validated using open-loop responses. Since it was considered important to design a robust controller based on this model, the effect of changed moment of inertia (as a representation of uncertain parameter was taken into account and compared with nominal case. To improve the accuracy of the model, these effects are treated as a disturbance modeled using the recurrent (Elman neural network. Recurrent neural network was preferred over feedforward type due to its better long-term prediction capabilities well suited for simulation use of the model. The results confirm that this method improves the model performance (tested for five of the measured variables: joint angle, muscle pressures, and muscle forces while retaining its physical interpretation.
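
    An Elman network is a simple recurrent network with a tanh hidden layer fed back through a context layer, which is essentially what a single-layer tanh RNN provides. The sketch below uses such a layer as a disturbance model mapping actuator signals to a residual; the signal names and sizes are assumptions, not the published model.

        import torch
        import torch.nn as nn

        # Sketch of an Elman-style (simple recurrent) network used as a disturbance model:
        # a tanh RNN maps a sequence of actuator inputs to the residual between an
        # analytical model and measurements.
        class ElmanDisturbanceModel(nn.Module):
            def __init__(self, n_inputs=3, hidden=16):
                super().__init__()
                self.rnn = nn.RNN(n_inputs, hidden, nonlinearity='tanh', batch_first=True)  # Elman layer
                self.out = nn.Linear(hidden, 1)

            def forward(self, u):                    # u: (batch, time, n_inputs)
                h, _ = self.rnn(u)
                return self.out(h).squeeze(-1)       # predicted residual per time step

        model = ElmanDisturbanceModel()
        u = torch.randn(8, 200, 3)                   # e.g. muscle pressures and joint angle over 200 steps
        residual_true = torch.randn(8, 200)
        loss = nn.functional.mse_loss(model(u), residual_true)
        loss.backward()                              # a training step with an optimizer would follow
        print("initial MSE:", round(loss.item(), 3))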

  13. Prediction of Clinical Deterioration in Hospitalized Adult Patients with Hematologic Malignancies Using a Neural Network Model.

    Directory of Open Access Journals (Sweden)

    Scott B Hu

    Full Text Available Clinical deterioration (ICU transfer and cardiac arrest) occurs during approximately 5-10% of hospital admissions. Existing prediction models have a high false positive rate, leading to multiple false alarms and alarm fatigue. We used routine vital signs and laboratory values obtained from the electronic medical record (EMR) along with a machine learning algorithm called a neural network to develop a prediction model that would increase the predictive accuracy and decrease false alarm rates. Retrospective cohort study. The hematologic malignancy unit in an academic medical center in the United States. Adult patients admitted to the hematologic malignancy unit from 2009 to 2010. None. Vital signs and laboratory values were obtained from the electronic medical record system and then used as predictors (features). A neural network was used to build a model to predict clinical deterioration events (ICU transfer and cardiac arrest). The performance of the neural network model was compared to the VitalPac Early Warning Score (ViEWS). Five hundred sixty-five consecutive total admissions were available with 43 admissions resulting in clinical deterioration. Using simulation, the neural network outperformed the ViEWS model with a positive predictive value of 82% compared to 24%, respectively. We developed and tested a neural network-based prediction model for clinical deterioration in patients hospitalized in the hematologic malignancy unit. Our neural network model outperformed an existing model, substantially increasing the positive predictive value, allowing the clinician to be confident in the alarm raised. This system can be readily implemented in a real-time fashion in existing EMR systems.
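
    A minimal sketch of this kind of pipeline, using synthetic vitals and a small neural network classifier, with the positive predictive value (precision) as the reported metric; the features, thresholds and network size are assumptions, not the study's data or model.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import precision_score

        # Sketch: train a small neural network on routine vitals to flag deterioration
        # and report the positive predictive value (precision). Data are synthetic.
        rng = np.random.default_rng(0)
        n = 3000
        X = np.column_stack([
            rng.normal(80, 15, n),     # heart rate
            rng.normal(18, 4, n),      # respiratory rate
            rng.normal(120, 20, n),    # systolic blood pressure
            rng.normal(37.0, 0.6, n),  # temperature
            rng.normal(9, 3, n),       # white blood cell count
        ])
        risk = 0.04 * (X[:, 0] - 80) + 0.2 * (X[:, 1] - 18) - 0.05 * (X[:, 2] - 120)
        y = (risk + rng.normal(0, 1, n) > 2.0).astype(int)     # relatively rare deterioration events

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
        print("positive predictive value:", round(precision_score(y_te, clf.predict(X_te)), 2))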

  14. Generalized versus non-generalized neural network model for multi-lead inflow forecasting at Aswan High Dam

    Directory of Open Access Journals (Sweden)

    A. El-Shafie

    2011-03-01

    Full Text Available Artificial neural networks (ANN) have been found efficient, particularly in problems where characteristics of the processes are stochastic and difficult to describe using explicit mathematical models. However, time series prediction based on ANN algorithms is fundamentally difficult and faces problems. One of the major shortcomings is the search for the optimal input pattern in order to enhance the forecasting capabilities for the output. The second challenge is the over-fitting problem during the training procedure and this occurs when ANN loses its generalization. In this research, autocorrelation and cross correlation analyses are suggested as a method for searching the optimal input pattern. On the other hand, two generalized methods, namely Regularized Neural Network (RNN) and Ensemble Neural Network (ENN) models, are developed to overcome the drawbacks of classical ANN models. Using Generalized Neural Network (GNN) helped avoid over-fitting of training data which was observed as a limitation of classical ANN models. Real inflow data collected over the last 130 years at Lake Nasser was used to train, test and validate the proposed model. Results show that the proposed GNN model outperforms non-generalized neural network and conventional auto-regressive models and it could provide accurate inflow forecasting.
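
    The two generalization strategies named above can be sketched as follows (the inflow series, lag order and hyperparameters are synthetic assumptions): a weight-decay-regularized network and an ensemble of networks whose predictions are averaged.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Sketch of (i) a regularized network (weight decay via alpha) and (ii) an ensemble
        # of networks with averaged predictions, on a synthetic seasonal inflow series.
        rng = np.random.default_rng(0)
        months = np.arange(360)
        inflow = 50 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)

        lags = 12
        X = np.array([inflow[t - lags:t] for t in range(lags, len(inflow))])
        y = inflow[lags:]
        X_tr, X_te, y_tr, y_te = X[:250], X[250:], y[:250], y[250:]

        regularized = MLPRegressor(hidden_layer_sizes=(20,), alpha=1e-2, max_iter=4000,
                                   random_state=0).fit(X_tr, y_tr)
        ensemble = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=4000, random_state=s).fit(X_tr, y_tr)
                    for s in range(5)]
        ens_pred = np.mean([m.predict(X_te) for m in ensemble], axis=0)

        print("regularized network R^2:", round(regularized.score(X_te, y_te), 3))
        print("ensemble R^2:", round(1 - np.sum((y_te - ens_pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2), 3))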

  15. Application of Artificial Neural Networks in the Heart Electrical Axis Position Conclusion Modeling

    Science.gov (United States)

    Bakanovskaya, L. N.

    2016-08-01

    The article addresses the construction of a model for determining the heart's electrical axis position using an artificial neural network. The inputs of the neural network are the values of the Q, R and S deflections, and the output is the value of the heart's electrical axis position. The network is trained by the error backpropagation method. The test results allow the conclusion that the created neural network determines the axis position with a high degree of accuracy.

  16. Estimating tree bole volume using artificial neural network models for four species in Turkey.

    Science.gov (United States)

    Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V

    2010-01-01

    Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used and produced the Back propagation (BPANN) and the Cascade Correlation (CCANN) Artificial Neural Network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented and the advantages and limitations of each one of them are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species since they gave unbiased results and were superior to almost all methods in terms of error (%) expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
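
    The validation volumes were obtained by summing roughly 1 m sections with Smalian's formula, which averages the end cross-sectional areas of each section. A short sketch of that calculation follows (the diameters below are made-up sample values):

        import math

        # Smalian's formula for a bole section: V = L * (A1 + A2) / 2, where A1 and A2 are
        # the cross-sectional areas at the two ends.
        def section_volume(d1_cm, d2_cm, length_m):
            a1 = math.pi * (d1_cm / 200.0) ** 2      # diameter in cm -> radius in m -> area in m^2
            a2 = math.pi * (d2_cm / 200.0) ** 2
            return length_m * (a1 + a2) / 2.0        # section volume in m^3

        # Aggregate ~1 m sections along a measured bole, as described for the validation data.
        diameters_cm = [32.0, 30.5, 28.8, 26.4, 23.1, 19.0, 13.5]   # measured at 1 m intervals (sample values)
        total = sum(section_volume(diameters_cm[i], diameters_cm[i + 1], 1.0)
                    for i in range(len(diameters_cm) - 1))
        print(f"bole volume ~ {total:.3f} m^3")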

  17. A Neural Network Model of the Visual Short-Term Memory

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Hansen, Lars Kai

    2009-01-01

    In this paper a neural network model of Visual Short-Term Memory (VSTM) is presented. The model links closely with Bundesen’s (1990) well-established mathematical theory of visual attention. We evaluate the model’s ability to fit experimental data from a classical whole and partial report study...

  18. Multistability of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2015-11-01

    The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than the ones with Mexican-hat-type activation function. In addition, unlike most existing multistability results of neural networks with monotonic activation functions, those obtained 3^n locally stable equilibrium points are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Global convergence of periodic solution of neural networks with discontinuous activation functions

    International Nuclear Information System (INIS)

    Huang Lihong; Guo Zhenyuan

    2009-01-01

    In this paper, without assuming boundedness and monotonicity of the activation functions, we establish some sufficient conditions ensuring the existence and global asymptotic stability of periodic solution of neural networks with discontinuous activation functions by using the Yoshizawa-like theorem and constructing proper Lyapunov function. The obtained results improve and extend previous works.

  20. Artificial neural network models for prediction of intestinal permeability of oligopeptides

    Directory of Open Access Journals (Sweden)

    Kim Min-Kook

    2007-07-01

    Full Text Available Abstract Background Oral delivery is a highly desirable property for candidate drugs under development. Computational modeling could provide a quick and inexpensive way to assess the intestinal permeability of a molecule. Although there have been several studies aimed at predicting the intestinal absorption of chemical compounds, there have been no attempts to predict intestinal permeability on the basis of peptide sequence information. To develop models for predicting the intestinal permeability of peptides, we adopted an artificial neural network as a machine-learning algorithm. The positive control data consisted of intestinal barrier-permeable peptides obtained by the peroral phage display technique, and the negative control data were prepared from random sequences. Results The capacity of our models to make appropriate predictions was validated by statistical indicators including sensitivity, specificity, enrichment curve, and the area under the receiver operating characteristic (ROC) curve (the ROC score). The training and test set statistics indicated that our models were of strikingly good quality and could discriminate between permeable and random sequences with a high level of confidence. Conclusion We developed artificial neural network models to predict the intestinal permeabilities of oligopeptides on the basis of peptide sequence information. Both binary and VHSE (principal components score Vectors of Hydrophobic, Steric and Electronic properties) descriptors produced statistically significant training models; the models with simple neural network architectures showed slightly greater predictive power than those with complex ones. We anticipate that our models will be applicable to the selection of intestinal barrier-permeable peptides for generating peptide drugs or peptidomimetics.

  1. ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation

    OpenAIRE

    Visin, Francesco; Ciccone, Marco; Romero, Adriana; Kastner, Kyle; Cho, Kyunghyun; Bengio, Yoshua; Matteucci, Matteo; Courville, Aaron

    2015-01-01

    We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally ...

  2. ARTIFICIAL NEURAL NETWORK AND FUZZY LOGIC CONTROLLER FOR GTAW MODELING AND CONTROL

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An artificial neural network(ANN) and a self-adjusting fuzzy logic controller(FLC) for modeling and control of gas tungsten arc welding(GTAW) process are presented. The discussion is mainly focused on the modeling and control of the weld pool depth with ANN and the intelligent control for weld seam tracking with FLC. The proposed neural network can produce highly complex nonlinear multi-variable model of the GTAW process that offers the accurate prediction of welding penetration depth. A self-adjusting fuzzy controller used for seam tracking adjusts the control parameters on-line automatically according to the tracking errors so that the torch position can be controlled accurately.

  3. Neural Network-Based Model for Landslide Susceptibility and Soil Longitudinal Profile Analyses

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Barari, Amin; Choobbasti, A. J.

    2011-01-01

    The purpose of this study was to create an empirical model for assessing the landslide risk potential at Savadkouh Azad University, which is located in the rural surroundings of Savadkouh, about 5 km from the city of Pol-Sefid in northern Iran. The soil longitudinal profile of the city of Babol......, located 25 km from the Caspian Sea, also was predicted with an artificial neural network (ANN). A multilayer perceptron neural network model was applied to the landslide area and was used to analyze specific elements in the study area that contributed to previous landsliding events. The ANN models were...... studies in landslide susceptibility zonation....

  4. Validation of artificial neural network models for predicting biochemical markers associated with male infertility.

    Science.gov (United States)

    Vickram, A S; Kamini, A Rao; Das, Raja; Pathy, M Ramesh; Parameswari, R; Archana, K; Sridharan, T B

    2016-08-01

    Seminal fluid is the secretion from many glands comprised of several organic and inorganic compounds including free amino acids, proteins, fructose, glucosidase, zinc, and other scavenging elements like Mg(2+), Ca(2+), K(+), and Na(+). Therefore, in the view of development of novel approaches and proper diagnosis to male infertility, overall understanding of the biochemical and molecular composition and its role in regulation of sperm quality is highly desirable. Perhaps this can be achieved through artificial intelligence. This study was aimed to elucidate and predict various biochemical markers present in human seminal plasma with three different neural network models. A total of 177 semen samples were collected for this research (both fertile and infertile samples) and immediately processed to prepare a semen analysis report, based on the protocol of the World Health Organization (WHO [2010]). The semen samples were then categorized into oligoasthenospermia (n=35), asthenospermia (n=35), azoospermia (n=22), normospermia (n=34), oligospermia (n=34), and control (n=17). The major biochemical parameters like total protein content, fructose, glucosidase, and zinc content were elucidated by standard protocols. All the biochemical markers were predicted by using three different artificial neural network (ANN) models with semen parameters as inputs. Of the three models, the back propagation neural network model (BPNN) yielded the best results with mean absolute error 0.025, -0.080, 0.166, and -0.057 for protein, fructose, glucosidase, and zinc, respectively. This suggests that BPNN can be used to predict biochemical parameters for the proper diagnosis of male infertility in assisted reproductive technology (ART) centres. AAS: absorption spectroscopy; AI: artificial intelligence; ANN: artificial neural networks; ART: assisted reproductive technology; BPNN: back propagation neural network model; DT: decision tress; MLP: multilayer perceptron; PESA: percutaneous

  5. Evolutionary Design of Convolutional Neural Networks for Human Activity Recognition in Sensor-Rich Environments.

    Science.gov (United States)

    Baldominos, Alejandro; Saez, Yago; Isasi, Pedro

    2018-04-23

    Human activity recognition is a challenging problem for context-aware systems and applications. It is gaining interest due to the ubiquity of different sensor sources, wearable smart objects, ambient sensors, etc. This task is usually approached as a supervised machine learning problem, where a label is to be predicted given some input data, such as the signals retrieved from different sensors. For tackling the human activity recognition problem in sensor network environments, in this paper we propose the use of deep learning (convolutional neural networks) to perform activity recognition using the publicly available OPPORTUNITY dataset. Instead of manually choosing a suitable topology, we will let an evolutionary algorithm design the optimal topology in order to maximize the classification F1 score. After that, we will also explore the performance of committees of the models resulting from the evolutionary process. Results analysis indicates that the proposed model was able to perform activity recognition within a heterogeneous sensor network environment, achieving very high accuracies when tested with new sensor data. Based on all conducted experiments, the proposed neuroevolutionary system has proved to be able to systematically find a classification model which is capable of outperforming previous results reported in the state-of-the-art, showing that this approach is useful and improves upon previously manually-designed architectures.
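
    In the spirit of the approach above, topology search can be scored by cross-validated F1. The sketch below substitutes a simple random search over small multilayer-perceptron topologies for the paper's evolutionary algorithm over convolutional architectures, and uses a synthetic data set; it illustrates the evaluation loop only, not the published method.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier

        # Topology search scored by macro-F1: random candidate topologies stand in for
        # the evolutionary population, and cross-validation provides the fitness.
        X, y = make_classification(n_samples=600, n_features=30, n_informative=10, random_state=0)
        rng = np.random.default_rng(0)

        best_f1, best_topology = -1.0, None
        for _ in range(10):                                      # candidate "individuals"
            layers = tuple(int(rng.integers(8, 65)) for _ in range(int(rng.integers(1, 4))))
            clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=800, random_state=0)
            f1 = cross_val_score(clf, X, y, cv=3, scoring='f1_macro').mean()
            if f1 > best_f1:
                best_f1, best_topology = f1, layers

        print("best topology:", best_topology, "macro-F1:", round(best_f1, 3))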

  6. Modeling of chemical exergy of agricultural biomass using improved general regression neural network

    International Nuclear Information System (INIS)

    Huang, Y.W.; Chen, M.Q.; Li, Y.; Guo, J.

    2016-01-01

    A comprehensive evaluation for energy potential contained in agricultural biomass was a vital step for energy utilization of agricultural biomass. The chemical exergy of typical agricultural biomass was evaluated based on the second law of thermodynamics. The chemical exergy was significantly influenced by C and O elements rather than H element. The standard entropy of the samples also was examined based on their element compositions. Two predicted models of the chemical exergy were developed, which referred to a general regression neural network model based upon the element composition, and a linear model based upon the high heat value. An auto-refinement algorithm was firstly developed to improve the performance of regression neural network model. The developed general regression neural network model with K-fold cross-validation had a better ability for predicting the chemical exergy than the linear model, which had lower predicted errors (±1.5%). - Highlights: • Chemical exergies of agricultural biomass were evaluated based upon fifty samples. • Values for the standard entropy of agricultural biomass samples were calculated. • A linear relationship between chemical exergy and HHV of samples was detected. • An improved GRNN prediction model for the chemical exergy of biomass was developed.
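
    A general regression neural network is, at its core, a Gaussian-kernel weighted average of the training targets (a Nadaraya-Watson estimator) with a single smoothing parameter. The sketch below shows that core computation; the elemental compositions, exergy values and smoothing factor are made-up numbers, and the paper's auto-refinement of the smoothing parameter is not reproduced.

        import numpy as np

        # Minimal general regression neural network (GRNN): a Gaussian-kernel weighted
        # average of training targets.
        def grnn_predict(X_train, y_train, x, sigma=0.8):
            d2 = np.sum((X_train - x) ** 2, axis=1)          # squared distances to training patterns
            w = np.exp(-d2 / (2.0 * sigma ** 2))             # pattern-layer activations
            return float(np.sum(w * y_train) / np.sum(w))    # summation/output layer

        # Columns: C, H, O content (wt%, dry basis); target: chemical exergy (MJ/kg). Made-up values.
        X_train = np.array([[48.0, 6.0, 43.0],
                            [50.5, 5.8, 41.0],
                            [45.2, 6.2, 46.0],
                            [52.0, 5.5, 39.5]])
        y_train = np.array([19.2, 20.4, 18.1, 21.0])

        print("predicted exergy:", round(grnn_predict(X_train, y_train, np.array([49.0, 6.0, 42.0])), 2), "MJ/kg")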

  7. NOVEL APPROACH TO IMPROVE GEOCENTRIC TRANSLATION MODEL PERFORMANCE USING ARTIFICIAL NEURAL NETWORK TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Yao Yevenyo Ziggah

    Full Text Available Abstract: The geocentric translation model (GTM) has in recent times not gained much popularity in coordinate transformation research due to the accuracy it can attain. Accurate transformation of coordinates is a major goal and an essential procedure for the solution of a number of important geodetic problems. Therefore, motivated by the successful application of artificial intelligence techniques in geodesy, this study developed, tested, and compared a novel technique capable of improving the accuracy of the GTM. First, the GTM based on official parameters (OP) and on new parameters determined using the arithmetic mean (AM) was applied to transform coordinates from the global WGS84 datum to the local Accra datum. On the basis of the results, the new parameters (AM) attained a maximum horizontal position error of 1.99 m compared to the 2.75 m attained by the OP. In line with this, the artificial neural network technologies of the backpropagation neural network (BPNN), radial basis function neural network (RBFNN), and generalized regression neural network (GRNN) were then used to compensate for the GTM-generated errors based on the AM parameters, to obtain a new coordinate transformation model. The newly implemented models offered a significant improvement in the horizontal position error, from 1.99 m to 0.93 m.
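
    The two-stage idea can be sketched as follows: apply a three-parameter geocentric translation, then train a small network on the residual error of that translation and subtract its prediction. The coordinates, shift values, and distortion term below are synthetic placeholders, not the WGS84-to-Accra parameters from the study.

```python
# Hedged sketch: GTM shift followed by ANN residual compensation (assumed, toy data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
true_shift = np.array([150.0, 30.0, -200.0])
source = rng.uniform(0, 1000, size=(200, 3))                         # source-datum coordinates
target = source + true_shift + 0.5 * np.sin(source[:, :1] / 200.0)   # smooth residual distortion

approx_shift = np.array([149.0, 31.0, -199.0])                       # stands in for the GTM parameters
gtm_pred = source + approx_shift
residual = target - gtm_pred                                         # what the ANN must learn

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0))
net.fit(source, residual)
corrected = gtm_pred + net.predict(source)
print("GTM RMSE:     ", np.sqrt(((target - gtm_pred) ** 2).mean()))
print("GTM+ANN RMSE: ", np.sqrt(((target - corrected) ** 2).mean()))
```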

  8. An Inventory Controlled Supply Chain Model Based on Improved BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wei He

    2013-01-01

    Full Text Available Inventory control is a key factor for reducing supply chain cost and increasing customer satisfaction. However, predicting the inventory level is a challenging task for managers. As one of the widely used techniques for inventory control, the standard BP neural network suffers from problems such as a low convergence rate and poor prediction accuracy. Aiming at these problems, a new, fast-converging BP neural network model for predicting the inventory level is developed in this paper. By adding an error offset, this paper deduces a new chain propagation rule and a new weight formula. The paper also applies the improved BP neural network model to predict the inventory level of an automotive parts company. The results show that the improved algorithm not only significantly outperforms the standard algorithm but also surpasses some other improved BP algorithms in both convergence rate and prediction accuracy.
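
    One common way to accelerate standard backpropagation is to add a small constant to the activation derivative so that weight updates do not stall when neurons saturate; whether this matches the paper's error-offset formulation is an assumption, and the sketch below is only meant to show where such an offset would enter the chain rule. The inventory features and target are toy placeholders.

```python
# Hedged sketch of single-hidden-layer BP with an offset added to the sigmoid derivative.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(100, 4))      # e.g. past inventory / demand features (placeholders)
y = X.sum(axis=1, keepdims=True)           # toy target: next-period inventory level

W1, b1 = rng.normal(scale=0.5, size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, offset = 0.05, 0.1                     # "offset" is the assumed additive derivative term

for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    grad_out = err / len(X)
    # Backpropagation; note the "+ offset" in the hidden-layer derivative term.
    grad_h = (grad_out @ W2.T) * (h * (1 - h) + offset)
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(axis=0)

print("final MSE:", float((err ** 2).mean()))
```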

  9. A Novel Modeling Method for Aircraft Engine Using Nonlinear Autoregressive Exogenous (NARX) Models Based on Wavelet Neural Networks

    Science.gov (United States)

    Yu, Bing; Shu, Wenjun; Cao, Can

    2018-05-01

    A novel modeling method for aircraft engines using nonlinear autoregressive exogenous (NARX) models based on wavelet neural networks is proposed. The identification principle and process based on wavelet neural networks are studied, and the modeling scheme based on NARX is proposed. Then, time series data sets from three types of aircraft engines are utilized to build the corresponding NARX models, and these NARX models are validated by simulation. The results show that all the best NARX models can capture the original aircraft engine's dynamic characteristics well with high accuracy. For every type of engine, the relative identification errors between its best NARX model and the component-level model are no more than 3.5%, and most of them are within 1%.
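
    The NARX structure regresses the next output on lagged outputs and lagged exogenous inputs. In the hedged sketch below, an ordinary MLP stands in for the paper's wavelet neural network purely to keep the example self-contained, and the engine signal is a synthetic toy plant.

```python
# Hedged sketch of NARX identification: y(t) = f(y(t-1..t-ny), u(t-1..t-nu)).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n, ny, nu = 500, 2, 2                                    # series length and lag orders
u = rng.uniform(size=n)                                  # exogenous input (e.g. fuel flow, assumed)
y = np.zeros(n)
for t in range(1, n):                                    # toy nonlinear plant
    y[t] = 0.6 * y[t - 1] + 0.3 * np.tanh(u[t - 1]) + 0.01 * rng.normal()

rows = range(max(ny, nu), n)
X = np.array([np.r_[y[t - ny:t], u[t - nu:t]] for t in rows])   # [y(t-2), y(t-1), u(t-2), u(t-1)]
target = y[max(ny, nu):]

narx = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
narx.fit(X[:400], target[:400])
rmse = np.sqrt(((narx.predict(X[400:]) - target[400:]) ** 2).mean())
print("one-step-ahead RMSE:", rmse)
```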

  10. Copula Entropy coupled with Wavelet Neural Network Model for Hydrological Prediction

    Science.gov (United States)

    Wang, Yin; Yue, JiGuang; Liu, ShuGuang; Wang, Li

    2018-02-01

    Artificial neural networks (ANNs) have been widely used in hydrological forecasting. In this paper, an attempt has been made to find an alternative method for hydrological prediction by combining Copula Entropy (CE) with a Wavelet Neural Network (WNN). CE theory permits the calculation of mutual information (MI) to select input variables, which avoids the limitations of traditional linear correlation coefficient (LCC) analysis. Wavelet analysis can locate exactly any changes in the dynamical patterns of the sequence, and coupled with the strong nonlinear fitting ability of an ANN, the WNN model was able to provide a good fit to the hydrological data. Finally, the hybrid model (CE+WNN) was applied to the daily water level of the Taihu Lake Basin and compared with CE-ANN, LCC-WNN, and LCC-ANN models. Results showed that the hybrid model produced better estimates of the hydrograph properties than the latter models.
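
    The input-selection step amounts to ranking candidate lagged predictors by their mutual information with the target instead of by linear correlation. In the sketch below, scikit-learn's nonparametric MI estimator stands in for the copula-entropy estimator used in the paper, and the water-level series is synthetic.

```python
# Hedged sketch of MI-based input selection for a lagged hydrological predictor set.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(5)
level = np.sin(np.arange(1000) / 20.0) + 0.1 * rng.normal(size=1000)  # toy daily water level

max_lag = 7
X = np.column_stack([level[max_lag - k:-k] for k in range(1, max_lag + 1)])  # lags 1..7
y = level[max_lag:]

mi = mutual_info_regression(X, y, random_state=0)
ranked = sorted(zip(range(1, max_lag + 1), mi), key=lambda kv: -kv[1])
print("lags ranked by mutual information:", ranked)
# The top-ranked lags would then feed the wavelet neural network.
```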

  11. Copper is an endogenous modulator of neural circuit spontaneous activity.

    Science.gov (United States)

    Dodani, Sheel C; Firl, Alana; Chan, Jefferson; Nam, Christine I; Aron, Allegra T; Onak, Carl S; Ramos-Torres, Karla M; Paek, Jaeho; Webster, Corey M; Feller, Marla B; Chang, Christopher J

    2014-11-18

    For reasons that remain insufficiently understood, the brain requires among the highest levels of metals in the body for normal function. The traditional paradigm for this organ and others is that fluxes of alkali and alkaline earth metals are required for signaling, but transition metals are maintained in static, tightly bound reservoirs for metabolism and protection against oxidative stress. Here we show that copper is an endogenous modulator of spontaneous activity, a property of functional neural circuitry. Using Copper Fluor-3 (CF3), a new fluorescent Cu(+) sensor for one- and two-photon imaging, we show that neurons and neural tissue maintain basal stores of loosely bound copper that can be attenuated by chelation, which define a labile copper pool. Targeted disruption of these labile copper stores by acute chelation or genetic knockdown of the CTR1 (copper transporter 1) copper channel alters the spatiotemporal properties of spontaneous activity in developing hippocampal and retinal circuits. The data identify an essential role for copper in neuronal function and suggest broader contributions of this transition metal to cell signaling.

  12. Psychopathic traits linked to alterations in neural activity during personality judgments of self and others.

    Science.gov (United States)

    Deming, Philip; Philippi, Carissa L; Wolf, Richard C; Dargis, Monika; Kiehl, Kent A; Koenigs, Michael

    2018-01-01

    Psychopathic individuals are notorious for their grandiose sense of self-worth and disregard for the welfare of others. One potential psychological mechanism underlying these traits is the relative consideration of "self" versus "others". Here we used task-based functional magnetic resonance imaging (fMRI) to identify neural responses during personality trait judgments about oneself and a familiar other in a sample of adult male incarcerated offenders (n = 57). Neural activity was regressed on two clusters of psychopathic traits: Factor 1 (e.g., egocentricity and lack of empathy) and Factor 2 (e.g., impulsivity and irresponsibility). Contrary to our hypotheses, Factor 1 scores were not significantly related to neural activity during self- or other-judgments. However, Factor 2 traits were associated with diminished activation to self-judgments, in relation to other-judgments, in bilateral posterior cingulate cortex and right temporoparietal junction. These findings highlight cortical regions associated with a dimension of social-affective cognition that may underlie psychopathic individuals' impulsive traits.

  13. Artificial neural networks in the evaluation of the radioactive waste drums activity

    International Nuclear Information System (INIS)

    Potiens, J.R.A.J.; Hiromoto, G.

    2006-01-01

    Mathematical techniques are becoming increasingly important for solving geometry and standard identification problems. Gamma spectrometry of radioactive waste drums is a complex problem to solve. The main difficulty is detector calibration for this geometry: the waste is not homogeneously distributed inside the drums, so there are many possible combinations of activity and radionuclide position inside the drums, making the preparation of calibration standards impracticable. This work describes the development of a methodology to estimate the activity of a 200 L radioactive waste drum, as well as a mapping of the waste distribution, using an Artificial Neural Network. The neural network input data set was built from the possible combinations of detection efficiency with 10 source activities varying from 0 to 74 x 10^3 Bq. The setup consists of a 200 L drum divided into 5 layers. Ten detectors were positioned along a line parallel to the drum axis, 15 cm from its surface. A Cesium-137 radionuclide source was used. The 50 obtained efficiency values (10 detectors and 5 layers), combined with the 10 source intensities, resulted in a matrix of 100,000 rows by 15 columns, containing all possible combinations of source intensity and Cs-137 position in the 5 layers of the drum. This file was divided into 2 parts to compose the training set: input and target files. The MATLAB 7.0 neural network module was used for training. The network architecture has 10 neurons in the input layer, 18 in the hidden layer, and 5 in the output layer. The training algorithm was 'traincgb', and after 300 epochs the mean square error was 0.00108172. This methodology makes it possible to know the detector position responses for a heterogeneous distribution of radionuclides inside a 200 L waste drum; consequently, the total activity of the drum can be estimated within the limits of the trained neural network. The accuracy of the results depends
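
    The described training matrix can be assembled by combining a grid of detection efficiencies (10 detectors x 5 layers) with a set of source activities, enumerating every activity pattern over the 5 layers. In the hedged sketch below, the efficiency values are synthetic placeholders rather than measured ones, and the 15 columns (10 simulated detector responses plus 5 layer activities) mirror the matrix layout described in the abstract.

```python
# Hedged sketch: building the 100,000 x 15 training matrix from assumed efficiencies.
import numpy as np
from itertools import product

rng = np.random.default_rng(9)
efficiency = rng.uniform(1e-4, 1e-3, size=(10, 5))     # detector x layer efficiencies (placeholders)
activities = np.linspace(0.0, 74e3, 10)                # 10 source activities in Bq, as in the abstract

rows = []
for layer_activities in product(activities, repeat=5): # every activity pattern over the 5 layers
    detector_response = efficiency @ np.array(layer_activities)   # 10 simulated count rates
    rows.append(np.r_[detector_response, layer_activities])       # 10 inputs + 5 targets = 15 columns

data = np.array(rows)
print(data.shape)                                       # (100000, 15)
```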

  14. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network

    Directory of Open Access Journals (Sweden)

    Ying Yu

    2017-01-01

    Full Text Available With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest in more advanced forecasting methods has led us to develop a new forecasting approach. In this paper, the seasonal trend autoregressive integrated moving average with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving average (SARIMA) model to remove the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used in other models and compare the results. This also showed that the SA-D model achieved good predictive performance in terms of normalized mean square error, absolute percentage error, and correlation coefficient.
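
    The two-stage idea can be sketched as follows: fit a seasonal ARIMA to capture trend and seasonality, then train a neural network on the residuals and add its prediction back. In the hedged sketch below, a plain MLP stands in for the dendritic neuron model, and the monthly arrivals series is synthetic.

```python
# Hedged sketch of a SARIMA + neural-network residual hybrid (toy data, assumed orders).
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
t = np.arange(144)                                              # 12 years of monthly data
arrivals = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=2, size=144)

sarima = SARIMAX(arrivals, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
linear_part = sarima.fittedvalues
residual = arrivals - linear_part

lags = 3
X = np.array([residual[i:i + lags] for i in range(len(residual) - lags)])
y = residual[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)

hybrid_fit = linear_part[lags:] + nn.predict(X)
print("hybrid in-sample RMSE:", np.sqrt(((arrivals[lags:] - hybrid_fit) ** 2).mean()))
```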

  15. Electronic bypass of spinal lesions: activation of lower motor neurons directly driven by cortical neural signals.

    Science.gov (United States)

    Li, Yan; Alam, Monzurul; Guo, Shanshan; Ting, K H; He, Jufang

    2014-07-03

    Lower motor neurons in the spinal cord lose supraspinal inputs after complete spinal cord injury, leading to a loss of volitional control below the injury site. Extensive locomotor training with spinal cord stimulation can restore locomotion function after spinal cord injury in humans and animals. However, this locomotion is non-voluntary, meaning that subjects cannot control stimulation via their natural "intent". A recent study demonstrated an advanced system that triggers a stimulator using forelimb stepping electromyographic patterns to restore quadrupedal walking in rats with spinal cord transection. However, this indirect source of "intent" may mean that other non-stepping forelimb activities falsely trigger the spinal stimulator and thus produce unwanted hindlimb movements. We hypothesized that there are distinguishable neural activities in the primary motor cortex during treadmill walking, even after low-thoracic spinal transection in adult guinea pigs. We developed an electronic spinal bridge, called "Motolink", which detects these neural patterns and triggers a "spinal" stimulator for hindlimb movement. This hardware can be head-mounted or carried in a backpack. Neural data were processed in real-time and transmitted to a computer for analysis by an embedded processor. Off-line neural spike analysis was conducted to calculate and preset the spike threshold for the "Motolink" hardware. We identified correlated activities of primary motor cortex neurons during treadmill walking of guinea pigs with spinal cord transection. These neural activities were used to predict the kinematic states of the animals. The appropriate selection of the spike threshold value enabled the "Motolink" system to detect the neural "intent" of walking, which triggered electrical stimulation of the spinal cord and induced stepping-like hindlimb movements. We present a direct cortical "intent"-driven electronic spinal bridge to restore hindlimb locomotion after complete spinal cord injury.

  16. Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control.

    Science.gov (United States)

    Yang, Shiju; Li, Chuandong; Huang, Tingwen

    2016-03-01

    The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on knowledge of the memristor and recurrent neural networks, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using Lyapunov functional and differential inequality techniques. It is worth noting that the methods used in this paper are also applicable to fuzzy models of complex networks and general neural networks. Numerical simulations are also provided to verify the effectiveness of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    Science.gov (United States)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

    Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR) models, artificial neural networks (ANNs), and an adaptive neural-based fuzzy inference system (ANFIS). To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and adaptive neural fuzzy inference system (ANFIS) models, a comparison is made with the autoregressive (AR) models. The ANFIS model trained with the input data vector including previous inflows and cyclic terms of monthly periodicity has shown a significant improvement in forecast accuracy in comparison with the ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflows for the planning and operation of the reservoir.
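
    The "cyclic term" idea can be illustrated by encoding the month as sine/cosine terms, so that December and January lie close together in input space, and appending them to the lagged-inflow predictors. The sketch below uses a plain MLP on a synthetic inflow series, not the Sutlej data, and the lag choices are assumptions.

```python
# Hedged sketch: lagged inflows plus cyclic month terms as inputs to a small network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
months = np.arange(240)                                          # 20 years of monthly flows
inflow = 50 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(scale=5, size=240)

lag1, lag2 = inflow[1:-1], inflow[:-2]                           # Q(t-1) and Q(t-2)
m = months[2:] % 12
cyclic = np.column_stack([np.sin(2 * np.pi * m / 12), np.cos(2 * np.pi * m / 12)])
X = np.column_stack([lag1, lag2, cyclic])                        # lagged flows + cyclic terms
y = inflow[2:]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0))
model.fit(X[:200], y[:200])
rmse = np.sqrt(((model.predict(X[200:]) - y[200:]) ** 2).mean())
print("test RMSE with cyclic terms:", rmse)
```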

  18. A Wavelet Neural Network Optimal Control Model for Traffic-Flow Prediction in Intelligent Transport Systems

    Science.gov (United States)

    Huang, Darong; Bai, Xing-Rong

    Based on wavelet transform and neural network theory, a traffic-flow prediction model for use in the optimal control of intelligent transport systems is constructed. First, we extracted the scale and wavelet coefficients from the online measured raw traffic-flow data via the wavelet transform. Second, an artificial neural network model for traffic-flow prediction was constructed and trained using the coefficient sequences as inputs and the raw data as outputs. At the same time, we designed the operating principle of the optimal control system for the traffic-flow forecasting model, its network topology, and its data transmission model. Finally, a simulated example showed that the technique is effective and accurate. The theoretical results indicated that the wavelet neural network prediction model and algorithms have broad prospects for practical application.
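
    The front end can be sketched by decomposing each sliding window of the traffic-flow series with a discrete wavelet transform and feeding the resulting scale and wavelet coefficients to a network that predicts the next flow value. The series below is synthetic, PyWavelets ("pywt") is assumed to be available, and the wavelet family and window length are illustrative choices rather than the paper's.

```python
# Hedged sketch: wavelet coefficients of a sliding window as inputs to a traffic-flow predictor.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
t = np.arange(2000)
flow = 200 + 80 * np.sin(2 * np.pi * t / 288) + rng.normal(scale=10, size=2000)  # toy flow counts

window = 32
X, y = [], []
for i in range(window, len(flow) - 1):
    coeffs = pywt.wavedec(flow[i - window:i], "db4", level=2)    # approximation + detail coefficients
    X.append(np.concatenate(coeffs))
    y.append(flow[i])
X, y = np.array(X), np.array(y)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
net.fit(X[:1500], y[:1500])
rmse = np.sqrt(((net.predict(X[1500:]) - y[1500:]) ** 2).mean())
print("one-step prediction RMSE:", rmse)
```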

  19. Artificial neural network model of pork meat cubes osmotic dehydratation

    Directory of Open Access Journals (Sweden)

    Pezo Lato L.

    2013-01-01

    Full Text Available Mass transfer of pork meat cubes (M. triceps brachii), shaped as 1x1x1 cm, during osmotic dehydration (OD) under atmospheric pressure was investigated in this paper. The effects of different parameters, such as the concentration of sugar beet molasses (60-80%, w/w), temperature (20-50ºC), and immersion time (1-5 h), on water loss (WL), solid gain (SG), final dry matter content (DM), and water activity (aw) were investigated using experimental results. Five artificial neural network (ANN) models were developed for the prediction of WL, SG, DM, and aw in the OD of pork meat cubes. These models were able to predict the process outputs with coefficients of determination, r2, of 0.990 for SG, 0.985 for WL, 0.986 for aw, and 0.992 for DM compared to experimental measurements. The wide range of processing variables considered in the formulation of these models, and their easy implementation in a spreadsheet, make them very useful and practical for process design and control.

  20. Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast...... on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models...... allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show...