WorldWideScience

Sample records for neural activity model

  1. Associative memory model with spontaneous neural activity

    Science.gov (United States)

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2012-05-01

    We propose a novel associative memory model wherein the neural activity without an input (i.e., spontaneous activity) is modified by an input to generate a target response that is memorized for recall upon the same input. Suitable design of synaptic connections enables the model to memorize input/output (I/O) mappings equaling 70% of the total number of neurons, where the evoked activity distinguishes a target pattern from others. Spontaneous neural activity without an input shows chaotic dynamics but keeps some similarity with evoked activities, as reported in recent experimental studies.

  2. Active Neural Localization

    OpenAIRE

    Chaplot, Devendra Singh; Parisotto, Emilio; Salakhutdinov, Ruslan

    2018-01-01

    Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose "Active Neural Localizer", a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of tradition...
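
    The filtering baseline the authors contrast with can be illustrated with a minimal histogram (grid) Bayes filter; the sketch below is not the proposed differentiable localizer, and the one-dimensional map, motion commands, and observations are hypothetical.

        import numpy as np

        def bayes_filter_step(belief, motion, obs, world, p_hit=0.8, p_miss=0.2):
            """One predict/update cycle of a 1-D histogram (grid) Bayes filter.

            belief : probability over grid cells
            motion : integer shift applied to the agent (circular world)
            obs    : observed landmark label
            world  : landmark label of each grid cell
            """
            # Predict: shift the belief according to the motion model.
            predicted = np.roll(belief, motion)
            # Update: weight cells whose landmark matches the observation.
            likelihood = np.where(world == obs, p_hit, p_miss)
            posterior = predicted * likelihood
            return posterior / posterior.sum()

        world = np.array([0, 1, 0, 0, 1, 0, 0, 0])    # hypothetical landmark map
        belief = np.ones(len(world)) / len(world)     # uniform prior over positions
        for motion, obs in [(1, 1), (1, 0), (1, 0)]:  # hypothetical action/observation log
            belief = bayes_filter_step(belief, motion, obs, world)
        print(belief.round(3))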

  3. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral nervous systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement for building a fully functional circuit. However, a number of recent experimental results show that neuronal electrical activity plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Computational modeling of neural activities for statistical inference

    CERN Document Server

    Kolossa, Antonio

    2016-01-01

    This authored monograph supplies empirical evidence for the Bayesian brain hypothesis by modeling event-related potentials (ERP) of the human electroencephalogram (EEG) during successive trials in cognitive tasks. The employed observer models are used to compute probability distributions over observable events and hidden states, depending on which are present in the respective tasks. Bayesian model selection is then used to choose the model which best explains the ERP amplitude fluctuations. Thus, this book constitutes a decisive step towards a better understanding of the neural coding and computing of probabilities following Bayesian rules. The target audience primarily comprises research experts in the field of computational neurosciences, but the book may also be beneficial for graduate students who want to specialize in this field.
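
    As a rough illustration of Bayesian model selection over competing observer models, the following sketch compares two candidate single-trial regressors of ERP amplitude by their Bayesian information criterion (BIC), which approximates the log model evidence; the data and regressor names are synthetic and not taken from the book.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical single-trial ERP amplitudes and two candidate "surprise" regressors.
        n_trials = 200
        surprise_a = rng.random(n_trials)                     # e.g., predictive-surprise regressor
        surprise_b = rng.random(n_trials)                     # e.g., Bayesian-surprise regressor
        erp = 2.0 * surprise_a + rng.normal(0, 1, n_trials)   # data generated from model A

        def bic_linear(y, X):
            """BIC of an ordinary least-squares fit; lower is better.
            For large n, BIC approximates -2 * log model evidence."""
            X = np.column_stack([np.ones_like(y), X])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            n, k = len(y), X.shape[1]
            sigma2 = resid @ resid / n
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            return -2 * loglik + k * np.log(n)

        print("BIC model A:", bic_linear(erp, surprise_a))
        print("BIC model B:", bic_linear(erp, surprise_b))    # model A should win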

  5. Predicting Neural Activity Patterns Associated with Sentences Using a Neurobiologically Motivated Model of Semantic Representation.

    Science.gov (United States)

    Anderson, Andrew James; Binder, Jeffrey R; Fernandino, Leonardo; Humphries, Colin J; Conant, Lisa L; Aguilar, Mario; Wang, Xixi; Doko, Donias; Raizada, Rajeev D S

    2017-09-01

    We introduce an approach that predicts neural representations of word meanings contained in sentences and then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
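
    A minimal sketch of the regression step described here, under the assumption of synthetic data: attribute-wise activation maps are estimated by (ridge-regularized) multiple regression from word-level patterns, and patterns for new words are synthesized and averaged to predict a sentence. Dimensions and variable names are illustrative, not those of the study.

        import numpy as np

        rng = np.random.default_rng(1)
        n_words, n_attributes, n_voxels = 120, 8, 500

        # Hypothetical word-by-attribute semantic model and word-level activation patterns.
        attributes = rng.random((n_words, n_attributes))
        true_maps = rng.normal(size=(n_attributes, n_voxels))   # unknown in practice
        word_patterns = attributes @ true_maps + rng.normal(0, 0.5, (n_words, n_voxels))

        # Multiple regression: estimate one voxel-wise activation map per attribute.
        ridge = 1e-2 * np.eye(n_attributes)                     # small ridge term for stability
        attribute_maps = np.linalg.solve(attributes.T @ attributes + ridge,
                                         attributes.T @ word_patterns)

        # Synthesize predicted patterns for the words of a new sentence,
        # then average them to predict the sentence-level pattern.
        new_words = rng.random((3, n_attributes))               # attribute vectors of three new words
        word_predictions = new_words @ attribute_maps
        sentence_prediction = word_predictions.mean(axis=0)
        print(sentence_prediction.shape)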

  6. Model Integrating Fuzzy Argument with Neural Network Enhancing the Performance of Active Queue Management

    Directory of Open Access Journals (Sweden)

    Nguyen Kim Quoc

    2015-08-01

    Bottleneck control by active queue management mechanisms at network nodes is essential. In recent years, some researchers have used fuzzy argument to improve active queue management mechanisms and enhance network performance. However, approaches based on a fuzzy controller depend heavily on expert knowledge, and their parameters cannot be updated according to changes in the network, so the effectiveness of these mechanisms is limited. Therefore, we propose a model combining a fuzzy controller with a neural network (FNN) to overcome the limitations above. Training the neural network finds optimal parameters so that the adaptive fuzzy controller responds well to changes in the network. This improves the operational efficiency of active queue management mechanisms at network nodes.

  7. Modeling long-term human activeness using recurrent neural networks for biometric data.

    Science.gov (United States)

    Kim, Zae Myung; Oh, Hyungrai; Kim, Han-Gyu; Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin

    2017-05-18

    With the invention of fitness trackers, it has become possible to continuously monitor a user's biometric data such as heart rate, number of footsteps taken, and amount of calories burned. This paper names the time series of these three types of biometric data the user's "activeness", and investigates the feasibility of modeling and predicting the long-term activeness of the user. The dataset used in this study consisted of several months of biometric time-series data gathered independently by seven users. Four recurrent neural network (RNN) architectures, as well as a deep neural network and a simple regression model, were proposed to investigate the performance of predicting the activeness of the user under various length-related hyper-parameter settings. In addition, the learned model was tested to predict the time period when the user's activeness falls below a certain threshold. A preliminary experimental result shows that each type of activeness data exhibited a short-term autocorrelation; and among the three types of data, the consumed calories and the number of footsteps were positively correlated, while the heart rate data showed almost no correlation with either of them. It is probably due to this characteristic of the dataset that although the RNN models produced the best results on modeling the user's activeness, the difference was marginal, and other baseline models, especially the linear regression model, performed quite admirably as well. Further experimental results show that it is feasible to predict a user's future activeness with precision; for example, a trained RNN model could predict, with a precision of 84%, when the user would be less active within the next hour, given the latest 15 min of activeness data. This paper defines and investigates the notion of a user's "activeness", and shows that forecasting the long-term activeness of the user is indeed possible. Such information can be utilized by a health-related application to proactively
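
    A minimal sketch of one way to set up such a forecaster, assuming PyTorch and a synthetic three-channel activeness series (heart rate, steps, calories); the architecture and hyper-parameters are illustrative and not the four RNN variants evaluated in the paper.

        import torch
        import torch.nn as nn

        torch.manual_seed(0)

        # Hypothetical activeness series: 3 channels (heart rate, steps, calories), one sample per minute.
        T = 2000
        t = torch.arange(T, dtype=torch.float32)
        series = torch.stack([torch.sin(t / 60), torch.sin(t / 60 + 1), torch.rand(T)], dim=1)

        def windows(x, length=15, horizon=1):
            """Slice the series into (past window, next value) training pairs."""
            xs = torch.stack([x[i:i + length] for i in range(len(x) - length - horizon)])
            ys = torch.stack([x[i + length + horizon - 1] for i in range(len(x) - length - horizon)])
            return xs, ys

        class Forecaster(nn.Module):
            def __init__(self, channels=3, hidden=32):
                super().__init__()
                self.rnn = nn.GRU(channels, hidden, batch_first=True)
                self.head = nn.Linear(hidden, channels)
            def forward(self, x):
                out, _ = self.rnn(x)
                return self.head(out[:, -1])          # predict the next time step

        xs, ys = windows(series)
        model = Forecaster()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for epoch in range(5):                         # short demonstration training loop
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(xs), ys)
            loss.backward()
            opt.step()
            print(epoch, float(loss))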

  8. Model for a flexible motor memory based on a self-active recurrent neural network.

    Science.gov (United States)

    Boström, Kim Joris; Wagner, Heiko; Prieske, Markus; de Lussanet, Marc

    2013-10-01

    Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model that is focused on the aspects of a flexible motor memory for the storage of elementary movement patterns in the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it is capable of continuously modulating the stored patterns. The modulation consists of an approximately linear inter- and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", which amounts to the permanent shifting of computational load from higher- to lower-level neural structures and might help to explain why humans are able to execute learned skills in a fluent and flexible manner without the need for attention to the details of the movement. Copyright © 2013 Elsevier B.V. All rights reserved.
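
    The reservoir computing approach referred to here can be sketched with a minimal echo state network in which only the linear readout is trained; the reservoir size, spectral radius, input signal, and target pattern below are illustrative assumptions, not the paper's trained motor memory.

        import numpy as np

        rng = np.random.default_rng(2)
        n_res, n_in, T = 300, 1, 1000

        # Random, fixed reservoir weights, rescaled to a target spectral radius.
        W = rng.normal(size=(n_res, n_res))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))
        W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

        u = rng.uniform(-1, 1, (T, n_in))                 # hypothetical command input
        target = np.sin(np.cumsum(u, axis=0) / 5.0)       # hypothetical movement pattern

        # Collect reservoir states driven by the input.
        x = np.zeros(n_res)
        states = np.zeros((T, n_res))
        for t in range(T):
            x = np.tanh(W @ x + W_in @ u[t])
            states[t] = x

        # Train only the linear readout (ridge regression), as in reservoir computing.
        ridge = 1e-6 * np.eye(n_res)
        W_out = np.linalg.solve(states.T @ states + ridge, states.T @ target)
        print("training MSE:", float(np.mean((states @ W_out - target) ** 2)))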

  9. Emergence of gamma motor activity in an artificial neural network model of the corticospinal system.

    Science.gov (United States)

    Grandjean, Bernard; Maier, Marc A

    2017-02-01

    Muscle spindle discharge during active movement is a function of mechanical and neural parameters. Muscle length changes (and their derivatives) represent its primary mechanical component, and fusimotor drive its neural component. However, neither the action nor the function of fusimotor drive, and in particular of γ-drive, has been clearly established, since γ-motor activity during voluntary, non-locomotor movements remains largely unknown. Here, using a computational approach, we explored whether γ-drive emerges in an artificial neural network model of the corticospinal system linked to a biomechanical antagonist wrist simulator. The wrist simulator included length-sensitive and γ-drive-dependent type Ia and type II muscle spindle activity. Network activity and connectivity were derived by a gradient descent algorithm to generate reciprocal, known target α-motor unit activity during wrist flexion-extension (F/E) movements. Two tasks were simulated: an alternating F/E task and a slow F/E tracking task. Emergence of γ-motor activity in the alternating F/E network was a function of α-motor unit drive: if muscle afferent (together with supraspinal) input was required for driving α-motor units, then γ-drive emerged in the form of α-γ coactivation, as predicted by empirical studies. In the slow F/E tracking network, γ-drive emerged in the form of α-γ dissociation and provided critical, bidirectional muscle afferent activity to the cortical network, containing known bidirectional target units. The model thus demonstrates the complementary aspects of spindle output and hence γ-drive: (i) muscle spindle activity as a driving force of α-motor unit activity, and (ii) afferent activity providing continuous sensory information, both of which crucially depend on γ-drive.

  10. Analysis of Oscillatory Neural Activity in Series Network Models of Parkinson's Disease During Deep Brain Stimulation.

    Science.gov (United States)

    Davidson, Clare M; de Paor, Annraoi M; Cagnan, Hayriye; Lowery, Madeleine M

    2016-01-01

    Parkinson's disease is a progressive, neurodegenerative disorder characterized by hallmark motor symptoms. It is associated with pathological, oscillatory neural activity in the basal ganglia. Deep brain stimulation (DBS) is often successfully used to treat medically refractory Parkinson's disease. However, the selection of stimulation parameters is based on qualitative assessment of the patient, which can result in a lengthy tuning period and a suboptimal choice of parameters. This study explores fourth-order, control theory-based models of oscillatory activity in the basal ganglia. Describing function analysis is applied to examine possible mechanisms for the generation of oscillations in interacting nuclei and to investigate the suppression of oscillations with high-frequency stimulation. The theoretical results for the suppression of the oscillatory activity obtained using both the fourth-order model and a previously described second-order model are optimized to fit clinically recorded local field potential data obtained from Parkinsonian patients with implanted DBS. Close agreement between the simulated and recorded power of oscillations for a range of stimulation amplitudes is observed (R² = 0.69-0.99). The results suggest that the behavior of the system and the suppression of pathological neural oscillations with DBS are well described by the macroscopic models presented. The results also demonstrate that in this instance, a second-order model is sufficient to model the clinical data, without the need for added complexity. Describing the system behavior with computationally efficient models could aid in the identification of optimal stimulation parameters for patients in a clinical environment.

  11. Artificial Neural Networks for Reducing Computational Effort in Active Truncated Model Testing of Mooring Lines

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker

    2015-01-01

    simultaneously, this method is very demanding in terms of numerical efficiency and computational power. Therefore, this method has not yet proved to be feasible. It has recently been shown how a hybrid method combining classical numerical models and artificial neural networks (ANN) can provide a dramatic...... prior to the experiment and with a properly trained ANN it is no problem to obtain accurate simulations much faster than real time-without any need for large computational capacity. The present study demonstrates how this hybrid method can be applied to the active truncated experiments yielding a system...

  12. Modeling and preparation of activated carbon for methane storage II. Neural network modeling and experimental studies of the activated carbon preparation

    International Nuclear Information System (INIS)

    Namvar-Asl, Mahnaz; Soltanieh, Mohammad; Rashidi, Alimorad

    2008-01-01

    This study describes the preparation of activated carbon (AC) for methane storage. Because a model was needed to correlate the effective preparation parameters with the characteristic parameters of the activated carbon, such a model was developed using neural networks. In a previous study [Namvar-Asl M, Soltanieh M, Rashidi A, Irandoukht A. Modeling and preparation of activated carbon for methane storage: (I) modeling of activated carbon characteristics with neural networks and response surface method. Proceedings of CESEP07, Krakow, Poland; 2007.], the model was designed with the MATLAB toolboxes providing the best response for the correlation of the characteristic parameters and the methane uptake of the activated carbon. Using this model, the characteristics of the activated carbon were determined for a target methane uptake. After the determination of the characteristics, the model demonstrated in this work guided the selection of the effective AC preparation parameters. According to the modeling results, some samples were prepared and their methane storage capacity was measured. The results were compared with those of a target methane uptake (a specified amount of methane storage). Among the designed models, one showed a methane storage capacity of 180 v/v. It was finally found that neural network modeling for the assessment of the efficient AC preparation parameters was financially feasible, with respect to the determined methane storage capacity. This study could be useful for the development of Adsorbed Natural Gas (ANG) technology

  13. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities is discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also explores questions related to the biological underpinning for models of neural networks, considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities, and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  14. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  15. A neural model for temporal order judgments and their active recalibration: a common mechanism for space and time?

    Directory of Open Access Journals (Sweden)

    Mingbo eCai

    2012-11-01

    When observers experience a constant delay between their motor actions and sensory feedback, their perception of the temporal order between actions and sensations adapts (Stetson et al., 2006a). We present here a novel neural model that can explain temporal order judgments (TOJs) and their recalibration. Our model employs three ubiquitous features of neural systems: (1) information pooling, (2) opponent processing, and (3) synaptic scaling. Specifically, the model proposes that different populations of neurons encode different delays between motor-sensory events, the outputs of these populations feed into rivaling neural populations (encoding "before" and "after"), and the activity difference between these populations determines the perceptual judgment. As a consequence of synaptic scaling of input weights, motor acts which are consistently followed by delayed sensory feedback will cause the network to recalibrate its point of subjective simultaneity. The structure of our model raises the possibility that recalibration of TOJs is a temporal analogue to the motion aftereffect. In other words, identical neural mechanisms may be used to make perceptual determinations about both space and time. Our model captures behavioral recalibration results for different numbers of adapting trials and different adapting delays. In line with predictions of the model, we additionally demonstrate that temporal recalibration can last through time, in analogy to storage of the motion aftereffect.

  16. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state still corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
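
    A minimal rate-based ring network with a slow adaptation variable gives a feel for how SFA shapes the activity bump; the sketch below is an illustrative simulation, not the author's analytical model, and all parameter values are assumptions.

        import numpy as np

        N = 100                                   # neurons on the ring
        theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

        # Translation-invariant recurrent kernel: local excitation, broad inhibition.
        W = (np.cos(theta[:, None] - theta[None, :]) - 0.3) / N

        def simulate(stim_center, T=2000, dt=0.1, tau_r=10.0, tau_a=200.0, g_a=0.5):
            """Rate dynamics with spike-frequency adaptation (SFA):
            dr/dt = (-r + f(W r + I - g_a * a)) / tau_r,  da/dt = (-a + r) / tau_a
            """
            r = np.zeros(N)
            a = np.zeros(N)                       # adaptation variable
            I = 1.5 * np.exp(np.cos(theta - stim_center))   # bump-shaped external input
            for _ in range(T):
                drive = W @ r + I - g_a * a
                r += dt * (-r + np.maximum(drive, 0.0)) / tau_r
                a += dt * (-a + r) / tau_a
            return r

        with_sfa = simulate(0.0, g_a=0.5)
        without_sfa = simulate(0.0, g_a=0.0)
        print("peak with SFA:", with_sfa.max(), "peak without SFA:", without_sfa.max())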

  17. A cardiac electrical activity model based on a cellular automata system in comparison with neural network model.

    Science.gov (United States)

    Khan, Muhammad Sadiq Ali; Yousuf, Sidrah

    2016-03-01

    Cardiac electrical activity is distributed across the three dimensions of cardiac tissue (myocardium) and evolves over time. Indicators of heart disease can occur randomly at any time of day, so heart rate, conduction, and each electrical event during the cardiac cycle should be monitored non-invasively for the assessment of regular ("action potential") and irregular ("arrhythmia") rhythms. Many heart diseases can easily be examined through automata models such as cellular automata. This paper deals with the different states of cardiac rhythm using cellular automata, in comparison with a neural network model, and provides a fast and highly effective simulation of the contraction of cardiac muscle in the atria resulting from the genesis of an electrical spark or wave. The formulated model, named "States of Automaton Proposed Model for CEA (Cardiac Electrical Activity)" and built using cellular automata methodology, represents the three conduction states of cardiac tissue: (i) resting (relaxed and excitable), (ii) ARP (excited but in the absolutely refractory phase, i.e., excited but not able to excite neighboring cells), and (iii) RRP (excited but in the relatively refractory phase, i.e., excited and able to excite neighboring cells). The results indicate efficient modeling of the action potential during the pumping of blood in the cardiac cycle, with little computational burden.
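
    The three conduction states can be illustrated with a minimal Greenberg-Hastings-style excitable-medium automaton; the transition rules and grid below are simplifying assumptions, not the authors' exact model.

        import numpy as np

        RESTING, RRP, ARP = 0, 1, 2   # labels follow the abstract; the rules are a simplification

        def step(grid):
            """One update of a Greenberg-Hastings-style excitable-medium automaton.
            Resting cells fire if any 4-neighbor is in the excited (RRP) state;
            excited cells become absolutely refractory (ARP); ARP cells recover."""
            excited = (grid == RRP)
            neighbor_excited = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
                                np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
            new = np.full_like(grid, RESTING)
            new[(grid == RESTING) & neighbor_excited] = RRP
            new[grid == RRP] = ARP
            # ARP cells return to RESTING (already the default in `new`)
            return new

        grid = np.zeros((50, 50), dtype=int)
        grid[25, 25] = RRP                        # electrical "spark" initiating a wave
        for t in range(5):
            grid = step(grid)
        print("cells excited after 5 steps:", int((grid == RRP).sum()))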

  18. Altered behavior and neural activity in conspecific cagemates co-housed with mouse models of brain disorders.

    Science.gov (United States)

    Yang, Hyunwoo; Jung, Seungmoon; Seo, Jinsoo; Khalid, Arshi; Yoo, Jung-Seok; Park, Jihyun; Kim, Soyun; Moon, Jangsup; Lee, Soon-Tae; Jung, Keun-Hwa; Chu, Kon; Lee, Sang Kun; Jeon, Daejong

    2016-09-01

    The psychosocial environment is one of the major contributors of social stress. Family members or caregivers who consistently communicate with individuals with brain disorders are considered at risk for physical and mental health deterioration, possibly leading to mental disorders. However, the underlying neural mechanisms of this phenomenon remain poorly understood. To address this, we developed a social stress paradigm in which a mouse model of epilepsy or depression was housed long-term (>4 weeks) with normal conspecifics. We characterized the behavioral phenotypes and electrophysiologically investigated the neural activity of conspecific cagemate mice. The cagemates exhibited deficits in behavioral tasks assessing anxiety, locomotion, learning/memory, and depression-like behavior. Furthermore, they showed severe social impairment in social behavioral tasks involving social interaction or aggression. Strikingly, behavioral dysfunction remained in the cagemates 4 weeks following co-housing cessation with the mouse models. In an electrophysiological study, the cagemates showed an increased number of spikes in medial prefrontal cortex (mPFC) neurons. Our results demonstrate that conspecifics co-housed with mouse models of brain disorders develop chronic behavioral dysfunctions, and suggest a possible association between abnormal mPFC neural activity and their behavioral pathogenesis. These findings contribute to the understanding of the psychosocial and psychiatric symptoms frequently present in families or caregivers of patients with brain disorders. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Improving quantitative structure-activity relationship models using Artificial Neural Networks trained with dropout.

    Science.gov (United States)

    Mendenhall, Jeffrey; Meiler, Jens

    2016-02-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition and a large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both the enrichment false positive rate and the log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46% over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
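
    A minimal sketch of the dropout technique benchmarked here, assuming inverted dropout applied to a ReLU hidden layer on a synthetic descriptor matrix; it is not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(3)

        def dense(x, W, b):
            return np.maximum(x @ W + b, 0.0)          # ReLU hidden layer

        def dropout(x, rate, training, rng):
            """Inverted dropout: randomly zero a fraction `rate` of units during
            training and rescale the survivors, so no change is needed at test time."""
            if not training or rate == 0.0:
                return x
            mask = rng.random(x.shape) >= rate
            return x * mask / (1.0 - rate)

        # Hypothetical descriptor matrix (e.g., autocorrelation descriptors) for a mini-batch.
        descriptors = rng.random((32, 128))
        W1, b1 = rng.normal(0, 0.1, (128, 64)), np.zeros(64)
        W2, b2 = rng.normal(0, 0.1, (64, 1)), np.zeros(1)

        hidden = dropout(dense(descriptors, W1, b1), rate=0.25, training=True, rng=rng)
        scores = hidden @ W2 + b2                      # predicted activity scores
        print(scores.shape)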

  20. Characterization of K-complexes and slow wave activity in a neural mass model.

    Directory of Open Access Journals (Sweden)

    Arne Weigenand

    2014-11-01

    NREM sleep is characterized by two hallmarks, namely K-complexes (KCs) during sleep stage N2 and cortical slow oscillations (SOs) during sleep stage N3. While the underlying dynamics on the neuronal level is well known and can be easily measured, the resulting behavior on the macroscopic population level remains unclear. On the basis of an extended neural mass model of the cortex, we suggest a new interpretation of the mechanisms responsible for the generation of KCs and SOs. As the cortex transitions from wake to deep sleep, in our model it approaches an oscillatory regime via a Hopf bifurcation. Importantly, there is a canard phenomenon arising from a homoclinic bifurcation, whose orbit determines the shape of large amplitude SOs. A KC corresponds to a single excursion along the homoclinic orbit, while SOs are noise-driven oscillations around a stable focus. The model generates both time series and spectra that strikingly resemble real electroencephalogram data and points out possible differences between the different stages of natural sleep.

  1. Modeling the dynamics of human brain activity with recurrent neural networks

    NARCIS (Netherlands)

    Güçlü, U.; Gerven, M.A.J. van

    2017-01-01

    Encoding models are used for predicting brain activity in response to sensory stimuli with the objective of elucidating how sensory information is represented in the brain. Encoding models typically comprise a nonlinear transformation of stimuli to features (feature model) and a linear convolution

  2. Neural network modelling of antifungal activity of a series of oxazole derivatives based on in silico pharmacokinetic parameters

    Directory of Open Access Journals (Sweden)

    Kovačević Strahinja Z.

    2013-01-01

    In the present paper, the antifungal activity of a series of benzoxazole and oxazolo[4,5-b]pyridine derivatives was evaluated against Candida albicans by using quantitative structure-activity relationship chemometric methodology with an artificial neural network (ANN) regression approach. In vitro antifungal activity of the tested compounds was presented by the minimum inhibitory concentration expressed as log(1/cMIC). In silico pharmacokinetic parameters related to absorption, distribution, metabolism and excretion (ADME) were calculated for all studied compounds by using PreADMET software. A feedforward back-propagation ANN with a gradient descent learning algorithm was applied for modelling the relationship between ADME descriptors (blood-brain barrier penetration, plasma protein binding, Madin-Darby cell permeability and Caco-2 cell permeability) and experimental log(1/cMIC) values. A 4-6-1 ANN was developed with optimum momentum and learning rates of 0.3 and 0.05, respectively. An excellent correlation between experimental antifungal activity and values predicted by the ANN was obtained, with a correlation coefficient of 0.9536. [Project of the Ministry of Science of the Republic of Serbia, No. 172012 and No. 172014]
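
    A minimal sketch of a 4-6-1 feedforward network trained by back-propagation with the momentum (0.3) and learning rate (0.05) quoted above; the descriptor and activity data are synthetic stand-ins for the ADME descriptors and log(1/cMIC) values.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical data: 4 ADME descriptors per compound, one activity value per compound.
        X = rng.random((30, 4))
        y = (X @ np.array([0.5, -0.3, 0.8, 0.1]))[:, None] + 0.05 * rng.normal(size=(30, 1))

        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        # 4-6-1 architecture, as in the abstract; weights and training data are synthetic.
        W1, b1 = rng.normal(0, 0.5, (4, 6)), np.zeros(6)
        W2, b2 = rng.normal(0, 0.5, (6, 1)), np.zeros(1)
        lr, momentum = 0.05, 0.3
        vW1 = np.zeros_like(W1)
        vW2 = np.zeros_like(W2)

        for epoch in range(2000):
            h = sigmoid(X @ W1 + b1)              # hidden layer
            out = h @ W2 + b2                     # linear output
            err = out - y
            # Back-propagation of the mean-squared error.
            gW2 = h.T @ err / len(X)
            gh = err @ W2.T * h * (1 - h)
            gW1 = X.T @ gh / len(X)
            vW2 = momentum * vW2 - lr * gW2
            W2 += vW2
            vW1 = momentum * vW1 - lr * gW1
            W1 += vW1
            b2 -= lr * err.mean(axis=0)
            b1 -= lr * gh.mean(axis=0)

        print("final MSE:", float((err ** 2).mean()))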

  3. Spike Neural Models Part II: Abstract Neural Models

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2018-02-01

    Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNNs), though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use less computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model, which is not biologically realistic but does quickly and easily integrate input to produce spikes. Izhikevich's model is based on Hodgkin-Huxley's model but simplified such that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in an SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating an SNN.
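
    The one-equation LIF model described in the tutorial can be sketched as follows, using Euler integration; the membrane parameters and input current are illustrative assumptions.

        import numpy as np

        def simulate_lif(I, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-70.0,
                         v_thresh=-50.0, R=10.0):
            """Leaky integrate-and-fire neuron, integrated with the Euler method.
            Membrane equation: tau * dV/dt = -(V - v_rest) + R * I(t)
            A spike is emitted and V is reset whenever V crosses v_thresh."""
            v = v_rest
            spikes, trace = [], []
            for t, i_t in enumerate(I):
                v += dt * (-(v - v_rest) + R * i_t) / tau
                if v >= v_thresh:
                    spikes.append(t * dt)
                    v = v_reset
                trace.append(v)
            return np.array(trace), spikes

        current = np.concatenate([np.zeros(200), 2.0 * np.ones(800)])  # hypothetical step input
        trace, spikes = simulate_lif(current)
        print("number of spikes:", len(spikes))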

  4. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  5. Plasmodium berghei ANKA: erythropoietin activates neural stem cells in an experimental cerebral malaria model

    DEFF Research Database (Denmark)

    Core, Andrew; Hempel, Casper; Kurtzhals, Jørgen A L

    2011-01-01

    investigated if EPO's neuroprotective effects include activation of endogenous neural stem cells (NSC). By using immunohistochemical markers of different NSC maturation stages, we show that EPO increased the number of nestin(+) cells in the dentate gyrus and in the sub-ventricular zone of the lateral...

  6. Cyclosporin A-Mediated Activation of Endogenous Neural Precursor Cells Promotes Cognitive Recovery in a Mouse Model of Stroke

    Directory of Open Access Journals (Sweden)

    Labeeba Nusrat

    2018-04-01

    Cognitive dysfunction following stroke significantly impacts quality of life and functional independence; yet, despite the prevalence and negative impact of cognitive deficits, post-stroke interventions almost exclusively target motor impairments. As a result, current treatment options are limited in their ability to promote post-stroke cognitive recovery. Cyclosporin A (CsA) has been previously shown to improve post-stroke functional recovery of sensorimotor deficits. Interestingly, CsA is a commonly used immunosuppressant and also acts directly on endogenous neural precursor cells (NPCs) in the neurogenic regions of the brain (the periventricular region and the dentate gyrus). The immunosuppressive and NPC activation effects are mediated by calcineurin-dependent and calcineurin-independent pathways, respectively. To develop a cognitive stroke model, focal bilateral lesions were induced in the medial prefrontal cortex (mPFC) of adult mice using endothelin-1. First, we characterized this stroke model in the acute and chronic phase, using problem-solving and memory-based cognitive tests. mPFC stroke resulted in early and persistent deficits in short-term memory, problem-solving and behavioral flexibility, without affecting anxiety. Second, we investigated the effects of acute and chronic CsA treatment on NPC activation, neuroprotection, and tissue damage. Acute CsA administration post-stroke increased the size of the NPC pool. There was no effect on neurodegeneration or lesion volume. Lastly, we looked at the effects of chronic CsA treatment on cognitive recovery. Long-term CsA administration promoted NPC migration toward the lesion site and rescued cognitive deficits to control levels. This study demonstrates that CsA treatment activates the NPC population, promotes migration of NPCs to the site of injury, and leads to improved cognitive recovery following long-term treatment.

  7. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi eTateno

    2013-11-01

    To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.

  8. Dynamics of a modified Hindmarsh-Rose neural model with random perturbations: Moment analysis and firing activities

    Science.gov (United States)

    Mondal, Argha; Upadhyay, Ranjit Kumar

    2017-11-01

    In this paper, an attempt has been made to understand the activity of the mean membrane voltage and subsidiary system variables through moment equations (i.e., mean, variance and covariances) under a noisy environment. We consider a biophysically plausible modified Hindmarsh-Rose (H-R) neural system injected with an applied current and exhibiting spiking-bursting phenomena. The effects of the predominant parameters on the dynamical behavior of the modified H-R system are investigated. Numerically, it exhibits period-doubling, period-halving bifurcation and chaos phenomena. Further, the nonlinear system has been analyzed for the first and second order moments with additive stochastic perturbations. The deterministic system has been solved using a fourth-order Runge-Kutta method and the noisy system by Euler's scheme. It has been demonstrated that the firing properties of neurons that evoke an action potential in a certain parameter space of the large exact system can be estimated using an approximated model. Strong stimulation can cause an increase or decrease in the firing patterns. For a fixed set of parameter values, the firing behavior and dynamical differences of the collective variables of the large, exact and approximated systems are investigated.
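
    A minimal simulation of the standard three-variable Hindmarsh-Rose model with additive noise, integrated by the Euler-Maruyama scheme; the paper's specific modification and moment analysis are not reproduced, and all parameter values are common textbook choices rather than those of the study.

        import numpy as np

        def hindmarsh_rose(T=20000, dt=0.01, I=3.25, sigma=0.01, seed=0):
            """Standard 3-variable Hindmarsh-Rose model with additive white noise,
            integrated with the Euler-Maruyama scheme (Euler step + sqrt(dt) noise)."""
            a, b, c, d, s, r, x_rest = 1.0, 3.0, 1.0, 5.0, 4.0, 0.006, -1.6
            rng = np.random.default_rng(seed)
            x, y, z = -1.0, 0.0, 0.0
            xs = np.empty(T)
            for t in range(T):
                dx = y - a * x**3 + b * x**2 - z + I
                dy = c - d * x**2 - y
                dz = r * (s * (x - x_rest) - z)
                x += dt * dx + sigma * np.sqrt(dt) * rng.normal()   # noise on the voltage-like variable
                y += dt * dy
                z += dt * dz
                xs[t] = x
            return xs

        voltage = hindmarsh_rose()
        print("sample mean and variance of x:", voltage.mean(), voltage.var())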

  9. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  10. Activating receptor NKG2D targets RAE-1-expressing allogeneic neural precursor cells in a viral model of multiple sclerosis.

    Science.gov (United States)

    Weinger, Jason G; Plaisted, Warren C; Maciejewski, Sonia M; Lanier, Lewis L; Walsh, Craig M; Lane, Thomas E

    2014-10-01

    Transplantation of major histocompatibility complex-mismatched mouse neural precursor cells (NPCs) into mice persistently infected with the neurotropic JHM strain of mouse hepatitis virus (JHMV) results in rapid rejection that is mediated, in part, by T cells. However, the contribution of the innate immune response to allograft rejection in a model of viral-induced neurological disease has not been well defined. Herein, we demonstrate that the natural killer (NK) cell-expressed activating receptor NKG2D participates in transplanted allogeneic NPC rejection in mice persistently infected with JHMV. Cultured NPCs derived from C57BL/6 (H-2(b) ) mice express the NKG2D ligand retinoic acid early precursor transcript (RAE)-1 but expression was dramatically reduced upon differentiation into either glia or neurons. RAE-1(+) NPCs were susceptible to NK cell-mediated killing whereas RAE-1(-) cells were resistant to lysis. Transplantation of C57BL/6-derived NPCs into JHMV-infected BALB/c (H-2(d) ) mice resulted in infiltration of NKG2D(+) CD49b(+) NK cells and treatment with blocking antibody specific for NKG2D increased survival of allogeneic NPCs. Furthermore, transplantation of differentiated RAE-1(-) allogeneic NPCs into JHMV-infected BALB/c mice resulted in enhanced survival, highlighting a role for the NKG2D/RAE-1 signaling axis in allograft rejection. We also demonstrate that transplantation of allogeneic NPCs into JHMV-infected mice resulted in infection of the transplanted cells suggesting that these cells may be targets for infection. Viral infection of cultured cells increased RAE-1 expression, resulting in enhanced NK cell-mediated killing through NKG2D recognition. Collectively, these results show that in a viral-induced demyelination model, NK cells contribute to rejection of allogeneic NPCs through an NKG2D signaling pathway. © 2014 AlphaMed Press.

  11. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  12. The effect of the neural activity on topological properties of growing neural networks.

    Science.gov (United States)

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns of the network's development; the process of creation and deletion of connections continues throughout the whole life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. By using a two-dimensional activity-dependent growth model we demonstrated the neural network growth process from disconnected neurons to fully connected networks. To quantitatively investigate the influence of the network's activity on its topological properties, we compared it with a random growth network that does not depend on the network's activity. Using methods from random graph theory to analyze the network's connection structure, it is shown that growth in neural networks results in the formation of a well-known "small-world" network.
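
    The "small-world" characterization mentioned here rests on comparing the clustering coefficient and characteristic path length of a network against a random control; the sketch below computes both with networkx, using generator graphs as stand-ins for the grown and random networks.

        import networkx as nx

        n, k, p = 200, 8, 0.1

        # Stand-ins for a grown network and an activity-independent random control.
        grown = nx.watts_strogatz_graph(n, k, p, seed=0)     # small-world-like topology
        random_control = nx.gnm_random_graph(n, grown.number_of_edges(), seed=0)

        for name, g in [("grown", grown), ("random", random_control)]:
            clustering = nx.average_clustering(g)
            # Use the largest connected component in case the random graph is disconnected.
            giant = g.subgraph(max(nx.connected_components(g), key=len))
            path_length = nx.average_shortest_path_length(giant)
            print(f"{name}: clustering={clustering:.3f}, path length={path_length:.2f}")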

  13. Windowed active sampling for reliable neural learning

    NARCIS (Netherlands)

    Barakova, E.I; Spaanenburg, L

    The composition of the example set has a major impact on the quality of neural learning. The popular approach is focused on extensive pre-processing to bridge the representation gap between process measurement and neural presentation. In contrast, windowed active sampling attempts to solve these

  14. The Effects of GABAergic Polarity Changes on Episodic Neural Network Activity in Developing Neural Systems

    Directory of Open Access Journals (Sweden)

    Wilfredo Blanco

    2017-09-01

    Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in the ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these, and other, changes has on the network behavior is hard to know from experimental studies since they all happen in parallel. One advantage of a computational approach is that one has the ability to study developmental changes in isolation. Here, we examine the effects of GABAergic synapse polarity change on the spontaneous activity of both a mean field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the "intermediate neurons." We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral change results from priming of these neurons. The study illustrates how even arguably the simplest of developmental changes that occurs in neural systems can present non-intuitive behaviors. It also makes predictions about neural network behavioral changes

  15. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Directory of Open Access Journals (Sweden)

    Tomoki Kurikawa

    Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using simple Hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in

  16. The relationship between structural and functional connectivity: graph theoretical analysis of an EEG neural mass model

    NARCIS (Netherlands)

    Ponten, S.C.; Daffertshofer, A.; Hillebrand, A.; Stam, C.J.

    2010-01-01

    We investigated the relationship between structural network properties and both synchronization strength and functional characteristics in a combined neural mass and graph theoretical model of the electroencephalogram (EEG). Thirty-two neural mass models (NMMs), each representing the lump activity

  17. Pooling and correlated neural activity

    Directory of Open Access Journals (Sweden)

    Robert Rosenbaum

    2010-04-01

    Correlations between spike trains can strongly modulate neuronal activity and affect the ability of neurons to encode information. Neurons integrate inputs from thousands of afferents. Similarly, a number of experimental techniques are designed to record pooled cell activity. We review and generalize a number of previous results that show how correlations between cells in a population can be amplified and distorted in signals that reflect their collective activity. The structure of the underlying neuronal response can significantly impact correlations between such pooled signals. Therefore care needs to be taken when interpreting pooled recordings, or modeling networks of cells that receive inputs from large presynaptic populations. We also show that the frequently observed runaway synchrony in feedforward chains is primarily due to the pooling of correlated inputs.
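
    The amplification effect described here can be checked numerically: for two pools of n cells with uniform pairwise correlation rho, the correlation between the pooled signals is n*rho / (1 + (n-1)*rho), which approaches 1 as the pools grow. The sketch below verifies this with surrogate data; the pool size and correlation values are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        n, rho, T = 100, 0.05, 20000          # pool size, pairwise correlation, samples

        # Unit-variance "spike count" surrogates with uniform pairwise correlation rho,
        # built from a shared global factor: x_i = sqrt(rho)*g + sqrt(1-rho)*e_i.
        g = rng.normal(size=T)
        pool_a = np.sqrt(rho) * g[:, None] + np.sqrt(1 - rho) * rng.normal(size=(T, n))
        pool_b = np.sqrt(rho) * g[:, None] + np.sqrt(1 - rho) * rng.normal(size=(T, n))

        pooled_a, pooled_b = pool_a.sum(axis=1), pool_b.sum(axis=1)
        empirical = np.corrcoef(pooled_a, pooled_b)[0, 1]
        predicted = n * rho / (1 + (n - 1) * rho)   # correlation between the two pooled signals
        print(f"pairwise rho={rho}, pooled correlation: empirical={empirical:.3f}, predicted={predicted:.3f}")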

  18. Neural activation in stress-related exhaustion

    DEFF Research Database (Denmark)

    Gavelin, Hanna Malmberg; Neely, Anna Stigsdotter; Andersson, Micael

    2017-01-01

    The primary purpose of this study was to investigate the association between burnout and neural activation during working memory processing in patients with stress-related exhaustion. Additionally, we investigated the neural effects of cognitive training as part of stress rehabilitation. Fifty...... association between burnout level and working memory performance was found, however, our findings indicate that frontostriatal neural responses related to working memory were modulated by burnout severity. We suggest that patients with high levels of burnout need to recruit additional cognitive resources...... to uphold task performance. Following cognitive training, increased neural activation was observed during 3-back in working memory-related regions, including the striatum, however, low sample size limits any firm conclusions....

  19. Race modulates neural activity during imitation

    Science.gov (United States)

    Losin, Elizabeth A. Reynolds; Iacoboni, Marco; Martin, Alia; Cross, Katy A.; Dapretto, Mirella

    2014-01-01

    Imitation plays a central role in the acquisition of culture. People preferentially imitate others who are self-similar, prestigious or successful. Because race can indicate a person's self-similarity or status, race influences whom people imitate. Prior studies of the neural underpinnings of imitation have not considered the effects of race. Here we measured neural activity with fMRI while European American participants imitated meaningless gestures performed by actors of their own race and of two racial outgroups, African American and Chinese American. Participants also passively observed the actions of these actors and their portraits. Frontal, parietal and occipital areas were differentially activated while participants imitated actors of different races. More activity was present when imitating African Americans than the other racial groups, perhaps reflecting participants' reported lack of experience with and negative attitudes towards this group, or the group's lower perceived social status. This pattern of neural activity was not found when participants passively observed the gestures of the actors or simply looked at their faces. Instead, during face-viewing neural responses were overall greater for own-race individuals, consistent with prior race perception studies not involving imitation. Our findings represent a first step in elucidating neural mechanisms involved in cultural learning, a process that influences almost every aspect of our lives but has thus far received little neuroscientific study. PMID:22062193

  20. Modeling and optimization by particle swarm embedded neural network for adsorption of zinc (II) by palm kernel shell based activated carbon from aqueous environment.

    Science.gov (United States)

    Karri, Rama Rao; Sahu, J N

    2018-01-15

    Zn (II) is one of the common heavy metal pollutants found in industrial effluents. Removal of such pollutants from industrial effluents can be accomplished by various techniques, of which adsorption has been found to be an efficient method. Application of adsorption is limited by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm kernel shell based agricultural waste is examined for its efficiency in removing Zn (II) from wastewater and aqueous solution. The influence of independent process variables such as initial concentration, pH, residence time, activated carbon (AC) dosage and process temperature on the removal of Zn (II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the design of the experimental matrix, 50 experimental runs are performed with each process variable in the experimental range. The optimal values of the process variables to achieve maximum removal efficiency are studied using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model, consisting of first-order and second-order terms, is developed using analysis of variance within the RSM-CCD framework. Particle swarm optimization, a meta-heuristic optimization method, is embedded in the ANN architecture to optimize the search space of the neural network. The optimized, trained neural network describes the testing and validation data well, with R² equal to 0.9106 and 0.9279, respectively. The outcomes indicate the superiority of the ANN-PSO based model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
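
    Many of the plasticity models this kind of review covers are built around spike-timing-dependent plasticity (STDP). The snippet below is a minimal, textbook-style pair-based STDP rule, included only to make the mechanism concrete; the time constants, learning rates and spike trains are generic assumptions rather than parameters from any specific model discussed in the article.

      import numpy as np

      # Pair-based STDP: potentiate when pre fires before post, depress otherwise.
      tau_plus, tau_minus = 20.0, 20.0     # ms, decay of the pairing window
      a_plus, a_minus = 0.01, 0.012        # learning rates (slightly depression-biased)

      def stdp_dw(pre_spikes, post_spikes):
          """Total weight change for two spike trains (times in ms)."""
          dw = 0.0
          for t_pre in pre_spikes:
              for t_post in post_spikes:
                  dt = t_post - t_pre
                  if dt > 0:       # pre before post -> potentiation
                      dw += a_plus * np.exp(-dt / tau_plus)
                  elif dt < 0:     # post before pre -> depression
                      dw -= a_minus * np.exp(dt / tau_minus)
          return dw

      pre = np.array([10.0, 50.0, 90.0])
      post_causal = pre + 5.0              # post consistently lags pre by 5 ms
      post_acausal = pre - 5.0             # post consistently leads pre by 5 ms

      print("causal pairing,  dw =", round(stdp_dw(pre, post_causal), 4))   # > 0
      print("acausal pairing, dw =", round(stdp_dw(pre, post_acausal), 4))  # < 0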

  2. An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2016-01-01

    Full Text Available This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models represents a challenge because sound wave analyses generate very large amounts of data. However, feature selection techniques may reduce the amount of data required to represent an audio signal sample. Some of the audio features that were analyzed include Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models is yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The amount of information needed to build a HAR model directly impacts the accuracy of the model. This problem was also tackled in the present work; our results indicate that we are capable of recognizing a human activity with an accuracy of 85% using the proposed HAR model. This means that minimum computational costs are needed, thus allowing portable devices to identify human activities using audio as an information source.
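
    A minimal sketch of the feature-extraction step such a HAR model relies on is shown below, assuming the librosa library for MFCC computation and scikit-learn's random forest as the classifier; the synthetic "activity" clips and the particular feature set are stand-ins for illustration only.

      import numpy as np
      import librosa                                  # assumed available for MFCC extraction
      from sklearn.ensemble import RandomForestClassifier

      def audio_features(y, sr):
          """Summarize a raw audio clip with MFCC means/stds plus simple statistics."""
          mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: 13 x n_frames
          stats = np.array([y.mean(), y.std(), np.abs(y).max()])
          return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), stats])

      # Synthetic stand-ins for labelled activity clips (real work would load recordings)
      sr = 22050
      rng = np.random.default_rng(0)
      X, labels = [], []
      for label, freq in [("walking", 3.0), ("typing", 12.0)]:
          for _ in range(20):
              t = np.arange(sr) / sr
              clip = np.sin(2 * np.pi * freq * t) * rng.normal(1.0, 0.3, size=sr)
              X.append(audio_features(clip, sr))
              labels.append(label)

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(np.array(X), labels)
      print("training accuracy:", clf.score(np.array(X), labels))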

  3. Neural Activity Reveals Preferences Without Choices

    Science.gov (United States)

    Smith, Alec; Bernheim, B. Douglas; Camerer, Colin

    2014-01-01

    We investigate the feasibility of inferring the choices people would make (if given the opportunity) based on their neural responses to the pertinent prospects when they are not engaged in actual decision making. The ability to make such inferences is of potential value when choice data are unavailable, or limited in ways that render standard methods of estimating choice mappings problematic. We formulate prediction models relating choices to “non-choice” neural responses and use them to predict out-of-sample choices for new items and for new groups of individuals. The predictions are sufficiently accurate to establish the feasibility of our approach. PMID:25729468

  4. Neural network modeling of associative memory: Beyond the Hopfield model

    Science.gov (United States)

    Dasgupta, Chandan

    1992-07-01

    A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.
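
    For concreteness, the sketch below implements the simplest fixed-point associative memory of the kind mentioned above: a standard Hopfield network with a Hebbian outer-product rule. It illustrates storage and recall only and does not reproduce the hierarchical or limit-cycle variants described in the record.

      import numpy as np

      rng = np.random.default_rng(1)
      N, P = 100, 5                                   # neurons, stored patterns

      patterns = rng.choice([-1, 1], size=(P, N))
      W = (patterns.T @ patterns) / N                 # Hebbian outer-product rule
      np.fill_diagonal(W, 0.0)

      def recall(state, steps=20):
          """Asynchronous updates until the network settles into an attractor."""
          state = state.copy()
          for _ in range(steps):
              for i in rng.permutation(N):
                  state[i] = 1 if W[i] @ state >= 0 else -1
          return state

      # Corrupt a stored pattern by flipping 15% of its bits, then recall it
      probe = patterns[0].copy()
      flip = rng.choice(N, size=15, replace=False)
      probe[flip] *= -1
      overlap = recall(probe) @ patterns[0] / N
      print("overlap with stored pattern after recall:", overlap)   # close to 1.0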

  5. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie

    2014-01-01

    Full Text Available A modeling based on the improved Elman neural network (IENN is proposed to analyze the nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE varying with the number of hidden neurons and the iteration step are studied to determine the number of the hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA with two-tone signal and broadband signals as input have shown that the proposed behavioral modeling can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with Volterra-Laguerre (VL model, Chebyshev neural network (CNN model, and basic Elman neural network (BENN model, the proposed model has better performance.
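
    A schematic of the architecture described above might look like the following: an Elman-style network whose hidden units are activated by Chebyshev polynomials of the first kind instead of sigmoids. Only the forward pass is shown; the weights are random, and the training procedure and CDPA data are omitted, so all numerical choices here are illustrative assumptions.

      import numpy as np

      def cheb_T(n, x):
          """Chebyshev polynomial of the first kind T_n(x), with x clipped to [-1, 1]."""
          x = float(np.clip(x, -1.0, 1.0))
          return float(np.cos(n * np.arccos(x)))

      class ElmanChebyshev:
          """Forward pass of an Elman-style network whose hidden units use Chebyshev
          basis functions rather than sigmoids; weights are random, training omitted."""
          def __init__(self, n_in=1, n_hid=5, seed=0):
              rng = np.random.default_rng(seed)
              self.W_in = rng.normal(0, 0.5, (n_hid, n_in))
              self.W_ctx = rng.normal(0, 0.5, (n_hid, n_hid))   # recurrent context weights
              self.w_out = rng.normal(0, 0.5, n_hid)
              self.context = np.zeros(n_hid)

          def step(self, x):
              pre = self.W_in @ np.atleast_1d(x) + self.W_ctx @ self.context
              hidden = np.array([cheb_T(n + 1, pre[n]) for n in range(len(pre))])
              self.context = hidden                 # context units copy the hidden state
              return float(self.w_out @ hidden)

      net = ElmanChebyshev()
      two_tone = np.sin(0.2 * np.arange(200)) + 0.5 * np.sin(0.31 * np.arange(200))
      output = np.array([net.step(x) for x in two_tone])
      print("output range:", output.min().round(3), output.max().round(3))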

  6. Understanding the Implications of Neural Population Activity on Behavior

    Science.gov (United States)

    Briguglio, John

    Learning how neural activity in the brain leads to the behavior we exhibit is one of the fundamental questions in Neuroscience. In this dissertation, several lines of work are presented that use principles of neural coding to understand behavior. In one line of work, we formulate the efficient coding hypothesis in a non-traditional manner in order to test human perceptual sensitivity to complex visual textures. We find a striking agreement between how variable a particular texture signal is and how sensitive humans are to its presence. This reveals that the efficient coding hypothesis is still a guiding principle for neural organization beyond the sensory periphery, and that the nature of cortical constraints differs from the peripheral counterpart. In another line of work, we relate frequency discrimination acuity to neural responses from auditory cortex in mice. It has been previously observed that optogenetic manipulation of auditory cortex, in addition to changing neural responses, evokes changes in behavioral frequency discrimination. We are able to account for changes in frequency discrimination acuity on an individual basis by examining the Fisher information from the neural population with and without optogenetic manipulation. In the third line of work, we address the question of what a neural population should encode given that its inputs are responses from another group of neurons. Drawing inspiration from techniques in machine learning, we train Deep Belief Networks on simulated retinal data and show the emergence of Gabor-like filters, reminiscent of responses in primary visual cortex. In the last line of work, we model the state of a cortical excitatory-inhibitory network during complex adaptive stimuli. Using a rate model with Wilson-Cowan dynamics, we demonstrate that simple non-linearities in the signal transferred from inhibitory to excitatory neurons can account for real neural recordings taken from auditory cortex. This work establishes and tests
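
    The last line of work mentions a rate model with Wilson-Cowan dynamics. A minimal sketch of such an excitatory-inhibitory rate model is given below; the coupling strengths, time constants and external drive are generic textbook-style values, not those used in the dissertation.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # Standard Wilson-Cowan excitatory (E) / inhibitory (I) rate equations
      w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0   # coupling strengths (illustrative)
      tau_e, tau_i = 10.0, 10.0                         # time constants (ms)
      dt, T = 0.1, 500.0

      def simulate(P_e, P_i=0.0):
          """Integrate the E/I rates with Euler steps for external drives P_e, P_i."""
          n = int(T / dt)
          E = np.zeros(n)
          I = np.zeros(n)
          for t in range(1, n):
              dE = (-E[t-1] + sigmoid(w_ee * E[t-1] - w_ei * I[t-1] + P_e)) / tau_e
              dI = (-I[t-1] + sigmoid(w_ie * E[t-1] - w_ii * I[t-1] + P_i)) / tau_i
              E[t] = E[t-1] + dt * dE
              I[t] = I[t-1] + dt * dI
          return E, I

      E, I = simulate(P_e=1.25)
      print("late-time E-rate range:", E[-1000:].min().round(3), E[-1000:].max().round(3))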

  7. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
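
    To make the form of the SDME model concrete, the sketch below evaluates the stimulus-conditional codeword distribution for a toy population of five binary cells by exact enumeration; the stimulus-dependent fields and the pairwise couplings are arbitrary illustrative values rather than parameters fitted to retinal data.

      import itertools
      import numpy as np

      # Stimulus-dependent maximum entropy model for a tiny population (n = 5 cells):
      #   P(x | s) ~ exp( sum_i h_i(s) x_i + sum_{i<j} J_ij x_i x_j ),  x_i in {0, 1}
      rng = np.random.default_rng(0)
      n = 5
      J = np.triu(rng.normal(0, 0.3, size=(n, n)), 1)   # pairwise couplings (upper triangle)

      def h(stimulus):
          """Stimulus-dependent fields; here a simple linear filter of the stimulus."""
          filters = np.linspace(-1.0, 1.0, n)
          return filters * stimulus - 1.0

      def codeword_distribution(stimulus):
          words = np.array(list(itertools.product([0, 1], repeat=n)))
          log_p = words @ h(stimulus) + np.einsum('ki,ij,kj->k', words, J, words)
          p = np.exp(log_p)
          return words, p / p.sum()

      words, p = codeword_distribution(stimulus=1.5)
      for idx in np.argsort(p)[::-1][:3]:
          print("codeword", words[idx], "probability", round(p[idx], 3))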

  8. Solar energetic particle flux enhancement as a predictor of geomagnetic activity in a neural network-based model

    Czech Academy of Sciences Publication Activity Database

    Valach, F.; Revallo, M.; Bochníček, Josef; Hejda, Pavel

    2009-01-01

    Roč. 7, April (2009), S04004/1-S04004/7 ISSN 1542-7390 R&D Projects: GA AV ČR(CZ) IAA300120608; GA AV ČR 1QS300120506 Institutional research plan: CEZ:AV0Z30120515 Keywords : neural networks * coronal mass ejections * energetic particles * flares * radio emissions * magnetic storms Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 1.845, year: 2009

  9. Chronic mild stress impairs latent inhibition and induces region-specific neural activation in CHL1-deficient mice, a mouse model of schizophrenia.

    Science.gov (United States)

    Buhusi, Mona; Obray, Daniel; Guercio, Bret; Bartlett, Mitchell J; Buhusi, Catalin V

    2017-08-30

    Schizophrenia is a neurodevelopmental disorder characterized by abnormal processing of information and attentional deficits. Schizophrenia has a high genetic component but is precipitated by environmental factors, as proposed by the 'two-hit' theory of schizophrenia. Here we compared latent inhibition as a measure of learning and attention, in CHL1-deficient mice, an animal model of schizophrenia, and their wild-type littermates, under no-stress and chronic mild stress conditions. All unstressed mice as well as the stressed wild-type mice showed latent inhibition. In contrast, CHL1-deficient mice did not show latent inhibition after exposure to chronic stress. Differences in neuronal activation (c-Fos-positive cell counts) were noted in brain regions associated with latent inhibition: Neuronal activation in the prelimbic/infralimbic cortices and the nucleus accumbens shell was affected solely by stress. Neuronal activation in basolateral amygdala and ventral hippocampus was affected independently by stress and genotype. Most importantly, neural activation in nucleus accumbens core was affected by the interaction between stress and genotype. These results provide strong support for a 'two-hit' (genes x environment) effect on latent inhibition in CHL1-deficient mice, and identify CHL1-deficient mice as a model of schizophrenia-like learning and attention impairments. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Models of neural dynamics in brain information processing - the developments of 'the decade'

    International Nuclear Information System (INIS)

    Borisyuk, G N; Borisyuk, R M; Kazanovich, Yakov B; Ivanitskii, Genrikh R

    2002-01-01

    Neural network models are discussed that have been developed during the last decade with the purpose of reproducing spatio-temporal patterns of neural activity in different brain structures. The main goal of the modeling was to test hypotheses of synchronization, temporal and phase relations in brain information processing. The models being considered are those of temporal structure of spike sequences, of neural activity dynamics, and oscillatory models of attention and feature integration. (reviews of topical problems)

  11. On the origin of reproducible sequential activity in neural circuits

    Science.gov (United States)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
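
    A minimal sketch of the generalized Lotka-Volterra rate equations used in this line of work is given below; the asymmetric inhibition matrix is a simple hand-built choice that biases switching into a fixed order and is not the authors' construction of the stable heteroclinic sequence.

      import numpy as np

      # Generalized Lotka-Volterra rates for a small inhibitory circuit:
      #   da_i/dt = a_i * (sigma_i - sum_j rho_ij * a_j) + noise
      # Asymmetric inhibition produces transient sequential switching between neurons.
      rng = np.random.default_rng(2)
      n = 5
      sigma = np.ones(n)
      rho = np.full((n, n), 1.5)             # strong all-to-all inhibition
      np.fill_diagonal(rho, 1.0)             # self-interaction
      for i in range(n):                     # weaken inhibition onto the "next" neuron
          rho[(i + 1) % n, i] = 0.5          # ... to bias the switching order

      dt, steps = 0.01, 30000
      a = np.full(n, 0.1) + 0.01 * rng.random(n)
      winners = []
      for t in range(steps):
          da = a * (sigma - rho @ a)
          a = np.clip(a + dt * da + 1e-6 * rng.normal(size=n), 1e-9, None)
          if t % 1000 == 0:
              winners.append(int(np.argmax(a)))

      print("most active neuron over time:", winners)   # typically steps through a sequence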

  12. Forex Prediction Using a Neural Network Model

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision involved in predicting is very important, because prediction can help determine the forex value at a given future time and thereby reduce the risk of loss. The aim of this research is to predict the forex market using a neural network model with per-1-minute time-series data, in order to determine the prediction accuracy and thus reduce the risk of running a forex business. The research method comprises data collection followed by training, learning and testing using a neural network. After evaluation, the results of this research show that applying the Neural Network algorithm can predict forex with a prediction accuracy of 0.431 +/- 0.096, so that this prediction can help reduce the risk of running a forex business. Keywords: prediction, forex, neural network.

  13. Modelling collective cell migration of neural crest.

    Science.gov (United States)

    Szabó, András; Mayor, Roberto

    2016-10-01

    Collective cell migration has emerged in the recent decade as an important phenomenon in cell and developmental biology and can be defined as the coordinated and cooperative movement of groups of cells. Most studies concentrate on tightly connected epithelial tissues, even though collective migration does not require a constant physical contact. Movement of mesenchymal cells is more independent, making their emergent collective behaviour less intuitive and therefore lending importance to computational modelling. Here we focus on such modelling efforts that aim to understand the collective migration of neural crest cells, a mesenchymal embryonic population that migrates large distances as a group during early vertebrate development. By comparing different models of neural crest migration, we emphasize the similarity and complementary nature of these approaches and suggest a future direction for the field. The principles derived from neural crest modelling could aid understanding the collective migration of other mesenchymal cell types. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Neural network tagging in a toy model

    International Nuclear Information System (INIS)

    Milek, Marko; Patel, Popat

    1999-01-01

    The purpose of this study is a comparison of the Artificial Neural Network approach to HEP analysis against traditional methods. A toy model used in this analysis consists of two types of particles defined by four generic properties. A number of 'events' were created according to the model using standard Monte Carlo techniques. Several fully connected, feed-forward, multi-layered Artificial Neural Networks were trained to tag the model events. The performance of each network was compared to the standard analysis mechanisms and significant improvement was observed.

  15. A neural network model for credit risk evaluation.

    Science.gov (United States)

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.

  16. Neural network models of categorical perception.

    Science.gov (United States)

    Damper, R I; Harnad, S R

    2000-05-01

    Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. In 1977, Macmillan, Kaplan, and Creelman introduced the use of signal detection theory to CP studies. Anderson and colleagues simultaneously proposed the first neural model for CP, yet this line of research has been less well explored. In this paper, we assess the ability of neural-network models of CP to predict the psychophysical performance of real observers with speech sounds and artificial/novel stimuli. We show that a variety of neural mechanisms are capable of generating the characteristics of CP. Hence, CP may not be a special mode of perception but an emergent property of any sufficiently powerful general learning system.

  17. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in a classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  18. Weather forecasting based on hybrid neural model

    Science.gov (United States)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-11-01

    Making deductions and predictions about climate has been a challenge throughout mankind's history. Accurate meteorological forecasts help to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. Current research treats climate as a major challenge for machine information mining and deduction. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting given the special nature of climate-forecasting frameworks. The study concentrates on data representing Saudi Arabian weather. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for measuring forecast accuracy. Individually, MLP forecasting results are better than those of RBF; however, the proposed hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.

  19. Death and rebirth of neural activity in sparse inhibitory networks

    Science.gov (United States)

    Angulo-Garcia, David; Luccioli, Stefano; Olmi, Simona; Torcini, Alessandro

    2017-05-01

    Inhibition is a key aspect of neural dynamics playing a fundamental role for the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategical for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.

  20. Isotherm and kinetics study of malachite green adsorption onto copper nanowires loaded on activated carbon: artificial neural network modeling and genetic algorithm optimization.

    Science.gov (United States)

    Ghaedi, M; Shojaeipour, E; Ghaedi, A M; Sahraei, Reza

    2015-05-05

    In this study, copper nanowires loaded on activated carbon (Cu-NWs-AC) were used as a novel, efficient adsorbent for the removal of malachite green (MG) from aqueous solution. This new material was synthesized through a simple protocol, and its surface properties such as surface area, pore volume and functional groups were characterized with different techniques such as XRD, BET and FESEM analysis. The relation between removal percentage and variables such as solution pH, adsorbent dosage (0.005, 0.01, 0.015, 0.02 and 0.1 g), contact time (1-40 min) and initial MG concentration (5, 10, 20, 70 and 100 mg/L) was investigated and optimized. A three-layer artificial neural network (ANN) model was utilized to predict the malachite green dye removal (%) by Cu-NWs-AC following the conduction of 248 experiments. When the training of the ANN was performed, the parameters of the ANN model were as follows: a linear transfer function (purelin) at the output layer, the Levenberg-Marquardt algorithm (LMA), and a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons. A minimum mean squared error (MSE) of 0.0017 and a coefficient of determination (R(2)) of 0.9658 were found for prediction and modeling of dye removal using the testing data set. Good agreement between experimental data and data predicted using the ANN model was obtained. Fitting the experimental data under the previously optimized conditions confirms the suitability of the Langmuir isotherm model for their description, with a maximum adsorption capacity of 434.8 mg/g at 25°C. Kinetic studies at various adsorbent masses and initial MG concentrations show that the maximum MG removal percentage was achieved within 20 min. The adsorption of MG follows pseudo-second-order kinetics in combination with the intraparticle diffusion model. Copyright © 2015 Elsevier B.V. All rights reserved.
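
    The isotherm and kinetic fits reported above follow standard functional forms. The sketch below shows how the Langmuir isotherm and the pseudo-second-order kinetic model are typically fitted with nonlinear least squares; the data points are invented for illustration and are not the measurements from this study.

      import numpy as np
      from scipy.optimize import curve_fit

      # Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)
      def langmuir(Ce, qmax, KL):
          return qmax * KL * Ce / (1.0 + KL * Ce)

      # Pseudo-second-order kinetics: qt = k2 * qe^2 * t / (1 + k2 * qe * t)
      def pseudo_second_order(t, qe, k2):
          return k2 * qe**2 * t / (1.0 + k2 * qe * t)

      # Illustrative (synthetic) equilibrium and kinetic data, not the paper's values
      Ce = np.array([2.0, 5.0, 10.0, 25.0, 50.0])            # mg/L at equilibrium
      qe_obs = np.array([150.0, 260.0, 340.0, 410.0, 430.0]) # mg/g adsorbed
      t = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])        # min
      qt_obs = np.array([120.0, 200.0, 300.0, 360.0, 400.0, 410.0])

      (qmax, KL), _ = curve_fit(langmuir, Ce, qe_obs, p0=[400.0, 0.1])
      (qe_fit, k2), _ = curve_fit(pseudo_second_order, t, qt_obs, p0=[400.0, 0.001])

      print(f"Langmuir fit: qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
      print(f"Pseudo-second-order fit: qe = {qe_fit:.1f} mg/g, k2 = {k2:.4f} g/(mg min)")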

  1. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  2. Neural modeling of prefrontal executive function

    Energy Technology Data Exchange (ETDEWEB)

    Levine, D.S. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    Brain executive function is based in a distributed system whereby prefrontal cortex is interconnected with other cortical and subcortical loci. Executive function is divided roughly into three interacting parts: affective guidance of responses; linkage among working memory representations; and forming complex behavioral schemata. Neural network models of each of these parts are reviewed and fit into a preliminary theoretical framework.

  3. Task-dependent modulation of oscillatory neural activity during movements

    DEFF Research Database (Denmark)

    Herz, D. M.; Christensen, M. S.; Reck, C.

    2011-01-01

    Neural oscillations in different frequency bands have been observed in a range of sensorimotor tasks and have been linked to coupling of spatially distinct neurons. The goal of this study was to detect a general motor network that is activated during phasic and tonic movements and to study the task-dependent modulation of frequency coupling within this network. To this end we recorded 122-multichannel EEG in 13 healthy subjects while they performed three simple motor tasks. EEG data source modeling using individual MR images was carried out with a multiple source beamformer approach. A bilateral motor network … connectivity was strongest between central and cerebellar regions. Our results show that neural coupling within motor networks is modulated in distinct frequency bands depending on the motor task. They provide evidence that dynamic causal modeling in combination with EEG source analysis is a valuable tool …

  4. Artificial neural network (ANN) method for modeling of sunset yellow dye adsorption using zinc oxide nanorods loaded on activated carbon: Kinetic and isotherm study

    Science.gov (United States)

    Maghsoudi, M.; Ghaedi, M.; Zinali, A.; Ghaedi, A. M.; Habibi, M. H.

    2015-01-01

    In this research, ZnO nanoparticles loaded on activated carbon (ZnO-NPs-AC) were synthesized simply by a low-cost and nontoxic procedure. The characterization and identification were completed by different techniques such as SEM and XRD analysis. A three-layer artificial neural network (ANN) model was applied for accurate prediction of the dye removal percentage from aqueous solution by ZnO-NRs-AC, based on 270 experimental data points. The network was trained using the experimental data obtained at optimum pH with different ZnO-NRs-AC amounts (0.005-0.015 g) and 5-40 mg/L of sunset yellow dye over contact times of 0.5-30 min. The ANN model was applied for prediction of the removal percentage of the present systems with the Levenberg-Marquardt algorithm (LMA), a linear transfer function (purelin) at the output layer and a tangent sigmoid transfer function (tansig) in the hidden layer with 6 neurons. A minimum mean squared error (MSE) of 0.0008 and a coefficient of determination (R2) of 0.998 were found for prediction and modeling of SY removal. The influence of parameters including adsorbent amount, initial dye concentration, pH and contact time on the sunset yellow (SY) removal percentage was investigated, and optimal experimental conditions were ascertained. Optimal conditions were set as follows: pH, 2.0; 10 min contact time; an adsorbent dose of 0.015 g. Equilibrium data fitted well with the Langmuir model, with a maximum adsorption capacity of 142.85 mg/g for 0.005 g adsorbent. The adsorption of sunset yellow followed the pseudo-second-order rate equation.

  5. Active voltammetric microsensors with neural signal processing.

    Energy Technology Data Exchange (ETDEWEB)

    Vogt, M. C.

    1998-12-11

    Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow interest detectors and wide spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical 'signatures' from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can

  6. Active voltammetric microsensors with neural signal processing

    Science.gov (United States)

    Vogt, Michael C.; Skubal, Laura R.

    1999-02-01

    Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow interest detectors and wide spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical 'signatures' from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can detect, recognize, and

  7. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While there are many research problems that need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering the fact that manufacturers are not willing to share their control systems. This work proposes to use a neural network approach for modeling a small unmanned vehicle's trajectory without knowing its control system, bypassing exhaustive efforts at aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used the trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise because the trained neural network can predict 4D trajectories accurately, and prediction errors were less than 2.0 meters in both temporal and spatial dimensions.
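
    A minimal sketch of the data-driven trajectory-modeling idea is given below: a network learns a one-step state transition from (state, wind) pairs and is then rolled out to produce a 4D trajectory. The toy dynamics, network size and training setup are illustrative assumptions; the study's mathematical vehicle model and control system are not reproduced here.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      def true_step(state, wind):
          """Toy stand-in for a vehicle dynamics model: state = [x, y, z, t]."""
          x, y, z, t = state
          vx, vy = 5.0 + wind[0], 2.0 + wind[1]          # m/s, wind-perturbed velocities
          return np.array([x + vx, y + vy, z + 0.1, t + 1.0])

      # Build (state, wind) -> next-state training pairs from simulated flights
      X, Y = [], []
      for _ in range(200):
          state = np.array([0.0, 0.0, 50.0, 0.0])
          wind = rng.normal(0, 1.0, size=2)
          for _ in range(30):
              nxt = true_step(state, wind)
              X.append(np.concatenate([state, wind]))
              Y.append(nxt)
              state = nxt

      # Feature scaling is omitted here for brevity
      model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      model.fit(np.array(X), np.array(Y))

      # Roll the learned one-step model forward to predict a full 4D trajectory
      state, wind = np.array([0.0, 0.0, 50.0, 0.0]), np.array([0.5, -0.3])
      trajectory = [state]
      for _ in range(30):
          state = model.predict(np.concatenate([state, wind]).reshape(1, -1))[0]
          trajectory.append(state)
      print("predicted endpoint (x, y, z, t):", np.round(trajectory[-1], 2))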

  8. A theory of how active behavior stabilises neural activity: Neural gain modulation by closed-loop environmental feedback.

    Directory of Open Access Journals (Sweden)

    Christopher L Buckley

    2018-01-01

    Full Text Available During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, for perceptual processes. This closed-loop feedback is mediated by brainwide neural circuits but how the presence of feedback signals impacts on the dynamics and function of neurons is not well understood. Here we present a simple theory suggesting that closed-loop feedback between the brain/body/environment can modulate neural gain and, consequently, change endogenous neural fluctuations and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking we show that negative feedback mediated by whisking vibrissa can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this suppression provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. Moreover, this mechanism suggests a novel signal detection mechanism that selectively accentuates active, rather than passive, whisker touch signals. This mechanism is consistent with a predictive coding strategy that is sensitive to the consequences of motor actions rather than the difference between the predicted and actual sensory input. We further support the theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour in a virtual swim simulator. We show, as predicted by this theory, that the degree to which each cell contributes in linking sensory and motor signals well explains how much its neural fluctuations are suppressed by closed-loop optomotor behaviour. More generally we argue that our results

  9. A theory of how active behavior stabilises neural activity: Neural gain modulation by closed-loop environmental feedback.

    Science.gov (United States)

    Buckley, Christopher L; Toyoizumi, Taro

    2018-01-01

    During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, for perceptual processes. This closed-loop feedback is mediated by brainwide neural circuits but how the presence of feedback signals impacts on the dynamics and function of neurons is not well understood. Here we present a simple theory suggesting that closed-loop feedback between the brain/body/environment can modulate neural gain and, consequently, change endogenous neural fluctuations and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking we show that negative feedback mediated by whisking vibrissa can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this suppression provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. Moreover, this mechanism suggests a novel signal detection mechanism that selectively accentuates active, rather than passive, whisker touch signals. This mechanism is consistent with a predictive coding strategy that is sensitive to the consequences of motor actions rather than the difference between the predicted and actual sensory input. We further support the theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour in a virtual swim simulator. We show, as predicted by this theory, that the degree to which each cell contributes in linking sensory and motor signals well explains how much its neural fluctuations are suppressed by closed-loop optomotor behaviour. More generally we argue that our results demonstrate the dependence
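
    The central claim, that negative closed-loop feedback reduces effective neural gain and suppresses endogenous fluctuations, can be illustrated with a toy one-population rate model; the sketch below compares the standard deviation of the simulated rate with and without reafferent feedback. The single-variable form and all parameter values are simplifying assumptions, not the authors' whisker- or optomotor-system models.

      import numpy as np

      # Toy rate model: tau * dr/dt = -r + g * (noise + k * r), where the environment
      # feeds the neural output back with gain k (k = 0 -> open loop, k < 0 -> negative
      # closed-loop feedback as during active whisking / optomotor behaviour).
      def simulate(k, g=2.0, tau=10.0, dt=0.1, steps=100000, seed=0):
          rng = np.random.default_rng(seed)
          r = 0.0
          trace = np.empty(steps)
          for t in range(steps):
              drive = rng.normal(0, 1.0) + k * r          # sensory noise + reafferent feedback
              r += dt * (-r + g * drive) / tau
              trace[t] = r
          return trace

      for k in (0.0, -0.5, -1.0):
          print(f"feedback gain k = {k:+.1f} -> fluctuation s.d. = {simulate(k).std():.3f}")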

  10. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  11. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons

    Science.gov (United States)

    Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Portes, Jacob P.; Timerman, Dmitriy

    2016-01-01

    Brain hemodynamics serve as a proxy for neural activity in a range of noninvasive neuroimaging techniques including functional magnetic resonance imaging (fMRI). In resting-state fMRI, hemodynamic fluctuations have been found to exhibit patterns of bilateral synchrony, with correlated regions inferred to have functional connectivity. However, the relationship between resting-state hemodynamics and underlying neural activity has not been well established, making the neural underpinnings of functional connectivity networks unclear. In this study, neural activity and hemodynamics were recorded simultaneously over the bilateral cortex of awake and anesthetized Thy1-GCaMP mice using wide-field optical mapping. Neural activity was visualized via selective expression of the calcium-sensitive fluorophore GCaMP in layer 2/3 and 5 excitatory neurons. Characteristic patterns of resting-state hemodynamics were accompanied by more rapidly changing bilateral patterns of resting-state neural activity. Spatiotemporal hemodynamics could be modeled by convolving this neural activity with hemodynamic response functions derived through both deconvolution and gamma-variate fitting. Simultaneous imaging and electrophysiology confirmed that Thy1-GCaMP signals are well-predicted by multiunit activity. Neurovascular coupling between resting-state neural activity and hemodynamics was robust and fast in awake animals, whereas coupling in urethane-anesthetized animals was slower, and in some cases included lower-frequency (…) fluctuations. These results indicate that resting-state hemodynamics in the awake and anesthetized brain are coupled to underlying patterns of excitatory neural activity. The patterns of bilaterally-symmetric spontaneous neural activity revealed by wide-field Thy1-GCaMP imaging may depict the neural foundation of functional connectivity networks detected in resting-state fMRI. PMID:27974609
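
    The modeling step described above, convolving neural activity with a hemodynamic response function, can be sketched as follows; the gamma-variate shape parameters and the simulated neural trace are generic illustrative choices rather than the functions derived in the study.

      import numpy as np

      fs = 10.0                                   # sampling rate (Hz)
      t = np.arange(0, 30, 1.0 / fs)              # HRF support, seconds

      def gamma_variate_hrf(t, tau=1.0, alpha=3.0):
          """Generic gamma-variate impulse response, peak-normalized."""
          h = (t / tau) ** alpha * np.exp(-t / tau)
          return h / h.max()

      # Simulated 'neural' trace: brief bursts of activity on a quiet background
      rng = np.random.default_rng(0)
      duration = 120.0
      n = int(duration * fs)
      neural = np.zeros(n)
      neural[rng.choice(n, size=15, replace=False)] = 1.0

      # Predicted hemodynamics = neural activity convolved with the HRF
      hrf = gamma_variate_hrf(t)
      hemo = np.convolve(neural, hrf)[:n] / fs

      peak_lag = np.argmax(hrf) / fs
      print(f"HRF peaks {peak_lag:.1f} s after a neural event;",
            f"predicted hemodynamic range: {hemo.min():.3f} to {hemo.max():.3f}")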

  12. Temporal-pattern learning in neural models

    CERN Document Server

    Genís, Carme Torras

    1985-01-01

    While the ability of animals to learn rhythms is an unquestionable fact, the underlying neurophysiological mechanisms are still no more than conjectures. This monograph explores the requirements of such mechanisms, reviews those previously proposed and postulates a new one based on a direct electric coding of stimulation frequencies. Experimental support for the option taken is provided both at the single neuron and neural network levels. More specifically, the material presented divides naturally into four parts: a description of the experimental and theoretical framework where this work becomes meaningful (Chapter 2), a detailed specification of the pacemaker neuron model proposed together with its validation through simulation (Chapter 3), an analytic study of the behavior of this model when submitted to rhythmic stimulation (Chapter 4) and a description of the neural network model proposed for learning, together with an analysis of the simulation results obtained when varying several factors r...

  13. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model with unused data within the range of input parameters shows that the maximum absolute error for the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  14. Neural activity in the hippocampus during conflict resolution.

    Science.gov (United States)

    Sakimoto, Yuya; Okada, Kana; Hattori, Minoru; Takeda, Kozue; Sakata, Shogo

    2013-01-15

    This study examined configural association theory and conflict resolution models in relation to hippocampal neural activity during positive patterning tasks. According to configural association theory, the hippocampus is important for responses to compound stimuli in positive patterning tasks. In contrast, according to the conflict resolution model, the hippocampus is important for responses to single stimuli in positive patterning tasks. We hypothesized that if configural association theory is applicable, and not the conflict resolution model, the hippocampal theta power should be increased when compound stimuli are presented. If, on the other hand, the conflict resolution model is applicable, but not configural association theory, then the hippocampal theta power should be increased when single stimuli are presented. If both models are valid and applicable in the positive patterning task, we predict that the hippocampal theta power should be increased by presentation of both compound and single stimuli during the positive patterning task. To examine our hypotheses, we measured hippocampal theta power in rats during a positive patterning task. The results showed that hippocampal theta power increased during the presentation of a single stimulus, but did not increase during the presentation of a compound stimulus. This finding suggests that the conflict resolution model is more applicable than the configural association theory for describing neural activity during positive patterning tasks. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Ising model for neural data

    DEFF Research Database (Denmark)

    Roudi, Yasser; Tyrcha, Joanna; Hertz, John

    2009-01-01

    (no Danish abstract available) We study pairwise Ising models for describing the statistics of multi-neuron spike trains, using data from a simulated cortical network. We explore efficient ways of finding the optimal couplings in these models and examine their statistical properties. To do this, we extract the optimal couplings for subsets of size up to 200 neurons, essentially exactly, using Boltzmann learning. We then study the quality of several approximate methods for finding the couplings by comparing their results with those found from Boltzmann learning. Two of these methods -- inversion of the Thouless-Anderson-Palmer equations and an approximation proposed by Sessak and Monasson -- are remarkably accurate. Using these approximations for larger subsets of neurons, we find that extracting couplings using data from a subset smaller than the full network tends systematically to overestimate...
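
    As an illustration of the inverse-Ising problem described above, the sketch below generates synthetic spin patterns from a known pairwise model and recovers approximate couplings by naive mean-field inversion of the measured covariance; Boltzmann learning and the TAP and Sessak-Monasson corrections studied in the record are omitted, and all sizes and coupling scales are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      N, T = 20, 200000

      # Synthetic "spike" data: binary patterns generated by Gibbs sampling
      # from a random pairwise Ising model (spins in {-1, +1}).
      J_true = rng.normal(0, 0.15 / np.sqrt(N), size=(N, N))
      J_true = np.triu(J_true, 1) + np.triu(J_true, 1).T
      h_true = rng.normal(0, 0.1, size=N)

      s = rng.choice([-1, 1], size=N)
      samples = np.empty((T, N))
      for t in range(T):
          i = rng.integers(N)
          field = h_true[i] + J_true[i] @ s
          s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2 * field)) else -1
          samples[t] = s

      # Naive mean-field inversion: J ~ -(C^{-1}) off the diagonal,
      # where C is the covariance matrix of the recorded spins.
      C = np.cov(samples.T)
      J_mf = -np.linalg.inv(C)
      np.fill_diagonal(J_mf, 0.0)

      iu = np.triu_indices(N, 1)
      corr = np.corrcoef(J_true[iu], J_mf[iu])[0, 1]
      print("correlation between true and mean-field couplings:", round(corr, 2))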

  16. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    A large number of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrains, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in this low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insights into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs. How can a vehicle's trajectory be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once fully trained, given current vehicle states, winds, and desired future trajectory, the neural

  17. An Activity for Demonstrating the Concept of a Neural Circuit

    Science.gov (United States)

    Kreiner, David S.

    2012-01-01

    College students in two sections of a general psychology course participated in a demonstration of a simple neural circuit. The activity was based on a neural circuit that Jeffress proposed for localizing sounds. Students in one section responded to a questionnaire prior to participating in the activity, while students in the other section…

  18. Model for neural signaling leap statistics

    International Nuclear Information System (INIS)

    Chevrollier, Martine; Oria, Marcos

    2011-01-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed qualitative change between Normal statistics (with T = 37.5°C, awaken regime) and Lévy statistics (T = 35.5°C, sleeping period), characterized by rare events of long range connections.

  19. Model for neural signaling leap statistics

    Science.gov (United States)

    Chevrollier, Martine; Oriá, Marcos

    2011-03-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed qualitative change between Normal statistics (with T = 37.5°C, awaken regime) and Lévy statistics (T = 35.5°C, sleeping period), characterized by rare events of long range connections.

  20. Model for neural signaling leap statistics

    Energy Technology Data Exchange (ETDEWEB)

    Chevrollier, Martine; Oria, Marcos, E-mail: oria@otica.ufpb.br [Laboratorio de Fisica Atomica e Lasers Departamento de Fisica, Universidade Federal da ParaIba Caixa Postal 5086 58051-900 Joao Pessoa, Paraiba (Brazil)

    2011-03-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed qualitative change between Normal statistics (with T = 37.5°C, awaken regime) and Lévy statistics (T = 35.5°C, sleeping period), characterized by rare events of long range connections.

  1. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data ...
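
    A minimal radial basis function network of the kind the record describes can be written directly: Gaussian hidden units at fixed centers followed by a ridge-regularized linear readout. The synthetic degradation curve, the number of centers and the width are illustrative assumptions, not the reactor kinetics or settings from the article.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic stand-in for reactor data: pollutant degradation vs. time
      X = np.linspace(0, 10, 60).reshape(-1, 1)
      y = np.exp(-0.4 * X[:, 0]) + 0.02 * rng.normal(size=60)

      # Radial basis function network: Gaussian hidden units + linear output weights
      centers = np.linspace(0, 10, 8).reshape(-1, 1)        # fixed, evenly spaced centers
      width = 1.5

      def design_matrix(X):
          d2 = (X - centers.T) ** 2                         # squared distances to centers
          return np.exp(-d2 / (2.0 * width ** 2))

      Phi = design_matrix(X)
      # Output weights by ridge-regularized least squares
      lam = 1e-3
      w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

      pred = design_matrix(X) @ w
      print("training RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 4))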

  2. Can Neural Activity Propagate by Endogenous Electrical Field?

    Science.gov (United States)

    Qiu, Chen; Shivacharan, Rajat S.; Zhang, Mingming

    2015-01-01

    It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2–6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mice hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5–5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. SIGNIFICANCE STATEMENT Neural activity (waves or spikes) can propagate using well documented mechanisms such as synaptic transmission, gap junctions, or diffusion. However, the purpose of this paper is to provide an explanation for experimental data showing that neural signals can propagate by means other than synaptic

  3. Exponential stability of Cohen-Grossberg neural networks with a general class of activation functions

    International Nuclear Information System (INIS)

    Wan Anhua; Wang Miansen; Peng Jigen; Qiao Hong

    2006-01-01

    In this Letter, the dynamics of the Cohen-Grossberg neural network model are investigated. The activation functions are only assumed to be Lipschitz continuous, which provides a much wider application domain for neural networks than previous results. By means of the extended nonlinear measure approach, new and relaxed sufficient conditions for the existence, uniqueness and global exponential stability of the equilibrium of the neural networks are obtained. Moreover, an estimate for the exponential convergence rate of the neural networks is precisely characterized. Our results improve on existing ones.
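
    For orientation, the Cohen-Grossberg network studied in such stability results is usually written in the following standard form (generic textbook notation, which may differ from the Letter's):

        $$\frac{dx_i(t)}{dt} = -a_i\big(x_i(t)\big)\Big[b_i\big(x_i(t)\big) - \sum_{j=1}^{n} t_{ij}\, s_j\big(x_j(t)\big) + J_i\Big], \qquad i = 1,\dots,n,$$

    where $a_i$ is an amplification function, $b_i$ a self-signal function, $t_{ij}$ the connection weights, $J_i$ a constant external input, and the activation functions $s_j$ are assumed only to be Lipschitz continuous in the result summarized above.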

  4. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models, also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.
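
    As a hedged illustration of the two model classes involved (textbook forms, not necessarily the paper's exact equations), a continuous-time recurrent neural network and a Lotka-Volterra system can be written as

        $$\dot{x}_i = -x_i + \sum_j w_{ij}\,\sigma(x_j) + I_i \qquad \text{and} \qquad \dot{y}_i = y_i\Big(\lambda_i + \sum_j A_{ij}\, y_j\Big),$$

    and the embedding described above constructs a higher-dimensional change of variables that maps solutions of the first form onto solutions of the second when $\sigma$ is the hyperbolic tangent or the logistic sigmoid.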

  5. Identifying Emotions on the Basis of Neural Activation.

    Science.gov (United States)

    Kassam, Karim S; Markey, Amanda R; Cherkassky, Vladimir L; Loewenstein, George; Just, Marcel Adam

    2013-01-01

    We attempt to determine the discriminability and organization of neural activation corresponding to the experience of specific emotions. Method actors were asked to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame) while in an fMRI scanner. Using a Gaussian Naïve Bayes pooled variance classifier, we demonstrate the ability to identify specific emotions experienced by an individual at well over chance accuracy on the basis of: 1) neural activation of the same individual in other trials, 2) neural activation of other individuals who experienced similar trials, and 3) neural activation of the same individual to a qualitatively different type of emotion induction. Factor analysis identified valence, arousal, sociality, and lust as dimensions underlying the activation patterns. These results suggest a structure for neural representations of emotion and inform theories of emotional processing.
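
    A pooled-variance Gaussian Naïve Bayes classifier of the kind named above can be sketched in a few lines of NumPy. This is an illustrative reimplementation under simplifying assumptions (one mean per class and voxel, a single variance per voxel pooled across classes, uniform priors), not the authors' analysis code.

        import numpy as np

        def fit_pooled_gnb(X, y):
            """X: (n_trials, n_voxels) activation patterns; y: integer emotion labels."""
            classes = np.unique(y)
            means = np.stack([X[y == c].mean(axis=0) for c in classes])
            # Pooled variance: residuals around each class mean, pooled over all trials.
            resid = X - means[np.searchsorted(classes, y)]
            pooled_var = resid.var(axis=0) + 1e-8
            return classes, means, pooled_var

        def predict_pooled_gnb(X, classes, means, pooled_var):
            # Gaussian log-likelihood per class with a shared (pooled) variance, uniform priors.
            ll = -0.5 * (((X[:, None, :] - means[None, :, :]) ** 2) / pooled_var).sum(axis=2)
            return classes[np.argmax(ll, axis=1)]

        # Toy example with synthetic "activation" data for three emotion classes.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(90, 200))
        y = np.repeat(np.arange(3), 30)
        X[y == 1, :10] += 0.8    # give two of the classes weak voxel signatures
        X[y == 2, 10:20] += 0.8
        params = fit_pooled_gnb(X[::2], y[::2])
        print("held-out accuracy:", np.mean(predict_pooled_gnb(X[1::2], *params) == y[1::2]))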

  6. Identifying Emotions on the Basis of Neural Activation.

    Directory of Open Access Journals (Sweden)

    Karim S Kassam

    Full Text Available We attempt to determine the discriminability and organization of neural activation corresponding to the experience of specific emotions. Method actors were asked to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame) while in an fMRI scanner. Using a Gaussian Naïve Bayes pooled variance classifier, we demonstrate the ability to identify specific emotions experienced by an individual at well over chance accuracy on the basis of: 1) neural activation of the same individual in other trials, 2) neural activation of other individuals who experienced similar trials, and 3) neural activation of the same individual to a qualitatively different type of emotion induction. Factor analysis identified valence, arousal, sociality, and lust as dimensions underlying the activation patterns. These results suggest a structure for neural representations of emotion and inform theories of emotional processing.

  7. A dynamic neural field model of temporal order judgments.

    Science.gov (United States)

    Hecht, Lauren N; Spencer, John P; Vecera, Shaun P

    2015-12-01

    Temporal ordering of events is biased, or influenced, by perceptual organization (figure-ground organization) and by spatial attention. For example, within a region assigned figural status or at an attended location, onset events are processed earlier (Lester, Hecht, & Vecera, 2009; Shore, Spence, & Klein, 2001), and offset events are processed for longer durations (Hecht & Vecera, 2011; Rolke, Ulrich, & Bausenhart, 2006). Here, we present an extension of a dynamic field model of change detection (Johnson, Spencer, Luck, & Schöner, 2009; Johnson, Spencer, & Schöner, 2009) that accounts for both the onset and offset performance for figural and attended regions. The model posits that neural populations processing the figure are more active, resulting in a peak of activation that quickly builds toward a detection threshold when the onset of a target is presented. This same enhanced activation for some neural populations is maintained when a present target is removed, creating delays in the perception of the target's offset. We discuss the broader implications of this model, including insights regarding how neural activation can be generated in response to the disappearance of information. (c) 2015 APA, all rights reserved.
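
    Dynamic field models of this kind are built on an Amari-style field equation; the generic form below is given for orientation (illustrative notation, not copied from the cited model):

        $$\tau\,\dot{u}(x,t) = -u(x,t) + h + s(x,t) + \int w(x - x')\, g\big(u(x',t)\big)\, dx',$$

    where $u(x,t)$ is the activation of the field at location $x$, $h<0$ a resting level, $s(x,t)$ the stimulus input, $w$ a local-excitation/lateral-inhibition interaction kernel and $g$ a sigmoidal output function. On this reading, enhanced activation of the populations representing the figure lets a peak reach the detection threshold earlier at target onset and persist longer after target offset, which is the mechanism described above.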

  8. Deep Recurrent Neural Networks for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Abdulmajid Murad

    2017-11-01

    Full Text Available Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  9. Deep Recurrent Neural Networks for Human Activity Recognition.

    Science.gov (United States)

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
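
    A concrete, minimal sketch of the unidirectional LSTM variant described above is given below in PyTorch; layer sizes, channel counts and hyperparameters are illustrative placeholders, not the authors' configuration.

        import torch
        import torch.nn as nn

        class LSTMActivityClassifier(nn.Module):
            """Unidirectional LSTM over a window of body-worn sensor samples."""
            def __init__(self, n_channels=9, hidden_size=128, n_layers=2, n_activities=6):
                super().__init__()
                self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                                    num_layers=n_layers, batch_first=True)
                self.head = nn.Linear(hidden_size, n_activities)

            def forward(self, x):
                # x: (batch, time, channels); the last hidden state summarizes the sequence.
                _, (h_n, _) = self.lstm(x)
                return self.head(h_n[-1])

        # Toy forward/backward pass on random data standing in for accelerometer windows.
        model = LSTMActivityClassifier()
        x = torch.randn(32, 100, 9)              # 32 windows, 100 time steps, 9 channels
        labels = torch.randint(0, 6, (32,))
        loss = nn.CrossEntropyLoss()(model(x), labels)
        loss.backward()
        print(float(loss))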

  10. Neural activity associated with self-reflection.

    Science.gov (United States)

    Herwig, Uwe; Kaffenberger, Tina; Schell, Caroline; Jäncke, Lutz; Brühl, Annette B

    2012-05-24

    Self-referential cognitions are important for self-monitoring and self-regulation. Previous studies have addressed the neural correlates of self-referential processes in response to or related to external stimuli. We here investigated brain activity associated with a short, exclusively mental process of self-reflection in the absence of external stimuli or behavioural requirements. Healthy subjects reflected either on themselves, a personally known person or an unknown person during functional magnetic resonance imaging (fMRI). The reflection period was initialized by a cue and followed by photographs of the respective persons (perception of pictures of oneself or the other person). Self-reflection, compared with reflecting on the other persons and, to a large part, also compared with perceiving photographs of oneself, was associated with more prominent dorsomedial and lateral prefrontal, insular, anterior and posterior cingulate activations. Whereas some of these areas showed activity in the "other" conditions as well, self-selective characteristics were revealed in the right dorsolateral prefrontal and posterior cingulate cortex for self-reflection, in the anterior cingulate cortex for self-perception, and in the left inferior parietal lobe for self-reflection and -perception. Altogether, cingulate, medial and lateral prefrontal, insular and inferior parietal regions show relevance for self-related cognitions, in part with self-specificity in comparison with the known-person, unknown-person and perception conditions. Notably, these results were obtained without a behavioural response, supporting the reliability of this methodological approach of applying a solely mental intervention. We suggest considering the reported structures when investigating psychopathologically affected self-related processing.

  11. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    Science.gov (United States)

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity in data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
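
    The central construction, a tensor-product kernel over heterogeneous neural signals, amounts to multiplying a kernel defined on each signal domain. The sketch below is a simplified stand-in (Gaussian kernels on binned spike counts and on LFP feature vectors, combined by an elementwise product and used in kernel ridge regression); the paper's spike-train kernels and kernel adaptive filtering machinery are more elaborate.

        import numpy as np

        def gaussian_kernel(A, B, sigma):
            # Pairwise squared distances between rows of A and rows of B.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def tensor_product_kernel(spikes_a, lfp_a, spikes_b, lfp_b):
            # Tensor-product (separable) kernel: product of the per-modality kernels.
            return gaussian_kernel(spikes_a, spikes_b, sigma=10.0) * \
                   gaussian_kernel(lfp_a, lfp_b, sigma=3.0)

        # Toy multiscale decoding: binned spike counts + LFP features -> stimulus value.
        rng = np.random.default_rng(0)
        n_train, n_test = 200, 50
        spikes = rng.poisson(3.0, size=(n_train + n_test, 20)).astype(float)
        lfp = rng.normal(size=(n_train + n_test, 8))
        stim = spikes[:, 0] + lfp[:, 0] + 0.1 * rng.normal(size=n_train + n_test)

        K = tensor_product_kernel(spikes[:n_train], lfp[:n_train],
                                  spikes[:n_train], lfp[:n_train])
        alpha = np.linalg.solve(K + 1.0 * np.eye(n_train), stim[:n_train])   # kernel ridge
        K_test = tensor_product_kernel(spikes[n_train:], lfp[n_train:],
                                       spikes[:n_train], lfp[:n_train])
        print("test correlation:", np.corrcoef(K_test @ alpha, stim[n_train:])[0, 1])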

  12. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  13. The gamma model : a new neural network for temporal processing

    NARCIS (Netherlands)

    Vries, de B.

    1992-01-01

    In this paper we develop the gamma neural model, a new neural net architecture for processing of temporal patterns. Time varying patterns are normally segmented into a sequence of static patterns that are successively presented to a neural net. In the approach presented here segmentation is avoided.

  14. Activity patterns of cultured neural networks on micro electrode arrays

    NARCIS (Netherlands)

    Rutten, Wim; van Pelt, J.

    2001-01-01

    A hybrid neuro-electronic interface is a cell-cultured micro electrode array, acting as a neural information transducer for stimulation and/or recording of neural activity in the brain or the spinal cord (ventral motor region or dorsal sensory region). It consists of an array of micro electrodes on

  15. A Biophysical Neural Model To Describe Spatial Visual Attention

    International Nuclear Information System (INIS)

    Hugues, Etienne; Jose, Jorge V.

    2008-01-01

    Visual scenes contain enormous spatial and temporal information that is transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of the neural activity in a specific visual area known as V4 when the animal is paying attention directly towards a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations in the absence of attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory inputs, at levels as high as those found in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations

  16. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    Science.gov (United States)

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation result showed that the periodicity of the network energy distribution was positively correlated to the number of neurons and coupling strength, but negatively correlated to signal transmitting delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation result of structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.
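
    For reference, the single-neuron membrane equation underlying such a network is the standard Hodgkin-Huxley model (textbook form, not necessarily the paper's exact notation):

        $$C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - g_L (V - E_L) + I_{\mathrm{syn}} + I_{\mathrm{ext}},$$

    with gating variables obeying $\dot{z} = \alpha_z(V)(1 - z) - \beta_z(V)\, z$ for $z \in \{m, h, n\}$. The power consumed by each neuron can then be estimated from products of the ionic currents and their driving forces and summed over the coupled network, which is the kind of quantity the energy-coding analysis above works with.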

  17. Flood routing modelling with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    R. Peters

    2006-01-01

    Full Text Available For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of streamcourse an adequate resolution in space requires very small calculation time steps, which are some two orders of magnitude smaller than the input data resolution. This leads to quite high computation requirements seriously restricting the application – especially when dealing with real time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANN. First studies show the ability of adequately trained multilayer feedforward networks (MLFN to reproduce the model performance.

  18. Models of neural dynamics in brain information processing - the developments of 'the decade'

    Energy Technology Data Exchange (ETDEWEB)

    Borisyuk, G N; Borisyuk, R M; Kazanovich, Yakov B [Institute of Mathematical Problems of Biology, Russian Academy of Sciences, Pushchino, Moscow region (Russian Federation); Ivanitskii, Genrikh R [Institute for Theoretical and Experimental Biophysics, Russian Academy of Sciences, Pushchino, Moscow region (Russian Federation)

    2002-10-31

    Neural network models are discussed that have been developed during the last decade with the purpose of reproducing spatio-temporal patterns of neural activity in different brain structures. The main goal of the modeling was to test hypotheses of synchronization, temporal and phase relations in brain information processing. The models being considered are those of temporal structure of spike sequences, of neural activity dynamics, and oscillatory models of attention and feature integration. (reviews of topical problems)

  19. Modeling Broadband Microwave Structures by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Otevrel

    2004-06-01

    Full Text Available The paper describes the exploitation of feed-forward neural networks and recurrent neural networks for replacing full-wave numerical models of microwave structures in complex microwave design tools. Building a neural model, attention is turned to the modeling accuracy and to the efficiency of building a model. Dealing with the accuracy, we describe a method of increasing it by successively completing a training set. Neural models are mutually compared in order to highlight their advantages and disadvantages. As a reference model for comparisons, approximations based on standard cubic splines are used. Neural models are used to replace both the time-domain numeric models and the frequency-domain ones.

  20. Large-scale multielectrode recording and stimulation of neural activity

    International Nuclear Information System (INIS)

    Sher, A.; Chichilnisky, E.J.; Dabrowski, W.; Grillo, A.A.; Grivich, M.; Gunning, D.; Hottowy, P.; Kachiguine, S.; Litke, A.M.; Mathieson, K.; Petrusca, D.

    2007-01-01

    Large circuits of neurons are employed by the brain to encode and process information. How this encoding and processing is carried out is one of the central questions in neuroscience. Since individual neurons communicate with each other through electrical signals (action potentials), the recording of neural activity with arrays of extracellular electrodes is uniquely suited for the investigation of this question. Such recordings provide the combination of the best spatial (individual neurons) and temporal (individual action-potentials) resolutions compared to other large-scale imaging methods. Electrical stimulation of neural activity in turn has two very important applications: it enhances our understanding of neural circuits by allowing active interactions with them, and it is a basis for a large variety of neural prosthetic devices. Until recently, the state-of-the-art in neural activity recording systems consisted of several dozen electrodes with inter-electrode spacing ranging from tens to hundreds of microns. Using silicon microstrip detector expertise acquired in the field of high-energy physics, we created a unique neural activity readout and stimulation framework that consists of high-density electrode arrays, multi-channel custom-designed integrated circuits, a data acquisition system, and data-processing software. Using this framework we developed a number of neural readout and stimulation systems: (1) a 512-electrode system for recording the simultaneous activity of as many as hundreds of neurons, (2) a 61-electrode system for electrical stimulation and readout of neural activity in retinas and brain-tissue slices, and (3) a system with telemetry capabilities for recording neural activity in the intact brain of awake, naturally behaving animals. We will report on these systems, their various applications to the field of neurobiology, and novel scientific results obtained with some of them. We will also outline future directions

  1. ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.

    Science.gov (United States)

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining neural dynamics of cellular neural network with ChainMail mechanism. The proposed method formulates the problem of elastic deformation into cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the soft tissues' nonlinear deformation and typical mechanical behaviors. The proposed method not only improves ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics to simulate soft tissue deformation.

  2. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
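
    A minimal sketch of the two-phase idea, winner-take-all simple competitive learning to place the RBF centres followed by a least-squares fit of the RBF output weights, is given below; widths, learning rates and the synthetic data are illustrative assumptions, not the descriptors or settings used in the study.

        import numpy as np

        def scl_centres(X, n_centres=20, lr=0.05, epochs=50, seed=0):
            """Phase 1: simple competitive learning moves each winning centre towards the sample."""
            rng = np.random.default_rng(seed)
            centres = X[rng.choice(len(X), n_centres, replace=False)].copy()
            for _ in range(epochs):
                for x in X[rng.permutation(len(X))]:
                    winner = np.argmin(((centres - x) ** 2).sum(axis=1))
                    centres[winner] += lr * (x - centres[winner])
            return centres

        def rbf_design(X, centres, width=1.5):
            d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * width ** 2))

        # Phase 2: RBF output weights fitted by least squares on toy descriptor/activity data.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(150, 5))                                        # molecular descriptors
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=150)    # biological activity
        Phi = rbf_design(X, scl_centres(X))
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))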

  3. Novel mathematical neural models for visual attention

    DEFF Research Database (Denmark)

    Li, Kang

    for the visual attention theories and spiking neuron models for single spike trains. Statistical inference and model selection are performed and various numerical methods are explored. The designed methods also give a framework for neural coding under visual attention theories. We conduct both analysis on real......Visual attention has been extensively studied in psychology, but some fundamental questions remain controversial. We focus on two questions in this study. First, we investigate how a neuron in visual cortex responds to multiple stimuli inside the receptive field, described by either a response...... system, supported by simulation study. Finally, we present the decoding of multiple temporal stimuli under these visual attention theories, also in a realistic biophysical situation with simulations....

  4. Neural Network Based Models for Fusion Applications

    Science.gov (United States)

    Meneghini, Orso; Tema Biwole, Arsene; Luda, Teobaldo; Zywicki, Bailey; Rea, Cristina; Smith, Sterling; Snyder, Phil; Belli, Emily; Staebler, Gary; Canty, Jeff

    2017-10-01

    Whole device modeling, engineering design, experimental planning and control applications demand models that are simultaneously physically accurate and fast. This poster reports on the ongoing effort towards the development and validation of a series of models that leverage neural-network (NN) multidimensional regression techniques to accelerate some of the most mission critical first principle models for the fusion community, such as: the EPED workflow for prediction of the H-Mode and Super H-Mode pedestal structure; the TGLF and NEO models for the prediction of the turbulent and neoclassical particle, energy and momentum fluxes; and the NEO model for the drift-kinetic solution of the bootstrap current. We also applied NNs to DIII-D experimental data for disruption prediction and for quantifying the effect of RMPs on the pedestal and ELMs. All of these projects were supported by the infrastructure provided by the OMFIT integrated modeling framework. Work supported by US DOE under DE-SC0012656, DE-FG02-95ER54309, DE-FC02-04ER54698.

  5. Neural Activity Patterns in the Human Brain Reflect Tactile Stickiness Perception

    Science.gov (United States)

    Kim, Junsuk; Yeon, Jiwon; Ryu, Jaekyun; Park, Jang-Yeon; Chung, Soon-Cheol; Kim, Sung-Phil

    2017-01-01

    Our previous human fMRI study found brain activations correlated with tactile stickiness perception using the uni-variate general linear model (GLM) (Yeon et al., 2017). Here, we conducted an in-depth investigation on neural correlates of sticky sensations by employing a multivoxel pattern analysis (MVPA) on the same dataset. In particular, we statistically compared multi-variate neural activities in response to the three groups of sticky stimuli: A supra-threshold group including a set of sticky stimuli that evoked vivid sticky perception; an infra-threshold group including another set of sticky stimuli that barely evoked sticky perception; and a sham group including acrylic stimuli with no physically sticky property. Searchlight MVPAs were performed to search for local activity patterns carrying neural information of stickiness perception. Similar to the uni-variate GLM results, significant multi-variate neural activity patterns were identified in postcentral gyrus, subcortical (basal ganglia and thalamus), and insula areas (insula and adjacent areas). Moreover, MVPAs revealed that activity patterns in posterior parietal cortex discriminated the perceptual intensities of stickiness, which was not present in the uni-variate analysis. Next, we applied a principal component analysis (PCA) to the voxel response patterns within identified clusters so as to find low-dimensional neural representations of stickiness intensities. Follow-up clustering analyses clearly showed separate neural grouping configurations between the Supra- and Infra-threshold groups. Interestingly, this neural categorization was in line with the perceptual grouping pattern obtained from the psychophysical data. Our findings thus suggest that different stickiness intensities would elicit distinct neural activity patterns in the human brain and may provide a neural basis for the perception and categorization of tactile stickiness. PMID:28936171

  6. Hypothetical neural mechanism that may play a role in mental rotation: an attractor neural network model.

    Science.gov (United States)

    Benusková, L; Estok, S

    1998-11-01

    We propose an attractor neural network (ANN) model that performs rotation-invariant pattern recognition in such a way that it can account for a neural mechanism being involved in the image transformation accompanying the experience of mental rotation. We compared the performance of our ANN model with the results of the chronometric psychophysical experiments of Cooper and Shepard (Cooper L A and Shepard R N 1973 Visual Information Processing (New York: Academic) pp 204-7) on discrimination of alphanumeric characters presented in various angular departures from their canonical upright position. Comparing the times required for pattern retrieval in its canonical upright position with the reaction times of human subjects, we found agreement in that (i) retrieval times for clockwise and anticlockwise departures of the same angular magnitude (up to 180 degrees) were not different, (ii) retrieval times increased with departure from upright and (iii) increased more sharply as departure from upright approached 180 degrees. The rotation-invariant retrieval of the activity pattern has been accomplished by means of the modified algorithm of Dotsenko (Dotsenko V S 1988 J. Phys. A: Math. Gen. 21 L783-7) proposed for translation-, rotation- and size-invariant pattern recognition, which uses relaxation of neuronal firing thresholds to guide the evolution of the ANN in state space towards the desired memory attractor. The dynamics of neuronal relaxation has been modified for storage and retrieval of low-activity patterns and the original gradient optimization of threshold dynamics has been replaced with optimization by simulated annealing.

  7. A Simple Quantum Neural Net with a Periodic Activation Function

    OpenAIRE

    Daskin, Ammar

    2018-01-01

    In this paper, we propose a simple neural net that requires only $O(n\log_2 k)$ qubits and $O(nk)$ quantum gates: here, $n$ is the number of input parameters, and $k$ is the number of weights applied to these parameters in the proposed neural net. We describe the network in terms of a quantum circuit, and then draw its equivalent classical neural net which involves $O(k^n)$ nodes in the hidden layer. Then, we show that the network uses a periodic activation function of cosine values o...

  8. Active Engine Mounting Control Algorithm Using Neural Network

    Directory of Open Access Journals (Sweden)

    Fadly Jashi Darsivan

    2009-01-01

    Full Text Available This paper proposes the application of a neural network as a controller to isolate engine vibration in an active engine mounting system. It has been shown that the NARMA-L2 neurocontroller has the ability to reject disturbances from a plant. The disturbances are assumed to be impulse and sinusoidal disturbances induced by the engine. The performance of the neural network controller is compared with conventional PD and PID controllers tuned using Ziegler-Nichols. The simulation results show that the neural network controller has a better ability to isolate the engine vibration than the conventional controllers.

  9. Patterns recognition of electric brain activity using artificial neural networks

    Science.gov (United States)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    We present an approach for recognizing different cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillatory patterns in the human EEG using an artificial neural network approach. After learning, the artificial neural network reliably identified cube-recognition processes, for example the perception of a left- or right-oriented Necker cube with different intensities of its edges. We construct an artificial neural network based on a perceptron architecture and demonstrate its effectiveness for pattern recognition in the experimental EEG.

  10. Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations

    OpenAIRE

    Harradon, Michael; Druce, Jeff; Ruttenberg, Brian

    2018-01-01

    Deep neural networks are complex and opaque. As they enter application in a variety of important and safety critical domains, users seek methods to explain their output predictions. We develop an approach to explaining deep neural networks by constructing causal models on salient concepts contained in a CNN. We develop methods to extract salient concepts throughout a target network by using autoencoders trained to extract human-understandable representations of network activations. We then bu...

  11. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  13. Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling

    Directory of Open Access Journals (Sweden)

    Miguel Aguilera

    2016-09-01

    Full Text Available The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioural metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioural preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioural flexibility with an equivalent model from the point of view of 'internalist neuroscience'. A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioural patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling

  14. Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling.

    Science.gov (United States)

    Aguilera, Miguel; Bedia, Manuel G; Barandiaran, Xabier E

    2016-01-01

    The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioral metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioral preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioral flexibility with an equivalent model from the point of view of "internalist neuroscience." A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioral patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling with the world. We

  15. A novel neural network based image reconstruction model with scale and rotation invariance for target identification and classification for Active millimetre wave imaging

    Science.gov (United States)

    Agarwal, Smriti; Bisht, Amit Singh; Singh, Dharmendra; Pathak, Nagendra Prasad

    2014-12-01

    Millimetre wave (MMW) imaging is gaining tremendous interest among researchers, with potential applications in security checks, standoff personal screening, automotive collision avoidance, and more. Current state-of-the-art imaging techniques, viz. microwave and X-ray imaging, suffer from lower resolution and harmful ionizing radiation, respectively. In contrast, MMW imaging operates at lower power and is non-ionizing, and hence medically safe. Despite these favourable attributes, MMW imaging faces various challenges: it is still a relatively unexplored area and lacks a suitable imaging methodology for extracting complete target information. In view of these challenges, an MMW active imaging radar system at 60 GHz was designed for standoff imaging applications. A C-scan (horizontal and vertical scanning) methodology was developed that provides a cross-range resolution of 8.59 mm. The paper further details a suitable target identification and classification methodology. For the identification of regular-shape targets, a mean-standard deviation based segmentation technique was formulated and validated using a different target shape. For classification, a probability density function based target material discrimination methodology was proposed and validated on a different dataset. Lastly, a novel artificial neural network based, scale- and rotation-invariant image reconstruction methodology is proposed to counter the distortions in the image caused by noise, rotation or scale variations. The designed neural network, once trained with sample images, automatically takes care of these deformations and successfully reconstructs the corrected image for the test targets. The techniques developed in this paper are tested and validated using four different regular shapes, viz. rectangle, square, triangle and circle.

  16. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and in understanding how the structure and dynamics of neural networks shape the computational performance. In this paper, we propose a novel approach to develop models of the LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that the LSM with STDP+IP performs better than an LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
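
    The two plasticity rules combined in such models are commonly written as follows (generic pair-based STDP and a simple homeostatic intrinsic-plasticity rule; the exact equations and constants of the paper may differ). For a pre/post spike pair separated by $\Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}}$,

        $$\Delta w = \begin{cases} A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0,\\ -A_{-}\, e^{\Delta t/\tau_{-}}, & \Delta t < 0, \end{cases}$$

    while intrinsic plasticity can be expressed as a slow threshold (or gain) update such as $\theta_i \leftarrow \theta_i + \eta\,(r_i - r_{\mathrm{target}})$, which raises the excitability threshold of neurons firing above a target rate and lowers it otherwise, keeping the average activity of the network at a moderate level.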

  17. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: ·         A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction. ·         Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models. ·         The MPC algorithms based on neural multi-models (inspired by the idea of predictive control). ·         The MPC algorithms with neural approximation with no on-line linearization. ·         The MPC algorithms with guaranteed stability and robustness. ·         Cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  18. Parametric models to relate spike train and LFP dynamics with neural information processing.

    Science.gov (United States)

    Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan

    2012-01-01

    Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial

  19. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model has been widely used as the bushing model in vehicle suspension systems, it cannot express the nonlinear characteristics of a bushing in terms of amplitude and frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. A linear model was employed to represent the linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture the bushing characteristics, in which sine excitations with different frequencies and amplitudes were applied. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model is more robust than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to those of the linear model under several maneuvers
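
    The hybrid structure described above, a linear spring-damper term plus a neural network that captures the remaining hysteretic behaviour, can be sketched as follows (illustrative PyTorch code with assumed parameter values; not the authors' implementation).

        import torch
        import torch.nn as nn

        class HybridBushingModel(nn.Module):
            """Bushing force = linear stiffness/damping part + neural-network residual."""
            def __init__(self, k=1.0e4, c=50.0, hidden=32):
                super().__init__()
                self.k = k          # assumed linear stiffness [N/m]
                self.c = c          # assumed linear damping   [N s/m]
                self.residual = nn.Sequential(
                    nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

            def forward(self, disp, vel):
                linear_force = self.k * disp + self.c * vel
                nn_force = self.residual(torch.stack([disp, vel], dim=-1)).squeeze(-1)
                return linear_force + nn_force

        # Toy fit: train the residual network on measured force minus the linear part.
        model = HybridBushingModel()
        opt = torch.optim.Adam(model.residual.parameters(), lr=1e-3)
        disp = 0.002 * torch.sin(torch.linspace(0.0, 20.0, 1000))
        vel = torch.zeros_like(disp)
        vel[1:] = disp[1:] - disp[:-1]                      # crude finite-difference velocity
        measured = 1.1e4 * disp + 60.0 * vel + 5.0 * torch.tanh(300.0 * disp)   # surrogate data
        for _ in range(200):
            opt.zero_grad()
            loss = ((model(disp, vel) - measured) ** 2).mean()
            loss.backward()
            opt.step()
        print("final MSE:", float(loss))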

  20. Mode Choice Modeling Using Artificial Neural Networks

    OpenAIRE

    Edara, Praveen Kumar

    2003-01-01

    Artificial intelligence techniques have produced excellent results in many diverse fields of engineering. Techniques such as neural networks and fuzzy systems have found their way into transportation engineering. In recent years, neural networks are being used instead of regression techniques for travel demand forecasting purposes. The basic reason lies in the fact that neural networks are able to capture complex relationships and learn from examples and also able to adapt when new data becom...

  1. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high speed performance, but the neural network cannot be used to extract knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model.

  2. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful to learn the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it for continuous speech recognition. We aim to penalize the too complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.

  3. An artificial neural network model for periodic trajectory generation

    Science.gov (United States)

    Shankar, S.; Gander, R. E.; Wood, H. C.

    A neural network model based on biological systems was developed for potential robotic application. The model consists of three interconnected layers of artificial neurons or units: an input layer subdivided into state and plan units, an output layer, and a hidden layer between the two outer layers which serves to implement nonlinear mappings between the input and output activation vectors. Weighted connections are created between the three layers, and learning is effected by modifying these weights. Feedback connections between the output and the input state serve to make the network operate as a finite state machine. The activation vector of the plan units of the input layer emulates the supraspinal commands in biological central pattern generators in that different plan activation vectors correspond to different sequences or trajectories being recalled, even with different frequencies. Three trajectories were chosen for implementation, and learning was accomplished in 10,000 trials. The fault tolerant behavior, adaptiveness, and phase maintenance of the implemented network are discussed.

  4. Modulation of Neural Activity during Guided Viewing of Visual Art.

    Science.gov (United States)

    Herrera-Arcos, Guillermo; Tamez-Duque, Jesús; Acosta-De-Anda, Elsa Y; Kwan-Loo, Kevin; de-Alba, Mayra; Tamez-Duque, Ulises; Contreras-Vidal, Jose L; Soto, Rogelio

    2017-01-01

    Mobile Brain-Body Imaging (MoBI) technology was deployed to record multi-modal data from 209 participants to examine the brain's response to artistic stimuli at the Museo de Arte Contemporáneo (MARCO) in Monterrey, México. EEG signals were recorded as the subjects walked through the exhibit in guided groups of 6-8 people. Moreover, guided groups were either provided with an explanation of each art piece (Guided-E), or given no explanation (Guided-NE). The study was performed using portable Muse (InteraXon, Inc, Toronto, ON, Canada) headbands with four dry electrodes located at AF7, AF8, TP9, and TP10. Each participant performed a baseline (BL) control condition devoid of artistic stimuli and selected his/her favorite piece of art (FP) during the guided tour. In this study, we report data related to participants' demographic information and aesthetic preference as well as effects of art viewing on neural activity (EEG) in a select subgroup of 18-30 year-old subjects (Nc = 25) that generated high-quality EEG signals, on both BL and FP conditions. Dependencies on gender, sensor placement, and presence or absence of art explanation were also analyzed. After denoising, clustering of spectral EEG models was used to identify neural patterns associated with BL and FP conditions. Results indicate statistically significant suppression of beta band frequencies (15-25 Hz) in the prefrontal electrodes (AF7 and AF8) during appreciation of subjects' favorite painting, compared to the BL condition, which was significantly different from EEG responses to non-favorite paintings (NFP). No significant differences in brain activity in relation to the presence or absence of explanation during exhibit tours were found. Moreover, a frontal to posterior asymmetry in neural activity was observed, for both BL and FP conditions. These findings provide new information about frequency-related effects of preferred art viewing in brain activity, and support the view that art appreciation is

  5. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    Science.gov (United States)

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between

  6. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  7. A neural network model of lateralization during letter identification.

    Science.gov (United States)

    Shevtsova, N; Reggia, J A

    1999-03-01

    The causes of cerebral lateralization of cognitive and other functions are currently not well understood. To investigate one aspect of function lateralization, a bihemispheric neural network model for a simple visual identification task was developed that has two parallel interacting paths of information processing. The model is based on commonly accepted concepts concerning neural connectivity, activity dynamics, and synaptic plasticity. A combination of both unsupervised (Hebbian) and supervised (Widrow-Hoff) learning rules is used to train the model to identify a small set of letters presented as input stimuli in the left visual hemifield, in the central position, and in the right visual hemifield. Each visual hemifield projects onto the contralateral hemisphere, and the two hemispheres interact via a simulated corpus callosum. The contribution of each individual hemisphere to the process of input stimuli identification was studied for a variety of underlying asymmetries. The results indicate that multiple asymmetries may cause lateralization. Lateralization occurred toward the side having larger size, higher excitability, or higher learning rate parameters. It appeared more intensively with strong inhibitory callosal connections, supporting the hypothesis that the corpus callosum plays a functionally inhibitory role. The model demonstrates clearly the dependence of lateralization on different hemisphere parameters and suggests that computational models can be useful in better understanding the mechanisms underlying emergence of lateralization.
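
    The combination of unsupervised (Hebbian) and supervised (Widrow-Hoff) learning mentioned above can be written as two additive weight updates. The sketch below is a generic single-layer illustration of that combination, not the paper's bihemispheric architecture; the layer sizes, learning rates and weight-decay factor are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(1)
        n_in, n_out = 16, 4                          # e.g. a coarse letter image -> 4 letter units
        W = 0.01 * rng.normal(size=(n_out, n_in))
        eta_hebb, eta_delta, decay = 1e-3, 5e-2, 0.999

        def train_step(x, target):
            """One combined update: Hebbian strengthening plus Widrow-Hoff error correction."""
            global W
            y = W @ x                                 # linear activations for simplicity
            W += eta_hebb * np.outer(y, x)            # unsupervised Hebbian term
            W += eta_delta * np.outer(target - y, x)  # supervised Widrow-Hoff (delta) term
            W *= decay                                # mild decay keeps Hebbian growth bounded
            return y

        # toy usage: four random "letter" patterns with one-hot targets
        letters = rng.normal(size=(4, n_in))
        for epoch in range(200):
            for i, x in enumerate(letters):
                train_step(x, np.eye(4)[i])

        print(np.round(W @ letters.T, 2))  # inspect how closely outputs approach the one-hot targets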

  8. Evaluation of neural networks to identify types of activity using accelerometers

    NARCIS (Netherlands)

    Vries, S.I. de; Garre, F.G.; Engbers, L.H.; Hildebrandt, V.H.; Buuren, S. van

    2011-01-01

    Purpose: To develop and evaluate two artificial neural network (ANN) models based on single-sensor accelerometer data and an ANN model based on the data of two accelerometers for the identification of types of physical activity in adults. Methods: Forty-nine subjects (21 men and 28 women; age range

  9. Neural Activity During The Formation Of A Giant Auditory Synapse

    NARCIS (Netherlands)

    M.C. Sierksma (Martijn)

    2018-01-01

    The formation of synapses is a critical step in the development of the brain. During this developmental stage neural activity propagates across the brain from synapse to synapse. This activity is thought to instruct the precise, topological connectivity found in the sensory central

  10. A neural network model of causative actions

    Directory of Open Access Journals (Sweden)

    Jeremy eLee-Hand

    2015-06-01

    A common idea in models of action representation is that actions are represented in terms of their perceptual effects (see e.g. Prinz, 1997; Hommel et al., 2001; Sahin et al., 2007; Umilta et al., 2008; Hommel et al., 2013). In this paper we extend existing models of effect-based action representations to account for a novel distinction. Some actions bring about effects that are independent events in their own right: for instance, if John 'smashes' a cup, he brings about the event of 'the cup smashing'. Other actions do not bring about such effects. For instance, if John 'grabs' a cup, this action does not cause the cup to 'do' anything: a grab action has well-defined perceptual effects, but these are not registered by the perceptual system that detects independent events involving external objects in the world. In our model, effect-based actions are implemented in several distinct neural circuits, which are organised into a hierarchy based on the complexity of their associated perceptual effects. The circuit at the top of this hierarchy is responsible for actions that bring about independently perceivable events. This circuit receives input from the perceptual module that recognises arbitrary events taking place in the world, and learns movements that reliably cause such events. We assess our model against existing experimental observations about effect-based motor representations, and make some novel experimental predictions. We also consider the possibility that the 'causative actions' circuit in our model can be identified with a motor pathway reported in other work, specialising in 'functional' actions on manipulable tools (Bub et al., 2008; Binkofski and Buxbaum, 2013).

  11. A neural network model of causative actions.

    Science.gov (United States)

    Lee-Hand, Jeremy; Knott, Alistair

    2015-01-01

    A common idea in models of action representation is that actions are represented in terms of their perceptual effects (see e.g., Prinz, 1997; Hommel et al., 2001; Sahin et al., 2007; Umiltà et al., 2008; Hommel, 2013). In this paper we extend existing models of effect-based action representations to account for a novel distinction. Some actions bring about effects that are independent events in their own right: for instance, if John smashes a cup, he brings about the event of the cup smashing. Other actions do not bring about such effects. For instance, if John grabs a cup, this action does not cause the cup to "do" anything: a grab action has well-defined perceptual effects, but these are not registered by the perceptual system that detects independent events involving external objects in the world. In our model, effect-based actions are implemented in several distinct neural circuits, which are organized into a hierarchy based on the complexity of their associated perceptual effects. The circuit at the top of this hierarchy is responsible for actions that bring about independently perceivable events. This circuit receives input from the perceptual module that recognizes arbitrary events taking place in the world, and learns movements that reliably cause such events. We assess our model against existing experimental observations about effect-based motor representations, and make some novel experimental predictions. We also consider the possibility that the "causative actions" circuit in our model can be identified with a motor pathway reported in other work, specializing in "functional" actions on manipulable tools (Bub et al., 2008; Binkofski and Buxbaum, 2013).

  12. Acquiring neural signals for developing a perception and cognition model

    Science.gov (United States)

    Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert

    2012-06-01

    The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding contextbased modeling, provides us with a theoretical basis for using a bottom-up approach for automating the management of large amounts of information in ways directly useful for human operators. However, integration of human intelligence into a game theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge couple CCD camera. An EEG electrode cup acquires brainwaves from the skin surface on scalp. The humanoid robot has 20 degrees of freedom (DOFs); 12 DOFs located on hips, knees, and ankles for humanoid robot walking, 6 DOFs on shoulders and arms for arms motion, and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated to the robot-walking behaviors. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model.

  13. Modelling the permeability of polymers: a neural network approach

    NARCIS (Netherlands)

    Wessling, Matthias; Mulder, M.H.V.; Bos, A.; Bos, A.; van der Linden, M.K.T.; Bos, M.; van der Linden, W.E.

    1994-01-01

    In this short communication, the prediction of the permeability of carbon dioxide through different polymers using a neural network is studied. A neural network is a numeric-mathematical construction that can model complex non-linear relationships. Here it is used to correlate the IR spectrum of a

  14. A neural network model of ventriloquism effect and aftereffect.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.

  15. A neural network model of ventriloquism effect and aftereffect.

    Directory of Open Access Journals (Sweden)

    Elisa Magosso

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.

  16. Proposal of a model of mammalian neural induction

    Science.gov (United States)

    Levine, Ariel J.; Brivanlou, Ali H.

    2009-01-01

    How does the vertebrate embryo make a nervous system? This complex question has been at the center of developmental biology for many years. The earliest step in this process – the induction of neural tissue – is intimately linked to patterning of the entire early embryo, and the molecular and embryological basis these processes are beginning to emerge. Here, we analyze classic and cutting-edge findings on neural induction in the mouse. We find that data from genetics, tissue explants, tissue grafting, and molecular marker expression support a coherent framework for mammalian neural induction. In this model, the gastrula organizer of the mouse embryo inhibits BMP signaling to allow neural tissue to form as a default fate – in the absence of instructive signals. The first neural tissue induced is anterior and subsequent neural tissue is posteriorized to form the midbrain, hindbrain, and spinal cord. The anterior visceral endoderm protects the pre-specified anterior neural fate from similar posteriorization, allowing formation of forebrain. This model is very similar to the default model of neural induction in the frog, thus bridging the evolutionary gap between amphibians and mammals. PMID:17585896

  17. Spike neural models (part I: The Hodgkin-Huxley model

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2017-05-01

    Artificial neural networks, or ANNs, have grown a lot since their inception back in the 1940s. But no matter the changes, one of the most important components of neural networks is still the node, which represents the neuron. Within spiking neural networks, the node is especially important because it contains the functions and properties of neurons that are necessary for their network. One important aspect of neurons is the ionic flow which produces action potentials, or spikes. Forces of diffusion and electrostatic pressure work together with the physical properties of the cell to move ions around, changing the cell membrane potential, which ultimately produces the action potential. This tutorial reviews the Hodgkin-Huxley model and shows how it simulates the ionic flow of the giant squid axon via four differential equations. The model is implemented in Matlab using Euler's Method to approximate the differential equations. By using Euler's method, an extra parameter is created, the time step. This new parameter needs to be carefully considered or the results of the node may be impaired.
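
    The tutorial's implementation is in Matlab; the sketch below reproduces the same idea in Python: the four Hodgkin-Huxley differential equations (membrane potential plus the m, h and n gating variables, with the standard squid-axon parameter values) integrated with Euler's method, where the time step dt is exactly the extra parameter the tutorial warns must be chosen with care.

        import numpy as np

        # standard Hodgkin-Huxley squid-axon parameters (mV, ms, uA/cm^2, mS/cm^2)
        C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
        E_Na, E_K, E_L = 50.0, -77.0, -54.387

        def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
        def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
        def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
        def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
        def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
        def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

        dt, T, I_ext = 0.01, 50.0, 10.0          # Euler time step (ms), duration (ms), input current
        steps = int(T / dt)
        V, m, h, n = -65.0, 0.05, 0.6, 0.32      # initial conditions near rest
        trace = np.empty(steps)
        for i in range(steps):
            # ionic currents at the current state
            I_Na = g_Na * m**3 * h * (V - E_Na)
            I_K = g_K * n**4 * (V - E_K)
            I_L = g_L * (V - E_L)
            # derivatives of the four Hodgkin-Huxley equations
            dV = (I_ext - I_Na - I_K - I_L) / C
            dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
            dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
            dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
            # Euler updates
            V += dt * dV
            m += dt * dm
            h += dt * dh
            n += dt * dn
            trace[i] = V

        print("peak membrane potential (mV):", round(trace.max(), 1))

    Halving or doubling dt in this sketch is a quick way to see the time-step sensitivity the tutorial refers to.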

  18. Neural activation toward erotic stimuli in homosexual and heterosexual males.

    Science.gov (United States)

    Kagerer, Sabine; Klucken, Tim; Wehrum, Sina; Zimmermann, Mark; Schienle, Anne; Walter, Bertram; Vaitl, Dieter; Stark, Rudolf

    2011-11-01

    Studies investigating sexual arousal exist, yet there are diverging findings on the underlying neural mechanisms with regard to sexual orientation. Moreover, sexual arousal effects have often been confounded with general arousal effects. Hence, it is still unclear which structures underlie the sexual arousal response in homosexual and heterosexual men. Neural activity and subjective responses were investigated in order to disentangle sexual from general arousal. Considering sexual orientation, differential and conjoint neural activations were of interest. The functional magnetic resonance imaging (fMRI) study focused on the neural networks involved in the processing of sexual stimuli in 21 male participants (11 homosexual, 10 heterosexual). Both groups viewed pictures with erotic content as well as aversive and neutral stimuli. The erotic pictures were subdivided into three categories (most sexually arousing, least sexually arousing, and rest) based on the individual subjective ratings of each participant. Blood oxygen level-dependent responses measured by fMRI and subjective ratings. A conjunction analysis revealed conjoint neural activation related to sexual arousal in thalamus, hypothalamus, occipital cortex, and nucleus accumbens. Increased insula, amygdala, and anterior cingulate gyrus activation could be linked to general arousal. Group differences emerged neither when viewing the most sexually arousing pictures compared with highly arousing aversive pictures nor compared with neutral pictures. Results suggest that a widespread neural network is activated by highly sexually arousing visual stimuli. A partly distinct network of structures underlies sexual and general arousal effects. The processing of preferred, highly sexually arousing stimuli recruited similar structures in homosexual and heterosexual males. © 2011 International Society for Sexual Medicine.

  19. Nonlinear adaptive inverse control via the unified model neural network

    Science.gov (United States)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control via a unified model neural network. In order to overcome nonsystematic design and long training time in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method can use less training time to get an inverse model. Finally, we apply this proposed method to control magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides a greater flexibility and better performance in controlling magnetic bearing systems.

  20. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost nonlinear neural networks provide us with improved reconstruction and prediction capabilities for the near wall velocity fields. Based on these results advantages and drawbacks of both approaches are discussed with an outlook toward the development of near wall models for turbulence modeling and control
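
    Because POD is equivalent to a specific linear neural network, the baseline reconstruction step can be sketched with a plain SVD. The synthetic snapshot matrix below merely stands in for the DNS velocity fields used in the paper, and the number of retained modes is an arbitrary choice; the nonlinear extension discussed above would replace this fixed linear encoder/decoder with trainable nonlinear layers.

        import numpy as np

        rng = np.random.default_rng(0)
        # synthetic "velocity" snapshots standing in for DNS data: 200 snapshots of a 64-point profile
        t = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
        x = np.linspace(0.0, 1.0, 64)[None, :]
        snapshots = np.sin(2 * np.pi * x) * np.cos(3 * t) + 0.3 * np.sin(6 * np.pi * x) * np.sin(5 * t)
        snapshots += 0.01 * rng.normal(size=snapshots.shape)

        # POD: leading right-singular vectors of the mean-removed snapshot matrix are the spatial modes
        mean = snapshots.mean(axis=0)
        _, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        k = 2                                  # number of retained POD modes
        modes = Vt[:k]
        coeffs = (snapshots - mean) @ modes.T  # "encoder" of the equivalent linear network
        recon = coeffs @ modes + mean          # "decoder": rank-k reconstruction

        err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
        print("relative reconstruction error with", k, "modes:", round(err, 4))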

  1. Neural Network Models for Time Series Forecasts

    OpenAIRE

    Tim Hill; Marcus O'Connor; William Remus

    1996-01-01

    Neural networks have been advocated as an alternative to traditional statistical forecasting methods. In the present experiment, time series forecasts produced by neural networks are compared with forecasts from six statistical time series methods generated in a major forecasting competition (Makridakis et al. [Makridakis, S., A. Anderson, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, R. Winkler. 1982. The accuracy of extrapolation (time series) methods: Results of a ...

  2. Forecasting Flare Activity Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Hernandez, T.

    2017-12-01

    Current operational flare forecasting relies on human morphological analysis of active regions and the persistence of solar flare activity through time (i.e. that the Sun will continue to do what it is doing right now: flaring or remaining calm). In this talk we present the results of applying deep Convolutional Neural Networks (CNNs) to the problem of solar flare forecasting. CNNs operate by training a set of tunable spatial filters that, in combination with neural layer interconnectivity, allow CNNs to automatically identify significant spatial structures predictive for classification and regression problems. We will start by discussing the applicability and success rate of the approach, the advantages it has over non-automated forecasts, and how mining our trained neural network provides a fresh look into the mechanisms behind magnetic energy storage and release.
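
    A minimal sketch of the ingredients described above, assuming PyTorch and a tiny two-class CNN acting on 64x64 magnetogram-like patches; the architecture, patch size and random inputs are placeholders rather than the network or data used in the talk.

        import torch
        import torch.nn as nn

        # tiny CNN: trainable spatial filters -> pooling -> linear classifier (flare / no-flare)
        model = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 2),        # assumes 64x64 input patches (64 -> 32 -> 16)
        )

        patches = torch.randn(4, 1, 64, 64)    # a batch of 4 synthetic "magnetograms"
        logits = model(patches)
        print(logits.shape)                    # -> torch.Size([4, 2])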

  3. Neural field model of memory-guided search.

    Science.gov (United States)

    Kilpatrick, Zachary P; Poll, Daniel B

    2017-12-01

    Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.

  4. Neural field model of memory-guided search

    Science.gov (United States)

    Kilpatrick, Zachary P.; Poll, Daniel B.

    2017-12-01

    Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.

  5. Typology of nonlinear activity waves in a layered neural continuum.

    Science.gov (United States)

    Koch, Paul; Leisman, Gerry

    2006-04-01

    Neural tissue, a medium containing electro-chemical energy, can amplify small increments in cellular activity. The growing disturbance, measured as the fraction of active cells, manifests as propagating waves. In a layered geometry with a time delay in synaptic signals between the layers, the delay is instrumental in determining the amplified wavelengths. The growth of the waves is limited by the finite number of neural cells in a given region of the continuum. As wave growth saturates, the resulting activity patterns in space and time show a variety of forms, ranging from regular monochromatic waves to highly irregular mixtures of different spatial frequencies. The type of wave configuration is determined by a number of parameters, including alertness and synaptic conditioning as well as delay. For all cases studied, using numerical solution of the nonlinear Wilson-Cowan (1973) equations, there is an interval in delay in which the wave mixing occurs. As delay increases through this interval, during a series of consecutive waves propagating through a continuum region, the activity within that region changes from a single-frequency to a multiple-frequency pattern and back again. The diverse spatio-temporal patterns give a more concrete form to several metaphors advanced over the years to attempt an explanation of cognitive phenomena: Activity waves embody the "holographic memory" (Pribram, 1991); wave mixing provides a plausible cause of the competition called "neural Darwinism" (Edelman, 1988); finally the consecutive generation of growing neural waves can explain the discontinuousness of "psychological time" (Stroud, 1955).
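
    For reference, the space-clamped Wilson-Cowan rate equations underlying this analysis can be integrated in a few lines. The sketch below omits the spatial continuum and the inter-layer delay that the paper varies, and uses parameter values commonly quoted for oscillatory solutions of the original model, so it illustrates the equations rather than the wave typology itself.

        import numpy as np

        def S(x, a, theta):
            """Wilson-Cowan sigmoid, shifted so that S(0) = 0."""
            return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

        # coupling and sigmoid parameters often quoted for the original Wilson-Cowan model
        c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
        a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
        r_e, r_i = 1.0, 1.0
        P, Q = 1.25, 0.0                     # external inputs to the excitatory/inhibitory pools

        dt, steps = 0.01, 20000              # time measured in units of the membrane time constant
        E, I = 0.1, 0.05
        E_trace = np.empty(steps)
        for k in range(steps):
            dE = -E + (1.0 - r_e * E) * S(c1 * E - c2 * I + P, a_e, th_e)
            dI = -I + (1.0 - r_i * I) * S(c3 * E - c4 * I + Q, a_i, th_i)
            E, I = E + dt * dE, I + dt * dI
            E_trace[k] = E

        print("min/max excitatory activity after transient:",
              round(E_trace[5000:].min(), 3), round(E_trace[5000:].max(), 3))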

  6. Generalized activity equations for spiking neural network dynamics

    Directory of Open Access Journals (Sweden)

    Michael A Buice

    2013-11-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales - the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.
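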

  7. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and surrounded by a local circularly confined network (“island”) of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of neurophobic, polyimide-coated substrate by triblock-copolymer coating enhances neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV), which is two weeks later than in unpatterned networks.

  8. Local TEC Modelling and Forecasting using Neural Networks

    Science.gov (United States)

    Tebabal, A.; Radicella, S. M.; Nigussie, M.; Damtie, B.; Nava, B.; Yizengaw, E.

    2017-12-01

    Modelling the Earth's ionospheric characteristics is a central task for the ionospheric community in mitigating ionospheric effects on radio communication and satellite navigation technologies. However, several aspects of modelling are still challenging, for example the storm-time characteristics. This paper presents modelling of TEC, taking into account solar and geomagnetic activity, time of day and day of year, using neural network (NN) modelling techniques. The NNs have been designed with measured GPS-TEC data from low- and mid-latitude GPS stations. The training was conducted using data obtained for the period from 2011 to 2014, and the model prediction accuracy was evaluated using data from 2015. The results show that the diurnal and seasonal trends of GPS-TEC are well reproduced by the model for the two stations. The seasonal characteristics of GPS-TEC are compared with the predictions of the NN and NeQuick 2 models, the latter driven by the monthly average value of solar flux. It is found that the NN model performs better than the corresponding NeQuick 2 model for the low-latitude region. For the mid-latitude station, both the NN and NeQuick 2 models reproduce the average characteristics of TEC variability quite successfully. An attempt at a one-day-ahead forecast of TEC at the two locations has been made by introducing the previous day's solar flux and geomagnetic index values as drivers. The results show that a reasonable day-ahead forecast of local TEC can be achieved.
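
    A hedged sketch of this kind of modelling setup, assuming scikit-learn and a synthetic stand-in for the measured GPS-TEC (the real drivers in the paper are observed solar flux and geomagnetic indices): time of day and day of year are encoded as sine/cosine pairs so that the network sees their cyclic nature.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        hour = rng.uniform(0, 24, n)
        doy = rng.uniform(1, 366, n)
        f107 = rng.uniform(70, 200, n)          # solar flux proxy
        kp = rng.uniform(0, 9, n)               # geomagnetic activity proxy

        # synthetic "TEC" with diurnal/seasonal/solar dependence (placeholder, not real data)
        tec = (10 + 0.15 * f107) * (1 + 0.6 * np.sin(2 * np.pi * (hour - 8) / 24)) \
              + 3 * np.sin(2 * np.pi * doy / 365.25) + 0.8 * kp + rng.normal(0, 1.5, n)

        X = np.column_stack([np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
                             np.sin(2 * np.pi * doy / 365.25), np.cos(2 * np.pi * doy / 365.25),
                             f107, kp])
        X_tr, X_te, y_tr, y_te = train_test_split(X, tec, random_state=0)

        model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
        model.fit(X_tr, y_tr)
        print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))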

  9. Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity

    Science.gov (United States)

    Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.

    2016-10-01

    Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
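
    The decoding step can be sketched independently of the neural data: frame-wise phoneme log-likelihoods (which in the paper come from a linear discriminant analysis model of spatiotemporal high-gamma features, and here are random placeholders) are combined with transition log-probabilities from a toy bigram language model via the Viterbi recursion.

        import numpy as np

        rng = np.random.default_rng(0)
        phones = ["sil", "b", "a", "t"]           # toy phoneme inventory
        P, T = len(phones), 30                    # states, time frames

        log_lik = np.log(rng.dirichlet(np.ones(P), size=T))    # placeholder frame likelihoods
        trans = np.full((P, P), 0.1 / (P - 1))                 # toy bigram "language model"
        np.fill_diagonal(trans, 0.9)                           # favour self-transitions
        log_trans = np.log(trans)

        # Viterbi: best state sequence under frame likelihoods + transition model
        delta = np.zeros((T, P))
        psi = np.zeros((T, P), dtype=int)
        delta[0] = log_lik[0] - np.log(P)
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_trans   # scores[i, j]: come from state i into j
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_lik[t]

        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        decoded = [phones[i] for i in reversed(path)]
        print("".join(p if p != "sil" else "." for p in decoded))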

  10. Lukasiewicz-Topos Models of Neural Networks, Cell Genome and Interactome Nonlinear Dynamic Models

    CERN Document Server

    Baianu, I C

    2004-01-01

    A categorical and Lukasiewicz-Topos framework for Lukasiewicz Algebraic Logic models of nonlinear dynamics in complex functional systems such as neural networks, genomes and cell interactomes is proposed. Lukasiewicz Algebraic Logic models of genetic networks and signaling pathways in cells are formulated in terms of nonlinear dynamic systems with n-state components that allow for the generalization of previous logical models of both genetic activities and neural networks. An algebraic formulation of variable 'next-state functions' is extended to a Lukasiewicz Topos with an n-valued Lukasiewicz Algebraic Logic subobject classifier description that represents non-random and nonlinear network activities as well as their transformations in developmental processes and carcinogenesis.

  11. Statistical mechanics of attractor neural network models with synaptic depression

    International Nuclear Information System (INIS)

    Igarashi, Yasuhiko; Oizumi, Masafumi; Otsubo, Yosuke; Nagata, Kenji; Okada, Masato

    2009-01-01

    Synaptic depression is known to control gain for presynaptic inputs. Since cortical neurons receive thousands of presynaptic inputs, and their outputs are fed into thousands of other neurons, the synaptic depression should influence macroscopic properties of neural networks. We employ simple neural network models to explore the macroscopic effects of synaptic depression. Systems with the synaptic depression cannot be analyzed due to asymmetry of connections with the conventional equilibrium statistical-mechanical approach. Thus, we first propose a microscopic dynamical mean field theory. Next, we derive macroscopic steady state equations and discuss the stabilities of steady states for various types of neural network models.
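
    As a concrete, simplified illustration of the gain control mentioned above, the sketch below uses a standard phenomenological depression model with a single recovering resource variable (Tsodyks-Markram style); the release fraction and recovery time constant are illustrative and need not match the formulation analyzed in the paper.

        import numpy as np

        def depressed_drive(rate_hz, U=0.5, tau_rec=0.8, T=5.0, dt=1e-3, seed=0):
            """Push a Poisson presynaptic train through a depressing synapse and return
            the average transmitted drive (release fraction U times resource x, times rate)."""
            rng = np.random.default_rng(seed)
            x, out = 1.0, []
            for _ in range(int(T / dt)):
                if rng.random() < rate_hz * dt:   # presynaptic spike arrives
                    out.append(U * x)             # transmitted amount
                    x -= U * x                    # resources consumed
                x += dt * (1.0 - x) / tau_rec     # recovery towards full resources
            return np.mean(out) * rate_hz

        for r in (2, 10, 50):
            print(f"rate {r:3d} Hz -> mean drive {depressed_drive(r):.2f}")

    With these settings the transmitted drive should grow sublinearly with the presynaptic rate, which is the gain-control effect referred to above.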

  12. Numeral eddy current sensor modelling based on genetic neural network

    International Nuclear Information System (INIS)

    Yu Along

    2008-01-01

    This paper presents a method used to the numeral eddy current sensor modelling based on the genetic neural network to settle its nonlinear problem. The principle and algorithms of genetic neural network are introduced. In this method, the nonlinear model parameters of the numeral eddy current sensor are optimized by genetic neural network (GNN) according to measurement data. So the method remains both the global searching ability of genetic algorithm and the good local searching ability of neural network. The nonlinear model has the advantages of strong robustness, on-line modelling and high precision. The maximum nonlinearity error can be reduced to 0.037% by using GNN. However, the maximum nonlinearity error is 0.075% using the least square method

  13. Particle swarm optimization of a neural network model in a ...

    Indian Academy of Sciences (India)

    Since tool life is critically affected by the tool wear, accurate prediction of this wear ... In their work, they established an improvement in the quality ... objective optimization of hard turning using neural network modelling and swarm intelligence ...

  14. Activity-dependent modulation of neural circuit synaptic connectivity

    Directory of Open Access Journals (Sweden)

    Charles R Tessier

    2009-07-01

    In many nervous systems, the establishment of neural circuits is known to proceed via a two-stage process: 1) early, activity-independent wiring to produce a rough map characterized by excessive synaptic connections, and 2) subsequent, use-dependent pruning to eliminate inappropriate connections and reinforce maintained synapses. In invertebrates, however, evidence of the activity-dependent phase of synaptic refinement has been elusive, and the dogma has long been that invertebrate circuits are “hard-wired” in a purely activity-independent manner. This conclusion has been challenged recently through the use of new transgenic tools employed in the powerful Drosophila system, which have allowed unprecedented temporal control and single neuron imaging resolution. These recent studies reveal that activity-dependent mechanisms are indeed required to refine circuit maps in Drosophila during precise, restricted windows of late-phase development. Such mechanisms of circuit refinement may be key to understanding a number of human neurological diseases, including developmental disorders such as Fragile X syndrome (FXS) and autism, which are hypothesized to result from defects in synaptic connectivity and activity-dependent circuit function. This review focuses on our current understanding of activity-dependent synaptic connectivity in Drosophila, primarily through analyzing the role of the fragile X mental retardation protein (FMRP) in the Drosophila FXS disease model. The particular emphasis of this review is on the expanding array of new genetically-encoded tools that are allowing cellular events and molecular players to be dissected with ever greater precision and detail.

  15. Neural network based model of an industrial oil-fired boiler system

    African Journals Online (AJOL)

    technique. Neural Network Model, Regression, Mean Square Error, PID controller ... during the training processes. An additional ... used to carry out simulation studies of the model ... A two-layer feed-forward neural network with Matlab ...

  16. Neural activity predicts attitude change in cognitive dissonance.

    Science.gov (United States)

    van Veen, Vincent; Krug, Marie K; Schooler, Jonathan W; Carter, Cameron S

    2009-11-01

    When our actions conflict with our prior attitudes, we often change our attitudes to be more consistent with our actions. This phenomenon, known as cognitive dissonance, is considered to be one of the most influential theories in psychology. However, the neural basis of this phenomenon is unknown. Using a Solomon four-group design, we scanned participants with functional MRI while they argued that the uncomfortable scanner environment was nevertheless a pleasant experience. We found that cognitive dissonance engaged the dorsal anterior cingulate cortex and anterior insula; furthermore, we found that the activation of these regions tightly predicted participants' subsequent attitude change. These effects were not observed in a control group. Our findings elucidate the neural representation of cognitive dissonance, and support the role of the anterior cingulate cortex in detecting cognitive conflict and the neural prediction of attitude change.

  17. Neural network based semi-active control strategy for structural vibration mitigation with magnetorheological damper

    DEFF Research Database (Denmark)

    Bhowmik, Subrata

    2011-01-01

    This paper presents a neural network based semi-active control method for a rotary type magnetorheological (MR) damper. The characteristics of the MR damper are described by the classic Bouc-Wen model, and the performance of the proposed control method is evaluated in terms of a base excited shear ... to determine the damper current based on the derived optimal damper force. For that reason an inverse MR damper model is also designed based on the neural network identification of the particular rotary MR damper. The performance of the proposed controller is compared to that of an optimal pure viscous damper...
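
    For readers unfamiliar with the Bouc-Wen model mentioned above, the sketch below integrates its hysteretic force under an imposed sinusoidal displacement. All parameter values are illustrative placeholders rather than identified values for the rotary damper of the paper, and the dependence of the force on damper current (the quantity the inverse neural network model has to deliver) is omitted.

        import numpy as np

        # illustrative Bouc-Wen parameters (not identified for the damper in the paper)
        c0, k0, alpha = 50.0, 25.0, 900.0          # viscous, stiffness, hysteresis scaling
        A, beta, gamma, n = 120.0, 300.0, 300.0, 2  # evolution parameters of z

        dt, T = 1e-4, 2.0
        t = np.arange(0.0, T, dt)
        x = 0.02 * np.sin(2 * np.pi * 1.5 * t)     # imposed displacement (m)
        v = np.gradient(x, dt)                     # velocity

        z = 0.0
        force = np.empty_like(t)
        for i in range(len(t)):
            # evolution of the hysteretic variable z (explicit Euler)
            dz = A * v[i] - beta * v[i] * abs(z) ** n - gamma * abs(v[i]) * z * abs(z) ** (n - 1)
            z += dt * dz
            force[i] = c0 * v[i] + k0 * x[i] + alpha * z

        print("peak damper force (N):", round(force.max(), 1))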

  18. Neural activity when people solve verbal problems with insight.

    Directory of Open Access Journals (Sweden)

    Mark Jung-Beeman

    2004-04-01

    People sometimes solve problems with a unique process called insight, accompanied by an "Aha!" experience. It has long been unclear whether different cognitive and neural processes lead to insight versus noninsight solutions, or if solutions differ only in subsequent subjective feeling. Recent behavioral studies indicate distinct patterns of performance and suggest differential hemispheric involvement for insight and noninsight solutions. Subjects solved verbal problems, and after each correct solution indicated whether they solved with or without insight. We observed two objective neural correlates of insight. Functional magnetic resonance imaging (Experiment 1) revealed increased activity in the right hemisphere anterior superior temporal gyrus for insight relative to noninsight solutions. The same region was active during initial solving efforts. Scalp electroencephalogram recordings (Experiment 2) revealed a sudden burst of high-frequency (gamma-band) neural activity in the same area beginning 0.3 s prior to insight solutions. This right anterior temporal area is associated with making connections across distantly related information during comprehension. Although all problem solving relies on a largely shared cortical network, the sudden flash of insight occurs when solvers engage distinct neural and cognitive processes that allow them to see connections that previously eluded them.

  19. Strategies influence neural activity for feedback learning across child and adolescent development.

    Science.gov (United States)

    Peters, Sabine; Koolschijn, P Cédric M P; Crone, Eveline A; Van Duijvenvoorde, Anna C K; Raijmakers, Maartje E J

    2014-09-01

    Learning from feedback is an important aspect of executive functioning that shows profound improvements during childhood and adolescence. This is accompanied by neural changes in the feedback-learning network, which includes pre-supplementary motor area (pre- SMA)/anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC), superior parietal cortex (SPC), and the basal ganglia. However, there can be considerable differences within age ranges in performance that are ascribed to differences in strategy use. This is problematic for traditional approaches of analyzing developmental data, in which age groups are assumed to be homogenous in strategy use. In this study, we used latent variable models to investigate if underlying strategy groups could be detected for a feedback-learning task and whether there were differences in neural activation patterns between strategies. In a sample of 268 participants between ages 8 to 25 years, we observed four underlying strategy groups, which were cut across age groups and varied in the optimality of executive functioning. These strategy groups also differed in neural activity during learning; especially the most optimal performing group showed more activity in DLPFC, SPC and pre-SMA/ACC compared to the other groups. However, age differences remained an important contributor to neural activation, even when correcting for strategy. These findings contribute to the debate of age versus performance predictors of neural development, and highlight the importance of studying individual differences in strategy use when studying development. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    Science.gov (United States)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
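
    The equivalence is easiest to see in the linear special case: a single-layer network with a first-order (linear) polynomial activation acting on lagged inputs and outputs is an ARX/ARMA-type model, so its weights can be recovered in closed form by least squares. The toy system below is assumed for illustration and is not one of the simulated or physiological systems of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 2000
        u = rng.normal(size=N)                      # input signal
        y = np.zeros(N)
        a1, a2, b1 = 0.6, -0.2, 0.8                 # "true" system parameters
        for t in range(2, N):
            y[t] = a1 * y[t - 1] + a2 * y[t - 2] + b1 * u[t - 1] + 0.05 * rng.normal()

        # a linear-activation "network" on lagged regressors is exactly an ARX model;
        # its weights can be found in closed form by least squares
        X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])   # y[t-1], y[t-2], u[t-1]
        target = y[2:]
        w, *_ = np.linalg.lstsq(X, target, rcond=None)
        print("estimated (a1, a2, b1):", np.round(w, 3))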

  1. Interacting Neural Processes of Feeding, Hyperactivity, Stress, Reward, and the Utility of the Activity-Based Anorexia Model of Anorexia Nervosa.

    Science.gov (United States)

    Ross, Rachel A; Mandelblat-Cerf, Yael; Verstegen, Anne M J

    Anorexia nervosa (AN) is a psychiatric illness with minimal effective treatments and a very high rate of mortality. Understanding the neurobiological underpinnings of the disease is imperative for improving outcomes and can be aided by the study of animal models. The activity-based anorexia rodent model (ABA) is the current best parallel for the study of AN. This review describes the basic neurobiology of feeding and hyperactivity seen in both ABA and AN, and compiles the research on the role that stress-response and reward pathways play in modulating the homeostatic drive to eat and to expend energy, which become dysfunctional in ABA and AN.

  2. Feed forward neural networks modeling for K-P interactions

    International Nuclear Information System (INIS)

    El-Bakry, M.Y.

    2003-01-01

    Artificial intelligence techniques involving neural networks have become vital modeling tools where model dynamics are difficult to track with conventional techniques. This paper makes use of feed-forward neural networks (FFNN) to model the charged-multiplicity distribution of K-P interactions at high energies. The FFNN was trained using experimental data for the multiplicity distributions at different lab momenta. Results of the FFNN model were compared with those generated using the parton two-fireball model and with the experimental data. The proposed FFNN model showed a good fit to the experimental data. The neural network model's performance was also tested outside the training region and was found to be in good agreement with the experimental data.

  3. Neural Ranking Models with Weak Supervision

    NARCIS (Netherlands)

    Dehghani, M.; Zamani, H.; Severyn, A.; Kamps, J.; Croft, W.B.

    2017-01-01

    Despite the impressive improvements achieved by unsupervised deep neural networks in computer vision and NLP tasks, such improvements have not yet been observed in ranking for information retrieval. The reason may be the complexity of the ranking problem, as it is not obvious how to learn from

  4. Neural Network Based Model of an Industrial Oil-Fired Boiler System ...

    African Journals Online (AJOL)

    A two-layer feed-forward neural network with Hyperbolic tangent sigmoid ... The neural network model when subjected to test, using the validation input data; ... Proportional Integral Derivative (PID) Controller is used to control the neural ...

  5. Application of neural networks to seismic active control

    International Nuclear Information System (INIS)

    Tang, Yu.

    1995-01-01

    An exploratory study on seismic active control using an artificial neural network (ANN) is presented in which a single-degree-of-freedom (SDF) structural system is controlled by a trained neural network. A feed-forward neural network and the backpropagation training method are used in the study. In backpropagation training, the learning rate is determined by ensuring the decrease of the error function at each training cycle. The training patterns for the neural net are generated randomly. Then, the trained ANN is used to compute the control force according to the control algorithm. The control strategy proposed herein is to apply the control force at every time step to destroy the build-up of the system response. The ground motions considered in the simulations are the N21E and N69W components of the Lake Hughes No. 12 record that occurred in the San Fernando Valley in California on February 9, 1971. Significant reduction of the structural response by one order of magnitude is observed. Also, it is shown that the proposed control strategy has the ability to reduce the peak that occurs during the first few cycles of the time history. These promising results assert the potential of applying ANNs to active structural control under seismic loads.

  6. Discriminative training of self-structuring hidden control neural models

    DEFF Research Database (Denmark)

    Sørensen, Helge Bjarup Dissing; Hartmann, Uwe; Hunnerup, Preben

    1995-01-01

    This paper presents a new training algorithm for self-structuring hidden control neural (SHC) models. The SHC models were trained non-discriminatively for speech recognition applications. Better recognition performance can generally be achieved if discriminative training is applied instead. Thus, we developed a discriminative training algorithm for SHC models, where each SHC model for a specific speech pattern is trained with utterances of the pattern to be recognized and with other utterances. The discriminative training of SHC neural models has been tested on the TIDIGITS database...

  7. Performance of Deep and Shallow Neural Networks, the Universal Approximation Theorem, Activity Cliffs, and QSAR.

    Science.gov (United States)

    Winkler, David A; Le, Tu C

    2017-01-01

    Neural networks have generated valuable Quantitative Structure-Activity/Property Relationships (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on Deep Learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition, and are now being applied to QSAR and QSAR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how DNN may ameliorate or remove troublesome "activity cliffs" in QSAR data sets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Neural activity in the rat basal ganglia

    NARCIS (Netherlands)

    Zhao, Yan; Stegenga, J.; Heida, Tjitske; van Wezel, Richard Jack Anton

    2013-01-01

    Objectives: Pathological oscillations in the beta frequencies (8-30 Hz) have been found in the local field potentials of Parkinson's disease (PD) patients and non-human primate models of PD [1]. In particular, these synchronizations appear in the subthalamic nucleus (STN), a common target for deep brain

  9. Teaching methodology for modeling reference evapotranspiration with artificial neural networks

    OpenAIRE

    Martí, Pau; Pulido Calvo, Inmaculada; Gutiérrez Estrada, Juan Carlos

    2015-01-01

    Artificial neural networks are a robust alternative to conventional models for estimating different targets in irrigation engineering, among others reference evapotranspiration, a key variable for estimating crop water requirements. This paper presents a didactic methodology for introducing students to the application of artificial neural networks for reference evapotranspiration estimation using MatLab. Apart from learning a specific application of this software wi...

  10. Comparing Neural Networks and ARMA Models in Artificial Stock Market

    Czech Academy of Sciences Publication Activity Database

    Krtek, Jiří; Vošvrda, Miloslav

    2011-01-01

    Vol. 18, No. 28 (2011), pp. 53-65. ISSN 1212-074X. R&D Projects: GA ČR GD402/09/H045. Institutional research plan: CEZ:AV0Z10750506. Keywords: neural networks; vector ARMA; artificial market. Subject RIV: AH - Economics. http://library.utia.cas.cz/separaty/2011/E/krtek-comparing neural networks and arma models in artificial stock market.pdf

  11. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Daskin, Ammar

    2016-01-01

    The learning process for multi-layered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow-Hoff formula, do not change the eigenvectors of the weight matrix while flatting the eigenvalues. In infinity, these iterative formulas result in terms formed by the principal components of the weight matrix: i.e., the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase...
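
    For reference, the Widrow-Hoff (LMS) iteration mentioned above takes only a few lines; the classical sketch below is included to make the learning rule concrete, has nothing quantum about it, and uses an arbitrary learning rate and problem size.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 8))              # inputs
        w_true = rng.normal(size=8)
        y = X @ w_true + 0.1 * rng.normal(size=500)

        w = np.zeros(8)
        eta = 0.01
        for epoch in range(50):
            for x_i, y_i in zip(X, y):
                err = y_i - w @ x_i
                w += eta * err * x_i               # Widrow-Hoff (LMS / delta-rule) update

        print("max |w - w_true|:", round(float(np.max(np.abs(w - w_true))), 3))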

  12. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Ammar Daskin

    2018-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flatting the eigenvalues. In infinity, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the pha...

  13. Correlation of neural activity with behavioral kinematics reveals distinct sensory encoding and evidence accumulation processes during active tactile sensing.

    Science.gov (United States)

    Delis, Ioannis; Dmochowski, Jacek P; Sajda, Paul; Wang, Qi

    2018-03-23

    Many real-world decisions rely on active sensing, a dynamic process for directing our sensors (e.g. eyes or fingers) across a stimulus to maximize information gain. Though ecologically pervasive, limited work has focused on identifying neural correlates of the active sensing process. In tactile perception, we often make decisions about an object/surface by actively exploring its shape/texture. Here we investigate the neural correlates of active tactile decision-making by simultaneously measuring electroencephalography (EEG) and finger kinematics while subjects interrogated a haptic surface to make perceptual judgments. Since sensorimotor behavior underlies decision formation in active sensing tasks, we hypothesized that the neural correlates of decision-related processes would be detectable by relating active sensing to neural activity. Novel brain-behavior correlation analysis revealed that three distinct EEG components, localizing to right-lateralized occipital cortex (LOC), middle frontal gyrus (MFG), and supplementary motor area (SMA), respectively, were coupled with active sensing as their activity significantly correlated with finger kinematics. To probe the functional role of these components, we fit their single-trial-couplings to decision-making performance using a hierarchical-drift-diffusion-model (HDDM), revealing that the LOC modulated the encoding of the tactile stimulus whereas the MFG predicted the rate of information integration towards a choice. Interestingly, the MFG disappeared from components uncovered from control subjects performing active sensing but not required to make perceptual decisions. By uncovering the neural correlates of distinct stimulus encoding and evidence accumulation processes, this study delineated, for the first time, the functional role of cortical areas in active tactile decision-making. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Nonlinear signal processing using neural networks: Prediction and system modelling

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A.; Farber, R.

    1987-06-01

    The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
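
    A much smaller stand-in for the report's prediction experiments: one-step-ahead prediction of a chaotic series (here the logistic map rather than the systems studied in the report), comparing a linear predictor with a small feedforward network fitted with scikit-learn.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        # chaotic series from the logistic map (a small stand-in for the chaotic time series)
        x = np.empty(2000)
        x[0] = 0.2
        for i in range(1999):
            x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

        X, y = x[:-1].reshape(-1, 1), x[1:]
        X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]

        lin = LinearRegression().fit(X_tr, y_tr)
        net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

        print("linear predictor R^2:", round(lin.score(X_te, y_te), 3))
        print("neural network  R^2:", round(net.score(X_te, y_te), 3))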

  15. Cognon Neural Model Software Verification and Hardware Implementation Design

    Science.gov (United States)

    Haro Negre, Pau

    Little is known yet about how the brain can recognize arbitrary sensory patterns within milliseconds using neural spikes to communicate information between neurons. In a typical brain there are several layers of neurons, with each neuron axon connecting to ~10^4 synapses of neurons in an adjacent layer. The information necessary for cognition is contained in these synapses, which strengthen during the learning phase in response to newly presented spike patterns. Continuing on the model proposed in "Models for Neural Spike Computation and Cognition" by David H. Staelin and Carl H. Staelin, this study seeks to understand cognition from an information theoretic perspective and develop potential models for artificial implementation of cognition based on neuronal models. To do so we focus on the mathematical properties and limitations of spike-based cognition consistent with existing neurological observations. We validate the cognon model through software simulation and develop concepts for an optical hardware implementation of a network of artificial neural cognons.

  16. Modeling polyvinyl chloride Plasma Modification by Neural Networks

    Science.gov (United States)

    Wang, Changquan

    2018-03-01

    A neural network model was constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using a uniform design. Discharge voltage, discharge gas gap and treatment time were used as the neural network input layer parameters. The measured values of the contact angle were used as the output layer parameters. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based upon the neural networks. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for discharge plasma surface modification analysis.
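
    A minimal sketch of the kind of model described, assuming a small multilayer-perceptron regressor mapping the three discharge parameters to contact angle. The surrogate data generated below are hypothetical; in the study the training points come from the uniform-design experiments.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical surrogate data: (discharge voltage kV, gas gap mm, treatment time s)
# -> water contact angle (deg). Real values would come from the uniform-design runs.
rng = np.random.default_rng(1)
X = rng.uniform([8, 1, 10], [20, 5, 120], size=(60, 3))
angle = 90 - 2.0 * X[:, 0] + 3.0 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 1.5, 60)

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                     random_state=0).fit(Xs, angle)
print("training R^2:", model.score(Xs, angle))
```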

  17. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations without the need for training. Nonlinear flight dynamic systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses the linear system knowledge to speed up the training process. The technique is tested on different flight/space dynamic models and showed promising results.
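
    The key point above, that a linear system can be captured by a network whose weights are obtained from one linear solve rather than iterative training, can be illustrated with an ordinary least-squares fit of a single linear layer. The dynamics matrix and data below are hypothetical stand-ins for a linearized flight-dynamics model.

```python
import numpy as np

# Hypothetical linear flight-dynamics relation: y = A x + b plus small noise.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [-0.2, 0.8]])
b_true = np.array([0.05, -0.03])

X = rng.normal(size=(200, 2))                       # state samples
Y = X @ A_true.T + b_true + 0.01 * rng.normal(size=(200, 2))

# Augment inputs with a constant to absorb the bias, then solve for W directly.
X_aug = np.hstack([X, np.ones((200, 1))])
W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)       # one linear solve, no training

print("recovered A:\n", W[:2].T)
print("recovered b:", W[2])
```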

  18. Stability of a neural predictive controller scheme on a neural model

    DEFF Research Database (Denmark)

    Luther, Jim Benjamin; Sørensen, Paul Haase

    2009-01-01

    In previous works presenting various forms of neural-network-based predictive controllers, the main emphasis has been on the implementation aspects, i.e. the development of a robust optimization algorithm for the controller, which will be able to perform in real time. However, the stability issue has not been addressed specifically for these controllers. On the other hand, a number of results concerning the stability of receding horizon controllers on a nonlinear system exist. In this paper we present a proof of stability for a predictive controller controlling a neural network model. The resulting controller is tested on a nonlinear pneumatic servo system.

  19. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    Science.gov (United States)

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.
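
    The claim that logistic/softmax units compute Bayesian posteriors exactly when biases are log priors and weights are log likelihoods can be checked numerically. The toy generative model below (three hypotheses, three binary features, made-up probabilities) is only an illustration of that identity, not the MIA model itself.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy generative model: 3 hypotheses (e.g. candidate words), 3 binary features.
prior = np.array([0.5, 0.3, 0.2])                 # P(hypothesis)
lik   = np.array([[0.9, 0.2, 0.1],                # P(feature_j = 1 | hypothesis_i)
                  [0.3, 0.8, 0.4],
                  [0.1, 0.2, 0.9]])
x = np.array([1, 0, 1])                           # observed features

# Explicit Bayes rule.
joint = prior * np.prod(np.where(x == 1, lik, 1 - lik), axis=1)
posterior_bayes = joint / joint.sum()

# "Neural" computation: bias = log prior, net input adds the log likelihood of
# the observed features, and a softmax unit normalizes.
net_input = np.log(prior) + (np.log(lik) * x + np.log(1 - lik) * (1 - x)).sum(axis=1)
posterior_softmax = softmax(net_input)

print(posterior_bayes, posterior_softmax)         # identical up to rounding
```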

  20. SCYNet. Testing supersymmetric models at the LHC with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip; Belkner, Sebastian; Hamer, Matthias [Universitaet Bonn, Bonn (Germany); Dercks, Daniel [Universitaet Hamburg, Hamburg (Germany); Keller, Tim; Kraemer, Michael; Sarrazin, Bjoern; Schuette-Engel, Jan; Tattersall, Jamie [RWTH Aachen University, Institute for Theoretical Particle Physics and Cosmology, Aachen (Germany)

    2017-10-15

    SCYNet (SUSY Calculating Yield Net) is a tool for testing supersymmetric models against LHC data. It uses neural network regression for a fast evaluation of the profile likelihood ratio. Two neural network approaches have been developed: one network has been trained using the parameters of the 11-dimensional phenomenological Minimal Supersymmetric Standard Model (pMSSM-11) as an input and evaluates the corresponding profile likelihood ratio within milliseconds. It can thus be used in global pMSSM-11 fits without time penalty. In the second approach, the neural network has been trained using model-independent signature-related objects, such as energies and particle multiplicities, which were estimated from the parameters of a given new physics model. (orig.)

  1. Neural Networks for Modeling and Control of Particle Accelerators

    Science.gov (United States)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  2. The effect of nonstationarity on models inferred from neural data

    International Nuclear Information System (INIS)

    Tyrcha, Joanna; Roudi, Yasser; Marsili, Matteo; Hertz, John

    2013-01-01

    Neurons subject to a common nonstationary input may exhibit a correlated firing behavior. Correlations in the statistics of neural spike trains also arise as the effect of interaction between neurons. Here we show that these two situations can be distinguished with machine learning techniques, provided that the data are rich enough. In order to do this, we study the problem of inferring a kinetic Ising model, stationary or nonstationary, from the available data. We apply the inference procedure to two data sets: one from salamander retinal ganglion cells and the other from a realistic computational cortical network model. We show that many aspects of the concerted activity of the salamander retinal neurons can be traced simply to the external input. A model of non-interacting neurons subject to a nonstationary external field outperforms a model with stationary input with couplings between neurons, even accounting for the differences in the number of model parameters. When couplings are added to the nonstationary model, for the retinal data, little is gained: the inferred couplings are generally not significant. Likewise, the distribution of the sizes of sets of neurons that spike simultaneously and the frequency of spike patterns as a function of their rank (Zipf plots) are well explained by an independent-neuron model with time-dependent external input, and adding connections to such a model does not offer significant improvement. For the cortical model data, robust couplings, well correlated with the real connections, can be inferred using the nonstationary model. Adding connections to this model slightly improves the agreement with the data for the probability of synchronous spikes but hardly affects the Zipf plot. (paper)
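
    A common way to perform the kind of inference described above is to fit each neuron's next state by logistic regression on the previous population state, which is the maximum-likelihood estimate for a (stationary) kinetic Ising model with a logistic transfer function. The sketch below does this on synthetic data; it is a simplified stand-in for the authors' procedure and omits the nonstationary external field.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, T = 10, 5000
J = rng.normal(0, 0.3, size=(N, N))               # ground-truth couplings

# Simulate a stationary kinetic Ising model: P(s_i(t+1)=1) = sigmoid(sum_j J_ij s_j(t)).
S = np.empty((T, N))
S[0] = rng.choice([-1.0, 1.0], size=N)
for t in range(T - 1):
    p = 1.0 / (1.0 + np.exp(-S[t] @ J.T))
    S[t + 1] = np.where(rng.random(N) < p, 1.0, -1.0)

# Infer each row of J by logistic regression of s_i(t+1) on s(t).
J_hat = np.empty_like(J)
for i in range(N):
    clf = LogisticRegression(C=10.0, max_iter=1000).fit(S[:-1], S[1:, i])
    J_hat[i] = clf.coef_[0]

print("correlation(true, inferred):",
      np.corrcoef(J.ravel(), J_hat.ravel())[0, 1])
```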

  3. The effect of nonstationarity on models inferred from neural data

    Energy Technology Data Exchange (ETDEWEB)

    Tyrcha, Joanna [Department of Mathematical Statistics, Stockholm University, SE-10691 Stockholm (Sweden); Roudi, Yasser [Kavli Institute for Systems Neuroscience, NTNU, NO-7010 Trondheim (Norway); Marsili, Matteo [The Abdus Salam ICTP, Strada Costiera 11, I-34151, Trieste (Italy); Hertz, John [Nordita, Royal Institute of Technology and Stockholm University, SE-106 91 Stockholm (Sweden)

    2013-03-01

    Neurons subject to a common nonstationary input may exhibit a correlated firing behavior. Correlations in the statistics of neural spike trains also arise as the effect of interaction between neurons. Here we show that these two situations can be distinguished with machine learning techniques, provided that the data are rich enough. In order to do this, we study the problem of inferring a kinetic Ising model, stationary or nonstationary, from the available data. We apply the inference procedure to two data sets: one from salamander retinal ganglion cells and the other from a realistic computational cortical network model. We show that many aspects of the concerted activity of the salamander retinal neurons can be traced simply to the external input. A model of non-interacting neurons subject to a nonstationary external field outperforms a model with stationary input with couplings between neurons, even accounting for the differences in the number of model parameters. When couplings are added to the nonstationary model, for the retinal data, little is gained: the inferred couplings are generally not significant. Likewise, the distribution of the sizes of sets of neurons that spike simultaneously and the frequency of spike patterns as a function of their rank (Zipf plots) are well explained by an independent-neuron model with time-dependent external input, and adding connections to such a model does not offer significant improvement. For the cortical model data, robust couplings, well correlated with the real connections, can be inferred using the nonstationary model. Adding connections to this model slightly improves the agreement with the data for the probability of synchronous spikes but hardly affects the Zipf plot. (paper)

  4. Modelling of word usage frequency dynamics using artificial neural network

    International Nuclear Information System (INIS)

    Maslennikova, Yu S; Bochkarev, V V; Voloskov, D S

    2014-01-01

    In this paper, a method for modelling word usage frequency time series is proposed. An artificial feedforward neural network was used to predict word usage frequencies. The neural network was trained using the maximum likelihood criterion. The Google Books Ngram corpus was used for the analysis. This database provides a large amount of data on the frequency of specific word forms for 7 languages. Statistical modelling of word usage frequency time series allows finding optimal fitting and filtering algorithms for subsequent lexicographic analysis and verification of frequency trend models.

  5. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion

  6. Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.

    Science.gov (United States)

    Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel

    2016-02-01

    Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.
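
    A minimal sketch of the lumped-circuit idea described above: a photocurrent pulse charges the electrode capacitance, which discharges through a Faradaic (leak) resistance and, optionally, a shunt resistor that speeds recovery between pulses. All component values are hypothetical, and the capacitance is treated as constant, unlike the voltage-dependent elements fitted in the paper.

```python
import numpy as np

def simulate_pixel(I_ph=1e-6, C=3e-9, R_far=50e6, R_shunt=None,
                   pulse=4e-3, period=20e-3, dt=1e-6, n_periods=3):
    """Forward-Euler simulation of the electrode voltage V(t) on one pixel.
    C dV/dt = I_photo(t) - V/R_far - V/R_shunt (shunt optional)."""
    t = np.arange(0, n_periods * period, dt)
    V = np.zeros_like(t)
    g_leak = 1.0 / R_far + (0.0 if R_shunt is None else 1.0 / R_shunt)
    for k in range(len(t) - 1):
        I = I_ph if (t[k] % period) < pulse else 0.0   # pulsed illumination
        V[k + 1] = V[k] + dt * (I - g_leak * V[k]) / C
    return t, V

t, V_open = simulate_pixel()
t, V_shunt = simulate_pixel(R_shunt=1e6)
print("residual voltage at end of last period (no shunt):  ", V_open[-1])
print("residual voltage at end of last period (with shunt):", V_shunt[-1])
```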

  7. Recurrent Neural Network Model for Constructive Peptide Design.

    Science.gov (United States)

    Müller, Alex T; Hiss, Jan A; Schneider, Gisbert

    2018-02-26

    We present a generative long short-term memory (LSTM) recurrent neural network (RNN) for combinatorial de novo peptide design. RNN models capture patterns in sequential data and generate new data instances from the learned context. Amino acid sequences represent a suitable input for these machine-learning models. Generative models trained on peptide sequences could therefore facilitate the design of bespoke peptide libraries. We trained RNNs with LSTM units on pattern recognition of helical antimicrobial peptides and used the resulting model for de novo sequence generation. Of these sequences, 82% were predicted to be active antimicrobial peptides compared to 65% of randomly sampled sequences with the same amino acid distribution as the training set. The generated sequences also lie closer to the training data than manually designed amphipathic helices. The results of this study showcase the ability of LSTM RNNs to construct new amino acid sequences within the applicability domain of the model and motivate their prospective application to peptide and protein design without the need for the exhaustive enumeration of sequence libraries.
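
    A schematic of the modeling approach, assuming a Keras LSTM trained for next-residue prediction and then sampled autoregressively. The "training peptides" below are random toy sequences standing in for the curated antimicrobial peptide set, so the generated output only demonstrates the mechanics, not the reported results.

```python
import numpy as np
import tensorflow as tf

# Character-level next-residue model over the 20 amino acids plus a start token.
AA = list("ACDEFGHIKLMNPQRSTVWY")
vocab = ["^"] + AA                      # "^" marks the start of a sequence
idx = {c: i for i, c in enumerate(vocab)}

# Toy stand-in for a curated set of helical antimicrobial peptides (hypothetical data).
rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(AA, size=18)) for _ in range(200)]

seqs = np.array([[idx["^"]] + [idx[c] for c in p] for p in peptides])
X, y = seqs[:, :-1], seqs[:, 1:]        # predict each residue from its prefix

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(vocab), output_dim=32),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.Dense(len(vocab), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Sample a new sequence one residue at a time from the learned distribution.
seq = [idx["^"]]
for _ in range(18):
    probs = model.predict(np.array([seq]), verbose=0)[0, -1].astype(np.float64)
    probs[idx["^"]] = 0.0               # never re-emit the start token
    seq.append(int(rng.choice(len(vocab), p=probs / probs.sum())))
print("generated peptide:", "".join(vocab[i] for i in seq[1:]))
```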

  8. MODELLING OF CONCENTRATION LIMITS BASED ON NEURAL NETWORKS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    We study a model for forecasting concentration limits based on the use of neural network technology. Software for the implementation of these models is described. The efficiency of the system is demonstrated on experimental material.

  9. A Quantum Implementation Model for Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ammar Daskin

    2018-02-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speedups over the conventional algorithms for eigenvalue-related problems. Combining quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow–Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms. Quanta 2018; 7: 7–18.

  10. Use of artificial neural networks for transport energy demand modeling

    International Nuclear Information System (INIS)

    Murat, Yetis Sazi; Ceylan, Halim

    2006-01-01

    The paper illustrates an artificial neural network (ANN) approach based on supervised neural networks for transport energy demand forecasting using socio-economic and transport-related indicators. The ANN transport energy demand model is developed. The actual forecast is obtained using a feed-forward neural network trained with the back-propagation algorithm. In order to investigate the influence of socio-economic indicators on the transport energy demand, the ANN is analyzed based on gross national product (GNP), population and the total annual average veh-km, along with historical energy data available from 1970 to 2001. Model validation is performed by comparing model predictions with energy data over the testing period. The projections are made with two scenarios. It is found that the ANN reflects the fluctuation in historical data for both dependent and independent variables. The results obtained bear out the suitability of the adopted methodology for the transport energy-forecasting problem.

  11. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.

  12. Fuzzy Entropy: Axiomatic Definition and Neural Networks Model

    Institute of Scientific and Technical Information of China (English)

    QING Ming; CAO Yue; HUANG Tian-min

    2004-01-01

    The measure of uncertainty is adopted as a measure of information. The measures of fuzziness are known as fuzzy information measures. The measure of the quantity of fuzzy information gained from a fuzzy set or fuzzy system is known as fuzzy entropy. Fuzzy entropy has been the focus of study by many researchers in various fields. In this paper, firstly, the axiomatic definition of fuzzy entropy is discussed. Then, a neural network model of fuzzy entropy is proposed, based on the computing capability of neural networks. In the end, two examples are discussed to show the efficiency of the model.

  13. Combining BMI stimulation and mathematical modeling for acute stroke recovery and neural repair

    Directory of Open Access Journals (Sweden)

    Sara L Gonzalez Andino

    2011-07-01

    Rehabilitation is a neural plasticity-exploiting approach that forces undamaged neural circuits to undertake the functionality of other circuits damaged by stroke. It aims at partial restoration of neural functions through circuit remodeling rather than the regeneration of damaged circuits. The core hypothesis of the present paper is that - in stroke - Brain Machine Interfaces can be designed to target neural repair instead of rehabilitation. To support this hypothesis we first review existing evidence on the role of endogenous or externally applied electric fields in all processes involved in CNS repair. We then describe our own results to illustrate the neuroprotective and neuroregenerative effects of BMI electrical stimulation on sensory deprivation-related degenerative processes of the CNS. Finally, we discuss three of the crucial issues involved in the design of neural repair-oriented BMIs: when to stimulate, where to stimulate and - the particularly important but unsolved issue of - how to stimulate. We argue that optimal parameters for the electrical stimulation can be determined by studying and modeling the dynamics of the electric fields that naturally emerge in the central and peripheral nervous system during spontaneous healing in both experimental animals and human patients. We conclude that a closed-loop BMI that defines the optimal stimulation parameters from a priori developed experimental models of the dynamics of spontaneous repair and the on-line monitoring of neural activity might place BMIs as an alternative or complement to stem-cell transplantation or pharmacological approaches, intensively pursued nowadays.

  14. Highly efficient simultaneous ultrasonic assisted adsorption of brilliant green and eosin B onto ZnS nanoparticles loaded activated carbon: Artificial neural network modeling and central composite design optimization

    Science.gov (United States)

    Jamshidi, M.; Ghaedi, M.; Dashtian, K.; Ghaedi, A. M.; Hajati, S.; Goudarzi, A.; Alipanahpour, E.

    2016-01-01

    In this work, central composite design (CCD) combined with response surface methodology (RSM) and a desirability function approach (DFA) was used to identify suitable operating conditions and to obtain useful information about the interactions and main effects of the variables involved in the simultaneous ultrasound-assisted removal of brilliant green (BG) and eosin B (EB) by zinc sulfide nanoparticles loaded on activated carbon (ZnS-NPs-AC). Spectral overlap between the BG and EB dyes was extensively reduced and/or removed by a derivative spectrophotometric method, while a multi-layer artificial neural network (ML-ANN) model trained with the Levenberg-Marquardt (LM) algorithm was used for building a predictive model of BG and EB removal. The ANN was able to forecast the simultaneous BG and EB removal efficiently, as confirmed by reasonable numerical values, i.e. an MSE of 0.0021 and R2 of 0.9589, and an MSE of 0.0022 and R2 of 0.9455, respectively, for the testing data set. The results reveal acceptable agreement between the experimental data and the ANN-predicted results. The Langmuir model was the best model for fitting the experimental data relevant to BG and EB removal and indicates a high and economically favorable adsorption capacity (258.7 and 222.2 mg g-1), which supports and confirms its applicability for wastewater treatment.

  15. A neural population model incorporating dopaminergic neurotransmission during complex voluntary behaviors.

    Directory of Open Access Journals (Sweden)

    Stefan Fürtinger

    2014-11-01

    Assessing brain activity during complex voluntary motor behaviors that require the recruitment of multiple neural sites is a field of active research. Our current knowledge is primarily based on human brain imaging studies that have clear limitations in terms of temporal and spatial resolution. We developed a physiologically informed non-linear multi-compartment stochastic neural model to simulate functional brain activity coupled with neurotransmitter release during complex voluntary behavior, such as speech production. Due to its state-dependent modulation of neural firing, dopaminergic neurotransmission plays a key role in the organization of functional brain circuits controlling speech and language and thus has been incorporated in our neural population model. A rigorous mathematical proof establishing existence and uniqueness of solutions to the proposed model as well as a computationally efficient strategy to numerically approximate these solutions are presented. Simulated brain activity during the resting state and sentence production was analyzed using functional network connectivity, and graph theoretical techniques were employed to highlight differences between the two conditions. We demonstrate that our model successfully reproduces characteristic changes seen in empirical data between the resting state and speech production, and dopaminergic neurotransmission evokes pronounced changes in modeled functional connectivity by acting on the underlying biological stochastic neural model. Specifically, model and data networks in both speech and rest conditions share task-specific network features: both the simulated and empirical functional connectivity networks show an increase in nodal influence and segregation in speech over the resting state. These commonalities confirm that dopamine is a key neuromodulator of the functional connectome of speech control. Based on reproducible characteristic aspects of empirical data, we suggest a number

  16. Similar patterns of neural activity predict memory function during encoding and retrieval.

    Science.gov (United States)

    Kragel, James E; Ezzyat, Youssef; Sperling, Michael R; Gorniak, Richard; Worrell, Gregory A; Berry, Brent M; Inman, Cory; Lin, Jui-Jui; Davis, Kathryn A; Das, Sandhitsu R; Stein, Joel M; Jobst, Barbara C; Zaghloul, Kareem A; Sheth, Sameer A; Rizzuto, Daniel S; Kahana, Michael J

    2017-07-15

    Neural networks that span the medial temporal lobe (MTL), prefrontal cortex, and posterior cortical regions are essential to episodic memory function in humans. Encoding and retrieval are supported by the engagement of both distinct neural pathways across the cortex and common structures within the medial temporal lobes. However, the degree to which memory performance can be determined by neural processing that is common to encoding and retrieval remains to be determined. To identify neural signatures of successful memory function, we administered a delayed free-recall task to 187 neurosurgical patients implanted with subdural or intraparenchymal depth electrodes. We developed multivariate classifiers to identify patterns of spectral power across the brain that independently predicted successful episodic encoding and retrieval. During encoding and retrieval, patterns of increased high frequency activity in prefrontal, MTL, and inferior parietal cortices, accompanied by widespread decreases in low frequency power across the brain predicted successful memory function. Using a cross-decoding approach, we demonstrate the ability to predict memory function across distinct phases of the free-recall task. Furthermore, we demonstrate that classifiers that combine information from both encoding and retrieval states can outperform task-independent models. These findings suggest that the engagement of a core memory network during either encoding or retrieval shapes the ability to remember the past, despite distinct neural interactions that facilitate encoding and retrieval. Copyright © 2017 Elsevier Inc. All rights reserved.
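
    The cross-decoding idea, training a classifier on encoding-phase spectral features and testing it on retrieval-phase features, can be sketched as follows. The synthetic data below simply build a shared "memory signature" into both phases; the feature construction, classifier choice and effect size are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_events, n_features = 400, 50            # events x (electrode, frequency) power features

# Successful events shift a common set of features in the same direction
# during both encoding and retrieval (the shared "memory signature").
signature = rng.normal(size=n_features)
labels_enc = rng.integers(0, 2, n_events)            # 1 = later recalled
labels_ret = rng.integers(0, 2, n_events)            # 1 = correct retrieval
X_enc = rng.normal(size=(n_events, n_features)) + 0.8 * np.outer(labels_enc, signature)
X_ret = rng.normal(size=(n_events, n_features)) + 0.8 * np.outer(labels_ret, signature)

clf = LogisticRegression(max_iter=1000).fit(X_enc, labels_enc)
print("within-state AUC (encoding, training data):",
      roc_auc_score(labels_enc, clf.decision_function(X_enc)))
print("cross-decoding AUC (train on encoding, test on retrieval):",
      roc_auc_score(labels_ret, clf.decision_function(X_ret)))
```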

  17. Modeling fMRI signals can provide insights into neural processing in the cerebral cortex.

    Science.gov (United States)

    Vanni, Simo; Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo

    2015-08-01

    Every stimulus or task activates multiple areas in the mammalian cortex. These distributed activations can be measured with functional magnetic resonance imaging (fMRI), which has the best spatial resolution among the noninvasive brain imaging methods. Unfortunately, the relationship between the fMRI activations and distributed cortical processing has remained unclear, both because the coupling between neural and fMRI activations has remained poorly understood and because fMRI voxels are too large to directly sense the local neural events. To get an idea of the local processing given the macroscopic data, we need models to simulate the neural activity and to provide output that can be compared with fMRI data. Such models can describe neural mechanisms as mathematical functions between input and output in a specific system, with little correspondence to physiological mechanisms. Alternatively, models can be biomimetic, including biological details with straightforward correspondence to experimental data. After careful balancing between complexity, computational efficiency, and realism, a biomimetic simulation should be able to provide insight into how biological structures or functions contribute to actual data processing as well as to promote theory-driven neuroscience experiments. This review analyzes the requirements for validating system-level computational models with fMRI. In particular, we study mesoscopic biomimetic models, which include a limited set of details from real-life networks and enable system-level simulations of neural mass action. In addition, we discuss how recent developments in neurophysiology and biophysics may significantly advance the modelling of fMRI signals. Copyright © 2015 the American Physiological Society.

  18. Artificial neural network model of pork meat cubes osmotic dehydratation

    Directory of Open Access Journals (Sweden)

    Pezo Lato L.

    2013-01-01

    Mass transfer of pork meat cubes (M. triceps brachii), shaped as 1x1x1 cm, during osmotic dehydration (OD) under atmospheric pressure was investigated in this paper. The effects of different parameters, such as the concentration of sugar beet molasses (60-80%, w/w), temperature (20-50ºC), and immersion time (1-5 h), in terms of water loss (WL), solid gain (SG), final dry matter content (DM), and water activity (aw), were investigated using experimental results. Five artificial neural network (ANN) models were developed for the prediction of WL, SG, DM, and aw in OD of pork meat cubes. These models were able to predict process outputs with coefficients of determination, r2, of 0.990 for SG, 0.985 for WL, 0.986 for aw, and 0.992 for DM compared to experimental measurements. The wide range of processing variables considered in the formulation of these models, and their easy implementation in a spreadsheet calculation, make them very useful and practical for process design and control.

  19. The effects of gratitude expression on neural activity.

    Science.gov (United States)

    Kini, Prathik; Wong, Joel; McInnis, Sydney; Gabana, Nicole; Brown, Joshua W

    2016-03-01

    Gratitude is a common aspect of social interaction, yet relatively little is known about the neural bases of gratitude expression, nor how gratitude expression may lead to longer-term effects on brain activity. To address these twin issues, we recruited subjects who coincidentally were entering psychotherapy for depression and/or anxiety. One group participated in a gratitude writing intervention, which required them to write letters expressing gratitude. The therapy-as-usual control group did not perform a writing intervention. After three months, subjects performed a "Pay It Forward" task in the fMRI scanner. In the task, subjects were repeatedly endowed with a monetary gift and then asked to pass it on to a charitable cause to the extent they felt grateful for the gift. Operationalizing gratitude as monetary gifts allowed us to engage the subjects and quantify the gratitude expression for subsequent analyses. We measured brain activity and found regions where activity correlated with self-reported gratitude experience during the task, even including related constructs such as guilt motivation and desire to help as statistical controls. These were mostly distinct from brain regions activated by empathy or theory of mind. Also, our between groups cross-sectional study found that a simple gratitude writing intervention was associated with significantly greater and lasting neural sensitivity to gratitude - subjects who participated in gratitude letter writing showed both behavioral increases in gratitude and significantly greater neural modulation by gratitude in the medial prefrontal cortex three months later. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Development of an artificial neural network model for risk assessment of skin sensitization using human cell line activation test, direct peptide reactivity assay, KeratinoSens™ and in silico structure alert parameter.

    Science.gov (United States)

    Hirota, Morihiko; Ashikaga, Takao; Kouzuki, Hirokazu

    2018-04-01

    It is important to predict the potential of cosmetic ingredients to cause skin sensitization, and in accordance with the European Union cosmetic directive for the replacement of animal tests, several in vitro tests based on the adverse outcome pathway have been developed for hazard identification, such as the direct peptide reactivity assay, KeratinoSens™ and the human cell line activation test. Here, we describe the development of an artificial neural network (ANN) prediction model for skin sensitization risk assessment based on the integrated testing strategy concept, using direct peptide reactivity assay, KeratinoSens™, human cell line activation test and an in silico or structure alert parameter. We first investigated the relationship between published murine local lymph node assay EC3 values, which represent skin sensitization potency, and in vitro test results using a panel of about 134 chemicals for which all the required data were available. Predictions based on ANN analysis using combinations of parameters from all three in vitro tests showed a good correlation with local lymph node assay EC3 values. However, when the ANN model was applied to a testing set of 28 chemicals that had not been included in the training set, predicted EC3s were overestimated for some chemicals. Incorporation of an additional in silico or structure alert descriptor (obtained with TIMES-M or Toxtree software) in the ANN model improved the results. Our findings suggest that the ANN model based on the integrated testing strategy concept could be useful for evaluating the skin sensitization potential. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Modeling of steam generator in nuclear power plant using neural network ensemble

    International Nuclear Information System (INIS)

    Lee, S. K.; Lee, E. C.; Jang, J. W.

    2003-01-01

    Neural networks are now being used in modeling the steam generator, which is known to be difficult due to its reverse dynamics. However, neural networks are prone to the problem of overfitting. This paper investigates the use of neural network combining methods to model steam generator water level and compares them with a single neural network. The results show that the neural network ensemble is an effective tool which can offer improved generalization, lower dependence on the training set and reduced training time.
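
    A generic illustration of the ensemble idea, assuming several multilayer perceptrons that differ only in random initialization and whose predictions are averaged. The noisy sine data stand in for the steam generator water-level data, which are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=200)      # noisy nonlinear plant
X_test = np.linspace(-3, 3, 300).reshape(-1, 1)
y_test = np.sin(X_test[:, 0])

# Train several networks that differ only in their random initialization,
# then average their predictions (a simple ensemble-combining rule).
members = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=4000,
                        random_state=seed).fit(X, y) for seed in range(7)]
preds = np.mean([m.predict(X_test) for m in members], axis=0)

single_rmse = np.sqrt(np.mean((members[0].predict(X_test) - y_test) ** 2))
ensemble_rmse = np.sqrt(np.mean((preds - y_test) ** 2))
print(f"single network RMSE: {single_rmse:.3f}  ensemble RMSE: {ensemble_rmse:.3f}")
```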

  2. Advanced models of neural networks nonlinear dynamics and stochasticity in biological neurons

    CERN Document Server

    Rigatos, Gerasimos G

    2015-01-01

    This book provides a complete study on neural structures exhibiting nonlinear and stochastic dynamics, elaborating on neural dynamics by introducing advanced models of neural networks. It overviews the main findings in the modelling of neural dynamics in terms of electrical circuits and examines their stability properties with the use of dynamical systems theory. It is suitable for researchers and postgraduate students engaged with neural networks and dynamical systems theory.

  3. A simple method for estimating the entropy of neural activity

    International Nuclear Information System (INIS)

    Berry II, Michael J; Tkačik, Gašper; Dubuis, Julien; Marre, Olivier; Da Silveira, Rava Azeredo

    2013-01-01

    The number of possible activity patterns in a population of neurons grows exponentially with the size of the population. Typical experiments explore only a tiny fraction of the large space of possible activity patterns in the case of populations with more than 10 or 20 neurons. It is thus impossible, in this undersampled regime, to estimate the probabilities with which most of the activity patterns occur. As a result, the corresponding entropy—which is a measure of the computational power of the neural population—cannot be estimated directly. We propose a simple scheme for estimating the entropy in the undersampled regime, which bounds its value from both below and above. The lower bound is the usual ‘naive’ entropy of the experimental frequencies. The upper bound results from a hybrid approximation of the entropy which makes use of the naive estimate, a maximum entropy fit, and a coverage adjustment. We apply our simple scheme to artificial data, in order to check their accuracy; we also compare its performance to those of several previously defined entropy estimators. We then apply it to actual measurements of neural activity in populations with up to 100 cells. Finally, we discuss the similarities and differences between the proposed simple estimation scheme and various earlier methods. (paper)
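
    The lower bound described above, the "naive" plug-in entropy of the observed pattern frequencies, is straightforward to compute. The sketch below applies it to synthetic independent spike trains; the maximum-entropy fit and coverage adjustment used for the upper bound are not reproduced.

```python
import numpy as np
from collections import Counter

def naive_entropy_bits(patterns):
    """Plug-in entropy of the observed activity patterns (rows = time bins,
    columns = neurons). This is the lower bound discussed above; in the
    undersampled regime it underestimates the true entropy."""
    counts = Counter(map(tuple, patterns))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

# Synthetic population: 20 neurons firing independently with probability 0.05 per bin.
rng = np.random.default_rng(0)
spikes = (rng.random((5000, 20)) < 0.05).astype(int)

print("naive entropy estimate: %.2f bits" % naive_entropy_bits(spikes))
print("true independent-neuron entropy: %.2f bits"
      % (20 * (-0.05 * np.log2(0.05) - 0.95 * np.log2(0.95))))
```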

  4. Modeling of methane emissions using artificial neural network approach

    Directory of Open Access Journals (Sweden)

    Stamenković Lidija J.

    2015-01-01

    The aim of this study was to develop a model for forecasting CH4 emissions at the national level, using Artificial Neural Networks (ANN) with broadly available sustainability, economic and industrial indicators as their inputs. ANN modeling was performed using two different types of architecture: a Backpropagation Neural Network (BPNN) and a General Regression Neural Network (GRNN). A conventional multiple linear regression (MLR) model was also developed in order to compare model performance and assess which model provides the best results. The ANN and MLR models were developed and tested using the same annual data for 20 European countries. The ANN model demonstrated very good performance, significantly better than the MLR model. It was shown that a forecast of CH4 emissions at the national level using the ANN model can be made successfully and accurately for a future period of up to two years, thereby opening the possibility of applying such a modeling technique to support the implementation of sustainable development strategies and environmental management policies. [Project of the Ministry of Science of the Republic of Serbia, no. 172007]

  5. Escherichia coli growth modeling using neural network | Shamsudin ...

    African Journals Online (AJOL)

    technique that has the ability to predict with efficient and good performance. Using NARX, a highly accurate model was developed to predict the growth of Escherichia coli (E. coli) based on pH water parameter. The multiparameter portable sensor and spectrophotometer data were used to build and train the neural network.

  6. A model of interval timing by neural integration.

    Science.gov (United States)

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
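
    The core mechanism can be sketched as a noisy firing-rate ramp integrated to a fixed threshold. Making the noise variance proportional to the ramp rate (Poisson-like spiking, in line with the model's assumptions) is what yields scale-invariant timing, i.e. a roughly constant coefficient of variation across target durations. The parameter values below are hypothetical.

```python
import numpy as np

def timed_response(ramp_rate, fano=0.02, threshold=1.0, dt=0.002, rng=None):
    """Noisy firing-rate ramp integrated to a fixed threshold; returns the
    crossing time. Noise variance grows with the ramp rate (Poisson-like)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while x < threshold:
        x += ramp_rate * dt + np.sqrt(fano * ramp_rate * dt) * rng.standard_normal()
        t += dt
    return t

rng = np.random.default_rng(0)
for target in (2.0, 4.0, 8.0):                     # timed intervals in seconds
    times = np.array([timed_response(1.0 / target, rng=rng) for _ in range(300)])
    print(f"target {target:.0f}s: mean={times.mean():.2f}s, "
          f"CV={times.std() / times.mean():.2f}")   # CV stays roughly constant
```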

  7. Introducing Artificial Neural Networks through a Spreadsheet Model

    Science.gov (United States)

    Rienzo, Thomas F.; Athappilly, Kuriakose K.

    2012-01-01

    Business students taking data mining classes are often introduced to artificial neural networks (ANN) through point and click navigation exercises in application software. Even if correct outcomes are obtained, students frequently do not obtain a thorough understanding of ANN processes. This spreadsheet model was created to illuminate the roles of…

  8. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  9. Bilingual Lexical Interactions in an Unsupervised Neural Network Model

    Science.gov (United States)

    Zhao, Xiaowei; Li, Ping

    2010-01-01

    In this paper we present an unsupervised neural network model of bilingual lexical development and interaction. We focus on how the representational structures of the bilingual lexicons can emerge, develop, and interact with each other as a function of the learning history. The results show that: (1) distinct representations for the two lexicons…

  10. Using artificial neural network approach for modelling rainfall–runoff ...

    Indian Academy of Sciences (India)

    Department of Civil Engineering, National Pingtung University of Science and Technology, Neipu Hsiang, Pingtung ... study, a model for estimating runoff by using rainfall data from a river basin is developed and a neural ... For example, 2009 typhoon Morakot in Pingtung ... Tokar and Markus (2000) applied ANN to predict.

  11. DAILY RAINFALL-RUNOFF MODELLING BY NEURAL NETWORKS ...

    African Journals Online (AJOL)

    K. Benzineb, M. Remaoun

    2016-09-01

    Sep 1, 2016 ... The hydrologic behaviour modelling of ... Ouahrane's basin from the rainfall-runoff relation, which is non-linear ... networks ... will allow checking the efficiency of formal neural networks for flow simulation in a semi-arid zone.

  12. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

    The aims of the present study were: to identify the factors which are able to explain the performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons) and to assess the neural network models' precision in predicting the performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network was used (multilayer perceptron with three neurons in a single hidden layer). The prognosis precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach in the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports

  13. Neural network modeling of a dolphin's sonar discrimination capabilities

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL

    1994-01-01

    The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time a...

  14. Pragmatic Bootstrapping: A Neural Network Model of Vocabulary Acquisition

    Science.gov (United States)

    Caza, Gregory A.; Knott, Alistair

    2012-01-01

    The social-pragmatic theory of language acquisition proposes that children only become efficient at learning the meanings of words once they acquire the ability to understand the intentions of other agents, in particular the intention to communicate (Akhtar & Tomasello, 2000). In this paper we present a neural network model of word learning which…

  15. Geometry of neural networks and models with singularities

    International Nuclear Information System (INIS)

    Fukumizu, Kenji

    2001-01-01

    This paper discusses maximum likelihood estimation with unidentifiability of parameters. Unidentifiability is formulated as a conic singularity of the model. It is known that the likelihood ratio may have unusually large order in unidentifiable cases. A sufficient condition for such large order is given and applied to neural networks

  16. Determination of the Corona model parameters with artificial neural networks

    International Nuclear Information System (INIS)

    Ahmet, Nayir; Bekir, Karlik; Arif, Hashimov

    2005-01-01

    Full text: The aim of this study is to calculate new model parameters taking into account the corona of electrical transmission line wires. For this purpose, a neural network model is proposed for modelling the corona frequency characteristics. This model was then compared with another model developed at the Polytechnic Institute of Saint Petersburg. The results of the development of the specified corona model, the calculation of its influence on wave processes in multi-wire lines, and the determination of its parameters are presented. Calculation equations are given for an electrical transmission line, with allowance for the skin effect in the ground and wires, with reference to the developed corona model.

  17. Statistical modelling of neural networks in γ-spectrometry applications

    International Nuclear Information System (INIS)

    Vigneron, V.; Martinez, J.M.; Morel, J.; Lepy, M.C.

    1995-01-01

    Layered neural networks, which are a class of models based on neural computation, are applied to the measurement of uranium enrichment, i.e. the isotope ratio 235U/(235U + 236U + 238U). The usual methods consider a limited number of γ-ray and X-ray peaks, and require previously calibrated instrumentation for each sample. In practice, however, the source-detector ensemble geometry conditions are critically different, thus a means of improving the above conventional methods is to reduce the region of interest: this is possible by focusing on the Kα X-ray region where the three elementary components are present. Real data are used to study the performance of the neural networks. Training is done with a maximum likelihood method to measure 235U and 238U quantities in infinitely thick samples. (authors). 18 refs., 6 figs., 3 tabs

  18. Neural network versus classical time series forecasting models

    Science.gov (United States)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANN) have an advantage in time series forecasting as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach which can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely seasonal autoregressive integrated moving average models, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used as data preprocessing.
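
    For reference, the three accuracy measures used in the comparison, and the Box-Cox preprocessing step, are sketched below on a hypothetical series standing in for the gold price data.

```python
import numpy as np
from scipy.stats import boxcox

def mad(actual, forecast):
    return np.mean(np.abs(actual - forecast))          # mean absolute deviation

def rmse(actual, forecast):
    return np.sqrt(np.mean((actual - forecast) ** 2))  # root mean square error

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))  # in percent

# Hypothetical gold-price-like series and a stand-in forecast.
rng = np.random.default_rng(0)
actual = 1200 + np.cumsum(rng.normal(0, 5, size=100))
forecast = actual + rng.normal(0, 8, size=100)

print("MAD :", mad(actual, forecast))
print("RMSE:", rmse(actual, forecast))
print("MAPE: %.2f%%" % mape(actual, forecast))

# Box-Cox preprocessing (requires positive data); lambda is estimated by MLE.
transformed, lam = boxcox(actual)
print("estimated Box-Cox lambda:", lam)
```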

  19. HIV lipodystrophy case definition using artificial neural network modelling

    DEFF Research Database (Denmark)

    Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew

    2003-01-01

    OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently superior performance to logistic regression in terms of specificity, overall accuracy and area under

  20. Time Multiplexed Active Neural Probe with 1356 Parallel Recording Sites

    Directory of Open Access Journals (Sweden)

    Bogdan C. Raducanu

    2017-10-01

    We present a high-electrode-density and high-channel-count CMOS (complementary metal-oxide-semiconductor) active neural probe containing 1344 neuron-sized recording pixels (20 µm × 20 µm) and 12 reference pixels (20 µm × 80 µm), densely packed on a 50 µm thick, 100 µm wide, and 8 mm long shank. The active electrodes or pixels consist of dedicated in-situ circuits for signal source amplification, which are located directly under each electrode. The probe supports the simultaneous recording of all 1356 electrodes with a sufficient signal-to-noise ratio for typical neuroscience applications. For enhanced performance, further noise reduction can be achieved while using half of the electrodes (678). Both of these numbers considerably surpass state-of-the-art active neural probes in both electrode count and number of recording channels. The measured input-referred noise in the action potential band is 12.4 µVrms while using 678 electrodes, with just 3 µW power dissipation per pixel and 45 µW per read-out channel (including data transmission).

  1. Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.

    Science.gov (United States)

    Kok, Peter; de Lange, Floris P

    2014-07-07

    An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Neural Networks Modelling of Municipal Real Estate Market Rent Rates

    Directory of Open Access Journals (Sweden)

    Muczyński Andrzej

    2016-12-01

    This paper presents the results of research on the application of neural network modelling to municipal real estate market rent rates. The test procedure was based on selected networks trained on local real estate market data and on the transformation of the detected dependencies – through the established models – to estimate the potential market rent rates of municipal premises. On this basis, the adequacy of the actual rent rates of municipal properties was assessed. Empirical research was conducted on the local real estate market of the city of Olsztyn in Poland. In order to describe the formation of market rent rates, a unidirectional (feed-forward) three-layer network and a radial basis function network were selected. Analyses showed a relatively low degree of convergence between the actual municipal rent rates and the potential market rent rates. This degree varied strongly depending on the type of business run on the property and its social and economic impact. The applied research methodology and the obtained results can be used to rationalize municipal property management, including the activation of rental policy.

  3. An approach to the interpretation of backpropagation neural network models in QSAR studies.

    Science.gov (United States)

    Baskin, I I; Ait, A O; Halberstam, N M; Palyulin, V A; Zefirov, N S

    2002-03-01

    An approach to the interpretation of backpropagation neural network models for quantitative structure-activity and structure-property relationships (QSAR/QSPR) studies is proposed. The method is based on analyzing the first and second moments of the distribution of the values of the first and second partial derivatives of neural network outputs with respect to inputs, calculated at the data points. The use of such statistics makes it possible not only to obtain essentially the same characteristics as for traditional "interpretable" statistical methods, such as linear regression analysis, but also to reveal important additional information regarding the non-linear character of QSAR/QSPR relationships. The approach is illustrated by an example of interpreting a backpropagation neural network model for predicting the position of the long-wave absorption band of cyanine dyes.
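
    The derivative-based interpretation idea above can be illustrated with a small stand-in network: the sketch below (hypothetical weights and synthetic descriptors, not the authors' model) estimates the first partial derivatives of the output with respect to each input at many data points and summarizes their mean and variance.

```python
# Minimal sketch of derivative-based interpretation: finite-difference estimates
# of dy/dx_i for a tiny one-hidden-layer regressor, summarized across data points.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)   # stand-in "trained" weights
w2, b2 = rng.normal(size=5), 0.1

def net(x):                                            # one-hidden-layer regressor
    return w2 @ np.tanh(W1 @ x + b1) + b2

def grad(x, eps=1e-5):                                 # central finite differences
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (net(x + d) - net(x - d)) / (2 * eps)
    return g

X = rng.normal(size=(100, 3))                          # synthetic descriptor values
G = np.array([grad(x) for x in X])
print("mean dy/dx_i :", G.mean(axis=0))                # average sensitivity per input
print("var  dy/dx_i :", G.var(axis=0))                 # hints at non-linearity
```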

  4. A continuous-time neural model for sequential action.

    Science.gov (United States)

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. Theories of Person Perception Predict Patterns of Neural Activity During Mentalizing.

    Science.gov (United States)

    Thornton, Mark A; Mitchell, Jason P

    2017-08-22

    Social life requires making inferences about other people. What information do perceivers spontaneously draw upon to make such inferences? Here, we test 4 major theories of person perception, and 1 synthetic theory that combines their features, to determine whether the dimensions of such theories can serve as bases for describing patterns of neural activity during mentalizing. While undergoing functional magnetic resonance imaging, participants made social judgments about well-known public figures. Patterns of brain activity were then predicted using feature encoding models that represented target people's positions on theoretical dimensions such as warmth and competence. All 5 theories of person perception proved highly accurate at reconstructing activity patterns, indicating that each could describe the informational basis of mentalizing. Cross-validation indicated that the theories robustly generalized across both targets and participants. The synthetic theory consistently attained the best performance (approximately two-thirds of noise-ceiling accuracy), indicating that, in combination, the theories considered here can account for much of the neural representation of other people. Moreover, encoding models trained on the present data could reconstruct patterns of activity associated with mental state representations in independent data, suggesting the use of a common neural code to represent others' traits and states. © The Author 2017. Published by Oxford University Press. All rights reserved.
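
    A stripped-down version of such a feature-encoding analysis can be sketched as follows; the dimension names, data sizes, and ridge penalty are illustrative assumptions, and the "activity patterns" are synthetic rather than fMRI data.

```python
# Minimal sketch of a feature-encoding analysis: predict a (synthetic) activity
# pattern for each target person from that person's position on hypothetical
# theory dimensions (e.g. warmth, competence), then test on held-out targets.
import numpy as np

rng = np.random.default_rng(2)
n_targets, n_dims, n_voxels = 60, 5, 200
F = rng.normal(size=(n_targets, n_dims))                # theory features per target
W_true = rng.normal(size=(n_dims, n_voxels))
Y = F @ W_true + rng.normal(scale=1.0, size=(n_targets, n_voxels))  # "patterns"

def ridge_fit(X, Y, lam=1.0):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

train, test = np.arange(0, 45), np.arange(45, 60)       # hold out some targets
W = ridge_fit(F[train], Y[train])
Y_hat = F[test] @ W
# pattern-reconstruction accuracy: correlation per held-out target
r = [np.corrcoef(Y_hat[i], Y[test][i])[0, 1] for i in range(len(test))]
print("mean reconstruction r:", np.mean(r))
```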

  6. A neural model of decision making

    OpenAIRE

    Larsen, Torben

    2008-01-01

    Background: A descriptive neuroeconomic model is aimed at relating the concept of economic man to empirical science. Method: A 4-level client-server-integrator model integrating the brain models of McLean and Luria is the general framework for the model of empirical findings. Results: Decision making relies on integration across brain levels of emotional intelligence (LU) and logico-mathematical intelligence (RIA), respectively. The integrated decision-making formula approaching zero by bot...

  7. A model for integrating elementary neural functions into delayed-response behavior.

    Directory of Open Access Journals (Sweden)

    Thomas Gisiger

    2006-04-01

    It is well established that various cortical regions can implement a wide array of neural processes, yet the mechanisms which integrate these processes into behavior-producing, brain-scale activity remain elusive. We propose that an important role in this respect might be played by executive structures controlling the traffic of information between the cortical regions involved. To illustrate this hypothesis, we present a neural network model comprising a set of interconnected structures harboring stimulus-related activity (visual representation, working memory, and planning), and a group of executive units with task-related activity patterns that manage the information flowing between them. The resulting dynamics allows the network to perform the dual task of either retaining an image during a delay (delayed-matching to sample task), or recalling from this image another one that has been associated with it during training (delayed-pair association task). The model reproduces behavioral and electrophysiological data gathered on the inferior temporal and prefrontal cortices of primates performing these same tasks. It also makes predictions on how neural activity coding for the recall of the image associated with the sample emerges and becomes prospective during the training phase. The network dynamics proves to be very stable against perturbations, and it exhibits signs of scale-invariant organization and cooperativity. The present network represents a possible neural implementation for active, top-down, prospective memory retrieval in primates. The model suggests that brain activity leading to performance of cognitive tasks might be organized in modular fashion, simple neural functions becoming integrated into more complex behavior by executive structures harbored in prefrontal cortex and/or basal ganglia.

  8. A model for integrating elementary neural functions into delayed-response behavior.

    Science.gov (United States)

    Gisiger, Thomas; Kerszberg, Michel

    2006-04-01

    It is well established that various cortical regions can implement a wide array of neural processes, yet the mechanisms which integrate these processes into behavior-producing, brain-scale activity remain elusive. We propose that an important role in this respect might be played by executive structures controlling the traffic of information between the cortical regions involved. To illustrate this hypothesis, we present a neural network model comprising a set of interconnected structures harboring stimulus-related activity (visual representation, working memory, and planning), and a group of executive units with task-related activity patterns that manage the information flowing between them. The resulting dynamics allows the network to perform the dual task of either retaining an image during a delay (delayed-matching to sample task), or recalling from this image another one that has been associated with it during training (delayed-pair association task). The model reproduces behavioral and electrophysiological data gathered on the inferior temporal and prefrontal cortices of primates performing these same tasks. It also makes predictions on how neural activity coding for the recall of the image associated with the sample emerges and becomes prospective during the training phase. The network dynamics proves to be very stable against perturbations, and it exhibits signs of scale-invariant organization and cooperativity. The present network represents a possible neural implementation for active, top-down, prospective memory retrieval in primates. The model suggests that brain activity leading to performance of cognitive tasks might be organized in modular fashion, simple neural functions becoming integrated into more complex behavior by executive structures harbored in prefrontal cortex and/or basal ganglia.

  9. Modeling of surface dust concentrations using neural networks and kriging

    Science.gov (United States)

    Buevich, Alexander G.; Medvedev, Alexander N.; Sergeev, Alexander P.; Tarasov, Dmitry A.; Shichkin, Andrey V.; Sergeeva, Marina V.; Atanasova, T. B.

    2016-12-01

    Creating models which are able to accurately predict the distribution of pollutants based on a limited set of input data is an important task in environmental studies. In this paper two neural approaches (multilayer perceptron (MLP) and generalized regression neural network (GRNN)) and two geostatistical approaches (kriging and cokriging) are used for modeling and forecasting of dust concentrations in snow cover. The area of study is under the influence of dust emissions from a copper quarry and several industrial companies. The two groups of approaches are compared. Three indices are used as indicators of the models' accuracy: the mean absolute error (MAE), root mean square error (RMSE) and relative root mean square error (RRMSE). Models based on artificial neural networks (ANN) showed better accuracy. Considering all indices, the most precise model was the GRNN, which uses the coordinates of the sampling points and the distance to the probable emission source as input parameters. The results confirm that a trained ANN may be a more suitable tool for modeling dust concentrations in snow cover.
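
    Of the models compared above, the GRNN is the simplest to write down: it is a kernel-weighted average of the training targets. The sketch below is a minimal illustration with made-up inputs (sampling-point coordinates plus a distance-to-source feature, as in the study), not the authors' implementation.

```python
# Minimal sketch of a generalized regression neural network (GRNN): a Gaussian
# kernel-weighted average of training targets, applied to synthetic dust data.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    preds = []
    for q in X_query:
        d2 = np.sum((X_train - q) ** 2, axis=1)        # squared distances to training points
        w = np.exp(-d2 / (2.0 * sigma ** 2))           # kernel weights
        preds.append(np.sum(w * y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(80, 3))                   # (x, y, distance-to-source), synthetic
y = 100.0 / (1.0 + X[:, 2]) + rng.normal(0, 2, 80)     # dust falls off with distance
Xq = rng.uniform(0, 10, size=(20, 3))
print(grnn_predict(X, y, Xq, sigma=1.0)[:5])
```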

  10. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  11. Probabilistic models for neural populations that naturally capture global coupling and criticality.

    Science.gov (United States)

    Humplik, Jan; Tkačik, Gašper

    2017-09-01

    Advances in multi-unit recordings pave the way for statistical modeling of activity patterns in large neural populations. Recent studies have shown that the summed activity of all neurons strongly shapes the population response. A separate recent finding has been that neural populations also exhibit criticality, an anomalously large dynamic range for the probabilities of different population activity patterns. Motivated by these two observations, we introduce a class of probabilistic models which takes into account the prior knowledge that the neural population could be globally coupled and close to critical. These models consist of an energy function which parametrizes interactions between small groups of neurons, and an arbitrary positive, strictly increasing, and twice differentiable function which maps the energy of a population pattern to its probability. We show that: 1) augmenting a pairwise Ising model with a nonlinearity yields an accurate description of the activity of retinal ganglion cells which outperforms previous models based on the summed activity of neurons; 2) prior knowledge that the population is critical translates to prior expectations about the shape of the nonlinearity; 3) the nonlinearity admits an interpretation in terms of a continuous latent variable globally coupling the system whose distribution we can infer from data. Our method is independent of the underlying system's state space; hence, it can be applied to other systems such as natural scenes or amino acid sequences of proteins which are also known to exhibit criticality.
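
    A minimal numerical sketch of this model class (with arbitrary placeholder parameters rather than fitted retinal data) is given below: a pairwise energy over binary population patterns is passed through a strictly increasing nonlinearity V before exponentiation, and the resulting distribution over population spike counts is computed by exhaustive enumeration.

```python
# Minimal sketch: P(x) proportional to exp(-V(E(x))) with a pairwise Ising-style
# energy E and a smooth, strictly increasing nonlinearity V. Parameters are random.
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
N = 8
h = rng.normal(0, 0.5, N)                        # biases
J = np.triu(rng.normal(0, 0.3, (N, N)), 1)       # pairwise couplings (upper triangle)

def energy(x):                                   # pairwise energy of a binary pattern
    return -(h @ x) - x @ J @ x

def V(E):                                        # strictly increasing, twice differentiable
    return E + 0.2 * np.logaddexp(0.0, E)        # E + 0.2 * softplus(E)

patterns = np.array(list(product([0, 1], repeat=N)), dtype=float)
logp = np.array([-V(energy(x)) for x in patterns])
p = np.exp(logp - logp.max()); p /= p.sum()      # normalize over all 2^N patterns

# distribution of the summed population activity K, shaped by the nonlinearity
K = patterns.sum(axis=1).astype(int)
print(np.bincount(K, weights=p, minlength=N + 1))
```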

  12. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits

    Science.gov (United States)

    2018-01-01

    Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. PMID:29408930

  13. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2018-02-01

    Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures (recurrent connections, shared feed-forward projections, and shared gain fluctuations) on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.
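
    The population-level quantities discussed above (mean firing rates and population-averaged pairwise covariances across repeated presentations) can be computed from trial-by-trial spike counts as in the following sketch, which uses synthetic Poisson counts with a shared gain fluctuation rather than the recorded auditory cortex data.

```python
# Minimal sketch: mean rate and population-averaged pairwise covariance per
# stimulus, from synthetic Poisson spike counts with a shared gain fluctuation.
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_neurons = 200, 50
for stim, base_rate in enumerate([2.0, 5.0, 10.0]):            # three stimuli (a.u.)
    gain = 1.0 + 0.2 * rng.standard_normal((n_trials, 1))      # shared gain fluctuation
    lam = np.clip(gain, 0.1, None) * base_rate
    counts = rng.poisson(lam, size=(n_trials, n_neurons))      # trials x neurons
    C = np.cov(counts, rowvar=False)                           # neurons x neurons
    off_diag = C[~np.eye(n_neurons, dtype=bool)]
    print(f"stimulus {stim}: mean rate={counts.mean():.2f}, "
          f"mean pairwise covariance={off_diag.mean():.3f}")
```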

  14. Copper is an endogenous modulator of neural circuit spontaneous activity.

    Science.gov (United States)

    Dodani, Sheel C; Firl, Alana; Chan, Jefferson; Nam, Christine I; Aron, Allegra T; Onak, Carl S; Ramos-Torres, Karla M; Paek, Jaeho; Webster, Corey M; Feller, Marla B; Chang, Christopher J

    2014-11-18

    For reasons that remain insufficiently understood, the brain requires among the highest levels of metals in the body for normal function. The traditional paradigm for this organ and others is that fluxes of alkali and alkaline earth metals are required for signaling, but transition metals are maintained in static, tightly bound reservoirs for metabolism and protection against oxidative stress. Here we show that copper is an endogenous modulator of spontaneous activity, a property of functional neural circuitry. Using Copper Fluor-3 (CF3), a new fluorescent Cu(+) sensor for one- and two-photon imaging, we show that neurons and neural tissue maintain basal stores of loosely bound copper that can be attenuated by chelation, which define a labile copper pool. Targeted disruption of these labile copper stores by acute chelation or genetic knockdown of the CTR1 (copper transporter 1) copper channel alters the spatiotemporal properties of spontaneous activity in developing hippocampal and retinal circuits. The data identify an essential role for copper in neuronal function and suggest broader contributions of this transition metal to cell signaling.

  15. Artificial Neural Network Based Model of Photovoltaic Cell

    Directory of Open Access Journals (Sweden)

    Messaouda Azzouzi

    2017-03-01

    This work concerns the modeling of a photovoltaic system and the prediction of the sensitivity of the electrical parameters (current, power) of six types of photovoltaic cells to the voltage applied between their terminals, using one of the best-known artificial intelligence techniques, namely Artificial Neural Networks. The results of the modeling and prediction are shown as a function of the number of iterations and for different learning algorithms, in order to obtain the best results.

  16. Neural network modeling of a dolphin's sonar discrimination capabilities

    OpenAIRE

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL; Nachtigall, PE; Roitblat, H.

    1994-01-01

    The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time and frequency information were used to model the dolphin discrimination capabilities. Echoes from the same cylinders were digitized using a broadband simulated dolphin sonar signal with the transducer ...

  17. Preparatory neural activity predicts performance on a conflict task.

    Science.gov (United States)

    Stern, Emily R; Wager, Tor D; Egner, Tobias; Hirsch, Joy; Mangels, Jennifer A

    2007-10-24

    Advance preparation has been shown to improve the efficiency of conflict resolution. Yet, with little empirical work directly linking preparatory neural activity to the performance benefits of advance cueing, it is not clear whether this relationship results from preparatory activation of task-specific networks, or from activity associated with general alerting processes. Here, fMRI data were acquired during a spatial Stroop task in which advance cues either informed subjects of the upcoming relevant feature of conflict stimuli (spatial or semantic) or were neutral. Informative cues decreased reaction time (RT) relative to neutral cues, and cues indicating that spatial information would be task-relevant elicited greater activity than neutral cues in multiple areas, including right anterior prefrontal and bilateral parietal cortex. Additionally, preparatory activation in bilateral parietal cortex and right dorsolateral prefrontal cortex predicted faster RT when subjects responded to spatial location. No regions were found to be specific to semantic cues at conventional thresholds, and lowering the threshold further revealed little overlap between activity associated with spatial and semantic cueing effects, thereby demonstrating a single dissociation between activations related to preparing a spatial versus semantic task-set. This relationship between preparatory activation of spatial processing networks and efficient conflict resolution suggests that advance information can benefit performance by leading to domain-specific biasing of task-relevant information.

  18. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    Directory of Open Access Journals (Sweden)

    Christian Nowke

    2018-06-01

    Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces, support a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed.

  19. Adaptive control using neural networks and approximate models.

    Science.gov (United States)

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
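
    The key property of the approximate models is that the predictor is linear in the current control input, roughly y(k+1) ≈ f(past) + g(past)·u(k), so the tracking control is obtained by inverting that relation. The sketch below illustrates this structural idea on a toy plant with hand-written stand-ins for the two networks f and g; it is not the paper's controller.

```python
# Minimal sketch of a linear-in-u approximate model used for control: choose
# u(k) = (r - f_hat(y)) / g_hat(y) so the predicted output tracks the reference.
# f_hat and g_hat are simple stand-ins for what would be trained neural networks.
import numpy as np

def plant(y, u):                       # toy nonlinear plant (not exactly known to the controller)
    return 0.8 * np.tanh(y) + 0.5 * u + 0.05 * np.sin(y)

def f_hat(y):                          # stand-in for the first network
    return 0.8 * np.tanh(y)

def g_hat(y):                          # stand-in for the second network
    return 0.5

y, r = 0.0, 1.0                        # initial output, constant reference
for k in range(8):
    u = (r - f_hat(y)) / g_hat(y)      # invert the linear-in-u approximate model
    y = plant(y, u)
    print(f"k={k}  u={u:+.3f}  y={y:+.3f}")
```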

  20. Artificial Neural Network L* from different magnetospheric field models

    Science.gov (United States)

    Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.

    2011-12-01

    The third adiabatic invariant L* plays an important role in modeling and understanding radiation belt dynamics. The popular way to obtain the L* value numerically follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique, artificial neural networks, which can compute the L* value in microseconds without losing much accuracy. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information needed to trace the drift shell is required. A series of currently popular empirical magnetic field models are applied to create the L* data pool, using 1 million data samples randomly selected within a solar cycle and within the global magnetosphere. The networks, trained on the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters valid within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and can therefore significantly improve the L* estimates. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle-long studies of radiation belt processes. However, neural networks trained on different magnetic field models can yield different L* values, which could cause misinterpretation of radiation belt dynamics, such as where the source of the radiation belt charged particles is and which mechanism is dominant in accelerating the particles. This calls for attention to choosing a magnetospheric field model for the L* calculation with care.

  1. What are the odds? The neural correlates of active choice during gambling

    Directory of Open Access Journals (Sweden)

    Bettina Studer

    2012-04-01

    Gambling is a widespread recreational activity and requires pitting the values of potential wins and losses against their probability of occurrence. Neuropsychological research showed that betting behavior on laboratory gambling tasks is highly sensitive to focal lesions of the ventromedial prefrontal cortex (vmPFC) and insula. In the current study, we assessed the neural basis of betting choices in healthy participants, using functional magnetic resonance imaging of the Roulette Betting Task. In half of the trials participants actively chose their bets; in the other half the computer dictated the bet size. Our results highlight the impact of volitional choice upon the neural substrates of gambling: neural activity in a distributed network - including key structures of the reward circuitry (midbrain, striatum) - was higher during active compared to computer-dictated bet selection. In line with neuropsychological data, the anterior insula and vmPFC were more activated during self-directed bet selection, and responses in these areas were differentially modulated by the odds of winning in the two choice conditions. In addition, responses in the vmPFC and ventral striatum were modulated by the bet size. Convergent with electrophysiological research in macaques, our results further implicate the inferior parietal cortex (IPC) in the processing of the likelihood of potential outcomes: neural responses in the IPC bilaterally reflected the probability of winning during bet selection. Moreover, the IPC was particularly sensitive to the odds of winning in the active choice condition, where this information was used to guide bet selection. Our results indicate a neglected role of the IPC in human decision-making under risk and help to integrate neuropsychological data on risk-taking following vmPFC and insula damage with models of choice derived from human neuroimaging and monkey electrophysiology.

  2. Hand Posture Prediction Using Neural Networks within a Biomechanical Model

    Directory of Open Access Journals (Sweden)

    Marta C. Mora

    2012-10-01

    This paper proposes the use of artificial neural networks (ANNs) in the framework of a biomechanical hand model for grasping. ANNs enhance the model capabilities as they substitute estimated data for the experimental inputs required by the grasping algorithm used. These inputs are the tentative grasping posture and the most open posture during grasping. As a consequence, more realistic grasping postures are predicted by the grasping algorithm, along with the contact information required by the dynamic biomechanical model (contact points and normals). Several neural network architectures are tested and compared in terms of prediction errors, leading to encouraging results. The performance of the overall proposal is also shown through simulation, where a grasping experiment is replicated and compared to the real grasping data collected by a data glove device.

  3. Hierarchical modeling of molecular energies using a deep neural network

    Science.gov (United States)

    Lubbers, Nicholas; Smith, Justin S.; Barros, Kipton

    2018-06-01

    We introduce the Hierarchically Interacting Particle Neural Network (HIP-NN) to model molecular properties from datasets of quantum calculations. Inspired by a many-body expansion, HIP-NN decomposes properties, such as energy, as a sum over hierarchical terms. These terms are generated from a neural network—a composition of many nonlinear transformations—acting on a representation of the molecule. HIP-NN achieves the state-of-the-art performance on a dataset of 131k ground state organic molecules and predicts energies with 0.26 kcal/mol mean absolute error. With minimal tuning, our model is also competitive on a dataset of molecular dynamics trajectories. In addition to enabling accurate energy predictions, the hierarchical structure of HIP-NN helps to identify regions of model uncertainty.

  4. Social power and approach-related neural activity.

    Science.gov (United States)

    Boksem, Maarten A S; Smolders, Ruud; De Cremer, David

    2012-06-01

    It has been argued that power activates a general tendency to approach whereas powerlessness activates a tendency to inhibit. The assumption is that elevated power involves reward-rich environments, freedom and, as a consequence, triggers an approach-related motivational orientation and attention to rewards. In contrast, reduced power is associated with increased threat, punishment and social constraint and thereby activates inhibition-related motivation. Moreover, approach motivation has been found to be associated with increased relative left-sided frontal brain activity, while withdrawal motivation has been associated with increased right sided activations. We measured EEG activity while subjects engaged in a task priming either high or low social power. Results show that high social power is indeed associated with greater left-frontal brain activity compared to low social power, providing the first neural evidence for the theory that high power is associated with approach-related motivation. We propose a framework accounting for differences in both approach motivation and goal-directed behaviour associated with different levels of power.

  5. Dynamic neural network models of the premotoneuronal circuitry controlling wrist movements in primates.

    Science.gov (United States)

    Maier, M A; Shupe, L E; Fetz, E E

    2005-10-01

    Dynamic recurrent neural networks were derived to simulate neuronal populations generating bidirectional wrist movements in the monkey. The models incorporate anatomical connections of cortical and rubral neurons, muscle afferents, segmental interneurons and motoneurons; they also incorporate the response profiles of four populations of neurons observed in behaving monkeys. The networks were derived by gradient descent algorithms to generate the eight characteristic patterns of motor unit activations observed during alternating flexion-extension wrist movements. The resulting model generated the appropriate input-output transforms and developed connection strengths resembling those in physiological pathways. We found that this network could be further trained to simulate additional tasks, such as experimentally observed reflex responses to limb perturbations that stretched or shortened the active muscles, and scaling of response amplitudes in proportion to inputs. In the final comprehensive network, motor units are driven by the combined activity of cortical, rubral, spinal and afferent units during step tracking and perturbations. The model displayed many emergent properties corresponding to physiological characteristics. The resulting neural network provides a working model of premotoneuronal circuitry and elucidates the neural mechanisms controlling motoneuron activity. It also predicts several features to be experimentally tested, for example the consequences of eliminating inhibitory connections in cortex and red nucleus. It also reveals that co-contraction can be achieved by simultaneous activation of the flexor and extensor circuits without invoking features specific to co-contraction.

  6. Neural Machine Translation with Recurrent Attention Modeling

    OpenAIRE

    Yang, Zichao; Hu, Zhiting; Deng, Yuntian; Dyer, Chris; Smola, Alex

    2016-01-01

    Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relat...

  7. Functional Modeling of Neural-Glia Interaction

    DEFF Research Database (Denmark)

    Postnov, D.E.; Brazhe, N.A.; Sosnovtseva, Olga

    2012-01-01

    Functional modeling is an approach that focuses on the representation of the qualitative dynamics of the individual components (e.g. cells) of a system and on the structure of the interaction network.

  8. A neural model of mechanisms of empathy deficits in narcissism

    Science.gov (United States)

    Jankowiak-Siuda, Kamila; Zajkowski, Wojciech

    2013-01-01

    From a multidimensional perspective, empathy is a process that includes affective sharing and imagining and understanding the emotions of others. The primary brain structures involved in mediating the components of empathy are the anterior insula (AI), the anterior cingulate cortex (ACC), and specific regions of the medial prefrontal cortex (MPFC). The AI and ACC are the main nodes in the salience network (SN), which selects and coordinates the information flow from the intero- and exteroreceptors. AI might play a role as a crucial hub – a dynamic switch between 2 separate networks of cognitive processing: the central executive network (CEN), which is concerned with effective task execution, and the default mode network (DMN), which is involved with self-reflective processes. Given various classifications, a deficit in empathy may be considered a central dysfunctional trait in narcissism. A recent fMRI study suggests that deficit in empathy is due to a dysfunction in the right AI. Based on the acquired data, we propose a theoretical model of imbalanced SN functioning in narcissism in which the dysfunctional AI hub is responsible for constant DMN activation, which, in turn, centers one’s attention on the self. This might hinder the ability to affectively share and understand the emotions of others. This review paper on neural mechanisms of empathy deficits in narcissism aims to inspire and direct future research in this area. PMID:24189465

  9. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
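
    As a loose illustration of the "neural activity as sampling" idea only, the sketch below runs plain Gibbs sampling over binary units with sigmoidal update probabilities, so that the long-run statistics of the "spiking" states follow a Boltzmann distribution. Note that the paper itself argues that plain Gibbs sampling is inconsistent with spiking dynamics and instead constructs non-reversible chains with refractory variables; this sketch is only the simplest stand-in.

```python
# Minimal sketch: stochastic binary "neurons" whose asynchronous updates perform
# Gibbs sampling from a Boltzmann distribution defined by symmetric weights W.
import numpy as np

rng = np.random.default_rng(6)
N = 10
W = rng.normal(0, 0.5, (N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.normal(0, 0.2, N)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

z = rng.integers(0, 2, N).astype(float)          # current network state
samples = []
for t in range(20000):
    k = rng.integers(N)                          # pick one neuron at random
    u_k = W[k] @ z + b[k]                        # its membrane-like potential
    z[k] = float(rng.random() < sigmoid(u_k))    # stochastic "spike" = Gibbs update
    samples.append(z.copy())

samples = np.array(samples[5000:])               # discard burn-in
print("sampled firing probabilities:", samples.mean(axis=0).round(2))
```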

  10. Data acquisition in modeling using neural networks and decision trees

    Directory of Open Access Journals (Sweden)

    R. Sika

    2011-04-01

    The paper presents a comparison of selected models from the areas of artificial neural networks and decision trees in relation to the actual conditions of foundry processes. The work contains short descriptions of the algorithms used, their purpose and the method of data preparation, which is the domain of Data Mining systems. The first part concerns data acquisition carried out in a selected iron foundry, indicating the problems to be solved with respect to casting process modeling. The second part is a comparison of selected algorithms: a decision tree and an artificial neural network, that is, the CART (Classification And Regression Trees) and BP (Backpropagation) in MLP (Multilayer Perceptron) network algorithms. The aim of the paper is to show aspects of selecting data for modeling, cleaning it and reducing it, for example due to too strong a correlation between some of the recorded process parameters. It is also shown what results can be obtained using two different approaches: first, modeling with available commercial software, for example Statistica; second, modeling step by step in an Excel spreadsheet based on the same algorithm, i.e. BP-MLP. The discrepancy between the results obtained from these two approaches originates from a priori assumptions. The aforementioned universal Statistica software package, when used without awareness of the relations between technological parameters, i.e. without the user having experience in foundry practice and without ranking particular parameters on the basis of the acquisition, cannot give a credible basis for predicting the quality of the castings. A decisive influence of the data acquisition method has also been clearly indicated; the acquisition should be conducted according to repetitive measurement and control procedures. This paper is based on about 250 records of actual data, for one assortment over a 6 month period, of which only 12 data sets were complete (including two that were used for validation of the neural network) and useful for creating a model. It is definitely too

  11. Forecasting macroeconomic variables using neural network models and three automated model selection techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2016-01-01

    When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. To alleviate the problem, White (2006) presented a solution (QuickNet) that conv...

  12. Model of Cholera Forecasting Using Artificial Neural Network in Chabahar City, Iran

    Directory of Open Access Journals (Sweden)

    Zahra Pezeshki

    2016-02-01

    Background: Cholera remains an endemic disease and a health issue in Iran despite a decrease in incidence. Since forecasting epidemic diseases supports appropriate preventive action against disease spread, different forecasting methods, including artificial neural networks, have been developed to study the parameters involved in the incidence and spread of epidemic diseases such as cholera. Objectives: In this study, cholera in the rural area of Chabahar, Iran was investigated to obtain a proper forecasting model. Materials and Methods: Data on cholera were gathered from 465 villages, of which 104 reported cholera during the ten-year study period. Logistic regression modeling and bivariate correlation were used to determine risk factors and obtain a possible predictive model. A one-hidden-layer perceptron neural network with a backpropagation training algorithm and the sigmoid activation function was trained and tested on the two groups of infected and non-infected villages after preprocessing. The ROC diagram was used to determine the validity of the prediction. The study variables included climate conditions and geographical parameters. Results: After determining the significant variables for cholera incidence, the described artificial neural network model was capable of forecasting cholera events among the villages of the test group with an accuracy of up to 80%. The highest accuracy was achieved when the model was trained with the variables that were significant in the statistical analysis, indicating that the two methods confirm each other's results. Conclusions: Application of artificial neural networks assists in forecasting cholera and in adopting protective measures. For a more accurate prediction, comprehensive information is required, including data on hygienic, social and demographic parameters.
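
    A minimal sketch of this pipeline on synthetic data is given below: a one-hidden-layer perceptron with sigmoid activations separates "infected" from "non-infected" villages and is validated with the area under the ROC curve. The feature names, class balance, and network size are assumptions standing in for the study's climatic and geographical variables.

```python
# Minimal sketch (synthetic data): one-hidden-layer perceptron with sigmoid
# activations, validated via ROC/AUC, mimicking the village classification setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 465
X = rng.normal(size=(n, 4))            # placeholders: e.g. temperature, rainfall, altitude, distance to water
logit = 1.5 * X[:, 1] - 1.0 * X[:, 3] + rng.normal(0, 1, n)
y = (logit > np.quantile(logit, 1 - 104 / 465)).astype(int)   # ~104 "infected" villages

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(scaler.transform(X_te))[:, 1])
print(f"test AUC: {auc:.2f}")
```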

  13. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.

  14. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  15. A neural model of decision making

    DEFF Research Database (Denmark)

    Larsen, Torben

    2008-01-01

    of Daniel Defoe. However, in the last decade neuroimaging technologies have become so sensitive that the activity of small groups of nerve cells can be detected, e.g. by functional magnetic resonance imaging (fMRI). fMRI tracks blood flow in the brain using changes in magnetic properties due...... and inhibitory processes, and EEG is still useful for research as a broader and more direct measure of brain activity. Against this background, a new interdisciplinary field linking behavioural economics and neuroscience into a neuroeconomic discipline emerges. Recent reviews of neuroeconomics represent a platform...... as subjective/behavioural rather than neurophysiological, with 'black boxes' that are difficult to falsify for further development. In client-server programs the client requests services from the server, which responds to the request. As the brain contains multiple specialized servers, an integrator is required...

  16. A Pruning Neural Network Model in Credit Classification Analysis

    Directory of Open Access Journals (Sweden)

    Yajiao Tang

    2018-01-01

    Nowadays, credit classification models are widely applied because they can help financial decision-makers to handle credit classification issues. Among them, artificial neural networks (ANNs) have been widely accepted as convincing methods in the credit industry. In this paper, we propose a pruning neural network (PNN) and apply it to solve the credit classification problem using the well-known Australian and Japanese credit datasets. The model is inspired by the synaptic nonlinearity of a dendritic tree in a biological neural model, and it is trained by an error back-propagation algorithm. The model is capable of realizing a neuronal pruning function by removing superfluous synapses and useless dendrites, and it forms a tidy dendritic morphology at the end of learning. Furthermore, we utilize logic circuits (LCs) to simulate the dendritic structures successfully, which allows the PNN to be implemented effectively in hardware. The statistical results of our experiments have verified that the PNN obtains superior performance in comparison with other classical algorithms in terms of accuracy and computational efficiency.
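
    The PNN's pruning operates on the synapses and dendrites of a dendritic-neuron model; as a loose, generic illustration of the pruning idea only (not the paper's method), the sketch below trains an ordinary network on synthetic two-class data of the same shape as the Australian credit dataset, zeroes out the smallest-magnitude weights, and re-checks accuracy.

```python
# Generic magnitude-based pruning sketch (a simplification, not the PNN itself):
# train an MLP on synthetic "credit" data, prune half the weights, re-evaluate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=690, n_features=14, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("accuracy before pruning:", clf.score(X_te, y_te))

for W in clf.coefs_:                              # zero out the smallest 50% of weights
    thresh = np.quantile(np.abs(W), 0.5)
    W[np.abs(W) < thresh] = 0.0
print("accuracy after pruning :", clf.score(X_te, y_te))
```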

  17. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.

  18. A model of microsaccade-related neural responses induced by short-term depression in thalamocortical synapses

    Directory of Open Access Journals (Sweden)

    Wujie Yuan

    2013-04-01

    Microsaccades during fixation have been suggested to counteract visual fading. Recent experiments have also observed microsaccade-related neural responses from cellular record, scalp electroencephalogram (EEG), and functional magnetic resonance imaging (fMRI). The underlying mechanism, however, is not yet understood and highly debated. It has been proposed that the neural activity of primary visual cortex (V1) is a crucial component for counteracting visual adaptation. In this paper, we use computational modeling to investigate how short-term depression (STD) in thalamocortical synapses might affect the neural responses of V1 in the presence of microsaccades. Our model not only gives a possible synaptic explanation for microsaccades in counteracting visual fading, but also reproduces several features in experimental findings. These modeling results suggest that STD in thalamocortical synapses plays an important role in microsaccade-related neural responses and the model may be useful for further investigation of behavioral properties and functional roles of microsaccades.

  19. A model of microsaccade-related neural responses induced by short-term depression in thalamocortical synapses

    Science.gov (United States)

    Yuan, Wu-Jie; Dimigen, Olaf; Sommer, Werner; Zhou, Changsong

    2013-01-01

    Microsaccades during fixation have been suggested to counteract visual fading. Recent experiments have also observed microsaccade-related neural responses from cellular record, scalp electroencephalogram (EEG), and functional magnetic resonance imaging (fMRI). The underlying mechanism, however, is not yet understood and highly debated. It has been proposed that the neural activity of primary visual cortex (V1) is a crucial component for counteracting visual adaptation. In this paper, we use computational modeling to investigate how short-term depression (STD) in thalamocortical synapses might affect the neural responses of V1 in the presence of microsaccades. Our model not only gives a possible synaptic explanation for microsaccades in counteracting visual fading, but also reproduces several features in experimental findings. These modeling results suggest that STD in thalamocortical synapses plays an important role in microsaccade-related neural responses and the model may be useful for further investigation of behavioral properties and functional roles of microsaccades. PMID:23630494
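
    The STD mechanism invoked above can be illustrated with a generic Tsodyks-Markram-style resource variable (a simplification, not the paper's thalamocortical circuit): sustained presynaptic firing depletes the synaptic resource, and a pause in the input, such as the transient change caused by a microsaccade, lets it recover.

```python
# Minimal sketch of a generic short-term depression (STD) synapse: a resource
# variable x is depleted by presynaptic spikes and recovers between bursts.
import numpy as np

dt, tau_rec, U = 1.0, 200.0, 0.3          # ms, recovery time constant, release fraction
T = 1000
x = 1.0                                   # available synaptic resource
rate = np.where((np.arange(T) % 400) < 300, 0.05, 0.0)   # spikes/ms: bursts with pauses

rng = np.random.default_rng(8)
trace = []
for t in range(T):
    spike = rng.random() < rate[t] * dt
    if spike:
        x -= U * x                        # depression: a fraction of the resource is used
    x += (1.0 - x) / tau_rec * dt         # recovery toward 1
    trace.append(x)

print("resource at end of burst:", round(trace[299], 2),
      "| after pause:", round(trace[399], 2))
```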

  20. Weak correlations between hemodynamic signals and ongoing neural activity during the resting state

    Science.gov (United States)

    Winder, Aaron T.; Echagarruga, Christina; Zhang, Qingguang; Drew, Patrick J.

    2017-01-01

    Spontaneous fluctuations in hemodynamic signals in the absence of a task or overt stimulation are used to infer neural activity. We tested this coupling by simultaneously measuring neural activity and changes in cerebral blood volume (CBV) in the somatosensory cortex of awake, head-fixed mice during periods of true rest, and during whisker stimulation and volitional whisking. Here we show that neurovascular coupling was similar across states, and large spontaneous CBV changes in the absence of sensory input were driven by volitional whisker and body movements. Hemodynamic signals during periods of rest were weakly correlated with neural activity. Spontaneous fluctuations in CBV and vessel diameter persisted when local neural spiking and glutamatergic input was blocked, and during blockade of noradrenergic receptors, suggesting a non-neuronal origin for spontaneous CBV fluctuations. Spontaneous hemodynamic signals reflect a combination of behavior, local neural activity, and putatively non-neural processes. PMID:29184204

  1. Hysteretic recurrent neural networks: a tool for modeling hysteretic materials and systems

    International Nuclear Information System (INIS)

    Veeramani, Arun S; Crews, John H; Buckner, Gregory D

    2009-01-01

    This paper introduces a novel recurrent neural network, the hysteretic recurrent neural network (HRNN), that is ideally suited to modeling hysteretic materials and systems. This network incorporates a hysteretic neuron consisting of conjoined sigmoid activation functions. Although similar hysteretic neurons have been explored previously, the HRNN is unique in its utilization of simple recurrence to 'self-select' relevant activation functions. Furthermore, training is facilitated by placing the network weights on the output side, allowing standard backpropagation of error training algorithms to be used. We present two- and three-phase versions of the HRNN for modeling hysteretic materials with distinct phases. These models are experimentally validated using data collected from shape memory alloys and ferromagnetic materials. The results demonstrate the HRNN's ability to accurately generalize hysteretic behavior with a relatively small number of neurons. Additional benefits lie in the network's ability to identify statistical information concerning the macroscopic material by analyzing the weights of the individual neurons
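
    The sketch below illustrates the core idea of a hysteretic neuron built from two conjoined sigmoid activation functions, with the previous output used to 'self-select' the active branch; the gain and width values are arbitrary, and the full HRNN training scheme is not reproduced.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hysteretic_neuron(u, y_prev, width=2.0, gain=4.0):
    """Use the ascending sigmoid branch when the previous output was low, the descending one when high."""
    ascending = sigmoid(gain * (u - width / 2))
    descending = sigmoid(gain * (u + width / 2))
    return descending if y_prev > 0.5 else ascending

# Drive the neuron with a triangular input sweep and observe the hysteresis loop.
u_up = np.linspace(-3, 3, 100)
u_sweep = np.concatenate([u_up, u_up[::-1]])
y, outputs = 0.0, []
for u in u_sweep:
    y = hysteretic_neuron(u, y)
    outputs.append(y)

print("output near u = 0 on the ascending sweep: ", round(outputs[50], 3))
print("output near u = 0 on the descending sweep:", round(outputs[149], 3))
```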

  2. Delta Learning Rule for the Active Sites Model

    OpenAIRE

    Lingashetty, Krishna Chaithanya

    2010-01-01

    This paper reports the results on methods of comparing the memory retrieval capacity of the Hebbian neural network which implements the B-Matrix approach, by using the Widrow-Hoff rule of learning. We then extend the recently proposed Active Sites model by developing a delta rule to increase memory capacity. Also, this paper extends the binary neural network to a multi-level (non-binary) neural network.

  3. Role of SDF1/CXCR4 Interaction in Experimental Hemiplegic Models with Neural Cell Transplantation

    Directory of Open Access Journals (Sweden)

    Noboru Suzuki

    2012-02-01

    Full Text Available Much attention has been focused on neural cell transplantation because of its promising clinical applications. We have reported that embryonic stem (ES) cell derived neural stem/progenitor cell transplantation significantly improved motor functions in a hemiplegic mouse model. It is important to understand the molecular mechanisms governing neural regeneration of the damaged motor cortex after the transplantation. Recent investigations disclosed that chemokines participated in the regulation of migration and maturation of neural cell grafts. In this review, we summarize the involvement of inflammatory chemokines including stromal cell derived factor 1 (SDF1) in neural regeneration after ES cell derived neural stem/progenitor cell transplantation in mouse stroke models.

  4. Modeling of light absorption in tissue during infrared neural stimulation

    Science.gov (United States)

    Thompson, Alexander C.; Wade, Scott A.; Brown, William G. A.; Stoddart, Paul R.

    2012-07-01

    A Monte Carlo model has been developed to simulate light transport and absorption in neural tissue during infrared neural stimulation (INS). A range of fiber core sizes and numerical apertures are compared illustrating the advantages of using simulations when designing a light delivery system. A range of wavelengths, commonly used for INS, are also compared for stimulation of nerves in the cochlea, in terms of both the energy absorbed and the change in temperature due to a laser pulse. Modeling suggests that a fiber with core diameter of 200 μm and NA=0.22 is optimal for optical stimulation in the geometry used and that temperature rises in the spiral ganglion neurons are as low as 0.1°C. The results show a need for more careful experimentation to allow different proposed mechanisms of INS to be distinguished.
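
    A toy Monte Carlo sketch in the same spirit follows: photons are launched along the fibre axis, interaction depths are sampled from an exponential free-path distribution, and absorption events are histogrammed. The optical coefficients are placeholders rather than the tissue values used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def absorbed_depth_profile(n_photons=100_000, mu_a=3.0, mu_s=1.0, depth_bins=50, max_depth=2.0):
    """Launch photons, sample interaction depths (cm), and histogram where absorption occurs."""
    mu_t = mu_a + mu_s
    # Free path length to the next interaction, sampled from an exponential distribution.
    depths = rng.exponential(1.0 / mu_t, n_photons)
    # An interaction is an absorption event with probability mu_a / mu_t (albedo rule).
    absorbed = rng.random(n_photons) < (mu_a / mu_t)
    hist, edges = np.histogram(depths[absorbed], bins=depth_bins, range=(0, max_depth))
    return hist / n_photons, edges

frac, edges = absorbed_depth_profile()
print("fraction of photons absorbed in the first 0.4 mm of tissue:", round(frac[0], 3))
```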

  5. Modeling of an industrial drying process by artificial neural networks

    Directory of Open Access Journals (Sweden)

    E. Assidjo

    2008-09-01

    Full Text Available A suitable method is needed to solve the nonquality problem in the grated coconut industry due to the poor control of product humidity during the process. In this study, the possibility of using an artificial neural network (ANN), precisely a Multilayer Perceptron, for modeling the drying step of the grated coconut production process is highlighted. Drying must confer to the product a final moisture content of 3%. Unfortunately, under industrial conditions, this moisture varies from 1.9 to 4.8%. In order to control this parameter and consequently reduce the proportion of the product that does not meet the humidity specification, a 9-4-1 neural network architecture was established using data gathered from an industrial plant. This Multilayer Perceptron can satisfactorily model the process with little bias, ranging from -0.35 to 0.34%, and can reduce the rate of rejected products from 92% to 3% during the first cycle of drying.
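
    For concreteness, the sketch below implements a 9-4-1 multilayer perceptron of the topology described above and trains it with plain backpropagation on synthetic data; the process variables, their scaling, and the training setup are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 9))                 # 9 standardized process variables (synthetic)
true_w = rng.normal(size=9)
y = 3.0 + 0.5 * np.tanh(X @ true_w)           # synthetic moisture target (%), roughly 2.5-3.5

W1, b1 = rng.normal(0, 0.5, (4, 9)), np.zeros(4)   # 9 -> 4 hidden neurons
W2, b2 = rng.normal(0, 0.5, (1, 4)), np.zeros(1)   # 4 -> 1 output (moisture)
lr = 0.05

for epoch in range(2000):
    H = np.tanh(X @ W1.T + b1)                # hidden layer activations, shape (200, 4)
    pred = H @ W2.T + b2                      # network output, shape (200, 1)
    err = pred - y[:, None]
    # Backpropagation of the squared-error loss
    dW2 = err.T @ H / len(X); db2 = err.mean(0)
    dH = (err @ W2) * (1 - H**2)
    dW1 = dH.T @ X / len(X); db1 = dH.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

print("final RMSE on the synthetic data: %.3f %% moisture" % np.sqrt((err**2).mean()))
```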

  6. Semi-empirical neural network models of controlled dynamical systems

    Directory of Open Access Journals (Sweden)

    Mihail V. Egorchev

    2017-12-01

    Full Text Available A simulation approach is discussed for maneuverable aircraft motion treated as a nonlinear controlled dynamical system under multiple and diverse uncertainties, including imperfect knowledge of the simulated plant and of its environment exposure. The suggested approach is based on merging theoretical knowledge of the plant with the training tools of the artificial neural network field. The efficiency of this approach is demonstrated using the example of motion modeling and the identification of the aerodynamic characteristics of a maneuverable aircraft. A semi-empirical recurrent neural network based model learning algorithm is proposed for the multi-step-ahead prediction problem. This algorithm sequentially states and solves numerical optimization subproblems of increasing complexity, using each solution as the initial guess for the subsequent subproblem. We also consider a procedure for acquiring a representative training set that utilizes multisine control signals.

  7. Self-reported empathy and neural activity during action imitation and observation in schizophrenia

    Directory of Open Access Journals (Sweden)

    William P. Horan

    2014-01-01

    Conclusions: Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self-perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest that patients show a disjunction between automatic neural responses to low-level social cues and higher-level, integrative social cognitive processes involved in self-perceived empathy.

  8. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used, since such a model is built for faster calculation. To obtain an accurate compact resist model, it is necessary to determine a complicated non-linear model function. However, it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  9. Explaining neural signals in human visual cortex with an associative learning model.

    Science.gov (United States)

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

  10. Assessing neural activity related to decision-making through flexible odds ratio curves and their derivatives.

    Science.gov (United States)

    Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Pardo-Vazquez, Jose L; Leboran, Victor; Molenberghs, Geert; Faes, Christel; Acuña, Carlos

    2011-06-30

    It is well established that neural activity is stochastically modulated over time. Therefore, direct comparisons across experimental conditions and determination of change points or maximum firing rates are not straightforward. This study sought to compare temporal firing probability curves that may vary across groups defined by different experimental conditions. Odds-ratio (OR) curves were used as a measure of comparison, and the main goal was to provide a global test to detect significant differences of such curves through the study of their derivatives. An algorithm is proposed that enables ORs based on generalized additive models, including factor-by-curve-type interactions to be flexibly estimated. Bootstrap methods were used to draw inferences from the derivatives curves, and binning techniques were applied to speed up computation in the estimation and testing processes. A simulation study was conducted to assess the validity of these bootstrap-based tests. This methodology was applied to study premotor ventral cortex neural activity associated with decision-making. The proposed statistical procedures proved very useful in revealing the neural activity correlates of decision-making in a visual discrimination task. Copyright © 2011 John Wiley & Sons, Ltd.

  11. Natural lecithin promotes neural network complexity and activity

    Science.gov (United States)

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-01-01

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called “essential” fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications. PMID:27228907

  12. Natural lecithin promotes neural network complexity and activity.

    Science.gov (United States)

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-05-27

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called "essential" fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications.

  13. Neural activity reveals perceptual grouping in working memory.

    Science.gov (United States)

    Rabbitt, Laura R; Roberts, Daniel M; McDonald, Craig G; Peterson, Matthew S

    2017-03-01

    There is extensive evidence that the contralateral delay activity (CDA), a scalp recorded event-related brain potential, provides a reliable index of the number of objects held in visual working memory. Here we present evidence that the CDA not only indexes visual object working memory, but also the number of locations held in spatial working memory. In addition, we demonstrate that the CDA can be predictably modulated by the type of encoding strategy employed. When individual locations were held in working memory, the pattern of CDA modulation mimicked previous findings for visual object working memory. Specifically, CDA amplitude increased monotonically until working memory capacity was reached. However, when participants were instructed to group individual locations to form a constellation, the CDA was prolonged and reached an asymptote at two locations. This result provides neural evidence for the formation of a unitary representation of multiple spatial locations. Published by Elsevier B.V.

  14. Neural Network Models for Free Radical Polymerization of Methyl Methacrylate

    International Nuclear Information System (INIS)

    Curteanu, S.; Leon, F.; Galea, D.

    2003-01-01

    In this paper, a neural network modeling of the batch bulk methyl methacrylate polymerization is performed. To obtain conversion, number and weight average molecular weights, three neural networks were built. Each was a multilayer perceptron with one or two hidden layers. The choice of network topology, i.e. the number of hidden layers and the number of neurons in these layers, was based on achieving a compromise between precision and complexity. Thus, it was intended to have an error as small as possible at the end of the back-propagation training phases, while using a network with reduced complexity. The performances of the networks were evaluated by comparing network predictions with training data, validation data (which were not used for training), and with the results of a mechanistic model. The accurate predictions of the neural networks for monomer conversion, number average molecular weight and weight average molecular weight prove that this modeling methodology gives a good representation and generalization of the batch bulk methyl methacrylate polymerization. (author)

  15. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, and the development and validation process of, risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  16. Effects of Near-Infrared Laser on Neural Cell Activity

    International Nuclear Information System (INIS)

    Mochizuki-Oda, Noriko; Kataoka, Yosky; Yamada, Hisao; Awazu, Kunio

    2004-01-01

    Near-infrared laser has been used to relieve patients from various kinds of pain caused by postherpetic neuralgia, myofascial dysfunction, surgical and traumatic wounds, cancer, and rheumatoid arthritis. Clinically, He-Ne (λ=632.8 nm, 780 nm) and Ga-Al-As (805 ± 25 nm) lasers are used to irradiate trigger points or nerve ganglia. However, the precise mechanisms of such biological actions of the laser have not yet been resolved. Since laser therapy is often effective in suppressing the pain caused by hyperactive excitation of sensory neurons, interactions between laser light and neural cells are suggested. As neural excitation requires a large amount of energy liberated from adenosine triphosphate (ATP), we examined the effect of 830-nm laser irradiation on the energy metabolism of the rat central nervous system and on mitochondria isolated from brain. The diode laser was applied for 15 min with an irradiance of 4.8 W/cm2 on a 2 mm-diameter spot at the brain surface. The tissue ATP content of the irradiated area in the cerebral cortex was 19% higher than that of the non-treated area (opposite side of the cortex), whereas the ADP content showed no significant difference. Irradiation at another wavelength (652 nm) had no effect on either ATP or ADP content. The temperature of the brain tissue increased by 4.5-5.0 deg. C during the irradiation with both 830-nm and 652-nm laser light. Direct irradiation of the mitochondrial suspension did not show any wavelength-dependent acceleration of the respiration rate or ATP synthesis. These results suggest that the increase in tissue ATP content did not result from a thermal effect, but from a specific effect of the laser operated at 830 nm. Electrophysiological studies showed hyperpolarization of the membrane potential of isolated neurons and a decrease in membrane resistance with irradiation of the laser, suggesting an activation of potassium channels. Intracellular ATP is reported to regulate some kinds of potassium channels. Possible mechanisms

  17. Artificial neural network modelling in heavy ion collisions

    International Nuclear Information System (INIS)

    El-dahshan, E.; Radi, A.; El-Bakry, M.Y.; El Mashad, M.

    2008-01-01

    The neural network (NN) model and the parton two fireball model (PTFM) have been used to study the pseudo-rapidity distribution of the shower particles for 12C, 16O, 28Si and 32S on nuclear emulsion. The trained NN shows a better fit to the experimental data than the PTFM calculations. The NN is then used to predict the distributions that are not present in the training set and matches them effectively. The NN simulation results demonstrate a strong modeling capability for heavy ion collisions

  18. Trend time-series modeling and forecasting with neural networks.

    Science.gov (United States)

    Qi, Min; Zhang, G Peter

    2008-05-01

    Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.
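
    The differencing strategy highlighted above can be sketched as follows: model the first differences of a trending series with a simple learner (a linear autoregression stands in for the NN here, to keep the example short), then integrate the predicted difference back to forecast the level.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(300)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, len(t))   # trend + season + noise

diff = np.diff(series)                         # remove the (deterministic or stochastic) trend
p = 12                                         # number of lagged differences used as inputs
X = np.stack([diff[i:i + p] for i in range(len(diff) - p)])
y = diff[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # stand-in for training a small NN on the differences

last_lags = diff[-p:]
next_diff = last_lags @ coef
forecast_level = series[-1] + next_diff        # integrate the predicted difference back to a level
print("one-step-ahead forecast of the level:", round(forecast_level, 3))
```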

  19. Neural Networks in Modelling Maintenance Unit Load Status

    Directory of Open Access Journals (Sweden)

    Anđelko Vojvoda

    2002-03-01

    Full Text Available This paper deals with a way of applying a neural networkfor describing se1vice station load in a maintenance unit. Dataacquired by measuring the workload of single stations in amaintenance unit were used in the process of training the neuralnetwork in order to create a model of the obse1ved system.The model developed in this way enables us to make more accuratepredictions over critical overload. Modelling was realisedby developing and using m-functions of the Matlab software.

  20. Data Driven Broiler Weight Forecasting using Dynamic Neural Network Models

    DEFF Research Database (Denmark)

    Johansen, Simon Vestergaard; Bendtsen, Jan Dimon; Riisgaard-Jensen, Martin

    2017-01-01

    In this article, the dynamic influence of environmental broiler house conditions and broiler growth is investigated. Dynamic neural network forecasting models have been trained on farm-scale broiler batch production data from 12 batches from the same house. The model forecasts future broiler weight...... and uses environmental conditions such as heating, ventilation, and temperature along with broiler behavior such as feed and water consumption. Training data and forecasting data are analyzed to explain when the model might fail at generalizing. We present ensemble broiler weight forecasts to day 7, 14, 21...

  1. Validating neural-network refinements of nuclear mass models

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2018-01-01

    Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r -process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms≃400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
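
    The refinement idea can be sketched as learning the residuals of a 'bare' mass model and adding them back, with an ensemble supplying a rough uncertainty estimate in place of the full Bayesian neural network; the bare model, the data, and the random-feature regressors below are all synthetic placeholders, not the models or masses used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def bare_model(Z, N):
    """Placeholder smooth mass surface (NOT Duflo-Zuker); units arbitrary."""
    A = Z + N
    return -15.5 * A + 17 * A**(2 / 3) + 0.7 * Z**2 / A**(1 / 3)

# Synthetic "experimental" masses = bare model + a structured residual + noise.
Z = rng.integers(20, 90, 300); N = rng.integers(20, 140, 300)
residual_true = 5 * np.sin(Z / 7.0) * np.cos(N / 9.0)
M_exp = bare_model(Z, N) + residual_true + rng.normal(0, 0.5, 300)

def fit_random_features(Z, N, y, n_feat=60, seed=0):
    """Learn the residual surface with random Fourier features of (Z, N)."""
    r = np.random.default_rng(seed)
    W = r.normal(0, 0.15, (n_feat, 2)); b = r.uniform(0, 2 * np.pi, n_feat)
    Phi = np.cos(np.c_[Z, N] @ W.T + b)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda Zq, Nq: np.cos(np.c_[Zq, Nq] @ W.T + b) @ w

targets = M_exp - bare_model(Z, N)             # residuals the learner must capture
ensemble = [fit_random_features(Z, N, targets, seed=s) for s in range(10)]

Zq, Nq = np.array([50]), np.array([82])
preds = np.array([bare_model(Zq, Nq) + f(Zq, Nq) for f in ensemble])
print("refined mass: %.2f +/- %.2f (ensemble spread)" % (preds.mean(), preds.std()))
```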

  2. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks to build deep networks. Sparse feature learning models are popular models that can learn useful representations. But most of those models need a user-defined constant to control the sparsity of representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, reconstruction error and the sparsity of hidden units, simultaneously to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.

  3. Purification of human induced pluripotent stem cell-derived neural precursors using magnetic activated cell sorting.

    Science.gov (United States)

    Rodrigues, Gonçalo M C; Fernandes, Tiago G; Rodrigues, Carlos A V; Cabral, Joaquim M S; Diogo, Maria Margarida

    2015-01-01

    Neural precursor (NP) cells derived from human induced pluripotent stem cells (hiPSCs), and their neuronal progeny, will play an important role in disease modeling, drug screening tests, central nervous system development studies, and may even become valuable for regenerative medicine treatments. Nonetheless, it is challenging to obtain homogeneous and synchronously differentiated NP populations from hiPSCs, and after neural commitment many pluripotent stem cells remain in the differentiated cultures. Here, we describe an efficient and simple protocol to differentiate hiPSC-derived NPs in 12 days, and we include a final purification stage where Tra-1-60+ pluripotent stem cells (PSCs) are removed using magnetic activated cell sorting (MACS), leaving the NP population nearly free of PSCs.

  4. Linking dynamic patterns of neural activity in orbitofrontal cortex with decision making.

    Science.gov (United States)

    Rich, Erin L; Stoll, Frederic M; Rudebeck, Peter H

    2018-04-01

    Humans and animals demonstrate extraordinary flexibility in choice behavior, particularly when deciding based on subjective preferences. We evaluate options on different scales, deliberate, and often change our minds. Little is known about the neural mechanisms that underlie these dynamic aspects of decision-making, although neural activity in orbitofrontal cortex (OFC) likely plays a central role. Recent evidence from studies in macaques shows that attention modulates value responses in OFC, and that ensembles of OFC neurons dynamically signal different options during choices. When contexts change, these ensembles flexibly remap to encode the new task. Determining how these dynamic patterns emerge and relate to choices will inform models of decision-making and OFC function. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. The BDNF Val66Met Polymorphism Influences Reading Ability and Patterns of Neural Activation in Children.

    Directory of Open Access Journals (Sweden)

    Kaja K Jasińska

    Full Text Available Understanding how genes impact the brain's functional activation for learning and cognition during development remains limited. We asked whether a common genetic variant in the BDNF gene (the Val66Met polymorphism) modulates neural activation in the young brain during a critical period for the emergence and maturation of the neural circuitry for reading. In animal models, the bdnf variation has been shown to be associated with the structure and function of the developing brain and in humans it has been associated with multiple aspects of cognition, particularly memory, which are relevant for the development of skilled reading. Yet, little is known about the impact of the Val66Met polymorphism on functional brain activation in development, either in animal models or in humans. Here, we examined whether the BDNF Val66Met polymorphism (dbSNP rs6265) is associated with children's (age 6-10) neural activation patterns during a reading task (n = 81) using functional magnetic resonance imaging (fMRI), genotyping, and standardized behavioral assessments of cognitive and reading development. Children homozygous for the Val allele at the SNP rs6265 of the BDNF gene outperformed Met allele carriers on reading comprehension and phonological memory, tasks that have a strong memory component. Consistent with these behavioral findings, Met allele carriers showed greater activation in reading-related brain regions including the fusiform gyrus, the left inferior frontal gyrus and left superior temporal gyrus as well as greater activation in the hippocampus during a word and pseudoword reading task. Increased engagement of memory and spoken language regions for Met allele carriers relative to Val/Val homozygotes during reading suggests that Met carriers have to exert greater effort required to retrieve phonological codes.

  6. The BDNF Val66Met Polymorphism Influences Reading Ability and Patterns of Neural Activation in Children.

    Science.gov (United States)

    Jasińska, Kaja K; Molfese, Peter J; Kornilov, Sergey A; Mencl, W Einar; Frost, Stephen J; Lee, Maria; Pugh, Kenneth R; Grigorenko, Elena L; Landi, Nicole

    2016-01-01

    Understanding how genes impact the brain's functional activation for learning and cognition during development remains limited. We asked whether a common genetic variant in the BDNF gene (the Val66Met polymorphism) modulates neural activation in the young brain during a critical period for the emergence and maturation of the neural circuitry for reading. In animal models, the bdnf variation has been shown to be associated with the structure and function of the developing brain and in humans it has been associated with multiple aspects of cognition, particularly memory, which are relevant for the development of skilled reading. Yet, little is known about the impact of the Val66Met polymorphism on functional brain activation in development, either in animal models or in humans. Here, we examined whether the BDNF Val66Met polymorphism (dbSNP rs6265) is associated with children's (age 6-10) neural activation patterns during a reading task (n = 81) using functional magnetic resonance imaging (fMRI), genotyping, and standardized behavioral assessments of cognitive and reading development. Children homozygous for the Val allele at the SNP rs6265 of the BDNF gene outperformed Met allele carriers on reading comprehension and phonological memory, tasks that have a strong memory component. Consistent with these behavioral findings, Met allele carriers showed greater activation in reading-related brain regions including the fusiform gyrus, the left inferior frontal gyrus and left superior temporal gyrus as well as greater activation in the hippocampus during a word and pseudoword reading task. Increased engagement of memory and spoken language regions for Met allele carriers relative to Val/Val homozygotes during reading suggests that Met carriers have to exert greater effort required to retrieve phonological codes.

  7. Computational Models and Emergent Properties of Respiratory Neural Networks

    Science.gov (United States)

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  8. High baseline activity in inferior temporal cortex improves neural and behavioral discriminability during visual categorization

    Science.gov (United States)

    Emadi, Nazli; Rajimehr, Reza; Esteky, Hossein

    2014-01-01

    Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from the single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically we found that a low-frequency (baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and eventually a higher behavioral performance. PMID:25404900

  9. Evidence-Based Systematic Review: Effects of Neuromuscular Electrical Stimulation on Swallowing and Neural Activation

    Science.gov (United States)

    Clark, Heather; Lazarus, Cathy; Arvedson, Joan; Schooling, Tracy; Frymark, Tobi

    2009-01-01

    Purpose: To systematically review the literature examining the effects of neuromuscular electrical stimulation (NMES) on swallowing and neural activation. The review was conducted as part of a series examining the effects of oral motor exercises (OMEs) on speech, swallowing, and neural activation. Method: A systematic search was conducted to…

  10. Circuit Models and Experimental Noise Measurements of Micropipette Amplifiers for Extracellular Neural Recordings from Live Animals

    Directory of Open Access Journals (Sweden)

    Chang Hao Chen

    2014-01-01

    Full Text Available Glass micropipettes are widely used to record neural activity from single neurons or clusters of neurons extracellularly in live animals. However, to date, there has been no comprehensive study of noise in extracellular recordings with glass micropipettes. The purpose of this work was to assess various noise sources that affect extracellular recordings and to create model systems in which novel micropipette neural amplifier designs can be tested. An equivalent circuit of the glass micropipette and the noise model of this circuit, which accurately describe the various noise sources involved in extracellular recordings, have been developed. Measurement schemes using dead brain tissue as well as extracellular recordings from neurons in the inferior colliculus, an auditory brain nucleus of an anesthetized gerbil, were used to characterize noise performance and amplification efficacy of the proposed micropipette neural amplifier. According to our model, the major noise sources which influence the signal to noise ratio are the intrinsic noise of the neural amplifier and the thermal noise from distributed pipette resistance. These two types of noise were calculated and measured and were shown to be the dominating sources of background noise for in vivo experiments.

  11. Neural network connectivity and response latency modelled by stochastic processes

    DEFF Research Database (Denmark)

    Tamborrino, Massimiliano

    Stochastic processes and their first passage times have been widely used to describe the membrane potential dynamics of single neurons and to reproduce neuronal spikes, respectively. However, the cerebral cortex in human brains is estimated to contain 10-20 billion neurons, and each of them is connected to thousands of other neurons. The first question is: how to model neural networks through stochastic processes? A multivariate Ornstein-Uhlenbeck process, obtained as a diffusion approximation of a jump process, is the proposed answer. Obviously, dependencies between neurons imply dependencies between their spike times. Therefore, the second question is: how to detect neural network connectivity from simultaneously recorded spike trains? Answering this question corresponds to investigating the joint distribution of sequences of first passage times. A non-parametric method based on copulas...

  12. Adaptive model predictive process control using neural networks

    Science.gov (United States)

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  13. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction based on a multiple-delayed-input single-output architecture is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step-predictability of our proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches

  14. Super capacitor modeling with artificial neural network (ANN)

    Energy Technology Data Exchange (ETDEWEB)

    Marie-Francoise, J.N.; Gualous, H.; Berthon, A. [Universite de Franche-Comte, Lab. en Electronique, Electrotechnique et Systemes (L2ES), UTBM, INRETS (LRE T31) 90 - Belfort (France)

    2004-07-01

    This paper presents super-capacitor modeling using an Artificial Neural Network (ANN). The principle consists of a black-box nonlinear multiple-input single-output (MISO) model. The system inputs are temperature and current, and the output is the super-capacitor voltage. The learning and the validation of the ANN model from experimental charge and discharge of the super-capacitor establish the relationship between inputs and output. The learning and the validation of the ANN model use experimental results from 2700 F and 3700 F super-capacitors and a super-capacitor pack. Once the network is trained, the ANN model can predict the super-capacitor behaviour with temperature variations. The parameters of the ANN model are updated using the Levenberg-Marquardt method in order to minimize the error between the output of the system and the predicted output. The results obtained with the ANN model of the super-capacitor and the experimental ones are in good agreement. (authors)

  15. Integration of active devices on smart polymers for neural interfaces

    Science.gov (United States)

    Avendano-Bolivar, Adrian Emmanuel

    The increasing ability to ever more precisely identify and measure neural interactions and other phenomena in the central and peripheral nervous systems is revolutionizing our understanding of the human body and brain. To facilitate further understanding, more sophisticated neural devices, perhaps using microelectronics processing, must be fabricated. Materials often used in these neural interfaces, while compatible with these fabrication processes, are not optimized for long-term use in the body and are often orders of magnitude stiffer than the tissue with which they interact. Using the smart polymer substrates described in this work, suitability for processing as well as chronic implantation is demonstrated. We explore how to integrate reliable circuitry onto these flexible, biocompatible substrates that can withstand the aggressive environment of the body. To increase the capabilities of these devices beyond individual channel sensing and stimulation, active electronics must also be included onto our systems. In order to add this functionality to these substrates and explore the limits of these devices, we developed a process to fabricate single organic thin film transistors with mobilities up to 0.4 cm2/Vs and threshold voltages close to 0V. A process for fabricating organic light emitting diodes on flexible substrates is also addressed. We have set a foundation and demonstrated initial feasibility for integrating multiple transistors onto thin-film flexible devices to create new applications, such as matrix addressable functionalized electrodes and organic light emitting diodes. A brief description on how to integrate waveguides for their use in optogenetics is addressed. We have built understanding about device constraints on mechanical, electrical and in vivo reliability and how various conditions affect the electronics' lifetime. We use a bi-layer gate dielectric using an inorganic material such as HfO2 combined with organic Parylene-c. A study of

  16. Analysis of Neural-BOLD Coupling through Four Models of the Neural Metabolic Demand

    Directory of Open Access Journals (Sweden)

    Christopher W Tyler

    2015-12-01

    Full Text Available The coupling of the neuronal energetics to the blood-oxygen-level-dependent (BOLD) response is still incompletely understood. To address this issue, we compared the fits of four plausible models of neurometabolic coupling dynamics to available data for simultaneous recordings of the local field potential (LFP) and the local BOLD response recorded from monkey primary visual cortex over a wide range of stimulus durations. The four models of the metabolic demand driving the BOLD response were: direct coupling with the overall LFP; rectified coupling to the LFP; coupling with a slow adaptive component of the implied neural population response; and coupling with the non-adaptive intracellular input signal defined by the stimulus time course. Taking all stimulus durations into account, the results imply that the BOLD response is most closely coupled with metabolic demand derived from the intracellular input waveform, without significant influence from the adaptive transients and nonlinearities exhibited by the LFP waveform.

  17. Activation of postnatal neural stem cells requires nuclear receptor TLX.

    Science.gov (United States)

    Niu, Wenze; Zou, Yuhua; Shen, Chengcheng; Zhang, Chun-Li

    2011-09-28

    Neural stem cells (NSCs) continually produce new neurons in postnatal brains. However, the majority of these cells stay in a nondividing, inactive state. The molecular mechanism that is required for these cells to enter proliferation still remains largely unknown. Here, we show that nuclear receptor TLX (NR2E1) controls the activation status of postnatal NSCs in mice. Lineage tracing indicates that TLX-expressing cells give rise to both activated and inactive postnatal NSCs. Surprisingly, loss of TLX function does not result in spontaneous glial differentiation, but rather leads to a precipitous age-dependent increase of inactive cells with marker expression and radial morphology for NSCs. These inactive cells are mispositioned throughout the granular cell layer of the dentate gyrus during development and can proliferate again after reintroduction of ectopic TLX. RNA-seq analysis of sorted NSCs revealed a TLX-dependent global expression signature, which includes the p53 signaling pathway. TLX regulates p21 expression in a p53-dependent manner, and acute removal of p53 can rescue the proliferation defect of TLX-null NSCs in culture. Together, these findings suggest that TLX acts as an essential regulator that ensures the proliferative ability of postnatal NSCs by controlling their activation through genetic interaction with p53 and other signaling pathways.

  18. Sociocultural patterning of neural activity during self-reflection.

    Science.gov (United States)

    Ma, Yina; Bang, Dan; Wang, Chenbo; Allen, Micah; Frith, Chris; Roepstorff, Andreas; Han, Shihui

    2014-01-01

    Western cultures encourage self-construals independent of social contexts, whereas East Asian cultures foster interdependent self-construals that rely on how others perceive the self. How are culturally specific self-construals mediated by the human brain? Using functional magnetic resonance imaging, we monitored neural responses from adults in East Asian (Chinese) and Western (Danish) cultural contexts during judgments of social, mental and physical attributes of themselves and public figures to assess cultural influences on self-referential processing of personal attributes in different dimensions. We found that judgments of self vs a public figure elicited greater activation in the medial prefrontal cortex (mPFC) in Danish than in Chinese participants regardless of attribute dimensions for judgments. However, self-judgments of social attributes induced greater activity in the temporoparietal junction (TPJ) in Chinese than in Danish participants. Moreover, the group difference in TPJ activity was mediated by a measure of a cultural value (i.e. interdependence of self-construal). Our findings suggest that individuals in different sociocultural contexts may learn and/or adopt distinct strategies for self-reflection by changing the weight of the mPFC and TPJ in the social brain network.

  19. Altered Neural Activity Associated with Mindfulness during Nociception: A Systematic Review of Functional MRI.

    Science.gov (United States)

    Bilevicius, Elena; Kolesar, Tiffany A; Kornelsen, Jennifer

    2016-04-19

    To assess the neural activity associated with mindfulness-based alterations of pain perception. The Cochrane Central, EMBASE, Ovid Medline, PsycINFO, Scopus, and Web of Science databases were searched on 2 February 2016. Titles, abstracts, and full-text articles were independently screened by two reviewers. Data were independently extracted from records that included topics of functional neuroimaging, pain, and mindfulness interventions. The literature search produced 946 total records, of which five met the inclusion criteria. Records reported pain in terms of anticipation (n = 2), unpleasantness (n = 5), and intensity (n = 5), and how mindfulness conditions altered the neural activity during noxious stimulation accordingly. Although the studies were inconsistent in relating pain components to neural activity, in general, mindfulness was able to reduce pain anticipation and unpleasantness ratings, as well as alter the corresponding neural activity. The major neural underpinnings of mindfulness-based pain reduction consisted of altered activity in the anterior cingulate cortex, insula, and dorsolateral prefrontal cortex.

  20. Population-wide distributions of neural activity during perceptual decision-making

    Science.gov (United States)

    Machens, Christian

    2018-01-01

    Cortical activity involves large populations of neurons, even when it is limited to functionally coherent areas. Electrophysiological recordings, on the other hand, involve comparatively small neural ensembles, even when modern-day techniques are used. Here we review results which have started to fill the gap between these two scales of inquiry, by shedding light on the statistical distributions of activity in large populations of cells. We put our main focus on data recorded in awake animals that perform simple decision-making tasks and consider statistical distributions of activity throughout cortex, across sensory, associative, and motor areas. We transversally review the complexity of these distributions, from distributions of firing rates and metrics of spike-train structure, through distributions of tuning to stimuli or actions and of choice signals, and finally the dynamical evolution of neural population activity and the distributions of (pairwise) neural interactions. This approach reveals shared patterns of statistical organization across cortex, including: (i) long-tailed distributions of activity, where quasi-silence seems to be the rule for a majority of neurons, and that are barely distinguishable between spontaneous and active states; (ii) distributions of tuning parameters for sensory (and motor) variables, which show an extensive extrapolation and fragmentation of their representations in the periphery; and (iii) population-wide dynamics that reveal rotations of internal representations over time, whose traces can be found both in stimulus-driven and internally generated activity. We discuss how these insights are leading us away from the notion of discrete classes of cells, and are acting as powerful constraints on theories and models of cortical organization and population coding. PMID:23123501

  1. EEG-fMRI Bayesian framework for neural activity estimation: a simulation study

    Science.gov (United States)

    Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo

    2016-12-01

    Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, the joint data analysis can afford a better understanding of the underlying neural activity estimation. In this simulation study we want to show the benefit of the joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity is originated by a given brain area and detected by means of both measurement techniques. We have chosen a resting state neural activity situation to address the worst case in terms of the signal-to-noise ratio. To infer information by EEG and fMRI concurrently we used a tool belonging to the sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvements in neural activity reconstruction with EEG-fMRI simultaneous data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
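
    A minimal bootstrap particle filter along these lines is sketched below, estimating a latent activity time course from a fast noisy ('EEG-like') stream and a slow smoothed ('fMRI-like') stream; the state-space and observation models here are deliberately simplistic assumptions, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_particles = 200, 500

# Simulate a latent neural activity time course and two observation streams.
true_z = np.zeros(T)
for t in range(1, T):
    true_z[t] = 0.95 * true_z[t - 1] + rng.normal(0, 0.3)          # latent activity (AR(1) dynamics)
eeg = true_z + rng.normal(0, 1.0, T)                               # fast but noisy measurement
bold = np.convolve(true_z, np.ones(10) / 10, mode="same") + rng.normal(0, 0.2, T)  # slow, smoothed

particles = rng.normal(0, 1, n_particles)
estimate = np.zeros(T)
for t in range(T):
    particles = 0.95 * particles + rng.normal(0, 0.3, n_particles) # propagate through the dynamics
    # Weight each particle by the likelihood of BOTH observations (independent Gaussian noise assumed).
    logw = (-0.5 * ((eeg[t] - particles) / 1.0) ** 2
            - 0.5 * ((bold[t] - particles) / 0.2) ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimate[t] = np.dot(w, particles)                             # posterior-mean estimate
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # multinomial resampling

print("correlation of the filtered estimate with the true activity:",
      round(float(np.corrcoef(estimate, true_z)[0, 1]), 3))
```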

  2. Reactor pressure vessel embrittlement: Insights from neural network modelling

    Science.gov (United States)

    Mathew, J.; Parfitt, D.; Wilford, K.; Riddle, N.; Alamaniotis, M.; Chroneos, A.; Fitzpatrick, M. E.

    2018-04-01

    Irradiation embrittlement of steel pressure vessels is an important consideration for the operation of current and future light water nuclear reactors. In this study we employ an ensemble of artificial neural networks in order to provide predictions of the embrittlement using two literature datasets, one based on US surveillance data and the second from the IVAR experiment. We use these networks to examine trends with input variables and to assess various literature models including compositional effects and the role of flux and temperature. Overall, the networks agree with the existing literature models and we comment on their more general use in predicting irradiation embrittlement.

  3. Optimizing Markovian modeling of chaotic systems with recurrent neural networks

    International Nuclear Information System (INIS)

    Cechin, Adelmo L.; Pechmann, Denise R.; Oliveira, Luiz P.L. de

    2008-01-01

    In this paper, we propose a methodology for optimizing the modeling of a one-dimensional chaotic time series with a Markov Chain. The model is extracted from a recurrent neural network trained for the attractor reconstructed from the data set. Each state of the obtained Markov Chain is a region of the reconstructed state space where the dynamics is approximated by a specific piecewise linear map, obtained from the network. The Markov Chain represents the dynamics of the time series in its statistical essence. An application to a time series obtained from the Lorenz system is included
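
    The end product can be sketched as follows: partition the state space of a chaotic series into regions, count region-to-region transitions to obtain a Markov transition matrix, and inspect its stationary distribution. For brevity, the partition here is a uniform binning of the logistic map rather than regions extracted from a trained recurrent network on Lorenz data, so this only illustrates the idea.

```python
import numpy as np

# Generate a chaotic time series (logistic map in its chaotic regime).
x = np.empty(20_000)
x[0] = 0.4
for t in range(1, len(x)):
    x[t] = 3.99 * x[t - 1] * (1.0 - x[t - 1])

n_states = 8
states = np.minimum((x * n_states).astype(int), n_states - 1)   # assign each point to a region

# Estimate the Markov transition matrix by counting region-to-region transitions.
P = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution of the chain (left eigenvector of P for eigenvalue 1).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
print("stationary occupation of the 8 regions:", np.round(pi, 3))
```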

  4. Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network.

    Directory of Open Access Journals (Sweden)

    Christoph Hartmann

    2015-12-01

    Full Text Available Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN, which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural

  5. A deep convolutional neural network model to classify heartbeats.

    Science.gov (United States)

    Acharya, U Rajendra; Oh, Shu Lih; Hagiwara, Yuki; Tan, Jen Hong; Adam, Muhammad; Gertych, Arkadiusz; Tan, Ru San

    2017-10-01

    The electrocardiogram (ECG) is a standard test used to monitor the activity of the heart. Many cardiac abnormalities will be manifested in the ECG, including arrhythmia, which is a general term that refers to an abnormal heart rhythm. The basis of arrhythmia diagnosis is the identification of normal versus abnormal individual heart beats, and their correct classification into different diagnoses, based on ECG morphology. Heartbeats can be sub-divided into five categories, namely non-ectopic, supraventricular ectopic, ventricular ectopic, fusion, and unknown beats. It is challenging and time-consuming to distinguish these heartbeats on ECG as these signals are typically corrupted by noise. We developed a 9-layer deep convolutional neural network (CNN) to automatically identify 5 different categories of heartbeats in ECG signals. Our experiment was conducted on original and noise-attenuated sets of ECG signals derived from a publicly available database. This set was artificially augmented to even out the number of instances of the 5 classes of heartbeats and filtered to remove high-frequency noise. The CNN was trained using the augmented data and achieved an accuracy of 94.03% and 93.47% in the diagnostic classification of heartbeats in original and noise-free ECGs, respectively. When the CNN was trained with highly imbalanced data (original dataset), the accuracy of the CNN reduced to 89.07% and 89.3% in noisy and noise-free ECGs. When properly trained, the proposed CNN model can serve as a tool for screening of ECG to quickly identify different types and frequencies of arrhythmic heartbeats. Copyright © 2017 Elsevier Ltd. All rights reserved.
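
    A minimal PyTorch sketch of a 1D CNN beat classifier follows; the layer sizes, kernel widths and segment length are placeholders rather than the 9-layer architecture reported in the paper, and the input is assumed to be a fixed-length ECG beat segment.

```python
# Illustrative 1D CNN for 5-class heartbeat classification (not the published network).
import torch
import torch.nn as nn

class HeartbeatCNN(nn.Module):
    def __init__(self, n_classes: int = 5, segment_len: int = 260):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        with torch.no_grad():                       # infer the flattened feature size
            n_flat = self.features(torch.zeros(1, 1, segment_len)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n_flat, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):                           # x: (batch, 1, segment_len)
        return self.classifier(self.features(x))

model = HeartbeatCNN()
logits = model(torch.randn(8, 1, 260))              # dummy batch of 8 beat segments
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 5, (8,)))
loss.backward()
```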

  6. SPR imaging combined with cyclic voltammetry for the detection of neural activity

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-03-01

    Full Text Available Surface plasmon resonance (SPR) detects changes in refractive index at a metal-dielectric interface. In this study, SPR imaging (SPRi) combined with cyclic voltammetry (CV) was applied to detect neural activity in isolated bullfrog sciatic nerves. The neural activities induced by chemical and electrical stimulation led to an SPR response, and the activities were recorded in real time. The activities of different parts of the sciatic nerve were recorded and compared. The results demonstrated that SPR imaging combined with CV is a powerful tool for the investigation of neural activity.

  7. The effects of noise on binocular rivalry waves: a stochastic neural field model

    International Nuclear Information System (INIS)

    Webber, Matthew A; Bressloff, Paul C

    2013-01-01

    We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction–diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave. (paper)
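
    The sketch below is an Euler-Maruyama simulation of two mutually inhibiting 1D neural fields with synaptic depression and additive noise, in the spirit of the model described above; the kernel, firing-rate function and all parameter values are illustrative choices, not those of the paper.

```python
# Stochastic neural field sketch: two coupled 1D fields with depression and noise.
import numpy as np

rng = np.random.default_rng(0)
Nx, L = 256, 40.0                      # grid points, domain length
x = np.linspace(-L / 2, L / 2, Nx, endpoint=False)
dx, dt, T = L / Nx, 0.01, 40.0
kappa, beta, gamma, tau_q, eps = 1.0, 0.8, 0.5, 20.0, 0.005

w = np.fft.ifftshift(np.exp(-np.abs(x)))           # excitatory kernel, lag 0 at index 0
f = lambda u, th=0.25: 1.0 / (1.0 + np.exp(-50.0 * (u - th)))   # steep sigmoid rate

def conv(field):                                   # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(field))) * dx

u1 = np.where(x < 0, 1.0, 0.0)                     # left-eye field active on the left
u2 = np.where(x >= 0, 1.0, 0.0)                    # right-eye field active on the right
q1, q2 = np.ones(Nx), np.ones(Nx)                  # depression variables

for _ in range(int(T / dt)):
    r1, r2 = f(u1), f(u2)
    n1 = eps * np.sqrt(dt) * rng.standard_normal(Nx)
    n2 = eps * np.sqrt(dt) * rng.standard_normal(Nx)
    u1 += dt * (-u1 + kappa * conv(q1 * r1) - beta * r2) + n1   # mutual inhibition
    u2 += dt * (-u2 + kappa * conv(q2 * r2) - beta * r1) + n2
    q1 += dt * (1.0 - q1 - gamma * q1 * r1) / tau_q             # synaptic depression
    q2 += dt * (1.0 - q2 - gamma * q2 * r2) / tau_q

# The crossover where u1 falls and u2 rises marks the (wandering) rivalry wave front.
front = x[np.argmin(np.abs(f(u1) - f(u2)))]
print(f"approximate front position: {front:.2f}")
```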

  8. The effects of noise on binocular rivalry waves: a stochastic neural field model

    KAUST Repository

    Webber, Matthew A

    2013-03-12

    We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave. © 2013 IOP Publishing Ltd and SISSA Medialab srl.

  9. Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language

    Science.gov (United States)

    Tanadi, Theo

    2018-03-01

    Part-of-speech tagging (POS tagging) is an important part of natural language processing. Many methods have been used for this task, including neural networks. This paper models a neural network that attempts to do POS tagging. A time series neural network is modelled to solve the problems that a basic neural network faces when attempting to do POS tagging. In order to enable the neural network to take text data as input, the text data are first clustered using Brown Clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tag, suffix, and affix of previous words are also fed to the neural network.

  10. Models of neural networks temporal aspects of coding and information processing in biological systems

    CERN Document Server

    Hemmen, J; Schulten, Klaus

    1994-01-01

    Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregatio...

  11. Calculations of dose distributions using a neural network model

    International Nuclear Information System (INIS)

    Mathieu, R; Martin, E; Gschwind, R; Makovicka, L; Contassot-Vivier, S; Bahi, J

    2005-01-01

    The main goal of external beam radiotherapy is the treatment of tumours, while sparing, as much as possible, surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods. However, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to perform the dosimetric calculation with neural networks. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of the various approaches above while avoiding their main inconvenience, i.e., time-consuming calculations. This permits us to obtain quick and accurate results during clinical treatment planning. Currently, results obtained for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journees Scientifiques Francophones, SFRP) provides almost instant results and quite low errors (less than 2%) for a two-dimensional dosimetric map
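
    A minimal sketch of the surrogate idea follows: an MLP is fitted to precomputed (depth, field size) -> dose samples and then evaluated almost instantly. The synthetic exponential depth-dose curve below merely stands in for Monte Carlo training data.

```python
# Neural-network surrogate for a slow dose calculation (toy data, illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
depth = rng.uniform(0.0, 30.0, 3000)                  # cm
field = rng.uniform(4.0, 20.0, 3000)                  # cm, square field side
dose = np.exp(-0.05 * depth) * (1.0 + 0.01 * field)   # toy depth-dose model

X = np.column_stack([depth, field])
X_tr, X_te, y_tr, y_te = train_test_split(X, dose, test_size=0.2, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

err = np.abs(mlp.predict(X_te) - y_te) / y_te
print(f"mean relative error: {100 * err.mean():.2f}%")   # near-instant once trained
```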

  12. Endogenous testosterone levels are associated with neural activity in men with schizophrenia during facial emotion processing.

    Science.gov (United States)

    Ji, Ellen; Weickert, Cynthia Shannon; Lenroot, Rhoshel; Catts, Stanley V; Vercammen, Ans; White, Christopher; Gur, Raquel E; Weickert, Thomas W

    2015-06-01

    Growing evidence suggests that testosterone may play a role in the pathophysiology of schizophrenia given that testosterone has been linked to cognition and negative symptoms in schizophrenia. Here, we determine the extent to which serum testosterone levels are related to neural activity in affective processing circuitry in men with schizophrenia. Functional magnetic resonance imaging was used to measure blood-oxygen-level-dependent signal changes as 32 healthy controls and 26 people with schizophrenia performed a facial emotion identification task. Whole brain analyses were performed to determine regions of differential activity between groups during processing of angry versus non-threatening faces. A follow-up ROI analysis using a regression model in a subset of 16 healthy men and 16 men with schizophrenia was used to determine the extent to which serum testosterone levels were related to neural activity. Healthy controls displayed significantly greater activation than people with schizophrenia in the left inferior frontal gyrus (IFG). There was no significant difference in circulating testosterone levels between healthy men and men with schizophrenia. Regression analyses between activation in the IFG and circulating testosterone levels revealed a significant positive correlation in men with schizophrenia (r=.63, p=.01) and no significant relationship in healthy men. This study provides the first evidence that circulating serum testosterone levels are related to IFG activation during emotion face processing in men with schizophrenia but not in healthy men, which suggests that testosterone levels modulate neural processes relevant to facial emotion processing that may interfere with social functioning in men with schizophrenia. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  13. A model of stimulus-specific neural assemblies in the insect antennal lobe.

    Directory of Open Access Journals (Sweden)

    Dominique Martinez

    2008-08-01

    Full Text Available It has been proposed that synchronized neural assemblies in the antennal lobe of insects encode the identity of olfactory stimuli. In response to an odor, some projection neurons exhibit synchronous firing, phase-locked to the oscillations of the field potential, whereas others do not. Experimental data indicate that neural synchronization and field oscillations are induced by fast GABA(A)-type inhibition, but it remains unclear how desynchronization occurs. We hypothesize that slow inhibition plays a key role in desynchronizing projection neurons. Because synaptic noise is believed to be the dominant factor that limits neuronal reliability, we consider a computational model of the antennal lobe in which a population of oscillatory neurons interact through unreliable GABA(A) and GABA(B) inhibitory synapses. From theoretical analysis and extensive computer simulations, we show that transmission failures at slow GABA(B) synapses make the neural response unpredictable. Depending on the balance between GABA(A) and GABA(B) inputs, particular neurons may either synchronize or desynchronize. These findings suggest a wiring scheme that triggers stimulus-specific synchronized assemblies. Inhibitory connections are set by Hebbian learning and selectively activated by stimulus patterns to form a spiking associative memory whose storage capacity is comparable to that of classical binary-coded models. We conclude that fast inhibition acts in concert with slow inhibition to reformat the glomerular input into odor-specific synchronized neural assemblies.

  14. Emotion disrupts neural activity during selective attention in psychopathy.

    Science.gov (United States)

    Sadeh, Naomi; Spielberg, Jeffrey M; Heller, Wendy; Herrington, John D; Engels, Anna S; Warren, Stacie L; Crocker, Laura D; Sutton, Bradley P; Miller, Gregory A

    2013-03-01

    Dimensions of psychopathy are theorized to be associated with distinct cognitive and emotional abnormalities that may represent unique neurobiological risk factors for the disorder. This hypothesis was investigated by examining whether the psychopathic personality dimensions of fearless-dominance and impulsive-antisociality moderated neural activity and behavioral responses associated with selective attention and emotional processing during an emotion-word Stroop task in 49 adults. As predicted, the dimensions evidenced divergent selective-attention deficits and sensitivity to emotional distraction. Fearless-dominance was associated with disrupted attentional control to positive words, and activation in right superior frontal gyrus mediated the relationship between fearless-dominance and errors to positive words. In contrast, impulsive-antisociality evidenced increased behavioral interference to both positive and negative words and correlated positively with recruitment of regions associated with motivational salience (amygdala, orbitofrontal cortex, insula), emotion regulation (temporal cortex, superior frontal gyrus) and attentional control (dorsal anterior cingulate cortex). Individuals high on both dimensions had increased recruitment of regions related to attentional control (temporal cortex, rostral anterior cingulate cortex), response preparation (pre-/post-central gyri) and motivational value (orbitofrontal cortex) in response to negative words. These findings provide evidence that the psychopathy dimensions represent dual sets of risk factors characterized by divergent dysfunction in cognitive and affective processes.

  15. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2016-09-14

    The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods (autoregressive integrated moving average), feedforward neural networks (time delay neural network and online sequential extreme learning machine), and recurrent neural networks (long short-term memory and echo-state networks) on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.

  16. What if? Neural activity underlying semantic and episodic counterfactual thinking.

    Science.gov (United States)

    Parikh, Natasha; Ruzic, Luka; Stewart, Gregory W; Spreng, R Nathan; De Brigard, Felipe

    2018-05-25

    Counterfactual thinking (CFT) is the process of mentally simulating alternative versions of known facts. In the past decade, cognitive neuroscientists have begun to uncover the neural underpinnings of CFT, particularly episodic CFT (eCFT), which activates regions in the default network (DN) also activated by episodic memory (eM) recall. However, the engagement of DN regions is different for distinct kinds of eCFT. More plausible counterfactuals and counterfactuals about oneself show stronger activity in DN regions compared to implausible and other- or object-focused counterfactuals. The current study sought to identify a source for this difference in DN activity. Specifically, self-focused counterfactuals may also be more plausible, suggesting that DN core regions are sensitive to the plausibility of a simulation. On the other hand, plausible and self-focused counterfactuals may involve more episodic information than implausible and other-focused counterfactuals, which would imply DN sensitivity to episodic information. In the current study, we compared episodic and semantic counterfactuals generated to be plausible or implausible against episodic and semantic memory reactivation using fMRI. Taking multivariate and univariate approaches, we found that the DN is engaged more during episodic simulations, including eM and all eCFT, than during semantic simulations. Semantic simulations engaged more inferior temporal and lateral occipital regions. The only region that showed strong plausibility effects was the hippocampus, which was significantly engaged for implausible CFT but not for plausible CFT, suggestive of binding more disparate information. Consequences of these findings for the cognitive neuroscience of mental simulation are discussed. Published by Elsevier Inc.

  17. Artificial neural network model of pork meat cubes osmotic dehydration

    OpenAIRE

    Pezo, Lato L.; Ćurčić, Biljana Lj.; Filipović, Vladimir S.; Nićetin, Milica R.; Koprivica, Gordana B.; Mišljenović, Nevena M.; Lević, Ljubinko B.

    2013-01-01

    Mass transfer of pork meat cubes (M. triceps brachii), shaped as 1x1x1 cm, during osmotic dehydration (OD) and under atmospheric pressure was investigated in this paper. The effects of different parameters, such as concentration of sugar beet molasses (60-80%, w/w), temperature (20-50ºC), and immersion time (1-5 h) in terms of water loss (WL), solid gain (SG), final dry matter content (DM), and water activity (aw), were investigated using experimental results. Five artificial neural net...

  18. Optimal Hierarchical Modular Topologies for Producing Limited Sustained Activation of Neural Networks

    OpenAIRE

    Kaiser, Marcus; Hilgetag, Claus C.

    2010-01-01

    An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable regimes of network activation, typically arising from a limited parameter range. In this range of limited sustained activity (LSA), the activity of neural populations in the network persists between the extremes of either quickly dying out or activating the whole network. Hierarchical modular networks were previously found to show...

  19. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
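
    A minimal PyTorch sketch of the convolutional-plus-LSTM idea described above follows; the channel count, window length, class number and layer sizes are illustrative placeholders rather than the published configuration.

```python
# DeepConvLSTM-style model: 1D convolutions over time, then LSTM layers, then a classifier.
import torch
import torch.nn as nn

class DeepConvLSTM(nn.Module):
    def __init__(self, n_sensor_channels=113, n_classes=18, conv_filters=64, lstm_units=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensor_channels, conv_filters, kernel_size=5), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5), nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_filters, lstm_units, num_layers=2, batch_first=True)
        self.out = nn.Linear(lstm_units, n_classes)

    def forward(self, x):                          # x: (batch, time, sensor_channels)
        h = self.conv(x.transpose(1, 2))           # -> (batch, filters, time')
        h, _ = self.lstm(h.transpose(1, 2))        # -> (batch, time', lstm_units)
        return self.out(h[:, -1, :])               # classify from the last time step

model = DeepConvLSTM()
scores = model(torch.randn(4, 24, 113))            # 4 windows of 24 samples x 113 channels
print(scores.shape)                                # torch.Size([4, 18])
```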

  20. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ordóñez

    2016-01-01

    Full Text Available Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation.

  1. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.

  2. Modeling and Speed Control of Induction Motor Drives Using Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Jamuna

    2010-08-01

    Full Text Available Speed control of induction motor drives using neural networks is presented. A mathematical model of the single-phase induction motor is developed. A new Simulink model for a neural network-controlled bidirectional chopper-fed single-phase induction motor is proposed. Under normal operation, the true drive parameters are identified in real time and converted into the controller parameters through multilayer forward computation by neural networks. A comparative study has been made between the conventional and neural network controllers. It is observed that the neural network controlled drive system has better dynamic performance, reduced overshoot and faster transient response than the conventionally controlled system.

  3. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation.

    Science.gov (United States)

    Sameiro-Barbosa, Catia M; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system.

  4. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation

    Science.gov (United States)

    Sameiro-Barbosa, Catia M.; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system. PMID:27559306

  5. Stimulus Sensitivity of a Spiking Neural Network Model

    Science.gov (United States)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using mean-field approximation, the response of the network to a stimulus is computed and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet almost critical for a range of biologically relevant parameters.
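
    As a concrete illustration, the sketch below simulates a plain linear Hawkes process (exponential kernel) by Ogata's thinning algorithm and probes its response to a "stimulus", here a step increase in the baseline rate; the age-dependent variant analysed in the paper is more general, and all parameter values are illustrative.

```python
# Linear Hawkes process simulation via thinning, with a step stimulus in the baseline.
import numpy as np

rng = np.random.default_rng(0)
mu0, mu_stim, t_stim = 0.5, 1.5, 50.0    # baseline rate, stimulated rate, stimulus onset
alpha, beta, T = 0.8, 1.0, 100.0         # self-excitation jump, decay rate, time horizon

def excitation(t, events):
    past = events[events < t]
    return alpha * np.sum(np.exp(-beta * (t - past)))

events, t = np.array([]), 0.0
while t < T:
    lam_bar = max(mu0, mu_stim) + excitation(t, events)   # upper bound until next event
    t += rng.exponential(1.0 / lam_bar)
    lam_t = (mu_stim if t >= t_stim else mu0) + excitation(t, events)
    if t < T and rng.random() < lam_t / lam_bar:
        events = np.append(events, t)                     # accept the candidate spike

pre = np.sum(events < t_stim) / t_stim
post = np.sum(events >= t_stim) / (T - t_stim)
print(f"rate before stimulus: {pre:.2f}, after: {post:.2f} (events per unit time)")
```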

  6. How fast can we learn maximum entropy models of neural populations?

    Energy Technology Data Exchange (ETDEWEB)

    Ganmor, Elad; Schneidman, Elad [Department of Neuroscience, Weizmann Institute of Science, Rehovot 76100 (Israel); Segev, Ronen, E-mail: elad.ganmor@weizmann.ac.i, E-mail: elad.schneidman@weizmann.ac.i [Department of Life Sciences and Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel)

    2009-12-01

    Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for finding the pairwise model can be accurately estimated within seconds. Furthermore, while higher order statistics may, in theory, improve model accuracy, they are, in practice, harmful for times of up to 20 minutes due to sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.
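
    The sketch below fits a pairwise maximum entropy (Ising-like) model to binary population activity by exact gradient ascent on the likelihood; exhaustive enumeration of patterns is only feasible for small groups (n = 5 here), and the synthetic "recording" merely stands in for real spike trains.

```python
# Pairwise maximum entropy model: match firing rates and pairwise correlations.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, n_samples = 5, 20000
data = (rng.random((n_samples, n)) < 0.2).astype(float)
data[:, 1] = np.where(rng.random(n_samples) < 0.6, data[:, 0], data[:, 1])  # correlate cells 0,1

emp_mean = data.mean(axis=0)                     # firing rates <x_i>
emp_corr = (data.T @ data) / n_samples           # pairwise statistics <x_i x_j>

states = np.array(list(product([0.0, 1.0], repeat=n)))   # all 2^n activity patterns
h, J, lr = np.zeros(n), np.zeros((n, n)), 0.5
for _ in range(2000):
    energy = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(energy)
    p /= p.sum()
    model_mean = p @ states
    model_corr = (states * p[:, None]).T @ states
    dJ = emp_corr - model_corr
    np.fill_diagonal(dJ, 0.0)
    h += lr * (emp_mean - model_mean)            # match firing rates
    J += lr * dJ                                 # match pairwise correlations

print(np.round(model_mean, 3), np.round(emp_mean, 3))    # model vs measured rates
```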

  7. How fast can we learn maximum entropy models of neural populations?

    International Nuclear Information System (INIS)

    Ganmor, Elad; Schneidman, Elad; Segev, Ronen

    2009-01-01

    Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for finding the pairwise model can be accurately estimated within seconds. Furthermore, while higher order statistics may, in theory, improve model accuracy, they are, in practice, harmful for times of up to 20 minutes due to sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.

  8. Fast neutron spectra determination by threshold activation detectors using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Koohi-Fayegh, R.; Setayeshi, S.; Ghiassi-Nejad, M.

    2004-01-01

    A neural network method was used for fast neutron spectrum unfolding in spectrometry by threshold activation detectors. The input layer of the neural networks consisted of 11 neurons for the specific activities of neutron-induced nuclear reaction products, while the output layers were fast neutron spectra subdivided into 6, 8, 10, 12, 15 and 20 energy bins. Neural network training was performed with 437 fast neutron spectra and the corresponding threshold activation detector readings. The trained neural networks were applied to unfold 50 spectra that were not in the training sets, and the results were compared with the real spectra and with spectra unfolded by SANDII. The best results belong to the 10-energy-bin spectra. The neural network was also trained on detector readings with 5% uncertainty, and the response of the trained neural network to detector readings with 5%, 10%, 15%, 20%, 25% and 50% uncertainty was compared with the real spectra. The neural network algorithm, in comparison with other unfolding methods, is very fast, does not require the detector response matrix or any prior information about the spectra, and its outputs have low sensitivity to uncertainty in the activity measurements. The results show that the neural network algorithm is useful when a fast response is required with reasonable accuracy.
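
    A minimal sketch of the unfolding idea follows: an MLP is trained to map 11 threshold-detector activities to a 10-bin neutron spectrum. The response matrix and training spectra below are synthetic placeholders; in the study, 437 reference spectra were used.

```python
# Spectrum unfolding as a regression from detector activities to energy-bin fluences.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_detectors, n_bins, n_train = 11, 10, 437
R = rng.random((n_detectors, n_bins))            # toy detector response matrix

spectra = rng.random((n_train, n_bins))
spectra /= spectra.sum(axis=1, keepdims=True)    # normalised fluence per bin
activities = spectra @ R.T
activities *= 1.0 + 0.05 * rng.standard_normal(activities.shape)   # 5% uncertainty

net = MLPRegressor(hidden_layer_sizes=(30,), max_iter=5000, random_state=0)
net.fit(activities, spectra)

test = rng.random(n_bins)
test /= test.sum()
unfolded = net.predict((test @ R.T).reshape(1, -1))[0]
print(np.round(unfolded, 3), np.round(test, 3))  # unfolded vs true spectrum
```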

  9. Simultaneous surface and depth neural activity recording with graphene transistor-based dual-modality probes.

    Science.gov (United States)

    Du, Mingde; Xu, Xianchen; Yang, Long; Guo, Yichuan; Guan, Shouliang; Shi, Jidong; Wang, Jinfen; Fang, Ying

    2018-05-15

    Subdural surface and penetrating depth probes are widely applied to record neural activities from the cortical surface and intracortical locations of the brain, respectively. Simultaneous surface and depth neural activity recording is essential to understand the linkage between the two modalities. Here, we develop flexible dual-modality neural probes based on graphene transistors. The neural probes exhibit stable electrical performance even under 90° bending because of the excellent mechanical properties of graphene, and thus allow multi-site recording from the subdural surface of rat cortex. In addition, finite element analysis was carried out to investigate the mechanical interactions between probe and cortex tissue during intracortical implantation. Based on the simulation results, a sharp tip angle of π/6 was chosen to facilitate tissue penetration of the neural probes. Accordingly, the graphene transistor-based dual-modality neural probes have been successfully applied for simultaneous surface and depth recording of epileptiform activity of rat brain in vivo. Our results show that graphene transistor-based dual-modality neural probes can serve as a facile and versatile tool to study tempo-spatial patterns of neural activities. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. The neural basis of the bystander effect--the influence of group size on neural activity when witnessing an emergency.

    Science.gov (United States)

    Hortensius, Ruud; de Gelder, Beatrice

    2014-06-01

    Naturalistic observation and experimental studies in humans and other primates show that observing an individual in need automatically triggers helping behavior. The aim of the present study is to clarify the neurofunctional basis of social influences on individual helping behavior. We investigate whether when participants witness an emergency, while performing an unrelated color-naming task in an fMRI scanner, the number of bystanders present at the emergency influences neural activity in regions related to action preparation. The results show a decrease in activity with the increase in group size in the left pre- and postcentral gyri and left medial frontal gyrus. In contrast, regions related to visual perception and attention show an increase in activity. These results demonstrate the neural mechanisms of social influence on automatic action preparation that is at the core of helping behavior when witnessing an emergency. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Neural network modeling of chaotic dynamics in nuclear reactor flows

    International Nuclear Information System (INIS)

    Welstead, S.T.

    1992-01-01

    Neural networks have many scientific applications in areas such as pattern classification and time series prediction. The universal approximation property of these networks, however, can also be exploited to provide researchers with a tool for modeling observed nonlinear phenomena. It has been shown that multilayer feedforward networks can capture important global nonlinear properties, such as chaotic dynamics, merely by training the network on a finite set of observed data. The network itself then provides a model of the process that generated the data. Characterizations such as the existence and general shape of a strange attractor and the sign of the largest Lyapunov exponent can then be extracted from the neural network model. In this paper, the author applies this idea to data generated from a nonlinear process that is representative of convective flows that can arise in nuclear reactor applications. Such flows play a role in forced convection heat removal from pressurized water reactors and boiling water reactors, and decay heat removal from liquid-metal-cooled reactors, either by natural convection or by thermosyphons.
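
    A small sketch of this idea follows: a feedforward network is trained as a one-step predictor of a chaotic series (the logistic map stands in for the reactor-flow data), and the sign of the largest Lyapunov exponent is read off the learned map by finite differences.

```python
# Extract a Lyapunov-exponent estimate from a neural-network model of a 1-D chaotic map.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.empty(4000)
x[0] = 0.3
for t in range(1, len(x)):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])       # chaotic training series

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(x[:-1].reshape(-1, 1), x[1:])              # learn x_{t+1} = f(x_t)

# Largest Lyapunov exponent of the learned 1-D map: average of ln|f'(x)| along an orbit.
h = 1e-4
pts = x[1000:3000].reshape(-1, 1)
deriv = (net.predict(pts + h) - net.predict(pts - h)) / (2 * h)
lyap = np.mean(np.log(np.abs(deriv) + 1e-12))
print(f"estimated largest Lyapunov exponent: {lyap:.2f} (positive -> chaotic)")
```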

  12. Modelling electric trains energy consumption using Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez Fernandez, P.; Garcia Roman, C.; Insa Franco, R.

    2016-07-01

    Nowadays there is an evident concern regarding the efficiency and sustainability of the transport sector due to both the threat of climate change and the current financial crisis. This concern explains the growth of railways in recent years, as they present an inherent efficiency compared to other transport means. However, in order to further expand their role, it is necessary to optimise their energy consumption so as to increase their competitiveness. Improving railway energy efficiency requires both reliable data and modelling tools that allow the study of different variables and alternatives. With this need in mind, this paper presents the development of consumption models based on neural networks that calculate the energy consumption of electric trains. These networks have been trained on an extensive set of consumption data measured in line 1 of the Valencia Metro Network. Once trained, the neural networks provide a reliable estimation of the vehicles' consumption along a specific route when fed with input data such as train speed, acceleration or track longitudinal slope. These networks represent a useful modelling tool that may allow a deeper study of railway lines in terms of energy expenditure, with the objective of reducing the costs and environmental impact associated with railways. (Author)

  13. Neural activity associated with metaphor comprehension: spatial analysis.

    Science.gov (United States)

    Sotillo, María; Carretié, Luis; Hinojosa, José A; Tapia, Manuel; Mercado, Francisco; López-Martín, Sara; Albert, Jacobo

    2005-01-03

    Though neuropsychological data indicate that the right hemisphere (RH) plays a major role in metaphor processing, other studies suggest that, at least during some phases of this processing, a RH advantage may not exist. The present study explores, through a temporally agile neural signal (the event-related potentials, ERPs) and through source-localization algorithms applied to ERP recordings, whether or not the crucial phase of metaphor comprehension presents a RH advantage. Participants (n=24) were submitted to an S1-S2 experimental paradigm. S1 consisted of visually presented metaphoric sentences (e.g., "Green lung of the city"), followed by S2, which consisted of words that could (i.e., "Park") or could not (i.e., "Semaphore") be defined by S1. ERPs elicited by S2 were analyzed using temporal principal component analysis (tPCA) and source-localization algorithms. These analyses revealed that metaphorically related S2 words showed significantly higher N400 amplitudes than non-related S2 words. Source-localization algorithms showed differential activity between the two S2 conditions in the right middle/superior temporal areas. These results support the existence of an important RH contribution to (at least) one phase of metaphor processing and, furthermore, implicate the temporal cortex in that contribution.

  14. Trait motivation moderates neural activation associated with goal pursuit.

    Science.gov (United States)

    Spielberg, Jeffrey M; Miller, Gregory A; Warren, Stacie L; Engels, Anna S; Crocker, Laura D; Sutton, Bradley P; Heller, Wendy

    2012-06-01

    Research has indicated that regions of left and right dorsolateral prefrontal cortex (DLPFC) are involved in integrating the motivational and executive function processes related to, respectively, approach and avoidance goals. Given that sensitivity to pleasant and unpleasant stimuli is an important feature of conceptualizations of approach and avoidance motivation, it is possible that these regions of DLPFC are preferentially activated by valenced stimuli. The present study tested this hypothesis by using a task in which goal pursuit was threatened by distraction from valenced stimuli while functional magnetic resonance imaging data were collected. The analyses examined whether the impact of trait approach and avoidance motivation on the neural processes associated with executive function differed depending on the valence or arousal level of the distractor stimuli. The present findings support the hypothesis that the regions of DLPFC under investigation are involved in integrating motivational and executive function processes, and they also indicate the involvement of a number of other brain areas in maintaining goal pursuit. However, DLPFC did not display differential sensitivity to valence.

  15. Artificial Neural Network versus Linear Models Forecasting Doha Stock Market

    Science.gov (United States)

    Yousif, Adil; Elfaki, Faiz

    2017-12-01

    The purpose of this study is to determine the instability of the Doha stock market and to develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The aim is to establish the most useful model based on daily and monthly data collected from the Qatar exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange and for several other sectors. With the help of these models, the Doha stock market index and various other sectors were predicted. The study was conducted using various time series techniques to study and analyze data trends in order to produce appropriate results. After applying several models, such as the quadratic trend model, the double exponential smoothing model, and ARIMA, it was concluded that ARIMA (2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
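
    The sketch below illustrates the kind of comparison described above on a synthetic index series (the Qatar exchange data are not reproduced here): a linear ARIMA model versus an MLP trained on lagged values. The ARIMA order used is an arbitrary illustrative choice, not the order selected in the study.

```python
# ARIMA vs neural-network forecasting on a synthetic index series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
index = 100 + np.cumsum(0.1 + rng.standard_normal(n))       # random-walk-like "index"
train, test = index[:-30], index[-30:]

arima_fc = ARIMA(train, order=(2, 1, 2)).fit().forecast(steps=30)

lags = 5                                                     # MLP on lagged values
X = np.column_stack([train[i:len(train) - lags + i] for i in range(lags)])
y = train[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0).fit(X, y)

history = list(train[-lags:])
mlp_fc = []
for _ in range(30):                                          # recursive multi-step forecast
    nxt = mlp.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    mlp_fc.append(nxt)
    history.append(nxt)

rmse = lambda f: np.sqrt(np.mean((np.asarray(f) - test) ** 2))
print(f"ARIMA RMSE: {rmse(arima_fc):.2f}  MLP RMSE: {rmse(mlp_fc):.2f}")
```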

  16. Self-reported empathy and neural activity during action imitation and observation in schizophrenia.

    Science.gov (United States)

    Horan, William P; Iacoboni, Marco; Cross, Katy A; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K; Green, Michael F

    2014-01-01

    Although social cognitive impairments are key determinants of functional outcome in schizophrenia, their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and their correlates with self-reported empathy. 23 schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, or simply observed finger movements and facial emotional expressions. Between-group activation differences, as well as relationships between activation and self-reported empathy, were evaluated. Both patients and controls similarly activated neural systems previously associated with these tasks. We found no significant between-group differences in task-related activations. There were, however, between-group differences in the correlation between self-reported empathy and right inferior frontal (pars opercularis) activity during observation of facial emotional expressions. As in previous studies, controls demonstrated a positive association between brain activity and empathy scores. In contrast, the pattern in the patient group reflected a negative association between brain activity and empathy. Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self-perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest that patients show a disjunction between automatic neural responses to low-level social cues and higher-level, integrative social cognitive processes involved in self-perceived empathy.

  17. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically 'learn' complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence techniques in the modeling of industrial processes.

  18. An effective convolutional neural network model for Chinese sentiment analysis

    Science.gov (United States)

    Zhang, Yu; Chen, Mengdong; Liu, Lianzhong; Wang, Yadong

    2017-06-01

    Nowadays microblogging is getting more and more popular. People are increasingly accustomed to expressing their opinions on Twitter, Facebook and Sina Weibo. Sentiment analysis of microblogs has received significant attention, both in academia and in industry. So far, exploration of Chinese microblogs still needs much further work. In recent years CNNs have also been used for NLP tasks, and have already achieved good results. However, these methods ignore the effective use of a large number of existing sentiment resources. For this purpose, we propose a Lexicon-based Sentiment Convolutional Neural Networks (LSCNN) model focused on Weibo sentiment analysis, which combines two CNNs, trained individually on sentiment features and on word embeddings, at the fully connected hidden layer. The experimental results show that our model outperforms a CNN model that uses only word-embedding features on the microblog sentiment analysis task.
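
    A PyTorch sketch of the two-branch idea follows (assumed structure, not the paper's exact network): one CNN over word embeddings, one CNN over per-word lexicon/sentiment features, concatenated at the fully connected layer. Vocabulary size, feature dimensions and filter counts are placeholders.

```python
# Two-branch sentiment CNN: embedding branch + lexicon-feature branch joined at the FC layer.
import torch
import torch.nn as nn

class TwoBranchSentimentCNN(nn.Module):
    def __init__(self, vocab=5000, emb_dim=128, lex_dim=6, n_filters=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb_dim)
        self.conv_emb = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.conv_lex = nn.Conv1d(lex_dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(2 * n_filters, n_classes)

    def forward(self, tokens, lex_feats):
        # tokens: (batch, seq_len) word ids; lex_feats: (batch, seq_len, lex_dim)
        e = torch.relu(self.conv_emb(self.embed(tokens).transpose(1, 2)))
        l = torch.relu(self.conv_lex(lex_feats.transpose(1, 2)))
        e = torch.max(e, dim=2).values             # max-over-time pooling per branch
        l = torch.max(l, dim=2).values
        return self.fc(torch.cat([e, l], dim=1))   # branches meet at the FC layer

model = TwoBranchSentimentCNN()
out = model(torch.randint(0, 5000, (4, 30)), torch.rand(4, 30, 6))
print(out.shape)   # torch.Size([4, 2])
```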

  19. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques have been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction, which are backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster the data into churner and nonchurner groups and also to filter out unrepresentative data or outliers. Then, the clustered data are used by the second technique to assign customers to churner and nonchurner groups. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.
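
    A minimal sketch of such a hierarchical pipeline (clustering -> neural classification -> Cox survival model) is given below; the column names, synthetic data and the scikit-learn/lifelines components are illustrative assumptions, not the paper's implementation.

```python
# Three-stage churn pipeline sketch: cluster/filter, classify, then fit a Cox model.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "calls": rng.poisson(30, n),
    "charges": rng.gamma(2.0, 20.0, n),
    "tenure": rng.integers(1, 36, n),              # months observed
})
df["churned"] = (rng.random(n) < 1 / (1 + np.exp(0.05 * df["calls"] - 1))).astype(int)
X = df[["calls", "charges"]].to_numpy()

# Stage 1: cluster the data and drop the most atypical rows as outliers.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
keep = dist < np.percentile(dist, 95)
df2, X2 = df[keep], X[keep]

# Stage 2: a neural classifier assigns churner / nonchurner labels.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X2, df2["churned"])
predicted = clf.predict(X2)

# Stage 3: fit a Cox proportional hazards model on the correctly classified rows.
correct = df2[predicted == df2["churned"]]
cph = CoxPHFitter()
cph.fit(correct[["calls", "charges", "tenure", "churned"]],
        duration_col="tenure", event_col="churned")
cph.print_summary()
```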

  20. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model

    Science.gov (United States)

    Panda, Priyadarshini; Srinivasa, Narayan

    2018-01-01

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature action/movements enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962

  1. Models of neural networks IV early vision and attention

    CERN Document Server

    Cowan, Jack; Domany, Eytan

    2002-01-01

    Close this book for a moment and look around you. You scan the scene by directing your attention, and gaze, at certain specific objects. Despite the background, you discern them. The process is partially intentional and partially preattentive. How all this can be done is described in the fourth volume of Models of Neural Networks, devoted to Early Vision and Attention, that you are holding in your hands. Early vision comprises the first stages of visual information processing. It is as such a scientific challenge whose clarification calls for a penetrating review. Here you see the result. The Heraeus Foundation (Hanau) is to be thanked for its support during the initial phase of this project. John Hertz, who has extensive experience in both computational and experimental neuroscience, provides in "Neurons, Networks, and Cognition" an introduction to neural modeling. John Van Opstal explains in a theoretical introduction "The Gaze Control System" how the eye's gaze control is performed and presents a novel theoretical des...

  2. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    Directory of Open Access Journals (Sweden)

    Mehrshad Salmasi

    2012-07-01

    Full Text Available Active noise control is based on the destructive interference between the primary noise and the noise generated from the secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks in active cancellation of sound noise is evaluated. For this reason, feedforward and recurrent neural networks are designed and trained. After training, the performance of the feedforward and recurrent networks in noise attenuation is compared. We use the Elman network as a recurrent neural network. For the simulations, noise signals from the SPIB database are used. In order to compare the networks appropriately, an equal number of layers and neurons is considered for the networks. Moreover, the training and test samples are similar. Simulation results show that feedforward and recurrent neural networks present good performance in noise cancellation. The recurrent neural network attenuates noise better than the feedforward network.

  3. Activity in part of the neural correlates of consciousness reflects integration.

    Science.gov (United States)

    Eriksson, Johan

    2017-10-01

    Integration is commonly viewed as a key process for generating conscious experiences. Accordingly, there should be increased activity within the neural correlates of consciousness when demands on integration increase. We used fMRI and "informational masking" to isolate the neural correlates of consciousness and measured how the associated brain activity changed as a function of required integration. Integration was manipulated by comparing the experience of hearing simple reoccurring tones to hearing harmonic tone triplets. The neural correlates of auditory consciousness included superior temporal gyrus, lateral and medial frontal regions, cerebellum, and also parietal cortex. Critically, only activity in left parietal cortex increased significantly as a function of increasing demands on integration. We conclude that integration can explain part of the neural activity associated with the generation of conscious experiences, but that much of the associated brain activity apparently reflects other processes. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants-heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
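
    The sketch below compares a linear model with a small neural network for predicting valence and arousal ratings from physiological features; the feature matrix is synthetic and stands in for the heart-rate, respiration, skin-conductance and facial-EMG features described above.

```python
# Linear regression vs MLP for predicting valence/arousal from physiological features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 240, 10                 # e.g. 20 listeners x 12 excerpts
X = rng.standard_normal((n_trials, n_features))
valence = np.tanh(X[:, 0] * X[:, 1]) + 0.3 * rng.standard_normal(n_trials)  # nonlinear target
arousal = X[:, 2] + 0.5 * X[:, 3] + 0.3 * rng.standard_normal(n_trials)     # near-linear target

for name, target in [("valence", valence), ("arousal", arousal)]:
    lin = cross_val_score(LinearRegression(), X, target, cv=5, scoring="r2").mean()
    mlp = cross_val_score(MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                       random_state=0), X, target, cv=5, scoring="r2").mean()
    print(f"{name}: linear R^2 = {lin:.2f}, MLP R^2 = {mlp:.2f}")
```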

  5. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  6. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward

    Science.gov (United States)

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios

    2014-01-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. PMID:25008408

  7. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    Full Text Available A short-term load forecasting model based on quantum Elman neural networks is constructed in this paper. Quantum computation and the Elman feedback mechanism are integrated into the quantum Elman neural network. Quantum computation can effectively improve the approximation capability and the information-processing ability of the neural network. Quantum Elman neural networks have not only feedforward connections but also feedback connections. The feedback connection between the hidden nodes and the context nodes constitutes state feedback within the internal system, which provides a specific dynamic memory capability. Phase space reconstruction theory is the theoretical basis for constructing the forecasting model, and the training samples are formed by means of a K-nearest neighbor approach. In the example simulation, the test results show that the model based on quantum Elman neural networks outperforms the models based on the quantum feedforward neural network, the conventional Elman neural network, and the conventional feedforward neural network, so the proposed model can effectively improve prediction accuracy. The research in this paper lays a theoretical foundation for the practical engineering application of the short-term load forecasting model based on quantum Elman neural networks.
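    The sketch below shows the classical Elman architecture that underlies the model, with context units copying the previous hidden state to provide the feedback connection described above; the quantum extension is not reproduced, and the load series, window length and layer sizes are placeholder assumptions.

```python
# Hedged sketch of a classical Elman network for one-step-ahead load forecasting.
import numpy as np

rng = np.random.default_rng(2)

class Elman:
    def __init__(self, n_in, n_hidden, n_out):
        s = 0.1
        self.Wx = s * rng.standard_normal((n_hidden, n_in))
        self.Wc = s * rng.standard_normal((n_hidden, n_hidden))  # context -> hidden
        self.Wo = s * rng.standard_normal((n_out, n_hidden))
        self.context = np.zeros(n_hidden)

    def step(self, x):
        h = np.tanh(self.Wx @ x + self.Wc @ self.context)
        self.context = h            # state feedback: hidden state copied to context
        return self.Wo @ h

# Toy usage (untrained, structure only): forecast from a window of past loads.
load = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)
net = Elman(n_in=24, n_hidden=12, n_out=1)
pred = [net.step(load[i - 24:i])[0] for i in range(24, len(load))]
print("untrained forecast RMSE:",
      np.sqrt(np.mean((np.array(pred) - load[24:]) ** 2)).round(3))
```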

  8. Adaptive Neural-Sliding Mode Control of Active Suspension System for Camera Stabilization

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-01-01

    Full Text Available The camera on a moving vehicle always suffers from image instability due to the unintentional vibrations caused by road roughness. This paper presents a novel adaptive neural network based sliding mode control strategy to stabilize the image capture area of the camera. The purpose is to suppress the vertical displacement of the sprung mass through the application of an active suspension system. Since the active suspension system has nonlinear and time-varying characteristics, an adaptive neural network (ANN) is proposed to make the controller robust against system uncertainties, which relaxes the model-based requirement of sliding mode control; the weighting matrix is adjusted online according to a Lyapunov function. The control system consists of two loops: the outer loop is a position controller designed with a sliding mode strategy, while the PID controller in the inner loop tracks the desired force. Closed-loop stability and asymptotic convergence can be guaranteed on the basis of Lyapunov stability theory. Finally, the simulation results show that the employed controller effectively suppresses the vibration of the camera and enhances its stabilization, where different excitations are considered to validate the system performance.

  9. Daily rainfall-runoff modelling by neural networks in semi-arid zone ...

    African Journals Online (AJOL)

    This research work assesses the efficiency of formal neural networks for modelling the flows of the wadi Ouahrane basin from the rainfall-runoff relation, which is non-linear. Two neural network models were optimized through supervised learning and compared in order to achieve this goal, the first model with input rain, and ...

  10. A Neural Network Model for Prediction of Sound Quality

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An artificial neural network structure has been specified, implemented and optimized for the purpose of predicting the perceived sound quality for normal-hearing and hearing-impaired subjects. The network was implemented by means of commercially available software and optimized to predict results...... obtained in subjective sound quality rating experiments based on input data from an auditory model. Various types of input data and data representations from the auditory model were used as input data for the chosen network structure, which was a three-layer perceptron. This network was trained by means...... the physical signal parameters and the subjectively perceived sound quality. No simple objective-subjective relationship was evident from this analysis....

  11. A case study to estimate costs using Neural Networks and regression based models

    Directory of Open Access Journals (Sweden)

    Nadia Bhuiyan

    2012-07-01

    Full Text Available Bombardier Aerospace’s high-performance aircraft and services set the standard for the aerospace industry. A case study in collaboration with Bombardier Aerospace is conducted in order to estimate the target cost of a landing gear. More precisely, the study uses both a parametric model and neural network models to estimate the cost of main landing gears, a major aircraft commodity. A comparative analysis between the parametric model and the neural network models is carried out in order to determine the most accurate method to predict the cost of a main landing gear. Several trials are presented for the design and use of the neural network model. The analysis for the case under study shows the flexibility in the design of the neural network model. Furthermore, the performance of the neural network model is deemed superior to that of the parametric models for this case study.
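    For concreteness, the following sketch fits a parametric cost-estimating relationship of the kind the neural network model is compared against; the data are synthetic and the cost drivers (weight, design load) are illustrative assumptions, not Bombardier's attributes.

```python
# Hedged sketch of a parametric cost-estimating relationship (CER).
import numpy as np

rng = np.random.default_rng(3)
n = 40
weight = rng.uniform(200, 800, n)        # landing-gear weight (kg), hypothetical driver
load   = rng.uniform(5, 50, n)           # design load (t), hypothetical driver
cost   = 3.0 * weight**0.7 * load**0.4 * np.exp(0.1 * rng.standard_normal(n))

# Fit cost = a * weight^b * load^c by least squares in log space.
A = np.c_[np.ones(n), np.log(weight), np.log(load)]
coef, *_ = np.linalg.lstsq(A, np.log(cost), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
pred = a * weight**b * load**c
mape = np.mean(np.abs(pred - cost) / cost) * 100
print(f"parametric CER: cost ~ {a:.2f} * weight^{b:.2f} * load^{c:.2f}, MAPE = {mape:.1f}%")
# A neural network model would replace this fixed functional form with a
# learned nonlinear mapping from the same attributes to cost.
```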

  12. Social exclusion in middle childhood: rejection events, slow-wave neural activity, and ostracism distress.

    Science.gov (United States)

    Crowley, Michael J; Wu, Jia; Molfese, Peter J; Mayes, Linda C

    2010-01-01

    This study examined neural activity with event-related potentials (ERPs) in middle childhood during a computer-simulated ball-toss game, Cyberball. After experiencing fair play initially, children were ultimately excluded by the other players. We focused specifically on “not my turn” events within fair play and rejection events within social exclusion. Dense-array ERPs revealed that rejection events are perceived rapidly. Condition differences (“not my turn” vs. rejection) were evident in a posterior ERP peaking at 420 ms, consistent with a larger P3 effect for rejection events, indicating that in middle childhood rejection events are differentiated in <500 ms. Condition differences were evident for slow-wave activity (500-900 ms) in the medial frontal cortical region and the posterior occipital-parietal region, with rejection events more negative frontally and more positive posteriorly. Distress from the rejection experience was associated with a more negative frontal slow wave and a larger late positive slow wave, but only for rejection events. Source modeling with GeoSource software suggested that slow-wave neural activity in cortical regions previously identified in functional imaging studies of ostracism, including subgenual cortex, ventral anterior cingulate cortex, and insula, was greater for rejection events vs. “not my turn” events. © 2010 Psychology Press

  13. Anisotropy of ongoing neural activity in the primate visual cortex

    Directory of Open Access Journals (Sweden)

    Maier A

    2014-09-01

    Full Text Available Alexander Maier,1 Michele A Cox,1 Kacie Dougherty,1 Brandon Moore,1 David A Leopold2 1Department of Psychology, College of Arts and Science, Vanderbilt University, Nashville, TN, USA; 2Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, National Institute of Health, Bethesda, MD, USA Abstract: The mammalian neocortex features distinct anatomical variation in its tangential and radial extents. This review consolidates previously published findings from our group in order to compare and contrast the spatial profile of neural activity coherence across these distinct cortical dimensions. We focus on studies of ongoing local field potential (LFP data obtained simultaneously from multiple sites in the primary visual cortex in two types of experiments in which electrode contacts were spaced either along the cortical surface or at different laminar positions. These studies demonstrate that across both dimensions the coherence of ongoing LFP fluctuations diminishes as a function of interelectrode distance, although the nature and spatial scale of this falloff is very different. Along the cortical surface, the overall LFP coherence declines gradually and continuously away from a given position. In contrast, across the cortical layers, LFP coherence is discontinuous and compartmentalized as a function of depth. Specifically, regions of high LFP coherence fall into discrete superficial and deep laminar zones, with an abrupt discontinuity between the granular and infragranular layers. This spatial pattern of ongoing LFP coherence is similar when animals are at rest and when they are engaged in a behavioral task. These results point to the existence of partially segregated laminar zones of cortical processing that extend tangentially within the laminar compartments and are thus oriented orthogonal to the cortical columns. We interpret these electrophysiological observations in light of the known anatomical organization of

  14. Artificial Neural Network Modelling of the Energy Content of Municipal Solid Wastes in Northern Nigeria

    Directory of Open Access Journals (Sweden)

    M. B. Oumarou

    2017-12-01

    Full Text Available The study presents an application of an artificial neural network model using the back-propagation learning algorithm to predict the actual calorific value of the municipal solid waste in major cities of the northern part of Nigeria, which have high population densities and intense industrial activities. These cities are: Kano, Damaturu, Dutse, Bauchi, Birnin Kebbi, Gusau, Maiduguri, Katsina and Sokoto. Experimental data on the energy content and the physical characterization of the municipal solid waste serve as the input parameters, in the form of the wood, grass, metal, plastic, food remnant, leaf, glass and paper fractions. Comparative studies were made using the developed model, the experimental results and a correlation earlier developed by the authors to predict the energy content. In predicting the actual calorific value, the maximum error was 0.94% for the artificial neural network model and 5.20% for the statistical correlation. The network with eight neurons in the hidden layer, with R2 = 0.96881, is stable and optimal. This study showed that the artificial neural network approach can successfully be used for energy content predictions from municipal solid wastes in Northern Nigeria and other areas with a similar waste stream and composition.

  15. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    Science.gov (United States)

    Kanevski, Mikhail

    2015-04-01

    The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN with their ability to select features and efficient modelling of complex high dimensional data can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
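    A minimal sketch of the underlying estimator is given below: Nadaraya-Watson (GRNN) regression with one kernel bandwidth per input feature, so that inflating a bandwidth effectively switches the corresponding feature off. The data, bandwidth values and leave-one-out scoring are illustrative assumptions.

```python
# Hedged sketch of an anisotropic GRNN (Nadaraya-Watson kernel regression).
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigmas):
    """Anisotropic GRNN: sigmas holds one kernel width per feature."""
    d = (X_query[:, None, :] - X_train[None, :, :]) / sigmas   # scaled differences
    w = np.exp(-0.5 * np.sum(d ** 2, axis=2))                  # Gaussian kernel weights
    return (w @ y_train) / np.maximum(w.sum(axis=1), 1e-12)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 3))           # e.g. two coordinates plus one geo-feature
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)  # X[:, 2] irrelevant

def loo_rmse(sigmas):
    """Leave-one-out error, the criterion used to rank candidate models."""
    errs = []
    for i in range(len(X)):
        m = np.arange(len(X)) != i
        errs.append(grnn_predict(X[m], y[m], X[i:i + 1], sigmas)[0] - y[i])
    return np.sqrt(np.mean(np.square(errs)))

print("isotropic sigmas:  ", loo_rmse(np.array([0.3, 0.3, 0.3])).round(3))
print("anisotropic sigmas:", loo_rmse(np.array([0.2, 0.4, 5.0])).round(3))  # 3rd feature down-weighted
```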

  16. Neural network modeling of nonlinear systems based on Volterra series extension of a linear model

    Science.gov (United States)

    Soloway, Donald I.; Bialasiewicz, Jan T.

    1992-01-01

    A Volterra series approach was applied to the identification of nonlinear systems which are described by a neural network model. A procedure is outlined by which a mathematical model can be developed from experimental data obtained from the network structure. Applications of the results to the control of robotic systems are discussed.

  17. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    such as the neural network model is not appropriate if the data is generated by a linear mechanism. Hence, it might be appropriate to test the null of linearity prior to building a nonlinear model. We investigate whether this kind of pretesting improves the forecast accuracy compared to the case where...

  18. Neural network models for biological waste-gas treatment systems.

    Science.gov (United States)

    Rene, Eldon R; Estefanía López, M; Veiga, María C; Kennes, Christian

    2011-12-15

    This paper outlines the procedure for developing artificial neural network (ANN) based models for three bioreactor configurations used for waste-gas treatment. The three bioreactor configurations chosen for this modelling work were: biofilter (BF), continuous stirred tank bioreactor (CSTB) and monolith bioreactor (MB). Using styrene as the model pollutant, this paper also serves as a general database of information pertaining to the bioreactor operation and important factors affecting gas-phase styrene removal in these biological systems. Biological waste-gas treatment systems are considered to be both advantageous and economically effective in treating a stream of polluted air containing low to moderate concentrations of the target contaminant, over a rather wide range of gas-flow rates. The bioreactors were inoculated with the fungus Sporothrix variecibatus, and their performances were evaluated at different empty bed residence times (EBRT), and at different inlet styrene concentrations (C(i)). The experimental data from these bioreactors were modelled to predict the bioreactors performance in terms of their removal efficiency (RE, %), by adequate training and testing of a three-layered back propagation neural network (input layer-hidden layer-output layer). Two models (BIOF1 and BIOF2) were developed for the BF with different combinations of easily measurable BF parameters as the inputs, that is concentration (gm(-3)), unit flow (h(-1)) and pressure drop (cm of H(2)O). The model developed for the CSTB used two inputs (concentration and unit flow), while the model for the MB had three inputs (concentration, G/L (gas/liquid) ratio, and pressure drop). Sensitivity analysis in the form of absolute average sensitivity (AAS) was performed for all the developed ANN models to ascertain the importance of the different input parameters, and to assess their direct effect on the bioreactors performance. The performance of the models was estimated by the regression

  19. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    OpenAIRE

    Mehrshad Salmasi; Homayoun Mahdavi-Nasab

    2012-01-01

    Active noise control is based on the destructive interference between the primary noise and generated noise from the secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, performance of the neural networks is evaluated in active cancellation of sound noise. For this reason, feedforward and recurrent neural networks are designed and trained. After training, performance of the feedforward and recurrent networks in n...

  20. Parasympathetic neural activity accounts for the lowering of exercise heart rate at high altitude

    DEFF Research Database (Denmark)

    Boushel, Robert Christopher; Calbet, J A; Rådegran, G

    2001-01-01

    In chronic hypoxia, both heart rate (HR) and cardiac output (Q) are reduced during exercise. The role of parasympathetic neural activity in lowering HR is unresolved, and its influence on Q and oxygen transport at high altitude has never been studied....

  1. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    OpenAIRE

    Francisco Javier Ordóñez; Daniel Roggen

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we pro...

  2. Sustained Activity in Hierarchical Modular Neural Networks: Self-Organized Criticality and Oscillations

    Science.gov (United States)

    Wang, Sheng-Jun; Hilgetag, Claus C.; Zhou, Changsong

    2010-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information

  3. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.
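    The following conceptual sketch (not the DataHigh code) illustrates the core idea: reduce population activity to a low-dimensional latent space and then smoothly rotate the 2-d projection plane, which is what navigating the continuum of projections amounts to. The synthetic activity and the specific rotation parameterization are assumptions.

```python
# Hedged sketch: PCA latents plus a one-parameter family of 2-d projections.
import numpy as np

rng = np.random.default_rng(5)
# Synthetic trial-averaged activity: 50 neurons x 100 time bins, low-rank plus noise.
latent_true = np.cumsum(rng.standard_normal((4, 100)), axis=1)
activity = rng.standard_normal((50, 4)) @ latent_true + 0.5 * rng.standard_normal((50, 100))

# PCA to a handful of latent dimensions (here 4).
Xc = activity - activity.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(Xc, full_matrices=False)
latents = U[:, :4].T @ Xc                      # 4 x 100 latent timecourses

def projection_plane(theta):
    """2-d projection that rotates from the (1,2) plane toward the (3,4) plane."""
    c, s = np.cos(theta), np.sin(theta)
    P = np.array([[c, 0, s, 0],
                  [0, c, 0, s]])               # rows stay orthonormal for any theta
    return P @ latents                          # 2 x 100 projected trajectory

for theta in np.linspace(0, np.pi / 2, 5):
    traj = projection_plane(theta)
    print(f"theta={theta:.2f}  projected variance={traj.var(axis=1).sum():.2f}")
```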

  4. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    Science.gov (United States)

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  5. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250

  6. Memory and learning in a class of neural network models

    International Nuclear Information System (INIS)

    Wallace, D.J.

    1986-01-01

    The author discusses memory and learning properties of the neural network model now identified with Hopfield's work. The model, how it attempts to abstract some key features of the nervous system, and the sense in which learning and memory are identified in the model are described. A brief report is presented on the important role of phase transitions in the model and their implications for memory capacity. The results of numerical simulations obtained using the ICL Distributed Array Processors at Edinburgh are presented. A summary is presented on how the fraction of images which are perfectly stored depends on the number of nodes and the number of nominal images which one attempts to store using the prescription in Hopfield's paper. Results are presented on the second phase transition in the model, which corresponds to almost total loss of storage capacity as the number of nominal images is increased. Results are given on the performance of a new iterative algorithm for exact storage of up to N images in an N-node model.
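    The storage prescription referred to above is illustrated by the following sketch of a Hopfield-type network with Hebbian weights and asynchronous recall; the network size and number of nominal images are placeholders chosen below the capacity limit.

```python
# Hedged sketch of Hebbian storage and recall in a Hopfield-type network.
import numpy as np

rng = np.random.default_rng(6)
N, P = 200, 20                                   # nodes, nominal images
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian prescription: W_ij = (1/N) * sum over images of xi_i * xi_j, zero diagonal.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(state, sweeps=10):
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):             # asynchronous updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Perturb a stored image and check whether it is restored.
probe = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
probe[flip] *= -1
restored = recall(probe)
print("overlap with stored image:", (restored @ patterns[0]) / N)
```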

  7. Ground Motion Prediction Model Using Artificial Neural Network

    Science.gov (United States)

    Dhanya, J.; Raghukanth, S. T. G.

    2018-03-01

    This article focuses on developing a ground motion prediction equation based on the artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining a genetic algorithm and the Levenberg-Marquardt method is used for training the model. The present model is developed to predict peak ground velocity and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to the rupture plane (Rrup), shear wave velocity in the region (Vs30) and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by the Pacific Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns, including the weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with the existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city, located in the Himalayan region.

  8. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  9. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  10. Model-Based Fault Diagnosis in Electric Drive Inverters Using Artificial Neural Network

    National Research Council Canada - National Science Library

    Masrur, Abul; Chen, ZhiHang; Zhang, Baifang; Jia, Hongbin; Murphey, Yi-Lu

    2006-01-01

    .... A normal model and various faulted models of the inverter-motor combination were developed, and voltages and current signals were generated from those models to train an artificial neural network for fault diagnosis...

  11. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  12. Modeling Distillation Column Using ARX Model Structure and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Reza Pirmoradi

    2012-04-01

    Full Text Available Distillation is a complex and highly nonlinear industrial process. In general it is not always possible to obtain accurate first-principles models for high-purity distillation columns. On the other hand, the development of first-principles models is usually time consuming and expensive. To overcome these problems, empirical models such as neural networks can be used. One major drawback of empirical models is that the prediction is valid only inside the data domain that is sufficiently covered by measurement data. Modeling distillation columns by means of neural networks has been reported in the literature using recursive networks. Recursive networks are suitable for modeling purposes, but such models suffer from high complexity and high computational cost. The objective of this paper is to propose a simple and reliable model for the distillation column. The proposed model uses feedforward neural networks, which results in a simpler model with fewer parameters and faster training. Simulation results demonstrate that the predictions of the proposed model in all regions are close to the outputs of the dynamic model and the error is negligible. This implies that the model is reliable in all regions.
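    As a point of comparison for the neural model, the sketch below identifies a linear ARX model by least squares on lagged inputs and outputs of a synthetic stand-in plant; replacing the linear map with a feedforward network fed the same lagged regressors gives the kind of model described above. The plant equation and model orders are assumptions.

```python
# Hedged sketch of ARX identification by least squares on a synthetic plant.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
u = rng.uniform(-1, 1, n)                       # manipulated variable (e.g. reflux), placeholder
y = np.zeros(n)                                 # controlled variable (e.g. top composition), placeholder
for k in range(1, n):
    y[k] = 0.8 * y[k - 1] + 0.4 * u[k - 1] - 0.05 * y[k - 1] ** 3 \
           + 0.01 * rng.standard_normal()

# ARX(2,2): y[k] ~ a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
Phi = np.c_[y[1:-1], y[:-2], u[1:-1], u[:-2]]
target = y[2:]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
pred = Phi @ theta
print("ARX parameters:", theta.round(3),
      " one-step RMSE:", np.sqrt(np.mean((pred - target) ** 2)).round(4))
# Feeding the same lagged regressors into a feedforward network gives the
# (N)ARX-style neural model discussed in the record.
```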

  13. Neural Activity during Encoding Predicts False Memories Created by Misinformation

    Science.gov (United States)

    Okado, Yoko; Stark, Craig E. L.

    2005-01-01

    False memories are often demonstrated using the misinformation paradigm, in which a person's recollection of a witnessed event is altered after exposure to misinformation about the event. The neural basis of this phenomenon, however, remains unknown. The authors used fMRI to investigate encoding processes during the viewing of an event and…

  14. Voltage Estimation in Active Distribution Grids Using Neural Networks

    DEFF Research Database (Denmark)

    Pertl, Michael; Heussen, Kai; Gehrke, Oliver

    2016-01-01

    the observability of distribution systems has to be improved. To increase the situational awareness of the power system operator, data-driven methods can be employed. These methods benefit from newly available data sources such as smart meters. This paper presents a voltage estimation method based on neural networks...

  15. Active Control of Sound based on Diagonal Recurrent Neural Network

    NARCIS (Netherlands)

    Jayawardhana, Bayu; Xie, Lihua; Yuan, Shuqing

    2002-01-01

    Recurrent neural networks are known for their dynamic mapping and are better suited to nonlinear dynamical systems. A nonlinear controller may be needed in cases where the actuators exhibit nonlinear characteristics, or where the structure to be controlled exhibits nonlinear behavior. The

  16. Connectivity effects in the dynamic model of neural networks

    International Nuclear Information System (INIS)

    Choi, J; Choi, M Y; Yoon, B-G

    2009-01-01

    We study, via extensive Monte Carlo calculations, the effects of connectivity in the dynamic model of neural networks, to observe that the Mattis-state order parameter increases with the number of coupled neurons. Such effects appear more pronounced when the average number of connections is increased by introducing shortcuts in the network. In particular, the power spectra of the order parameter at stationarity are found to exhibit power-law behavior, depending on how the average number of connections is increased. The cluster size distribution of the 'memory-unmatched' sites also follows a power law and possesses strong correlations with the power spectra. It is further observed that the distribution of waiting times for neuron firing fits roughly to a power law, again depending on how neuronal connections are increased

  17. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit applied to prediction of discrete time...... series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical...... of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal...

  18. Early Model of Traffic Sign Reminder Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Budi Rahmani

    2012-12-01

    Full Text Available Recognizing the traffic signs installed along the streets is one of the requirements of driving on the road. Laxity in driving may result in traffic accidents. This paper describes a real-time reminder model that utilizes a camera installed in a car to capture images of traffic signs, which are then processed to inform the driver. Feature extraction based on morphological structuring elements (strel) is used in this paper. An artificial neural network is used to train the system and to produce the final decision. The results show that the accuracy in detecting and recognizing the ten types of traffic signs in real time is 80%.

  19. Neural Network Modeling to Predict Shelf Life of Greenhouse Lettuce

    Directory of Open Access Journals (Sweden)

    Wei-Chin Lin

    2009-04-01

    Full Text Available Greenhouse-grown butter lettuce (Lactuca sativa L. can potentially be stored for 21 days at constant 0°C. When storage temperature was increased to 5°C or 10°C, shelf life was shortened to 14 or 10 days, respectively, in our previous observations. Also, commercial shelf life of 7 to 10 days is common, due to postharvest temperature fluctuations. The objective of this study was to establish neural network (NN models to predict the remaining shelf life (RSL under fluctuating postharvest temperatures. A box of 12 - 24 lettuce heads constituted a sample unit. The end of the shelf life of each head was determined when it showed initial signs of decay or yellowing. Air temperatures inside a shipping box were recorded. Daily average temperatures in storage and averaged shelf life of each box were used as inputs, and the RSL was modeled as an output. An R2 of 0.57 could be observed when a simple NN structure was employed. Since the "future" (or remaining storage temperatures were unavailable at the time of making a prediction, a second NN model was introduced to accommodate a range of future temperatures and associated shelf lives. Using such 2-stage NN models, an R2 of 0.61 could be achieved for predicting RSL. This study indicated that NN modeling has potential for cold chain quality control and shelf life prediction.

  20. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations

    Directory of Open Access Journals (Sweden)

    Sheng-Jun Wang

    2011-06-01

    Full Text Available Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. They are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality. We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. It was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We find that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and self-organized criticality, which are not present in the respective random networks. The underlying mechanism is that each dense module cannot sustain activity on its own, but displays self-organized criticality in the presence of weak perturbations. The hierarchical modular networks provide the coupling among subsystems with self-organized criticality. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical state and predictability and timing of oscillations for efficient

  1. Modelling a variable valve timing spark ignition engine using different neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beham, M. [BMW AG, Munich (Germany); Yu, D.L. [John Moores University, Liverpool (United Kingdom). Control Systems Research Group

    2004-10-01

    In this paper different neural networks (NN) are compared for modelling a variable valve timing spark-ignition (VVT SI) engine. The overall system is divided for each output into five neural multi-input single output (MISO) subsystems. Three kinds of NN, multilayer Perceptron (MLP), pseudo-linear radial basis function (PLRBF), and local linear model tree (LOLIMOT) networks, are used to model each subsystem. Real data were collected when the engine was under different operating conditions and these data are used in training and validation of the developed neural models. The obtained models are finally tested in a real-time online model configuration on the test bench. The neural models run independently of the engine in parallel mode. The model outputs are compared with process output and compared among different models. These models performed well and can be used in the model-based engine control and optimization, and for hardware in the loop systems. (author)

  2. An Adaptive Neural Mechanism with a Lizard Ear Model for Binaural Acoustic Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2016-01-01

    expensive algorithms. We present a novel bioinspired solution to acoustic tracking that uses only two microphones. The system is based on a neural mechanism coupled with a model of the peripheral auditory system of lizards. The peripheral auditory model provides sound direction information which the neural...

  3. The fiber-optic imaging and manipulation of neural activity during animal behavior.

    Science.gov (United States)

    Miyamoto, Daisuke; Murayama, Masanori

    2016-02-01

    Recent progress with optogenetic probes for imaging and manipulating neural activity has further increased the relevance of fiber-optic systems for neural circuitry research. Optical fibers, which bi-directionally transmit light between separate sites (even at a distance of several meters), can be used for either optical imaging or manipulating neural activity relevant to behavioral circuitry mechanisms. The method's flexibility and the specifications of the light structure are well suited for following the behavior of freely moving animals. Furthermore, thin optical fibers allow researchers to monitor neural activity from not only the cortical surface but also deep brain regions, including the hippocampus and amygdala. Such regions are difficult to target with two-photon microscopes. Optogenetic manipulation of neural activity with an optical fiber has the advantage of being selective for both cell-types and projections as compared to conventional electrophysiological brain tissue stimulation. It is difficult to extract any data regarding changes in neural activity solely from a fiber-optic manipulation device; however, the readout of data is made possible by combining manipulation with electrophysiological recording, or the simultaneous application of optical imaging and manipulation using a bundle-fiber. The present review introduces recent progress in fiber-optic imaging and manipulation methods, while also discussing fiber-optic system designs that are suitable for a given experimental protocol. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  4. Neural activity in the medial temporal lobe reveals the fidelity of mental time travel.

    Science.gov (United States)

    Kragel, James E; Morton, Neal W; Polyn, Sean M

    2015-02-18

    Neural circuitry in the medial temporal lobe (MTL) is critically involved in mental time travel, which involves the vivid retrieval of the details of past experience. Neuroscientific theories propose that the MTL supports memory of the past by retrieving previously encoded episodic information, as well as by reactivating a temporal code specifying the position of a particular event within an episode. However, the neural computations supporting these abilities are underspecified. To test hypotheses regarding the computational mechanisms supported by different MTL subregions during mental time travel, we developed a computational model that linked a blood oxygenation level-dependent signal to cognitive operations, allowing us to predict human performance in a memory search task. Activity in the posterior MTL, including parahippocampal cortex, reflected how strongly one reactivates the temporal context of a retrieved memory, allowing the model to predict whether the next memory will correspond to a nearby moment in the study episode. A signal in the anterior MTL, including perirhinal cortex, indicated the successful retrieval of list items, without providing information regarding temporal organization. A hippocampal signal reflected both processes, consistent with theories that this region binds item and context information together to form episodic memories. These findings provide evidence for modern theories that describe complementary roles of the hippocampus and surrounding parahippocampal and perirhinal cortices during the retrieval of episodic memories, shaping how humans revisit the past. Copyright © 2015 the authors 0270-6474/15/352914-13$15.00/0.

  5. A customizable stochastic state point process filter (SSPPF) for neural spiking activity.

    Science.gov (United States)

    Xin, Yao; Li, Will X Y; Min, Biao; Han, Yan; Cheung, Ray C C

    2013-01-01

    The Stochastic State Point Process Filter (SSPPF) is effective for adaptive signal processing. In particular, it has been successfully applied to neural signal coding/decoding in recent years. Recent work has proven its efficiency in non-parametric coefficient tracking when modeling the mammalian nervous system. However, the existing SSPPF has only been realized on commercial software platforms, which limits its computational capability. In this paper, the first hardware architecture for the SSPPF has been designed and successfully implemented on a field-programmable gate array (FPGA), providing a more efficient means for coefficient tracking in a well-established generalized Laguerre-Volterra model for mammalian hippocampal spiking activity research. By exploiting the intrinsic parallelism of the FPGA, the proposed architecture is able to process matrices or vectors of arbitrary size and is efficiently scalable. Experimental results show its superior performance compared to the software implementation, while maintaining numerical precision. This architecture could also be utilized in future hippocampal cognitive neural prosthesis designs.

  6. Decision Making under Uncertainty: A Neural Model based on Partially Observable Markov Decision Processes

    Directory of Open Access Journals (Sweden)

    Rajesh P N Rao

    2010-11-01

    Full Text Available A fundamental problem faced by animals is learning to select actions based on noisy sensory information and incomplete knowledge of the world. It has been suggested that the brain engages in Bayesian inference during perception but how such probabilistic representations are used to select actions has remained unclear. Here we propose a neural model of action selection and decision making based on the theory of partially observable Markov decision processes (POMDPs. Actions are selected based not on a single optimal estimate of state but on the posterior distribution over states (the belief state. We show how such a model provides a unified framework for explaining experimental results in decision making that involve both information gathering and overt actions. The model utilizes temporal difference (TD learning for maximizing expected reward. The resulting neural architecture posits an active role for the neocortex in belief computation while ascribing a role to the basal ganglia in belief representation, value computation, and action selection. When applied to the random dots motion discrimination task, model neurons representing belief exhibit responses similar to those of LIP neurons in primate neocortex. The appropriate threshold for switching from information gathering to overt actions emerges naturally during reward maximization. Additionally, the time course of reward prediction error in the model shares similarities with dopaminergic responses in the basal ganglia during the random dots task. For tasks with a deadline, the model learns a decision making strategy that changes with elapsed time, predicting a collapsing decision threshold consistent with some experimental studies. The model provides a new framework for understanding neural decision making and suggests an important role for interactions between the neocortex and the basal ganglia in learning the mapping between probabilistic sensory representations and actions that maximize
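    The belief-state idea can be illustrated with the following schematic sketch of a two-alternative task: noisy observations update a Bayesian belief over the hidden state, and the agent keeps gathering information until the belief crosses a decision threshold. This is only the POMDP framing; the paper's TD learning and basal ganglia components are not reproduced, and the coherence, threshold and trial counts are placeholders.

```python
# Hedged sketch: Bayesian belief updates with a threshold policy.
import numpy as np

rng = np.random.default_rng(8)

def run_trial(true_state, coherence=0.2, threshold=0.9, max_steps=200):
    belief = 0.5                                    # P(state = "right")
    p_obs = 0.5 + coherence / 2                     # P(observation matches the true state)
    for t in range(1, max_steps + 1):
        obs_right = rng.random() < (p_obs if true_state == 1 else 1 - p_obs)
        # Bayes rule for the binary hidden state.
        like_right = p_obs if obs_right else 1 - p_obs
        like_left = 1 - like_right
        belief = belief * like_right / (belief * like_right + (1 - belief) * like_left)
        if belief > threshold:
            return 1, t                             # commit to "right"
        if belief < 1 - threshold:
            return 0, t                             # commit to "left"
    return int(belief > 0.5), max_steps             # forced choice at the deadline

choices, times = zip(*(run_trial(true_state=1) for _ in range(2000)))
print(f"accuracy: {np.mean(np.array(choices) == 1):.2f}, mean decision time: {np.mean(times):.1f} steps")
```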

  7. Neural Networks for Modeling and Control of Particle Accelerators

    CERN Document Server

    Edelen, A.L.; Chase, B.E.; Edstrom, D.; Milton, S.V.; Stabile, P.

    2016-01-01

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  8. Training Spiking Neural Models Using Artificial Bee Colony

    Science.gov (United States)

    Vazquez, Roberto A.; Garro, Beatriz A.

    2015-01-01

    Spiking neurons are models designed to simulate, in a realistic manner, the behavior of biological neurons. Recently, it has been shown that this type of neuron can be applied to solve pattern recognition problems with great efficiency. However, the lack of learning strategies for training these models prevents their use in several pattern recognition problems. On the other hand, several bioinspired algorithms have been proposed in recent years for solving a broad range of optimization problems, including those related to the field of artificial neural networks (ANNs). Artificial bee colony (ABC) is a novel algorithm based on the behavior of bees in the task of exploring their environment to find a food source. In this paper, we describe how the ABC algorithm can be used as a learning strategy to train a spiking neuron aiming to solve pattern recognition problems. Finally, the proposed approach is tested on several pattern recognition problems. It is important to remark that, to demonstrate the power of this type of model, only one neuron is used. In addition, we analyze how the performance of these models is improved by this kind of learning strategy. PMID:25709644
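    A simplified sketch of the learning strategy is given below: an Artificial Bee Colony loop (employed, onlooker and scout phases) searches the weight vector of a single rate-coded unit on a toy two-class problem. The rate-coded unit stands in for the paper's spiking neuron model, and all sizes, limits and data are illustrative assumptions.

```python
# Hedged sketch: ABC optimization of a single rate-coded unit's weights.
import numpy as np

rng = np.random.default_rng(9)

# Toy patterns: two Gaussian clusters in 2-d, labels 0/1.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
labels = np.r_[np.zeros(50), np.ones(50)]
Xb = np.c_[X, np.ones(len(X))]                  # add a bias input

def firing_rate(w):                             # logistic output rate of a single unit
    return 1.0 / (1.0 + np.exp(-(Xb @ w)))

def cost(w):                                    # objective the bees minimize
    return np.mean((firing_rate(w) - labels) ** 2)

n_food, dim, limit, iters = 20, 3, 30, 200
foods = rng.uniform(-2, 2, (n_food, dim))       # candidate weight vectors (food sources)
costs = np.array([cost(w) for w in foods])
trials = np.zeros(n_food, dtype=int)

def neighbour(i):
    k = rng.choice([j for j in range(n_food) if j != i])
    d = rng.integers(dim)
    v = foods[i].copy()
    v[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
    return v

def greedy(i, v):
    c = cost(v)
    if c < costs[i]:
        foods[i], costs[i], trials[i] = v, c, 0
    else:
        trials[i] += 1

for _ in range(iters):
    for i in range(n_food):                                   # employed bee phase
        greedy(i, neighbour(i))
    fit = 1.0 / (1.0 + costs)                                 # onlooker selection probabilities
    for i in rng.choice(n_food, size=n_food, p=fit / fit.sum()):   # onlooker bee phase
        greedy(i, neighbour(i))
    for i in np.where(trials > limit)[0]:                     # scout bee phase
        foods[i] = rng.uniform(-2, 2, dim)
        costs[i], trials[i] = cost(foods[i]), 0

best = foods[np.argmin(costs)]
acc = np.mean((firing_rate(best) > 0.5) == labels.astype(bool))
print(f"best cost: {costs.min():.4f}, training accuracy: {acc:.2f}")
```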

  9. Metadynamics for training neural network model chemistries: A competitive assessment

    Science.gov (United States)

    Herr, John E.; Yao, Kun; McIntyre, Ryker; Toth, David W.; Parkhill, John

    2018-06-01

    Neural network model chemistries (NNMCs) promise to facilitate the accurate exploration of chemical space and simulation of large reactive systems. One important path to improving these models is to add layers of physical detail, especially long-range forces. At short range, however, these models are data driven and data limited. Little is systematically known about how data should be sampled, and "test data" chosen randomly from some sampling techniques can provide poor information about generality. If the sampling method is narrow, "test error" can appear encouragingly tiny while the model fails catastrophically elsewhere. In this manuscript, we competitively evaluate two common sampling methods, molecular dynamics (MD) and normal-mode sampling, and one uncommon alternative, Metadynamics (MetaMD), for preparing training geometries. We show that MD is an inefficient sampling method in the sense that additional samples do not improve generality. We also show that MetaMD is easily implemented in any NNMC software package with cost that scales linearly with the number of atoms in a sample molecule. MetaMD is a black-box way to ensure samples always reach out to new regions of chemical space, while remaining relevant to chemistry near k_BT. It is a cheap tool to address the issue of generalization.

  10. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    Science.gov (United States)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There have been many studies on gas turbine engine identification using dynamic neural network models. The identification process should minimize errors between the model and the real object. However, questions about the processing of neural network training data sets are usually overlooked. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine. The input signal of the thermodynamic model is the fuel consumption and the output signal is the engine rotor rotation frequency. Four types of input signal were used to create training and testing data sets for the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created based on these training data sets, and each neural network was tested against the four types of test data set. As a result, 16 transition processes from the four neural networks and four test data sets were compared with the corresponding results of the thermodynamic model. The errors of all neural networks were compared for each test data set, yielding the error value range for each one. It is shown that these error ranges are small, and therefore the influence of the data set type on identification accuracy is low.

  11. Modeling the electrode-neuron interface of cochlear implants: effects of neural survival, electrode placement, and the partial tripolar configuration.

    Science.gov (United States)

    Goldwyn, Joshua H; Bierer, Steven M; Bierer, Julie Arenberg

    2010-09-01

    The partial tripolar electrode configuration is a relatively novel stimulation strategy that can generate more spatially focused electric fields than the commonly used monopolar configuration. Focused stimulation strategies should improve spectral resolution in cochlear implant users, but may also be more sensitive to local irregularities in the electrode-neuron interface. In this study, we develop a practical computer model of cochlear implant stimulation that can simulate neural activation in a simplified cochlear geometry and we relate the resulting patterns of neural activity to basic psychophysical measures. We examine how two types of local irregularities in the electrode-neuron interface, variations in spiral ganglion nerve density and electrode position within the scala tympani, affect the simulated neural activation patterns and how these patterns change with electrode configuration. The model shows that higher partial tripolar fractions activate more spatially restricted populations of neurons at all current levels and require higher current levels to excite a given number of neurons. We find that threshold levels are more sensitive at high partial tripolar fractions to both types of irregularities, but these effects are not independent. In particular, at close electrode-neuron distances, activation is typically more spatially localized which leads to a greater influence of neural dead regions. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  12. High baseline activity in inferior temporal cortex improves neural and behavioral discriminability during visual categorization

    Directory of Open Access Journals (Sweden)

    Nazli eEmadi

    2014-11-01

    Full Text Available Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically, we found that a low-frequency (< 8 Hz) oscillation in the spike train, prior to and phase-locked to the stimulus onset, was correlated with increased gamma power and neuronal baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and eventually a higher behavioral performance.

  13. Exploring Neural Network Models with Hierarchical Memories and Their Use in Modeling Biological Systems

    Science.gov (United States)

    Pusuluri, Sai Teja

    Energy landscapes are often used as metaphors for phenomena in biology, social sciences and finance. Different methods have been implemented in the past for the construction of energy landscapes. Neural network models based on spin glass physics provide an excellent mathematical framework for the construction of energy landscapes. This framework uses a minimal number of parameters and constructs the landscape using data from the actual phenomena. In the past, neural network models were used to mimic the storage and retrieval process of memories (patterns) in the brain. With advances in the field, these models are now being used in machine learning, deep learning and the modeling of complex phenomena. Most of the past literature focuses on increasing the storage capacity and stability of stored patterns in the network but does not study these models from a modeling perspective or an energy landscape perspective. This dissertation focuses on neural network models both from a modeling perspective and from an energy landscape perspective. I first show how the cellular interconversion phenomenon can be modeled as a transition between attractor states on an epigenetic landscape constructed using neural network models. The model allows the identification of a reaction coordinate of cellular interconversion by analyzing experimental and simulation time course data. Monte Carlo simulations of the model show that the initial phase of cellular interconversion is a Poisson process and the later phase of cellular interconversion is a deterministic process. Secondly, I explore the static features of landscapes generated using neural network models, such as sizes of basins of attraction and densities of metastable states. The simulation results show that the static landscape features are strongly dependent on the correlation strength and correlation structure between patterns. Using different hierarchical structures of the correlation between patterns affects the landscape features.
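
    As a generic illustration of the spin-glass-style associative memory networks this line of work builds on (not the dissertation's specific hierarchical model), the sketch below stores random patterns with Hebbian weights and retrieves a corrupted pattern by descending the resulting energy landscape; the pattern size and count are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, P = 100, 5                         # neurons and stored patterns (arbitrary sizes)
    patterns = rng.choice([-1, 1], size=(P, N))

    # Hebbian weights define the energy landscape; stored patterns sit near its minima
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0.0)

    def energy(state):
        return -0.5 * state @ W @ state

    def recall(cue, sweeps=5):
        """Asynchronous updates lower the energy until an attractor is reached."""
        state = cue.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    # Corrupt a stored pattern, then let the network relax back into its basin
    cue = patterns[0].copy()
    flip = rng.choice(N, size=15, replace=False)
    cue[flip] *= -1
    out = recall(cue)
    print("overlap with stored pattern:", (out @ patterns[0]) / N)
    print("energy before/after:", energy(cue), energy(out))
    ```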

  14. SOME QUESTIONS OF THE GRID AND NEURAL NETWORK MODELING OF AIRPORT AVIATION SECURITY CONTROL TASKS

    Directory of Open Access Journals (Sweden)

    N. Elisov Lev

    2017-01-01

    Full Text Available The authors' original problem-solution approach to aviation security management in civil aviation, applying parallel calculation methods and neural computers, is considered in this work. The statement of secure environment modeling problems for grid models and with the use of neural networks is presented. The subject area of this article is airport activity in the field of civil aviation, considered in the context of aviation security, defined as the state of protection of aviation against unlawful interference with the aviation field. The key issue in this subject area is the provision of aviation safety at an acceptable level. In this case, airport security level management becomes one of the main objectives of aviation security. Aviation security management in modern systems is organizational and regulatory, and can no longer keep up with changing requirements that are increasingly complex and determined by external and internal environment factors associated with a set of potential threats to airport activity. Optimal control requires the most accurate identification of management parameters and their quantitative assessment. The authors examine the possibility of applying mathematical methods to the modeling of security management processes and procedures in their latest works. Parallel computing methods and neural network computing for the modeling of airport security control processes are examined in this work. It is shown that practical application of the methods is possible along with a decision support system, where the decision maker plays the leading role.

  15. Neural pathways in processing of sexual arousal: a dynamic causal modeling study.

    Science.gov (United States)

    Seok, J-W; Park, M-S; Sohn, J-H

    2016-09-01

    Three decades of research have investigated brain processing of visual sexual stimuli with neuroimaging methods. These researchers have found that sexual arousal stimuli elicit activity in a broad neural network of cortical and subcortical brain areas that are known to be associated with cognitive, emotional, motivational and physiological components. However, it is not completely understood how these neural systems integrate and modulate incoming information. Therefore, we identified cerebral areas whose activations were correlated with sexual arousal using event-related functional magnetic resonance imaging and used the dynamic causal modeling method to search for the effective connectivity of the sexual arousal processing network. Thirteen heterosexual males were scanned while they passively viewed alternating short trials of erotic and neutral pictures on a monitor. We created a subset of seven models based on our results and previous studies and selected a dominant connectivity model. Consequently, we suggest a dynamic causal model of the brain processes mediating the cognitive, emotional, motivational and physiological factors of human male sexual arousal. These findings have significant implications for the neuropsychology of male sexuality.

  16. Application of Artificial Neural Networks in the Heart Electrical Axis Position Conclusion Modeling

    Science.gov (United States)

    Bakanovskaya, L. N.

    2016-08-01

    The article touches upon the building of a heart electrical axis position conclusion model using an artificial neural network. The input signals of the neural network are the values of the deflections Q, R and S, and the output signal is the value of the heart electrical axis position. Training of the network is carried out by the error propagation method. The test results allow the conclusion that the created neural network determines the heart electrical axis position with a high degree of accuracy.

  17. GABAA receptors in visual and auditory cortex and neural activity changes during basic visual stimulation

    Directory of Open Access Journals (Sweden)

    Pengmin eQin

    2012-12-01

    Full Text Available Recent imaging studies have demonstrated that levels of resting GABA in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; the change from closed eyes to open eyes also represents a simple visual stimulus, however, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABAA receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) states, in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modelling of the haemodynamic response, followed by longer periods of EC and EO to allow the measurement of functional connectivity. The same subjects also underwent [18F]Flumazenil PET to measure GABAA receptor binding potentials. It was demonstrated that the local-to-global ratio of GABAA receptor binding potential in the visual cortex predicted the degree of change in neural activity from EC to EO. The same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABAA receptor binding potential in the visual cortex also predicted the change in functional connectivity between visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABAA receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.

  18. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator.

    Science.gov (United States)

    Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy

    2013-01-01

    The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.

  19. Evolutionary Design of Convolutional Neural Networks for Human Activity Recognition in Sensor-Rich Environments

    Science.gov (United States)

    2018-01-01

    Human activity recognition is a challenging problem for context-aware systems and applications. It is gaining interest due to the ubiquity of different sensor sources, wearable smart objects, ambient sensors, etc. This task is usually approached as a supervised machine learning problem, where a label is to be predicted given some input data, such as the signals retrieved from different sensors. For tackling the human activity recognition problem in sensor network environments, in this paper we propose the use of deep learning (convolutional neural networks) to perform activity recognition using the publicly available OPPORTUNITY dataset. Instead of manually choosing a suitable topology, we will let an evolutionary algorithm design the optimal topology in order to maximize the classification F1 score. After that, we will also explore the performance of committees of the models resulting from the evolutionary process. Results analysis indicates that the proposed model was able to perform activity recognition within a heterogeneous sensor network environment, achieving very high accuracies when tested with new sensor data. Based on all conducted experiments, the proposed neuroevolutionary system has proved to be able to systematically find a classification model which is capable of outperforming previous results reported in the state-of-the-art, showing that this approach is useful and improves upon previously manually-designed architectures. PMID:29690587

  20. Evolutionary Design of Convolutional Neural Networks for Human Activity Recognition in Sensor-Rich Environments

    Directory of Open Access Journals (Sweden)

    Alejandro Baldominos

    2018-04-01

    Full Text Available Human activity recognition is a challenging problem for context-aware systems and applications. It is gaining interest due to the ubiquity of different sensor sources, wearable smart objects, ambient sensors, etc. This task is usually approached as a supervised machine learning problem, where a label is to be predicted given some input data, such as the signals retrieved from different sensors. For tackling the human activity recognition problem in sensor network environments, in this paper we propose the use of deep learning (convolutional neural networks) to perform activity recognition using the publicly available OPPORTUNITY dataset. Instead of manually choosing a suitable topology, we will let an evolutionary algorithm design the optimal topology in order to maximize the classification F1 score. After that, we will also explore the performance of committees of the models resulting from the evolutionary process. Results analysis indicates that the proposed model was able to perform activity recognition within a heterogeneous sensor network environment, achieving very high accuracies when tested with new sensor data. Based on all conducted experiments, the proposed neuroevolutionary system has proved to be able to systematically find a classification model which is capable of outperforming previous results reported in the state-of-the-art, showing that this approach is useful and improves upon previously manually-designed architectures.

  1. Evolutionary Design of Convolutional Neural Networks for Human Activity Recognition in Sensor-Rich Environments.

    Science.gov (United States)

    Baldominos, Alejandro; Saez, Yago; Isasi, Pedro

    2018-04-23

    Human activity recognition is a challenging problem for context-aware systems and applications. It is gaining interest due to the ubiquity of different sensor sources, wearable smart objects, ambient sensors, etc. This task is usually approached as a supervised machine learning problem, where a label is to be predicted given some input data, such as the signals retrieved from different sensors. For tackling the human activity recognition problem in sensor network environments, in this paper we propose the use of deep learning (convolutional neural networks) to perform activity recognition using the publicly available OPPORTUNITY dataset. Instead of manually choosing a suitable topology, we will let an evolutionary algorithm design the optimal topology in order to maximize the classification F1 score. After that, we will also explore the performance of committees of the models resulting from the evolutionary process. Results analysis indicates that the proposed model was able to perform activity recognition within a heterogeneous sensor network environment, achieving very high accuracies when tested with new sensor data. Based on all conducted experiments, the proposed neuroevolutionary system has proved to be able to systematically find a classification model which is capable of outperforming previous results reported in the state-of-the-art, showing that this approach is useful and improves upon previously manually-designed architectures.

  2. Accurate prediction of the dew points of acidic combustion gases by using an artificial neural network model

    International Nuclear Information System (INIS)

    ZareNezhad, Bahman; Aminian, Ali

    2011-01-01

    This paper presents a new approach based on using an artificial neural network (ANN) model for predicting the acid dew points of the combustion gases in process and power plants. The most important acidic combustion gases, namely SO3, SO2, NO2, HCl and HBr, are considered in this investigation. The proposed network is trained using the Levenberg-Marquardt back propagation algorithm, and the hyperbolic tangent sigmoid activation function is applied to calculate the output values of the neurons of the hidden layer. According to the network's training, validation and testing results, a three layer neural network with nine neurons in the hidden layer is selected as the best architecture for accurate prediction of the dew points of the acidic combustion gases over wide ranges of acid and moisture concentrations. The proposed neural network model can have significant application in predicting the condensation temperatures of different acid gases to mitigate the corrosion problems in stacks, pollution control devices and energy recovery systems.
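
    A rough sketch of the reported architecture (one hidden layer with nine tanh neurons) is shown below using scikit-learn; the LBFGS solver stands in for the Levenberg-Marquardt training reported in the abstract, which scikit-learn does not offer, and the acid/moisture features and dew-point targets are synthetic placeholders rather than the paper's data.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)

    # Placeholder features: acid concentration and moisture content (synthetic, for shape only)
    X = rng.uniform([1e-4, 0.02], [1e-2, 0.20], size=(500, 2))
    y = 380 + 25 * np.log10(X[:, 0] * 1e4) + 60 * X[:, 1]   # invented smooth target, in kelvin

    # One hidden layer with nine tanh neurons, mirroring the architecture in the abstract;
    # LBFGS replaces Levenberg-Marquardt, which scikit-learn does not provide.
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(9,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0),
    )
    model.fit(X, y)
    print("R^2 on training data:", model.score(X, y))
    ```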

  3. Distributed Recurrent Neural Forward Models with Neural Control for Complex Locomotion in Walking Robots

    DEFF Research Database (Denmark)

    Dasgupta, Sakyasingha; Goldschmidt, Dennis; Wörgötter, Florentin

    2015-01-01

    Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental ... conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanism, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain ... here, an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated ...

  4. Unloading arm movement modeling using neural networks for a rotary hearth furnace

    Directory of Open Access Journals (Sweden)

    Iulia Inoan

    2011-12-01

    Full Text Available Neural networks are being applied in many fields of engineering and nowadays have a wide range of applications. Neural networks are very useful for modeling dynamic processes for which a mathematical model is hard to obtain, or for processes that cannot be modeled using mathematical equations. This paper describes the modeling process for the unloading arm movement of a rotary hearth furnace using neural networks with the back-propagation algorithm. In this case the designed network was trained using the simulation results from a previously calculated mathematical model.

  5. Simple Electromagnetic Modeling of Small Airplanes: Neural Network Approach

    Directory of Open Access Journals (Sweden)

    P. Tobola

    2009-04-01

    Full Text Available The paper deals with the development of simple electromagnetic models of small airplanes, which can contain composite materials in their construction. Electromagnetic waves can penetrate through the surface of the aircraft due to the specific electromagnetic properties of the composite materials, which can increase the intensity of fields inside the airplane and can negatively influence the functionality of the sensitive avionics. The airplane is simulated by two parallel dielectric layers (the left-hand side wall and the right-hand side wall of the airplane). The layers are put into a rectangular metallic waveguide terminated by the absorber in order to simulate the illumination of the airplane by the external wave (both of the harmonic nature and the pulse one). Thanks to the simplicity of the model, the parametric analysis can be performed, and the results can be used in order to train an artificial neural network. The trained networks excel in further reduction of the CPU-time demands of airplane modeling.

  6. A neural model of figure-ground organization.

    Science.gov (United States)

    Craft, Edward; Schütze, Hartmut; Niebur, Ernst; von der Heydt, Rüdiger

    2007-06-01

    Psychophysical studies suggest that figure-ground organization is a largely autonomous process that guides--and thus precedes--allocation of attention and object recognition. The discovery of border-ownership representation in single neurons of early visual cortex has confirmed this view. Recent theoretical studies have demonstrated that border-ownership assignment can be modeled as a process of self-organization by lateral interactions within V2 cortex. However, the mechanism proposed relies on propagation of signals through horizontal fibers, which would result in increasing delays of the border-ownership signal with increasing size of the visual stimulus, in contradiction with experimental findings. It also remains unclear how the resulting border-ownership representation would interact with attention mechanisms to guide further processing. Here we present a model of border-ownership coding based on dedicated neural circuits for contour grouping that produce border-ownership assignment and also provide handles for mechanisms of selective attention. The results are consistent with neurophysiological and psychophysical findings. The model makes predictions about the hypothetical grouping circuits and the role of feedback between cortical areas.

  7. Abnormal neural activation patterns underlying working memory impairment in chronic phencyclidine-treated mice.

    Directory of Open Access Journals (Sweden)

    Yosefu Arime

    Full Text Available Working memory impairment is a hallmark feature of schizophrenia and is thought to be caused by dysfunctions in the prefrontal cortex (PFC) and associated brain regions. However, the neural circuit anomalies underlying this impairment are poorly understood. The aim of this study is to assess working memory performance in the chronic phencyclidine (PCP) mouse model of schizophrenia, and to identify the neural substrates of working memory. To address this issue, we conducted the following experiments in mice after withdrawal from chronic administration (14 days) of either saline or PCP (10 mg/kg): (1) a discrete paired-trial variable-delay task in a T-maze to assess working memory, and (2) brain-wide c-Fos mapping to identify activated brain regions relevant to this task performance either 90 min or 0 min after the completion of the task, with each time point examined under working memory effort and basal conditions. Correct responses in the test phase of the task were significantly reduced across delays (5, 15, and 30 s) in chronic PCP-treated mice compared with chronic saline-treated controls, suggesting delay-independent impairments in working memory in the PCP group. In layer 2-3 of the prelimbic cortex, the number of working memory effort-elicited c-Fos+ cells was significantly higher in the chronic PCP group than in the chronic saline group. The main effect of working memory effort relative to basal conditions was to induce significantly increased c-Fos+ cells in the other layers of the prelimbic cortex and in the anterior cingulate and infralimbic cortex regardless of the chronic regimen. Conversely, this working memory effort had a negative effect (fewer c-Fos+ cells) in the ventral hippocampus. These results shed light on some putative neural networks relevant to working memory impairments in mice chronically treated with PCP, and emphasize the importance of layer 2-3 of the prelimbic cortex of the PFC.

  8. Modeling Markov Switching ARMA-GARCH Neural Networks Models and an Application to Forecasting Stock Returns

    Directory of Open Access Journals (Sweden)

    Melike Bildirici

    2014-01-01

    Full Text Available The study has two aims. The first aim is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties to MS-GARCH processes. The second purpose of the study is to augment the MS-GARCH type models with artificial neural networks to benefit from the universal approximation properties to achieve improved forecasting accuracy. Therefore, the proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are further augmented with MLP, Recurrent NN, and Hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are utilized for modeling the daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray's MS-GARCH model provided promising results, while the best results are obtained for their neural network based counterparts. Further, among the models analyzed, the models based on the Hybrid-MLP and Recurrent-NN, the MS-ARMA-FIAPGARCH-HybridMLP, and MS-ARMA-FIAPGARCH-RNN provided the best forecast performances over the baseline single regime GARCH models and further, over the Gray's MS-GARCH model. Therefore, the models are promising for various economic applications.

  9. Self-reported empathy and neural activity during action imitation and observation in schizophrenia

    OpenAIRE

    Horan, William P.; Iacoboni, Marco; Cross, Katy A.; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K.; Green, Michael F.

    2014-01-01

    Introduction: Although social cognitive impairments are key determinants of functional outcome in schizophrenia their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and their correlates with self-reported empathy. Methods: 23 schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, o...

  10. assessment of neural networks performance in modeling rainfall ...

    African Journals Online (AJOL)

    Sholagberu

    Neural network architecture for precipitation prediction of Myanmar, World Academy of Science, Engineering and Technology, 48, pp. 130-134. Kumarasiri, A.D. and Sonnadara, D.U.J. (2006). Rainfall forecasting: an artificial neural network approach, Proceedings of the Technical Sessions, 22, pp. 1-13, Institute of Physics ...

  11. Commentary. Integrative Modeling and the Role of Neural Constraints

    Czech Academy of Sciences Publication Activity Database

    Bantegnie, Brice

    2017-01-01

    Vol. 8, SEP 5 (2017), pp. 1-2, article no. 1531. ISSN 1664-1078 Institutional support: RVO:67985955 Keywords: mechanistic explanation * functional analysis * mechanistic integration * reverse inference * neural plasticity * neural networks Subject RIV: AA - Philosophy; Religion Impact factor: 2.323, year: 2016

  12. Coupling Strength and System Size Induce Firing Activity of Globally Coupled Neural Network

    International Nuclear Information System (INIS)

    Wei Duqu; Luo Xiaoshu; Zou Yanli

    2008-01-01

    We investigate how the firing activity of a globally coupled neural network depends on the coupling strength C and the system size N. Network elements are described by space-clamped FitzHugh-Nagumo (SCFHN) neurons with parameter values at which no firing activity occurs. It is found that, for a given appropriate coupling strength, there is an intermediate range of system size where the firing activity of the globally coupled SCFHN neural network is induced and enhanced. On the other hand, for a given intermediate system size, there exists an optimal value of the coupling strength such that the intensity of firing activity reaches its maximum. These phenomena imply that the coupling strength and system size play a vital role in the firing activity of the neural network.
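
    The sketch below simulates a mean-field (all-to-all) coupled network of FitzHugh-Nagumo units and counts threshold crossings as firings, so the firing rate can be inspected as the coupling strength C is varied; the parameter values and the weak noise term are generic choices added so that coupling effects are visible, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate(N=50, C=0.1, steps=2000, dt=0.05, noise=0.3):
        """Globally (mean-field) coupled FitzHugh-Nagumo units; generic parameters."""
        a, b, eps = 0.7, 0.8, 0.08
        v = rng.normal(0, 0.1, N) - 1.2        # start near the quiescent fixed point
        w = rng.normal(0, 0.1, N) - 0.6
        spikes = 0
        above = np.zeros(N, dtype=bool)
        for _ in range(steps):
            coupling = C * (v.mean() - v)       # all-to-all diffusive coupling
            dv = v - v**3 / 3 - w + coupling
            dw = eps * (v + a - b * w)
            v += dt * dv + np.sqrt(dt) * noise * rng.normal(size=N)
            w += dt * dw
            crossed = (v > 1.0) & ~above        # count upward threshold crossings as firings
            spikes += crossed.sum()
            above = v > 1.0
        return spikes / (N * steps * dt)        # firing rate per neuron per unit time

    for C in (0.0, 0.05, 0.2):
        print(f"C={C:4.2f}  rate={simulate(C=C):.4f}")
    ```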

  13. Artificial neural network based modelling approach for municipal solid waste gasification in a fluidized bed reactor.

    Science.gov (United States)

    Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold

    2016-12-01

    In this paper, multi-layer feed-forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp) and the syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm and a cross validation is also performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer and the activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple output and single output prediction paradigms using the available experimental datasets. The model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.
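
    A compact sketch of the described model-selection procedure, repeated (Monte Carlo) cross-validation over candidate hidden-layer sizes and activation functions, is given below; the nine-input gasification dataset is replaced by a synthetic single-output placeholder, and the candidate architectures are illustrative.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score, KFold

    rng = np.random.default_rng(4)

    # Synthetic stand-in for the nine gasifier inputs and one output (e.g., syngas LHV)
    X = rng.normal(size=(300, 9))
    y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] * X[:, 3] + 0.1 * rng.normal(size=300)

    best = None
    for n_hidden in (4, 8, 16, 32):
        for activation in ("tanh", "relu"):
            scores = []
            for seed in range(5):                      # multiple Monte Carlo runs per architecture
                cv = KFold(n_splits=5, shuffle=True, random_state=seed)
                model = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation=activation,
                                     solver="lbfgs", max_iter=3000, random_state=seed)
                scores.append(cross_val_score(model, X, y, cv=cv, scoring="r2").mean())
            mean_r2 = float(np.mean(scores))
            if best is None or mean_r2 > best[0]:
                best = (mean_r2, n_hidden, activation)

    print("best architecture (R^2, hidden units, activation):", best)
    ```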

  14. SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.

    Science.gov (United States)

    Jimenez-Romero, Cristian; Johnson, Jeffrey

    2017-01-01

    The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-and-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. The accomplishment of the above-mentioned tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment Netlogo (educational software that simplifies the study and experimentation of complex systems). The engine proposed and implemented in Netlogo for the simulation of a functional model of SNN is a simplification of integrate and fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
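
    The engine is described as a simplification of integrate-and-fire models; the sketch below shows only a leaky integrate-and-fire core in plain Python (not NetLogo, and without the STDP and synaptic-delay features), with illustrative constants that are not SpikingLab's.

    ```python
    import numpy as np

    class LIFNeuron:
        """Minimal leaky integrate-and-fire unit; constants are illustrative only."""
        def __init__(self, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
            self.tau, self.v_rest, self.v_thresh, self.v_reset = tau, v_rest, v_thresh, v_reset
            self.v = v_rest

        def step(self, input_current, dt=1.0):
            # Leak toward rest plus injected current; fire and reset on threshold crossing
            self.v += dt * (-(self.v - self.v_rest) + input_current) / self.tau
            if self.v >= self.v_thresh:
                self.v = self.v_reset
                return True
            return False

    neuron = LIFNeuron()
    rng = np.random.default_rng(5)
    spikes = [t for t in range(200) if neuron.step(input_current=rng.uniform(0, 30))]
    print("spike times (ms):", spikes)
    ```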

  15. Nondestructive pavement evaluation using ILLI-PAVE based artificial neural network models.

    Science.gov (United States)

    2008-09-01

    The overall objective in this research project is to develop advanced pavement structural analysis models for more accurate solutions with fast computation schemes. Soft computing and modeling approaches, specifically the Artificial Neural Network (A...

  16. Mitochondrial metabolism in early neural fate and its relevance for neuronal disease modeling.

    Science.gov (United States)

    Lorenz, Carmen; Prigione, Alessandro

    2017-12-01

    Modulation of energy metabolism is emerging as a key aspect associated with cell fate transition. The establishment of a correct metabolic program is particularly relevant for neural cells given their high bioenergetic requirements. Accordingly, diseases of the nervous system commonly involve mitochondrial impairment. Recent studies in animals and in neural derivatives of human pluripotent stem cells (PSCs) highlighted the importance of mitochondrial metabolism for neural fate decisions in health and disease. The mitochondria-based metabolic program of early neurogenesis suggests that PSC-derived neural stem cells (NSCs) may be used for modeling neurological disorders. Understanding how metabolic programming is orchestrated during neural commitment may provide important information for the development of therapies against conditions affecting neural functions, including aging and mitochondrial disorders. Copyright © 2017. Published by Elsevier Ltd.

  17. Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.

    Science.gov (United States)

    Sajda, Paul

    2010-01-01

    In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
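
    The decoder described, a linear summation followed by a sigmoidal nonlinearity with an L1 sparsity penalty, can be illustrated with an L1-penalized logistic regression on simulated spike counts, as in the hedged sketch below; the population size, tuning and penalty strength are invented for the example.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)

    # Simulated population response: 500 model neurons, of which only 10 carry the stimulus
    n_trials, n_neurons, n_informative = 400, 500, 10
    stimulus = rng.integers(0, 2, n_trials)
    rates = np.full((n_trials, n_neurons), 5.0)
    rates[:, :n_informative] += 4.0 * stimulus[:, None]        # tuned subpopulation
    counts = rng.poisson(rates)

    # L1-penalized logistic decoder: linear summation + sigmoid with few non-zero "synapses"
    decoder = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
    decoder.fit(counts[:300], stimulus[:300])
    n_used = np.count_nonzero(decoder.coef_)
    print("decoding accuracy:", decoder.score(counts[300:], stimulus[300:]))
    print(f"non-zero weights: {n_used} of {n_neurons}")
    ```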

  18. Modelling innovation performance of European regions using multi-output neural networks.

    Science.gov (United States)

    Hajek, Petr; Henriques, Roberto

    2017-01-01

    Regional innovation performance is an important indicator for decision-making regarding the implementation of policies intended to support innovation. However, patterns in regional innovation structures are becoming increasingly diverse, complex and nonlinear. To address these issues, this study aims to develop a model based on a multi-output neural network. Both intra- and inter-regional determinants of innovation performance are empirically investigated using data from the 4th and 5th Community Innovation Surveys of NUTS 2 (Nomenclature of Territorial Units for Statistics) regions. The results suggest that specific innovation strategies must be developed based on the current state of input attributes in the region. Thus, it is possible to develop appropriate strategies and targeted interventions to improve regional innovation performance. We demonstrate that support of entrepreneurship is an effective instrument of innovation policy. We also provide empirical support that both business and government R&D activity have a sigmoidal effect, implying that the most effective R&D support should be directed to regions with below-average and average R&D activity. We further show that the multi-output neural network outperforms traditional statistical and machine learning regression models. In general, therefore, it seems that the proposed model can effectively reflect both the multiple-output nature of innovation performance and the interdependency of the output attributes.

  19. Modelling innovation performance of European regions using multi-output neural networks.

    Directory of Open Access Journals (Sweden)

    Petr Hajek

    Full Text Available Regional innovation performance is an important indicator for decision-making regarding the implementation of policies intended to support innovation. However, patterns in regional innovation structures are becoming increasingly diverse, complex and nonlinear. To address these issues, this study aims to develop a model based on a multi-output neural network. Both intra- and inter-regional determinants of innovation performance are empirically investigated using data from the 4th and 5th Community Innovation Surveys of NUTS 2 (Nomenclature of Territorial Units for Statistics) regions. The results suggest that specific innovation strategies must be developed based on the current state of input attributes in the region. Thus, it is possible to develop appropriate strategies and targeted interventions to improve regional innovation performance. We demonstrate that support of entrepreneurship is an effective instrument of innovation policy. We also provide empirical support that both business and government R&D activity have a sigmoidal effect, implying that the most effective R&D support should be directed to regions with below-average and average R&D activity. We further show that the multi-output neural network outperforms traditional statistical and machine learning regression models. In general, therefore, it seems that the proposed model can effectively reflect both the multiple-output nature of innovation performance and the interdependency of the output attributes.

  20. Neural activity, neural connectivity, and the processing of emotionally valenced information in older adults: links with life satisfaction.

    Science.gov (United States)

    Waldinger, Robert J; Kensinger, Elizabeth A; Schulz, Marc S

    2011-09-01

    This study examines whether differences in late-life well-being are linked to how older adults encode emotionally valenced information. Using fMRI with 39 older adults varying in life satisfaction, we examined how viewing positive and negative images would affect activation and connectivity of an emotion-processing network. Participants engaged most regions within this network more robustly for positive than for negative images, but within the PFC this effect was moderated by life satisfaction, with individuals higher in satisfaction showing lower levels of activity during the processing of positive images. Participants high in satisfaction showed stronger correlations among network regions, particularly between the amygdala and other emotion-processing regions, when viewing positive, as compared with negative, images. Participants low in satisfaction showed no valence effect. Findings suggest that late-life satisfaction is linked with how emotion-processing regions are engaged and connected during processing of valenced information. This first demonstration of a link between neural recruitment and late-life well-being suggests that differences in neural network activation and connectivity may account for the preferential encoding of positive information seen in some older adults.

  1. Bacterial DNA Sequence Compression Models Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Armando J. Pinho

    2013-08-01

    Full Text Available It is widely accepted that the advances in DNA sequencing techniques have contributed to an unprecedented growth of genomic data. This fact has increased the interest in DNA compression, not only from the information theory and biology points of view, but also from a practical perspective, since such sequences require storage resources. Several compression methods exist, and particularly, those using finite-context models (FCMs) have received increasing attention, as they have been proven to effectively compress DNA sequences with low bits-per-base, as well as low encoding/decoding time-per-base. However, the amount of run-time memory required to store high-order finite-context models may become impractical, since a context-order as low as 16 requires a maximum of 17.2 x 10^9 memory entries. This paper presents a method to reduce such a memory requirement by using a novel application of artificial neural networks (ANN) to build such probabilistic models in a compact way and shows how to use them to estimate the probabilities. Such a system was implemented, and its performance compared against state-of-the-art compressors, such as XM-DNA (expert model) and FCM-Mx (mixture of finite-context models), as well as with general-purpose compressors. Using a combination of order-10 FCM and ANN, similar encoding results to those of FCM, up to order-16, are obtained using only 17 megabytes of memory, whereas the latter, even employing hash-tables, uses several hundreds of megabytes.
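
    As a toy illustration of the idea, replacing a finite-context count table with a small network that maps a length-k context to next-base probabilities, the sketch below trains an MLP on a synthetic correlated sequence and reports the implied code length in bits per base; the context order, network size and sequence generator are invented and this is not the paper's architecture.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(7)
    k = 6                                       # context order (far below 16, for speed)

    # Synthetic sequence (0..3 for A,C,G,T) with local structure so contexts are informative
    n = 20000
    seq = np.empty(n, dtype=int)
    seq[0] = rng.integers(0, 4)
    for i in range(1, n):
        seq[i] = (seq[i - 1] + rng.integers(0, 2)) % 4   # mostly repeat or step the previous base

    def one_hot_contexts(s, k):
        """One-hot encode each length-k context and pair it with the following base."""
        X = np.zeros((len(s) - k, 4 * k))
        for i in range(k):
            X[np.arange(len(s) - k), 4 * i + s[i:len(s) - k + i]] = 1.0
        return X, s[k:]

    X, y = one_hot_contexts(seq, k)
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=50, random_state=0)
    model.fit(X[:15000], y[:15000])

    # Average code length in bits per base implied by the predicted probabilities
    proba = model.predict_proba(X[15000:])
    bits = -np.mean(np.log2(proba[np.arange(len(proba)), y[15000:]] + 1e-12))
    print(f"estimated bits per base: {bits:.3f}")
    ```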

  2. Stability of a neural network model with small-world connections

    International Nuclear Information System (INIS)

    Li Chunguang; Chen Guanrong

    2003-01-01

    Small-world networks are highly clustered networks with small distances among the nodes. There are many biological neural networks that present this kind of connection. There are no special weightings in the connections of most existing small-world network models. However, this kind of simply connected model cannot characterize biological neural networks, in which there are different weights in synaptic connections. In this paper, we present a neural network model with weighted small-world connections and further investigate the stability of this model

  3. Modeling the Constitutive Relationship of Al–0.62Mg–0.73Si Alloy Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Ying Han

    2017-03-01

    Full Text Available In this work, the hot deformation behavior of 6A02 aluminum alloy was investigated by isothermal compression tests conducted in the temperature range of 683–783 K and the strain-rate range of 0.001–1 s−1. According to the obtained true stress–true strain curves, the constitutive relationship of the alloy was revealed by establishing the Arrhenius-type constitutive model and a back-propagation (BP) neural network model. It is found that the flow characteristic of 6A02 aluminum alloy is closely related to deformation temperature and strain rate, and the true stress decreases with increasing temperatures and decreasing strain rates. The hot deformation activation energy is calculated to be 168.916 kJ mol−1. The BP neural network model with one hidden layer and 20 neurons in the hidden layer is developed. The accuracy in prediction of the Arrhenius-type constitutive model and the BP neural network model is evaluated by using statistical analysis methods. It is demonstrated that the BP neural network model has better performance in predicting the flow stress.
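
    The "Arrhenius-type constitutive model" referred to here is, in the hot-deformation literature, usually the hyperbolic-sine form written with the Zener-Hollomon parameter Z, as sketched below; A, alpha and n are material constants fitted from the flow curves, and the paper's fitted values are not reproduced here.

    ```latex
    % Hyperbolic-sine Arrhenius description of hot deformation (generic form)
    \dot{\varepsilon} = A \left[\sinh(\alpha\sigma)\right]^{n} \exp\!\left(-\frac{Q}{RT}\right),
    \qquad
    Z = \dot{\varepsilon} \exp\!\left(\frac{Q}{RT}\right)

    % Flow stress recovered from the Zener--Hollomon parameter Z
    \sigma = \frac{1}{\alpha}\,
    \ln\!\left\{ \left(\frac{Z}{A}\right)^{1/n}
    + \left[\left(\frac{Z}{A}\right)^{2/n} + 1\right]^{1/2} \right\}
    ```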

  4. Validation of protein models by a neural network approach

    Directory of Open Access Journals (Sweden)

    Fantucci Piercarlo

    2008-01-01

    Full Text Available Abstract Background: The development and improvement of reliable computational methods designed to evaluate the quality of protein models is relevant in the context of protein structure refinement, which has been recently identified as one of the bottlenecks limiting the quality and usefulness of protein structure prediction. Results: In this contribution, we present a computational method (Artificial Intelligence Decoys Evaluator: AIDE) which is able to consistently discriminate between correct and incorrect protein models. In particular, the method is based on neural networks that use as input 15 structural parameters, which include energy, solvent accessible surface, hydrophobic contacts and secondary structure content. The results obtained with AIDE on a set of decoy structures were evaluated using statistical indicators such as Pearson correlation coefficients, Znat, fraction enrichment, as well as ROC plots. It turned out that AIDE performances are comparable and often complementary to available state-of-the-art learning-based methods. Conclusion: In light of the results obtained with AIDE, as well as its comparison with available learning-based methods, it can be concluded that AIDE can be successfully used to evaluate the quality of protein structures. The use of AIDE in combination with other evaluation tools is expected to further enhance protein refinement efforts.

  5. Modelling and Prediction of Photovoltaic Power Output Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Aminmohammad Saberian

    2014-01-01

    Full Text Available This paper presents a solar power modelling method using artificial neural networks (ANNs). Two neural network structures, namely, general regression neural network (GRNN) and feedforward back propagation (FFBP), have been used to model a photovoltaic panel output power and approximate the generated power. Both neural networks have four inputs and one output. The inputs are maximum temperature, minimum temperature, mean temperature, and irradiance; the output is the power. The data used in this paper started from January 1, 2006, until December 31, 2010. The five years of data were split into two parts: 2006–2008 and 2009-2010; the first part was used for training and the second part was used for testing the neural networks. A mathematical equation is used to estimate the generated power. At the end, both of these networks have shown good modelling performance; however, FFBP has shown better performance compared with GRNN.

  6. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    International Nuclear Information System (INIS)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-01-01

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
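
    The multiplicative layer described can be illustrated with the first few elements of the isotropic tensor basis: the network outputs scalar coefficients g_n that multiply basis tensors T^(n) built from the normalized strain- and rotation-rate tensors, so the predicted anisotropy inherits their invariance. The sketch below shows only that final combination step with a truncated basis and placeholder coefficients; it is not the paper's full network.

    ```python
    import numpy as np

    def tensor_basis(S, R):
        """First three elements of the integrity basis (the full basis has ten);
        S and R are the normalized mean strain- and rotation-rate tensors."""
        I = np.eye(3)
        T1 = S
        T2 = S @ R - R @ S
        T3 = S @ S - np.trace(S @ S) / 3.0 * I
        return np.stack([T1, T2, T3])

    def multiplicative_layer(g, basis):
        """Final layer of a tensor-basis network: anisotropy b = sum_n g_n * T^(n).
        Because each T^(n) transforms properly under rotations, b stays invariant."""
        return np.tensordot(g, basis, axes=1)

    # Toy input: a hypothetical mean velocity gradient (the scalars g would normally
    # come from the hidden layers of the network, driven by the basis invariants)
    grad_u = np.array([[0.0, 1.0, 0.0],
                       [0.2, 0.0, 0.0],
                       [0.0, 0.0, 0.0]])
    S = 0.5 * (grad_u + grad_u.T)
    R = 0.5 * (grad_u - grad_u.T)
    g = np.array([-0.09, 0.02, 0.01])          # placeholder coefficients
    b = multiplicative_layer(g, tensor_basis(S, R))
    print(b)
    ```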

  7. Neural systems language: a formal modeling language for the systematic description, unambiguous communication, and automated digital curation of neural connectivity.

    Science.gov (United States)

    Brown, Ramsay A; Swanson, Larry W

    2013-09-01

    Systematic description and the unambiguous communication of findings and models remain among the unresolved fundamental challenges in systems neuroscience. No common descriptive frameworks exist to describe systematically the connective architecture of the nervous system, even at the grossest level of observation. Furthermore, the accelerating volume of novel data generated on neural connectivity outpaces the rate at which this data is curated into neuroinformatics databases to synthesize digitally systems-level insights from disjointed reports and observations. To help address these challenges, we propose the Neural Systems Language (NSyL). NSyL is a modeling language to be used by investigators to encode and communicate systematically reports of neural connectivity from neuroanatomy and brain imaging. NSyL engenders systematic description and communication of connectivity irrespective of the animal taxon described, experimental or observational technique implemented, or nomenclature referenced. As a language, NSyL is internally consistent, concise, and comprehensible to both humans and computers. NSyL is a promising development for systematizing the representation of neural architecture, effectively managing the increasing volume of data on neural connectivity and streamlining systems neuroscience research. Here we present similar precedent systems, how NSyL extends existing frameworks, and the reasoning behind NSyL's development. We explore NSyL's potential for balancing robustness and consistency in representation by encoding previously reported assertions of connectivity from the literature as examples. Finally, we propose and discuss the implications of a framework for how NSyL will be digitally implemented in the future to streamline curation of experimental results and bridge the gaps among anatomists, imagers, and neuroinformatics databases. Copyright © 2013 Wiley Periodicals, Inc.

  8. QSAR models for prediction study of HIV protease inhibitors using support vector machines, neural networks and multiple linear regression

    Directory of Open Access Journals (Sweden)

    Rachid Darnag

    2017-02-01

    Full Text Available Support vector machines (SVM) represent one of the most promising machine learning (ML) tools that can be applied to develop predictive quantitative structure–activity relationship (QSAR) models using molecular descriptors. Multiple linear regression (MLR) and artificial neural networks (ANNs) were also utilized to construct quantitative linear and nonlinear models to compare with the results obtained by SVM. The prediction results are in good agreement with the experimental values of HIV activity; also, the results reveal the superiority of the SVM over the MLR and ANN models. The contribution of each descriptor to the structure–activity relationships was evaluated.
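
    A hedged sketch of the kind of comparison described, cross-validated SVM, MLR and ANN regressors on descriptor data, is given below using scikit-learn; the descriptors and activity values are synthetic placeholders, not the HIV protease dataset, and the hyperparameters are illustrative.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)

    # Synthetic stand-in for molecular descriptors and a pIC50-like activity value
    X = rng.normal(size=(200, 12))
    y = 1.5 * X[:, 0] - np.sin(X[:, 1]) + 0.5 * X[:, 2] * X[:, 3] + 0.2 * rng.normal(size=200)

    models = {
        "MLR": LinearRegression(),
        "ANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
        "SVM": SVR(kernel="rbf", C=10.0, epsilon=0.1),
    }
    for name, model in models.items():
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: cross-validated R^2 = {r2:.3f}")
    ```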

  9. The role of shared neural activations, mirror neurons, and morality in empathy--a critical comment.

    Science.gov (United States)

    Lamm, Claus; Majdandžić, Jasminka

    2015-01-01

    In the last decade, the phenomenon of empathy has received widespread attention by the field of social neuroscience. This has provided fresh insights for theoretical models of empathy, and substantially influenced the academic and public conceptions about this complex social skill. The present paper highlights three key issues which are often linked to empathy, but which at the same time might obscure our understanding of it. These issues are: (1) shared neural activations and whether these can be interpreted as evidence for simulation accounts of empathy; (2) the causal link of empathy to our presumed mirror neuron system; and (3) the question whether increasing empathy will result in better moral decisions and behaviors. The aim of our review is to provide the basis for critically evaluating our current understanding of empathy, and its public reception, and to inspire new research directions. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  10. Isolating Discriminant Neural Activity in the Presence of Eye Movements and Concurrent Task Demands

    Directory of Open Access Journals (Sweden)

    Jon Touryan

    2017-07-01

    Full Text Available A growing number of studies use the combination of eye-tracking and electroencephalographic (EEG) measures to explore the neural processes that underlie visual perception. In these studies, fixation-related potentials (FRPs) are commonly used to quantify early and late stages of visual processing that follow the onset of each fixation. However, FRPs reflect a mixture of bottom-up (sensory-driven) and top-down (goal-directed) processes, in addition to eye movement artifacts and unrelated neural activity. At present there is little consensus on how to separate this evoked response into its constituent elements. In this study we sought to isolate the neural sources of target detection in the presence of eye movements and over a range of concurrent task demands. Here, participants were asked to identify visual targets (Ts) amongst a grid of distractor stimuli (Ls), while simultaneously performing an auditory N-back task. To identify the discriminant activity, we used independent components analysis (ICA) for the separation of EEG into neural and non-neural sources. We then further separated the neural sources, using a modified measure-projection approach, into six regions of interest (ROIs): occipital, fusiform, temporal, parietal, cingulate, and frontal cortices. Using activity from these ROIs, we identified target from non-target fixations in all participants at a level similar to other state-of-the-art classification techniques. Importantly, we isolated the time course and spectral features of this discriminant activity in each ROI. In addition, we were able to quantify the effect of cognitive load on both fixation-locked potential and classification performance across regions. Together, our results show the utility of a measure-projection approach for separating task-relevant neural activity into meaningful ROIs within more complex contexts that include eye movements.
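
    The general ICA-then-classify pipeline referred to above can be sketched as follows; this is a simplified stand-in (FastICA plus a linear discriminant on crude per-epoch features), not the authors' measure-projection method, and the epochs, component selection, and labels are all synthetic assumptions.

```python
# Hedged sketch of an ICA-then-classify pipeline for fixation-locked EEG epochs.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_epochs, n_channels, n_times = 200, 32, 128
eeg = rng.normal(size=(n_epochs, n_channels, n_times))   # fixation-locked epochs (synthetic)
labels = rng.integers(0, 2, size=n_epochs)               # 1 = target fixation (synthetic)

# 1) Unmix channels into independent components (fit on concatenated epochs).
ica = FastICA(n_components=16, random_state=1)
sources = ica.fit_transform(eeg.transpose(0, 2, 1).reshape(-1, n_channels))
sources = sources.reshape(n_epochs, n_times, 16)

# 2) Keep a subset of "neural" components (placeholder choice, not a real criterion).
neural_idx = np.arange(8)
features = sources[:, :, neural_idx].mean(axis=1)        # crude per-epoch features

# 3) Classify target vs. non-target fixations.
clf = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())
```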

  11. Neural Activations of Guided Imagery and Music in Negative Emotional Processing: A Functional MRI Study.

    Science.gov (United States)

    Lee, Sang Eun; Han, Yeji; Park, HyunWook

    2016-01-01

    The Bonny Method of Guided Imagery and Music uses music and imagery to access and explore personal emotions associated with episodic memories. Understanding the neural mechanism of guided imagery and music (GIM) as combined stimuli for emotional processing informs clinical application. We performed functional magnetic resonance imaging (fMRI) to demonstrate neural mechanisms of GIM for negative emotional processing when personal episodic memory is recalled and re-experienced through GIM processes. Twenty-four healthy volunteers participated in the study, which used classical music and verbal instruction stimuli to evoke negative emotions. To analyze the neural mechanism, activated regions associated with negative emotional and episodic memory processing were extracted by conducting volume analyses for the contrast between GIM and guided imagery (GI) or music (M). The GIM stimuli showed increased activation over the M-only stimuli in five neural regions associated with negative emotional and episodic memory processing, including the left amygdala, left anterior cingulate gyrus, left insula, bilateral culmen, and left angular gyrus (AG). Compared with GI alone, GIM showed increased activation in three regions associated with episodic memory processing in the emotional context, including the right posterior cingulate gyrus, bilateral parahippocampal gyrus, and AG. No neural regions related to negative emotional and episodic memory processing showed more activation for M and GI than for GIM. As a combined multimodal stimulus, GIM may increase neural activations related to negative emotions and episodic memory processing. Findings suggest a neural basis for GIM with personal episodic memories affecting cortical and subcortical structures and functions. © the American Music Therapy Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Altered Neural Activity Associated with Mindfulness during Nociception: A Systematic Review of Functional MRI

    Directory of Open Access Journals (Sweden)

    Elena Bilevicius

    2016-04-01

    Full Text Available Objective: To assess the neural activity associated with mindfulness-based alterations of pain perception. Methods: The Cochrane Central, EMBASE, Ovid Medline, PsycINFO, Scopus, and Web of Science databases were searched on 2 February 2016. Titles, abstracts, and full-text articles were independently screened by two reviewers. Data were independently extracted from records that included topics of functional neuroimaging, pain, and mindfulness interventions. Results: The literature search produced 946 total records, of which five met the inclusion criteria. Records reported pain in terms of anticipation (n = 2), unpleasantness (n = 5), and intensity (n = 5), and how mindfulness conditions altered the neural activity during noxious stimulation accordingly. Conclusions: Although the studies were inconsistent in relating pain components to neural activity, in general, mindfulness was able to reduce pain anticipation and unpleasantness ratings, as well as alter the corresponding neural activity. The major neural underpinnings of mindfulness-based pain reduction consisted of altered activity in the anterior cingulate cortex, insula, and dorsolateral prefrontal cortex.

  13. A neural model of motion processing and visual navigation by cortical area MST.

    Science.gov (United States)

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  14. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    Science.gov (United States)

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  15. Transport energy demand modeling of South Korea using artificial neural network

    International Nuclear Information System (INIS)

    Geem, Zong Woo

    2011-01-01

    Artificial neural network models were developed to forecast South Korea's transport energy demand. Various independent variables, such as GDP, population, oil price, number of vehicle registrations, and passenger transport amount, were considered, and several good models (Model 1 with GDP, population, and passenger transport amount; Model 2 with GDP, number of vehicle registrations, and passenger transport amount; and Model 3 with oil price, number of vehicle registrations, and passenger transport amount) were selected by comparison with multiple linear regression models. Although certain regression models obtained better R-squared values than the neural network models, this does not guarantee that the former are better than the latter, because the root mean squared errors of the former were much worse than those of the latter. Certain regression models also had structural weaknesses based on their P-values. The neural network models instead produced more robust results. Forecasts using the neural network models show that South Korea will consume around 37 MTOE of transport energy in 2025. - Highlights: → Transport energy demand of South Korea was forecasted using artificial neural networks. → Various variables (GDP, population, oil price, number of registrations, etc.) were considered. → Results of the artificial neural networks were compared with those of multiple linear regression.
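
    A minimal sketch of the MLR-versus-ANN comparison on made-up yearly data follows; the variable names mirror the abstract (GDP, population, passenger transport amount), but the values, network size, and fitted numbers are illustrative assumptions only.

```python
# Hedged sketch: MLR vs. a small feed-forward ANN on synthetic yearly data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(2)
years = 30
X = np.column_stack([
    np.linspace(300, 1500, years) * (1 + 0.02 * rng.normal(size=years)),  # GDP (synthetic)
    np.linspace(38, 52, years),                                           # population (millions)
    np.linspace(50, 300, years) * (1 + 0.05 * rng.normal(size=years)),    # passenger transport
])
y = 5 + 0.02 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=1.0, size=years)  # demand (MTOE)

mlr = LinearRegression().fit(X, y)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(5,), max_iter=10000, random_state=2)).fit(X, y)

for name, model in [("MLR", mlr), ("ANN", ann)]:
    pred = model.predict(X)
    print(name, "R^2:", round(r2_score(y, pred), 3),
          "RMSE:", round(mean_squared_error(y, pred) ** 0.5, 3))
```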

  16. Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast...... on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models...... allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show...

  17. Bayesian Inference for Neural Electromagnetic Source Localization: Analysis of MEG Visual Evoked Activity

    International Nuclear Information System (INIS)

    George, J.S.; Schmidt, D.M.; Wood, C.C.

    1999-01-01

    We have developed a Bayesian approach to the analysis of neural electromagnetic (MEG/EEG) data that can incorporate or fuse information from other imaging modalities and addresses the ill-posed inverse problem by sampling the many different solutions which could have produced the given data. From these samples one can draw probabilistic inferences about regions of activation. Our source model assumes a variable number of variable-size cortical regions of stimulus-correlated activity. An active region consists of locations on the cortical surface, within a sphere centered on some location in cortex. The number and radii of active regions can vary up to defined maximum values. The goal of the analysis is to determine the posterior probability distribution for the set of parameters that govern the number, location, and extent of active regions. Markov Chain Monte Carlo is used to generate a large sample of sets of parameters distributed according to the posterior distribution. This sample is representative of the many different source distributions that could account for given data, and allows identification of probable (i.e. consistent) features across solutions. Examples of the use of this analysis technique with both simulated and empirical MEG data are presented
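
    A toy version of the sampling idea is sketched below: Metropolis-Hastings over the parameters of a single one-dimensional "active region" (center and amplitude) under a simple Gaussian forward model. The real analysis works on cortical geometry with a variable number of regions; everything in this sketch is an assumption made for illustration.

```python
# Hedged toy of posterior sampling for a one-region source model.
import numpy as np

rng = np.random.default_rng(3)
sensors = np.linspace(0.0, 1.0, 20)

def forward(center, amp):
    # Gaussian spread of one active region onto a line of sensors.
    return amp * np.exp(-((sensors - center) ** 2) / (2 * 0.05 ** 2))

data = forward(0.6, 2.0) + rng.normal(scale=0.2, size=sensors.size)  # synthetic measurements

def log_post(center, amp):
    if not (0 <= center <= 1 and 0 < amp < 10):      # flat priors on a box
        return -np.inf
    resid = data - forward(center, amp)
    return -0.5 * np.sum(resid ** 2) / 0.2 ** 2

samples, state, lp = [], np.array([0.5, 1.0]), log_post(0.5, 1.0)
for _ in range(5000):
    prop = state + rng.normal(scale=[0.02, 0.1])     # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        state, lp = prop, lp_prop
    samples.append(state.copy())
samples = np.array(samples)[1000:]                   # drop burn-in
print("posterior mean (center, amp):", samples.mean(axis=0).round(3))
```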

  18. Comparing Models GRM, Refraction Tomography and Neural Network to Analyze Shallow Landslide

    Directory of Open Access Journals (Sweden)

    Armstrong F. Sompotan

    2011-11-01

    Full Text Available Detailed investigations of landslides are essential to understand fundamental landslide mechanisms. The seismic refraction method has been proven to be a useful geophysical tool for investigating shallow landslides. The objective of this study is to introduce a new workflow using a neural network in analyzing seismic refraction data and to compare the result with established methods, namely the general reciprocal method (GRM) and refraction tomography. The GRM is effective when the velocity structure is relatively simple and refractors are gently dipping. Refraction tomography is capable of modeling the complex velocity structures of landslides. Neural networks show particular potential in applications where conventional numerical methods are time consuming and complicated. Neural networks seem to have the ability to establish a relationship between an input and output space for mapping seismic velocity. Therefore, we made a preliminary attempt to evaluate the applicability of a neural network to determine the velocity and elevation of subsurface synthetic models corresponding to arrival times. The training and testing of the neural network were successfully accomplished using the synthetic data. Furthermore, we evaluated the neural network using observed data. The result of the evaluation indicates that the neural network can compute velocity and elevation corresponding to arrival times. The similarity of those models shows the success of the neural network as a new alternative in seismic refraction data interpretation.
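
    The mapping described (arrival times in, velocity and refractor elevation out) can be sketched with a small feed-forward network trained on synthetic two-layer travel-time curves; the crude travel-time formula, parameter ranges, and network architecture below are assumptions, not the study's workflow.

```python
# Hedged sketch: neural network mapping first-arrival times to layer velocity and depth.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
offsets = np.linspace(5, 100, 24)                       # geophone offsets (m)

def travel_times(v1, v2, depth):
    # Crude two-layer model: direct wave vs. head wave, take the earlier arrival.
    direct = offsets / v1
    refracted = offsets / v2 + 2 * depth * np.sqrt(1 / v1**2 - 1 / v2**2)
    return np.minimum(direct, refracted)

n = 2000
v1 = rng.uniform(300, 800, n)                           # overburden velocity (m/s)
v2 = rng.uniform(1000, 3000, n)                         # refractor velocity (m/s)
depth = rng.uniform(2, 20, n)                           # refractor depth (m), elevation proxy
X = np.array([travel_times(a, b, d) for a, b, d in zip(v1, v2, depth)])
Y = np.column_stack([v2, depth])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=4))
model.fit(X[:1600], Y[:1600])
print("held-out example (true vs predicted):",
      Y[1600].round(1), model.predict(X[1600:1601]).round(1))
```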

  19. Shared memories reveal shared structure in neural activity across individuals

    Science.gov (United States)

    Chen, J.; Leong, Y.C.; Honey, C.J.; Yong, C.H.; Norman, K.A.; Hasson, U.

    2016-01-01

    Our lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? Participants viewed a fifty-minute movie, then verbally described the events during functional MRI, producing unguided detailed descriptions lasting up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated in default-network, medial-temporal, and high-level visual areas. Individual event patterns were both highly discriminable from one another and similar between people, suggesting consistent spatial organization. In many high-order areas, patterns were more similar between people recalling the same event than between recall and perception, indicating systematic reshaping of percept into memory. These results reveal the existence of a common spatial organization for memories in high-level cortical areas, where encoded information is largely abstracted beyond sensory constraints; and that neural patterns during perception are altered systematically across people into shared memory representations for real-life events. PMID:27918531

  20. Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines

    Directory of Open Access Journals (Sweden)

    Poramate eManoonpong

    2013-02-01

    Full Text Available Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements show the impression of elegance including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback while internal models are used for sensory prediction and state estimations. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during the stance phase, stepping on or hitting an obstacle during the swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines.

  1. A new neural network model for solving random interval linear programming problems.

    Science.gov (United States)

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control.

    Science.gov (United States)

    Yang, Shiju; Li, Chuandong; Huang, Tingwen

    2016-03-01

    The problem of exponential stabilization and synchronization for fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the knowledge of memristor and recurrent neural network, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using the Lyapunov functional and differential inequality techniques. It is worth noting that the methods used in this paper are also applied to fuzzy model for complex networks and general neural networks. Numerical simulations are also provided to verify the effectiveness of theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Global convergence of periodic solution of neural networks with discontinuous activation functions

    International Nuclear Information System (INIS)

    Huang Lihong; Guo Zhenyuan

    2009-01-01

    In this paper, without assuming boundedness and monotonicity of the activation functions, we establish some sufficient conditions ensuring the existence and global asymptotic stability of periodic solution of neural networks with discontinuous activation functions by using the Yoshizawa-like theorem and constructing proper Lyapunov function. The obtained results improve and extend previous works.

  4. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    Science.gov (United States)

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.

  5. SWANN: The Snow Water Artificial Neural Network Modelling System

    Science.gov (United States)

    Broxton, P. D.; van Leeuwen, W.; Biederman, J. A.

    2017-12-01

    Snowmelt from mountain forests is important for water supply and ecosystem health. Along Arizona's Mogollon Rim, snowmelt contributes to rivers and streams that provide a significant water supply for hydro-electric power generation, agriculture, and human consumption in central Arizona. In this project, we are building a snow monitoring system for the Salt River Project (SRP), which supplies water and power to millions of customers in the Phoenix metropolitan area. We are using process-based hydrological models and artificial neural networks (ANNs) to generate information about both snow water equivalent (SWE) and snow cover. The snow-cover data are generated with ANNs that are applied to Landsat and MODIS satellite reflectance data. The SWE data are generated using a combination of gridded SWE estimates generated by process-based snow models and ANNs that account for variations in topography, forest cover, and solar radiation. The models are trained and evaluated with snow data from SNOTEL stations, from aerial LiDAR and field data that we collected this past winter in northern Arizona, and with similar data from other sites in the Southwest US. These snow data are produced in near-real time, and we have built a prototype decision support tool to deliver them to SRP. This tool is designed to provide daily- to annual-scale operational monitoring of spatial and temporal changes in SWE and snow cover conditions over the entire Salt River Watershed (covering 17,000 km²), and features advanced web mapping capabilities and watershed analytics displayed as graphical data.

  6. Modal demultiplexing properties of tapered and nanostructured optical fibers for in vivo optogenetic control of neural activity.

    Science.gov (United States)

    Pisanello, Marco; Della Patria, Andrea; Sileo, Leonardo; Sabatini, Bernardo L; De Vittorio, Massimo; Pisanello, Ferruccio

    2015-10-01

    Optogenetic approaches to manipulate neural activity have revolutionized the ability of neuroscientists to uncover the functional connectivity underlying brain function. At the same time, the increasing complexity of in vivo optogenetic experiments has increased the demand for new techniques to precisely deliver light into the brain, in particular to illuminate selected portions of the neural tissue. Tapered and nanopatterned gold-coated optical fibers were recently proposed as minimally invasive multipoint light delivery devices, allowing for site-selective optogenetic stimulation in the mammalian brain [Pisanello et al., Neuron 82, 1245 (2014)]. Here we demonstrate that the working principle behind these devices is based on the mode-selective photonic properties of the fiber taper. Using analytical and ray tracing models we model the finite conductance of the metal coating, and show that single or multiple optical windows located at specific taper sections can outcouple only specific subsets of guided modes injected into the fiber.

  7. ALADDIN: a neural model for event classification in dynamic processes

    International Nuclear Information System (INIS)

    Roverso, Davide

    1998-02-01

    ALADDIN is a prototype system which combines fuzzy clustering techniques and artificial neural network (ANN) models in a novel approach to the problem of classifying events in dynamic processes. The main motivation for the development of such a system derived originally from the problem of finding new principled methods to perform alarm structuring/suppression in a nuclear power plant (NPP) alarm system. One such method consists in basing the alarm structuring/suppression on a fast recognition of the event generating the alarms, so that a subset of alarms sufficient to efficiently handle the current fault can be selected to be presented to the operator, minimizing in this way the operator's workload in a potentially stressful situation. The scope of application of a system like ALADDIN goes however beyond alarm handling, to include diagnostic tasks in general. The eventual application of the system to domains other than NPPs was also taken into special consideration during the design phase. In this document we report on the first phase of the ALADDIN project which consisted mainly in a comparative study of a series of ANN-based approaches to event classification, and on the proposal of a first system prototype which is to undergo further tests and, eventually, be integrated in existing alarm, diagnosis, and accident management systems such as CASH, IDS, and CAMS. (author)

  8. Neural network models: from biology to many - body phenomenology

    International Nuclear Information System (INIS)

    Clark, J.W.

    1993-01-01

    This article reviews the current surge of research on the practical side of neural networks and their utility in memory storage/recall, pattern recognition and classification. The initial attraction of neural networks as dynamical and statistical systems is examined. From the view of a many-body theorist, the neurons may be thought of as particles, and the weighted connections between the units as the interactions between these particles. Finally, the author shows how the impressive capabilities of artificial neural networks in pattern recognition and classification may be exploited to solve data management problems in experimental physics and to aid the discovery of radically new theoretical descriptions of physical problems, illustrating how neural networks can be used in physics. (A.B.)

  9. Generalized Net Model of the Cognitive and Neural Algorithm for Adaptive Resonance Theory 1

    Directory of Open Access Journals (Sweden)

    Todor Petkov

    2013-12-01

    Full Text Available Artificial neural networks are inspired by the biological properties of human and animal brains. One type of neural network is called ART [4]. The abbreviation ART stands for Adaptive Resonance Theory, which was invented by Stephen Grossberg in 1976 [5]. ART represents a family of neural networks. It is a cognitive and neural theory that describes how the brain autonomously learns to categorize, recognize and predict objects and events in the changing world. In this paper we introduce a GN model that represents the ART1 neural network learning algorithm [1]. The purpose of this model is to explain when an input vector will be clustered or rejected among the nodes of the network. It can also be used for explanation and optimization of the ART1 learning algorithm.
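
    A heavily simplified ART1-style sketch of the cluster-or-reject decision that the GN model describes is given below: an input joins the best-matching existing category only if it passes the vigilance test, otherwise a new category node is committed. The choice function, vigilance value, and patterns are illustrative simplifications of the full ART1 algorithm.

```python
# Hedged, simplified ART1-style clustering: vigilance test decides accept vs. reject.
import numpy as np

def art1(inputs, rho=0.7, beta=1.0):
    categories = []                                   # prototype (top-down) vectors
    assignments = []
    for I in inputs:
        I = np.asarray(I, dtype=float)
        # Rank existing categories by a simplified choice (bottom-up) function.
        scores = [np.sum(np.minimum(I, w)) / (beta + np.sum(w)) for w in categories]
        chosen = None
        for j in np.argsort(scores)[::-1]:
            w = categories[j]
            match = np.sum(np.minimum(I, w)) / np.sum(I)   # vigilance test
            if match >= rho:
                categories[j] = np.minimum(I, w)           # resonance: learn (fast learning)
                chosen = int(j)
                break                                      # otherwise: reset and try next
        if chosen is None:                                 # all rejected: commit new node
            categories.append(I.copy())
            chosen = len(categories) - 1
        assignments.append(chosen)
    return assignments, categories

patterns = [[1, 1, 0, 0, 0], [1, 1, 1, 0, 0], [0, 0, 0, 1, 1], [0, 0, 1, 1, 1]]
print(art1(patterns, rho=0.6)[0])    # with this vigilance: [0, 0, 1, 1]
```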

  10. Modelling of solar energy potential in Nigeria using an artificial neural network model

    International Nuclear Information System (INIS)

    Fadare, D.A.

    2009-01-01

    In this study, an artificial neural network (ANN) based model for the prediction of solar energy potential in Nigeria (lat. 4-14°N, long. 2-15°E) was developed. Standard multilayered, feed-forward, back-propagation neural networks with different architectures were designed using the neural network toolbox for MATLAB. Geographical and meteorological data of 195 cities in Nigeria for a period of 10 years (1983-1993) from the NASA geo-satellite database were used for training and testing the network. Meteorological and geographical data (latitude, longitude, altitude, month, mean sunshine duration, mean temperature, and relative humidity) were used as inputs to the network, while the solar radiation intensity was used as the output of the network. The results show that the correlation coefficients between the ANN predictions and the actual mean monthly global solar radiation intensities for the training and testing datasets were higher than 90%, suggesting a high reliability of the model for the evaluation of solar radiation in locations where solar radiation data are not available. The predicted solar radiation values from the model were presented in the form of monthly maps. The monthly mean solar radiation potential in the northern and southern regions ranged from 7.01 to 5.62 and from 5.43 to 3.54 kWh/m²·day, respectively. A graphical user interface (GUI) was developed for the application of the model. The model can be used easily for the estimation of solar radiation for the preliminary design of solar applications.

  11. An interpretable LSTM neural network for autoregressive exogenous model

    OpenAIRE

    Guo, Tian; Lin, Tao; Lu, Yao

    2018-01-01

    In this paper, we propose an interpretable LSTM recurrent neural network, i.e., multi-variable LSTM for time series with exogenous variables. Currently, widely used attention mechanism in recurrent neural networks mostly focuses on the temporal aspect of data and falls short of characterizing variable importance. To this end, our multi-variable LSTM equipped with tensorized hidden states is developed to learn variable specific representations, which give rise to both temporal and variable lev...
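
    A bare-bones autoregressive-exogenous LSTM in PyTorch is sketched below to make the setting concrete; it is not the paper's multi-variable LSTM with tensorized hidden states, and the data, lag handling, and sizes are toy assumptions.

```python
# Hedged sketch: an LSTM forecaster with exogenous inputs and a lagged target.
import torch
import torch.nn as nn

torch.manual_seed(0)
T, n_exog = 200, 3
exog = torch.randn(1, T, n_exog)                        # exogenous driving series (synthetic)
target = exog.sum(dim=2, keepdim=True).cumsum(dim=1) * 0.01

class ARXLSTM(nn.Module):
    def __init__(self, n_exog, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_exog + 1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, exog, lagged_target):
        x = torch.cat([exog, lagged_target], dim=2)     # autoregressive + exogenous inputs
        h, _ = self.lstm(x)
        return self.head(h)

model = ARXLSTM(n_exog)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lagged = torch.cat([torch.zeros(1, 1, 1), target[:, :-1]], dim=1)  # one-step lagged target
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(exog, lagged), target)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```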

  12. Activational and effort-related aspects of motivation: neural mechanisms and implications for psychopathology

    Science.gov (United States)

    Yohn, Samantha E.; López-Cruz, Laura; San Miguel, Noemí; Correa, Mercè

    2016-01-01

    Abstract Motivation has been defined as the process that allows organisms to regulate their internal and external environment, and control the probability, proximity and availability of stimuli. As such, motivation is a complex process that is critical for survival, which involves multiple behavioural functions mediated by a number of interacting neural circuits. Classical theories of motivation suggest that there are both directional and activational aspects of motivation, and activational aspects (i.e. speed and vigour of both the instigation and persistence of behaviour) are critical for enabling organisms to overcome work-related obstacles or constraints that separate them from significant stimuli. The present review discusses the role of brain dopamine and related circuits in behavioural activation, exertion of effort in instrumental behaviour, and effort-related decision-making, based upon both animal and human studies. Impairments in behavioural activation and effort-related aspects of motivation are associated with psychiatric symptoms such as anergia, fatigue, lassitude and psychomotor retardation, which cross multiple pathologies, including depression, schizophrenia, and Parkinson’s disease. Therefore, this review also attempts to provide an interdisciplinary approach that integrates findings from basic behavioural neuroscience, behavioural economics, clinical neuropsychology, psychiatry, and neurology, to provide a coherent framework for future research and theory in this critical field. Although dopamine systems are a critical part of the brain circuitry regulating behavioural activation, exertion of effort, and effort-related decision-making, mesolimbic dopamine is only one part of a distributed circuitry that includes multiple neurotransmitters and brain areas. Overall, there is a striking similarity between the brain areas involved in behavioural activation and effort-related processes in rodents and in humans. Animal models of effort

  13. Activational and effort-related aspects of motivation: neural mechanisms and implications for psychopathology.

    Science.gov (United States)

    Salamone, John D; Yohn, Samantha E; López-Cruz, Laura; San Miguel, Noemí; Correa, Mercè

    2016-05-01

    Motivation has been defined as the process that allows organisms to regulate their internal and external environment, and control the probability, proximity and availability of stimuli. As such, motivation is a complex process that is critical for survival, which involves multiple behavioural functions mediated by a number of interacting neural circuits. Classical theories of motivation suggest that there are both directional and activational aspects of motivation, and activational aspects (i.e. speed and vigour of both the instigation and persistence of behaviour) are critical for enabling organisms to overcome work-related obstacles or constraints that separate them from significant stimuli. The present review discusses the role of brain dopamine and related circuits in behavioural activation, exertion of effort in instrumental behaviour, and effort-related decision-making, based upon both animal and human studies. Impairments in behavioural activation and effort-related aspects of motivation are associated with psychiatric symptoms such as anergia, fatigue, lassitude and psychomotor retardation, which cross multiple pathologies, including depression, schizophrenia, and Parkinson's disease. Therefore, this review also attempts to provide an interdisciplinary approach that integrates findings from basic behavioural neuroscience, behavioural economics, clinical neuropsychology, psychiatry, and neurology, to provide a coherent framework for future research and theory in this critical field. Although dopamine systems are a critical part of the brain circuitry regulating behavioural activation, exertion of effort, and effort-related decision-making, mesolimbic dopamine is only one part of a distributed circuitry that includes multiple neurotransmitters and brain areas. Overall, there is a striking similarity between the brain areas involved in behavioural activation and effort-related processes in rodents and in humans. Animal models of effort-related decision

  14. The simplest maximum entropy model for collective behavior in a neural network

    International Nuclear Information System (INIS)

    Tkačik, Gašper; Marre, Olivier; Mora, Thierry; Amodei, Dario; Bialek, William; Berry II, Michael J

    2013-01-01

    Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural ‘thermodynamics’ for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy. (paper)
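
    The model class described has a convenient closed form: the maximum entropy distribution consistent with P(K) assigns equal probability to all binary patterns with the same population count K, i.e. P(σ) = P(K)/C(N, K). The sketch below estimates P(K) from a synthetic, weakly correlated raster and reports the implied per-pattern "energy"; the raster and its parameters are assumptions for illustration.

```python
# Hedged numerical illustration: maxent model constrained only by P(K).
import numpy as np
from math import comb

rng = np.random.default_rng(5)
N, n_bins = 40, 50000
# Correlated toy raster: a shared gain makes large-K events more common
# than in an independent-neuron model.
gain = rng.gamma(shape=2.0, scale=0.5, size=n_bins)
spikes = rng.random((n_bins, N)) < 0.03 * gain[:, None]

K = spikes.sum(axis=1)
P_K = np.bincount(K, minlength=N + 1) / n_bins          # empirical P(K)

# Per-pattern log-probability and effective "energy" under the maxent model:
# P(sigma) = P(K) / C(N, K), so E(K) = -log P(sigma) up to an additive constant.
with np.errstate(divide="ignore"):
    logP_sigma = np.log(P_K) - np.log([comb(N, k) for k in range(N + 1)])
energy = -logP_sigma

for k in (0, 1, 5, 10):
    print(f"K={k:2d}  P(K)={P_K[k]:.4f}  E(K)={energy[k]:.2f}" if P_K[k] > 0
          else f"K={k:2d}  unobserved")
```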

  15. Vascular Endothelial Growth Factor Receptor 3 Controls Neural Stem Cell Activation in Mice and Humans

    Directory of Open Access Journals (Sweden)

    Jinah Han

    2015-02-01

    Full Text Available Neural stem cells (NSCs) continuously produce new neurons within the adult mammalian hippocampus. NSCs are typically quiescent but activated to self-renew or differentiate into neural progenitor cells. The molecular mechanisms of NSC activation remain poorly understood. Here, we show that adult hippocampal NSCs express vascular endothelial growth factor receptor 3 (VEGFR3) and its ligand VEGF-C, which activates quiescent NSCs to enter the cell cycle and generate progenitor cells. Hippocampal NSC activation and neurogenesis are impaired by conditional deletion of Vegfr3 in NSCs. Functionally, this is associated with compromised NSC activation in response to VEGF-C and physical activity. In NSCs derived from human embryonic stem cells (hESCs), VEGF-C/VEGFR3 mediates intracellular activation of AKT and ERK pathways that control cell fate and proliferation. These findings identify VEGF-C/VEGFR3 signaling as a specific regulator of NSC activation and neurogenesis in mammals.

  16. Modelling the nonlinearity of piezoelectric actuators in active ...

    African Journals Online (AJOL)

    Piezoelectric actuators have great capabilities as elements of intelligent structures for active vibration cancellation. One problem with this type of actuator is its nonlinear behaviour. In active vibration control systems, it is important to have an accurate model of the control branch. This paper demonstrates the ability of neural ...

  17. Increased Neural Activation during Picture Encoding and Retrieval in 60-Year-Olds Compared to 20-Year-Olds

    Science.gov (United States)

    Burgmans, S.; van Boxtel, M. P. J.; Vuurman, E. F. P. M.; Evers, E. A. T.; Jolles, J.

    2010-01-01

    Brain aging has been associated with both reduced and increased neural activity during task execution. The purpose of the present study was to investigate whether increased neural activation during memory encoding and retrieval is already present at the age of 60 as well as to obtain more insight into the mechanism behind increased activity.…

  18. Neural model of gene regulatory network: a survey on supportive meta-heuristics.

    Science.gov (United States)

    Biswas, Surama; Acharyya, Sriyankar

    2016-06-01

    A gene regulatory network (GRN) is produced as a result of regulatory interactions between different genes through their coded proteins in a cellular context. Having immense importance in disease detection and drug discovery, GRNs have been modelled through various mathematical and computational schemes and reported in survey articles. Neural and neuro-fuzzy models have attracted particular attention in bioinformatics, and the predominant use of meta-heuristic algorithms in training neural models has proved their effectiveness. Considering these facts, this paper surveys neural modelling schemes for GRNs and the efficacy of meta-heuristic algorithms for parameter learning (i.e. learning the connection weights) within the model. The survey covers two different structure-related approaches to inferring GRNs, the global structure approach and the substructure approach. It also describes two neural modelling schemes: artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRNs are reviewed here.

  19. Artificial neural network modelling approach for a biomass gasification process in fixed bed gasifiers

    International Nuclear Information System (INIS)

    Mikulandrić, Robert; Lončar, Dražen; Böhning, Dorith; Böhme, Rene; Beckmann, Michael

    2014-01-01

    Highlights: • 2 different equilibrium models are developed and their performance is analysed. • Neural network prediction models for 2 different fixed bed gasifier types are developed. • The influence of different input parameters on neural network model performance is analysed. • A methodology for neural network model development for different gasifier types is described. • Neural network models are verified for various operating conditions based on measured data. - Abstract: The number of small and middle-scale biomass gasification combined heat and power plants as well as syngas production plants has increased significantly in the last decade, mostly due to extensive incentives. However, existing issues regarding syngas quality, process efficiency, emissions and environmental standards are preventing biomass gasification technology from becoming more economically viable. To address these issues, special attention is given to the development of mathematical models which can be used for process analysis or plant control purposes. The presented paper analyses the capability of neural networks to predict process parameters with high speed and accuracy. After a related literature review and measurement data analysis, different modelling approaches for process parameter prediction that can be used for on-line process control were developed and their performance was analysed. Neural network models showed a good capability to predict biomass gasification process parameters with reasonable accuracy and speed. Measurement data for the model development, verification and performance analysis were derived from a biomass gasification plant operated by Technical University Dresden

  20. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    The nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS) which is a substitute for oil fuel, based on physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of parent coal are 3 indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction result than the traditional polynomial regression equation. The BP neural network model with 3 input factors of HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error of 0.40%, which is much lower than that of 1.15% given by the traditional polynomial regression equation. (author)
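
    Since the abstract names Levenberg-Marquardt training on three inputs (HGI, moisture, oxygen/carbon ratio), the sketch below trains a tiny one-hidden-layer network with scipy.optimize.least_squares(method="lm"); the coal data are synthetic stand-ins and the network size is an assumption.

```python
# Hedged sketch: 3-input BP network trained by Levenberg-Marquardt least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
n, n_hidden = 37, 4
X = np.column_stack([rng.uniform(40, 100, n),      # HGI (synthetic)
                     rng.uniform(1, 15, n),        # moisture, wt% (synthetic)
                     rng.uniform(0.05, 0.25, n)])  # oxygen/carbon ratio (synthetic)
Xs = (X - X.mean(0)) / X.std(0)
y = 68 + 2.5 * Xs[:, 0] - 3.0 * Xs[:, 1] - 1.5 * Xs[:, 2] + rng.normal(0, 0.4, n)

def unpack(p):
    W1 = p[:3 * n_hidden].reshape(n_hidden, 3)
    b1 = p[3 * n_hidden:4 * n_hidden]
    W2 = p[4 * n_hidden:5 * n_hidden]
    b2 = p[-1]
    return W1, b1, W2, b2

def predict(p, X):
    W1, b1, W2, b2 = unpack(p)
    return np.tanh(X @ W1.T + b1) @ W2 + b2

residuals = lambda p: predict(p, Xs) - y
p0 = rng.normal(scale=0.1, size=5 * n_hidden + 1)
p0[-1] = y.mean()                                   # start output bias near target mean
fit = least_squares(residuals, p0, method="lm")     # Levenberg-Marquardt
mae = np.mean(np.abs(predict(fit.x, Xs) - y))
print("training mean absolute error (wt%):", round(mae, 2))
```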

  1. Adaptive control using a hybrid-neural model: application to a polymerisation reactor

    Directory of Open Access Journals (Sweden)

    Cubillos F.

    2001-01-01

    Full Text Available This work presents the use of a hybrid-neural model for predictive control of a plug flow polymerisation reactor. The hybrid-neural model (HNM) is based on fundamental conservation laws associated with a neural network (NN) used to model the uncertain parameters. By simulations, the performance of this approach was studied for a peroxide-initiated styrene tubular reactor. The HNM was synthesised for a CSTR reactor with a radial basis function neural net (RBFN) used to estimate the reaction rates recursively. The adaptive HNM was incorporated in two model predictive control strategies, a direct synthesis scheme and an optimum steady state scheme. Tests for servo and regulator control showed excellent behaviour in following different setpoint variations and rejecting perturbations. The good generalisation and training capacities of hybrid models, associated with the simplicity and robustness characteristics of the MPC formulations, make an attractive combination for the control of a polymerisation reactor.
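
    The structure of a hybrid-neural model can be sketched as a first-principles balance equation whose unknown rate term is supplied by a small radial basis function network. The kinetics, RBF centers, and reactor parameters below are invented for illustration and are not the paper's styrene polymerisation model.

```python
# Hedged structural sketch of a hybrid-neural model: conservation law + RBF rate estimate.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(7)

# "Plant" rate law, pretended unknown; used only to generate training data.
true_rate = lambda C: 0.8 * C / (1.0 + 0.5 * C)

# Fit RBF weights by linear least squares on noisy rate measurements.
centers = np.linspace(0.0, 2.0, 6)
phi = lambda C: np.exp(-((np.atleast_1d(C)[:, None] - centers) ** 2) / (2 * 0.3 ** 2))
C_data = rng.uniform(0.0, 2.0, 40)
r_data = true_rate(C_data) + rng.normal(0, 0.01, 40)
w, *_ = np.linalg.lstsq(phi(C_data), r_data, rcond=None)
rbf_rate = lambda C: (phi(C) @ w).item()

# Hybrid model: first-principles mass balance + neural (RBF) rate term.
def cstr(t, C, C_in=1.5, tau=5.0):
    return [(C_in - C[0]) / tau - rbf_rate(C[0])]

sol = solve_ivp(cstr, (0.0, 30.0), [0.2], max_step=0.5)
print("steady-state concentration (hybrid model):", round(sol.y[0, -1], 3))
```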

  2. Abnormal neural activities of directional brain networks in patients with long-term bilateral hearing loss.

    Science.gov (United States)

    Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu

    2017-10-13

    The objective of this study is to provide implications for the rehabilitation of hearing impairment by investigating changes in the neural activities of directional brain networks in patients with long-term bilateral hearing loss. Firstly, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss, and 10 subjects with normal hearing), and these tests revealed significant differences between the deaf group and the controls. Then we constructed an individual-specific virtual brain based on functional magnetic resonance data of the participants by utilizing effective connectivity and multivariate regression methods. We applied a stimulating signal to the primary auditory cortices of the virtual brain and observed the brain region activations. We found that patients with long-term bilateral hearing loss presented weaker brain region activations in the auditory and language networks, but enhanced neural activities in the default mode network, as compared with normally hearing subjects. In particular, the right cerebral hemisphere presented more changes than the left. Additionally, weaker neural activities in the primary auditory cortices were also strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interactional circuits among the activated brain regions, and these interregional causal interactions implied that abnormal neural activities of the directional brain networks in the deaf patients impacted cognitive function.

  3. An Integrative Model for the Neural Mechanism of Eye Movement Desensitization and Reprocessing (EMDR).

    Science.gov (United States)

    Coubard, Olivier A

    2016-01-01

    Since the seminal report by Shapiro that bilateral stimulation induces cognitive and emotional changes, 26 years of basic and clinical research have examined the effects of Eye Movement Desensitization and Reprocessing (EMDR) in anxiety disorders, particularly in post-traumatic stress disorder (PTSD). The present article aims at better understanding the EMDR neural mechanism. I first review procedural aspects of the EMDR protocol and theoretical hypotheses about EMDR effects, and develop the reasons why the scientific community is still divided about EMDR. I then slide from psychology to physiology, describing eye movement/emotion interactions from the physiological viewpoint, and introduce theoretical and technical tools used in movement research to re-examine the EMDR neural mechanism. Using a recent physiological model for the neuropsychological architecture of motor and cognitive control, the Threshold Interval Modulation with Early Release-Rate of rIse Deviation with Early Release (TIMER-RIDER) model, I explore how attentional control and bilateral stimulation may contribute to EMDR effects. These effects may be obtained by two processes acting in parallel: (i) enhancement of the activity level of the attentional control component; and (ii) bilateral stimulation in any sensorimotor modality, both resulting in lower inhibition, enabling dysfunctional information to be processed and anxiety to be reduced. The TIMER-RIDER model offers quantitative predictions about EMDR effects for future research on its underlying physiological mechanisms.

  4. An integrative model for the neural mechanism of Eye Movement Desensitization and Reprocessing (EMDR)

    Directory of Open Access Journals (Sweden)

    Olivier A. Coubard

    2016-04-01

    Full Text Available Since the seminal report by Shapiro that bilateral stimulation induces cognitive and emotional changes, twenty-six years of basic and clinical research have examined the effects of Eye Movement Desensitization and Reprocessing (EMDR) in anxiety disorders, particularly in Post-Traumatic Stress Disorder (PTSD). The present article aims at better understanding the EMDR neural mechanism. I first review procedural aspects of the EMDR protocol and theoretical hypotheses about EMDR effects, and develop the reasons why the scientific community is still divided about EMDR. I then slide from psychology to physiology, describing eye movement/emotion interactions from the physiological viewpoint, and introduce theoretical and technical tools used in movement research to re-examine the EMDR neural mechanism. Using a recent physiological model for the neuropsychological architecture of motor and cognitive control, the Threshold Interval Modulation with Early Release-Rate of rIse Deviation with Early Release (TIMER-RIDER) model, I explore how attentional control and bilateral stimulation may contribute to EMDR effects. These effects may be obtained by two processes acting in parallel: (i) enhancement of the activity level of the attentional control component; and (ii) bilateral stimulation in any sensorimotor modality, both resulting in lower inhibition, enabling dysfunctional information to be processed and anxiety to be reduced. The TIMER-RIDER model offers quantitative predictions about EMDR effects for future research on its underlying physiological mechanisms.

  5. Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.

    Science.gov (United States)

    Li, Shuai; Li, Yangming

    2013-10-28

    The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, the computational burden increases sharply as the sampling period decreases and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recently proposed recurrent neural network by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. The advancements in complex-valued neural networks cast light on extending the existing real-valued ZNN for solving the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated, and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
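
    A real-valued toy of the ZNN design formula Ė(t) = -γΦ(E(t)), applied to the time-varying Sylvester equation A(t)X + XB(t) = C(t), is sketched below; each Euler step solves a constant-coefficient Sylvester equation for Ẋ. A linear Φ is used instead of the sign-bi-power activation discussed in the abstract, and the matrices, γ, and step size are illustrative assumptions.

```python
# Hedged toy of a (real-valued, linear-activation) ZNN for A(t)X + X B(t) = C(t).
import numpy as np
from scipy.linalg import solve_sylvester

gamma, dt, T = 10.0, 1e-3, 2.0
A = lambda t: np.array([[2.0 + np.sin(t), 0.2], [0.0, 3.0]])
B = lambda t: np.array([[1.0, 0.0], [0.3, 2.0 + np.cos(t)]])
C = lambda t: np.array([[np.sin(t), 1.0], [0.5, np.cos(t)]])

def num_deriv(f, t, h=1e-6):
    # Central-difference time derivative of a matrix-valued function.
    return (f(t + h) - f(t - h)) / (2 * h)

X = np.zeros((2, 2))
for k in range(int(T / dt)):
    t = k * dt
    E = A(t) @ X + X @ B(t) - C(t)                 # error matrix
    # From E_dot = -gamma * E:  A X_dot + X_dot B = C_dot - A_dot X - X B_dot - gamma E
    rhs = num_deriv(C, t) - num_deriv(A, t) @ X - X @ num_deriv(B, t) - gamma * E
    X_dot = solve_sylvester(A(t), B(t), rhs)
    X = X + dt * X_dot                             # explicit Euler step

residual = np.linalg.norm(A(T) @ X + X @ B(T) - C(T))
print("Sylvester residual at t = T:", round(residual, 6))
```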

  6. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward.

    Science.gov (United States)

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios; Musallam, Sam

    2014-10-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. Copyright © 2014 the American Physiological Society.

  7. Dynamic modeling of physical phenomena for PRAs using neural networks

    International Nuclear Information System (INIS)

    Benjamin, A.S.; Brown, N.N.; Paez, T.L.

    1998-04-01

    In most probabilistic risk assessments, there is a set of accident scenarios that involves the physical responses of a system to environmental challenges. Examples include the effects of earthquakes and fires on the operability of a nuclear reactor safety system, the effects of fires and impacts on the safety integrity of a nuclear weapon, and the effects of human intrusions on the transport of radionuclides from an underground waste facility. The physical responses of the system to these challenges can be quite complex, and their evaluation may require the use of detailed computer codes that are very time consuming to execute. Yet, to perform meaningful probabilistic analyses, it is necessary to evaluate the responses for a large number of variations in the input parameters that describe the initial state of the system, the environments to which it is exposed, and the effects of human interaction. Because the uncertainties of the system response may be very large, it may also be necessary to perform these evaluations for various values of modeling parameters that have high uncertainties, such as material stiffnesses, surface emissivities, and ground permeabilities. The authors have been exploring the use of artificial neural networks (ANNs) as a means for estimating the physical responses of complex systems to phenomenological events such as those cited above. These networks are designed as mathematical constructs with adjustable parameters that can be trained so that the results obtained from the networks will simulate the results obtained from the detailed computer codes. The intent is for the networks to provide an adequate simulation of the detailed codes over a significant range of variables while requiring only a small fraction of the computer processing time required by the detailed codes. This enables the authors to integrate the physical response analyses into the probabilistic models in order to estimate the probabilities of various responses
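
    The surrogate idea can be sketched as follows: train a small ANN on a limited number of expensive code runs, then push many Monte Carlo samples of the uncertain inputs through the cheap surrogate. The "expensive_code" function, input ranges, and threshold below are placeholders, not any actual response model from the paper.

```python
# Hedged sketch: ANN surrogate for an expensive physics code inside a Monte Carlo loop.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)

def expensive_code(stiffness, emissivity, load):
    # Placeholder for a detailed thermal/structural simulation.
    return load / stiffness + 50.0 * emissivity * np.log1p(load)

# A modest number of "code runs" used to train the surrogate.
X_train = rng.uniform([1.0, 0.1, 10.0], [5.0, 0.9, 100.0], size=(200, 3))
y_train = expensive_code(*X_train.T)

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=8))
surrogate.fit(X_train, y_train)

# Cheap probabilistic analysis: many samples evaluated through the surrogate.
X_mc = rng.uniform([1.0, 0.1, 10.0], [5.0, 0.9, 100.0], size=(100000, 3))
response = surrogate.predict(X_mc)
print("P(response > 150):", float((response > 150).mean()))
```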

  8. Information content of neural networks with self-control and variable activity

    International Nuclear Information System (INIS)

    Bolle, D.; Amari, S.I.; Dominguez Carreta, D.R.C.; Massolo, G.

    2001-01-01

    A self-control mechanism for the dynamics of neural networks with variable activity is discussed using a recursive scheme for the time evolution of the local field. It is based upon the introduction of a self-adapting time-dependent threshold as a function of both the neural and pattern activity in the network. This mechanism leads to an improvement of the information content of the network as well as an increase of the storage capacity and the basins of attraction. Different architectures are considered and the results are compared with numerical simulations

  9. The necessity of connection structures in neural models of variable binding.

    Science.gov (United States)

    van der Velde, Frank; de Kamps, Marc

    2015-08-01

    In his review of neural binding problems, Feldman (Cogn Neurodyn 7:1-11, 2013) addressed two types of models as solutions of (novel) variable binding. One type uses labels such as phase synchrony of activation. The other ('connectivity based') type uses dedicated connection structures to achieve novel variable binding. Feldman argued that label (synchrony) based models are the only possible candidates to handle novel variable binding, whereas connectivity based models lack the flexibility required for that. We argue and illustrate that Feldman's analysis is incorrect. Contrary to his conclusion, connectivity based models are the only viable candidates for models of novel variable binding because they are the only type of models that can produce behavior. We show that the label (synchrony) based models analyzed by Feldman are in fact examples of connectivity based models. Feldman's claim that novel variable binding can be achieved without existing connection structures seems to result from analyzing the binding problem in the wrong frame of reference, in particular in an outside instead of the required inside frame of reference. Connectivity based models can be models of novel variable binding when they possess a connection structure that resembles a small-world network, as found in the brain. We illustrate binding with this type of model using episode binding and the binding of words, including novel words, in sentence structures.

  10. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    Science.gov (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project has developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission comprises four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise battery lifetime, determining the best future charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five Silver-Cadmium batteries on board Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, which is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections for new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg

  11. Neural Networks Method in modeling of the financial company’s performance

    Directory of Open Access Journals (Sweden)

    I. P. Kurochkina

    2017-01-01

    with a range of quantitative parameters: conditional-ideal, real, the worst. The authors' factor selection algorithm complements the developed model. The output of the neural network is used to form a management report on the company's financial performance. The following methods were used in the research: a systems approach to the classification of factors affecting financial results, factor analysis, and mathematical modeling in the development of the corresponding neural model. The research builds on a body of theoretical and empirical work by domestic and foreign authors. Actual data from a real economic entity are used in the verification phase of the research. The advantage of the model is the ability to track changes in the input data and indicators online and to build quality forecasts for future periods with different combinations of the whole set of factors. The proposed instrument of factor analysis has been tested on the activities of real companies. The identified factors can support growth in financial results, visualization of business processes is improved, and the likelihood of rational management decisions increases.

  12. Sex differences in neural activation following different routes of oxytocin administration in awake adult rats.

    Science.gov (United States)

    Dumais, Kelly M; Kulkarni, Praveen P; Ferris, Craig F; Veenema, Alexa H

    2017-07-01

    The neuropeptide oxytocin (OT) regulates social behavior in sex-specific ways across species. OT has promising effects on alleviating social deficits in sex-biased neuropsychiatric disorders. However, little is known about potential sexually dimorphic effects of OT on brain function. Using the rat as a model organism, we determined whether OT administered centrally or peripherally induces sex differences in brain activation. Functional magnetic resonance imaging was used to examine blood oxygen level-dependent (BOLD) signal intensity changes in the brains of awake rats during the 20 min following intracerebroventricular (ICV; 1 μg/5 μl) or intraperitoneal (IP; 0.1 mg/kg) OT administration as compared to baseline. ICV OT induced sex differences in BOLD activation in 26 out of 172 brain regions analyzed, with 20 regions showing a greater volume of activation in males (most notably the nucleus accumbens and insular cortex) and 6 regions showing a greater volume of activation in females (including the lateral and central amygdala). IP OT also elicited sex differences in BOLD activation with a greater volume of activation in males, but this activation was found in different and fewer (10) brain regions compared to ICV OT. In conclusion, exogenous OT modulates neural activation differently in male versus female rats, with the pattern and magnitude, but not the direction, of sex differences depending on the route of administration. These findings highlight the need to include both sexes in basic and clinical studies to fully understand the role of OT in brain function. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Electrocardiogram (ECG) Signal Modeling and Noise Reduction Using Hopfield Neural Networks

    Directory of Open Access Journals (Sweden)

    F. Bagheri

    2013-02-01

    The Electrocardiogram (ECG) signal is one of the diagnostic approaches to detecting heart disease. In this study the Hopfield Neural Network (HNN) is proposed and applied for ECG signal modeling and noise reduction. The HNN is a recurrent neural network that stores information in a dynamically stable pattern. The algorithm retrieves a pattern stored in memory in response to the presentation of an incomplete or noisy version of that pattern. Computer simulation results show that this method can successfully model the ECG signal and remove high-frequency noise.
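
    A minimal binary toy of the recall mechanism (real ECG waveforms would need a continuous or preprocessed representation, which the paper addresses): store one pattern with a Hebbian rule, corrupt it, and let asynchronous updates restore it.

```python
# Toy Hopfield recall: a stored "beat" shape is recovered from a noisy copy.
import numpy as np

rng = np.random.default_rng(3)
N = 64
pattern = np.where(np.sin(np.linspace(0, 4 * np.pi, N)) >= 0, 1.0, -1.0)
W = np.outer(pattern, pattern) / N                         # Hebbian storage
np.fill_diagonal(W, 0.0)

noisy = pattern.copy()
flip = rng.random(N) < 0.2                                  # 20% bit-flip "noise"
noisy[flip] *= -1

state = noisy.copy()
for _ in range(10):                                         # asynchronous updates
    for i in rng.permutation(N):
        state[i] = 1.0 if W[i] @ state >= 0 else -1.0

print("bits wrong before:", int((noisy != pattern).sum()),
      "after:", int((state != pattern).sum()))
```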

  14. Modeling of quasistatic magnetic hysteresis with feed-forward neural networks

    International Nuclear Information System (INIS)

    Makaveev, Dimitre; Dupre, Luc; De Wulf, Marc; Melkebeek, Jan

    2001-01-01

    A modeling technique for rate-independent (quasistatic) scalar magnetic hysteresis is presented, using neural networks. Based on the theory of dynamic systems and the wiping-out and congruency properties of the classical scalar Preisach hysteresis model, the choice of a feed-forward neural network model is motivated. The neural network input parameters at each time step are the corresponding magnetic field strength and memory state, thereby assuring accurate prediction of the change of magnetic induction. For rate-independent hysteresis, the current memory state can be determined by the last extreme magnetic field strength and induction values, kept in memory. The choice of a network training set is motivated and the performance of the network is illustrated for a test set not used during training. Very accurate prediction of both major and minor hysteresis loops is observed, proving that the neural network technique is suitable for hysteresis modeling. © 2001 American Institute of Physics
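
    The sketch below mirrors only the input/output structure described above (the current field plus the last extreme field and induction as the memory state); the training data come from a crude tanh-branch stand-in rather than measured quasistatic loops, the network size is arbitrary, and the induction itself is used as the target instead of the induction change for simplicity.

```python
# Toy feed-forward hysteresis model: (H, last extreme H, last extreme B) -> B.
import numpy as np
from sklearn.neural_network import MLPRegressor

def toy_branch(H, direction, Hc=20.0, Bs=1.5, a=15.0):
    """Crude rate-independent stand-in for a measured branch: a shifted tanh."""
    return Bs * np.tanh((H - direction * Hc) / a)

rows, targets = [], []
for Hmax in (40.0, 60.0, 80.0, 100.0):                       # a few loop amplitudes
    for direction, H in ((-1, np.linspace(-Hmax, Hmax, 200)),   # ascending sweep
                         (+1, np.linspace(Hmax, -Hmax, 200))):  # descending sweep
        B = toy_branch(H, direction)
        H_ext, B_ext = H[0], B[0]                            # memory state: last extremum
        rows.append(np.column_stack([H, np.full(H.size, H_ext),
                                     np.full(H.size, B_ext)]))
        targets.append(B)   # the paper predicts the *change* of induction;
                            # predicting B directly keeps the toy simple
X, y = np.vstack(rows), np.concatenate(targets)
net = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=8000,
                   random_state=0).fit(X, y)
print("training R^2:", round(net.score(X, y), 3))
```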

  15. Computational neural network regression model for Host based Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Gautam

    2016-09-01

    The current scenario of information gathering and storage in secure systems presents a challenging task due to increasing cyber-attacks. Computational neural network techniques exist for intrusion detection systems, providing security both to a single machine and to the machines of an entire network. In this paper, we have used two types of computational neural network models, namely the Generalized Regression Neural Network (GRNN) model and the Multilayer Perceptron Neural Network (MPNN) model, for a Host-based Intrusion Detection System using log files generated by a single personal computer. The simulation results show the percentage of correctly classified normal and abnormal (intrusion) classes using a confusion matrix. On the basis of the results and discussion, we found that the Host-based Intrusion System Model (HISM) significantly improved the detection accuracy while retaining a minimum false alarm rate.
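
    A hedged sketch of the supervised classification step, with synthetic features standing in for the paper's log-file data; scikit-learn has no GRNN, so only an MLP is shown, and all feature and class statistics are invented.

```python
# Train an MLP to separate normal from intrusion events and report the
# confusion matrix used to compute detection accuracy and false alarms.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(5)
n = 2000
X_normal = rng.normal(0.0, 1.0, size=(n, 8))          # 8 log-derived features (made up)
X_attack = rng.normal(1.2, 1.5, size=(n // 4, 8))     # rarer, shifted "intrusion" class
X = np.vstack([X_normal, X_attack])
y = np.concatenate([np.zeros(n), np.ones(n // 4)])    # 0 = normal, 1 = intrusion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("confusion matrix:\n", confusion_matrix(y_te, pred))
print("accuracy:", round(accuracy_score(y_te, pred), 3))
```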

  16. Effects of Some Neurobiological Factors in a Self-organized Critical Model Based on Neural Networks

    International Nuclear Information System (INIS)

    Zhou Liming; Zhang Yingyue; Chen Tianlun

    2005-01-01

    Based on an integrate-and-fire mechanism, we investigate the effects of changing the synaptic efficacy, the transmission time delay, and the relative refractory period on the self-organized criticality in our neural network model.
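
    For orientation, a bare-bones integrate-and-fire avalanche loop is sketched below; the paper's specific handling of synaptic efficacy, transmission delays, and refractory periods is not reproduced, and the parameter values are illustrative.

```python
# Minimal integrate-and-fire avalanche model: neurons accumulate input, fire on
# crossing a threshold, distribute charge to random neighbours, and avalanche
# sizes are recorded.
import numpy as np

rng = np.random.default_rng(6)
N, threshold, efficacy = 200, 1.0, 0.95      # efficacy < 1 gives slight dissipation
v = rng.random(N)
sizes = []

for step in range(20000):
    v[rng.integers(N)] += 0.01               # slow external drive
    size = 0
    while np.any(v >= threshold):
        firing = np.where(v >= threshold)[0]
        size += len(firing)
        for i in firing:
            v[i] -= threshold                # reset (integrate-and-fire)
            targets = rng.integers(0, N, 4)  # 4 random postsynaptic neurons
            v[targets] += efficacy * threshold / 4
    if size:
        sizes.append(size)

sizes = np.array(sizes)
print("avalanches:", len(sizes), " mean size:", round(sizes.mean(), 2),
      " max size:", sizes.max())
```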

  17. Neural-network analysis of irradiation hardening in low-activation steels

    Energy Technology Data Exchange (ETDEWEB)

    Kemp, R. [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ, UK (United Kingdom)]. E-mail: rk237@cam.ac.uk; Cottrell, G.A. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB, UK (United Kingdom); Bhadeshia, H.K.D.H. [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ, UK (United Kingdom); Odette, G.R. [Department of Mechanical and Environmental Engineering and Department of Materials, University of California Santa Barbara, Santa Barbara, CA 93106 (United States); Yamamoto, T. [Department of Mechanical and Environmental Engineering and Department of Materials, University of California Santa Barbara, Santa Barbara, CA 93106 (United States); Kishimoto, H. [Department of Mechanical and Environmental Engineering and Department of Materials, University of California Santa Barbara, Santa Barbara, CA 93106 (United States)

    2006-02-01

    An artificial neural network has been used to model the irradiation hardening of low-activation ferritic/martensitic steels. The data used to create the model span a range of displacement damage of 0-90 dpa, within a temperature range of 273-973 K and contain 1800 points. The trained model has been able to capture the non-linear dependence of yield strength on the chemical composition and irradiation parameters. The ability of the model to generalise on unseen data has been tested and regions within the input domain that are sparsely populated have been identified. These are the regions where future experiments could be focused. It is shown that this method of analysis, because of its ability to capture complex relationships between the many variables, could help in the design of maximally informative experiments on materials in future irradiation test facilities. This will accelerate the acquisition of the key missing knowledge to assist the materials choices in a future fusion power plant.

  18. Modeling of the height control system using artificial neural networks

    Directory of Open Access Journals (Sweden)

    A. R Tahavvor

    2016-09-01

    Introduction: Automation in agricultural and construction machinery has generally been enhanced by intelligent control systems because of rising utility and efficiency, ease of use, profitability, and the ability to upgrade according to market demand. A broad variety of industrial products are now supplied with computerized control systems for earth-moving processes performed by construction and agricultural field vehicles such as graders, backhoes, tractors and scrapers. A height control machine used for measuring base thickness consists of a mechanical part and an electronic part. The mechanical part consists of a conveyor belt, main body, electric motor and inverters, while the electronic part consists of an ultrasonic wave transmitter and receiver sensor, electronic board, control set, and microcontroller. The main job of these control devices is topographic surveying and the cutting and filling of elevated and low spots, actions that depend fundamentally on the machine's ability to measure and control elevation and thickness. In this study, the machine was first tested and then experiments were conducted for data collection. System modeling with artificial neural networks (ANN) was carried out for measuring and controlling base height, with input vectors comprising sampling time, probe speed, conveyor speed, sound wave speed and speed sensor readings, and with the maximum and minimum probe readings as the output vector under various conditions. The results reveal the capability of this procedure for experimentally characterizing sensor behavior and improving field machine control systems. Inspection, calibration, response and diagnosis of the elevation control system in combination with machine function could also be evaluated with further development of this system. Materials and Methods: The design and manufacture of the planned apparatus were classified into three dissimilar mechanical and electronic module courses of

  19. Enhancing neural activity to drive respiratory plasticity following cervical spinal cord injury

    Science.gov (United States)

    Hormigo, Kristiina M.; Zholudeva, Lyandysha V.; Spruance, Victoria M.; Marchenko, Vitaliy; Cote, Marie-Pascale; Vinit, Stephane; Giszter, Simon; Bezdudnaya, Tatiana; Lane, Michael A.

    2016-01-01

    Cervical spinal cord injury (SCI) results in permanent life-altering sensorimotor deficits, among which impaired breathing is one of the most devastating and life-threatening. While clinical and experimental research has revealed that some spontaneous respiratory improvement (functional plasticity) can occur post-SCI, the extent of the recovery is limited and significant deficits persist. Thus, increasing effort is being made to develop therapies that harness and enhance this neuroplastic potential to optimize long-term recovery of breathing in injured individuals. One strategy with demonstrated therapeutic potential is the use of treatments that increase neural and muscular activity (e.g., locomotor training, neural and muscular stimulation) and promote plasticity. With a focus on respiratory function post-SCI, this review discusses advances in the use of neural interfacing strategies and activity-based treatments, and highlights some recent results from our own research. PMID:27582085

  20. Application of a hybrid model of neural networks and genetic algorithms to evaluate landslide susceptibility

    Science.gov (United States)

    Wang, H. B.; Li, J. W.; Zhou, B.; Yuan, Z. Q.; Chen, Y. P.

    2013-03-01

    In the last few decades, the development of Geographical Information Systems (GIS) technology has provided a method for the evaluation of landslide susceptibility and hazard. Slope units were found to be appropriate fundamental morphological elements for landslide susceptibility evaluation. Following DEM construction in a loess area susceptible to landslides, direct-reverse DEM technology was employed to generate 216 slope units in the study area. After a detailed investigation, the landslide inventory was mapped, in which 39 landslides, including paleo-landslides, old landslides and recent landslides, were present. Of the 216 slope units, 123 involved landslides. To analyze the mechanism of these landslides, six environmental factors were selected to evaluate landslide occurrence: slope angle, aspect, the height and shape of the slope, distance to river and human activities. These factors were extracted for each slope unit within the ArcGIS software. The spatial analysis demonstrates that most of the landslides are located on convex slopes at an elevation of 100-150 m, with aspects from 135° to 225° and slope angles from 40° to 60°. Landslide occurrence was then modeled from these environmental factors using an artificial neural network with back propagation, optimized by genetic algorithms. A dataset of 120 slope units was chosen for training the neural network model, i.e., 80 units with landslide presence and 40 units without landslide presence. The parameters of the genetic algorithms and neural networks were then set: a population size of 100, crossover probability of 0.65, mutation probability of 0.01, momentum factor of 0.60, learning rate of 0.7, maximum learning number of 10 000, and target error of 0.000001. After training on the datasets, landslide susceptibility was mapped for land-use planning and hazard mitigation. Comparing the susceptibility map with the landslide inventory, it was noted that the prediction accuracy of landslide occurrence
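
    The following compact sketch shows one way a genetic algorithm with the quoted population size, crossover, and mutation probabilities can optimize a small feed-forward classifier. It evolves the network weights directly on synthetic slope-unit data, whereas the study combines the GA with back-propagation training, so treat it only as an illustration of the hybrid idea.

```python
# GA-evolved feed-forward classifier on synthetic "slope unit" data.
import numpy as np

rng = np.random.default_rng(4)
n_units, n_feat, n_hidden = 120, 6, 5
X = rng.normal(size=(n_units, n_feat))                    # 6 environmental factors
y = (X[:, 0] + 0.8 * X[:, 2] - 0.5 * X[:, 4] + 0.3 * rng.normal(size=n_units)) > 0

n_w = n_feat * n_hidden + n_hidden                        # hidden + output weights
def predict(w, X):
    W1 = w[:n_feat * n_hidden].reshape(n_feat, n_hidden)
    w2 = w[n_feat * n_hidden:]
    return 1 / (1 + np.exp(-(np.tanh(X @ W1) @ w2)))

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)             # negative MSE

pop_size, p_cross, p_mut = 100, 0.65, 0.01                # values from the abstract
pop = rng.normal(size=(pop_size, n_w))
for gen in range(200):
    fit = np.array([fitness(w) for w in pop])
    pop = pop[np.argsort(fit)[::-1]][:pop_size // 2]      # keep the better half
    children = []
    while len(children) < pop_size // 2:
        a, b = pop[rng.integers(0, len(pop), 2)]
        child = np.where(rng.random(n_w) < 0.5, a, b) if rng.random() < p_cross else a.copy()
        mut = rng.random(n_w) < p_mut
        child[mut] += rng.normal(0, 0.5, mut.sum())
        children.append(child)
    pop = np.vstack([pop, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("training accuracy of GA-evolved network:",
      round(float(np.mean((predict(best, X) > 0.5) == y)), 3))
```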

  1. Protection of visual functions by human neural progenitors in a rat model of retinal disease.

    Directory of Open Access Journals (Sweden)

    David M Gamm

    2007-03-01

    A promising clinical application for stem and progenitor cell transplantation is in rescue therapy for degenerative diseases. This strategy seeks to preserve rather than restore host tissue function by taking advantage of unique properties often displayed by these versatile cells. In studies using different neurodegenerative disease models, transplanted human neural progenitor cells (hNPC) protected dying host neurons within both the brain and spinal cord. Based on these reports, we explored the potential of hNPC transplantation to rescue visual function in an animal model of retinal degeneration, the Royal College of Surgeons rat. Animals received unilateral subretinal injections of hNPC or medium alone at an age preceding major photoreceptor loss. Principal outcomes were quantified using electroretinography, visual acuity measurements and luminance threshold recordings from the superior colliculus. At 90-100 days postnatal, a time point when untreated rats exhibit little or no retinal or visual function, hNPC-treated eyes retained substantial retinal electrical activity and visual field with near-normal visual acuity. Functional efficacy was further enhanced when hNPC were genetically engineered to secrete glial cell line-derived neurotrophic factor. Histological examination at 150 days postnatal showed hNPC had formed a nearly continuous pigmented layer between the neural retina and retinal pigment epithelium, as well as distributed within the inner retina. A concomitant preservation of host cone photoreceptors was also observed. Wild-type and genetically modified human neural progenitor cells survive for prolonged periods, migrate extensively, secrete growth factors and rescue visual functions following subretinal transplantation in the Royal College of Surgeons rat. These results underscore the potential therapeutic utility of hNPC in the treatment of retinal degenerative diseases and suggest potential mechanisms underlying their effect in

  2. Electronic bypass of spinal lesions: activation of lower motor neurons directly driven by cortical neural signals.

    Science.gov (United States)

    Li, Yan; Alam, Monzurul; Guo, Shanshan; Ting, K H; He, Jufang

    2014-07-03

    Lower motor neurons in the spinal cord lose supraspinal inputs after complete spinal cord injury, leading to a loss of volitional control below the injury site. Extensive locomotor training with spinal cord stimulation can restore locomotion function after spinal cord injury in humans and animals. However, this locomotion is non-voluntary, meaning that subjects cannot control stimulation via their natural "intent". A recent study demonstrated an advanced system that triggers a stimulator using forelimb stepping electromyographic patterns to restore quadrupedal walking in rats with spinal cord transection. However, this indirect source of "intent" may mean that other non-stepping forelimb activities may false-trigger the spinal stimulator and thus produce unwanted hindlimb movements. We hypothesized that there are distinguishable neural activities in the primary motor cortex during treadmill walking, even after low-thoracic spinal transection in adult guinea pigs. We developed an electronic spinal bridge, called "Motolink", which detects these neural patterns and triggers a "spinal" stimulator for hindlimb movement. This hardware can be head-mounted or carried in a backpack. Neural data were processed in real-time and transmitted to a computer for analysis by an embedded processor. Off-line neural spike analysis was conducted to calculate and preset the spike threshold for "Motolink" hardware. We identified correlated activities of primary motor cortex neurons during treadmill walking of guinea pigs with spinal cord transection. These neural activities were used to predict the kinematic states of the animals. The appropriate selection of spike threshold value enabled the "Motolink" system to detect the neural "intent" of walking, which triggered electrical stimulation of the spinal cord and induced stepping-like hindlimb movements. We present a direct cortical "intent"-driven electronic spinal bridge to restore hindlimb locomotion after complete spinal cord injury.
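
    A simplified illustration of the trigger logic only (not the Motolink firmware): threshold crossings of a synthetic cortical signal are counted in a sliding window and stimulation is flagged when the rate suggests walking. The spike threshold, window length, and trigger rate are invented values.

```python
# Sliding-window spike-rate trigger on a synthetic recording.
import numpy as np

rng = np.random.default_rng(7)
fs, duration = 1000, 10                      # 1 kHz, 10 s of synthetic recording
t = np.arange(fs * duration) / fs
noise = rng.normal(0, 1, t.size)
walking = (t > 4) & (t < 8)                  # assume the animal walks from 4-8 s
signal = noise + walking * (rng.random(t.size) < 0.02) * 8.0   # added "spikes"

spike_threshold = 4.0                        # preset off-line, as in the abstract
crossings = (signal[1:] >= spike_threshold) & (signal[:-1] < spike_threshold)

window = fs // 2                             # 0.5 s sliding window
rate = np.convolve(crossings, np.ones(window), mode="same") / 0.5   # spikes/s
stimulate = rate > 10.0                      # trigger rule (illustrative value)
print("fraction of walking period with stimulation on:",
      round(float(stimulate[walking[1:]].mean()), 2))
```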

  3. Modeling by artificial neural networks. Application to the management of fuel in a nuclear power plant

    International Nuclear Information System (INIS)

    Gaudier, F.

    1999-01-01

    The determination of the family of optimum core loading patterns for Pressurized Water Reactors (PWRs) involves the assessment of core attributes, such as the power peaking factor, for thousands of candidate loading patterns. Despite the rapid advances in computer architecture, the direct calculation of these attributes by a neutronic code needs a lot of time and memory. With the goal of reducing the calculation time and optimizing the loading pattern, we propose in this thesis a method based on ideas from neural and statistical learning to provide a feed-forward neural network capable of calculating the power peaking factor for an eighth-core PWR. We use statistical methods to deduce judicious inputs (reducing the dimension of the input space) and neural methods to train the model (learning capabilities). Indeed, on the one hand, a principal component analysis allows us to characterize the fuel assemblies (the neural model inputs) more efficiently, and on the other hand, the introduction of a priori knowledge allows us to reduce the number of free parameters in the neural network. The model was built using a multilayer perceptron trained with the standard back-propagation algorithm. We introduced our neural network into the automatic optimization code FORMOSA, and on real EDF problems we demonstrated an important saving in time. Finally, we propose a hybrid method combining the best characteristics of the linear local approximator GPT (Generalized Perturbation Theory) and the artificial neural network. (author)
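
    The two ingredients named above, input-space reduction and a feed-forward network, can be sketched as follows on invented data; a real application would use assembly descriptors and peaking factors generated by a core simulator.

```python
# PCA compresses the assembly description, then an MLP maps the reduced inputs
# to a (fake) power peaking factor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
n_patterns = 3000
latent = rng.normal(size=(n_patterns, 10))               # "true" low-dim description
mixing = rng.normal(size=(10, 40))
X = latent @ mixing + 0.05 * rng.normal(size=(n_patterns, 40))  # 40 raw descriptors
y = 1.3 + 0.1 * np.tanh(latent[:, :3]).sum(axis=1)       # fake peaking factor

model = make_pipeline(PCA(n_components=10),              # reduce input dimension
                      MLPRegressor(hidden_layer_sizes=(25, 25), max_iter=4000,
                                   random_state=0))
model.fit(X[:2500], y[:2500])
print("held-out R^2:", round(model.score(X[2500:], y[2500:]), 3))
```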

  4. State-dependent, bidirectional modulation of neural network activity by endocannabinoids.

    Science.gov (United States)

    Piet, Richard; Garenne, André; Farrugia, Fanny; Le Masson, Gwendal; Marsicano, Giovanni; Chavis, Pascale; Manzoni, Olivier J

    2011-11-16

    The endocannabinoid (eCB) system and the cannabinoid CB1 receptor (CB1R) play key roles in the modulation of brain functions. Although the actions of eCBs and CB1Rs are well described at the synaptic level, little is known about their modulation of neural activity at the network level. Using microelectrode arrays, we have examined the role of CB1R activation in the modulation of the electrical activity of rat and mouse cortical neural networks in vitro. We find that exogenous activation of CB1Rs expressed on glutamatergic neurons decreases the spontaneous activity of cortical neural networks. Moreover, we observe that the net effect of the CB1R antagonist AM251 inversely correlates with the initial level of activity in the network: blocking CB1Rs increases network activity when basal network activity is low, whereas it depresses spontaneous activity when its initial level is high. Our results reveal a complex role of CB1Rs in shaping spontaneous network activity, and suggest that the outcome of endogenous neuromodulation on network function might be state dependent.

  5. Adolescent-specific patterns of behavior and neural activity during social reinforcement learning.

    Science.gov (United States)

    Jones, Rebecca M; Somerville, Leah H; Li, Jian; Ruberry, Erika J; Powers, Alisa; Mehta, Natasha; Dyke, Jonathan; Casey, B J

    2014-06-01

    Humans are sophisticated social beings. Social cues from others are exceptionally salient, particularly during adolescence. Understanding how adolescents interpret and learn from variable social signals can provide insight into the observed shift in social sensitivity during this period. The present study tested 120 participants between the ages of 8 and 25 years on a social reinforcement learning task where the probability of receiving positive social feedback was parametrically manipulated. Seventy-eight of these participants completed the task during fMRI scanning. Modeling trial-by-trial learning, children and adults showed higher positive learning rates than did adolescents, suggesting that adolescents demonstrated less differentiation in their reaction times for peers who provided more positive feedback. Forming expectations about receiving positive social reinforcement correlated with neural activity within the medial prefrontal cortex and ventral striatum across age. Adolescents, unlike children and adults, showed greater insular activity during positive prediction error learning and increased activity in the supplementary motor cortex and the putamen when receiving positive social feedback regardless of the expected outcome, suggesting that peer approval may motivate adolescents toward action. While different amounts of positive social reinforcement enhanced learning in children and adults, all positive social reinforcement equally motivated adolescents. Together, these findings indicate that sensitivity to peer approval during adolescence goes beyond simple reinforcement theory accounts and suggest possible explanations for how peers may motivate adolescent behavior.
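
    Trial-by-trial learning of the sort modeled in the study is commonly captured with a prediction-error update; the minimal Rescorla-Wagner-style sketch below is only illustrative and is not claimed to be the authors' exact model, and the learning rate and feedback probability are made up.

```python
# Minimal prediction-error learning model: the expectation of positive peer
# feedback is nudged toward each outcome by a fraction (the learning rate).
import numpy as np

rng = np.random.default_rng(9)
n_trials, p_positive, alpha = 60, 0.7, 0.3   # alpha = learning rate (illustrative)
expectation = 0.5
for t in range(n_trials):
    feedback = float(rng.random() < p_positive)     # 1 = positive peer feedback
    error = feedback - expectation                  # prediction error
    expectation += alpha * error                    # update toward the outcome

print("final expectation of positive feedback:", round(expectation, 2))
```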

  6. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network.

    Science.gov (United States)

    Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng

    2016-10-13

    Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for chemical fault diagnosis is presented in this study. The method uses a large amount of chemical sensor data and combines deep learning with an active learning criterion to address the difficulty of consecutive fault diagnosis. A DNN with a deep architecture, instead of a shallow one, is developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE), working through a layer-by-layer successive learning process. The features are fed to a top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, and in contrast to the available methods, we employ a novel active learning criterion tailored to chemical processes, a combination of the Best vs. Second Best (BvSB) criterion and a Lowest False Positive (LFP) criterion, for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow the model to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provide significant improvements in accuracy and false positive rate with less labeled chemical sensor data through further active learning, compared with existing methods.
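
    The sketch below covers only the active-learning loop with the Best-versus-Second-Best margin; the SDAE pre-training and the LFP criterion described above are omitted, a plain MLP stands in for the DNN, and the data are synthetic.

```python
# Active learning with the BvSB margin: each round, label the samples whose
# top-two class probabilities are closest, i.e. the most ambiguous readings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
rng = np.random.default_rng(0)
labeled = np.zeros(len(X), dtype=bool)
labeled[rng.choice(len(X), 100, replace=False)] = True    # small initial label set

for round_ in range(5):
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=600,
                        random_state=0).fit(X[labeled], y[labeled])
    acc = clf.score(X[~labeled], y[~labeled])              # accuracy on the pool
    proba = clf.predict_proba(X[~labeled])
    top2 = np.sort(proba, axis=1)[:, -2:]
    bvsb = top2[:, 1] - top2[:, 0]                         # small margin = ambiguous
    query = np.where(~labeled)[0][np.argsort(bvsb)[:50]]   # 50 most informative
    labeled[query] = True                                  # "expert" labels them
    print(f"round {round_}: pool accuracy {acc:.3f}, labeled set now {labeled.sum()}")
```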

  9. Development of neural network model of the multiparametric ...

    African Journals Online (AJOL)

    The best structure of the model was established for identifying a complex multiparameter object, using the example of operating statistics for a ball mill. It was a network with three hidden layers of 50, 35 and 25 neurons, with activation functions by layer: hyperbolic tangent, sigmoid function in 2 ...
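
    Because the record is truncated, the activation of the third hidden layer is unknown; the Keras sketch below simply mirrors the stated 50-35-25 structure with tanh and sigmoid activations, an assumed sigmoid for the third layer, and invented input data.

```python
# Three-hidden-layer network (50, 35, 25 neurons) for a generic multiparameter
# regression such as the ball-mill example; data and input width are made up.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(12,)),                    # 12 process inputs (assumed)
    keras.layers.Dense(50, activation="tanh"),
    keras.layers.Dense(35, activation="sigmoid"),
    keras.layers.Dense(25, activation="sigmoid"),       # assumption: record truncated here
    keras.layers.Dense(1),                              # predicted performance index
])
model.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(10)
X = rng.normal(size=(500, 12))
y = np.tanh(X[:, :3].sum(axis=1, keepdims=True))        # placeholder statistics
model.fit(X, y, epochs=20, batch_size=32, verbose=0)
print("training MSE:", float(model.evaluate(X, y, verbose=0)))
```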

  10. Comparative aspects of adult neural stem cell activity in vertebrates.

    Science.gov (United States)

    Grandel, Heiner; Brand, Michael

    2013-03-01

    At birth or after hatching from the egg, vertebrate brains still contain neural stem cells which reside in specialized niches. In some cases, these stem cells are deployed for further postnatal development of parts of the brain until the final structure is reached. In other cases, postnatal neurogenesis continues as constitutive neurogenesis into adulthood, leading to a net increase in the number of neurons with age. Yet, in other cases, stem cells fuel neuronal turnover. An example is protracted development of the cerebellar granular layer in mammals and birds, where neurogenesis continues for a few weeks postnatally until the granular layer has reached its definitive size and stem cells are used up. Cerebellar growth also provides an example of continued neurogenesis during adulthood in teleosts. Again, it is the granular layer that grows as neurogenesis continues and no definite adult cerebellar size is reached. Neuronal turnover is most clearly seen in the telencephalon of male canaries, where projection neurons are replaced in the nucleus high vocal centre each year before the start of a new mating season, perhaps reflecting a reconstruction of circuitry to change these birds' song repertoire. In this review, we describe these and other examples of adult neurogenesis in different vertebrate taxa. We also compare the structure of the stem cell niches to find common themes in their organization despite the different functions adult neurogenesis serves in different species. Finally, we report on regeneration of the zebrafish telencephalon after injury to highlight similarities and differences between constitutive neurogenesis and neuronal regeneration.

  11. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate

  12. Energy efficiency optimisation for distillation column using artificial neural network models

    International Nuclear Information System (INIS)

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort in model development and the large computation effort associated with mechanistic model computation. This issue can be addressed by using neural network models which can be quickly developed from process operation data. The computation time in neural network model evaluation is very short making them ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying products qualities constraints. Applications to binary systems of methanol-water and benzene-toluene separations culminate in a reduction of utility consumption of 8.2% and 28.2% respectively. Application to multi-component separation columns also demonstrate the effectiveness of the proposed method with a 32.4% improvement in the exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural network offers improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption by 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
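
    A bare-bones illustration of bootstrap aggregation of neural network models on synthetic data standing in for the simulated operating points; the aggregated prediction is what a model-based optimiser would then maximise subject to product-quality constraints. The feature names and noise levels are invented.

```python
# Bootstrap aggregated ("bagged") neural networks: several MLPs are trained on
# bootstrap resamples and their predictions averaged.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
n = 1500
X = rng.uniform(size=(n, 5))                            # e.g. reflux ratio, feed temp, ...
y = (0.6 + 0.2 * np.sin(2 * np.pi * X[:, 0]) - 0.1 * X[:, 1] * X[:, 2]
     + 0.02 * rng.normal(size=n))                       # fake exergy efficiency

X_tr, y_tr, X_te, y_te = X[:1200], y[:1200], X[1200:], y[1200:]
ensemble = []
for b in range(10):                                     # 10 bootstrap replicates
    idx = rng.integers(0, len(X_tr), len(X_tr))
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=b)
    ensemble.append(net.fit(X_tr[idx], y_tr[idx]))

pred = np.mean([net.predict(X_te) for net in ensemble], axis=0)
print("aggregated RMSE:", round(float(np.sqrt(np.mean((pred - y_te) ** 2))), 4))
```

    The spread of the individual network predictions also gives a rough reliability indicator for each operating point, which is one motivation for the aggregated form.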

  13. Embodied learning of a generative neural model for biological motion perception and inference.

    Science.gov (United States)

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map the visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full-body motion tracking recordings. The model implements an embodied, predictive inference approach. It