WorldWideScience

Sample records for neural network short-term

  1. Spatiotemporal discrimination in neural networks with short-term synaptic plasticity

    Science.gov (United States)

    Shlaer, Benjamin; Miller, Paul

    2015-03-01

    Cells in recurrently connected neural networks exhibit bistability, which allows stimulus information to persist in a circuit even after stimulus offset, i.e. short-term memory. However, such a system does not have enough hysteresis to encode temporal information about the stimuli. The biophysically described phenomenon of synaptic depression decreases synaptic transmission strengths due to increased presynaptic activity. This short-term reduction in synaptic strengths can destabilize attractor states in excitatory recurrent neural networks, causing the network to move along stimulus-dependent dynamical trajectories. Such a network can successfully separate amplitudes and durations of stimuli from the number of successive stimuli (Stimulus number, duration and intensity encoding in randomly connected attractor networks with synaptic depression. Front. Comput. Neurosci. 7:59), and so provides a strong candidate network for the encoding of spatiotemporal information. Here we explicitly demonstrate the capability of a recurrent neural network with short-term synaptic depression to discriminate between the temporal sequences in which spatial stimuli are presented.
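
    A minimal sketch of the synaptic-depression mechanism the abstract builds on, in the spirit of the widely used Tsodyks-Markram model (parameter values here are illustrative assumptions, not taken from the paper):

```python
import math

def depress(spike_times, U=0.5, tau_rec=0.8):
    """Effective synaptic efficacy for each spike in a train (times in s)."""
    x = 1.0                    # fraction of available synaptic resources
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            # resources recover exponentially toward 1 between spikes
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        efficacies.append(U * x)   # transmitted strength for this spike
        x -= U * x                 # a fraction U of the resources is used up
        last_t = t
    return efficacies

eff = depress([0.0, 0.05, 0.10, 0.15])   # a rapid burst of four spikes
# successive efficacies shrink: the synapse depresses under rapid firing
```

    Each spike consumes a fraction of the remaining resources, so rapid firing progressively weakens transmission; this activity-dependent weakening is what destabilizes the attractor states in the abstract.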

  2. A short-term neural network memory

    Energy Technology Data Exchange (ETDEWEB)

    Morris, R.J.T.; Wong, W.S.

    1988-12-01

    Neural network memories with storage prescriptions based on Hebb's rule are known to collapse as more words are stored. By requiring that the most recently stored word be remembered precisely, a new simple short-term neural network memory is obtained and its steady-state capacity analyzed and simulated. Comparisons are drawn with Hopfield's method, the delta method of Widrow and Hoff, and the revised marginalist model of Mezard, Nadal, and Toulouse.
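
    The Hebbian (outer-product) storage prescription the abstract starts from can be illustrated directly: while the load is low, a stored word is recalled cleanly. A pure-Python sketch with illustrative sizes, not the paper's construction:

```python
import random

def hebb_weights(patterns):
    """Outer-product (Hebb-rule) weight matrix for a list of +/-1 patterns."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, probe, steps=5):
    """Synchronous threshold updates starting from a probe pattern."""
    s = list(probe)
    n = len(s)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

random.seed(0)
n = 60
patterns = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(3)]
W = hebb_weights(patterns)
recovered = recall(W, patterns[0])
overlap = sum(a * b for a, b in zip(recovered, patterns[0])) / n
# with only a few stored words the overlap with the stored pattern stays high
```

    As more words are added, cross-talk between the outer products grows and recall degrades, which is the collapse that motivates the short-term memory proposed in the record.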

  3. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    A short-term load forecasting model based on quantum Elman neural networks is constructed in this paper. Quantum computation and the Elman feedback mechanism are integrated into the quantum Elman neural network: quantum computation effectively improves the approximation capability and the information-processing ability of the network, while the Elman structure provides not only feedforward connections but also a feedback connection between the hidden nodes and the context nodes. This feedback constitutes state feedback within the internal system and gives the network its dynamic memory. Phase-space reconstruction theory is the theoretical basis for constructing the forecasting model, and the training samples are formed by means of a K-nearest-neighbor approach. Simulation results show that the model based on quantum Elman neural networks outperforms models based on the quantum feedforward neural network, the conventional Elman neural network, and the conventional feedforward neural network, so the proposed model effectively improves prediction accuracy. This research lays a theoretical foundation for the practical engineering application of short-term load forecasting based on quantum Elman neural networks.
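
    The Elman feedback mechanism described above, context units holding the previous hidden state and feeding it back as input, can be sketched in a few lines (the weights are arbitrary illustrative constants, and this is the classical Elman cell, not the quantum variant from the paper):

```python
import math

def elman_step(x, context, W_in, W_ctx):
    """One Elman update: hidden state depends on the input AND the fed-back
    previous hidden state stored in the context units."""
    h = [math.tanh(W_in[i] * x + sum(W_ctx[i][j] * context[j]
                                     for j in range(len(context))))
         for i in range(len(W_in))]
    return h  # new hidden state; the caller copies it into the context units

W_in = [0.5, -0.3]                 # input-to-hidden weights (illustrative)
W_ctx = [[0.1, 0.2], [0.0, -0.1]]  # context-to-hidden weights (illustrative)
context = [0.0, 0.0]
outputs = []
for x in [1.0, 1.0, 1.0]:
    context = elman_step(x, context, W_in, W_ctx)
    outputs.append(context)
# identical inputs produce different hidden states because of the feedback
```

    The changing response to a constant input is exactly the "dynamic memory" the abstract attributes to the hidden-to-context feedback.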

  4. Short term and medium term power distribution load forecasting by neural networks

    International Nuclear Information System (INIS)

    Yalcinoz, T.; Eminoglu, U.

    2005-01-01

    Load forecasting is an important subject for power distribution systems and has been studied from different points of view. In general, load forecasts should be performed over a broad spectrum of time intervals, which can be classified into short-term, medium-term and long-term forecasts. Several research groups have proposed various techniques for short-term, medium-term or long-term load forecasting. This paper presents a neural network (NN) model for short-term peak load forecasting, short-term total load forecasting and medium-term monthly load forecasting in power distribution systems. The NN is used to learn the relationships among past, current and future temperatures and loads, and was trained to recognize the peak load of the day, the total load of the day and the monthly electricity consumption. The suitability of the proposed approach is illustrated through an application to real load shapes from the Turkish Electricity Distribution Corporation (TEDAS) in Nigde. The data represent the daily and monthly electricity consumption in Nigde, Turkey.

  5. An Artificial Neural Network Based Short-term Dynamic Prediction of Algae Bloom

    Directory of Open Access Journals (Sweden)

    Yao Junyang

    2014-06-01

    This paper proposes a method for short-term prediction of algae blooms based on an artificial neural network. First, principal component analysis is applied to water-environment factors in algae-bloom raceway ponds to extract the main factors that influence the formation of algae blooms. Then a short-term dynamic prediction model based on a neural network is built, with the current chlorophyll_a values as input and the chlorophyll_a values at the next time step as output. Simulation results show that the model can realize short-term prediction of algae blooms effectively.

  6. Short-term electricity prices forecasting in a competitive market: A neural network approach

    International Nuclear Information System (INIS)

    Catalao, J.P.S.; Mariano, S.J.P.S.; Mendes, V.M.F.; Ferreira, L.A.F.M.

    2007-01-01

    This paper proposes a neural network approach for forecasting short-term electricity prices. Almost until the end of the last century, electricity supply was considered a public service and any price forecasting which was undertaken tended to be over the longer term, concerning future fuel prices and technical improvements. Nowadays, short-term forecasts have become increasingly important since the rise of the competitive electricity markets. In this new competitive framework, short-term price forecasting is required by producers and consumers to derive their bidding strategies to the electricity market. Accurate forecasting tools are essential for producers to maximize their profits, avoiding losses caused by misjudgement of future price movements, and for consumers to maximize their utilities. A three-layered feedforward neural network, trained by the Levenberg-Marquardt algorithm, is used for forecasting next-week electricity prices. We evaluate the accuracy of the price forecasting attained with the proposed neural network approach, reporting the results from the electricity markets of mainland Spain and California. (author)

  7. Statistical downscaling of precipitation using long short-term memory recurrent neural networks

    Science.gov (United States)

    Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra

    2017-11-01

    Hydrological impacts of global climate change on regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation, temperature, etc. In this study, we propose a new statistical downscaling model based on Recurrent Neural Network with Long Short-Term Memory which captures the spatio-temporal dependencies in local rainfall. The previous studies have used several other methods such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets—one on precipitation in Mahanadi basin in India and the second on precipitation in Campbell River basin in Canada. Our autoencoder coupled long short-term memory recurrent neural network model performs the best compared to other existing methods on both the datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.

  8. A Neural Network Model of the Visual Short-Term Memory

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Hansen, Lars Kai

    2009-01-01

    In this paper a neural network model of Visual Short-Term Memory (VSTM) is presented. The model links closely with Bundesen’s (1990) well-established mathematical theory of visual attention. We evaluate the model’s ability to fit experimental data from a classical whole and partial report study...

  9. Short-term PV/T module temperature prediction based on PCA-RBF neural network

    Science.gov (United States)

    Li, Jiyong; Zhao, Zhendong; Li, Yisheng; Xiao, Jing; Tang, Yunfeng

    2018-02-01

    To address the non-linearity and large inertia of temperature control in PV/T systems, short-term temperature prediction of the PV/T module is proposed, so that the PV/T system controller can act ahead of time according to the short-term forecast and thereby optimize its control performance. Based on an analysis of the correlation between PV/T module temperature, meteorological factors, and the temperature of adjacent time series, the principal component analysis (PCA) method is used to pre-process the original input sample data. Combined with RBF neural network theory, the simulation results show that the PCA step gives the network model higher prediction accuracy and stronger generalization performance than an RBF neural network without principal component extraction.

  10. Analysis of recurrent neural networks for short-term energy load forecasting

    Science.gov (United States)

    Di Persio, Luca; Honchar, Oleksandr

    2017-11-01

    Short-term forecasts have recently gained increasing attention because of the rise of competitive electricity markets. In fact, short-term forecasts of possible future loads are fundamental to building efficient energy management strategies and to avoiding energy wastage. Such challenges are difficult to tackle from both a theoretical and an applied point of view: the tasks require sophisticated methods to manage multidimensional time series related to stochastic phenomena which are often highly interconnected. In the present work we first review novel approaches to energy load forecasting based on recurrent neural networks, focusing on long short-term memory (LSTM) architectures. This type of artificial neural network has been widely applied to problems dealing with sequential data, e.g. in socio-economic settings, text recognition, and video signals, consistently showing its effectiveness in modeling complex temporal data. Moreover, we consider different novel variations of basic LSTMs, such as the sequence-to-sequence approach and bidirectional LSTMs, aiming to provide effective models for energy load data. Last but not least, we test all the described algorithms on real energy load data, showing not only that deep recurrent networks can be successfully applied to energy load forecasting, but also that this approach can be extended to other problems based on time-series prediction.
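
    The gating equations behind the LSTM architectures surveyed here can be sketched with a single one-unit cell. All weights below are illustrative scalars (a real LSTM has separate trained weight matrices per gate), not values from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, w=0.5, u=0.5, b=0.0):
    """One step of a single-unit LSTM cell. For brevity the same
    illustrative weights are shared by all gates."""
    i = sigmoid(w * x + u * h + b)    # input gate
    f = sigmoid(w * x + u * h + b)    # forget gate
    o = sigmoid(w * x + u * h + b)    # output gate
    g = math.tanh(w * x + u * h + b)  # candidate cell value
    c = f * c + i * g                 # cell state: gated memory update
    h = o * math.tanh(c)              # hidden state exposed to the next layer
    return h, c

h, c = 0.0, 0.0
for x in [1.0, 1.0, 1.0]:
    h, c = lstm_step(x, h, c)
# the cell state c accumulates across steps, giving the network its memory
```

    The additive update of the cell state `c` is what lets gradients flow across long sequences, which is why these architectures suit the load time series discussed above.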

  11. Short term memory in echo state networks

    OpenAIRE

    Jaeger, H.

    2001-01-01

    The report investigates the short-term memory capacity of echo state recurrent neural networks. A quantitative measure MC of short-term memory capacity is introduced. The main result is that MC ≤ N for networks with linear output units and i.i.d. input, where N is network size. Conditions under which these maximal memory capacities are realized are described. Several theoretical and practical examples demonstrate how the short-term memory capacities of echo state networks can be exploited for...
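
    The bound MC ≤ N can be made concrete with the reservoir that attains it: a linear delay line of N units whose state is literally the last N inputs. This construction is a standard illustration of the bound, not code from the report:

```python
N = 5  # reservoir size; the network can remember at most N past inputs

def delay_line_step(state, u):
    """Linear delay-line reservoir: each unit copies its left neighbour,
    and unit 0 receives the current input."""
    return [u] + state[:-1]

state = [0.0] * N
inputs = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
for u in inputs:
    state = delay_line_step(state, u)
# the state now holds exactly the N most recent inputs, newest first
```

    A linear readout can therefore reconstruct the input delayed by up to N steps perfectly, saturating the memory capacity; older inputs have been pushed off the end of the line and are irrecoverable.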

  12. Forecasting short-term data center network traffic load with convolutional neural networks

    Science.gov (United States)

    Mozo, Alberto; Ordozgoiti, Bruno; Gómez-Canaval, Sandra

    2018-01-01

    Efficient resource management in data centers is of central importance to content service providers as 90 percent of the network traffic is expected to go through them in the coming years. In this context we propose the use of convolutional neural networks (CNNs) to forecast short-term changes in the amount of traffic crossing a data center network. This value is an indicator of virtual machine activity and can be utilized to shape the data center infrastructure accordingly. The behaviour of network traffic at the seconds scale is highly chaotic and therefore traditional time-series-analysis approaches such as ARIMA fail to obtain accurate forecasts. We show that our convolutional neural network approach can exploit the non-linear regularities of network traffic, providing significant improvements with respect to the mean absolute and standard deviation of the data, and outperforming ARIMA by an increasingly significant margin as the forecasting granularity is above the 16-second resolution. In order to increase the accuracy of the forecasting model, we exploit the architecture of the CNNs using multiresolution input distributed among separate channels of the first convolutional layer. We validate our approach with an extensive set of experiments using a data set collected at the core network of an Internet Service Provider over a period of 5 months, totalling 70 days of traffic at the one-second resolution. PMID:29408936

  14. Multi-Temporal Land Cover Classification with Long Short-Term Memory Neural Networks

    Science.gov (United States)

    Rußwurm, M.; Körner, M.

    2017-05-01

    Land cover classification (LCC) is a central and wide field of research in earth observation and has already put forth a variety of classification techniques. Many approaches are based on classification techniques considering observation at certain points in time. However, some land cover classes, such as crops, change their spectral characteristics due to environmental influences and can thus not be monitored effectively with classical mono-temporal approaches. Nevertheless, these temporal observations should be utilized to benefit the classification process. After extensive research has been conducted on modeling temporal dynamics by spectro-temporal profiles using vegetation indices, we propose a deep learning approach to utilize these temporal characteristics for classification tasks. In this work, we show how long short-term memory (LSTM) neural networks can be employed for crop identification purposes with SENTINEL 2A observations from large study areas and label information provided by local authorities. We compare these temporal neural network models, i.e., LSTM and recurrent neural network (RNN), with a classical non-temporal convolutional neural network (CNN) model and an additional support vector machine (SVM) baseline. With our rather straightforward LSTM variant, we exceeded state-of-the-art classification performance, thus opening promising potential for further research.

  16. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

    Science.gov (United States)

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made those former results hardly reproducible. Further, we extend those previous experiments to model unseen languages (out-of-set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), showing that with as little as 0.5s an accuracy of over 50% can be achieved.

  17. Folk music style modelling by recurrent neural networks with long short term memory units

    OpenAIRE

    Sturm, Bob; Santos, João Felipe; Korshunova, Iryna

    2015-01-01

    We demonstrate two generative models created by training a recurrent neural network (RNN) with three hidden layers of long short-term memory (LSTM) units. This extends past work in numerous directions, including training deeper models with nearly 24,000 high-level transcriptions of folk tunes. We discuss our on-going work.

  18. Critical neural networks with short- and long-term plasticity

    Science.gov (United States)

    Michiels van Kessenich, L.; Luković, M.; de Arcangelis, L.; Herrmann, H. J.

    2018-03-01

    In recent years, self-organized critical neuronal models have provided insights regarding the origin of the experimentally observed avalanching behavior of neuronal systems. It has been shown that dynamical synapses, a form of short-term plasticity, can cause critical neuronal dynamics, whereas long-term plasticity, such as Hebbian or activity-dependent plasticity, plays a crucial role in shaping the network structure and endowing neural systems with learning abilities. In this work we provide a model which combines both plasticity mechanisms, acting on two different time scales. The measured avalanche statistics are compatible with experimental results for both the avalanche size and duration distributions with biologically observed percentages of inhibitory neurons. The time series of neuronal activity exhibits temporal bursts leading to 1/f decay in the power spectrum. The presence of long-term plasticity gives the system the ability to learn binary rules such as XOR, providing the foundation for future research on more complicated tasks such as pattern recognition.

  19. Short-Term Forecasting of Electric Loads Using Nonlinear Autoregressive Artificial Neural Networks with Exogenous Vector Inputs

    Directory of Open Access Journals (Sweden)

    Jaime Buitrago

    2017-01-01

    Short-term load forecasting is crucial for the operations planning of an electrical grid. Forecasting the next 24 h of electrical load in a grid allows operators to plan and optimize their resources. The purpose of this study is to develop a more accurate short-term load forecasting method utilizing non-linear autoregressive artificial neural networks (ANN) with exogenous multi-variable input (NARX). The proposed implementation of the network is new: the neural network is trained in open loop using actual load and weather data, and then the network is placed in closed loop to generate a forecast using the predicted load as the feedback input. Unlike existing short-term load forecasting methods using ANNs, the proposed method uses its own output as the input in order to improve the accuracy, thus effectively implementing a feedback loop for the load and making it less dependent on external data. Using the proposed framework, mean absolute percent errors in the forecast on the order of 1% have been achieved, which is a 30% improvement on the average error obtained with feedforward ANNs, ARMAX and state-space methods, and can result in large savings by avoiding commissioning of unnecessary power plants. The New England electrical load data are used to train and validate the forecast prediction.
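
    The open-loop/closed-loop scheme described above can be sketched with a stand-in one-step model. The "model" here is a simple two-point average, a hypothetical placeholder for the trained NARX network, and the load values are illustrative:

```python
def predict_next(recent):
    """Hypothetical one-step model: average of the last two observations.
    In the paper this role is played by the trained NARX network."""
    return (recent[-1] + recent[-2]) / 2

history = [100.0, 104.0, 102.0, 106.0]  # actual loads (open-loop inputs)

forecast = []
window = history[:]
for _ in range(3):          # closed loop: feed each prediction back in
    y = predict_next(window)
    forecast.append(y)
    window.append(y)
# the forecast extends beyond the data using only its own previous outputs
```

    Once the actual data run out, every new prediction is computed from earlier predictions, which is exactly the feedback loop for the load that the abstract describes.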

  20. Continuous Timescale Long-Short Term Memory Neural Network for Human Intent Understanding

    Directory of Open Access Journals (Sweden)

    Zhibin Yu

    2017-08-01

    Understanding human intention by observing a series of human actions has been a challenging task. In order to do so, we need to analyze longer sequences of human actions related to intentions and extract the context from the dynamic features. The multiple timescales recurrent neural network (MTRNN) model, which is believed to be a kind of solution, is a useful tool for recording and regenerating a continuous signal for dynamic tasks. However, the conventional MTRNN suffers from the vanishing gradient problem, which renders it unusable for longer sequence understanding. To address this problem, we propose a new model named Continuous Timescale Long-Short Term Memory (CTLSTM), in which we incorporate the multiple timescales concept into the Long-Short Term Memory (LSTM) recurrent neural network (RNN), which addresses the vanishing gradient problem. We design an additional recurrent connection in the LSTM cell outputs to produce a time delay in order to capture the slow context. Our experiments show that the proposed model exhibits better context-modeling ability and captures the dynamic features on multiple large-dataset classification tasks. The results illustrate that the multiple timescales concept enhances the ability of our model to handle longer sequences related to human intentions, proving it more suitable for complex tasks such as intention recognition.

  21. A novel analytical characterization for short-term plasticity parameters in spiking neural networks.

    Science.gov (United States)

    O'Brien, Michael J; Thibeault, Corey M; Srinivasa, Narayan

    2014-01-01

    Short-term plasticity (STP) is a phenomenon that widely occurs in the neocortex with implications for learning and memory. Based on a widely used STP model, we develop an analytical characterization of the STP parameter space to determine the nature of each synapse (facilitating, depressing, or both) in a spiking neural network based on presynaptic firing rate and the corresponding STP parameters. We demonstrate consistency with previous work by leveraging the power of our characterization to replicate the functional volumes that are integral for the previous network stabilization results. We then use our characterization to predict the precise transitional point from the facilitating regime to the depressing regime in a simulated synapse, suggesting in vitro experiments to verify the underlying STP model. We conclude the work by integrating our characterization into a framework for finding suitable STP parameters for self-sustaining random, asynchronous activity in a prescribed recurrent spiking neural network. The systematic process resulting from our analytical characterization improves the success rate of finding the requisite parameters for such networks by three orders of magnitude over a random search.
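
    The facilitating-versus-depressing characterization rests on the fixed points of the STP variables under steady presynaptic firing. A sketch using the standard Tsodyks-Markram update rules; the parameter values are illustrative assumptions, not the paper's:

```python
import math

def steady_state(rate, U, tau_f, tau_d):
    """Steady per-spike efficacy u*x* for a regular spike train at `rate` Hz,
    from the fixed points of the Tsodyks-Markram facilitation (u) and
    resource (x) updates."""
    dt = 1.0 / rate
    ef, ed = math.exp(-dt / tau_f), math.exp(-dt / tau_d)
    u = U / (1.0 - (1.0 - U) * ef)           # fixed point of facilitation u
    x = (1.0 - ed) / (1.0 - (1.0 - u) * ed)  # fixed point of resources x
    return u * x

# low release probability with slow facilitation: efficacy grows with rate
fac_slow = steady_state(5.0, U=0.1, tau_f=1.0, tau_d=0.02)
fac_fast = steady_state(40.0, U=0.1, tau_f=1.0, tau_d=0.02)

# high release probability with slow recovery: efficacy falls with rate
dep_slow = steady_state(5.0, U=0.8, tau_f=0.1, tau_d=0.5)
dep_fast = steady_state(40.0, U=0.8, tau_f=0.1, tau_d=0.5)
```

    Whether u*x* rises or falls with the presynaptic rate is what classifies a synapse as facilitating or depressing, the distinction the analytical characterization maps out over the STP parameter space.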

  22. Improving protein disorder prediction by deep bidirectional long short-term memory recurrent neural networks.

    Science.gov (United States)

    Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-03-01

    Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenge in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have applied deep bidirectional LSTM recurrent neural networks to the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, has steadily improved over a similar method using a traditional, window-based neural network (SPINE-D) in all datasets tested, without separate training on short and long disordered regions. Independent tests on four other datasets, including the datasets from the critical assessment of structure prediction (CASP) techniques and >10,000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTM with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-Disorder is available as a web server and as a standalone program at http://sparks-lab.org/server/SPOT-disorder/index.php

  23. Sensitivity Analysis of Wavelet Neural Network Model for Short-Term Traffic Volume Prediction

    Directory of Open Access Journals (Sweden)

    Jinxing Shen

    2013-01-01

    In order to achieve a more accurate and robust traffic-volume prediction model, the sensitivity of the wavelet neural network model (WNNM) is analyzed in this study. Based on real loop-detector data provided by the traffic police detachment of Maanshan, the WNNM is evaluated with different numbers of input neurons, different numbers of hidden neurons, and traffic volumes aggregated over different time intervals. The test results show that the performance of the WNNM depends heavily on the network parameters and on the time interval of the traffic volume. The WNNM with 4 input neurons and 6 hidden neurons is the optimal predictor in terms of accuracy, stability, and adaptability, and markedly better predictions are achieved when the time interval of the traffic volume is 15 minutes. In addition, the optimized WNNM is compared with the widely used back-propagation neural network (BPNN). The comparison indicates that the WNNM produces much lower values of MAE, MAPE, and VAPE than the BPNN, which shows that the WNNM performs better on short-term traffic-volume prediction.
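
    The error measures named in the comparison (MAE, MAPE, VAPE) are easy to state directly. VAPE is taken here as the variance of the absolute percentage errors, a common reading but an assumption of this sketch; the data values are illustrative:

```python
def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean absolute percentage error, in percent (actual values nonzero)."""
    return sum(abs(a - p) / a for a, p in zip(actual, pred)) / len(actual) * 100

def vape(actual, pred):
    """Variance of the absolute percentage errors (one common definition)."""
    apes = [abs(a - p) / a * 100 for a, p in zip(actual, pred)]
    mean = sum(apes) / len(apes)
    return sum((e - mean) ** 2 for e in apes) / len(apes)

actual = [100.0, 200.0, 400.0]  # illustrative traffic volumes
pred = [110.0, 190.0, 400.0]
# MAE = (10 + 10 + 0) / 3; MAPE = (10% + 5% + 0%) / 3 = 5%
```

    MAE is scale-dependent while MAPE and VAPE are relative, which is why the paper reports all three when comparing the WNNM against the BPNN.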

  24. Short-term wind power forecasting in Portugal by neural networks and wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Catalao, J.P.S. [Department of Electromechanical Engineering, University of Beira Interior, R. Fonte do Lameiro, 6201-001 Covilha (Portugal); Center for Innovation in Electrical and Energy Engineering, Instituto Superior Tecnico, Technical University of Lisbon, Av. Rovisco Pais, 1049-001 Lisbon (Portugal); Pousinho, H.M.I. [Department of Electromechanical Engineering, University of Beira Interior, R. Fonte do Lameiro, 6201-001 Covilha (Portugal); Mendes, V.M.F. [Department of Electrical Engineering and Automation, Instituto Superior de Engenharia de Lisboa, R. Conselheiro Emidio Navarro, 1950-062 Lisbon (Portugal)

    2011-04-15

    This paper proposes artificial neural networks in combination with wavelet transform for short-term wind power forecasting in Portugal. The increased integration of wind power into the electric grid, as nowadays occurs in Portugal, poses new challenges due to its intermittency and volatility. Hence, good forecasting tools play a key role in tackling these challenges. Results from a real-world case study are presented. A comparison is carried out, taking into account the results obtained with other approaches. Finally, conclusions are duly drawn. (author)
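
    The wavelet pre-processing idea can be sketched with one decomposition step: the series is split into a smooth approximation and a detail component, which can then be forecast separately. The abstract does not specify the wavelet; Haar is used here as the simplest illustrative choice:

```python
def haar_step(signal):
    """One Haar decomposition step (signal length assumed even)."""
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) / 2 for a, b in pairs]   # low-frequency trend
    detail = [(a - b) / 2 for a, b in pairs]   # high-frequency fluctuation
    return approx, detail

wind = [4.0, 6.0, 5.0, 9.0]        # illustrative wind power samples
approx, detail = haar_step(wind)
# each original pair is recovered exactly as (approx + detail, approx - detail)
```

    Separating the volatile detail from the smoother trend is what makes the combination with neural networks attractive for intermittent wind power.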

  25. Short-Term Wind Speed Forecasting Using Decomposition-Based Neural Networks Combining Abnormal Detection Method

    Directory of Open Access Journals (Sweden)

    Xuejun Chen

    2014-01-01

    As one of the most promising renewable resources for electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind fluctuates with strong variation, it is quite difficult to describe the characteristics of wind or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for regulatory actions and short-term load-dispatching planning during the operation of wind farms, is currently regarded as one of the most difficult problems to be solved. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H) weighted average smoothing method, the ensemble empirical mode decomposition (EEMD) algorithm, and nonlinear autoregressive (NAR) neural networks. The chosen datasets are ten-minute wind speed observations, comprising twelve samples, and our simulations indicate that the proposed methods perform much better than traditional ones when addressing short-term wind speed forecasting problems.
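
    The first stage, five-three-Hanning (53H) smoothing, is a running median of 5, then a running median of 3, then a Hanning (1/4, 1/2, 1/4) weighted pass. A sketch with simplified endpoint handling (endpoints are copied through, which is an assumption of this sketch, and the data are illustrative):

```python
from statistics import median

def running_median(x, k):
    """Running median of window size k (windows truncated at the ends)."""
    h = k // 2
    return [median(x[max(0, i - h):i + h + 1]) for i in range(len(x))]

def hanning(x):
    """Hanning (1/4, 1/2, 1/4) weighted average; endpoints copied through."""
    out = x[:]
    for i in range(1, len(x) - 1):
        out[i] = 0.25 * x[i - 1] + 0.5 * x[i] + 0.25 * x[i + 1]
    return out

def smooth_53h(x):
    return hanning(running_median(running_median(x, 5), 3))

raw = [5.0, 5.2, 11.0, 5.1, 5.3, 5.2, 5.4]   # one abnormal spike
smoothed = smooth_53h(raw)
# the medians remove the spike before the Hanning pass smooths the rest
```

    The median stages discard isolated abnormal observations, which is why 53H pairs naturally with the abnormal-detection role described in the title.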

  6. A New Neural Network Approach to Short Term Load Forecasting of Electrical Power Systems

    Directory of Open Access Journals (Sweden)

    Farshid Keynia

    2011-03-01

Full Text Available Short-term load forecast (STLF) is an important operational function in both regulated power systems and deregulated open electricity markets. However, STLF is not easy to handle due to the nonlinear and random-like behaviors of system loads, weather conditions, and social and economic environment variations. Despite the research work performed in the area, more accurate and robust STLF methods are still needed due to the importance and complexity of STLF. In this paper, a new neural network approach for STLF is proposed. The proposed neural network has a novel learning algorithm based on a new modified harmony search technique. This learning algorithm can widely search the solution space in various directions, and it can also avoid the overfitting problem, trapping in local minima and dead bands. Based on this learning algorithm, the suggested neural network can efficiently extract the input/output mapping function of the forecast process leading to high STLF accuracy. The proposed approach is tested on two practical power systems and the results obtained are compared with the results of several other recently published STLF methods. These comparisons confirm the validity of the developed approach.
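
    The core of a harmony search learning algorithm can be sketched generically; the following is the standard algorithm on a toy objective, not the authors' modified variant, and all parameter values (hms, hmcr, par, bw) are illustrative:

    ```python
    import random

    def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                       bw=0.05, iters=2000, seed=0):
        """Minimal harmony search: keep a memory of candidate solutions,
        improvise new ones by mixing memory values, random pitch adjustment,
        and random re-initialization, then replace the worst member."""
        rng = random.Random(seed)
        lo, hi = bounds
        memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        scores = [objective(m) for m in memory]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if rng.random() < hmcr:                  # draw from memory
                    v = memory[rng.randrange(hms)][d]
                    if rng.random() < par:               # pitch adjustment
                        v += rng.uniform(-bw, bw)
                else:                                    # random exploration
                    v = rng.uniform(lo, hi)
                new.append(min(hi, max(lo, v)))
            s = objective(new)
            worst = max(range(hms), key=scores.__getitem__)
            if s < scores[worst]:                        # replace worst harmony
                memory[worst], scores[worst] = new, s
        best = min(range(hms), key=scores.__getitem__)
        return memory[best], scores[best]

    # Toy stand-in for a network training loss: a 3-D sphere function.
    sol, val = harmony_search(lambda w: sum(x * x for x in w), dim=3, bounds=(-1, 1))
    ```

    In the paper's setting the objective would be the network's forecast error as a function of its weights; here a simple quadratic stands in for it.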

  7. Intelligent and robust prediction of short term wind power using genetic programming based ensemble of neural networks

    International Nuclear Information System (INIS)

    Zameer, Aneela; Arshad, Junaid; Khan, Asifullah; Raja, Muhammad Asif Zahoor

    2017-01-01

Highlights: • Genetic programming based ensemble of neural networks is employed for short term wind power prediction. • Proposed predictor shows resilience against abrupt changes in weather. • Genetic programming evolves nonlinear mapping between meteorological measures and wind-power. • Proposed approach gives mathematical expressions of wind power to its independent variables. • Proposed model shows relatively accurate and steady wind-power prediction performance. - Abstract: The inherent instability of wind power production leads to critical problems for smooth power generation from wind turbines, which then requires an accurate forecast of wind power. In this study, an effective short term wind power prediction methodology is presented, which uses an intelligent ensemble regressor that comprises Artificial Neural Networks and Genetic Programming. In contrast to existing series based combination of wind power predictors, whereby the error or variation in the leading predictor is propagated down the stream to the next predictors, the proposed intelligent ensemble predictor avoids this shortcoming by introducing Genetic Programming based semi-stochastic combination of neural networks. It is observed that the decision of the individual base regressors may vary due to the frequent and inherent fluctuations in the atmospheric conditions and thus meteorological properties. The novelty of the reported work lies in creating an ensemble to generate an intelligent, collective and robust decision space and thereby avoiding large errors due to the sensitivity of the individual wind predictors. The proposed ensemble based regressor, Genetic Programming based ensemble of Artificial Neural Networks, has been implemented and tested on data taken from five different wind farms located in Europe. Obtained numerical results of the proposed model in terms of various error measures are compared with the recent artificial intelligence based strategies to demonstrate the

  8. Short term wind speed forecasting in La Venta, Oaxaca, Mexico, using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

Cadenas, Erasmo [Facultad de Ingenieria Mecanica, Universidad Michoacana de San Nicolas de Hidalgo, Santiago Tapia No. 403, Centro, 5000, Mor., Mich. (Mexico); Rivera, Wilfrido [Centro de Investigacion en Energia, Universidad Nacional Autonoma de Mexico, Apartado Postal 34, Temixco 62580, Morelos (Mexico)

    2009-01-15

In this paper short term wind speed forecasting for the region of La Venta, Oaxaca, Mexico, applying the technique of artificial neural networks (ANN) to the hourly time series representative of the site, is presented. The data were collected by the Comision Federal de Electricidad (CFE) during 7 years through a network of measurement stations located in the place of interest. Diverse configurations of ANN were generated and compared through error measures, guaranteeing the performance and accuracy of the chosen models. First a model with three layers and seven neurons was chosen, according to the recommendations of diverse authors; nevertheless, the results were not sufficiently satisfactory, so three other models were developed, consisting of three layers and six neurons, two layers and four neurons, and two layers and three neurons. The simplest model of two layers, with two input neurons and one output neuron, was the best for short term wind speed forecasting, with mean squared error and mean absolute error values of 0.0016 and 0.0399, respectively. The developed model showed accuracy good enough to be used by the Electric Utility Control Centre in Oaxaca for the energy supply. (author)
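
    The winning configuration described here, two input neurons feeding one output neuron, amounts to a linear autoregression on two lagged wind speeds. A self-contained sketch on synthetic data (the real CFE series is not reproduced here; the series, learning rate and iteration count are illustrative) could look like:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic hourly wind speed series with a daily cycle (stand-in data).
    t = np.arange(600)
    wind = 8 + 2 * np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(600)

    # Lag features: predict w[t] from w[t-1] and w[t-2] (two inputs, one output).
    X = np.column_stack([wind[1:-1], wind[:-2]])
    y = wind[2:]

    # Two-layer network (input + output, no hidden units) trained by
    # batch gradient descent on the mean squared error.
    W = np.zeros(2)
    b = 0.0
    for _ in range(3000):
        pred = X @ W + b
        err = pred - y
        W -= 0.005 * (X.T @ err) / len(y)
        b -= 0.005 * err.mean()

    mse = float(np.mean((X @ W + b - y) ** 2))
    ```

    Even this minimal model beats a constant-mean forecast on a smoothly varying series, which is consistent with the paper's finding that the simplest architecture sufficed.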

  9. Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations

    Science.gov (United States)

    ElSaid, AbdElRahman Ahmed

This thesis examines building viable Recurrent Neural Networks (RNN) using Long Short Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction over analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time series data which can further improve predictions of future vibration values. LSTM RNNs were used over traditional RNNs, as those suffer from vanishing/exploding gradients when trained with back propagation. The study managed to predict vibration values for 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51% and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems so that suitable actions can be taken before the occurrence of excess vibration to avoid unfavorable situations during flight.
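
    The LSTM "memory" mentioned here comes from a gated cell state. A minimal single-cell forward pass in NumPy, with random weights and purely illustrative of the gating (not the thesis's trained network), is:

    ```python
    import numpy as np

    def lstm_step(x, h, c, W, U, b):
        """One LSTM time step. Gates are stacked in W (input weights),
        U (recurrent weights), b (bias), in the order:
        input gate, forget gate, cell candidate, output gate."""
        n = h.shape[0]
        z = W @ x + U @ h + b
        i = 1 / (1 + np.exp(-z[:n]))          # input gate
        f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
        g = np.tanh(z[2*n:3*n])               # candidate cell state
        o = 1 / (1 + np.exp(-z[3*n:]))        # output gate
        c_new = f * c + i * g                 # cell state carries long-term info
        h_new = o * np.tanh(c_new)            # hidden state exposed to the next layer
        return h_new, c_new

    rng = np.random.default_rng(1)
    n_in, n_hid = 3, 4
    W = 0.1 * rng.standard_normal((4 * n_hid, n_in))
    U = 0.1 * rng.standard_normal((4 * n_hid, n_hid))
    b = np.zeros(4 * n_hid)

    h = np.zeros(n_hid)
    c = np.zeros(n_hid)
    for x in rng.standard_normal((10, n_in)):   # run a 10-step input sequence
        h, c = lstm_step(x, h, c, W, U, b)
    ```

    The multiplicative forget gate is what lets gradients flow across many time steps, which is why LSTMs avoid the vanishing/exploding-gradient problem noted in the abstract.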

  10. Applying long short-term memory recurrent neural networks to intrusion detection

    Directory of Open Access Journals (Sweden)

    Ralf C. Staudemeyer

    2015-07-01

Full Text Available We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks with the training data provided by the DARPA / KDD Cup ’99 challenge. To identify suitable LSTM-RNN network parameters and structure we experimented with various network topologies. We found networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We applied forget gates and shortcut connections respectively. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets respectively. We evaluated different feature sets for the detection of all attacks within one network and also to train networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results of strong static classifiers. With 93.82% accuracy and 22.13 cost, LSTM outperforms the winning entries of the KDD Cup ’99 challenge by far. This is due to the fact that LSTM learns to look back in time and correlate consecutive connection records. For the first time ever, we have demonstrated the usefulness of LSTM networks to intrusion detection.

  11. Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition

    OpenAIRE

    Li, Xiangang; Wu, Xihong

    2014-01-01

    Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, in this research, deep extensions on LSTM are investigated considering that deep hierarchical model has turned out to be more efficient than a shallow one. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed an...

  12. Short term load forecasting using neuro-fuzzy networks

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, M.; Hassan, A. [South Dakota School of Mines and Technology, Rapid City, SD (United States); Martinez, D. [Black Hills Power and Light, Rapid City, SD (United States)

    2005-07-01

Details of a neuro-fuzzy network-based short term load forecasting system for power utilities were presented. The fuzzy logic controller was used to fuzzify inputs representing historical temperature and load curves. The fuzzified inputs were then used to develop the fuzzy rules matrix. Output membership function values were determined by evaluating the fuzzified inputs with the fuzzy rules. Output membership function values were used as inputs for the neural network portion of the system. The training process used a back propagation gradient descent algorithm to adjust the weight values of the neural network in order to reduce the error between the neural network output and the desired output. The neural network was then used to predict future load values. Sample data were taken from a local power company's daily load curve to validate the system. A 10 per cent forecast error was introduced in the temperature values to determine the effect on load prediction. Results of the study suggest that the combined use of fuzzy logic and neural networks provides greater accuracy than studies where either approach is used alone. 6 refs., 6 figs.
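
    The fuzzification step described can be sketched with triangular membership functions; the temperature labels, break points, and the assumed load membership value below are hypothetical, chosen only to show the mechanics:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b on support [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzify_temp(t):
        """Fuzzify a temperature reading (deg C) into linguistic labels."""
        return {
            "cold": tri(t, -20.0, 0.0, 15.0),
            "mild": tri(t, 5.0, 15.0, 25.0),
            "hot":  tri(t, 20.0, 30.0, 45.0),
        }

    # Rule evaluation: rule strength = min of the antecedent memberships,
    # e.g. IF temp is mild AND load was high THEN forecast load is high.
    mu = fuzzify_temp(12.0)
    rule_strength = min(mu["mild"], 0.8)   # 0.8 = assumed membership of "load high"
    ```

    The resulting rule strengths form the fuzzy-rule-matrix outputs that the abstract says were fed as inputs to the neural network stage.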

  13. Neuroticism and conscientiousness respectively constrain and facilitate short-term plasticity within the working memory neural network.

    Science.gov (United States)

    Dima, Danai; Friston, Karl J; Stephan, Klaas E; Frangou, Sophia

    2015-10-01

    Individual differences in cognitive efficiency, particularly in relation to working memory (WM), have been associated both with personality dimensions that reflect enduring regularities in brain configuration, and with short-term neural plasticity, that reflects task-related changes in brain connectivity. To elucidate the relationship of these two divergent mechanisms, we tested the hypothesis that personality dimensions, which reflect enduring aspects of brain configuration, inform about the neurobiological framework within which short-term, task-related plasticity, as measured by effective connectivity, can be facilitated or constrained. As WM consistently engages the dorsolateral prefrontal (DLPFC), parietal (PAR), and anterior cingulate cortex (ACC), we specified a WM network model with bidirectional, ipsilateral, and contralateral connections between these regions from a functional magnetic resonance imaging dataset obtained from 40 healthy adults while performing the 3-back WM task. Task-related effective connectivity changes within this network were estimated using Dynamic Causal Modelling. Personality was evaluated along the major dimensions of Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness. Only two dimensions were relevant to task-dependent effective connectivity. Neuroticism and Conscientiousness respectively constrained and facilitated neuroplastic responses within the WM network. These results suggest individual differences in cognitive efficiency arise from the interplay between enduring and short-term plasticity in brain configuration. © 2015 Wiley Periodicals, Inc.

  14. Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation

    International Nuclear Information System (INIS)

    Li, Xiang; Peng, Ling; Yao, Xiaojing; Cui, Shaolong; Hu, Yuan; You, Chengzeng; Chi, Tianhe

    2017-01-01

    Air pollutant concentration forecasting is an effective method of protecting public health by providing an early warning against harmful air pollutants. However, existing methods of air pollutant concentration prediction fail to effectively model long-term dependencies, and most neglect spatial correlations. In this paper, a novel long short-term memory neural network extended (LSTME) model that inherently considers spatiotemporal correlations is proposed for air pollutant concentration prediction. Long short-term memory (LSTM) layers were used to automatically extract inherent useful features from historical air pollutant data, and auxiliary data, including meteorological data and time stamp data, were merged into the proposed model to enhance the performance. Hourly PM 2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm) concentration data collected at 12 air quality monitoring stations in Beijing City from Jan/01/2014 to May/28/2016 were used to validate the effectiveness of the proposed LSTME model. Experiments were performed using the spatiotemporal deep learning (STDL) model, the time delay neural network (TDNN) model, the autoregressive moving average (ARMA) model, the support vector regression (SVR) model, and the traditional LSTM NN model, and a comparison of the results demonstrated that the LSTME model is superior to the other statistics-based models. Additionally, the use of auxiliary data improved model performance. For the one-hour prediction tasks, the proposed model performed well and exhibited a mean absolute percentage error (MAPE) of 11.93%. In addition, we conducted multiscale predictions over different time spans and achieved satisfactory performance, even for 13–24 h prediction tasks (MAPE = 31.47%). - Highlights: • Regional air pollutant concentration shows an obvious spatiotemporal correlation. • Our prediction model presents superior performance. • Climate data and metadata can significantly

  15. A short-term load forecasting model of natural gas based on optimized genetic algorithm and improved BP neural network

    International Nuclear Information System (INIS)

    Yu, Feng; Xu, Xiaozhong

    2014-01-01

    Highlights: • A detailed data processing will make more accurate results prediction. • Taking a full account of more load factors to improve the prediction precision. • Improved BP network obtains higher learning convergence. • Genetic algorithm optimized by chaotic cat map enhances the global search ability. • The combined GA–BP model improved by modified additional momentum factor is superior to others. - Abstract: This paper proposes an appropriate combinational approach which is based on improved BP neural network for short-term gas load forecasting, and the network is optimized by the real-coded genetic algorithm. Firstly, several kinds of modifications are carried out on the standard neural network to accelerate the convergence speed of network, including improved additional momentum factor, improved self-adaptive learning rate and improved momentum and self-adaptive learning rate. Then, it is available to use the global search capability of optimized genetic algorithm to determine the initial weights and thresholds of BP neural network to avoid being trapped in local minima. The ability of GA is enhanced by cat chaotic mapping. In light of the characteristic of natural gas load for Shanghai, a series of data preprocessing methods are adopted and more comprehensive load factors are taken into account to improve the prediction accuracy. Such improvements facilitate forecasting efficiency and exert maximum performance of the model. As a result, the integration model improved by modified additional momentum factor gets more ideal solutions for short-term gas load forecasting, through analyses and comparisons of the above several different combinational algorithms
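
    The "cat chaotic mapping" used to enhance the genetic algorithm can be illustrated with Arnold's cat map (assuming that is the map meant); its well-spread orbit supplies pseudo-random numbers for initializing a diverse GA population:

    ```python
    def cat_map_sequence(x0, y0, n):
        """Arnold's cat map: x' = (x + y) mod 1, y' = (x + 2y) mod 1.
        A chaotic 2-D map whose orbits spread over the unit square,
        used here to generate diverse initial values for a GA population."""
        xs = []
        x, y = x0, y0
        for _ in range(n):
            x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
            xs.append(x)
        return xs

    # Map chaotic samples in [0, 1) to initial network weights in [-1, 1).
    seq = cat_map_sequence(0.123456, 0.654321, 50)
    weights = [2.0 * v - 1.0 for v in seq]
    ```

    Seeding the GA's initial chromosomes from such a sequence, instead of a plain uniform generator, is one common way to improve population diversity and hence the global search ability the highlights refer to.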

  16. Neural networks engaged in short-term memory rehearsal are disrupted by irrelevant speech in human subjects.

    Science.gov (United States)

    Kopp, Franziska; Schröger, Erich; Lipka, Sigrid

    2004-01-02

    Rehearsal mechanisms in human short-term memory are increasingly understood in the light of both behavioural and neuroanatomical findings. However, little is known about the cooperation of participating brain structures and how such cooperations are affected when memory performance is disrupted. In this paper we use EEG coherence as a measure of synchronization to investigate rehearsal processes and their disruption by irrelevant speech in a delayed serial recall paradigm. Fronto-central and fronto-parietal theta (4-7.5 Hz), beta (13-20 Hz), and gamma (35-47 Hz) synchronizations are shown to be involved in our short-term memory task. Moreover, the impairment in serial recall due to irrelevant speech was preceded by a reduction of gamma band coherence. Results suggest that the irrelevant speech effect has its neural basis in the disruption of left-lateralized fronto-central networks. This stresses the importance of gamma band activity for short-term memory operations.

  17. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by the physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although they may benefit related recognition tasks, it is not easy to accurately detect such temporal segments. An automatic temporal segment detection framework using bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, which synthesizes the local and global temporal-spatial information more efficiently, is presented. The framework is evaluated in detail over the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for solving the problem of temporal segment detection.

  18. Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation.

    Science.gov (United States)

    Li, Xiang; Peng, Ling; Yao, Xiaojing; Cui, Shaolong; Hu, Yuan; You, Chengzeng; Chi, Tianhe

    2017-12-01

    Air pollutant concentration forecasting is an effective method of protecting public health by providing an early warning against harmful air pollutants. However, existing methods of air pollutant concentration prediction fail to effectively model long-term dependencies, and most neglect spatial correlations. In this paper, a novel long short-term memory neural network extended (LSTME) model that inherently considers spatiotemporal correlations is proposed for air pollutant concentration prediction. Long short-term memory (LSTM) layers were used to automatically extract inherent useful features from historical air pollutant data, and auxiliary data, including meteorological data and time stamp data, were merged into the proposed model to enhance the performance. Hourly PM 2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm) concentration data collected at 12 air quality monitoring stations in Beijing City from Jan/01/2014 to May/28/2016 were used to validate the effectiveness of the proposed LSTME model. Experiments were performed using the spatiotemporal deep learning (STDL) model, the time delay neural network (TDNN) model, the autoregressive moving average (ARMA) model, the support vector regression (SVR) model, and the traditional LSTM NN model, and a comparison of the results demonstrated that the LSTME model is superior to the other statistics-based models. Additionally, the use of auxiliary data improved model performance. For the one-hour prediction tasks, the proposed model performed well and exhibited a mean absolute percentage error (MAPE) of 11.93%. In addition, we conducted multiscale predictions over different time spans and achieved satisfactory performance, even for 13-24 h prediction tasks (MAPE = 31.47%). Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Ensemble Nonlinear Autoregressive Exogenous Artificial Neural Networks for Short-Term Wind Speed and Power Forecasting.

    Science.gov (United States)

    Men, Zhongxian; Yee, Eugene; Lien, Fue-Sang; Yang, Zhiling; Liu, Yongqian

    2014-01-01

    Short-term wind speed and wind power forecasts (for a 72 h period) are obtained using a nonlinear autoregressive exogenous artificial neural network (ANN) methodology which incorporates either numerical weather prediction or high-resolution computational fluid dynamics wind field information as an exogenous input. An ensemble approach is used to combine the predictions from many candidate ANNs in order to provide improved forecasts for wind speed and power, along with the associated uncertainties in these forecasts. More specifically, the ensemble ANN is used to quantify the uncertainties arising from the network weight initialization and from the unknown structure of the ANN. All members forming the ensemble of neural networks were trained using an efficient particle swarm optimization algorithm. The results of the proposed methodology are validated using wind speed and wind power data obtained from an operational wind farm located in Northern China. The assessment demonstrates that this methodology for wind speed and power forecasting generally provides an improvement in predictive skills when compared to the practice of using an "optimal" weight vector from a single ANN while providing additional information in the form of prediction uncertainty bounds.
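
    The ensemble idea, combining many independently initialized ANNs to obtain both a point forecast and an uncertainty band, reduces to simple statistics over the member predictions; the numbers below are simulated stand-ins for trained-network outputs:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Pretend predictions from 20 independently initialized ANNs for 5 horizons.
    # Each row: one ensemble member's wind speed forecast (m/s).
    member_preds = 8.0 + 0.5 * rng.standard_normal((20, 5))

    mean_forecast = member_preds.mean(axis=0)        # ensemble point forecast
    std_forecast = member_preds.std(axis=0, ddof=1)  # spread = model uncertainty
    lower = mean_forecast - 1.96 * std_forecast      # ~95% uncertainty band
    upper = mean_forecast + 1.96 * std_forecast
    ```

    The member spread captures the uncertainty from weight initialization and network structure that the abstract describes, which a single "optimal" network cannot provide.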

  20. A methodology based on dynamic artificial neural network for short-term forecasting of the power output of a PV generator

    International Nuclear Information System (INIS)

    Almonacid, F.; Pérez-Higueras, P.J.; Fernández, Eduardo F.; Hontoria, L.

    2014-01-01

Highlights: • The output of the majority of renewable energies depends on the variability of the weather conditions. • The short-term forecast is going to be essential for effectively integrating solar energy sources. • A new method based on artificial neural networks to predict the power output of a PV generator one hour ahead is proposed. • This new method is based on dynamic artificial neural networks to predict global solar irradiance and the air temperature. • The methodology developed can be used to estimate the power output of a PV generator with a satisfactory margin of error. - Abstract: One of the problems of some renewable energies is that their output is non-dispatchable, depending on the variability of weather conditions that cannot be predicted and controlled. From this point of view, the short-term forecast is going to be essential for effectively integrating solar energy sources, being a very useful tool for the reliability and stability of the grid, ensuring that an adequate supply is present. In this paper a new methodology for forecasting the output of a PV generator one hour ahead, based on dynamic artificial neural networks, is presented. The results of this study show that the proposed methodology can be used to forecast the power output of PV systems one hour ahead with an acceptable degree of accuracy.

  1. Short-Term Power Load Point Prediction Based on the Sharp Degree and Chaotic RBF Neural Network

    Directory of Open Access Journals (Sweden)

    Dongxiao Niu

    2015-01-01

Full Text Available In order to realize the prediction and positioning of the short-term load inflection point, this paper draws on related research in the field of computer image recognition. A load sharp degree sequence is obtained by transforming the original load sequence with the sharp degree algorithm. A forecasting model based on chaos theory and an RBF neural network is then designed, and the load sharp degree sequence is predicted with this model to realize the positioning of the short-term load inflection point. Finally, in an empirical example, the daily load points of a region are predicted using the region's actual load data to verify the effectiveness and applicability of the method. Prediction results showed that most of the test sample load points could be accurately predicted.

  2. Short-term synaptic plasticity and heterogeneity in neural systems

    Science.gov (United States)

    Mejias, J. F.; Kappen, H. J.; Longtin, A.; Torres, J. J.

    2013-01-01

    We review some recent results on neural dynamics and information processing which arise when considering several biophysical factors of interest, in particular, short-term synaptic plasticity and neural heterogeneity. The inclusion of short-term synaptic plasticity leads to enhanced long-term memory capacities, a higher robustness of memory to noise, and irregularity in the duration of the so-called up cortical states. On the other hand, considering some level of neural heterogeneity in neuron models allows neural systems to optimize information transmission in rate coding and temporal coding, two strategies commonly used by neurons to codify information in many brain areas. In all these studies, analytical approximations can be made to explain the underlying dynamics of these neural systems.
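
    The short-term synaptic depression reviewed here is often captured by a Tsodyks-Markram style resource model; a minimal simulation (parameter values illustrative, not taken from the paper) shows successive responses to a regular spike train depressing toward a steady state:

    ```python
    def depressing_synapse(spike_times, tau_rec=0.5, U=0.4, dt=0.001, T=2.0):
        """Tsodyks-Markram style short-term depression: each presynaptic spike
        uses a fraction U of the available resources x, which recover toward 1
        with time constant tau_rec (seconds). Returns the synaptic efficacy
        U * x transmitted at each spike."""
        x = 1.0
        efficacies = []
        spikes = set(round(t / dt) for t in spike_times)
        for step in range(int(T / dt)):
            x += dt * (1.0 - x) / tau_rec          # recovery toward full resources
            if step in spikes:
                efficacies.append(U * x)           # transmitted strength
                x -= U * x                         # resources consumed by the spike
        return efficacies

    # A regular 20 Hz spike train: responses depress toward a steady state.
    eff = depressing_synapse([0.1 + 0.05 * k for k in range(10)])
    ```

    This activity-dependent weakening is exactly the mechanism that, in the networks reviewed above, destabilizes attractor states and produces irregular up-state durations.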

  3. Application of a Shallow Neural Network to Short-Term Stock Trading

    OpenAIRE

    Madahar, Abhinav; Ma, Yuze; Patel, Kunal

    2017-01-01

    Machine learning is increasingly prevalent in stock market trading. Though neural networks have seen success in computer vision and natural language processing, they have not been as useful in stock market trading. To demonstrate the applicability of a neural network in stock trading, we made a single-layer neural network that recommends buying or selling shares of a stock by comparing the highest high of 10 consecutive days with that of the next 10 days, a process repeated for the stock's ye...

  4. Short-term estimation of GNSS TEC using a neural network model in Brazil

    Science.gov (United States)

    Ferreira, Arthur Amaral; Borges, Renato Alves; Paparini, Claudia; Ciraolo, Luigi; Radicella, Sandro M.

    2017-10-01

This work presents a novel Neural Network (NN) model to estimate Total Electron Content (TEC) from Global Navigation Satellite Systems (GNSS) measurements in three distinct sectors in Brazil. The purpose of this work is to start the investigations on the development of a regional model that can be used to determine the vertical TEC over Brazil, aiming at future applications in near real-time estimation and short-term forecasting. The NN is used to estimate the GNSS TEC values at void locations, where no dual-frequency GNSS receiver that may be used as a source of data for GNSS TEC estimation is available. This approach is particularly useful for GNSS single-frequency users that rely on corrections of ionospheric range errors by TEC models. GNSS data from the first GLONASS network for research and development (GLONASS R&D network) installed in Latin America, and from the Brazilian Network for Continuous Monitoring of the GNSS (RMBC), were used for TEC calibration. The input parameters of the NN model are based on features known to influence TEC values, such as the geographic location of the GNSS receiver, magnetic activity, seasonal and diurnal variations, and solar activity. Data from two ten-day periods (from DoY 154 to 163 and from 282 to 291) are used to train the network. Three distinct analyses have been carried out in order to assess the time-varying and spatial performance of the model. In the spatial performance analysis, for each region a set of stations is chosen to provide training data to the NN; after the training procedure, the NN is used to estimate the vTEC behavior for a test station whose data were not presented to the NN during training. The analysis compares, for each testing station, the vTEC estimated by the NN with the reference calibrated vTEC. As a second analysis, the network's ability to forecast one day beyond the training interval (DoY 292), based on information from the second period of investigation, is also assessed.
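
    The input encoding described (receiver location, magnetic activity, seasonal and diurnal variation, solar activity) can be sketched as below; the exact feature set, scalings, and activity indices (F10.7, Kp) are assumptions for illustration, not the authors' published parameterization:

    ```python
    import math

    def tec_features(lat, lon, doy, hour_utc, f107, kp):
        """Encode NN inputs for TEC estimation. Diurnal and seasonal variables
        enter as sine/cosine pairs so that hour 23 and hour 0, or DoY 365 and
        DoY 1, remain adjacent; location and activity indices are scaled to
        comparable ranges."""
        return [
            lat / 90.0,                                   # geographic latitude
            lon / 180.0,                                  # geographic longitude
            math.sin(2 * math.pi * doy / 365.25),         # seasonal variation
            math.cos(2 * math.pi * doy / 365.25),
            math.sin(2 * math.pi * hour_utc / 24.0),      # diurnal variation
            math.cos(2 * math.pi * hour_utc / 24.0),
            f107 / 300.0,                                 # solar activity proxy
            kp / 9.0,                                     # geomagnetic activity index
        ]

    # Example: a hypothetical station near Brasilia on DoY 154.
    x = tec_features(lat=-15.8, lon=-47.9, doy=154, hour_utc=18.0,
                     f107=120.0, kp=3.0)
    ```

    The circular encoding matters for exactly the diurnal and seasonal dependencies the abstract lists: a raw hour-of-day input would tell the network that 23:00 and 00:00 are far apart.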

  5. Modelling self-optimised short term load forecasting for medium voltage loads using tuning fuzzy systems and Artificial Neural Networks

    International Nuclear Information System (INIS)

    Mahmoud, Thair S.; Habibi, Daryoush; Hassan, Mohammed Y.; Bass, Octavian

    2015-01-01

Highlights: • A novel Short Term Medium Voltage (MV) Load Forecasting (STLF) model is presented. • A knowledge-based STLF error control mechanism is implemented. • An Artificial Neural Network (ANN)-based optimum tuning is applied on STLF. • The relationship between load profiles and operational conditions is analysed. - Abstract: This paper presents an intelligent mechanism for Short Term Load Forecasting (STLF) models, which allows self-adaptation with respect to the load operational conditions. Specifically, a knowledge-based FeedBack Tuning Fuzzy System (FBTFS) is proposed to instantaneously correlate the information about the demand profile and its operational conditions to make decisions for controlling the model’s forecasting error rate. To maintain minimum forecasting error under various operational scenarios, the FBTFS adaptation was optimised using a Multi-Layer Perceptron Artificial Neural Network (MLPANN), which was trained using Backpropagation algorithm, based on the information about the amount of error and the operational conditions at time of forecasting. For the sake of comparison and performance testing, this mechanism was added to the conventional forecasting methods, i.e. Nonlinear AutoRegressive eXogenous-Artificial Neural Network (NARXANN), Fuzzy Subtractive Clustering Method-based Adaptive Neuro Fuzzy Inference System (FSCMANFIS) and Gaussian-kernel Support Vector Machine (GSVM), and the measured forecasting error reduction average in a 12 month simulation period was 7.83%, 8.5% and 8.32% respectively. The 3.5 MW variable load profile of Edith Cowan University (ECU) in Joondalup, Australia, was used in the modelling and simulations of this model, and the data was provided by Western Power, the transmission and distribution company of the state of Western Australia.

  6. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.

    Science.gov (United States)

    Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros

    2018-05-01

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
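
    The Lorenz 96 system used as a benchmark here is easy to reproduce; a standard fourth-order Runge-Kutta integration with the usual chaotic forcing F = 8 generates the kind of trajectory data such an LSTM forecaster would be trained on:

    ```python
    import numpy as np

    def lorenz96_rhs(x, F=8.0):
        """Lorenz 96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
        with cyclic indexing handled by np.roll."""
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def integrate(x0, dt=0.01, steps=1000, F=8.0):
        """Classical RK4 integration of the Lorenz 96 system, returning the
        full trajectory (steps x dimension) as training data for a
        data-driven forecaster."""
        x = x0.copy()
        traj = np.empty((steps, x0.size))
        for k in range(steps):
            k1 = lorenz96_rhs(x, F)
            k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
            k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
            k4 = lorenz96_rhs(x + dt * k3, F)
            x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            traj[k] = x
        return traj

    # Start from the unstable fixed point with a tiny perturbation;
    # chaos amplifies it into a fully developed turbulent trajectory.
    x0 = 8.0 * np.ones(40)
    x0[0] += 0.01
    traj = integrate(x0)
    ```

    Dimension 40 and F = 8 are common choices for this benchmark; the reduced-order inference in the paper would operate on projections of such trajectories rather than the raw state.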

  7. A High Precision Artificial Neural Networks Model for Short-Term Energy Load Forecasting

    Directory of Open Access Journals (Sweden)

    Ping-Huan Kuo

    2018-01-01

    Full Text Available One of the most important research topics in smart grid technology is load forecasting, because the accuracy of load forecasting highly influences the reliability of smart grid systems. In the past, load forecasts were obtained by traditional analysis techniques such as time series analysis and linear regression. Since load forecasting focuses on aggregated electricity consumption patterns, researchers have recently integrated deep learning approaches with machine learning techniques. In this study, an accurate deep neural network algorithm for short-term load forecasting (STLF) is introduced. The forecasting performance of the proposed algorithm is compared with the performances of five artificial intelligence algorithms that are commonly used in load forecasting. The Mean Absolute Percentage Error (MAPE) and Cumulative Variation of Root Mean Square Error (CV-RMSE) are used as accuracy evaluation indexes. The experimental results show that the MAPE and CV-RMSE of the proposed algorithm are 9.77% and 11.66%, respectively, displaying very high forecasting accuracy.
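    The two evaluation indexes can be computed in a few lines. The sketch below uses their common textbook definitions (the paper's exact formulas may differ), with toy load values invented for illustration.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def cv_rmse(actual, forecast):
    """Coefficient of Variation of the RMSE: RMSE normalised by the mean load."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    rmse = np.sqrt(np.mean((actual - forecast) ** 2))
    return 100.0 * rmse / np.mean(actual)

load = [100.0, 120.0, 110.0, 130.0]   # toy hourly loads (illustrative only)
pred = [90.0, 126.0, 110.0, 143.0]
mape_pct, cvrmse_pct = mape(load, pred), cv_rmse(load, pred)
```

With these toy numbers the MAPE is 6.25% and the CV-RMSE about 7.59%.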

  8. Short-Term and Long-Term Forecasting for the 3D Point Position Changing by Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Eleni-Georgia Alevizakou

    2018-03-01

    Full Text Available Forecasting is one of the fastest-growing areas in most sciences, attracting the attention of many researchers. The goal of this study is to develop an integrated forecasting methodology based on an Artificial Neural Network (ANN), a modern and attractive intelligent technique. The final result is to provide short-term and long-term forecasts of point position changes, i.e., the displacement or deformation of the surface the points belong to. The motivation was the combination of two thoughts: the insertion of the forecasting concept into Geodesy, as in most scientific disciplines (e.g., Economics, Medicine), and the desire to know the future position of any point on a construction or on the earth's crust. The methodology was designed to be accurate, stable and general for different kinds of geodetic data. The basic procedure consists of the definition of the forecasting problem, preliminary data analysis (data pre-processing), the definition of the most suitable ANN, its evaluation using proper criteria, and finally the production of forecasts. The methodology places particular emphasis on the stages of pre-processing and evaluation. Additionally, the importance of prediction intervals (PI) is emphasized. A case study, which includes geodetic data from the year 2003 to the year 2016—namely X, Y, Z coordinates—is implemented. The data were acquired by 1000 permanent Global Navigation Satellite System (GNSS) stations. During this case study, 2016 ANNs—with different hyper-parameters—are trained and tested for short-term forecasting and 2016 for long-term forecasting, for each of the GNSS stations. In addition, other conventional statistical forecasting methods are used for the same purpose on the same data set. Finally, the most appropriate Non-linear Autoregressive Recurrent network (NAR) or Non-linear Autoregressive with eXogenous inputs (NARX) for the forecasting of 3D point

  9. Music Learning with Long Short Term Memory Networks

    OpenAIRE

    Colombo, Florian François

    2015-01-01

    Humans are able to learn and compose complex, yet beautiful, pieces of music as seen in e.g. the highly complicated works of J.S. Bach. However, how our brain is able to store and produce these very long temporal sequences is still an open question. Long short-term memory (LSTM) artificial neural networks have been shown to be efficient in sequence learning tasks thanks to their inherent ability to bridge long time lags between input events and their target signals. Here, I investigate the po...

  10. Short-term memory in networks of dissociated cortical neurons.

    Science.gov (United States)

    Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J

    2013-01-30

    Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether these same memory processes, hypothesized to support pattern recognition and short-term memory in vivo, exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with ChannelRhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those that have been reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications using this platform.

  11. Long Short-Term Memory Projection Recurrent Neural Network Architectures for Piano’s Continuous Note Recognition

    Directory of Open Access Journals (Sweden)

    YuKang Jia

    2017-01-01

    Full Text Available Long Short-Term Memory (LSTM) is a kind of Recurrent Neural Network (RNN) for time series, which has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM that further optimizes the speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper we combine them with Connectionist Temporal Classification (CTC) to study continuous piano note recognition for robotics. Based on the Beijing Forestry University music library, we conduct experiments to show the recognition rates and numbers of iterations of LSTM with a single layer, LSTMP with a single layer, and Deep LSTM (DLSTM), i.e., LSTM with multiple layers. As a result, the single-layer LSTMP performs much better than the single-layer LSTM in both time and recognition rate; that is, LSTMP has fewer parameters and therefore reduces the training time, and moreover, benefiting from the projection layer, LSTMP has better performance too. The best recognition rate of LSTMP is 99.8%. As for DLSTM, the recognition rate can reach 100% because of the effectiveness of the deep structure, but compared with the single-layer LSTMP, DLSTM needs more training time.

  12. Adaptive short-term electricity price forecasting using artificial neural networks in the restructured power markets

    International Nuclear Information System (INIS)

    Yamin, H.Y.; Shahidehpour, S.M.; Li, Z.

    2004-01-01

    This paper proposes a comprehensive model for adaptive short-term electricity price forecasting using Artificial Neural Networks (ANN) in the restructured power markets. The model consists of price simulation, price forecasting, and performance analysis. The factors impacting electricity price forecasting, including time factors, load factors, reserve factors, and historical price factors, are discussed. We adopted ANN and proposed a new definition for the MAPE using the median to study the relationship between these factors and the market price, as well as the performance of the electricity price forecasting. The reserve factors are included to enhance the performance of the forecasting process. The proposed model handles price spikes more efficiently because it considers the median instead of the average. The IEEE 118-bus system and the practical California system are used to demonstrate the superiority of the proposed model. (author)
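    The paper's median-based MAPE is not spelled out in the abstract; the sketch below simply swaps the mean for the median in the usual absolute-percentage-error aggregation (an assumption about the intended form) to show why this damps the influence of price spikes.

```python
import numpy as np

def mean_ape(actual, forecast):
    """Conventional MAPE: the mean of the absolute percentage errors."""
    ape = np.abs((np.asarray(actual) - np.asarray(forecast)) / np.asarray(actual))
    return 100.0 * np.mean(ape)

def median_ape(actual, forecast):
    """Median-based variant: a single missed price spike no longer dominates."""
    ape = np.abs((np.asarray(actual) - np.asarray(forecast)) / np.asarray(actual))
    return 100.0 * np.median(ape)

# Toy hourly prices with one spike that the forecast misses badly.
price    = np.array([30.0, 32.0, 31.0, 300.0, 33.0])
forecast = np.array([31.0, 31.0, 32.0,  40.0, 32.0])
```

On these numbers the missed spike drives the mean APE near 20%, while the median APE stays near 3%.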

  13. A Hybrid Method Based on Singular Spectrum Analysis, Firefly Algorithm, and BP Neural Network for Short-Term Wind Speed Forecasting

    Directory of Open Access Journals (Sweden)

    Yuyang Gao

    2016-09-01

    Full Text Available With increasing importance being attached to big data mining, analysis, and forecasting in the field of wind energy, how to select an optimization model to improve the forecasting accuracy of the wind speed time series is not only an extremely challenging problem, but also a problem of concern for economic forecasting. Artificial intelligence models are widely used in forecasting and data processing, but an individual back-propagation artificial neural network cannot always satisfy time series forecasting needs. Thus, a hybrid forecasting approach is proposed in this study, which consists of data preprocessing, parameter optimization and a neural network, to advance the accuracy of short-term wind speed forecasting. In a case study with data collected from Peng Lai, a city located in China, the simulation results show that the hybrid forecasting method yields better predictions than the individual BP network, indicating that the hybrid method has stronger forecasting ability.

  14. Improved Short-Term Load Forecasting Based on Two-Stage Predictions with Artificial Neural Networks in a Microgrid Environment

    Directory of Open Access Journals (Sweden)

    Jaime Lloret

    2013-08-01

    Full Text Available Short-Term Load Forecasting plays a significant role in energy generation planning, and is especially gaining momentum in the emerging Smart Grid environment, which usually presents highly disaggregated scenarios where detailed real-time information is available thanks to Communications and Information Technologies, as happens, for example, in the case of microgrids. This paper presents a two-stage prediction model based on an Artificial Neural Network that allows Short-Term Load Forecasting of the following day in a microgrid environment; the first stage estimates the peak and valley values of the demand curve of the day to be forecasted. Those estimates, together with other variables, make the second stage, the forecast of the entire demand curve, more precise than a direct, single-stage forecast. The whole architecture of the model is presented and the results are compared with recent work on the same data set and the same location, obtaining a Mean Absolute Percentage Error of 1.62% against the original 2.47% of the single-stage model.

  15. Wind power prediction based on genetic neural network

    Science.gov (United States)

    Zhang, Suhan

    2017-04-01

    The scale of grid-connected wind farms keeps increasing. To ensure the stability of power system operation, make reasonable scheduling schemes and improve the competitiveness of wind farms in the electricity generation market, it is important to accurately forecast short-term wind power. To reduce the influence of the nonlinear relationship between the disturbance factors and the wind power, an improved prediction model based on a genetic algorithm and a neural network is established. To overcome the BP neural network's shortcomings of long training time and a tendency to fall into local minima, and to improve its accuracy, a genetic algorithm is adopted to optimize the parameters and topology of the neural network. Historical data are used as input to predict short-term wind power. The effectiveness and feasibility of the method are verified using actual data from a wind farm as an example.

  16. Fast Weight Long Short-Term Memory

    OpenAIRE

    Keller, T. Anderson; Sridhar, Sharath Nittur; Wang, Xin

    2018-01-01

    Associative memory using fast weights is a short-term memory mechanism that substantially improves the memory capacity and time scale of recurrent neural networks (RNNs). As recent studies introduced fast weights only to regular RNNs, it is unknown whether fast weight memory is beneficial to gated RNNs. In this work, we report a significant synergy between long short-term memory (LSTM) networks and fast weight associative memories. We show that this combination, in learning associative retrie...

  17. Neural circuit mechanisms of short-term memory

    Science.gov (United States)

    Goldman, Mark

    Memory over time scales of seconds to tens of seconds is thought to be maintained by neural activity that is triggered by a memorized stimulus and persists long after the stimulus is turned off. This presents a challenge to current models of memory-storing mechanisms, because the typical time scales associated with cellular and synaptic dynamics are two orders of magnitude smaller than this. While such long time scales can easily be achieved by bistable processes that toggle like a flip-flop between a baseline and elevated-activity state, many neuronal systems have been observed experimentally to be capable of maintaining a continuum of stable states. For example, in neural integrator networks involved in the accumulation of evidence for decision making and in motor control, individual neurons have been recorded whose activity reflects the mathematical integral of their inputs; in the absence of input, these neurons sustain activity at a level proportional to the running total of their inputs. This represents an analog form of memory whose dynamics can be conceptualized through an energy landscape with a continuum of lowest-energy states. Such continuous attractor landscapes are structurally non-robust, in seeming violation of the relative robustness of biological memory systems. In this talk, I will present and compare different biologically motivated circuit motifs for the accumulation and storage of signals in short-term memory. Challenges to generating robust memory maintenance will be highlighted and potential mechanisms for ameliorating the sensitivity of memory networks to perturbations will be discussed. Funding for this work was provided by NIH R01 MH065034, NSF IIS-1208218, Simons Foundation 324260, and a UC Davis Ophthalmology Research to Prevent Blindness Grant.
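    The integrator idea in this talk abstract can be illustrated with a one-unit rate model (all parameter values here are illustrative): positive feedback tuned to exactly cancel the intrinsic leak lets the unit hold the running total of its inputs after they stop, while slight mistuning makes the stored value decay.

```python
import numpy as np

def run_integrator(feedback_w, inputs, dt=0.001, tau=0.01):
    """Rate-model integrator: tau * dr/dt = -r + feedback_w * r + input.
    With feedback_w = 1 the recurrent excitation exactly cancels the leak,
    so the unit holds the integral of its input after the input stops."""
    r, trace = 0.0, []
    for inp in inputs:
        r += dt / tau * (-r + feedback_w * r + inp)
        trace.append(r)
    return np.array(trace)

# A brief input pulse followed by silence.
inputs = np.zeros(2000)
inputs[:100] = 1.0
tuned = run_integrator(1.0, inputs)    # holds its value after input offset
leaky = run_integrator(0.98, inputs)   # mistuned feedback: the memory decays
```

The fragility the talk highlights is visible directly: a 2% mistuning of the feedback weight turns a perfect analog memory into one that fades within a couple of seconds at these parameters.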

  18. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)

  19. Synaptic plasticity, neural circuits, and the emerging role of altered short-term information processing in schizophrenia

    Science.gov (United States)

    Crabtree, Gregg W.; Gogos, Joseph A.

    2014-01-01

    Synaptic plasticity alters the strength of information flow between presynaptic and postsynaptic neurons and thus modifies the likelihood that action potentials in a presynaptic neuron will lead to an action potential in a postsynaptic neuron. As such, synaptic plasticity and pathological changes in synaptic plasticity impact the synaptic computation which controls the information flow through the neural microcircuits responsible for the complex information processing necessary to drive adaptive behaviors. As current theories of neuropsychiatric disease suggest that distinct dysfunctions in neural circuit performance may critically underlie the unique symptoms of these diseases, pathological alterations in synaptic plasticity mechanisms may be fundamental to the disease process. Here we consider mechanisms of both short-term and long-term plasticity of synaptic transmission and their possible roles in information processing by neural microcircuits in both health and disease. As paradigms of neuropsychiatric diseases with strongly implicated risk genes, we discuss the findings in schizophrenia and autism and consider the alterations in synaptic plasticity and network function observed in both human studies and genetic mouse models of these diseases. Together these studies have begun to point toward a likely dominant role of short-term synaptic plasticity alterations in schizophrenia while dysfunction in autism spectrum disorders (ASDs) may be due to a combination of both short-term and long-term synaptic plasticity alterations. PMID:25505409

  20. Short-Term Solar Irradiance Forecasting Model Based on Artificial Neural Network Using Statistical Feature Parameters

    Directory of Open Access Journals (Sweden)

    Hongshan Zhao

    2012-05-01

    Full Text Available Short-term solar irradiance forecasting (STSIF) is of great significance for the optimal operation and power prediction of grid-connected photovoltaic (PV) plants. However, STSIF is very complex to handle due to the random and nonlinear characteristics of solar irradiance under changeable weather conditions. The Artificial Neural Network (ANN) is suitable for STSIF modeling and many research works on this topic have been presented, but the conciseness and robustness of the existing models still need to be improved. After discussing the relation between weather variations and irradiance, the characteristics of the statistical feature parameters of irradiance under different weather conditions are figured out. A novel ANN model using statistical feature parameters (ANN-SFP) for STSIF is proposed in this paper. The input vector is reconstructed with several statistical feature parameters of irradiance and the ambient temperature. Thus sufficient information can be effectively extracted from relatively few inputs and the model complexity is reduced. The model structure is determined by cross-validation (CV), and the Levenberg-Marquardt algorithm (LMA) is used for the network training. Simulations are carried out to validate and compare the proposed model with the conventional ANN model using historical data series (ANN-HDS), and the results indicate that the forecast accuracy is obviously improved under variable weather conditions.

  1. Deep Neural Network Based Demand Side Short Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Seunghyoung Ryu

    2016-12-01

    Full Text Available In the smart grid, one of the most important research areas is load forecasting; it spans from traditional time series analyses to recent machine learning approaches and mostly focuses on forecasting aggregated electricity consumption. However, the importance of demand-side energy management, including individual load forecasting, is becoming critical. In this paper, we propose deep neural network (DNN)-based load forecasting models and apply them to a demand-side empirical load database. DNNs are trained in two different ways: with a pre-training restricted Boltzmann machine, and using the rectified linear unit without pre-training. DNN forecasting models are trained on individual customers' electricity consumption data and regional meteorological elements. To verify the performance of the DNNs, forecasting results are compared with a shallow neural network (SNN), a double seasonal Holt–Winters (DSHW) model and the autoregressive integrated moving average (ARIMA) model. The mean absolute percentage error (MAPE) and relative root mean square error (RRMSE) are used for verification. Our results show that DNNs exhibit accurate and robust predictions compared to the other forecasting models, e.g., MAPE and RRMSE are reduced by up to 17% and 22% compared to SNN and 9% and 29% compared to DSHW.

  2. Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning

    Science.gov (United States)

    Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik

    2016-07-01

    Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.

  3. Robustness Analysis of Hybrid Stochastic Neural Networks with Neutral Terms and Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Chunmei Wu

    2015-01-01

    Full Text Available We analyze the robustness of global exponential stability of hybrid stochastic neural networks subject to neutral terms and time-varying delays simultaneously. Given globally exponentially stable hybrid stochastic neural networks, we characterize the upper bounds of the contraction coefficients of the neutral terms and time-varying delays by using transcendental equations. Moreover, we prove theoretically that, for any globally exponentially stable hybrid stochastic neural network, if the additive neutral terms and time-varying delays are smaller than the derived upper bounds, then the perturbed neural network is guaranteed to also be globally exponentially stable. Finally, a numerical simulation example is given to illustrate the presented criteria.

  4. Holding multiple items in short term memory: a neural mechanism.

    Directory of Open Access Journals (Sweden)

    Edmund T Rolls

    Full Text Available Human short term memory has a capacity of several items maintained simultaneously. We show how the number of short term memory representations that an attractor network modeling a cortical local network can simultaneously maintain active is increased by using synaptic facilitation of the type found in the prefrontal cortex. We have been able to maintain 9 short term memories active simultaneously in integrate-and-fire simulations where the proportion of neurons in each population, the sparseness, is 0.1, and have confirmed the stability of such a system with mean field analyses. Without synaptic facilitation the system can maintain many fewer memories active in the same network. The system operates because of the effectively increased synaptic strengths formed by the synaptic facilitation just for those pools to which the cue is applied, and then maintenance of this synaptic facilitation in just those pools when the cue is removed by the continuing neuronal firing in those pools. The findings have implications for understanding how several items can be maintained simultaneously in short term memory, how this may be relevant to the implementation of language in the brain, and suggest new approaches to understanding and treating the decline in short term memory that can occur with normal aging.

  5. Holding multiple items in short term memory: a neural mechanism.

    Science.gov (United States)

    Rolls, Edmund T; Dempere-Marco, Laura; Deco, Gustavo

    2013-01-01

    Human short term memory has a capacity of several items maintained simultaneously. We show how the number of short term memory representations that an attractor network modeling a cortical local network can simultaneously maintain active is increased by using synaptic facilitation of the type found in the prefrontal cortex. We have been able to maintain 9 short term memories active simultaneously in integrate-and-fire simulations where the proportion of neurons in each population, the sparseness, is 0.1, and have confirmed the stability of such a system with mean field analyses. Without synaptic facilitation the system can maintain many fewer memories active in the same network. The system operates because of the effectively increased synaptic strengths formed by the synaptic facilitation just for those pools to which the cue is applied, and then maintenance of this synaptic facilitation in just those pools when the cue is removed by the continuing neuronal firing in those pools. The findings have implications for understanding how several items can be maintained simultaneously in short term memory, how this may be relevant to the implementation of language in the brain, and suggest new approaches to understanding and treating the decline in short term memory that can occur with normal aging.

  6. Holding Multiple Items in Short Term Memory: A Neural Mechanism

    Science.gov (United States)

    Rolls, Edmund T.; Dempere-Marco, Laura; Deco, Gustavo

    2013-01-01

    Human short term memory has a capacity of several items maintained simultaneously. We show how the number of short term memory representations that an attractor network modeling a cortical local network can simultaneously maintain active is increased by using synaptic facilitation of the type found in the prefrontal cortex. We have been able to maintain 9 short term memories active simultaneously in integrate-and-fire simulations where the proportion of neurons in each population, the sparseness, is 0.1, and have confirmed the stability of such a system with mean field analyses. Without synaptic facilitation the system can maintain many fewer memories active in the same network. The system operates because of the effectively increased synaptic strengths formed by the synaptic facilitation just for those pools to which the cue is applied, and then maintenance of this synaptic facilitation in just those pools when the cue is removed by the continuing neuronal firing in those pools. The findings have implications for understanding how several items can be maintained simultaneously in short term memory, how this may be relevant to the implementation of language in the brain, and suggest new approaches to understanding and treating the decline in short term memory that can occur with normal aging. PMID:23613789
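    The synaptic facilitation mechanism described above can be sketched with the utilisation variable of the Tsodyks-Markram model (a standard formulation; the parameter values are illustrative, not taken from the paper): cue-driven spikes transiently raise the utilisation u, and the elevation outlasts the cue, which is what keeps the cued pools effectively strengthened after the cue is removed.

```python
import numpy as np

def facilitation_trace(spike_times, U=0.15, tau_f=1.5, t_end=3.0, dt=0.001):
    """Utilisation variable u of Tsodyks-Markram facilitation:
    du/dt = -(u - U)/tau_f between spikes, u -> u + U*(1 - u) at each spike.
    Effective synaptic strength scales with u, so a cued pool stays
    transiently strengthened after the cue is removed."""
    spike_steps = set(np.round(np.asarray(spike_times) / dt).astype(int))
    u, trace = U, []
    for i in range(round(t_end / dt)):
        u += dt * (U - u) / tau_f
        if i in spike_steps:
            u += U * (1.0 - u)
        trace.append(u)
    return np.array(trace)

# A 1 s burst of cue-driven spikes at 20 Hz, then 2 s of silence.
u = facilitation_trace(spike_times=np.arange(0.0, 1.0, 0.05))
```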

  7. Adaptive exponential synchronization of delayed neural networks with reaction-diffusion terms

    International Nuclear Information System (INIS)

    Sheng Li; Yang Huizhong; Lou Xuyang

    2009-01-01

    This paper presents an exponential synchronization scheme for a class of neural networks with time-varying and distributed delays and reaction-diffusion terms. An adaptive synchronization controller is derived to achieve the exponential synchronization of the drive-response structure of the neural networks by using Lyapunov stability theory. At the same time, update laws for the parameters are proposed to guarantee the synchronization of delayed neural networks with all parameters unknown. It is shown that the approaches developed here extend and improve the ideas presented in the recent literature.

  8. Rich spectrum of neural field dynamics in the presence of short-term synaptic depression

    Science.gov (United States)

    Wang, He; Lam, Kin; Fung, C. C. Alan; Wong, K. Y. Michael; Wu, Si

    2015-09-01

    In continuous attractor neural networks (CANNs), spatially continuous information such as orientation, head direction, and spatial location is represented by Gaussian-like tuning curves that can be displaced continuously in the space of the preferred stimuli of the neurons. We investigate how short-term synaptic depression (STD) can reshape the intrinsic dynamics of the CANN model and its responses to a single static input. In particular, CANNs with STD can support various complex firing patterns and chaotic behaviors. These chaotic behaviors have the potential to encode various stimuli in the neuronal system.
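    The destabilising effect of STD can be seen in the resource variable of the Tsodyks-Markram depression model (again a standard formulation with illustrative parameters): sustained presynaptic firing depletes the available resources, weakening the very synapses that would otherwise hold an attractor bump in place.

```python
import numpy as np

def depression_trace(rate_hz, U=0.5, tau_d=0.3, t_end=2.0, dt=0.001):
    """Resource variable x of Tsodyks-Markram depression:
    dx/dt = (1 - x)/tau_d between spikes, x -> x - U*x at each spike."""
    n = round(t_end / dt)
    isi = round(1.0 / (rate_hz * dt))   # inter-spike interval in steps
    x, trace = 1.0, []
    for i in range(n):
        x += dt * (1.0 - x) / tau_d     # slow recovery toward x = 1
        if i % isi == 0:
            x -= U * x                  # each spike consumes a fraction U
        trace.append(x)
    return np.array(trace)

high = depression_trace(rate_hz=40.0)   # strong depletion at high firing rates
low = depression_trace(rate_hz=5.0)     # mild depletion at low firing rates
```

Because depletion is strongest exactly where the bump is firing, the bump's own support erodes under it, which is the intuition behind the moving patterns and chaotic regimes the abstract describes.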

  9. Short-term memory of motor network performance via activity-dependent potentiation of Na+/K+ pump function.

    Science.gov (United States)

    Zhang, Hong-Yan; Sillar, Keith T

    2012-03-20

    Brain networks memorize previous performance to adjust their output in light of past experience. These activity-dependent modifications generally result from changes in synaptic strengths or ionic conductances, and ion pumps have only rarely been demonstrated to play a dynamic role. Locomotor behavior is produced by central pattern generator (CPG) networks and modified by sensory and descending signals to allow for changes in movement frequency, intensity, and duration, but whether or how the CPG networks recall recent activity is largely unknown. In Xenopus frog tadpoles, swim bout duration correlates linearly with interswim interval, suggesting that the locomotor network retains a short-term memory of previous output. We discovered an ultraslow, minute-long afterhyperpolarization (usAHP) in network neurons following locomotor episodes. The usAHP is mediated by an activity- and sodium spike-dependent enhancement of electrogenic Na(+)/K(+) pump function. By integrating spike frequency over time and linking the membrane potential of spinal neurons to network performance, the usAHP plays a dynamic role in short-term motor memory. Because Na(+)/K(+) pumps are ubiquitously expressed in neurons of all animals and because sodium spikes inevitably accompany network activity, the usAHP may represent a phylogenetically conserved but largely overlooked mechanism for short-term memory of neural network function. Copyright © 2012 Elsevier Ltd. All rights reserved.
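    A toy version of this mechanism (our illustration, not the authors' model): let the pump current scale with the number of spikes in a bout and decay over roughly a minute, and let the next bout start once the usAHP falls below a threshold. Longer or more intense bouts then produce longer interswim intervals, matching the reported correlation.

```python
import numpy as np

def interswim_interval(n_spikes, gain=0.02, tau=60.0, threshold=0.5):
    """Each spike adds a fixed sodium load; the resulting pump current
    (amplitude gain * n_spikes, decaying with time constant tau ~ 1 min)
    hyperpolarises the neuron. Swimming can resume once the usAHP decays
    below threshold, so solve amplitude * exp(-t / tau) = threshold for t."""
    amplitude = gain * n_spikes
    if amplitude <= threshold:
        return 0.0
    return tau * np.log(amplitude / threshold)

short_rest = interswim_interval(n_spikes=100)   # brief bout, short interval
long_rest = interswim_interval(n_spikes=500)    # long bout, longer interval
```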

  10. A PSO based Artificial Neural Network approach for short term unit commitment problem

    Directory of Open Access Journals (Sweden)

    AFTAB AHMAD

    2010-10-01

    Full Text Available Unit commitment (UC) is a non-linear, large-scale, complex, mixed-integer combinatorial constrained optimization problem. This paper proposes a new hybrid approach for generating unit commitment schedules using a neural network with a swarm-intelligence-based learning rule. The training data have been generated using dynamic programming for machines without valve-point effects and using a genetic algorithm for machines with valve-point effects. A set of load patterns as inputs and the corresponding unit generation schedules as outputs are used to train the network. The neural network fine-tunes the best results to the desired targets. The proposed approach has been validated for three thermal machines with and without valve-point effects. The results are compared with the approaches available in the literature. The PSO-ANN-trained model gives better results, which shows the promise of the proposed methodology.
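    The PSO learning rule itself is compact. The sketch below is a generic particle swarm minimiser (the standard inertia/cognitive/social form; all coefficients are illustrative), demonstrated on a sphere function rather than on actual network weights, which is what a PSO-trained ANN would use as particle positions.

```python
import numpy as np

def pso_minimise(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser: each particle is pulled toward its
    personal best position (c1 term) and the swarm's global best (c2 term)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.apply_along_axis(f, 1, pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(f, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, best_val = pso_minimise(lambda x: np.sum(x ** 2), dim=3)
```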

  11. Application of artificial neural network to search for gravitational-wave signals associated with short gamma-ray bursts

    International Nuclear Information System (INIS)

    Kim, Kyungmin; Lee, Hyun Kyu; Harry, Ian W; Hodge, Kari A; Kim, Young-Min; Lee, Chang-Hwan; Oh, John J; Oh, Sang Hoon; Son, Edwin J

    2015-01-01

We apply a machine learning algorithm, the artificial neural network, to the search for gravitational-wave signals associated with short gamma-ray bursts (GRBs). The multi-dimensional samples consisting of data corresponding to the statistical and physical quantities from the coherent search pipeline are fed into the artificial neural network to distinguish simulated gravitational-wave signals from background noise artifacts. Our result shows that the data classification efficiency at a fixed false alarm probability (FAP) is improved by the artificial neural network in comparison to the conventional detection statistic. Specifically, the distance at 50% detection probability at a fixed false positive rate is increased by about 8%–14% for the considered waveform models. We also evaluate a few seconds of the gravitational-wave data segment using the trained networks and obtain the FAP. We suggest that the artificial neural network can be a complementary method to the conventional detection statistic for identifying gravitational-wave signals related to the short GRBs. (paper)

  12. Attention supports verbal short-term memory via competition between dorsal and ventral attention networks.

    Science.gov (United States)

    Majerus, Steve; Attout, Lucie; D'Argembeau, Arnaud; Degueldre, Christian; Fias, Wim; Maquet, Pierre; Martinez Perez, Trecy; Stawarczyk, David; Salmon, Eric; Van der Linden, Martial; Phillips, Christophe; Balteau, Evelyne

    2012-05-01

    Interactions between the neural correlates of short-term memory (STM) and attention have been actively studied in the visual STM domain but much less in the verbal STM domain. Here we show that the same attention mechanisms that have been shown to shape the neural networks of visual STM also shape those of verbal STM. Based on previous research in visual STM, we contrasted the involvement of a dorsal attention network centered on the intraparietal sulcus supporting task-related attention and a ventral attention network centered on the temporoparietal junction supporting stimulus-related attention. We observed that, with increasing STM load, the dorsal attention network was activated while the ventral attention network was deactivated, especially during early maintenance. Importantly, activation in the ventral attention network increased in response to task-irrelevant stimuli briefly presented during the maintenance phase of the STM trials but only during low-load STM conditions, which were associated with the lowest levels of activity in the dorsal attention network during encoding and early maintenance. By demonstrating a trade-off between task-related and stimulus-related attention networks during verbal STM, this study highlights the dynamics of attentional processes involved in verbal STM.

  13. Short-term load forecasting by a neuro-fuzzy based approach

    Energy Technology Data Exchange (ETDEWEB)

    Ruey-Hsun Liang; Ching-Chi Cheng [National Yunlin University of Science and Technology (China). Dept. of Electrical Engineering

    2002-02-01

An approach based on an artificial neural network (ANN) combined with a fuzzy system is proposed for short-term load forecasting. This approach was developed in order to reach the desired short-term load forecasting in an efficient manner. Over the past few years, ANNs have attained the ability to manage a great deal of system complexity and are now being proposed as powerful computational tools. In order to select the appropriate load as the input for the desired forecasting, the Pearson analysis method is first applied to choose two historical record load patterns that are similar to the forecasted load pattern. These two load patterns and the required weather parameters are then fuzzified and input into a neural network for training or testing the network. The back-propagation (BP) neural network is applied to determine the preliminary forecasted load. In addition, the rule base for the fuzzy inference machine contains important linguistic membership function terms with knowledge in the form of fuzzy IF-THEN rules. This produces the load correction inference from the historical information and past forecasted load errors to obtain an inferred load error. Adding the inferred load error to the preliminary forecasted load, we can obtain the final forecasted load. The effectiveness of the proposed approach to the short-term load-forecasting problem is demonstrated using practical data from the Taiwan Power Company (TPC). (Author)
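The first step, picking the two historical load patterns most correlated with the forecast-day pattern, can be sketched as follows (toy load values and function names are ours, not the paper's):

```python
# Selecting the two most similar historical load patterns via the
# Pearson correlation coefficient (illustrative sketch with made-up data).
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def two_most_similar(history, reference):
    """Return indices of the two historical patterns most correlated with reference."""
    ranked = sorted(range(len(history)),
                    key=lambda i: pearson(history[i], reference),
                    reverse=True)
    return ranked[:2]

history = [
    [100, 120, 150, 140],   # day 0: same shape as the reference
    [150, 140, 120, 100],   # day 1: reversed shape
    [ 90, 115, 145, 135],   # day 2: also close to the reference
]
reference = [105, 125, 155, 145]
print(two_most_similar(history, reference))  # [0, 2]
```

The two selected patterns would then be fuzzified together with weather variables before entering the BP network.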

  14. Australia's long-term electricity demand forecasting using deep neural networks

    OpenAIRE

    Hamedmoghadam, Homayoun; Joorabloo, Nima; Jalili, Mahdi

    2018-01-01

    Accurate prediction of long-term electricity demand has a significant role in demand side management and electricity network planning and operation. Demand over-estimation results in over-investment in network assets, driving up the electricity prices, while demand under-estimation may lead to under-investment resulting in unreliable and insecure electricity. In this manuscript, we apply deep neural networks to predict Australia's long-term electricity demand. A stacked autoencoder is used in...

  15. Numerical experiments with neural networks

    International Nuclear Information System (INIS)

    Miranda, Enrique.

    1990-01-01

Neural networks are highly idealized models which, in spite of their simplicity, reproduce some key features of the real brain. In this paper, they are introduced at a level adequate for an undergraduate computational physics course. Some relevant quantities are defined and evaluated numerically for the Hopfield model and a short-term memory model. (Author)
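The Hopfield model referred to here fits in a few lines: Hebbian storage of ±1 patterns, then sign-threshold updates that pull a corrupted cue back to the stored attractor. An illustrative sketch, not the paper's code:

```python
# Minimal Hopfield network with Hebbian storage (undergraduate-level sketch).
# Patterns are lists of +1/-1 values.

def train_hebbian(patterns):
    """Weight matrix W[i][j] = (1/N) * sum over patterns of x_i * x_j, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=10):
    """Synchronous updates: s_i <- sign(sum_j W_ij * s_j)."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
W = train_hebbian([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]          # corrupt one bit of the cue
recovered = recall(W, noisy)
print(recovered == pattern)   # True -- the stored pattern acts as an attractor
```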

  16. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    Science.gov (United States)

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  17. Neural activity in the hippocampus predicts individual visual short-term memory capacity.

    Science.gov (United States)

    von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter

    2013-07-01

Although the hippocampus had been traditionally thought to be exclusively involved in long-term memory, recent studies raised controversial explanations of why hippocampal activity emerged during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period = 900 ms; set size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K-formula), two performance groups were formed (high and low performers). Within whole brain analyses, we found a robust main effect of "set size" in the posterior parietal cortex (PPC). In line with a "set size × group" interaction in the hippocampus, a subsequent Finite Impulse Response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: Low performers (mean capacity = 3.63) elicited increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity = 5.19) showed an incremental activity increase with larger set size (maximal activation at set size six). Our data demonstrated that performance-related neural activity in the hippocampus emerged below the capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
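Cowan's K, as commonly stated, estimates capacity as set size times the difference between hit and false-alarm rates. A quick illustration with made-up response rates (not the study's data):

```python
# Cowan's K estimate of visual short-term memory capacity:
# K = set_size * (hit_rate - false_alarm_rate).
# The rates below are invented for illustration.

def cowans_k(set_size, hit_rate, false_alarm_rate):
    return set_size * (hit_rate - false_alarm_rate)

# A participant tested at set size 6 with 90% hits and 5% false alarms:
k = cowans_k(6, 0.90, 0.05)
print(round(k, 2))  # 5.1, close to the high performers' mean capacity of 5.19
```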

  18. Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding

    OpenAIRE

    Sun, Zheng; Liu, Jiaqi; Zhang, Zewang; Chen, Jingwen; Huo, Zhao; Lee, Ching Hua; Zhang, Xiao

    2016-01-01

    Creating aesthetically pleasing pieces of art, including music, has been a long-term goal for artificial intelligence research. Despite recent successes of long-short term memory (LSTM) recurrent neural networks (RNNs) in sequential learning, LSTM neural networks have not, by themselves, been able to generate natural-sounding music conforming to music theory. To transcend this inadequacy, we put forward a novel method for music composition that combines the LSTM with Grammars motivated by mus...

  19. Energy management of a university campus utilizing short-term load forecasting with an artificial neural network

    Science.gov (United States)

    Palchak, David

    Electrical load forecasting is a tool that has been utilized by distribution designers and operators as a means for resource planning and generation dispatch. The techniques employed in these predictions are proving useful in the growing market of consumer, or end-user, participation in electrical energy consumption. These predictions are based on exogenous variables, such as weather, and time variables, such as day of week and time of day as well as prior energy consumption patterns. The participation of the end-user is a cornerstone of the Smart Grid initiative presented in the Energy Independence and Security Act of 2007, and is being made possible by the emergence of enabling technologies such as advanced metering infrastructure. The optimal application of the data provided by an advanced metering infrastructure is the primary motivation for the work done in this thesis. The methodology for using this data in an energy management scheme that utilizes a short-term load forecast is presented. The objective of this research is to quantify opportunities for a range of energy management and operation cost savings of a university campus through the use of a forecasted daily electrical load profile. The proposed algorithm for short-term load forecasting is optimized for Colorado State University's main campus, and utilizes an artificial neural network that accepts weather and time variables as inputs. The performance of the predicted daily electrical load is evaluated using a number of error measurements that seek to quantify the best application of the forecast. The energy management presented utilizes historical electrical load data from the local service provider to optimize the time of day that electrical loads are being managed. Finally, the utilization of forecasts in the presented energy management scenario is evaluated based on cost and energy savings.
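The evaluation step described above typically relies on standard error measures such as RMSE and MAPE; a minimal sketch with illustrative numbers (not the thesis data):

```python
# Two common error measures for a day-ahead load forecast:
# root mean squared error (RMSE) and mean absolute percentage error (MAPE).
import math

def rmse(actual, forecast):
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast))
                     / len(actual))

def mape(actual, forecast):
    return 100.0 * sum(abs(a - f) / a
                       for a, f in zip(actual, forecast)) / len(actual)

actual   = [10.0, 12.0, 15.0, 14.0]   # hourly load, MW (toy values)
forecast = [11.0, 12.0, 14.0, 15.0]
print(round(rmse(actual, forecast), 3))  # 0.866
print(round(mape(actual, forecast), 2))  # 5.95
```

Which measure "best" quantifies a forecast depends on its use: RMSE penalizes the large misses that matter for peak management, while MAPE is scale-free and easier to compare across buildings.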

  20. Implicitly Defined Neural Networks for Sequence Labeling

    Science.gov (United States)

    2017-07-31

...ularity has soared for the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and variants such as the Gated Recurrent Unit (GRU) (Cho et al.) ... network are coupled together, in order to improve performance on complex, long-range dependencies in either direction of a sequence. We contrast our ...

  1. Railway track circuit fault diagnosis using recurrent neural networks

    NARCIS (Netherlands)

    de Bruin, T.D.; Verbert, K.A.J.; Babuska, R.

    2017-01-01

Timely detection and identification of faults in railway track circuits are crucial for the safety and availability of railway networks. In this paper, the use of the long short-term memory (LSTM) recurrent neural network is proposed to accomplish these tasks based on the commonly available ...

  2. The stability of the international oil trade network from short-term and long-term perspectives

    Science.gov (United States)

    Sun, Qingru; Gao, Xiangyun; Zhong, Weiqiong; Liu, Nairong

    2017-09-01

To examine the stability of the international oil trade network and explore the influence of countries and trade relationships on trade stability, we construct weighted and unweighted international oil trade networks based on complex network theory using oil trading data between countries from 1996 to 2014. We analyze the stability of the international oil trade network (IOTN) from short-term and long-term perspectives. From the short-term perspective, we find that trade volumes play an important role in the stability. Moreover, the weighted IOTN is stable; however, the unweighted networks can better reflect the actual evolution of the IOTN. From the long-term perspective, we identify trade relationships that are maintained during the whole sample period to reveal the situation of the whole international oil trade. We provide a way to quantitatively measure the stability of a complex network from short-term and long-term perspectives, which can be applied to measure and analyze the trade stability of other goods or services.
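One simple way to quantify short-term stability in this spirit is the fraction of one year's trade links that survive into the next year (the country codes and links below are invented for illustration; this is not the paper's exact metric):

```python
# Short-term stability of a directed trade network: the survival rate of
# trade relationships from year t to year t+1 (toy sketch).

def persistence(edges_t, edges_t1):
    """Fraction of year-t edges still present in year t+1."""
    if not edges_t:
        return 0.0
    return len(edges_t & edges_t1) / len(edges_t)

# Each edge is (exporter, importer); codes are hypothetical examples.
trade_2013 = {("SA", "US"), ("SA", "CN"), ("RU", "CN"), ("NG", "US")}
trade_2014 = {("SA", "US"), ("SA", "CN"), ("RU", "CN"), ("RU", "IN")}
print(persistence(trade_2013, trade_2014))  # 0.75 -- 3 of 4 links survive
```

A weighted variant would additionally compare trade volumes on surviving links, which is why the paper's weighted and unweighted networks can tell different stories.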

  3. Global exponential stability of fuzzy cellular neural networks with delays and reaction-diffusion terms

    International Nuclear Information System (INIS)

    Wang Jian; Lu Junguo

    2008-01-01

In this paper, we study the global exponential stability of fuzzy cellular neural networks with delays and reaction-diffusion terms. By constructing a suitable Lyapunov functional and utilizing some inequality techniques, we obtain a sufficient condition for the uniqueness and global exponential stability of the equilibrium solution for a class of fuzzy cellular neural networks with delays and reaction-diffusion terms. The result imposes constraint conditions on the network parameters independently of the delay parameter. The result is also easy to check and plays an important role in the design and application of globally exponentially stable fuzzy neural circuits.

  4. Displacement prediction of Baijiabao landslide based on empirical mode decomposition and long short-term memory neural network in Three Gorges area, China

    Science.gov (United States)

    Xu, Shiluo; Niu, Ruiqing

    2018-02-01

Every year, landslides pose huge threats to thousands of people in China, especially those in the Three Gorges area. It is thus necessary to establish an early warning system to help prevent property damage and save people's lives. Most of the landslide displacement prediction models that have been proposed are static models. However, landslides are dynamic systems. In this paper, the total accumulative displacement of the Baijiabao landslide is divided into trend and periodic components using empirical mode decomposition. The trend component is predicted using an S-curve estimation, and the total periodic component is predicted using a long short-term memory neural network (LSTM). LSTM is a dynamic model that can remember historical information and apply it to the current output. Six triggering factors are chosen to predict the periodic term using the Pearson cross-correlation coefficient and mutual information. These factors include the cumulative precipitation during the previous month, the cumulative precipitation during a two-month period, the reservoir level during the current month, the change in the reservoir level during the previous month, the cumulative increment of the reservoir level during the current month, and the cumulative displacement during the previous month. When using one-step-ahead prediction, LSTM yields a root mean squared error (RMSE) value of 6.112 mm, while the support vector machine for regression (SVR) and the back-propagation neural network (BP) yield values of 10.686 mm and 8.237 mm, respectively. Meanwhile, the Elman network (Elman) yields an RMSE value of 6.579 mm. In addition, when using multi-step-ahead prediction, LSTM obtains an RMSE value of 8.648 mm, while SVR, BP and the Elman network obtain RMSE values of 13.418 mm, 13.014 mm, and 13.370 mm. The predicted results indicate that, to some extent, the dynamic model (LSTM) achieves results that are more accurate than those of the static models (i.e., SVR and BP). LSTM even ...
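The trend/periodic split is the crux of this pipeline. The paper uses empirical mode decomposition; as a simple stand-in, the sketch below separates a toy displacement series with a centered moving average and treats the residual as the periodic part (this is explicitly not EMD, just a minimal illustration of the additive decomposition idea):

```python
# Additive decomposition of a cumulative displacement series into a trend
# component and a periodic residual. Moving-average stand-in for EMD.

def decompose(series, window=3):
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))   # centered mean
    periodic = [s - t for s, t in zip(series, trend)]  # residual
    return trend, periodic

displacement = [10, 14, 13, 18, 17, 22, 21, 26]   # toy monthly values, mm
trend, periodic = decompose(displacement)
recon = [t + p for t, p in zip(trend, periodic)]
print(all(abs(r - s) < 1e-9 for r, s in zip(recon, displacement)))  # True
```

In the paper, the smooth trend component would go to the S-curve estimation and the periodic component, together with the six triggering factors, to the LSTM.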

  5. Short-term Memory of Deep RNN

    OpenAIRE

    Gallicchio, Claudio

    2018-01-01

The extension of deep learning towards temporal data processing is gaining increasing research interest. In this paper we investigate the properties of state dynamics developed in successive levels of deep recurrent neural networks (RNNs) in terms of short-term memory abilities. Our results reveal interesting insights that shed light on the nature of layering as a factor of RNN design. Noticeably, higher layers in a hierarchically organized RNN architecture turn out to be inherently biased ...

  6. Predicting local field potentials with recurrent neural networks.

    Science.gov (United States)

    Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter

    2016-08-01

    We present a Recurrent Neural Network using LSTM (Long Short Term Memory) that is capable of modeling and predicting Local Field Potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs for 1, 10, and 100 milliseconds forward in time. Our results show that prediction using LSTM outperforms regression when predicting 10 and 100 millisecond forward in time.
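The LSTM's ability to model such signals comes from its gated state update. A single-unit forward step written out in scalar form (the weights are arbitrary placeholders, purely illustrative, not the trained network):

```python
# One forward step of a single-unit LSTM cell, spelled out to show the
# input/forget/output gating that retains short-term history.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM update; w holds input weight, recurrent weight, bias per gate."""
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate value
    c = f * c_prev + i * g                                   # cell state
    h = o * math.tanh(c)                                     # hidden state
    return h, c

w = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.3]:     # a short toy input sequence (e.g. an LFP trace)
    h, c = lstm_step(x, h, c, w)
print(-1.0 < h < 1.0)          # True -- the hidden state stays tanh-bounded
```

Stacking many such units per layer, and chaining steps over a recording window, gives the multi-channel 1/10/100 ms-ahead predictors the abstract describes.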

  7. Integration and segregation of large-scale brain networks during short-term task automatization.

    Science.gov (United States)

    Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes

    2016-11-03

    The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes.

  8. Deep Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction

    OpenAIRE

    Cui, Zhiyong; Ke, Ruimin; Wang, Yinhai

    2018-01-01

Short-term traffic forecasting based on deep learning methods, especially long short-term memory (LSTM) neural networks, has received much attention in recent years. However, the potential of deep learning methods in traffic forecasting has not yet been fully exploited in terms of the depth of the model architecture, the spatial scale of the prediction area, and the predictive power of spatial-temporal data. In this paper, a deep stacked bidirectional and unidirectional LSTM (SBU-LSTM) neural ...

  9. Detecting atrial fibrillation by deep convolutional neural networks.

    Science.gov (United States)

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
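The STFT front end converts each 1-D ECG segment into a 2-D time-frequency matrix for the CNN. A naive DFT-based sketch (a real pipeline would use an FFT library and a tapering window; the toy signal here is our assumption):

```python
# Short-term Fourier transform magnitude: slide a frame over the signal and
# take a DFT per frame, yielding a 2-D (time x frequency) matrix.
import cmath, math

def stft_magnitude(signal, frame_len, hop):
    """Return a list of frames, each a list of |DFT| magnitudes."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        mags = []
        for k in range(frame_len // 2 + 1):   # keep non-negative frequencies
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                    for n in range(frame_len))
            mags.append(abs(s))
        frames.append(mags)
    return frames

# A toy 8 Hz sine sampled at 64 Hz for 1 s stands in for an ECG segment:
fs, f0 = 64, 8
sig = [math.sin(2 * math.pi * f0 * n / fs) for n in range(fs)]
tf = stft_magnitude(sig, frame_len=32, hop=16)
peak_bin = max(range(len(tf[0])), key=lambda k: tf[0][k])
print(peak_bin * fs / 32)  # 8.0 -- energy concentrates at the sine frequency
```

The resulting matrix plays the role of an image, which is what lets a standard 2-D convolutional architecture classify the segment without P or R peak detection.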

  10. Robust short-term memory without synaptic learning.

    Directory of Open Access Journals (Sweden)

    Samuel Johnson

Full Text Available Short-term memory in the brain cannot in general be explained the way long-term memory can--as a gradual modification of synaptic weights--since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings.

  11. Robust short-term memory without synaptic learning.

    Science.gov (United States)

    Johnson, Samuel; Marro, J; Torres, Joaquín J

    2013-01-01

    Short-term memory in the brain cannot in general be explained the way long-term memory can--as a gradual modification of synaptic weights--since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings.

  12. Robust Short-Term Memory without Synaptic Learning

    Science.gov (United States)

    Johnson, Samuel; Marro, J.; Torres, Joaquín J.

    2013-01-01

    Short-term memory in the brain cannot in general be explained the way long-term memory can – as a gradual modification of synaptic weights – since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings. PMID:23349664

  13. Short-term plasticity as a neural mechanism supporting memory and attentional functions.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Andermann, Mark L; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2011-11-08

Based on behavioral studies, several relatively distinct perceptual and cognitive functions have been defined in cognitive psychology such as sensory memory, short-term memory, and selective attention. Here, we review evidence suggesting that some of these functions may be supported by shared underlying neuronal mechanisms. Specifically, we present, based on an integrative review of the literature, a hypothetical model wherein short-term plasticity, in the form of transient center-excitatory and surround-inhibitory modulations, constitutes a generic processing principle that supports sensory memory, short-term memory, involuntary attention, selective attention, and perceptual learning. In our model, the size and complexity of receptive fields/level of abstraction of neural representations, as well as the length of temporal receptive windows, increases as one steps up the cortical hierarchy. Consequently, the type of input (bottom-up vs. top-down) and the level of cortical hierarchy that the inputs target, determine whether short-term plasticity supports purely sensory vs. semantic short-term memory or attentional functions. Furthermore, we suggest that rather than discrete memory systems, there are continuums of memory representations from short-lived sensory ones to more abstract longer-duration representations, such as those tapped by behavioral studies of short-term memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. A phenomenological memristor model for short-term/long-term memory

    International Nuclear Information System (INIS)

    Chen, Ling; Li, Chuandong; Huang, Tingwen; Ahmad, Hafiz Gulfam; Chen, Yiran

    2014-01-01

Memristor is considered to be a natural electrical synapse because of its distinct memory property and nanoscale. In recent years, more and more similar behaviors are observed between memristors and biological synapses, e.g., short-term memory (STM) and long-term memory (LTM). The traditional mathematical models are unable to capture the new emerging behaviors. In this article, an updated phenomenological model based on the model of the Hewlett–Packard (HP) Labs has been proposed to capture such new behaviors. The new dynamical memristor model with an improved ion diffusion term can emulate the synapse behavior with forgetting effect, and exhibit the transformation between the STM and the LTM. Further, this model can be used in building new types of neural networks with forgetting ability like biological systems, and it is verified by our experiment with a Hopfield neural network.
Highlights:
• We take the Fick diffusion and the Soret diffusion into account in the ion drift theory.
• We develop a new model based on the old HP model.
• The new model can describe the forgetting effect and the spike-rate-dependent property of the memristor.
• The new model can solve the boundary effect of all window functions discussed in [13].
• A new Hopfield neural network with the forgetting ability is built with the new memristor model.
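For context, the baseline HP-style state equation that such models extend can be sketched with a polynomial window function. This is a textbook-style sketch with arbitrary constants; the paper's improved ion diffusion (forgetting) term is not reproduced here:

```python
# Euler integration of an HP-style memristor state variable x in [0, 1]:
# dx/dt = k * i(t) * f(x), with the polynomial window f(x) = 4x(1-x).
# Constants k, dt, and the pulse current are illustrative assumptions.

def step(x, current, dt, k=10.0):
    x = x + dt * k * current * 4.0 * x * (1.0 - x)
    return min(max(x, 0.0), 1.0)   # hard clamp keeps the state in [0, 1]

x = 0.1
for _ in range(50):                # a train of positive current pulses
    x = step(x, current=0.05, dt=0.01)
high = x                           # state driven up by the pulses
for _ in range(50):                # reverse-polarity pulses relax it back
    x = step(x, current=-0.05, dt=0.01)
print(0.0 <= x <= 1.0 and high > 0.1)  # True -- bounded, and pulses raised x
```

The window f(x) is what produces the boundary behavior the highlights mention; the paper's diffusion term additionally makes the state decay between pulses, which is what yields the STM-to-LTM transition.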

  15. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  16. Bi-directional LSTM Recurrent Neural Network for Chinese Word Segmentation

    OpenAIRE

    Yao, Yushi; Huang, Zheng

    2016-01-01

Recurrent neural networks (RNNs) have been broadly applied to natural language processing (NLP) problems. This kind of neural network is designed for modeling sequential data and has proven quite effective in sequential tagging tasks. In this paper, we propose to use a bi-directional RNN with long short-term memory (LSTM) units for Chinese word segmentation, which is a crucial preprocessing task for modeling Chinese sentences and articles. Classical methods focus on designing and combining...

  17. Bidirectional Long Short-Term Memory Network for Vehicle Behavior Recognition

    Directory of Open Access Journals (Sweden)

    Jiasong Zhu

    2018-06-01

Full Text Available Vehicle behavior recognition is an attractive research field which is useful for many computer vision and intelligent traffic analysis tasks. This paper presents an all-in-one behavior recognition framework for moving vehicles based on the latest deep learning techniques. Unlike traditional traffic analysis methods which rely on low-resolution videos captured by road cameras, we capture 4K (3840 × 2178) traffic videos at a busy road intersection of a modern megacity by flying an unmanned aerial vehicle (UAV) during the rush hours. We then manually annotate locations and types of road vehicles. The proposed method consists of the following three steps: (1) vehicle detection and type recognition based on deep neural networks; (2) vehicle tracking by data association and vehicle trajectory modeling; (3) vehicle behavior recognition by nearest neighbor search and by bidirectional long short-term memory network, respectively. This paper also presents experimental results of the proposed framework in comparison with state-of-the-art approaches on the 4K testing traffic video, which demonstrated the effectiveness and superiority of the proposed method.
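The nearest-neighbor branch of step (3) can be illustrated with simple trajectory features and 1-NN matching (the feature choice, labels, and coordinates below are our assumptions for illustration, not the paper's):

```python
# Toy 1-nearest-neighbor behavior recognition over (x, y) vehicle trajectories.
import math

def features(traj):
    """Net displacement vector plus path length of an (x, y) trajectory."""
    dx = traj[-1][0] - traj[0][0]
    dy = traj[-1][1] - traj[0][1]
    length = sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))
    return (dx, dy, length)

def classify(traj, labelled):
    """Return the label of the nearest labelled trajectory in feature space."""
    f = features(traj)
    return min(labelled, key=lambda lt: math.dist(f, features(lt[1])))[0]

straight  = [(0, 0), (0, 5), (0, 10)]
left_turn = [(0, 0), (0, 5), (-5, 5)]
examples = [("straight", straight), ("left_turn", left_turn)]
query = [(1, 0), (1, 6), (1, 11)]       # a vehicle driving straight ahead
print(classify(query, examples))        # straight
```

The bidirectional LSTM alternative in the paper instead consumes the raw trajectory sequence in both directions, avoiding the hand-picked features used here.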

  18. Spatial patterns of persistent neural activity vary with the behavioral context of short-term memory.

    Science.gov (United States)

    Daie, Kayvon; Goldman, Mark S; Aksay, Emre R F

    2015-02-18

    A short-term memory can be evoked by different inputs and control separate targets in different behavioral contexts. To address the circuit mechanisms underlying context-dependent memory function, we determined through optical imaging how memory is encoded at the whole-network level in two behavioral settings. Persistent neural activity maintaining a memory of desired eye position was imaged throughout the oculomotor integrator after saccadic or optokinetic stimulation. While eye position was encoded by the amplitude of network activity, the spatial patterns of firing were context dependent: cells located caudally generally were most persistent following saccadic input, whereas cells located rostrally were most persistent following optokinetic input. To explain these data, we computationally identified four independent modes of network activity and found these were differentially accessed by saccadic and optokinetic inputs. These results show how a circuit can simultaneously encode memory value and behavioral context, respectively, in its amplitude and spatial pattern of persistent firing. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Word embeddings and recurrent neural networks based on Long-Short Term Memory nodes in supervised biomedical word sense disambiguation.

    Science.gov (United States)

    Jimeno Yepes, Antonio

    2017-09-01

    Word sense disambiguation helps identify the proper sense of ambiguous words in text. With large terminologies such as the UMLS Metathesaurus, ambiguities appear and highly effective disambiguation methods are required. Supervised learning methods are used as one of the approaches to perform disambiguation. Features extracted from the context of an ambiguous word are used to identify its proper sense. The types of features used have an impact on machine learning methods and thus affect disambiguation performance. In this work, we have evaluated several types of features derived from the context of the ambiguous word, and we have also explored more global features derived from MEDLINE using word embeddings. Results show that word embeddings improve the performance of more traditional features and also allow the use of recurrent neural network classifiers based on Long-Short Term Memory (LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets a new state-of-the-art performance with a macro accuracy of 95.97 in the MSH WSD data set. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Context-dependent retrieval of information by neural-network dynamics with continuous attractors.

    Science.gov (United States)

    Tsuboshita, Yukihiro; Okamoto, Hiroshi

    2007-08-01

    Memory retrieval in neural networks has traditionally been described by dynamic systems with discrete attractors. However, recent neurophysiological findings of graded persistent activity suggest that memory retrieval in the brain is more likely to be described by dynamic systems with continuous attractors. To explore what sort of information processing is achieved by continuous-attractor dynamics, keyword extraction from documents by a network of bistable neurons, which gives robust continuous attractors, is examined. Given an associative network of terms, a continuous attractor led by propagation of neuronal activation in this network appears to represent keywords that express underlying meaning of a document encoded in the initial state of the network-activation pattern. A dominant hypothesis in cognitive psychology is that long-term memory is archived in the network structure, which resembles associative networks of terms. Our results suggest that keyword extraction by the neural-network dynamics with continuous attractors might symbolically represent context-dependent retrieval of short-term memory from long-term memory in the brain.

  1. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals are fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Many techniques are used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle produced a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit with Java Analysis Studio (JAS3), an application that allows data from any experiment to be analyzed. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]; the word artificial refers to the fact that the learning is carried out by computer programs performing calculations. In short, a neural network learns by representative examples. Perhaps the easiest way to describe how neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing
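
    The phrase "learns by representative examples" can be made concrete with the simplest possible case: a single perceptron fitted to a handful of labelled points. The sketch below is a generic illustration of example-driven learning, not the JAS3 tool kit itself:

```python
# Minimal perceptron: learns a linear decision rule from labelled examples.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                          # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# AND-like separation of 2-D points: only (1, 1) is a positive example.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
```

    After training, the learned weights classify all four examples correctly, which is all that "learning" means at this scale.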

  2. Bach in 2014: Music Composition with Recurrent Neural Network

    OpenAIRE

    Liu, I-Ting; Ramakrishnan, Bhiksha

    2014-01-01

    We propose a framework for computer music composition that uses resilient propagation (RProp) and a long short-term memory (LSTM) recurrent neural network. In this paper, we show that the LSTM network learns the structure and characteristics of music pieces properly by demonstrating its ability to recreate music. We also show that predicting existing music using RProp outperforms backpropagation through time (BPTT).

  3. Artificial neural network applications in ionospheric studies

    Directory of Open Access Journals (Sweden)

    L. R. Cander

    1998-06-01

    Full Text Available The ionosphere of Earth exhibits considerable spatial changes and large temporal variability on various timescales related to the mechanisms of creation, decay and transport of space ionospheric plasma. Many techniques for modelling electron density profiles through the entire ionosphere have been developed in order to solve the "age-old problem" of ionospheric physics, which has not yet been fully solved. A new way to address this problem is by applying artificial intelligence methodologies to the current large amounts of solar-terrestrial and ionospheric data. It is the aim of this paper to show, by the most recent examples, that modern development of numerical models for ionospheric monthly median long-term prediction and daily hourly short-term forecasting may proceed successfully by applying artificial neural networks. The performance of these techniques is illustrated with different artificial neural networks developed to model and predict the temporal and spatial variations of the ionospheric critical frequency foF2 and Total Electron Content (TEC). Comparisons between results obtained by the proposed approaches and measured foF2 and TEC data provide prospects for future applications of artificial neural networks in ionospheric studies.

  4. Short-term load and wind power forecasting using neural network-based prediction intervals.

    Science.gov (United States)

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2014-02-01

    Electrical power systems are evolving from today's centralized bulk systems to more decentralized systems. Penetrations of renewable energies, such as wind and solar power, significantly increase the level of uncertainty in power systems. Accurate load forecasting becomes more complex, yet more important for management of power systems. Traditional methods for generating point forecasts of load demands cannot properly handle uncertainties in system operations. To quantify potential uncertainties associated with forecasts, this paper implements a neural network (NN)-based method for the construction of prediction intervals (PIs). A newly introduced method, called lower upper bound estimation (LUBE), is applied and extended to develop PIs using NN models. A new problem formulation is proposed, which translates the primary multiobjective problem into a constrained single-objective problem. Compared with the cost function, this new formulation is closer to the primary problem and has fewer parameters. Particle swarm optimization (PSO) integrated with the mutation operator is used to solve the problem. Electrical demands from Singapore and New South Wales (Australia), as well as wind power generation from Capital Wind Farm, are used to validate the PSO-based LUBE method. Comparative results show that the proposed method can construct higher quality PIs for load and wind power generation forecasts in a short time.
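
    The quality of constructed PIs is conventionally judged by a coverage/width trade-off, which is exactly what the LUBE objective balances. A minimal sketch of two standard metrics from the PI literature, coverage probability (PICP) and normalized average width (PINAW), on illustrative data (not the paper's Singapore/NSW datasets):

```python
def picp(y, lower, upper):
    """Fraction of targets falling inside their prediction interval."""
    inside = sum(1 for yi, lo, up in zip(y, lower, upper) if lo <= yi <= up)
    return inside / len(y)

def pinaw(y, lower, upper):
    """Average interval width, normalized by the target range."""
    r = max(y) - min(y)
    return sum(up - lo for lo, up in zip(lower, upper)) / (len(y) * r)

y     = [10.0, 12.0, 11.0, 15.0]   # illustrative targets
lower = [ 9.0, 11.5, 10.0, 13.0]
upper = [11.0, 13.0, 12.0, 14.0]   # the last interval misses its target
```

    A good PI construction method pushes PICP toward the nominal confidence level while keeping PINAW small.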

  5. Short-Term Memory in Orthogonal Neural Networks

    Science.gov (United States)

    White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim

    2004-04-01

    We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
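
    The distributed shift register connectivity can be sketched directly: a linear network whose recurrent matrix shifts activity one unit per time step, so the instantaneous state holds the last N inputs exactly. This is an idealized, noise-free illustration of why such a network has a temporal memory capacity on the order of the system size:

```python
import numpy as np

N = 5                       # network size = memory capacity
A = np.eye(N, k=-1)         # shift matrix: unit i feeds unit i+1
x = np.zeros(N)             # instantaneous network state

inputs = [0.3, -1.2, 0.7, 2.0, -0.5]
for s in inputs:
    x = A @ x               # recurrent shift
    x[0] += s               # the new input enters slot 0
# Slot k of the instantaneous state now holds the input from k steps ago.
```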

  6. Short-term memory in orthogonal neural networks

    International Nuclear Information System (INIS)

    White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim

    2004-01-01

    We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.

  7. Neural processing of short-term recurrence in songbird vocal communication.

    Directory of Open Access Journals (Sweden)

    Gabriël J L Beckers

    Full Text Available BACKGROUND: Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication. METHODOLOGY/PRINCIPAL FINDINGS: We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented to anesthetized birds sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges leads to widespread and significant modulation in the strength of neural responses. Such modulation is highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area. CONCLUSIONS/SIGNIFICANCE: Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.

  8. A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation

    Science.gov (United States)

    Fiebig, Florian

    2017-01-01

    A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain memory items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and its associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. SIGNIFICANCE STATEMENT Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and

  9. Modelling of multiple short-length-scale stall cells in an axial compressor using evolved GMDH neural networks

    International Nuclear Information System (INIS)

    Amanifard, N.; Nariman-Zadeh, N.; Farahani, M.H.; Khalkhali, A.

    2008-01-01

    Over the past 15 years there have been several research efforts to capture the stall inception nature in axial flow compressors. However, previous analytical models could not explain the formation of short-length-scale stall cells. This paper provides a new model based on an evolved GMDH neural network for the transient evolution of multiple short-length-scale stall cells in an axial compressor. Genetic Algorithms (GAs) are also employed for optimal design of the connectivity configuration of such GMDH-type neural networks. In this way, the low-pass filter (LPF) pressure trace near the rotor leading edge is modelled with respect to the variation of pressure coefficient, flow rate coefficient, and number of rotor rotations, which are defined as inputs.

  10. Wind Power Plant Prediction by Using Neural Networks: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Z.; Gao, W.; Wan, Y. H.; Muljadi, E.

    2012-08-01

    This paper introduces a method of short-term wind power prediction for a wind power plant by training neural networks based on historical data of wind speed and wind direction. The model proposed is shown to achieve a high accuracy with respect to the measured data.

  11. Neural evidence for a distinction between short-term memory and the focus of attention

    OpenAIRE

    Lewis-Peacock, Jarrod A; Drysdale, Andrew T; Oberauer, Klaus; Postle, Bradley R

    2012-01-01

    It is widely assumed that the short-term retention of information is accomplished via maintenance of an active neural trace. However, we demonstrate that memory can be preserved across a brief delay despite the apparent loss of sustained representations. Delay-period activity may in fact reflect the focus of attention, rather than short-term memory. We unconfounded attention and memory by causing external and internal shifts of attention away from items that were being actively retained. Mult...

  12. Unsupervised learning in neural networks with short range synapses

    Science.gov (United States)

    Brunnet, L. G.; Agnes, E. J.; Mizusaki, B. E. P.; Erichsen, R., Jr.

    2013-01-01

    Different areas of the brain are involved in specific aspects of the information being processed, both in learning and in memory formation. For example, the hippocampus is important in the consolidation of information from short-term memory to long-term memory, while emotional memory seems to be handled by the amygdala. On the microscopic scale the underlying structures in these areas differ in the kind of neurons involved, in their connectivity, or in their clustering degree but, at this level, learning and memory are attributed to neuronal synapses mediated by long-term potentiation and long-term depression. In this work we explore the properties of a short-range synaptic connection network, a nearest neighbor lattice composed mostly of excitatory neurons and a fraction of inhibitory ones. The mechanism of synaptic modification responsible for the emergence of memory is Spike-Timing-Dependent Plasticity (STDP), a Hebbian-like rule, where potentiation/depression is acquired when causal/non-causal spikes occur at a synapse involving two neurons. The system is intended to store and recognize memories associated with spatial external inputs presented as simple geometrical forms. The synaptic modifications are continuously applied to excitatory connections, including a homeostasis rule and STDP. In this work we explore the different scenarios under which a network with short-range connections can accomplish the task of storing and recognizing simple connected patterns.
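
    The pair-based STDP rule described above can be written down in a few lines; the amplitudes and time constant below are illustrative placeholders, not values from the paper:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair under pair-based STDP.
    dt = t_post - t_pre (ms): causal pairs (dt > 0) potentiate,
    non-causal pairs (dt < 0) depress; both decay with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

    Causal pairs (pre before post) strengthen the synapse, non-causal pairs weaken it, and the magnitude of either effect shrinks as the spike-time difference grows.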

  13. Convolutional neural network regression for short-axis left ventricle segmentation in cardiac cine MR sequences.

    Science.gov (United States)

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; McLaughlin, Robert A

    2017-07-01

    Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly available Left Ventricle Segmentation Challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a 0.77 Jaccard index, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measures of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of 0.0124, which would have ranked tenth in the original challenge. With this we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.
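
    The radial parameterization at the heart of the method can be illustrated without any network: a contour is encoded as distances from the LV centerpoint at fixed polar angles, which restricts outputs to star-shaped contours, the kind of domain-specific physical constraint the authors exploit. A simplified sketch (one contour point kept per angular bin; not the authors' code):

```python
import math

def contour_to_radii(contour, center, n_angles=8):
    """Encode a closed contour as radial distances in n_angles polar bins
    (keeps the last contour point seen per bin; assumes a star-shaped contour)."""
    cx, cy = center
    radii = [0.0] * n_angles
    for x, y in contour:
        theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = int(theta / (2 * math.pi / n_angles)) % n_angles
        radii[k] = math.hypot(x - cx, y - cy)
    return radii

def radii_to_contour(radii, center):
    """Decode radii back to contour points at evenly spaced angles."""
    cx, cy = center
    n = len(radii)
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n))
            for k, r in enumerate(radii)]

# A circle of radius 2 around (1, 1) encodes to constant radii.
circle = [(1 + 2 * math.cos(2 * math.pi * i / 64),
           1 + 2 * math.sin(2 * math.pi * i / 64)) for i in range(64)]
radii = contour_to_radii(circle, (1, 1))
```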

  14. An artificial neural network approach to reconstruct the source term of a nuclear accident

    International Nuclear Information System (INIS)

    Giles, J.; Palma, C. R.; Weller, P.

    1997-01-01

    This work makes use of one of the main features of artificial neural networks, which is their ability to 'learn' from sets of known input and output data. Indeed, a trained artificial neural network can be used to make predictions on the input data when the output is known, and this feedback process enables one to reconstruct the source term from field observations. With this aim, an artificial neural network has been trained using the projections of a segmented plume atmospheric dispersion model at fixed points, simulating a set of gamma detectors located outside the perimeter of a nuclear facility. The resulting set of artificial neural networks was used to determine the release fraction and rate for each of the noble gases, iodines and particulate fission products that could originate from a nuclear accident. Model projections were made using a large data set consisting of effective release height, release fraction of noble gases, iodines and particulate fission products, atmospheric stability, wind speed and wind direction. The model computed nuclide-specific gamma dose rates. The locations of the detectors were chosen taking into account both building shine and wake effects, and varied in distance between 800 and 1200 m from the reactor. The inputs to the artificial neural networks consisted of the measurements from the detector array, atmospheric stability, wind speed and wind direction; the outputs comprised a set of release fractions and heights. Once trained, the artificial neural networks were used to reconstruct the source term from the detector responses for data sets not used in training. The preliminary results are encouraging and show that the noble gases and particulate fission product release fractions are well determined.

  15. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning for models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  16. Framewise phoneme classification with bidirectional LSTM and other neural network architectures.

    Science.gov (United States)

    Graves, Alex; Schmidhuber, Jürgen

    2005-01-01

    In this paper, we present bidirectional Long Short-Term Memory (LSTM) networks, and a modified, full-gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and that LSTM is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.
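
    The LSTM memory cell underlying both the unidirectional and bidirectional architectures can be sketched with the standard gate equations in a few lines of NumPy (a generic forward pass, not the paper's full-gradient training algorithm):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step. W maps [x; h] to the stacked gate pre-activations
    (input, forget, output, candidate); b is the stacked bias."""
    n = h.size
    z = W @ np.concatenate([x, h]) + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g          # gated cell-state update (the "memory")
    h_new = o * np.tanh(c_new)     # gated output
    return h_new, c_new

rng = np.random.default_rng(0)
nx, nh = 3, 4
W = rng.normal(scale=0.1, size=(4 * nh, nx + nh))
b = np.zeros(4 * nh)
h, c = np.zeros(nh), np.zeros(nh)
for t in range(5):                  # run over a short input sequence
    h, c = lstm_step(rng.normal(size=nx), h, c, W, b)
```

    The gated cell state c is what lets the unit retain information across many frames; a bidirectional network simply runs one such cell forward and another backward over the sequence and combines their outputs.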

  17. A Study of Recurrent and Convolutional Neural Networks in the Native Language Identification Task

    KAUST Repository

    Werfelmann, Robert

    2018-01-01

    around the world. The neural network models consisted of Long Short-Term Memory and Convolutional networks using the sentences of each document as the input. Additional statistical features were generated from the text to complement the predictions

  18. An accident diagnosis algorithm using long short-term memory

    Directory of Open Access Journals (Sweden)

    Jaemin Yang

    2018-05-01

    Full Text Available Accident diagnosis is one of the complex tasks for nuclear power plant (NPP) operators. In abnormal or emergency situations, the diagnostic activity of the NPP states is burdensome though necessary. Numerous computer-based methods and operator support systems have been suggested to address this problem. Among them, the recurrent neural network (RNN) has performed well at analyzing time series data. This study proposes an algorithm for accident diagnosis using long short-term memory (LSTM), a kind of RNN that addresses the standard RNN's limitations in reflecting long time dependencies. The algorithm consists of preprocessing, the LSTM network, and postprocessing. In the LSTM-based algorithm, preprocessed input variables are calculated to output the accident diagnosis results. The outputs are also postprocessed using softmax to determine the ranking of accident diagnosis results with probabilities. This algorithm was trained using a compact nuclear simulator for several accidents: a loss of coolant accident, a steam generator tube rupture, and a main steam line break. The trained algorithm was also tested to demonstrate the feasibility of diagnosing NPP accidents. Keywords: Accident Diagnosis, Long Short-term Memory, Recurrent Neural Network, Softmax
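
    The softmax postprocessing step, which turns raw network outputs into a probability-ranked list of accident candidates, can be sketched as follows (the accident labels and raw scores are illustrative):

```python
import math

def softmax_ranking(scores):
    """Convert raw output scores to probabilities and rank candidates."""
    m = max(scores.values())                       # subtract max for stability
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    z = sum(exps.values())
    probs = {k: e / z for k, e in exps.items()}
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

raw = {"LOCA": 2.0, "SGTR": 1.0, "MSLB": 0.1}      # hypothetical raw outputs
ranking = softmax_ranking(raw)                     # most probable accident first
```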

  19. False memory for face in short-term memory and neural activity in human amygdala.

    Science.gov (United States)

    Iidaka, Tetsuya; Harada, Tokiko; Sadato, Norihiro

    2014-12-03

    Human memory is often inaccurate. Similar to words and figures, new faces are often recognized as seen or studied items in long- and short-term memory tests; however, the neural mechanisms underlying this false memory remain elusive. In a previous fMRI study using morphed faces and a standard false memory paradigm, we found that there was a U-shaped response curve of the amygdala to old, new, and lure items. This indicates that the amygdala is more active in response to items that are salient (hit and correct rejection) compared to items that are less salient (false alarm), in terms of memory retrieval. In the present fMRI study, we determined whether false memory for faces occurs within the short-term memory range (a few seconds), and assessed which neural correlates are involved in veridical and illusory memories. Nineteen healthy participants were scanned using 3T MRI during a short-term memory task using morphed faces. The behavioral results indicated that the occurrence of false memories was within the short-term range. We found that the amygdala displayed a U-shaped response curve to memory items, similar to those observed in our previous study. These results suggest that the amygdala plays a common role in both long- and short-term false memory for faces. We draw the following conclusions: First, the amygdala is involved in detecting the saliency of items, in addition to fear, and supports goal-oriented behavior by modulating memory. Second, amygdala activity and response time might be related to a subject's response criterion for similar faces. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Abnormal neural activities of directional brain networks in patients with long-term bilateral hearing loss.

    Science.gov (United States)

    Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu

    2017-10-13

    The objective of this study is to provide implications for the rehabilitation of hearing impairment by investigating changes in the neural activities of directional brain networks in patients with long-term bilateral hearing loss. First, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss and 10 subjects with normal hearing); these tests revealed significant differences between the deaf group and the controls. We then constructed an individual-specific virtual brain based on functional magnetic resonance data of the participants by utilizing effective connectivity and multivariate regression methods. We applied a stimulating signal to the primary auditory cortices of the virtual brain and observed the resulting brain region activations. We found that patients with long-term bilateral hearing loss presented weaker brain region activations in the auditory and language networks, but enhanced neural activities in the default mode network, as compared with normally hearing subjects. In particular, the right cerebral hemisphere presented more changes than the left. Additionally, weaker neural activities in the primary auditory cortices were strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interactional circuits among activated brain regions, and these interregional causal interactions implied that abnormal neural activities of the directional brain networks in the deaf patients impacted cognitive function.

  1. Short term depression unmasks the ghost frequency.

    Directory of Open Access Journals (Sweden)

    Tjeerd V Olde Scheper

    Full Text Available Short Term Plasticity (STP) has been shown to exist extensively in synapses throughout the brain. Its function is more or less clear in the sense that it alters the probability of synaptic transmission at short time scales. However, it is still unclear what effect STP has on the dynamics of neural networks. We show, using a novel dynamic STP model, that Short Term Depression (STD) can affect the phase of frequency-coded input such that small networks can perform temporal signal summation and determination with high accuracy. We show that this property of STD can readily solve the problem of the ghost frequency, the perceived pitch of a harmonic complex in the absence of the base frequency. Additionally, we demonstrate that this property can explain dynamics in larger networks. By means of two models, one of chopper neurons in the Ventral Cochlear Nucleus and one of a cortical microcircuit with inhibitory Martinotti neurons, it is shown that the dynamics in these microcircuits can reliably be reproduced using STP. Our model of STP gives important insights into the potential roles of STP in self-regulation of cortical activity and long-range afferent input in neuronal microcircuits.
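
    Although the authors use their own novel dynamic STP model, the basic bookkeeping of short-term depression can be illustrated with a Tsodyks-Markram-style resource variable: each spike consumes part of a synaptic resource that recovers between spikes, so closely spaced inputs are progressively attenuated. The parameter values below are illustrative placeholders:

```python
import math

def std_response(spike_times, u=0.5, tau_rec=300.0):
    """Synaptic efficacy per spike under short-term depression:
    each spike consumes a fraction u of the resource x, which
    recovers toward 1 with time constant tau_rec (ms)."""
    x, t_prev, out = 1.0, None, []
    for t in spike_times:
        if t_prev is not None:                 # recovery since the last spike
            x = 1.0 - (1.0 - x) * math.exp(-(t - t_prev) / tau_rec)
        out.append(u * x)                      # transmitted amplitude
        x -= u * x                             # resource consumed by the spike
        t_prev = t
    return out

amps = std_response([0, 50, 100, 150])         # regular 20 Hz spike train
```

    Successive amplitudes in a regular train shrink toward a rate-dependent steady state, which is the phase- and frequency-sensitivity the abstract builds on.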

  2. Short-Term Load Forecast in Electric Energy System in Bulgaria

    Directory of Open Access Journals (Sweden)

    Irina Asenova

    2010-01-01

    Full Text Available As the accuracy of the electricity load forecast is crucial in providing better cost-effective risk management plans, this paper proposes a Short Term Electricity Load Forecast (STLF) model with high forecasting accuracy. Two kinds of neural networks, a Multilayer Perceptron network model and a Radial Basis Function network model, are presented and compared using the mean absolute percentage error. The data used in the models are historical electricity load data. Although the model performs very well on load data alone, weather parameters, especially temperature, play an important part in energy prediction and are therefore taken into account in this paper. A comparative evaluation between a traditional statistical method and artificial neural networks is presented.
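
    The comparison metric named above, mean absolute percentage error (MAPE), is straightforward to compute; a minimal sketch with illustrative load values:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent (actuals must be nonzero)."""
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))

load     = [100.0, 200.0, 400.0]   # illustrative hourly loads (MW)
forecast = [110.0, 190.0, 400.0]   # errors of 10%, 5%, and 0%
```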

  3. A Long Short-Term Memory deep learning network for the prediction of epileptic seizures using EEG signals.

    Science.gov (United States)

    Tsiouris, Κostas Μ; Pezoulas, Vasileios C; Zervakis, Michalis; Konitsiotis, Spiros; Koutsouris, Dimitrios D; Fotiadis, Dimitrios I

    2018-05-17

    The electroencephalogram (EEG) is the most prominent means to study epilepsy and capture changes in electrical brain activity that could declare an imminent seizure. In this work, Long Short-Term Memory (LSTM) networks are introduced in epileptic seizure prediction using EEG signals, expanding the use of deep learning algorithms with convolutional neural networks (CNN). A pre-analysis is initially performed to find the optimal architecture of the LSTM network by testing several modules and layers of memory units. Based on these results, a two-layer LSTM network is selected to evaluate seizure prediction performance using four different lengths of preictal windows, ranging from 15 min to 2 h. The LSTM model exploits a wide range of features extracted prior to classification, including time and frequency domain features, cross-correlation between EEG channels, and graph-theoretic features. The evaluation, performed using long-term EEG recordings from the open CHB-MIT Scalp EEG database, suggests that the proposed methodology is able to predict all 185 seizures, providing high rates of seizure prediction sensitivity and low false prediction rates (FPR) of 0.11-0.02 false alarms per hour, depending on the duration of the preictal window. The proposed LSTM-based methodology delivers a significant increase in seizure prediction performance compared to both traditional machine learning techniques and convolutional neural networks that have been previously evaluated in the literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
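One of the cross-channel features this record mentions can be illustrated with a normalized cross-correlation peak between two channels (synthetic signals; the paper's actual feature set is far richer):

```python
import numpy as np

def max_cross_correlation(x, y):
    """Peak of the normalized cross-correlation between two channels,
    a simple stand-in for the cross-channel features in the abstract."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    return float(np.max(np.correlate(x, y, mode="full")))

t = np.linspace(0, 1, 256, endpoint=False)
chan_a = np.sin(2 * np.pi * 10 * t)                  # 10 Hz rhythm
chan_b = np.roll(chan_a, 5)                          # same rhythm, delayed
chan_c = np.random.default_rng(0).normal(size=256)   # unrelated noise

coupled = max_cross_correlation(chan_a, chan_b)      # near 1: strong coupling
uncoupled = max_cross_correlation(chan_a, chan_c)    # small: no coupling
```

High peaks for physiologically coupled channels versus low peaks for unrelated ones give the classifier a spatial-connectivity signal.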

  4. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
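The consensus step can be sketched as a weighted combination of stage-network outputs (the weights below are illustrative; the paper obtains them by optimization):

```python
import numpy as np

def consensual_decision(stage_outputs, weights):
    """Combine per-stage class-probability vectors with a weighted sum,
    mirroring the consensus step described in the abstract."""
    stage_outputs = np.asarray(stage_outputs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()      # normalize to a convex combination
    combined = weights @ stage_outputs     # (n_stages,) @ (n_stages, n_classes)
    return int(np.argmax(combined)), combined

# Three stage networks, three classes; stage 2 is weighted as most reliable.
stages = [[0.5, 0.3, 0.2],
          [0.2, 0.7, 0.1],
          [0.4, 0.4, 0.2]]
label, probs = consensual_decision(stages, weights=[1.0, 3.0, 1.0])
```

Because the weights form a convex combination, the consensual output remains a valid probability vector.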

  5. The neural response in short-term visual recognition memory for perceptual conjunctions.

    Science.gov (United States)

    Elliott, R; Dolan, R J

    1998-01-01

    Short-term visual memory has been widely studied in humans and animals using delayed matching paradigms. The present study used positron emission tomography (PET) to determine the neural substrates of delayed matching to sample for complex abstract patterns over a 5-s delay. More specifically, the study assessed any differential neural response associated with remembering individual perceptual properties (color only and shape only) compared to conjunctions between these properties. Significant activations associated with short-term visual memory (all memory conditions compared to the perceptuomotor control) were observed in extrastriate cortex, medial and lateral parietal cortex, anterior cingulate, inferior frontal gyrus, and the thalamus. Significant deactivations were observed throughout the temporal cortex. Although the requirement to remember color compared to shape was associated with subtly different patterns of blood flow, the requirement to remember perceptual conjunctions between these features was not associated with additional specific activations. These data suggest that visual memory over a delay of the order of 5 s is mainly dependent on posterior perceptual regions of the cortex, with the exact regions depending on the perceptual aspect of the stimuli to be remembered.

  6. Neural correlates of enhanced visual short-term memory for angry faces: an FMRI study.

    Directory of Open Access Journals (Sweden)

    Margaret C Jackson

    Fluid and effective social communication requires that both face identity and emotional expression information are encoded and maintained in visual short-term memory (VSTM) to enable a coherent, ongoing picture of the world and its players. This appears to be of particular evolutionary importance when confronted with potentially threatening displays of emotion: previous research has shown better VSTM for angry versus happy or neutral face identities. Using functional magnetic resonance imaging, here we investigated the neural correlates of this angry face benefit in VSTM. Participants were shown between one and four to-be-remembered angry, happy, or neutral faces, and after a short retention delay they stated whether a single probe face had been present or not in the previous display. All faces in any one display expressed the same emotion, and the task required memory for face identity. We find enhanced VSTM for angry face identities and describe the right hemisphere brain network underpinning this effect, which involves the globus pallidus, superior temporal sulcus, and frontal lobe. Increased activity in the globus pallidus was significantly correlated with the angry benefit in VSTM. Areas modulated by emotion were distinct from those modulated by memory load. Our results provide evidence for a key role of the basal ganglia as an interface between emotion and cognition, supported by a frontal, temporal, and occipital network.

  7. Improved merit order and augmented Lagrange Hopfield network for short term hydrothermal scheduling

    International Nuclear Information System (INIS)

    Vo Ngoc Dieu; Ongsakul, Weerakorn

    2009-01-01

    This paper proposes an improved merit order (IMO) combined with an augmented Lagrangian Hopfield network (ALHN) for solving short term hydrothermal scheduling (HTS) with pumped-storage hydro plants. The proposed IMO-ALHN consists of a merit order based on the average production cost of generating units enhanced by heuristic search algorithm for finding unit scheduling and a continuous Hopfield neural network with its energy function based on augmented Lagrangian relaxation for solving constrained economic dispatch (CED). The proposed method is applied to solve the HTS problem in five stages including thermal, hydro and pumped-storage unit commitment by IMO and heuristic search, constraint violations repairing by heuristic search and CED by ALHN. The proposed method is tested on the 24-bus IEEE RTS with 32 units including 4 fuel-constrained, 4-hydro, and 2 pumped-storage units scheduled over a 24-h period. Test results indicate that the proposed IMO-ALHN is efficient for hydrothermal systems with various constraints.
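The merit-order stage can be illustrated with a toy commitment loop that sorts units by average production cost and commits them until demand is covered (the unit data are invented; the paper's full method adds heuristic repair of constraint violations and ALHN-based economic dispatch):

```python
def merit_order_commitment(units, demand):
    """Commit units in ascending order of average production cost until
    committed capacity covers demand. Each unit is (name, avg_cost, p_max)."""
    committed, capacity = [], 0.0
    for name, avg_cost, p_max in sorted(units, key=lambda u: u[1]):
        if capacity >= demand:
            break
        committed.append(name)
        capacity += p_max
    if capacity < demand:
        raise ValueError("insufficient capacity to meet demand")
    return committed

# Illustrative units: (name, average cost $/MWh, max output MW).
units = [("coal1", 18.0, 300), ("gas1", 32.0, 150),
         ("hydro1", 5.0, 100), ("oil1", 45.0, 200)]
schedule = merit_order_commitment(units, demand=420)
```

The cheap hydro unit is committed first, and the expensive oil unit is never needed, which is the basic economy the merit order captures.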

  8. Stability analysis of impulsive fuzzy cellular neural networks with distributed delays and reaction-diffusion terms

    International Nuclear Information System (INIS)

    Li Zuoan; Li Kelin

    2009-01-01

    In this paper, we investigate a class of impulsive fuzzy cellular neural networks with distributed delays and reaction-diffusion terms. By employing the delay differential inequality with impulsive initial conditions and M-matrix theory, we find some sufficient conditions ensuring the existence, uniqueness and global exponential stability of equilibrium point for impulsive fuzzy cellular neural networks with distributed delays and reaction-diffusion terms. In particular, the estimate of the exponential converging index is also provided, which depends on the system parameters. An example is given to show the effectiveness of the results obtained here.

  9. A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation.

    Science.gov (United States)

    Fiebig, Florian; Lansner, Anders

    2017-01-04

    A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and neurophysiology of the underlying

  10. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    Science.gov (United States)

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
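The conversion of traffic observations into a time-space matrix might look like the following sketch (the layout and the missing-value policy are our assumptions, not details taken from the paper):

```python
import numpy as np

def to_time_space_image(records, n_times, n_segments):
    """Arrange (time_idx, segment_idx, speed) records into the 2-D
    time-space matrix the abstract describes; missing cells fall back
    to the per-segment (column) mean."""
    img = np.full((n_times, n_segments), np.nan)
    for t, s, v in records:
        img[t, s] = v
    col_mean = np.nanmean(img, axis=0)       # mean speed per road segment
    idx = np.where(np.isnan(img))            # remaining gaps
    img[idx] = np.take(col_mean, idx[1])     # impute by segment mean
    return img

# Sparse speed observations: (time step, road segment, speed in km/h).
records = [(0, 0, 60.0), (0, 1, 45.0), (1, 0, 50.0), (2, 1, 35.0)]
image = to_time_space_image(records, n_times=3, n_segments=2)
```

Rows index time and columns index road segments, so a CNN kernel sliding over the matrix sees local spatiotemporal neighborhoods of traffic flow.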

  11. Temporal neural networks and transient analysis of complex engineering systems

    Science.gov (United States)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
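The gamma memory inside the LOGF neuron can be sketched as a cascade of leaky integrators (the standard de Vries-Principe discrete gamma filter; the order and the parameter mu below are illustrative, not values from the dissertation):

```python
def gamma_memory(signal, order=3, mu=0.5):
    """Digital gamma filter bank: stage k is a leaky integrator fed by
    stage k-1, x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1),
    giving progressively longer, smoother short-term memory traces.
    Returns the memory taps x_1..x_order at every time step."""
    state = [0.0] * (order + 1)          # state[0] holds the delayed input
    history = []
    for x in signal:
        new_state = [x]                  # x_0(t) is the raw input
        for k in range(1, order + 1):
            new_state.append((1 - mu) * state[k] + mu * state[k - 1])
        state = new_state
        history.append(state[1:])
    return history

# An impulse spreads and decays through the memory stages over time.
out = gamma_memory([1.0, 0.0, 0.0, 0.0], order=2, mu=0.5)
```

Deeper stages peak later and lower, which is exactly the multiple-time-scale short-term memory the LOGF neuron embeds in a feedforward network.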

  12. Neural mechanisms of information storage in visual short-term memory.

    Science.gov (United States)

    Serences, John T

    2016-11-01

    The capacity to briefly memorize fleeting sensory information supports visual search and behavioral interactions with relevant stimuli in the environment. Traditionally, studies investigating the neural basis of visual short term memory (STM) have focused on the role of prefrontal cortex (PFC) in exerting executive control over what information is stored and how it is adaptively used to guide behavior. However, the neural substrates that support the actual storage of content-specific information in STM are more controversial, with some attributing this function to PFC and others to the specialized areas of early visual cortex that initially encode incoming sensory stimuli. In contrast to these traditional views, I will review evidence suggesting that content-specific information can be flexibly maintained in areas across the cortical hierarchy ranging from early visual cortex to PFC. While the factors that determine exactly where content-specific information is represented are not yet entirely clear, recognizing the importance of task-demands and better understanding the operation of non-spiking neural codes may help to constrain new theories about how memories are maintained at different resolutions, across different timescales, and in the presence of distracting information. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Use long short-term memory to enhance Internet of Things for combined sewer overflow monitoring

    Science.gov (United States)

    Zhang, Duo; Lindholm, Geir; Ratnaweera, Harsha

    2018-01-01

    Combined sewer overflow (CSO) causes severe water pollution, urban flooding and reduced treatment plant efficiency. Understanding the behavior of CSO structures is vital for urban flooding prevention and overflow control. Neural networks have been extensively applied in water-resource-related fields. In this study, we collect data from an Internet of Things system monitoring a CSO structure and build different neural network models for simulating and predicting the water level of the CSO structure. Through a comparison of four different neural networks, namely multilayer perceptron (MLP), wavelet neural network (WNN), long short-term memory (LSTM) and gated recurrent unit (GRU), the LSTM and GRU present superior capabilities for multi-step-ahead time series prediction. Furthermore, GRU achieves prediction performances similar to LSTM with a quicker learning curve.
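Preparing a water-level series for multi-step-ahead prediction with an LSTM or GRU typically starts from sliding-window pairs like the following sketch (the window sizes are illustrative, not those of the study):

```python
def make_windows(series, n_in, n_out):
    """Slice a series into (input window, multi-step target) pairs for
    training a sequence model: n_in past values predict n_out future ones."""
    X, Y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        Y.append(series[i + n_in:i + n_in + n_out])
    return X, Y

# Hypothetical water levels (m) logged by the IoT sensor.
levels = [0.2, 0.3, 0.5, 0.9, 1.4, 1.1, 0.7]
X, Y = make_windows(levels, n_in=3, n_out=2)
```

Each (X[i], Y[i]) pair then becomes one training sample for the recurrent model; the same slicing serves MLP baselines with the window flattened.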

  15. A Study of Recurrent and Convolutional Neural Networks in the Native Language Identification Task

    KAUST Repository

    Werfelmann, Robert

    2018-05-24

    Native Language Identification (NLI) is the task of predicting the native language of an author from their text written in a second language. The idea is to find writing habits that transfer from an author’s native language to their second language. Many approaches to this task have been studied, from simple word frequency analysis, to analyzing grammatical and spelling mistakes to find patterns and traits that are common between different authors of the same native language. This can be a very complex task, depending on the native language and the proficiency of the author’s second language. The most common approach, which has seen very good results, is based on the usage of n-gram features of words and characters. In this thesis, we attempt to extract lexical, grammatical, and semantic features from the sentences of non-native English essays using neural networks. The training and testing data were obtained from a large corpus of publicly available essays written by authors of several countries around the world. The neural network models consisted of Long Short-Term Memory and Convolutional networks using the sentences of each document as the input. Additional statistical features were generated from the text to complement the predictions of the neural networks, which were then used as feature inputs to a Support Vector Machine, making the final prediction. Results show that a Long Short-Term Memory neural network can improve performance over a naive bag-of-words approach, but with a much smaller feature set. With more fine-tuning of neural network hyperparameters, these results will likely improve significantly.
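The character n-gram features the thesis cites as the strong classical baseline can be computed as follows (a generic sketch, not the thesis's exact feature pipeline):

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram counts, the classic NLI baseline feature family:
    overlapping substrings of length n, tallied per document."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

feats = char_ngrams("the theory", n=3)
```

Such count vectors, optionally TF-IDF weighted, are what an SVM baseline would consume; the thesis contrasts them with features learned by LSTM and convolutional networks.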

  16. Neural networks. A new analytical tool, applicable also in nuclear technology

    International Nuclear Information System (INIS)

    Stritar, A.

    1992-01-01

    The basic concept of neural networks and the back-propagation learning algorithm are described. The behaviour of a typical neural network is demonstrated on a simple graphical case. A short literature survey on the application of neural networks in nuclear science and engineering is made. The application of a neural network to probability density calculation is shown. (author)

  17. Short-Term Memory for Serial Order: A Recurrent Neural Network Model

    Science.gov (United States)

    Botvinick, Matthew M.; Plaut, David C.

    2006-01-01

    Despite a century of research, the mechanisms underlying short-term or working memory for serial order remain uncertain. Recent theoretical models have converged on a particular account, based on transient associations between independent item and context representations. In the present article, the authors present an alternative model, according…

  18. Short-term electric load forecasting using computational intelligence methods

    OpenAIRE

    Jurado, Sergio; Peralta, J.; Nebot, Àngela; Mugica, Francisco; Cortez, Paulo

    2013-01-01

    Accurate time series forecasting is a key issue to support individual and organizational decision making. In this paper, we introduce several methods for short-term electric load forecasting. All the presented methods stem from computational intelligence techniques: Random Forest, Nonlinear Autoregressive Neural Networks, Evolutionary Support Vector Machines and Fuzzy Inductive Reasoning. The performance of the suggested methods is experimentally justified with several experiments carried out...

  19. Modeling long-term human activeness using recurrent neural networks for biometric data.

    Science.gov (United States)

    Kim, Zae Myung; Oh, Hyungrai; Kim, Han-Gyu; Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin

    2017-05-18

    With the invention of fitness trackers, it has been possible to continuously monitor a user's biometric data such as heart rate, number of footsteps taken, and amount of calories burned. This paper names the time series of these three types of biometric data the user's "activeness", and investigates the feasibility of modeling and predicting the long-term activeness of the user. The dataset used in this study consisted of several months of biometric time-series data gathered by seven users independently. Four recurrent neural network (RNN) architectures, as well as a deep neural network and a simple regression model, were proposed to investigate the performance on predicting the activeness of the user under various length-related hyper-parameter settings. In addition, the learned model was tested to predict the time period when the user's activeness falls below a certain threshold. A preliminary experimental result shows that each type of activeness data exhibited a short-term autocorrelation; and among the three types of data, the consumed calories and the number of footsteps were positively correlated, while the heart rate data showed almost no correlation with either of them. It is probably due to this characteristic of the dataset that although the RNN models produced the best results on modeling the user's activeness, the difference was marginal, and other baseline models, especially the linear regression model, performed quite admirably as well. Further experimental results show that it is feasible to predict a user's future activeness with precision; for example, a trained RNN model could predict, with a precision of 84%, when the user would be less active within the next hour given the latest 15 min of his activeness data. This paper defines and investigates the notion of a user's "activeness", and shows that forecasting the long-term activeness of the user is indeed possible.
Such information can be utilized by a health-related application to proactively

  20. Global exponential stability of impulsive fuzzy cellular neural networks with mixed delays and reaction-diffusion terms

    International Nuclear Information System (INIS)

    Wang Xiaohu; Xu Daoyi

    2009-01-01

    In this paper, the global exponential stability of impulsive fuzzy cellular neural networks with mixed delays and reaction-diffusion terms is considered. By establishing an integro-differential inequality with impulsive initial condition and using the properties of M-cone and eigenspace of the spectral radius of nonnegative matrices, several new sufficient conditions are obtained to ensure the global exponential stability of the equilibrium point for fuzzy cellular neural networks with delays and reaction-diffusion terms. These results extend and improve the earlier publications. Two examples are given to illustrate the efficiency of the obtained results.

  1. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one wonder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose fractional calculus as the mathematical method for implementing FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order stability and fractional-order sensitivity characteristics.
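A numerical flavor of the fractional calculus involved can be given with the truncated Grünwald-Letnikov approximation of a fractional derivative (our illustration only; the paper realizes the fractor as an analog circuit, not in software):

```python
def gl_fractional_derivative(f, x, alpha, h=1e-3, terms=200):
    """Truncated Grunwald-Letnikov approximation of the order-alpha
    derivative: h**(-alpha) * sum_j (-1)**j * C(alpha, j) * f(x - j*h)."""
    acc, coef = 0.0, 1.0                 # coef tracks (-1)**j * C(alpha, j)
    for j in range(terms):
        if j > 0:
            coef *= (j - 1 - alpha) / j  # recurrence for the signed binomial
        acc += coef * f(x - j * h)
    return acc / h ** alpha

# For integer alpha = 1 the sum collapses to a backward difference, so the
# result approaches the ordinary derivative (d/dw w^2 = 4 at w = 2).
deriv = gl_fractional_derivative(lambda w: w * w, 2.0, alpha=1.0)
```

Non-integer alpha interpolates between the identity (alpha = 0) and the first derivative (alpha = 1), which is the long-memory operator a fractional steepest-descent rule would use in place of the ordinary gradient.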

  2. A New Delay Connection for Long Short-Term Memory Networks.

    Science.gov (United States)

    Wang, Jianyong; Zhang, Lei; Chen, Yuanyuan; Yi, Zhang

    2017-12-17

    Connections play a crucial role in neural network (NN) learning because they determine how information flows in NNs. Suitable connection mechanisms may extensively enlarge the learning capability and reduce the negative effect of gradient problems. In this paper, a new delay connection is proposed for the Long Short-Term Memory (LSTM) unit to develop a more sophisticated recurrent unit, called Delay Connected LSTM (DCLSTM). The proposed delay connection brings two main merits to DCLSTM while introducing no extra parameters. First, it allows the output of the DCLSTM unit to retain information from earlier time steps, which is absent in the standard LSTM unit. Second, the proposed delay connection helps to bridge the error signals to previous time steps and allows them to be back-propagated across several layers without vanishing too quickly. To evaluate the performance of the proposed delay connections, the DCLSTM model with and without peephole connections was compared with four state-of-the-art recurrent models on two sequence classification tasks. The DCLSTM model outperformed the other models with higher accuracy and F1 score. Furthermore, networks with multiple stacked DCLSTM layers and the standard LSTM layer were evaluated on Penn Treebank (PTB) language modeling. The DCLSTM model achieved lower perplexity (PPL)/bit-per-character (BPC) than the standard LSTM model. The experiments demonstrate that the learning of the DCLSTM models is more stable and efficient.

  3. Global exponential stability of BAM neural networks with time-varying delays and diffusion terms

    International Nuclear Information System (INIS)

    Wan Li; Zhou Qinghua

    2007-01-01

    The stability of bidirectional associative memory (BAM) neural networks with time-varying delays and diffusion terms is considered. By using the method of variation of parameters and an inequality technique, delay-independent sufficient conditions guaranteeing the uniqueness and global exponential stability of the equilibrium solution of such networks are established.

  5. Analyzing Snowpack Metrics Over Large Spatial Extents Using Calibrated, Enhanced-Resolution Brightness Temperature Data and Long Short Term Memory Artificial Neural Networks

    Science.gov (United States)

    Norris, W.; J Q Farmer, C.

    2017-12-01

    Snow water equivalence (SWE) is a difficult metric to measure accurately over large spatial extents; SNOTEL sites are too localized, and traditional remotely sensed brightness temperature data is at too coarse a resolution to capture variation. The new Calibrated Enhanced-Resolution Brightness Temperature (CETB) data from the National Snow and Ice Data Center (NSIDC) offers remotely sensed brightness temperature data at an enhanced resolution of 3.125 km versus the original 25 km, which allows large spatial extents to be analyzed with reduced uncertainty compared to the 25 km product. While the 25 km brightness temperature data has proved useful in past research (one group found decreasing trends in SWE outweighed increasing trends three to one in North America; other researchers used the data to incorporate winter conditions, like snow cover, into ecological zoning criteria), with the new 3.125 km data it is possible to derive more accurate metrics for SWE, since we have far more spatial variability in measurements. Even with higher resolution data, using the 37 - 19 GHz frequencies to estimate SWE distorts the data during times of melt onset and accumulation onset. Past researchers employed statistical splines, while other successful attempts utilized non-parametric curve fitting to smooth out the spikes distorting metrics. In this work, rather than using legacy curve-fitting techniques, a Long Short Term Memory (LSTM) Artificial Neural Network (ANN) was trained to perform curve fitting on the data. LSTM ANNs have shown great promise in modeling time series data, and with almost 40 years of data available (14,235 days) there is plenty of training data for the ANN. LSTMs are ideal for this type of time series analysis because they allow important trends to persist for long periods of time but ignore short-term fluctuations; since LSTMs have poor mid- to short-term memory, they are ideal for smoothing out the large spikes generated in the melt

  6. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    International Nuclear Information System (INIS)

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; Mayo, Jackson R.; Armstrong, Robert C.

    2016-01-01

    Here we present a characterization of the short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will happen. We provide a formal proof of this formula and empirically show that its predictions are accurate. Previous work covers only special cases of balanced families, and it has been observed that those characterizations fail for unbalanced families, yet such families are widespread in real biological networks.
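The damage-spreading notion can be probed empirically with a small simulation; bias-p random truth tables are one simple distribution of transfer functions (the paper's formula covers arbitrary distributions, and the network parameters below are illustrative):

```python
import random

def damage_spread(n, k, p, steps=20, seed=1):
    """Flip one node of a random Boolean network state and return the
    Hamming distance from the unperturbed trajectory after synchronous
    updates. Each node gets k random inputs and a random truth table
    whose entries are True with probability p (the bias)."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.random() < p for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        # each node reads its k inputs as a binary index into its table
        return [tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                for i in range(n)]

    state = [rng.random() < 0.5 for _ in range(n)]
    perturbed = list(state)
    perturbed[0] = not perturbed[0]        # the single-bit "damage"
    for _ in range(steps):
        state, perturbed = step(state), step(perturbed)
    return sum(a != b for a, b in zip(state, perturbed))

# With k = 1 and unbiased tables the network sits in the ordered regime,
# so a single-bit flip tends to die out or stay confined.
damage_k1 = damage_spread(n=200, k=1, p=0.5)
```

Averaging the returned Hamming distance over many seeds estimates whether damage spreads (chaos) or dies out (order) for a given distribution of transfer functions.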

  7. The functional neuroanatomy of multitasking: combining dual tasking with a short term memory task.

    Science.gov (United States)

    Deprez, Sabine; Vandenbulcke, Mathieu; Peeters, Ron; Emsell, Louise; Amant, Frederic; Sunaert, Stefan

    2013-09-01

    Insight into the neural architecture of multitasking is crucial when investigating the pathophysiology of multitasking deficits in clinical populations. Presently, little is known about how the brain combines dual-tasking with a concurrent short-term memory task, despite the relevance of this mental operation in daily life and the frequency of complaints related to this process in disease. In this study we aimed to examine how the brain responds when a memory task is added to dual-tasking. Thirty-three right-handed healthy volunteers (20 females, mean age 39.9 ± 5.8) were examined with functional brain imaging (fMRI). The paradigm consisted of two cross-modal single tasks (a visual and an auditory temporal same-different task with short delay), a dual-task condition combining both single tasks simultaneously, and a multi-task condition combining the dual-task with an additional short-term memory task (a temporal same-different visual task with long delay). Dual-tasking compared to both the individual visual and auditory single tasks activated a predominantly right-sided fronto-parietal network and the cerebellum. When the additional short-term memory task was added, a larger and more bilateral frontoparietal network was recruited. We found enhanced activity during multitasking in components of the network that were already involved in dual-tasking, suggesting increased working memory demands, as well as recruitment of multitask-specific components, including areas likely to be involved in online holding of visual stimuli in short-term memory, such as occipito-temporal cortex. These results confirm concurrent neural processing of a visual short-term memory task during dual-tasking and provide evidence for an effective fMRI multitasking paradigm. © 2013 Elsevier Ltd. All rights reserved.

  8. Neural Evidence for a Distinction between Short-Term Memory and the Focus of Attention

    Science.gov (United States)

    Lewis-Peacock, Jarrod A.; Drysdale, Andrew T.; Oberauer, Klaus; Postle, Bradley R.

    2012-01-01

    It is widely assumed that the short-term retention of information is accomplished via maintenance of an active neural trace. However, we demonstrate that memory can be preserved across a brief delay despite the apparent loss of sustained representations. Delay period activity may, in fact, reflect the focus of attention, rather than STM. We…

  9. Long Short-Term Memory Neural Networks for Online Disturbance Detection in Satellite Image Time Series

    Directory of Open Access Journals (Sweden)

    Yun-Long Kong

    2018-03-01

    A satellite image time series (SITS) contains a significant amount of temporal information. By analysing this type of data, the pattern of the changes in the object of concern can be explored. The natural change in the Earth’s surface is relatively slow and exhibits a pronounced pattern. Some natural events (for example, fires, floods, plant diseases, and insect pests) and human activities (for example, deforestation and urbanisation) will disturb this pattern and cause a relatively profound change on the Earth’s surface. These events are usually referred to as disturbances. However, disturbances in ecosystems are not easy to detect from SITS data, because SITS contain combined information on disturbances, phenological variations and noise in remote sensing data. In this paper, a novel framework is proposed for online disturbance detection from SITS. The framework is based on long short-term memory (LSTM) networks. First, LSTM networks are trained on historical SITS. The trained LSTM networks are then used to predict new time series data. Last, the predicted data are compared with real data, and the noticeable deviations reveal disturbances. Experimental results using 16-day compositions of the moderate resolution imaging spectroradiometer (MOD13Q1) illustrate the effectiveness and stability of the proposed approach for online disturbance detection.
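    The last stage of the framework (compare predictions with real data and flag noticeable deviations) can be sketched as residual thresholding. Here a clean seasonal curve stands in for the trained LSTM's predictions; the robust MAD-based threshold and the toy series are assumptions, not details from the paper.

```python
import math

def detect_disturbances(observed, predicted, k=3.0):
    # Flag time steps where the residual deviates from the median residual
    # by more than k times the median absolute deviation (robust threshold).
    resid = [o - p for o, p in zip(observed, predicted)]
    med = sorted(resid)[len(resid) // 2]
    mad = sorted(abs(r - med) for r in resid)[len(resid) // 2]
    thresh = k * (mad if mad > 0 else 1e-9)
    return [t for t, r in enumerate(resid) if abs(r - med) > thresh]

# A stand-in for the trained LSTM: predict a clean seasonal cycle
# (23 observations per year mimics a 16-day composite cadence).
predicted = [math.sin(2 * math.pi * t / 23) for t in range(46)]
observed = list(predicted)
observed[30] -= 0.8   # an abrupt drop, e.g. a fire scar in the index series
print(detect_disturbances(observed, predicted))
```

    The flagged index marks the step where the observation departs from the predicted phenological pattern.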

  10. Short-Term Load Forecasting-Based Automatic Distribution Network Reconfiguration

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ding, Fei [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-23

    In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. Load forecasting techniques can provide an accurate prediction of the load power at a future time and more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period, instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term, high-resolution distribution system load forecasting approach is proposed, with a forecaster based on support vector regression and parallel parameter optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with minimum loss at the future time. The simulation results validate and evaluate the proposed approach.
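    The reconfiguration step can be sketched as choosing, among candidate switch configurations, the one minimizing total predicted loss over the whole forecast horizon rather than at a single snapshot. The quadratic loss model and the toy topologies below are assumptions for illustration, not the paper's network model.

```python
def pick_topology(topologies, forecast_loads, loss_fn):
    # Choose the configuration minimizing total predicted loss over the
    # entire forecast horizon, not just at the current time point.
    def horizon_loss(topo):
        return sum(loss_fn(topo, load) for load in forecast_loads)
    return min(topologies, key=horizon_loss)

# Toy system: loss grows with load^2, scaled by a per-topology resistance.
topologies = {"A": 0.10, "B": 0.08, "C": 0.12}
loss_fn = lambda topo, load: topologies[topo] * load ** 2
forecast = [80, 95, 110, 120]    # forecasted load for the next 4 steps
print(pick_topology(topologies, forecast, loss_fn))
```

    With forecasted loads the operator commits to one topology for the horizon, whereas a snapshot-based scheme might flip-flop as each measurement arrives.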

  11. An Ensemble of Neural Networks for Stock Trading Decision Making

    Science.gov (United States)

    Chang, Pei-Chann; Liu, Chen-Hao; Fan, Chin-Yuan; Lin, Jun-Lin; Lai, Chih-Ming

    Detection of stock turning signals is a very interesting subject arising in numerous financial and economic planning problems. In this paper, an Ensemble Neural Network system with Intelligent Piecewise Linear Representation for stock turning point detection is presented. The Intelligent Piecewise Linear Representation method is able to generate numerous stock turning signals from the historical database; the Ensemble Neural Network system is then applied to train on these patterns and retrieve similar stock price patterns from historical data. These turning signals represent short-term and long-term trading signals for selling or buying stocks in the market, and are applied to forecast future turning points in the set of test data. Experimental results demonstrate that the hybrid system can make a significant and consistent profit compared with other approaches using stock data available in the market.
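    One common way to realize a piecewise linear representation is top-down segmentation: recursively split the price series wherever it strays furthest from a straight-line fit, and treat the split points as candidate turning signals. The paper does not specify its PLR variant, so this pure-Python sketch, its tolerance parameter, and the toy price series are assumptions.

```python
def max_error_split(prices, lo, hi):
    # Largest vertical distance from the chord joining prices[lo], prices[hi].
    best_i, best_e = None, 0.0
    for i in range(lo + 1, hi):
        frac = (i - lo) / (hi - lo)
        interp = prices[lo] + frac * (prices[hi] - prices[lo])
        e = abs(prices[i] - interp)
        if e > best_e:
            best_i, best_e = i, e
    return best_i, best_e

def plr_turning_points(prices, tol):
    # Top-down piecewise linear representation: recursively split segments
    # until every point is within `tol` of its segment's chord.
    points = {0, len(prices) - 1}
    stack = [(0, len(prices) - 1)]
    while stack:
        lo, hi = stack.pop()
        i, e = max_error_split(prices, lo, hi)
        if i is not None and e > tol:
            points.add(i)
            stack.extend([(lo, i), (i, hi)])
    return sorted(points)

prices = [10, 11, 12, 13, 9, 8, 7, 8, 9, 10]
print(plr_turning_points(prices, tol=1.0))
```

    The returned indices include the peak at index 3 and the trough at index 6, which would serve as sell/buy training signals for the ensemble networks.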

  12. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    ... dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking ...

  13. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    Science.gov (United States)

    Li, Yongcheng; Sun, Rong; Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

    We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, owing to the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, as well as theoretical inspiration for the next generation of neuro-prostheses on the basis of the bi-directional exchange of information within hierarchical neural networks.

  14. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    Directory of Open Access Journals (Sweden)

    Yongcheng Li

    We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, owing to the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, as well as theoretical inspiration for the next generation of neuro-prostheses on the basis of the bi-directional exchange of information within hierarchical neural networks.

  15. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators based on the raw data, together with the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error backpropagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  16. Implementation of a fuzzy logic/neural network multivariable controller

    International Nuclear Information System (INIS)

    Cordes, G.A.; Clark, D.E.; Johnson, J.A.; Smartt, H.B.; Wickham, K.L.; Larson, T.K.

    1992-01-01

    This paper describes a multivariable controller developed at the Idaho National Engineering Laboratory (INEL) that incorporates both fuzzy logic rules and a neural network. The controller was implemented in a laboratory demonstration and was robust, producing smooth temperature and water level response curves with short time constants. In the future, intelligent control systems will be a necessity for optimal operation of autonomous reactor systems located on Earth or in space. Even today, there is a need for control systems that adapt to the changing environment and process. Hybrid intelligent control systems promise to provide this adaptive capability. Fuzzy logic implements our imprecise, qualitative human reasoning. The values of system variables (controller inputs) and control variables (controller outputs) are described in linguistic terms and subdivided into fully overlapping value ranges. The fuzzy rule base describes how combinations of input parameter ranges determine the output control values. Neural networks implement our human learning. In this controller, neural networks were embedded in the software to explore their potential for adding adaptability.

  17. A Probabilistic Short-Term Water Demand Forecasting Model Based on the Markov Chain

    Directory of Open Access Journals (Sweden)

    Francesca Gagliardi

    2017-07-01

    This paper proposes a short-term water demand forecasting method based on the use of the Markov chain. This method provides estimates of future demands by calculating probabilities that the future demand value will fall within pre-assigned intervals covering the expected total variability. More specifically, two models based on homogeneous and non-homogeneous Markov chains were developed and presented. These models, together with two benchmark models (based on artificial neural network and naïve methods), were applied to three real-life case studies for the purpose of forecasting the respective water demands from 1 to 24 h ahead. The results obtained show that the model based on a homogeneous Markov chain provides more accurate short-term forecasts than the one based on a non-homogeneous Markov chain, which is in line with the artificial neural network model. Both Markov chain models enable probabilistic information regarding the stochastic demand forecast to be easily obtained.
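    The homogeneous-Markov-chain idea can be sketched directly: discretize past demands into the pre-assigned intervals, count transitions between intervals, and read off the probability that the next demand falls in each interval. The bin edges and the toy demand series below are assumptions for illustration, not the case-study data.

```python
from collections import defaultdict

def fit_markov(series, bins):
    # Estimate transition probabilities between demand intervals (bins).
    def bin_of(x):
        for i, (lo, hi) in enumerate(bins):
            if lo <= x < hi:
                return i
        return len(bins) - 1
    counts = defaultdict(lambda: defaultdict(int))
    states = [bin_of(x) for x in series]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs, bin_of

bins = [(0, 10), (10, 20), (20, 30)]     # pre-assigned demand intervals
demand = [5, 15, 25, 15, 5, 15, 25, 15, 5, 15]
probs, bin_of = fit_markov(demand, bins)
# Probability that the next demand falls in each interval, given the current:
print(probs[bin_of(15)])
```

    The forecast is probabilistic by construction: instead of one point value, each current state yields a distribution over the demand intervals one step ahead.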

  18. Modelling the permeability of polymers: a neural network approach

    NARCIS (Netherlands)

    Wessling, Matthias; Mulder, M.H.V.; Bos, A.; Bos, A.; van der Linden, M.K.T.; Bos, M.; van der Linden, W.E.

    1994-01-01

    In this short communication, the prediction of the permeability of carbon dioxide through different polymers using a neural network is studied. A neural network is a numeric-mathematical construction that can model complex non-linear relationships. Here it is used to correlate the IR spectrum of a

  19. Recurrent Neural Network Applications for Astronomical Time Series

    Science.gov (United States)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize to irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition to this, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution. In this work, we circumvent the obstacle of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.

  20. Learning representations for the early detection of sepsis with deep neural networks.

    Science.gov (United States)

    Kam, Hye Jin; Kim, Ha Young

    2017-10-01

    Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the areas under the ROC curve (AUC) of the models were 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Prediction of Sea Surface Temperature Using Long Short-Term Memory

    Science.gov (United States)

    Zhang, Qin; Wang, Hui; Dong, Junyu; Zhong, Guoqiang; Sun, Xin

    2017-10-01

    This letter adopts long short-term memory (LSTM) to predict sea surface temperature (SST), which is, to our knowledge, the first attempt to use a recurrent neural network to solve the problem of SST prediction and to make one-week and one-month daily predictions. We formulate the SST prediction problem as a time series regression problem. LSTM is a special kind of recurrent neural network, which introduces a gate mechanism into the vanilla RNN to prevent the vanishing or exploding gradient problem. It has a strong ability to model the temporal relationships of time series data and can handle the long-term dependency problem well. The proposed network architecture is composed of two kinds of layers: an LSTM layer and a fully connected dense layer. The LSTM layer is utilized to model the time series relationship. The fully connected layer is utilized to map the output of the LSTM layer to a final prediction. We explore the optimal setting of this architecture by experiments and report the accuracy for the coastal seas of China to confirm the effectiveness of the proposed method. In addition, we also show its online updated characteristics.
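    Formulating SST prediction as time series regression amounts to slicing the series into (input window, target) pairs before any network sees it. A minimal sketch of that windowing step follows; the window lengths and the toy readings are assumptions, not the letter's settings.

```python
def make_windows(series, in_len, out_len):
    # Turn one long series into (input window, target window) pairs:
    # each sample uses `in_len` past values to predict the next `out_len`.
    xs, ys = [], []
    for t in range(len(series) - in_len - out_len + 1):
        xs.append(series[t:t + in_len])
        ys.append(series[t + in_len:t + in_len + out_len])
    return xs, ys

sst = [20.0 + 0.1 * t for t in range(12)]        # toy daily SST readings
xs, ys = make_windows(sst, in_len=7, out_len=1)  # one-day-ahead setup
print(len(xs), xs[0], ys[0])
```

    The same function with a larger `out_len` yields the multi-day targets needed for one-week or one-month daily prediction.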

  2. Estimating Ads’ Click through Rate with Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Chen Qiao-Hong

    2016-01-01

    With the development of the Internet, online advertising spreads across every corner of the world, and ads' click-through rate (CTR) estimation is an important method to improve online advertising revenue. Compared with linear models, nonlinear models can capture much more complex relationships among a large number of nonlinear characteristics, so as to improve the accuracy of the estimation of the ads' CTR. The recurrent neural network (RNN) based on Long Short-Term Memory (LSTM) is an improved model of the feedback neural network with ring structure. The model overcomes the vanishing gradient problem of the general RNN. Experiments show that the RNN based on LSTM exceeds the linear models, and it can effectively improve the estimation of the ads' click-through rate.

  3. Exponential Stability for Impulsive BAM Neural Networks with Time-Varying Delays and Reaction-Diffusion Terms

    Directory of Open Access Journals (Sweden)

    Qiankun Song

    2007-06-01

    Impulsive bidirectional associative memory neural network model with time-varying delays and reaction-diffusion terms is considered. Several sufficient conditions ensuring the existence, uniqueness, and global exponential stability of equilibrium point for the addressed neural network are derived by M-matrix theory, analytic methods, and inequality techniques. Moreover, the exponential convergence rate index is estimated, which depends on the system parameters. The obtained results in this paper are less restrictive than previously known criteria. Two examples are given to show the effectiveness of the obtained results.

  4. Exponential Stability for Impulsive BAM Neural Networks with Time-Varying Delays and Reaction-Diffusion Terms

    Directory of Open Access Journals (Sweden)

    Cao Jinde

    2007-01-01

    Impulsive bidirectional associative memory neural network model with time-varying delays and reaction-diffusion terms is considered. Several sufficient conditions ensuring the existence, uniqueness, and global exponential stability of equilibrium point for the addressed neural network are derived by M-matrix theory, analytic methods, and inequality techniques. Moreover, the exponential convergence rate index is estimated, which depends on the system parameters. The obtained results in this paper are less restrictive than previously known criteria. Two examples are given to show the effectiveness of the obtained results.

  5. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  6. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer of neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
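    The core substitution (maximum of sums in place of sum of products) can be shown in a single neuron. This sketch contrasts a classical linear-threshold neuron with a morphological one; the weights, inputs, and thresholds are illustrative assumptions.

```python
def linear_neuron(x, w, b):
    # Classical neuron: sum of products, then a threshold.
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

def morphological_neuron(x, w, b):
    # Morphological neuron: maximum of sums replaces the sum of products,
    # so the computation is already nonlinear before thresholding.
    s = max(wi + xi for wi, xi in zip(w, x))
    return 1 if s >= b else 0

x = [0.2, 0.9, 0.1]
print(linear_neuron(x, [1.0, 1.0, 1.0], -1.0))
print(morphological_neuron(x, [0.0, 0.0, 0.0], 0.5))
```

    With zero morphological weights the neuron fires if any single input exceeds the threshold, a detector behavior the additive linear neuron cannot express with one unit.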

  7. Forecasting electricity market pricing using artificial neural networks

    International Nuclear Information System (INIS)

    Pao, Hsiao-Tien

    2007-01-01

    Electricity price forecasting is extremely important for all market players, in particular for generating companies: in the short term, they must set up bids for the spot market; in the medium term, they have to define contract policies; and in the long term, they must define their expansion plans. For forecasting long-term electricity market pricing, in order to avoid excessive round-off and prediction errors, this paper proposes a new artificial neural network (ANN) with a single-output-node structure using a direct forecasting approach. The potential of ANNs is investigated by employing a rolling cross-validation scheme. Out-of-sample performance evaluated with three criteria across five forecasting horizons shows that the proposed ANN is a more robust multi-step-ahead forecasting method than autoregressive error models. Moreover, ANN predictions are quite accurate even when the length of the forecast horizon is relatively short or long.

  8. Forecasting the daily electricity consumption in the Moscow region using artificial neural networks

    Science.gov (United States)

    Ivanov, V. V.; Kryanev, A. V.; Osetrov, E. S.

    2017-07-01

    In [1] we demonstrated the possibility, in principle, of short-term forecasting of daily volumes of passenger traffic in the Moscow metro with the help of artificial neural networks. During training and prediction, a set of factors that affect the daily passenger traffic in the subway is passed to the input of the neural network. One of these factors is the daily power consumption in the Moscow region. Therefore, to predict the volume of passenger traffic in the subway, we must first solve the problem of forecasting the daily energy consumption in the Moscow region.

  9. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in the improvement of cotton fiber quality. To demonstrate the applicability of this approach, this research was carried out using the evaluation data of 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the harvests of 2013/14 and 2014/15, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated by means of a weighted average of the score (1 to 5) determined for each HVI characteristic evaluated, according to industry standards. The artificial neural networks presented a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index: when using fiber length together with the short fiber index, maturity index, and micronaire index, the artificial neural networks presented better results than when using only fiber length and previous associations. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better genotype classification results. The results obtained in the present study verify that artificial neural networks have great potential for use in the different stages of a cotton genetic improvement program aimed at improving the fiber quality of future cultivars.

  10. Behavior control in the sensorimotor loop with short-term synaptic dynamics induced by self-regulating neurons

    Directory of Open Access Journals (Sweden)

    Hazem Toutounji

    2014-05-01

    The behavior and skills of living systems depend on the distributed control provided by specialized and highly recurrent neural networks. Learning and memory in these systems is mediated by a set of adaptation mechanisms, known collectively as neuronal plasticity. Translating principles of recurrent neural control and plasticity to artificial agents has seen major strides, but is usually hampered by the complex interactions between the agent's body and its environment. One of the important standing issues is for the agent to support multiple stable states of behavior, so that its behavioral repertoire matches the requirements imposed by these interactions. The agent also must have the capacity to switch between these states in time scales that are comparable to those by which sensory stimulation varies. Achieving this requires a mechanism of short-term memory that allows the neurocontroller to keep track of the recent history of its input, which finds its biological counterpart in short-term synaptic plasticity. This issue is approached here by deriving synaptic dynamics in recurrent neural networks. Neurons are introduced as self-regulating units with a rich repertoire of dynamics. They exhibit homeostatic properties for certain parameter domains, which result in a set of stable states and the required short-term memory. They can also operate as oscillators, which allow them to surpass the level of activity imposed by their homeostatic operation conditions. Neural systems endowed with the derived synaptic dynamics can be utilized for the neural behavior control of autonomous mobile agents. The resulting behavior depends also on the underlying network structure, which is either engineered, or developed by evolutionary techniques. The effectiveness of these self-regulating units is demonstrated by controlling locomotion of a hexapod with eighteen degrees of freedom, and obstacle-avoidance of a wheel-driven robot.

  11. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.

    Science.gov (United States)

    Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng

    2017-01-01

    With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model achieves considerably better predictive performance. To demonstrate the effectiveness of the SA-D model, we also apply it to the data that other authors used with other models and compare the results. This comparison also showed that the SA-D model achieves good predictive performance in terms of normalized mean square error, absolute percentage error, and correlation coefficient.
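    The two-stage hybrid described above can be sketched in a few lines: a seasonal linear fit stands in for SARIMA, and a one-lag autoregression trained on the residuals stands in for the dendritic network. The synthetic series and all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly demand: trend + seasonality + autocorrelated noise.
t = np.arange(120.0)
e = np.zeros(t.size)
for i in range(1, t.size):
    e[i] = 0.8 * e[i - 1] + rng.normal()        # AR(1) disturbance
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + e

# Stage 1: a seasonal linear fit (stand-in for SARIMA) removes the
# long-term trend and the seasonal cycle.
X = np.column_stack([t, np.sin(2 * np.pi * t / 12),
                     np.cos(2 * np.pi * t / 12), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
linear_part = X @ coef
resid = y - linear_part

# Stage 2: a short-memory model is trained on the residuals; here a
# one-lag autoregression stands in for the dendritic network.
phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
hybrid = linear_part[1:] + phi * resid[:-1]

mse_linear = np.mean((y[1:] - linear_part[1:]) ** 2)
mse_hybrid = np.mean((y[1:] - hybrid) ** 2)
```

The residual stage captures the short-term structure the linear fit misses, which is the source of the hybrid's accuracy gain.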

  12. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks based on gradient descent has the significant drawback of slow convergence. A Gauss-Newton-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS-type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.
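    As a rough illustration of the recursive least squares idea, here applied to a single linear-in-parameters layer rather than the paper's full recurrent network (all values are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.5, -0.7, 0.3])   # unknown parameters to identify

w = np.zeros(3)            # weight estimate
P = 1000.0 * np.eye(3)     # inverse input-correlation matrix
lam = 0.99                 # forgetting factor

for _ in range(500):
    x = rng.normal(size=3)                    # regressor (e.g. hidden output)
    d = w_true @ x + rng.normal(0.0, 0.01)    # desired response
    k = P @ x / (lam + x @ P @ x)             # Kalman-style gain vector
    w = w + k * (d - w @ x)                   # weight update from the error
    P = (P - np.outer(k, x) @ P) / lam        # covariance update
```

Each sample updates the weights in one pass, which is why RLS-type schemes converge in far fewer presentations than plain gradient descent.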

  13. Effect of short-term escitalopram treatment on neural activation during emotional processing.

    Science.gov (United States)

    Maron, Eduard; Wall, Matt; Norbury, Ray; Godlewska, Beata; Terbeck, Sylvia; Cowen, Philip; Matthews, Paul; Nutt, David J

    2016-01-01

    Recent functional magnetic resonance imaging (fMRI) studies have revealed that subchronic medication with escitalopram leads to a significant reduction in both amygdala and medial frontal gyrus reactivity during processing of emotional faces, suggesting that escitalopram may have a distinguishable modulatory effect on neural activation compared with other serotonin-selective antidepressants. In this fMRI study we aimed to explore whether short-term medication with escitalopram in healthy volunteers is associated with reduced neural response to emotional processing, and whether this effect is predicted by drug plasma concentration. The neural response to fearful and happy faces was measured before and on day 7 of treatment with escitalopram (10 mg) in 15 healthy volunteers and compared with that in an unmedicated control group (n=14). Significantly reduced activation to fearful, but not to happy, facial expressions was observed in the bilateral amygdala, cingulate and right medial frontal gyrus following escitalopram medication. This effect was not correlated with plasma drug concentration. In accordance with previous data, we showed that escitalopram exerts its rapid direct effect on emotional processing via attenuation of neural activation in pathways involving the medial frontal gyrus and amygdala, an effect that seems to be distinguishable from that of other SSRIs. © The Author(s) 2015.

  14. Forecasting the term structure of crude oil futures prices with neural networks

    Czech Academy of Sciences Publication Activity Database

    Baruník, Jozef; Malinská, B.

    2016-01-01

    Roč. 164, č. 1 (2016), s. 366-379 ISSN 0306-2619 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : Term structure * Nelson–Siegel model * Dynamic neural networks * Crude oil futures Subject RIV: AH - Economics Impact factor: 7.182, year: 2016 http://library.utia.cas.cz/separaty/2016/E/barunik-0453168.pdf

  15. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ding, Fei [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-07-26

    In traditional dynamic network reconfiguration studies, the optimal topology is determined at every scheduled time point using the real load data measured at that time. Load forecasting techniques can accurately predict future load power and thus provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions over a longer time period, instead of a snapshot of the load at the time the reconfiguration happens, giving the distribution system operator (DSO) the information needed to operate system reconfiguration closer to the optimum. This paper therefore proposes a short-term load forecasting approach for automatically reconfiguring distribution systems in a dynamic and pre-event manner. Specifically, a short-term, high-resolution distribution system load forecasting approach is proposed with a support vector regression (SVR) based forecaster and parallel parameter optimization. The network reconfiguration problem is then solved by continuously using the forecasted load to determine the optimal network topology with minimum loss at future times. The simulation results validate and evaluate the proposed approach.
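    The scheduling idea at the core of this record, choosing the topology that minimizes predicted loss over the whole forecast horizon rather than at a single snapshot, reduces to a small argmin. The load values and per-topology loss coefficients below are invented for illustration:

```python
import numpy as np

# Hypothetical forecasted feeder load over the next 3 intervals (MW).
forecast = np.array([1.2, 1.5, 1.1])

# Hypothetical candidate topologies with loss ~ c * P^2, where c would
# come from a power-flow study of each configuration.
loss_coeff = {"topology_A": 0.020, "topology_B": 0.016, "topology_C": 0.025}

def horizon_loss(c, loads):
    """Total predicted loss accumulated over the whole forecast horizon."""
    return float(np.sum(c * loads ** 2))

best = min(loss_coeff, key=lambda k: horizon_loss(loss_coeff[k], forecast))
```

Ranking topologies by the summed loss over the horizon is what distinguishes this pre-event scheme from snapshot-based reconfiguration.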

  16. SHORT-TERM ELECTRICITY CONSUMPTION FORECASTING WITH DOUBLE SEASONAL ARIMA AND ELMAN-RECURRENT NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Suhartono Suhartono

    2009-07-01

    Neural networks (NN) are among the methods used to predict hourly electricity consumption in many countries. The NN method used in many previous studies is the Feed-Forward Neural Network (FFNN), or Autoregressive Neural Network (AR-NN). An AR-NN model is not able to capture and explain the effect of moving average (MA) order on a time series. This research reviews the application of another type of NN, the Elman-Recurrent Neural Network (Elman-RNN), which can explain the MA order effect, and compares its prediction accuracy with multiple seasonal ARIMA (Autoregressive Integrated Moving Average) models. As a case study, we used hourly electricity consumption data from Mengare, Gresik. The analysis showed that the best double seasonal ARIMA model for short-term forecasting of the case study data is ARIMA([1,2,3,4,6,7,9,10,14,21,33],1,8)(0,1,1)^24(1,1,0)^168. This model produces white-noise residuals, but they are not normally distributed due to suspected outliers; iterative outlier detection found 14 innovational outliers. Four Elman-RNN input configurations were examined and tested for forecasting the data: inputs according to the ARIMA lags; the ARIMA lags plus 14 outlier dummies; lags at multiples of 24 up to lag 480; and lag 1 plus lags at multiples of 24 plus 1. All four networks use one hidden layer with a tangent sigmoid activation function and one output with a linear function. Comparing forecast accuracy by out-of-sample MAPE showed that the fourth network, Elman-RNN(22,3,1), is the best model for short-term forecasting of hourly electricity consumption in Mengare, Gresik.
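    The defining feature of the Elman-RNN used in this study, hidden units fed back as context so the output can depend on input history (a moving-average-like effect), can be sketched with a bare forward pass. The weights below are random and no training is shown:

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out):
    """Elman network: tanh hidden layer whose previous state is fed back
    as context, giving the output memory of earlier inputs."""
    h = np.zeros(W_rec.shape[0])
    out = []
    for x in x_seq:                         # one scalar input per step
        h = np.tanh(W_in * x + W_rec @ h)   # context enters via W_rec @ h
        out.append(float(W_out @ h))
    return out

rng = np.random.default_rng(2)
W_in = rng.normal(size=4)
W_rec = 0.5 * rng.normal(size=(4, 4))
W_out = rng.normal(size=4)

# Two sequences with the same final input but different histories:
# the context units make the final outputs differ.
a = elman_forward([1.0, 0.0, 1.0], W_in, W_rec, W_out)
b = elman_forward([0.0, 1.0, 1.0], W_in, W_rec, W_out)
```

A feed-forward AR-NN fed only the current input would produce identical final outputs for these two sequences; the recurrent context is what lets the Elman-RNN express MA-like dependence.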

  17. Adaptive nonlinear control using input normalized neural networks

    International Nuclear Information System (INIS)

    Leeghim, Henzeh; Seo, In Ho; Bang, Hyo Choong

    2008-01-01

    An adaptive feedback linearization technique combined with a neural network is addressed to control uncertain nonlinear systems. Neural network-based adaptive control theory has been widely studied, but the stability analysis of the closed-loop system with a neural network is rather complicated and difficult to follow, and sometimes unnecessary assumptions are involved. Here, unnecessary assumptions in the stability analysis are avoided by using a neural network with an input normalization technique. The ultimate boundedness of the tracking error is proved simply by Lyapunov stability theory. A new, simple update law for adaptive nonlinear control is derived by simplification of the input-normalized neural network, assuming the variation of the uncertain term is sufficiently small.

  18. The neural basis of precise visual short-term memory for complex recognisable objects.

    Science.gov (United States)

    Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri

    2017-10-01

    Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. A neural network based technique for short-term forecasting of anomalous load periods

    Energy Technology Data Exchange (ETDEWEB)

    Sforna, M [ENEL, s.p.a, Italian Power Company (Italy); Lamedica, R; Prudenzi, A [Rome Univ. 'La Sapienza', Rome (Italy); Caciotta, M; Orsolini Cencelli, V [Rome Univ. III, Rome (Italy)

    1995-01-01

    The paper illustrates part of the research activity conducted by the authors in the field of electric Short Term Load Forecasting (STLF) based on Artificial Neural Network (ANN) architectures. Previous experience with basic ANN architectures has shown that, even though these architectures provide results comparable with those obtained by human operators for most normal days, they show some accuracy deficiencies when applied to 'anomalous' load conditions occurring during holidays and long weekends. For these periods a specific procedure based upon a combined (unsupervised/supervised) approach has been proposed. The unsupervised stage provides a preventive classification of the historical load data by means of a Kohonen Self-Organizing Map (SOM). The supervised stage, performing the proper forecasting activity, uses a multi-layer perceptron with a backpropagation learning algorithm similar to those mentioned above. The unconventional use of information deriving from the classification stage permits the proposed procedure to obtain a relevant enhancement of forecast accuracy for anomalous load situations.

  20. An adaptive network-based fuzzy inference system for short-term natural gas demand estimation: Uncertain and complex environments

    International Nuclear Information System (INIS)

    Azadeh, A.; Asadzadeh, S.M.; Ghanbari, A.

    2010-01-01

    Accurate short-term natural gas (NG) demand estimation and forecasting is vital for the policy- and decision-making process in the energy sector, and conventional methods may not provide accurate results. This paper presents an adaptive network-based fuzzy inference system (ANFIS) for estimation of NG demand. Standard input variables are used: day of the week, demand of the same day in the previous year, demand of the day before, and demand of 2 days before. The proposed ANFIS approach is equipped with pre-processing and post-processing concepts: input data are pre-processed (scaled) and output data are post-processed (returned to the original scale). The superiority and applicability of the ANFIS approach are shown for Iranian NG consumption from 22/12/2007 to 30/6/2008. Results show that ANFIS provides more accurate results than an artificial neural network (ANN) and a conventional time series approach. The results of this study provide policy makers with an appropriate tool to make more accurate predictions of future short-term NG demand, because the proposed approach is capable of handling the non-linearity, complexity and uncertainty that may exist in actual data sets due to erratic responses and measurement errors.
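    The pre-/post-processing the authors describe, scaling inputs before the ANFIS and returning outputs to the original scale, is an invertible min-max map. A minimal sketch with made-up demand values:

```python
import numpy as np

def fit_minmax(x):
    """Record the range of the training data."""
    return float(x.min()), float(x.max())

def scale(x, lo, hi):
    """Pre-processing: map raw values into [0, 1] before the model."""
    return (x - lo) / (hi - lo)

def unscale(s, lo, hi):
    """Post-processing: return model output to the original scale."""
    return s * (hi - lo) + lo

demand = np.array([420.0, 515.0, 480.0, 610.0])   # hypothetical NG demand
lo, hi = fit_minmax(demand)
s = scale(demand, lo, hi)
```

The same (lo, hi) pair fitted on the inputs must be reused to unscale the model's outputs, otherwise the forecast is returned on the wrong scale.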

  1. Land Cover Classification via Multitemporal Spatial Data by Deep Recurrent Neural Networks

    Science.gov (United States)

    Ienco, Dino; Gaetano, Raffaele; Dupaquier, Claire; Maurel, Pierre

    2017-10-01

    Nowadays, modern earth observation programs produce huge volumes of satellite image time series (SITS) that can be used to monitor geographical areas through time. How to efficiently analyze such information is still an open question in the remote sensing field. Recently, deep learning methods have proved suitable for remote sensing data, mainly for scene classification (i.e., Convolutional Neural Networks (CNNs) on single images), while only very few studies involve temporal deep learning approaches (i.e., Recurrent Neural Networks (RNNs)) for remote sensing time series. In this letter we evaluate the ability of Recurrent Neural Networks, in particular the Long Short-Term Memory (LSTM) model, to perform land cover classification considering multi-temporal spatial data derived from a time series of satellite images. We carried out experiments on two different datasets considering both pixel-based and object-based classification. The results show that Recurrent Neural Networks are competitive with state-of-the-art classifiers and may outperform classical approaches in the presence of low-represented and/or highly mixed classes. We also show that the alternative feature representation generated by the LSTM can improve the performance of standard classifiers.

  2. Stuck in default mode: inefficient cross-frequency synchronization may lead to age-related short-term memory decline.

    Science.gov (United States)

    Pinal, Diego; Zurrón, Montserrat; Díaz, Fernando; Sauseng, Paul

    2015-04-01

    Aging-related decline in short-term memory capacity seems to be caused by deficient balancing of the activity of task-related and resting state brain networks; however, the exact neural mechanism underlying this deficit remains elusive. Here, we studied brain oscillatory activity in healthy young and old adults during visual information maintenance in a delayed match-to-sample task. Particular emphasis was on long-range phase:amplitude coupling of frontal alpha (8-12 Hz) and posterior fast oscillatory activity (>30 Hz). It is argued that, through posterior fast oscillatory activity nesting into the excitatory or the inhibitory phase of the frontal alpha wave, long-range networks can be efficiently coupled or decoupled, respectively. On the basis of this mechanism, we show that healthy elderly participants exhibit a lack of synchronization in task-relevant networks while maintaining synchronized regions of the resting state network. This failure to disconnect the resting state network is predictive of aging-related short-term memory decline. These results support the idea of inefficient orchestration of competing brain networks in the aging human brain and identify the neural mechanism responsible for this control breakdown. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    Directory of Open Access Journals (Sweden)

    Erik Marchi

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies have introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No studies have focused on comparing previous efforts to automatically recognize novel events from audio signals or on giving a broad, in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature, and to provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% average F-measure over the three databases.
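    The detection principle, flagging frames whose reconstruction error exceeds what the autoencoder achieves on normal data, can be sketched with a linear autoencoder standing in for the LSTM denoising autoencoder. All data and dimensions below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Normal" audio frames lie near a 3-dimensional subspace of a 20-dim
# feature space (a crude stand-in for structure an autoencoder learns).
basis = rng.normal(size=(20, 3))
normal = rng.normal(size=(200, 3)) @ basis.T + 0.1 * rng.normal(size=(200, 20))

# Linear autoencoder via SVD: encode/decode with top principal components.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
W = Vt[:3]                             # 3-unit bottleneck

def recon_error(frame):
    """Reconstruction error = the activation signal for novelty detection."""
    z = (frame - mean) @ W.T                               # encode
    return float(np.linalg.norm((frame - mean) - z @ W))   # decode + compare

errors = np.array([recon_error(f) for f in normal])
threshold = errors.mean() + 3.0 * errors.std()

novel_frame = rng.normal(0.0, 3.0, size=20)    # frame off the subspace
```

A frame the model cannot reconstruct well is, by construction, unlike anything seen during training, which is exactly what "novelty" means in this framework.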

  4. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection.

    Science.gov (United States)

    Marchi, Erik; Vesperini, Fabio; Squartini, Stefano; Schuller, Björn

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies have introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No studies have focused on comparing previous efforts to automatically recognize novel events from audio signals or on giving a broad, in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature, and to provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% average F-measure over the three databases.

  5. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
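    A toy version of such a scheme for a scalar plant, with a sigma-modification-type update law keeping the adaptive gain bounded. The plant, reference model, gains and time step are all invented, and simple Euler integration is used:

```python
import numpy as np

# Hypothetical scalar plant x' = a*x + u with unknown a; reference model
# xm' = -2*xm + r. The ideal feedback gain k* = a + 2 matches the model.
a_true, dt = 1.0, 0.005
gamma, sigma = 10.0, 0.01     # adaptation rate and sigma-modification leak
x = xm = k = 0.0

for i in range(40000):        # 200 s of Euler integration
    r = np.sin(0.5 * i * dt)  # persistently exciting reference signal
    u = -k * x + r            # certainty-equivalence control law
    e = x - xm                # tracking error
    x += dt * (a_true * x + u)
    xm += dt * (-2.0 * xm + r)
    k += dt * gamma * (e * x - sigma * k)  # sigma-modified adaptive law
```

The sigma term trades a small steady-state tracking error for boundedness of the gain, which is the robustness property the sigma-modification updating law is chosen for.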

  6. An Excitatory Neural Assembly Encodes Short-Term Memory in the Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Yonglu Tian

    2018-02-01

    Short-term memory (STM) is crucial for animals to hold information for small periods of time. Persistent or recurrent neural activity, together with neural oscillations, is known to encode STM at the cellular level. However, the coding mechanisms at the microcircuitry level remain a mystery. Here, we performed two-photon imaging on behaving mice to monitor the activity of neuronal microcircuitry. We discovered a neuronal subpopulation in the medial prefrontal cortex (mPFC) that exhibited emergent properties in a context-dependent manner underlying an STM-like behavior paradigm. These neuronal subpopulations exclusively comprise excitatory neurons and mainly represent a group of neurons with stronger functional connections. Microcircuitry plasticity was maintained for minutes and was absent in an animal model of Alzheimer's disease (AD). Thus, these results point to a functional coding mechanism that relies on the emergent behavior of a functionally defined neuronal assembly to encode STM.

  7. Postscript: More Problems with Botvinick and Plaut's (2006) PDP Model of Short-Term Memory

    Science.gov (United States)

    Bowers, Jeffrey S.; Damian, Markus F.; Davis, Colin J.

    2009-01-01

    Presents a postscript to the current authors' comment on the original article, "Short-term memory for serial order: A recurrent neural network model," by M. M. Botvinick and D. C. Plaut. In their commentary, the current authors demonstrated that Botvinick and Plaut's (2006) model of immediate serial recall catastrophically fails when familiar…

  8. Short-term streamflow forecasting with global climate change implications: A comparative study between genetic programming and neural network models

    Science.gov (United States)

    Makkeasorn, A.; Chang, N. B.; Zhou, X.

    2008-05-01

    Sustainable water resources management is a critically important priority across the globe. While water scarcity limits the uses of water in many ways, floods may also result in property damage and loss of life. To use the limited amount of water more efficiently in a changing world, and to provide adequate lead time for flood warning, these issues have led us to seek advanced techniques for improving short-term streamflow forecasting. This study emphasizes the inclusion of sea surface temperature (SST) in addition to the spatio-temporal rainfall distribution via the Next Generation Radar (NEXRAD), meteorological data via local weather stations, and historical stream data via USGS gage stations to collectively forecast discharges in a semi-arid watershed in south Texas. Two types of artificial intelligence models, genetic programming (GP) and neural network (NN) models, were employed comparatively. Four numerical evaluators were used to evaluate the validity of a suite of forecasting models. Research findings indicate that GP-derived streamflow forecasting models were generally favored in the assessment, and that both SST and meteorological data significantly improve forecasting accuracy. Among several scenarios, NEXRAD rainfall data proved most effective for a 3-day forecast, and the SST Gulf-to-Atlantic index showed a larger impact on the streamflow forecasts than the SST Gulf-to-Pacific index. The most forward-looking GP-derived models can even perform a 30-day streamflow forecast with an r-squared of 0.84 and an RMS error of 5.4 in our study.

  9. Neural networks. A new analytical tool, applicable also in nuclear technology

    Energy Technology Data Exchange (ETDEWEB)

    Stritar, A [Inst. Jozef Stefan, Ljubljana (Slovenia)

    1992-07-01

    The basic concept of neural networks and the backpropagation learning algorithm are described. The behaviour of a typical neural network is demonstrated on a simple graphical case. A short literature survey of the application of neural networks in nuclear science and engineering is made. The application of a neural network to probability density calculation is shown. (author)

  10. A model of microsaccade-related neural responses induced by short-term depression in thalamocortical synapses

    Directory of Open Access Journals (Sweden)

    Wujie Yuan

    2013-04-01

    Microsaccades during fixation have been suggested to counteract visual fading. Recent experiments have also observed microsaccade-related neural responses from cellular recordings, scalp electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI). The underlying mechanism, however, is not yet understood and is highly debated. It has been proposed that the neural activity of primary visual cortex (V1) is a crucial component for counteracting visual adaptation. In this paper, we use computational modeling to investigate how short-term depression (STD) in thalamocortical synapses might affect the neural responses of V1 in the presence of microsaccades. Our model not only gives a possible synaptic explanation for microsaccades counteracting visual fading, but also reproduces several features of experimental findings. These modeling results suggest that STD in thalamocortical synapses plays an important role in microsaccade-related neural responses, and the model may be useful for further investigation of behavioral properties and functional roles of microsaccades.
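    The depression mechanism the model builds on can be sketched with a Tsodyks-Markram-style resource variable; the parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def std_amplitudes(spike_times, U=0.5, tau_rec=0.8):
    """Short-term depression: each presynaptic spike releases a fraction U
    of the available resource x, which recovers toward 1 with time
    constant tau_rec (seconds)."""
    x, last_t, amps = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            # exponential recovery since the previous spike
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)
        amps.append(U * x)     # postsynaptic efficacy of this spike
        x -= U * x             # resource consumed by the release
        last_t = t
    return amps

# Sustained 20 Hz drive progressively depresses successive responses.
amps = std_amplitudes(np.arange(0.0, 0.5, 0.05))
```

Under steady input the efficacy decays toward a depressed steady state; an input transient such as a microsaccade-induced burst finds partially recovered resources and so evokes an enhanced response.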

  11. A model of microsaccade-related neural responses induced by short-term depression in thalamocortical synapses

    Science.gov (United States)

    Yuan, Wu-Jie; Dimigen, Olaf; Sommer, Werner; Zhou, Changsong

    2013-01-01

    Microsaccades during fixation have been suggested to counteract visual fading. Recent experiments have also observed microsaccade-related neural responses from cellular record, scalp electroencephalogram (EEG), and functional magnetic resonance imaging (fMRI). The underlying mechanism, however, is not yet understood and highly debated. It has been proposed that the neural activity of primary visual cortex (V1) is a crucial component for counteracting visual adaptation. In this paper, we use computational modeling to investigate how short-term depression (STD) in thalamocortical synapses might affect the neural responses of V1 in the presence of microsaccades. Our model not only gives a possible synaptic explanation for microsaccades in counteracting visual fading, but also reproduces several features in experimental findings. These modeling results suggest that STD in thalamocortical synapses plays an important role in microsaccade-related neural responses and the model may be useful for further investigation of behavioral properties and functional roles of microsaccades. PMID:23630494

  12. The Neural Substrates of Recognition Memory for Verbal Information: Spanning the Divide between Short- and Long-Term Memory

    Science.gov (United States)

    Buchsbaum, Bradley R.; Padmanabhan, Aarthi; Berman, Karen Faith

    2011-01-01

    One of the classic categorical divisions in the history of memory research is that between short-term and long-term memory. Indeed, because memory for the immediate past (a few seconds) and memory for the relatively more remote past (several seconds and beyond) are assumed to rely on distinct neural systems, more often than not, memory research…

  13. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
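    The reduction of decoding to classification can be illustrated on the smallest possible example, a distance-3 repetition code, where the syndrome-to-correction map that a feedforward network would be trained to learn can be written out as a lookup table:

```python
# Distance-3 bit-flip repetition code: stabilizers Z0Z1 and Z1Z2.
# Decoding = classifying the 2-bit syndrome into a correction, which is
# the mapping a feedforward network would be trained to reproduce.

def syndrome(err):
    """err is the X-flip pattern on the 3 data qubits."""
    return (err[0] ^ err[1], err[1] ^ err[2])

# Most-likely correction for each syndrome (weight-0 or weight-1 error).
decoder = {(0, 0): (0, 0, 0),
           (1, 0): (1, 0, 0),
           (1, 1): (0, 1, 0),
           (0, 1): (0, 0, 1)}

# Applying the decoded correction removes every error of weight <= 1.
residuals = []
for err in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    corr = decoder[syndrome(err)]
    residuals.append(tuple(e ^ c for e, c in zip(err, corr)))
```

For surface codes at larger distances the table grows exponentially, which is why the record replaces the lookup with a trained classifier that generalizes from a subset of syndromes.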

  14. Mittag-Leffler synchronization of fractional neural networks with time-varying delays and reaction-diffusion terms using impulsive and linear controllers.

    Science.gov (United States)

    Stamova, Ivanka; Stamov, Gani

    2017-12-01

    In this paper, we propose a fractional-order neural network system with time-varying delays and reaction-diffusion terms. We first develop a new Mittag-Leffler synchronization strategy for the controlled nodes via impulsive controllers, and give sufficient conditions using the fractional Lyapunov method. We also study the global Mittag-Leffler synchronization of two identical fractional impulsive reaction-diffusion neural networks using linear controllers, which was an open problem even for integer-order models. Since the Mittag-Leffler stability notion is a generalization of the exponential stability concept to fractional-order systems, our results extend and improve the exponential impulsive control theory of neural network systems with time-varying delays and reaction-diffusion terms to the fractional-order case. The fractional-order derivatives allow us to model long-term memory in the neural networks, and thus the present research provides a conceptually straightforward mathematical representation of rather complex processes. Illustrative examples are presented to show the validity of the obtained results. We show that by means of appropriate impulsive controllers we can achieve the stability goal and control the qualitative behavior of the states. An image encryption scheme is extended using fractional derivatives. Copyright © 2017 Elsevier Ltd. All rights reserved.
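    The long-term memory property of fractional derivatives mentioned above is visible in the Grünwald-Letnikov approximation, whose slowly decaying weights make every past value contribute. A sketch checking the order-0.5 derivative of f(t) = t against its closed form t^0.5 / Gamma(1.5):

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative:
    D^a f(t) ~ h^(-a) * sum_j c_j * f(t - j*h), where the weights
    c_j = (-1)^j * C(alpha, j) decay slowly, so the whole history of f
    contributes (the 'long-term memory' of fractional operators)."""
    n = int(t / h)
    c, total = 1.0, f(t)
    for j in range(1, n + 1):
        c *= 1.0 - (alpha + 1.0) / j      # recursive binomial weights
        total += c * f(t - j * h)
    return total / h ** alpha

approx = gl_fractional_derivative(lambda x: x, 0.5, 1.0)
exact = 1.0 / math.gamma(1.5)   # D^0.5 of t at t=1 is t^0.5 / Gamma(1.5)
```

By contrast, an integer-order derivative depends only on the local neighbourhood of t, which is why fractional models capture memory effects that integer-order neural network models cannot.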

  15. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network

    Directory of Open Access Journals (Sweden)

    Ying Yu

    2017-01-01

    Full Text Available With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate in forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages (SARIMA) model to exclude the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used in other models and compare the results. This also proved that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient.

  16. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper, a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, is presented for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks, a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison, three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrates the applicability of the Polychronous Spiking Network to financial data forecasting, which in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.

  17. A New Strategy for Short-Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Yi Yang

    2013-01-01

    Full Text Available Electricity is a special form of energy that is hard to store, so electricity demand forecasting remains an important problem. Accurate short-term load forecasting (STLF) plays a vital role in power systems because it is an essential part of power system planning and operation, and it is also fundamental to many applications. Considering that an individual forecasting model usually cannot work very well for STLF, a hybrid model based on the seasonal ARIMA model and a BP neural network is presented in this paper to improve forecasting accuracy. First, the seasonal ARIMA model is adopted to forecast the electric load demand one day ahead; then, using the residual load demand series obtained in this forecasting process as the original series, the follow-up residual series is forecasted by a BP neural network; finally, by summing the forecasted residual series and the load demand series forecasted by the seasonal ARIMA model, the final load demand forecast is obtained. Case studies show that the new strategy is quite useful for improving the accuracy of STLF.
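The two-stage strategy can be sketched with simplified stand-ins: seasonal hourly means in place of the seasonal ARIMA model, and a least-squares AR(2) in place of the BP network. The synthetic load series and all parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
period = 24                      # hours per day
t = np.arange(24 * 30)           # 30 days of hourly load
load = 100 + 20 * np.sin(2 * np.pi * t / period) + rng.normal(0, 2, t.size)

# Stage 1: seasonal model (stand-in for SARIMA) -- mean load per hour-of-day.
seasonal = np.array([load[t % period == h].mean() for h in range(period)])
residual = load - seasonal[t % period]

# Stage 2: residual model (stand-in for the BP network) -- AR(2) by least squares.
X = np.column_stack([residual[1:-1], residual[:-2]])  # lags 1 and 2
phi, *_ = np.linalg.lstsq(X, residual[2:], rcond=None)

# Final forecast = seasonal forecast + residual forecast, summed.
next_hour = (t[-1] + 1) % period
forecast = seasonal[next_hour] + phi @ np.array([residual[-1], residual[-2]])
print(round(float(forecast), 1))
```

The key design choice the abstract describes is additive: each stage models only what the previous stage left unexplained, so the two forecasts can simply be summed.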

  18. Wind Power Forecasting Based on Echo State Networks and Long Short-Term Memory

    Directory of Open Access Journals (Sweden)

    Erick López

    2018-02-01

    Full Text Available Wind power generation has seen important development around the world. However, its integration into electrical systems presents numerous challenges due to the variable nature of the wind. Therefore, to maintain an economical and reliable electricity supply, it is necessary to accurately predict wind generation. The Wind Power Prediction Tool (WPPT) has been proposed to solve this task using the power curve associated with a wind farm. Recurrent Neural Networks (RNNs) model complex non-linear relationships without requiring explicit mathematical expressions that relate the variables involved. In particular, two types of RNN, Long Short-Term Memory (LSTM) and Echo State Network (ESN), have shown good results in time series forecasting. In this work, we present an LSTM+ESN architecture that combines the characteristics of both networks. An architecture similar to an ESN is proposed, but using LSTM blocks as units in the hidden layer. The training process of this network has two key stages: (i) the hidden layer is trained online with a gradient descent method using one epoch; (ii) the output layer is adjusted with a regularized regression. In particular, we propose using the input signal as the target in Step (i), in order to extract features automatically as in the autoencoder approach; and in Step (ii), a quantile regression is used in order to obtain a robust estimate of the expected target. The experimental results show that LSTM+ESN using the autoencoder and quantile regression outperforms the WPPT model in all global metrics used.
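The pinball loss behind the quantile regression in stage (ii) can be written in a few lines. This is the standard definition, not code from the paper:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    # Under-predictions are weighted by tau, over-predictions by (1 - tau),
    # so the minimizing constant predictor is the tau-quantile of y_true.
    diff = y_true - y_pred
    return float(np.mean(np.maximum(tau * diff, (tau - 1) * diff)))

# With tau = 0.5 the loss is robust to outliers: the median beats the mean.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
print(pinball_loss(y, np.median(y), 0.5) < pinball_loss(y, np.mean(y), 0.5))  # True
```

This robustness to outliers is what makes quantile regression a "robust estimate of the expected target" for noisy wind power data.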

  19. Application of hierarchical dissociated neural network in closed-loop hybrid system integrating biological and mechanical intelligence.

    Directory of Open Access Journals (Sweden)

    Yongcheng Li

    Full Text Available Neural networks are considered the origin of intelligence in organisms. In this paper, a new design of an intelligent system merging biological intelligence with artificial intelligence was created. It was based on a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were utilized as the neural controller including 'random' and '4Q' (cultured neurons artificially divided into four interconnected parts) neural network. Compared to the random cultures, the '4Q' cultures presented absolutely different activities, and the robot controlled by the '4Q' network presented better capabilities in search tasks. Our results showed that neural cultures could be successfully employed to control an artificial agent; the robot performed better and better with the stimulus because of the short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and develop new strategies for a future intelligent system using these simplified model systems.

  20. Application of Hierarchical Dissociated Neural Network in Closed-Loop Hybrid System Integrating Biological and Mechanical Intelligence

    Science.gov (United States)

    Zhang, Bin; Wang, Yuechao; Li, Hongyi

    2015-01-01

    Neural networks are considered the origin of intelligence in organisms. In this paper, a new design of an intelligent system merging biological intelligence with artificial intelligence was created. It was based on a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were utilized as the neural controller including ‘random’ and ‘4Q’ (cultured neurons artificially divided into four interconnected parts) neural network. Compared to the random cultures, the ‘4Q’ cultures presented absolutely different activities, and the robot controlled by the ‘4Q’ network presented better capabilities in search tasks. Our results showed that neural cultures could be successfully employed to control an artificial agent; the robot performed better and better with the stimulus because of the short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and develop new strategies for a future intelligent system using these simplified model systems. PMID:25992579

  1. Application of hierarchical dissociated neural network in closed-loop hybrid system integrating biological and mechanical intelligence.

    Science.gov (United States)

    Li, Yongcheng; Sun, Rong; Zhang, Bin; Wang, Yuechao; Li, Hongyi

    2015-01-01

    Neural networks are considered the origin of intelligence in organisms. In this paper, a new design of an intelligent system merging biological intelligence with artificial intelligence was created. It was based on a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were utilized as the neural controller including 'random' and '4Q' (cultured neurons artificially divided into four interconnected parts) neural network. Compared to the random cultures, the '4Q' cultures presented absolutely different activities, and the robot controlled by the '4Q' network presented better capabilities in search tasks. Our results showed that neural cultures could be successfully employed to control an artificial agent; the robot performed better and better with the stimulus because of the short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and develop new strategies for a future intelligent system using these simplified model systems.

  2. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

    Full Text Available In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes go into saturation. The early creation of such hidden nodes may impair generalisation. Hence, an entropy approach is proposed to dampen the early creation of such nodes. The entropy learning also helps to increase the importance of relevant nodes while dampening the less important nodes. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
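One plausible form of such an entropy term (an assumption; the paper's exact formulation may differ) treats each sigmoid hidden activation as a pseudo-probability and rewards high entropy, so that units saturating towards 0 or 1 early in training raise the loss:

```python
import numpy as np

def binary_entropy(a, eps=1e-12):
    # Entropy of a sigmoid activation a in (0, 1); maximal at a = 0.5,
    # near zero when the unit saturates towards 0 or 1.
    a = np.clip(a, eps, 1 - eps)
    return -(a * np.log(a) + (1 - a) * np.log(1 - a))

def entropy_regularized_loss(task_loss, hidden_activations, lam=0.01):
    # Subtracting the mean activation entropy makes saturated hidden layers
    # costlier, dampening the early creation of saturated nodes.
    return task_loss - lam * float(np.mean(binary_entropy(hidden_activations)))

mid = np.array([0.5, 0.4, 0.6])              # healthy, high-entropy units
saturated = np.array([0.999, 0.001, 0.998])  # saturated units
print(entropy_regularized_loss(1.0, saturated) > entropy_regularized_loss(1.0, mid))  # True
```

At the end of training, units whose activations carry consistently low entropy contribute little and are candidates for pruning, matching the abstract's memory-reduction step.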

  3. Wind Power Forecasting Based on Echo State Networks and Long Short-Term Memory

    DEFF Research Database (Denmark)

    López, Erick; Allende, Héctor; Gil, Esteban

    2018-01-01

    involved. In particular, two types of RNN, Long Short-Term Memory (LSTM) and Echo State Network (ESN), have shown good results in time series forecasting. In this work, we present an LSTM+ESN architecture that combines the characteristics of both networks. An architecture similar to an ESN is proposed...

  4. Delay-dependent stability of neural networks of neutral type with time delay in the leakage term

    International Nuclear Information System (INIS)

    Li, Xiaodi; Cao, Jinde

    2010-01-01

    This paper studies the global asymptotic stability of neural networks of neutral type with mixed delays. The mixed delays include constant delay in the leakage term (i.e. 'leakage delay'), time-varying delays and continuously distributed delays. Based on the topological degree theory, Lyapunov method and linear matrix inequality (LMI) approach, some sufficient conditions are derived ensuring the existence, uniqueness and global asymptotic stability of the equilibrium point, which are dependent on both the discrete and distributed time delays. These conditions are expressed in terms of LMI and can be easily checked by the MATLAB LMI toolbox. Even if there is no leakage delay, the obtained results are less restrictive than some recent works. It can be applied to neural networks of neutral type with activation functions without assuming their boundedness, monotonicity or differentiability. Moreover, the differentiability of the time-varying delay in the non-neutral term is removed. Finally, two numerical examples are given to show the effectiveness of the proposed method

  5. Neural networks for perception human and machine perception

    CERN Document Server

    Wechsler, Harry

    1991-01-01

    Neural Networks for Perception, Volume 1: Human and Machine Perception focuses on models for understanding human perception in terms of distributed computation and examples of PDP models for machine perception. This book addresses both theoretical and practical issues related to the feasibility of both explaining human perception and implementing machine perception in terms of neural network models. The book is organized into two parts. The first part focuses on human perception. Topics on network model ofobject recognition in human vision, the self-organization of functional architecture in t

  6. Precipitation Nowcast using Deep Recurrent Neural Network

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2016-12-01

    An accurate precipitation nowcast (0-6 hours) with a fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most of the popular approaches used for forecasting precipitation fall into two groups. One type of precipitation forecast relies on numerical modeling of the physical dynamics of the atmosphere, and the other is based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful Deep Recurrent Neural Network, the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. Then, a 0-6 hour precipitation nowcast is produced using the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, in which the CTBT nowcast is used as the PERSIANN algorithm's raw input. Two case studies over the continental U.S. have been conducted that demonstrate the improvement of the proposed approach compared to a classical Feed Forward Neural Network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, handling of vanishing gradients during model learning, and working with sparse data. The studies show that the LSTM model performs better than the other methods, and it is able to learn the temporal evolution of precipitation events over more than 1000 time lags. The uniqueness of PERSIANN's algorithm enables an alternative precipitation nowcast approach as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.

  7. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approach the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  8. Wind Resource Assessment and Forecast Planning with Neural Networks

    Directory of Open Access Journals (Sweden)

    Nicolus K. Rotich

    2014-06-01

    Full Text Available In this paper we built three types of artificial neural networks, namely feed-forward networks, Elman networks and cascade-forward networks, for forecasting wind speeds and directions. A similar network topology was used for all the forecast horizons, regardless of the model type. All the models were then trained with real data of wind speeds and directions collected over a period of two years in the municipality of Puumala, Finland. Up to the 70th percentile of the data was used for training, validation and testing, while the 71st-85th percentiles were presented to the trained models for validation. The model outputs were then compared to the last 15% of the original data by measuring the statistical errors between them. The feed-forward networks returned the lowest errors for wind speeds, cascade-forward networks gave the lowest errors for wind directions, and Elman networks returned the lowest errors when used for short-term forecasting.

  9. Brain oscillatory substrates of visual short-term memory capacity.

    Science.gov (United States)

    Sauseng, Paul; Klimesch, Wolfgang; Heise, Kirstin F; Gruber, Walter R; Holz, Elisa; Karim, Ahmed A; Glennon, Mark; Gerloff, Christian; Birbaumer, Niels; Hummel, Friedhelm C

    2009-11-17

    The amount of information that can be stored in visual short-term memory is strictly limited to about four items. Therefore, memory capacity relies not only on the successful retention of relevant information but also on efficient suppression of distracting information, visual attention, and executive functions. However, completely separable neural signatures for these memory capacity-limiting factors remain to be identified. Because of its functional diversity, oscillatory brain activity may offer a utile solution. In the present study, we show that capacity-determining mechanisms, namely retention of relevant information and suppression of distracting information, are based on neural substrates independent of each other: the successful maintenance of relevant material in short-term memory is associated with cross-frequency phase synchronization between theta (rhythmical neural activity around 5 Hz) and gamma (> 50 Hz) oscillations at posterior parietal recording sites. On the other hand, electroencephalographic alpha activity (around 10 Hz) predicts memory capacity based on efficient suppression of irrelevant information in short-term memory. Moreover, repetitive transcranial magnetic stimulation at alpha frequency can modulate short-term memory capacity by influencing the ability to suppress distracting information. Taken together, the current study provides evidence for a double dissociation of brain oscillatory correlates of visual short-term memory capacity.

  10. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and a family of correlation curves would normally be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.

  11. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  12. Neural dynamics underlying attentional orienting to auditory representations in short-term memory.

    Science.gov (United States)

    Backer, Kristina C; Binns, Malcolm A; Alain, Claude

    2015-01-21

    Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors.

  13. Different propagation speeds of recalled sequences in plastic spiking neural networks

    Science.gov (United States)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

    Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas, and are crucial, for example, for the coding of episodic memory in the hippocampus and for generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation, in coordinates of the retinotopically organized neural tissue, was constant during retrieval regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing-dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other, and how network and learning parameters influence retrieval speeds, however, is not well described. We here theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1, since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow the change in stimulus speeds. This prediction could be tested in
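A multiplicative nearest-neighbour STDP rule of the kind described can be sketched as a single weight update. This is one common parameterization; the amplitudes and time constant below are illustrative, not the paper's values:

```python
import numpy as np

def nn_multiplicative_stdp(w, dt, w_max=1.0, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Nearest-neighbour rule: only the closest pre/post spike pair contributes.
    # dt = t_post - t_pre in ms. Multiplicative scaling: potentiation shrinks
    # with the remaining headroom (w_max - w), depression with the weight itself,
    # which keeps w inside [0, w_max] and shapes the weight distribution.
    if dt > 0:   # pre before post: potentiation
        return w + a_plus * (w_max - w) * np.exp(-dt / tau)
    else:        # post before pre: depression
        return w - a_minus * w * np.exp(dt / tau)

w = 0.5
print(nn_multiplicative_stdp(w, 5.0) > w, nn_multiplicative_stdp(w, -5.0) < w)  # True True
```

The multiplicative weight dependence, rather than an additive one, is what produces the unimodal weight distributions the study found to best match the V1 recordings.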

  14. Representation of neutron noise data using neural networks

    International Nuclear Information System (INIS)

    Korsah, K.; Damiano, B.; Wood, R.T.

    1992-01-01

    This paper describes a neural network-based method of representing neutron noise spectra using a model developed at the Oak Ridge National Laboratory (ORNL). The backpropagation neural network learned to represent neutron noise data in terms of four descriptors, and the network response matched calculated values to within 3.5 percent. These preliminary results are encouraging, and further research is directed towards the application of neural networks in a diagnostics system for the identification of the causes of changes in structural spectral resonances. This work is part of our current investigation of advanced technologies such as expert systems and neural networks for neutron noise data reduction, analysis, and interpretation. The objective is to improve the state-of-the-art of noise analysis as a diagnostic tool for nuclear power plants and other mechanical systems

  15. An interpretable LSTM neural network for autoregressive exogenous model

    OpenAIRE

    Guo, Tian; Lin, Tao; Lu, Yao

    2018-01-01

    In this paper, we propose an interpretable LSTM recurrent neural network, i.e., multi-variable LSTM for time series with exogenous variables. Currently, widely used attention mechanism in recurrent neural networks mostly focuses on the temporal aspect of data and falls short of characterizing variable importance. To this end, our multi-variable LSTM equipped with tensorized hidden states is developed to learn variable specific representations, which give rise to both temporal and variable lev...

  16. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Xike Zhang

    2018-05-01

    Full Text Available Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less history data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijaing stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and their comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models.

  17. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei

    2018-05-21

    Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less history data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijaing stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and their comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models.
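The decompose-predict-aggregate structure of such hybrids can be sketched with stand-ins: a moving-average split in place of EEMD's IMFs, and a one-step AR(1) fit in place of the per-component LSTMs. The synthetic temperature series below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(365.0)
lst = 15 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.5, t.size)  # daily LST

# "Decompose" into slow and fast components (stand-in for EEMD's IMFs + residue).
k = 15
padded = np.pad(lst, k // 2, mode="edge")
slow = np.convolve(padded, np.ones(k) / k, mode="valid")  # low-frequency part
fast = lst - slow                                          # high-frequency part

def ar1_forecast(x):
    # One-step least-squares AR(1) forecast, standing in for a per-component LSTM.
    phi = float(x[:-1] @ x[1:]) / float(x[:-1] @ x[:-1])
    return phi * x[-1]

# Predict each component separately, then aggregate by summation.
forecast = ar1_forecast(slow) + ar1_forecast(fast)
print(round(float(forecast), 1))
```

Because the split is additive (the components sum back to the original series), the per-component forecasts can likewise simply be summed, which is exactly the aggregation step the abstract describes.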

  18. Neural networks and orbit control in accelerators

    International Nuclear Information System (INIS)

    Bozoki, E.; Friedman, A.

    1994-01-01

    An overview of the architecture, workings and training of Neural Networks is given. We stress the aspects which are important for the use of Neural Networks for orbit control in accelerators and storage rings, especially its ability to cope with the nonlinear behavior of the orbit response to 'kicks' and the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given

  19. Hippocampal and posterior parietal contributions to developmental increases in visual short-term memory capacity.

    Science.gov (United States)

    von Allmen, David Yoh; Wurmitzer, Karoline; Klaver, Peter

    2014-10-01

    Developmental increases in visual short-term memory (VSTM) capacity have been associated with changes in attention processing limitations and changes in neural activity within neural networks including the posterior parietal cortex (PPC). A growing body of evidence suggests that the hippocampus plays a role in VSTM, but it is unknown whether the hippocampus contributes to the capacity increase across development. We investigated the functional development of the hippocampus and PPC in 57 children, adolescents and adults (age 8-27 years) who performed a visuo-spatial change detection task. A negative relationship between age and VSTM related activity was found in the right posterior hippocampus that was paralleled by a positive age-activity relationship in the right PPC. In the posterior hippocampus, VSTM related activity predicted individual capacity in children, whereas neural activity in the right anterior hippocampus predicted individual capacity in adults. The findings provide first evidence that VSTM development is supported by an integrated neural network that involves hippocampal and posterior parietal regions.

  20. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

A deep learning approach has been widely applied in sequence modeling problems. In automatic speech recognition (ASR), performance has been significantly improved by larger speech corpora and deeper neural networks. In particular, recurrent neural networks and deep convolutional neural networks have been applied in ASR successfully. Given the arising problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  1. Issues in the use of neural networks in information retrieval

    CERN Document Server

    Iatan, Iuliana F

    2017-01-01

    This book highlights the ability of neural networks (NNs) to be excellent pattern matchers and their importance in information retrieval (IR), which is based on index term matching. The book defines a new NN-based method for learning image similarity and describes how to use fuzzy Gaussian neural networks to predict personality. It introduces the fuzzy Clifford Gaussian network, and two concurrent neural models: (1) concurrent fuzzy nonlinear perceptron modules, and (2) concurrent fuzzy Gaussian neural network modules. Furthermore, it explains the design of a new model of fuzzy nonlinear perceptron based on alpha level sets and describes a recurrent fuzzy neural network model with a learning algorithm based on the improved particle swarm optimization method.

  2. Bidirectional Long Short-Term Memory Network with a Conditional Random Field Layer for Uyghur Part-Of-Speech Tagging

    Directory of Open Access Journals (Sweden)

    Maihemuti Maimaiti

    2017-11-01

Uyghur is an agglutinative and morphologically rich language, so natural language processing tasks in Uyghur can be challenging. Word morphology is important in Uyghur part-of-speech (POS) tagging. However, POS tagging performance suffers from error propagation from morphological analyzers. To address this problem, we propose several models for POS tagging: conditional random fields (CRF), long short-term memory (LSTM) networks, bidirectional LSTM networks (BI-LSTM), LSTM networks with a CRF layer, and BI-LSTM networks with a CRF layer. These models do not depend on stemming or word disambiguation for Uyghur and combine hand-crafted features with neural network models. State-of-the-art performance on Uyghur POS tagging is achieved on test data sets using the proposed approach: 98.41% accuracy on 15 labels and 95.74% accuracy on 64 labels, which are improvements of 2.71% and 4%, respectively, over the CRF model results. Using engineered features, our model achieves further improvements of 0.2% (15 labels) and 0.48% (64 labels). The results indicate that the proposed method could be an effective approach for POS tagging in other morphologically rich languages.
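
    The decoding step of a CRF output layer can be sketched independently of the BI-LSTM that produces the scores. Below is a minimal numpy implementation of Viterbi decoding over per-token emission scores and a tag-transition matrix; the scores used in the example are illustrative stand-ins, not the paper's trained parameters.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Find the highest-scoring tag sequence for one sentence.

    emissions: (seq_len, num_tags) scores (e.g. from a BI-LSTM).
    transitions: (num_tags, num_tags) score of moving from tag i to tag j.
    """
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()              # best score ending in each tag
    backpointers = []
    for t in range(1, seq_len):
        # total[i, j] = best path ending in tag i, then stepping to tag j
        total = score[:, None] + transitions + emissions[t][None, :]
        backpointers.append(total.argmax(axis=0))
        score = total.max(axis=0)
    # trace back from the best final tag
    best_tag = int(score.argmax())
    path = [best_tag]
    for bp in reversed(backpointers):
        best_tag = int(bp[best_tag])
        path.append(best_tag)
    return list(reversed(path))
```

    A transition matrix that penalizes repeating a tag, for example, forces the decoder to alternate even when the emissions alone would not.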

  3. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  4. A new cascade NN based method to short-term load forecast in deregulated electricity market

    International Nuclear Information System (INIS)

    Kouhi, Sajjad; Keynia, Farshid

    2013-01-01

Highlights: • We propose a new hybrid cascaded-NN-based method with the wavelet transform (WT) for short-term load forecasting in a deregulated electricity market. • An efficient preprocessor consisting of normalization and shuffling of signals is presented. • In order to select the best inputs, a two-stage feature selection is presented. • A new cascaded structure consisting of three cascaded NNs is used as the forecaster. - Abstract: Short-term load forecasting (STLF) is a major topic in the efficient operation of power systems. The electricity load is a nonlinear signal with time-dependent behavior, and the area of electricity load forecasting still has an essential need for more accurate and stable forecast algorithms. To improve the accuracy of prediction, a new hybrid forecast strategy based on a cascaded neural network is proposed for STLF. This method consists of a wavelet transform, an intelligent two-stage feature selection, and a cascaded neural network. The feature selection is used to remove irrelevant and redundant inputs. The forecast engine is composed of a three-stage cascaded neural network (CNN) structure. This cascaded structure can efficiently extract the input/output mapping function of the nonlinear electricity load data. Adjustable parameters of the intelligent feature selection and the CNN are fine-tuned by a kind of cross-validation technique. The proposed STLF is tested on the PJM and New York electricity markets. It is concluded from the results that the proposed algorithm is a robust forecast method.
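
    The two-stage idea (first drop irrelevant inputs, then drop redundant ones) can be illustrated with a simple correlation-based filter. The thresholds and the use of linear correlation as the relevance measure are assumptions for illustration only; the paper's actual selection criterion is not specified here.

```python
import numpy as np

def two_stage_select(X, y, relevance_thr=0.2, redundancy_thr=0.9):
    """Toy two-stage filter: keep relevant features, drop redundant ones.

    X: (n_samples, n_features) candidate inputs; y: (n_samples,) target.
    """
    n_features = X.shape[1]
    # Stage 1: relevance -- absolute correlation with the target.
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    candidates = [j for j in np.argsort(-relevance)
                  if relevance[j] >= relevance_thr]
    # Stage 2: redundancy -- skip features too correlated with a kept one.
    selected = []
    for j in candidates:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < redundancy_thr
               for k in selected):
            selected.append(j)
    return sorted(selected)
```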

  5. Short-term wind speed prediction based on the wavelet transformation and Adaboost neural network

    Science.gov (United States)

    Hai, Zhou; Xiang, Zhu; Haijian, Shao; Ji, Wu

    2018-03-01

The operation of the power grid will inevitably be affected by the increasing scale of wind farms, owing to the inherent randomness and uncertainty of wind, so accurate wind speed forecasting is critical for stable grid operation. Typically, traditional forecasting methods do not take into account the frequency characteristics of wind speed and therefore cannot reflect the nature of wind speed signal changes, which results in low generalization ability of the model structure. An AdaBoost neural network combined with a multi-resolution, multi-scale decomposition of the wind speed is proposed to design the model structure in order to improve the forecasting accuracy and generalization ability. An experimental evaluation using data from a real wind farm in Jiangsu province demonstrates that the proposed strategy can improve the robustness and accuracy of the forecast variable.
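
    The multi-resolution decomposition step can be sketched with the Haar wavelet (the record does not name a mother wavelet, so Haar is an assumption): each level splits the signal into a smoothed trend and a detail band, and each band can then be forecast separately before recombination.

```python
import numpy as np

def haar_decompose(signal, levels):
    """One-dimensional Haar decomposition of a length-2^k signal.

    Returns (approximation, [details per level]).
    """
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # low-pass: trend
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)  # high-pass: detail
        details.append(d)
        approx = a
    return approx, details

def haar_reconstruct(approx, details):
    """Invert haar_decompose exactly."""
    a = approx
    for d in reversed(details):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a
```

    Because the transform is orthogonal, forecasting each band and summing the reconstructions loses no information relative to forecasting the raw series.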

  6. Estimating tree bole volume using artificial neural network models for four species in Turkey.

    Science.gov (United States)

    Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V

    2010-01-01

Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using artificial neural network (ANN) models. Neural networks offer a number of advantages, including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used, producing the back-propagation (BPANN) and cascade-correlation (CCANN) artificial neural network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques, including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented, and the advantages and limitations of each are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade-correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species, since they gave unbiased results and were superior to almost all methods in terms of error (%) expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
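
    For reference, Smalian's formula, used above to build the validation volumes, averages the end cross-sectional areas of each measured section. A direct implementation (units and section length here are illustrative):

```python
import math

def smalian_section_volume(d1, d2, length):
    """Smalian's formula: volume of one bole section from its end diameters.

    d1, d2: diameters at the two ends (m); length: section length (m).
    """
    a1 = math.pi * d1 ** 2 / 4.0  # cross-sectional area at one end
    a2 = math.pi * d2 ** 2 / 4.0  # cross-sectional area at the other end
    return (a1 + a2) / 2.0 * length

def bole_volume(diameters, section_length=1.0):
    """Aggregate sections measured at successive points along the stem."""
    return sum(smalian_section_volume(d1, d2, section_length)
               for d1, d2 in zip(diameters, diameters[1:]))
```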

  7. Open quantum generalisation of Hopfield neural networks

    Science.gov (United States)

    Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.

    2018-03-01

    We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.
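
    The classical model that this quantum generalisation starts from can be written in a few lines: Hebbian storage of ±1 patterns followed by sign-threshold updates. This is a sketch of the standard Hopfield network only, not of the Markovian open-quantum-system dynamics studied in the paper.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: W is the sum of pattern outer products, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Deterministic synchronous updates until a fixed point (or max steps)."""
    s = state.copy()
    for _ in range(steps):
        nxt = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s
```

    Stored patterns are fixed points of the update; a state near a stored pattern falls into it, which is the associative-memory behavior the paper generalises.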

  8. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  9. Anomaly Detection for Temporal Data using Long Short-Term Memory (LSTM)

    OpenAIRE

    Singh, Akash

    2017-01-01

    We explore the use of Long short-term memory (LSTM) for anomaly detection in temporal data. Due to the challenges in obtaining labeled anomaly datasets, an unsupervised approach is employed. We train recurrent neural networks (RNNs) with LSTM units to learn the normal time series patterns and predict future values. The resulting prediction errors are modeled to give anomaly scores. We investigate different ways of maintaining LSTM state, and the effect of using a fixed number of time steps on...
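
    The error-modeling step mentioned above can be sketched as follows: fit a Gaussian to the prediction errors observed on normal data, then score new errors by their (constant-free) negative log-likelihood. The Gaussian choice and the scoring form are assumptions for illustration; the thesis may model the errors differently.

```python
import numpy as np

def anomaly_scores(errors_train, errors_test):
    """Score test-time prediction errors against a Gaussian fitted on
    normal (training) errors; higher score means more anomalous.
    """
    mu = errors_train.mean()
    sigma = errors_train.std() + 1e-12  # guard against zero variance
    z = (errors_test - mu) / sigma
    # negative log-likelihood under N(mu, sigma^2), dropping constants
    return 0.5 * z ** 2
```

    A threshold on the score (chosen on held-out normal data) then turns the score into an anomaly decision.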

  10. Synchronization criteria for generalized reaction-diffusion neural networks via periodically intermittent control.

    Science.gov (United States)

    Gan, Qintao; Lv, Tianshi; Fu, Zhenhua

    2016-04-01

    In this paper, the synchronization problem for a class of generalized neural networks with time-varying delays and reaction-diffusion terms is investigated concerning Neumann boundary conditions in terms of p-norm. The proposed generalized neural networks model includes reaction-diffusion local field neural networks and reaction-diffusion static neural networks as its special cases. By establishing a new inequality, some simple and useful conditions are obtained analytically to guarantee the global exponential synchronization of the addressed neural networks under the periodically intermittent control. According to the theoretical results, the influences of diffusion coefficients, diffusion space, and control rate on synchronization are analyzed. Finally, the feasibility and effectiveness of the proposed methods are shown by simulation examples, and by choosing different diffusion coefficients, diffusion spaces, and control rates, different controlled synchronization states can be obtained.

  11. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray-level circles, in an attempt to make the neural network learn the inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in a least-squares sense. The main difference of this approach from existing ones relies on the fact that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. Considering different window sizes around a pixel simulates the multiscale operation. In the generalization phase, the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.
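
    The degradation used to build the training pairs (Gaussian low-pass blur plus additive noise at a preset rate) can be sketched as below. The kernel size and the additive-Gaussian noise model are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 1-D Gaussian kernel for separable blurring."""
    x = np.arange(size) - size // 2
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def degrade(image, sigma=1.0, noise_rate=0.05, seed=0):
    """Low-pass Gaussian blur plus additive noise, as in the training setup."""
    k = gaussian_kernel(5, sigma)
    # separable blur: convolve rows, then columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'),
                                  1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'),
                                  0, blurred)
    rng = np.random.default_rng(seed)
    return blurred + noise_rate * rng.standard_normal(image.shape)
```

    The (degraded, clean) pairs produced this way are exactly the supervised input/output pairs described above.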

  12. Short-term memory capacity in networks via the restricted isometry property.

    Science.gov (United States)

    Charles, Adam S; Yap, Han Lun; Rozell, Christopher J

    2014-06-01

    Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.
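
    The compressed-sensing phenomenon underlying this analysis can be illustrated with orthogonal matching pursuit: a sparse input pushed through a random linear map is recovered exactly from far fewer measurements than ambient dimensions. The choice of OMP as solver and all sizes below are illustrative assumptions; the letter's guarantees concern the restricted isometry property, not any particular algorithm.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat
```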

  13. Topology influences performance in the associative memory neural networks

    International Nuclear Information System (INIS)

    Lu Jianquan; He Juan; Cao Jinde; Gao Zhiqiang

    2006-01-01

To explore how topology affects performance in Hopfield-type associative memory neural networks (AMNNs), we studied the computational performance of neural networks with regular lattice, random, small-world, and scale-free structures. In this Letter, we found that the memory performance of neural networks obtained through asynchronous updating from 'larger' nodes to 'smaller' nodes is better than that obtained through asynchronous updating in random order, especially for the scale-free topology. The computational performance of associative memory neural networks linked by the above-mentioned network topologies with the same numbers of nodes (neurons) and edges (synapses) was studied. As the topologies become more random and less locally ordered, the performance of the associative memory neural network improves considerably. By comparison, we show that the regular lattice and the random network form two extremes in terms of pattern stability and retrievability. For a network, pattern stability and retrievability can be largely enhanced by adding a random component or some shortcuts to its structured component. According to the conclusions of this Letter, we can design associative memory neural networks with high performance and minimal interconnect requirements.

  14. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

Although the linear model has been widely used for the bushing model in vehicle suspension systems, it cannot express the nonlinear characteristics of a bushing in terms of amplitude and frequency. An artificial neural network model has been suggested to capture the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model is employed to represent linear stiffness and damping effects, and the artificial neural network algorithm is adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, in which sine excitations with different frequencies and amplitudes were applied. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to those of the linear model under several maneuvers.

  15. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

In experimental particle physics, pattern recognition problems, specifically for neural network methods, occur frequently in track finding or feature extraction. Track finding is a combinatorial optimization problem: given a set of points in Euclidean space, one attempts to reconstruct particle trajectories subject to smoothness constraints. The basic ingredients of a neural network are the N binary neurons and the synaptic strengths connecting them. In our case, the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates, for each neuron, the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments involved in the track reconstruction. An algorithm based on a Hopfield neural network has been developed and tested on track coordinates measured by a silicon microstrip tracking system.

  16. Deep Recurrent Neural Networks for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Abdulmajid Murad

    2017-11-01

Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  17. Deep Recurrent Neural Networks for Human Activity Recognition.

    Science.gov (United States)

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  18. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    KAUST Repository

    Onesto, Valentina; Cosentino, Carlo; Di Fabrizio, Enzo M.; Cesarelli, Mario; Amato, Francesco; Gentile, Francesco

    2016-01-01

Neurons are specialized, electrically excitable cells which use electrical and chemical signals to transmit and process information. Understanding how the cooperation of a great many neurons in a grid may modify, and perhaps improve, information quality, in contrast to a few neurons in isolation, is critical for the rational design of cell-material interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chip devices. In the present paper, we couple an integrate-and-fire model with information-theoretic variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network, in bits, as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant, reproducing the early stage of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the post-transmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, is largely enhanced in a resonance effect.

  19. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    KAUST Repository

    Onesto, Valentina

    2016-05-10

Neurons are specialized, electrically excitable cells which use electrical and chemical signals to transmit and process information. Understanding how the cooperation of a great many neurons in a grid may modify, and perhaps improve, information quality, in contrast to a few neurons in isolation, is critical for the rational design of cell-material interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chip devices. In the present paper, we couple an integrate-and-fire model with information-theoretic variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network, in bits, as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant, reproducing the early stage of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the post-transmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, is largely enhanced in a resonance effect.

  20. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    Directory of Open Access Journals (Sweden)

    Valentina Onesto

    2016-01-01

Neurons are specialized, electrically excitable cells which use electrical and chemical signals to transmit and process information. Understanding how the cooperation of a great many neurons in a grid may modify, and perhaps improve, information quality, in contrast to a few neurons in isolation, is critical for the rational design of cell-material interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chip devices. In the present paper, we couple an integrate-and-fire model with information-theoretic variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network, in bits, as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant, reproducing the early stage of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the post-transmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, is largely enhanced in a resonance effect.
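
    A toy version of the model's two ingredients: a leaky integrate-and-fire neuron whose output synapse depresses after each spike and recovers over the short-term depression time constant (Tsodyks-Markram-style depression). All parameter values below are illustrative, not those of the paper.

```python
def lif_with_depression(input_current, dt=1e-3, tau_m=0.02, v_th=1.0,
                        tau_rec=0.5, u=0.2):
    """Leaky integrate-and-fire neuron with a depressing output synapse.

    Each spike consumes a fraction u of synaptic resources r, which
    recover with time constant tau_rec. Returns spike times and the
    resource transmitted at each spike.
    """
    v, r = 0.0, 1.0
    spikes, transmitted = [], []
    for step, i_ext in enumerate(input_current):
        v += dt * (-v / tau_m + i_ext)   # leaky membrane integration
        r += dt * (1.0 - r) / tau_rec    # resource recovery toward 1
        if v >= v_th:
            spikes.append(step * dt)
            transmitted.append(u * r)    # release scales with resources
            r -= u * r                   # depression: deplete resources
            v = 0.0                      # reset after the spike
    return spikes, transmitted
```

    Under sustained drive, the transmitted amount per spike decays toward a steady state set by the balance between depletion and recovery, which is the latency/depression effect the simulations vary.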

  1. The Use of Artificial Neural Networks for Forecasting the Electric Demand of Stand-Alone Consumers

    Science.gov (United States)

    Ivanin, O. A.; Direktor, L. B.

    2018-05-01

    The problem of short-term forecasting of electric power demand of stand-alone consumers (small inhabited localities) situated outside centralized power supply areas is considered. The basic approaches to modeling the electric power demand depending on the forecasting time frame and the problems set, as well as the specific features of such modeling, are described. The advantages and disadvantages of the methods used for the short-term forecast of the electric demand are indicated, and difficulties involved in the solution of the problem are outlined. The basic principles of arranging artificial neural networks are set forth; it is also shown that the proposed method is preferable when the input information necessary for prediction is lacking or incomplete. The selection of the parameters that should be included into the list of the input data for modeling the electric power demand of residential areas using artificial neural networks is validated. The structure of a neural network is proposed for solving the problem of modeling the electric power demand of residential areas. The specific features of generation of the training dataset are outlined. The results of test modeling of daily electric demand curves for some settlements of Kamchatka and Yakutia based on known actual electric demand curves are provided. The reliability of the test modeling has been validated. A high value of the deviation of the modeled curve from the reference curve obtained in one of the four reference calculations is explained. The input data and the predicted power demand curves for the rural settlement of Kuokuiskii Nasleg are provided. The power demand curves were modeled for four characteristic days of the year, and they can be used in the future for designing a power supply system for the settlement. To enhance the accuracy of the method, a series of measures based on specific features of a neural network's functioning are proposed.

  2. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...
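
    The HMM side of such a hybrid can be sketched with the standard forward recursion, where the emission column for each observation plays the role of the state-specific network outputs. This is a classical stand-in for illustration; the HNN's global normalization and discriminative training are not reproduced here, and all numbers in the example are made up.

```python
import numpy as np

def forward(init, trans, emission_probs):
    """Forward algorithm: total probability of an observation sequence.

    init: (num_states,) initial state distribution.
    trans: (num_states, num_states) row-stochastic transition matrix.
    emission_probs: (seq_len, num_states), emission_probs[t, s] standing in
    for the state-specific network output for observation t in state s.
    """
    alpha = init * emission_probs[0]
    for t in range(1, emission_probs.shape[0]):
        alpha = (alpha @ trans) * emission_probs[t]
    return alpha.sum()
```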

  3. Character-level neural network for biomedical named entity recognition.

    Science.gov (United States)

    Gridach, Mourad

    2017-06-01

Biomedical named entity recognition (BNER), which extracts important named entities such as genes and proteins, is a challenging task in automated systems that mine knowledge in biomedical texts. Previous state-of-the-art systems required large amounts of task-specific knowledge in the form of feature engineering, lexicons, and data pre-processing to achieve high performance. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional long short-term memory (LSTM) and conditional random fields (CRF), eliminating the need for most feature engineering tasks. We evaluate our system on two datasets: the JNLPBA corpus and the BioCreAtIvE II Gene Mention (GM) corpus. We obtained state-of-the-art performance, outperforming the previous systems. To the best of our knowledge, we are the first to investigate the combination of deep neural networks, CRFs, word embeddings, and character-level representation in recognizing biomedical named entities. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Synapse:neural network for predict power consumption: users guide

    Energy Technology Data Exchange (ETDEWEB)

    Muller, C; Mangeas, M; Perrot, N

    1994-08-01

    SYNAPSE is a forecasting tool designed to predict power consumption in metropolitan France on the half-hour time scale. Some characteristics distinguish this forecasting model from those which already exist. In particular, it is composed of numerous neural networks. The idea of using many neural networks arose from past tests, which showed us that a single neural network is not able to solve the problem correctly. From this result, we decided to perform an unsupervised classification of the 24-hour consumption curves. From this classification, six classes appeared, linked with the days of the week: Mondays, Tuesdays, Wednesdays, Thursdays, Fridays, Saturdays, Sundays, holidays and bridge days. For each class and for each half hour, two multilayer perceptrons are built. Both forecast the power for one particular half hour, for a day belonging to the given class. The inputs of these two networks differ: the first (short-time forecasting) includes the powers for the most recent half hour and the relative power of the previous day; the second (medium-time forecasting) includes only the relative power of the previous day. A process connects the results of all the networks and allows one to forecast more than one half-hour in advance. In this process, short-time forecasting networks and medium-time forecasting networks are used differently. The first kind of neural network gives good results on the scale of one day. The second gives good forecasts for the next predicted powers. In this note, the organization of the SYNAPSE program is detailed and the user's menu is described. This first version of SYNAPSE works and should allow the APC group to evaluate its utility. (authors). 6 refs., 2 appends.
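    The bank of per-class, per-half-hour models described above can be sketched as follows. This is a minimal illustration, not the SYNAPSE code: plain linear least-squares models stand in for the multilayer perceptrons, and the consumption data is synthetic.

```python
import numpy as np

def fit_halfhour_model(lagged_powers, targets):
    """Fit one linear model (a stand-in for one of SYNAPSE's per-half-hour
    perceptrons) mapping lagged power features to the next half-hour power."""
    X = np.column_stack([lagged_powers, np.ones(len(lagged_powers))])  # bias term
    coef, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return coef

def predict_halfhour(coef, lagged_power_row):
    x = np.append(lagged_power_row, 1.0)
    return float(x @ coef)

# One model per (day-class, half-hour) slot, mirroring the SYNAPSE design.
rng = np.random.default_rng(0)
models = {}
for day_class in ["monday", "saturday"]:
    for half_hour in range(2):  # 48 slots in the real system; 2 here for brevity
        lags = rng.normal(size=(100, 3))
        y = lags @ np.array([0.5, 0.3, 0.2]) + 1.0  # synthetic consumption
        models[(day_class, half_hour)] = fit_halfhour_model(lags, y)

pred = predict_halfhour(models[("monday", 0)], np.array([1.0, 1.0, 1.0]))
```

    Chaining such models, feeding each prediction back in as the most recent lag, gives the more-than-one-half-hour-ahead forecasting the note describes.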

  5. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Most modern analyses in high energy physics use signal-versus-background classification techniques from machine learning, and neural networks in particular. Deep learning neural networks are the most promising modern technique to separate signal from background and nowadays can be widely and successfully implemented as part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  6. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    In this paper, the adaptive competitive learning (ACL) neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. The algorithm uses a newly proposed criterion referred to as the ACL criterion, which evaluates different clustering structures produced by the ACL neural network for an input data set, and then selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm in the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm, especially with data sets that are sparsely distributed.
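    The competitive-learning core of such a network can be sketched in a few lines. This is a hedged illustration of plain winner-take-all competitive learning only; the ACL criterion for selecting the number of clusters is not reproduced here.

```python
import numpy as np

def competitive_learning(data, n_units, lr=0.1, epochs=50, seed=0):
    """Winner-take-all competitive learning: each input vector pulls its
    nearest synaptic weight vector toward itself by a fraction lr."""
    rng = np.random.default_rng(seed)
    weights = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in data:
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            weights[winner] += lr * (x - weights[winner])
    return weights

# Two well-separated blobs: the two units should settle near the blob centres.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2))
blob_b = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(50, 2))
data = np.vstack([blob_a, blob_b])
w = competitive_learning(data, n_units=2)
```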

  7. Short-term electricity prices forecasting in a competitive market by a hybrid intelligent approach

    Energy Technology Data Exchange (ETDEWEB)

    Catalao, J.P.S. [Department of Electromechanical Engineering, University of Beira Interior, R. Fonte do Lameiro, 6201-001 Covilha (Portugal); Center for Innovation in Electrical and Energy Engineering, Instituto Superior Tecnico, Technical University of Lisbon, Av. Rovisco Pais, 1049-001 Lisbon (Portugal); Pousinho, H.M.I. [Department of Electromechanical Engineering, University of Beira Interior, R. Fonte do Lameiro, 6201-001 Covilha (Portugal); Mendes, V.M.F. [Department of Electrical Engineering and Automation, Instituto Superior de Engenharia de Lisboa, R. Conselheiro Emidio Navarro, 1950-062 Lisbon (Portugal)

    2011-02-15

    In this paper, a hybrid intelligent approach is proposed for short-term electricity prices forecasting in a competitive market. The proposed approach is based on the wavelet transform and a hybrid of neural networks and fuzzy logic. Results from a case study based on the electricity market of mainland Spain are presented. A thorough comparison is carried out, taking into account the results of previous publications. Conclusions are duly drawn. (author)

  8. Short-term electricity prices forecasting in a competitive market by a hybrid intelligent approach

    International Nuclear Information System (INIS)

    Catalao, J.P.S.; Pousinho, H.M.I.; Mendes, V.M.F.

    2011-01-01

    In this paper, a hybrid intelligent approach is proposed for short-term electricity prices forecasting in a competitive market. The proposed approach is based on the wavelet transform and a hybrid of neural networks and fuzzy logic. Results from a case study based on the electricity market of mainland Spain are presented. A thorough comparison is carried out, taking into account the results of previous publications. Conclusions are duly drawn. (author)

  9. Medical Concept Normalization in Social Media Posts with Recurrent Neural Networks.

    Science.gov (United States)

    Tutubalina, Elena; Miftahutdinov, Zulfat; Nikolenko, Sergey; Malykh, Valentin

    2018-06-12

    Text mining of scientific libraries and social media has already proven itself as a reliable tool for drug repurposing and hypothesis generation. The task of mapping a disease mention to a concept in a controlled vocabulary, typically to the standard thesaurus in the Unified Medical Language System (UMLS), is known as medical concept normalization. This task is challenging due to the differences in the use of medical terminology between health care professionals and social media texts coming from the lay public. To bridge this gap, we use sequence learning with recurrent neural networks and semantic representation of one- or multi-word expressions: we develop end-to-end architectures directly tailored to the task, including bidirectional Long Short-Term Memory, Gated Recurrent Units with an attention mechanism, and additional semantic similarity features based on UMLS. Our evaluation against a standard benchmark shows that recurrent neural networks improve results over an effective baseline for classification based on convolutional neural networks. A qualitative examination of mentions discovered in a dataset of user reviews collected from popular online health information platforms as well as a quantitative evaluation both show improvements in the semantic representation of health-related expressions in social media. Copyright © 2018. Published by Elsevier Inc.
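    The normalization step itself reduces to mapping a mention's semantic representation onto the nearest controlled-vocabulary concept. The sketch below uses averaged toy word vectors and cosine similarity as a stand-in for the recurrent encoders in the record; the embeddings are invented, and the UMLS concept labels are illustrative.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def normalize_mention(mention_tokens, word_vectors, concept_vectors):
    """Map a free-text mention to the closest controlled-vocabulary concept
    by cosine similarity of averaged word embeddings."""
    vec = np.mean([word_vectors[t] for t in mention_tokens], axis=0)
    return max(concept_vectors, key=lambda c: cosine(vec, concept_vectors[c]))

# Toy embeddings placing lay terms near their medical concept.
word_vectors = {
    "head": np.array([1.0, 0.1]),
    "hurts": np.array([0.9, 0.2]),
    "tired": np.array([0.1, 1.0]),
}
concept_vectors = {
    "Headache": np.array([1.0, 0.0]),
    "Fatigue": np.array([0.0, 1.0]),
}
concept = normalize_mention(["head", "hurts"], word_vectors, concept_vectors)
```

    The paper's contribution lies in how the mention vector is produced (bidirectional LSTMs and GRUs with attention, plus UMLS-based similarity features); the nearest-concept lookup shown here is the common final step.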

  10. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    Directory of Open Access Journals (Sweden)

    Haiyang Yu

    2017-06-01

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.
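    The grid-representation step, converting per-link speeds into image-like frames for the DCNN, can be sketched as follows. This is a minimal illustration under the assumption that each link has already been assigned a grid cell (in the paper this comes from the network's geometry); the link-to-cell map and speeds below are toy data.

```python
import numpy as np

def speeds_to_frames(speed_series, link_to_cell, grid_shape):
    """Convert per-link speed vectors into a stack of 2-D 'images', one per
    time step, by writing each link's speed into its assigned grid cell."""
    frames = np.zeros((len(speed_series),) + grid_shape)
    for t, speeds in enumerate(speed_series):
        for link, speed in enumerate(speeds):
            frames[t][link_to_cell[link]] = speed
    return frames

# Three links mapped onto a 2x2 grid, observed at two time steps.
link_to_cell = {0: (0, 0), 1: (0, 1), 2: (1, 1)}
series = [np.array([30.0, 45.0, 60.0]), np.array([25.0, 50.0, 55.0])]
frames = speeds_to_frames(series, link_to_cell, (2, 2))
```

    The resulting frame sequence is what the SRCN architecture consumes: the DCNN looks at each frame's spatial layout, and the LSTM looks across the time axis.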

  11. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks.

    Science.gov (United States)

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-06-26

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

  12. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    Deep Simple Gated Unit (DSGU) and Simple Gated Unit (SGU) are structures for learning long-term dependencies. Compared to traditional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), both structures require fewer parameters and less computation time in sequence classification tasks. Unlike GRU and LSTM...

  13. Global Robust Stability of Switched Interval Neural Networks with Discrete and Distributed Time-Varying Delays of Neural Type

    Directory of Open Access Journals (Sweden)

    Huaiqin Wu

    2012-01-01

    By combining the theories of switched systems and interval neural networks, a mathematical model of switched interval neural networks with discrete and distributed time-varying delays of neural type is presented. A set of interval parameter uncertainty neural networks with discrete and distributed time-varying delays of neural type is used as the individual subsystem, and an arbitrary switching rule is assumed to coordinate the switching between these networks. By applying the augmented Lyapunov-Krasovskii functional approach and linear matrix inequality (LMI) techniques, a delay-dependent criterion is obtained, in terms of LMIs, that ensures such switched interval neural networks are globally asymptotically robustly stable. The unknown gain matrix is determined by solving these delay-dependent LMIs. Finally, an illustrative example is given to demonstrate the validity of the theoretical results.

  14. End-to-End Multimodal Emotion Recognition Using Deep Neural Networks

    Science.gov (United States)

    Tzirakis, Panagiotis; Trigeorgis, George; Nicolaou, Mihalis A.; Schuller, Bjorn W.; Zafeiriou, Stefanos

    2017-12-01

    Automatic affect recognition is a challenging task due to the various modalities through which emotions can be expressed. Applications can be found in many domains, including multimedia retrieval and human-computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content of various styles of speaking, robust features need to be extracted. To this end, we utilize a Convolutional Neural Network (CNN) to extract features from the speech, while for the visual modality we use a deep residual network (ResNet) of 50 layers. In addition to the importance of feature extraction, the machine learning algorithm also needs to be insensitive to outliers while being able to model the context. To tackle this problem, Long Short-Term Memory (LSTM) networks are utilized. The system is then trained in an end-to-end fashion where, by also taking advantage of the correlations of each of the streams, we manage to significantly outperform traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.

  15. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Ammar Daskin

    2018-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow-Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the pha...

  16. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
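    A simple way to picture the connection-removal step is magnitude-based pruning. This sketch is an assumption-laden stand-in for the procedure in the record (whose exact pruning rule and retraining loop are not reproduced): input-to-hidden weights below a threshold are simply zeroed.

```python
import numpy as np

def prune_inputs(weights, threshold):
    """Zero out (remove) input-to-hidden-unit connections whose weight
    magnitude falls below the threshold. Retraining after pruning, which a
    real pipeline would include, is omitted here."""
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

w = np.array([0.02, -1.3, 0.005, 0.8, -0.01])  # one hidden unit, five inputs
pruned = prune_inputs(w, threshold=0.05)
remaining = int(np.count_nonzero(pruned))
```

    With only a handful of surviving connections into a single hidden unit, the network's decision boundary can be read off almost directly, which is what makes the subsequent rule extraction tractable.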

  17. A HYBRID HOPFIELD NEURAL NETWORK AND TABU SEARCH ALGORITHM TO SOLVE ROUTING PROBLEM IN COMMUNICATION NETWORK

    Directory of Open Access Journals (Sweden)

    MANAR Y. KASHMOLA

    2012-06-01

    The development of hybrid algorithms for solving complex optimization problems focuses on enhancing the strengths and compensating for the weaknesses of two or more complementary approaches. The goal is to intelligently combine the key elements of these approaches to find superior solutions to optimization problems. Optimal routing in a communication network is considered a complex optimization problem. In this paper we propose a hybrid Hopfield Neural Network (HNN) and Tabu Search (TS) algorithm, called the hybrid HNN-TS algorithm. The hybridization follows an embedded paradigm: we embed the short-term memory and tabu restriction features from the TS algorithm in the HNN model. The short-term memory and tabu restriction control the neuron selection process in the HNN model in order to get around the local minima problem and find an optimal solution using the HNN model. The proposed algorithm is intended to find the optimal path for packet transmission in the network, which falls within the field of routing problems. The optimal path that will be selected depends on a 4-tuple (delay, cost, reliability and capacity). Test results show that the proposed algorithm can find a path with optimal cost within a reasonable number of iterations. It also shows that the complexity of the network model won't be a problem, since the neuron selection is done heuristically.
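    The tabu-restriction idea the record embeds into neuron selection can be illustrated in isolation. The sketch below is a toy greedy next-hop router, not the HNN-TS algorithm itself: recently visited nodes are held in a short-term tabu list and excluded from selection, which is exactly the mechanism that keeps a greedy search from oscillating in a local minimum.

```python
from collections import deque

def tabu_greedy_path(costs, start, goal, tabu_tenure=2, max_steps=20):
    """Greedy next-hop selection with a short-term tabu list: nodes visited
    in the last `tabu_tenure` steps cannot be selected again."""
    tabu = deque(maxlen=tabu_tenure)
    path, node = [start], start
    for _ in range(max_steps):
        if node == goal:
            return path
        candidates = {n: c for n, c in costs[node].items() if n not in tabu}
        if not candidates:
            return None  # dead end under the current tabu restrictions
        tabu.append(node)
        node = min(candidates, key=candidates.get)
        path.append(node)
    return None

# A small graph where plain greedy selection would oscillate between A and B.
costs = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 5},
    "C": {"D": 1},
    "D": {},
}
path = tabu_greedy_path(costs, "A", "D")
```

    Without the tabu list, the cheapest move from B is back to A and the search never reaches D; the two-step tabu tenure forces the search onward through C.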

  18. Greek long-term energy consumption prediction using artificial neural networks

    International Nuclear Information System (INIS)

    Ekonomou, L.

    2010-01-01

    In this paper artificial neural networks (ANN) are applied to predict Greek long-term energy consumption. The multilayer perceptron (MLP) model has been used for this purpose, testing several possible architectures in order to select the one with the best generalizing ability. Actual recorded input and output data that influence long-term energy consumption were used in the training, validation and testing process. The developed ANN model is used for the prediction of the 2005-2008, 2010, 2012 and 2015 Greek energy consumption. The ANN results for the years 2005-2008 were compared with the results produced by a linear regression method, a support vector machine method and with real energy consumption records, showing great accuracy. The proposed approach can be useful in the effective implementation of energy policies, since accurate predictions of energy consumption affect capital investment, environmental quality, revenue analysis and market research management, while at the same time preserving supply security. Furthermore, it constitutes an accurate tool for the Greek long-term energy consumption prediction problem, which until now has not been addressed effectively.

  19. Order recall in verbal short-term memory: The role of semantic networks.

    Science.gov (United States)

    Poirier, Marie; Saint-Aubin, Jean; Mair, Ali; Tehan, Gerry; Tolan, Anne

    2015-04-01

    In their recent article, Acheson, MacDonald, and Postle (Journal of Experimental Psychology: Learning, Memory, and Cognition 37:44-59, 2011) made an important but controversial suggestion: They hypothesized that (a) semantic information has an effect on order information in short-term memory (STM) and (b) order recall in STM is based on the level of activation of items within the relevant lexico-semantic long-term memory (LTM) network. However, verbal STM research has typically led to the conclusion that factors such as semantic category have a large effect on the number of correctly recalled items, but little or no impact on order recall (Poirier & Saint-Aubin, Quarterly Journal of Experimental Psychology 48A:384-404, 1995; Saint-Aubin, Ouellette, & Poirier, Psychonomic Bulletin & Review 12:171-177, 2005; Tse, Memory 17:874-891, 2009). Moreover, most formal models of short-term order memory currently suggest a separate mechanism for order coding-that is, one that is separate from item representation and not associated with LTM lexico-semantic networks. Both of the experiments reported here tested the predictions that we derived from Acheson et al. The findings show that, as predicted, manipulations aiming to affect the activation of item representations significantly impacted order memory.

  20. Advanced models of neural networks nonlinear dynamics and stochasticity in biological neurons

    CERN Document Server

    Rigatos, Gerasimos G

    2015-01-01

    This book provides a complete study on neural structures exhibiting nonlinear and stochastic dynamics, elaborating on neural dynamics by introducing advanced models of neural networks. It overviews the main findings in the modelling of neural dynamics in terms of electrical circuits and examines their stability properties with the use of dynamical systems theory. It is suitable for researchers and postgraduate students engaged with neural networks and dynamical systems theory.

  1. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  2. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields, including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  3. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural systems depends in part on the emergence of correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement to build a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Controlled neural network application in track-match problem

    International Nuclear Information System (INIS)

    Baginyan, S.A.; Ososkov, G.A.

    1993-01-01

    Track-match problem of high energy physics (HEP) data handling is formulated in terms of incidence matrices. The corresponding Hopfield neural network is developed to solve this type of constraint satisfaction problems (CSP). A special concept of the controlled neural network is proposed as a basis of an algorithm for the effective CSP solution. Results of comparable calculations show the very high performance of this algorithm against conventional search procedures. 8 refs.; 1 fig.; 1 tab

  5. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Daskin, Ammar

    2016-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow-Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix: i.e., the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase...

  6. Neural network-based nonlinear model predictive control vs. linear quadratic Gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state-space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state-space-based linear quadratic Gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  7. Individual Identification Using Functional Brain Fingerprint Detected by Recurrent Neural Network.

    Science.gov (United States)

    Chen, Shiyang; Hu, Xiaoping P

    2018-03-20

    Individual identification based on brain function has gained traction in literature. Investigating individual differences in brain function can provide additional insights into the brain. In this work, we introduce a recurrent neural network based model for identifying individuals based on only a short segment of resting state functional MRI data. In addition, we demonstrate how the global signal and differences in atlases affect the individual identifiability. Furthermore, we investigate neural network features that exhibit the uniqueness of each individual. The results indicate that our model is able to identify individuals based on neural features and provides additional information regarding brain dynamics.

  8. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.
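    The higher-order associative memory described above can be sketched with an energy function built from a rectified polynomial. This is a hedged toy implementation of the general idea (dense associative memory with F(z) = max(z, 0)^n), not the exact model in the record: memories are stored directly, and recall flips units one at a time whenever a flip lowers the energy.

```python
import numpy as np

def energy(x, memories, n):
    """Energy with rectified polynomial interaction F(z) = max(z, 0)^n;
    n = 2 is close to the classical quadratic Hopfield case, larger n
    gives the prototype-like regime discussed in the record."""
    overlaps = memories @ x
    return -np.sum(np.maximum(overlaps, 0.0) ** n)

def recall(x, memories, n, sweeps=5):
    """Asynchronous recall: set each unit to whichever sign lowers the energy."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            for s in (-1, 1):
                trial = x.copy()
                trial[i] = s
                if energy(trial, memories, n) < energy(x, memories, n):
                    x = trial
    return x

memories = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
noisy = memories[0].copy()
noisy[0] = -1  # corrupt one bit of the first stored pattern
restored = recall(noisy, memories, n=3)
```

    Raising n sharpens the energy landscape around each stored pattern, which is the mechanism behind the larger storage capacity and the feature-to-prototype transition the abstract describes.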

  9. Artificial neural networks in NDT

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed

    2001-01-01

    Artificial neural networks, simply known as neural networks, have attracted considerable interest in recent years, largely because of a growing recognition of the potential of these computational paradigms as powerful alternatives to conventional pattern recognition or function approximation techniques. The neural network approach is having a profound effect on almost all fields and has been utilised where experimental interdisciplinary work is being carried out. Being a multidisciplinary subject with a broad knowledge base, Nondestructive Testing (NDT) or Nondestructive Evaluation (NDE) is no exception. This paper explains typical applications of neural networks in NDT/NDE. Three promising types of neural networks are highlighted, namely, back-propagation, binary Hopfield and Kohonen's self-organising maps. (Author)

  10. Application of neural networks to signal prediction in nuclear power plant

    International Nuclear Information System (INIS)

    Wan Joo Kim; Soon Heung Chang; Byung Ho Lee

    1993-01-01

    This paper describes a feasibility study of an artificial neural network for signal prediction. The purpose of signal prediction is to estimate the signal value at the next time step before it is measured. As the prediction method, based on the idea of auto-regression, a few previous signals are input to the artificial neural network and the signal value of the next time step is estimated from the outputs of the network. The artificial neural network can be applied to nonlinear systems and produces answers in a short time. The training algorithm is a modified backpropagation model, which can effectively reduce the training time. The target signal of the simulation is the steam generator water level, which is one of the important parameters in nuclear power plants. The simulation results show that the predicted value follows the real trend well.
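    The auto-regressive framing described above (a few previous samples in, the next sample out) can be sketched as follows. A linear least-squares fit stands in for the backpropagation-trained network, and a sinusoid stands in for the water-level trace; both are assumptions for illustration.

```python
import numpy as np

def make_windows(signal, order):
    """Arrange a signal into (previous `order` samples -> next sample) pairs,
    the auto-regressive framing fed to the network in the record."""
    X = np.array([signal[i:i + order] for i in range(len(signal) - order)])
    y = signal[order:]
    return X, y

def fit_ar(signal, order):
    # Linear least squares stands in for the trained neural network.
    X, y = make_windows(signal, order)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

t = np.arange(200)
level = np.sin(0.1 * t)  # stand-in for a steam generator water level trace
coef = fit_ar(level, order=3)
pred = float(level[-3:] @ coef)       # predict the value at t = 200
actual = float(np.sin(0.1 * 200))
```

    A sinusoid satisfies an exact linear recurrence, so this linear predictor recovers the next sample exactly; the point of the neural network in the record is to handle signals where no such linear recurrence exists.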

  11. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2016-09-14

    The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods (autoregressive integrated moving average), feedforward neural networks (time delay neural networks and online sequential extreme learning machines), and recurrent neural networks (long short-term memory and echo-state networks), on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.
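    The multiple-predictions behaviour on branching sequences can be illustrated with a deliberately simple online model. This sketch is not HTM: first-order transition counts stand in for the sparse temporal codes, sharing only the online, unsupervised character and the habit of keeping all plausible next symbols alive.

```python
from collections import defaultdict

class OnlineSequencePredictor:
    """Continuously updated first-order transition counts. On a branching
    sequence it returns every next symbol seen so far, mimicking HTM's
    'maintain multiple predictions until disambiguated' behaviour."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, prev, nxt):
        self.counts[prev][nxt] += 1  # online, unsupervised update

    def predict(self, current):
        return set(self.counts[current])

model = OnlineSequencePredictor()
for seq in ["ABCD", "ABCE"]:  # the sequence branches after C
    for a, b in zip(seq, seq[1:]):
        model.observe(a, b)
predictions = model.predict("C")
```

    A first-order model like this cannot use longer context to disambiguate the branch; HTM's high-order temporal codes are precisely what lets it collapse such ambiguity once enough preceding symbols are seen.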

  12. Short-Term Wind Speed Prediction Using EEMD-LSSVM Model

    Directory of Open Access Journals (Sweden)

    Aiqing Kang

    2017-01-01

    A hybrid Ensemble Empirical Mode Decomposition (EEMD) and Least Squares Support Vector Machine (LSSVM) model is proposed to improve short-term wind speed forecasting precision. The EEMD is first used to decompose the original wind speed time series into a set of subseries. LSSVM models are then established to forecast these subseries. The partial autocorrelation function is adopted to analyze the inner relationships within the historical wind speed series in order to determine the input variables of the LSSVM model for each subseries. Finally, the superposition principle is employed to sum the predicted values of all subseries into the final wind speed prediction. The performance of the hybrid model is evaluated on six metrics. Compared with LSSVM, Back Propagation Neural Network (BP), Auto-Regressive Integrated Moving Average (ARIMA), Empirical Mode Decomposition (EMD) combined with LSSVM, and hybrid EEMD-ARIMA models, the wind speed forecasting results show that the proposed hybrid model outperforms these models on all six metrics. Furthermore, scatter diagrams of predicted versus actual wind speed and histograms of prediction errors are presented to verify the superiority of the hybrid model in short-term wind speed prediction.
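
    The decompose-forecast-superpose pipeline can be sketched as follows. This is a hedged illustration only: a moving-average split stands in for EEMD, and a naive persistence forecast stands in for the per-subseries LSSVM models; the superposition step, however, is exactly as described.

```python
import numpy as np

def decompose(series, window=4):
    """Stand-in for EEMD: split the series into a smooth trend
    (moving average) and a residual 'detail' component."""
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")
    return trend, series - trend

def forecast_component(component):
    """Stand-in for a per-subseries LSSVM: naive persistence forecast."""
    return component[-1]

# Synthetic diurnal wind speed series (m/s) with noise.
rng = np.random.default_rng(0)
t = np.arange(200)
wind = 8 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)

trend, detail = decompose(wind)
# Superposition principle: sum the per-component forecasts.
prediction = forecast_component(trend) + forecast_component(detail)
```

    Because the decomposition is exact (trend + detail reconstructs the series), summing the component forecasts is a consistent way to recombine them, which is the property the hybrid model relies on.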

  13. Constructing general partial differential equations using polynomial and neural networks.

    Science.gov (United States)

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters, with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Combining neural networks and genetic algorithms for hydrological flow forecasting

    Science.gov (United States)

    Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr

    2010-05-01

    We present a neural network approach to rainfall-runoff modeling for small river basins based on several time series of hourly measured data. Different neural networks are considered for short-term runoff predictions (one- to six-hour lead times) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, the short-term rainfall history, and aggregated API values are the most significant inputs for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm. The genetic algorithm works with a population of binary-encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is computationally demanding (taking hours to days on a desktop PC), so a high-performance mainframe computer was used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems with this approach to modeling are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of an event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general it can be concluded that the neural models show about a 5 per cent improvement in terms of the efficiency coefficient over linear models.
Multilayer perceptrons with one hidden layer trained by back propagation algorithm and
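
    The genetic input-selection loop described in item 14 (binary-encoded selection masks, two-point crossover, bit-flip mutation, tournament selection) can be sketched as below. The fitness function here is a cheap synthetic stand-in; in the study each evaluation builds and tests a full neural network model, which is what makes the procedure so expensive.

```python
import random

random.seed(42)

N_LAGS = 48          # candidate history lags, as in the study
POP, GENS = 20, 30

def fitness(mask, weights):
    # Stand-in objective: reward informative lags, penalize mask size.
    # In the paper this is a full train/test cycle of a neural model.
    score = sum(w for bit, w in zip(mask, weights) if bit)
    return score - 0.2 * sum(mask)

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(N_LAGS), 2))
    return a[:i] + b[i:j] + a[j:]

def mutate(mask, p=0.02):
    # Random bit-flipping mutation.
    return [bit ^ (random.random() < p) for bit in mask]

def tournament(pop, scores, k=3):
    # Pick the best of k randomly drawn individuals.
    return max(random.sample(list(zip(pop, scores)), k), key=lambda t: t[1])[0]

weights = [random.random() for _ in range(N_LAGS)]
pop = [[random.randint(0, 1) for _ in range(N_LAGS)] for _ in range(POP)]
for _ in range(GENS):
    scores = [fitness(m, weights) for m in pop]
    pop = [mutate(two_point_crossover(tournament(pop, scores),
                                      tournament(pop, scores)))
           for _ in range(POP)]
best = max(pop, key=lambda m: fitness(m, weights))
```

    The mask `best` marks which of the 48 candidate lags are fed to the forecasting model; the penalty term plays the role of preferring parsimonious input sets.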

  15. LANGUAGE REPETITION AND SHORT-TERM MEMORY: AN INTEGRATIVE FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Steve Majerus

    2013-07-01

    Short-term maintenance of verbal information is a core factor of language repetition, especially when reproducing multiple or unfamiliar stimuli. Many models of language processing locate the verbal short-term maintenance function in the left posterior superior temporo-parietal area and its connections with the inferior frontal gyrus. However, research in the field of short-term memory has implicated bilateral fronto-parietal networks, involved in attention and serial order processing, as being critical for the maintenance and reproduction of verbal sequences. We present here an integrative framework aimed at bridging research in the language processing and short-term memory fields. This framework considers verbal short-term maintenance as an emergent function resulting from synchronized and integrated activation in dorsal and ventral language processing networks as well as fronto-parietal attention and serial order processing networks. To-be-maintained item representations are temporarily activated in the dorsal and ventral language processing networks, novel phoneme and word serial order information is proposed to be maintained via a right fronto-parietal serial order processing network, and activation in these different networks is proposed to be coordinated and maintained via a left fronto-parietal attention processing network. This framework provides new perspectives for our understanding of information maintenance at the nonword-, word- and sentence-level as well as of verbal maintenance deficits in case of brain injury.

  16. Language repetition and short-term memory: an integrative framework.

    Science.gov (United States)

    Majerus, Steve

    2013-01-01

    Short-term maintenance of verbal information is a core factor of language repetition, especially when reproducing multiple or unfamiliar stimuli. Many models of language processing locate the verbal short-term maintenance function in the left posterior superior temporo-parietal area and its connections with the inferior frontal gyrus. However, research in the field of short-term memory has implicated bilateral fronto-parietal networks, involved in attention and serial order processing, as being critical for the maintenance and reproduction of verbal sequences. We present here an integrative framework aimed at bridging research in the language processing and short-term memory fields. This framework considers verbal short-term maintenance as an emergent function resulting from synchronized and integrated activation in dorsal and ventral language processing networks as well as fronto-parietal attention and serial order processing networks. To-be-maintained item representations are temporarily activated in the dorsal and ventral language processing networks, novel phoneme and word serial order information is proposed to be maintained via a right fronto-parietal serial order processing network, and activation in these different networks is proposed to be coordinated and maintained via a left fronto-parietal attention processing network. This framework provides new perspectives for our understanding of information maintenance at the non-word-, word- and sentence-level as well as of verbal maintenance deficits in case of brain injury.

  17. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as the outlier probability and regularization parameters. We suggest adapting the outlier probability and regularization parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...
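
    The modified likelihood idea can be illustrated as follows, assuming (as is common for this construction) that each label is genuine with probability 1 - eps and an outlier drawn uniformly over the classes with probability eps. This is a sketch of the idea, not the authors' exact formulation.

```python
import numpy as np

def robust_nll(probs, labels, eps, n_classes):
    """Negative log-likelihood with an outlier probability eps:
    mix the model's class probability with a uniform outlier term,
    p_robust = (1 - eps) * p(label|x) + eps / n_classes."""
    p_label = probs[np.arange(len(labels)), labels]
    mixed = (1 - eps) * p_label + eps / n_classes
    return -np.log(mixed).sum()

probs = np.array([[0.90, 0.10],
                  [0.20, 0.80],
                  [0.99, 0.01]])
labels = np.array([0, 1, 1])   # last point looks like a labeling outlier
plain = robust_nll(probs, labels, eps=0.0, n_classes=2)   # standard NLL
robust = robust_nll(probs, labels, eps=0.1, n_classes=2)  # outlier-aware
```

    The uniform term caps the penalty any single mislabeled point can impose, so the outlier dominates the plain cost but not the robust one; during training, eps itself is tuned on a validation set as the abstract describes.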

  18. Fundamental bound on the persistence and capacity of short-term memory stored as graded persistent activity.

    Science.gov (United States)

    Koyluoglu, Onur Ozan; Pertzov, Yoni; Manohar, Sanjay; Husain, Masud; Fiete, Ila R

    2017-09-07

    It is widely believed that persistent neural activity underlies short-term memory. Yet, as we show, the degradation of information stored directly in such networks behaves differently from human short-term memory performance. We build a more general framework where memory is viewed as a problem of passing information through noisy channels whose degradation characteristics resemble those of persistent activity networks. If the brain first encoded the information appropriately before passing the information into such networks, the information can be stored substantially more faithfully. Within this framework, we derive a fundamental lower-bound on recall precision, which declines with storage duration and number of stored items. We show that human performance, though inconsistent with models involving direct (uncoded) storage in persistent activity networks, can be well-fit by the theoretical bound. This finding is consistent with the view that if the brain stores information in patterns of persistent activity, it might use codes that minimize the effects of noise, motivating the search for such codes in the brain.

  19. Retention interval affects visual short-term memory encoding.

    Science.gov (United States)

    Bankó, Eva M; Vidnyánszky, Zoltán

    2010-03-01

    Humans can efficiently store fine-detailed facial emotional information in visual short-term memory for several seconds. However, an unresolved question is whether the same neural mechanisms underlie high-fidelity short-term memory for emotional expressions at different retention intervals. Here we show that retention interval affects the neural processes of short-term memory encoding using a delayed facial emotion discrimination task. The early sensory P100 component of the event-related potentials (ERP) was larger in the 1-s interstimulus interval (ISI) condition than in the 6-s ISI condition, whereas the face-specific N170 component was larger in the longer ISI condition. Furthermore, the memory-related late P3b component of the ERP responses was also modulated by retention interval: it was reduced in the 1-s ISI as compared with the 6-s condition. The present findings cannot be explained based on differences in sensory processing demands or overall task difficulty because there was no difference in the stimulus information and subjects' performance between the two different ISI conditions. These results reveal that encoding processes underlying high-precision short-term memory for facial emotional expressions are modulated depending on whether information has to be stored for one or for several seconds.

  20. Improving short-term forecasting during ramp events by means of Regime-Switching Artificial Neural Networks

    Science.gov (United States)

    Gallego, C.; Costa, A.; Cuerva, A.

    2010-09-01

    Since wind energy currently can be neither scheduled nor stored at large scale, wind power forecasting is useful for minimizing the impact of wind fluctuations. In particular, short-term forecasting (characterized by prediction horizons from minutes to a few days) is currently required by energy producers (in a daily electricity market context) and by TSOs (in order to maintain the stability/balance of the electrical system). Within the short-term setting, time-series-based models (i.e., statistical models) have shown better performance than NWP models for horizons up to a few hours. These models try to learn and replicate the dynamics shown by the time series of a given variable. When considering the power output of wind farms, ramp events are usually observed, characterized by a large positive gradient in the time series (ramp-up) or a negative one (ramp-down) during relatively short time periods (a few hours). Ramp events may have many different causes, generally involving several spatial scales, from the large scale (fronts, low-pressure systems) down to the local scale (wind turbine shut-down due to high wind speed, yaw misalignment due to fast changes in wind direction). Hence, the output power may show unexpected dynamics during ramp events depending on the underlying processes; consequently, traditional statistical models that consider only one dynamic for the whole power time series may be inappropriate. This work proposes a Regime-Switching (RS) model based on Artificial Neural Networks (ANNs). The RS-ANN model gathers as many ANNs as there are dynamics considered (called regimes); a particular ANN is selected to predict the output power depending on the current regime. The current regime is updated online based on a gradient criterion applied to the past two values of the output power. Three regimes are established concerning ramp events: ramp-up, ramp-down and no-ramp.
In order to assess the skill of the proposed RS-ANN model, a single
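
    The regime-selection mechanism (a gradient criterion on the past two power values choosing among three regime-specific predictors) might look like the sketch below; the per-regime predictors and the threshold are hypothetical stand-ins for the trained ANNs.

```python
import numpy as np

def regime(p_prev, p_curr, threshold=0.1):
    """Gradient criterion on the last two normalized power values."""
    grad = p_curr - p_prev
    if grad > threshold:
        return "ramp-up"
    if grad < -threshold:
        return "ramp-down"
    return "no-ramp"

# Stand-in per-regime predictors (the paper trains one ANN per regime).
predictors = {
    "ramp-up":   lambda p: p[-1] + (p[-1] - p[-2]),  # extrapolate the ramp
    "ramp-down": lambda p: p[-1] + (p[-1] - p[-2]),
    "no-ramp":   lambda p: p[-1],                    # persistence
}

power = np.array([0.30, 0.32, 0.55])   # sudden ramp-up in normalized power
current = regime(power[-2], power[-1])
forecast = predictors[current](power)
```

    Dispatching on the detected regime is what lets each sub-model specialize on one dynamic instead of averaging over all of them.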

  1. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuroscience. It draws its methods in large degree from statistical physics, and its potential applications lie mainly in computer science and engineering. Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives a historical presentation of the development of neural networks and of the interest in performing complex tasks. Then, an exhaustive overview of data management and network computation methods is given: supervised learning and the associative memory problem, the capacity of networks, Perceptron networks, functional link networks, Madaline (Multiple Adalines) networks, back-propagation networks, reduced coulomb energy (RCE) networks, unsupervised learning, and competitive learning and vector quantization. An example of an application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors at LEAR at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  2. Optimizing a neural network for detection of moving vehicles in video

    Science.gov (United States)

    Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri

    2017-10-01

    In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.

  3. Multi nodal load forecasting in electric power systems using a radial basis neural network; Previsao de carga multinodal em sistemas eletricos de potencia usando uma rede neural de base radial

    Energy Technology Data Exchange (ETDEWEB)

    Altran, A.B.; Lotufo, A.D.P.; Minussi, C.R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil). Dept. de Engenharia Eletrica], Emails: lealtran@yahoo.com.br, annadiva@dee.feis.unesp.br, minussi@dee.feis.unesp.br; Lopes, M.L.M. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil). Dept. de Matematica], E-mail: mara@mat.feis.unesp.br

    2009-07-01

    This paper presents a methodology for electrical load forecasting using radial basis functions as the activation function in artificial neural networks trained by the backpropagation algorithm. The methodology is applied to short-term electrical load forecasting (24 h ahead). Results are presented analyzing the use of radial basis functions substituting the sigmoid function as the activation function in multilayer perceptron neural networks. The main contribution of this paper, however, is the proposal of a new formulation of load forecasting dedicated to forecasting at several points of the electrical network, as well as considering several types of users (residential, commercial, industrial). It deals with MLF (Multinodal Load Forecasting), with the same processing time as GLF (Global Load Forecasting). (author)
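
    For reference, a Gaussian radial basis activation of the kind substituted for the sigmoid might be computed as below. A minimal sketch: the centers and width are free parameters fixed here by hand rather than trained.

```python
import numpy as np

def rbf_layer(x, centers, width):
    """Gaussian radial basis activations: each unit responds according
    to the distance between the input and that unit's center."""
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * width ** 2))

# Two hypothetical hidden units with hand-picked centers.
centers = np.array([[0.0, 0.0],
                    [1.0, 1.0]])
h = rbf_layer(np.array([0.0, 0.0]), centers, width=1.0)
# The unit centered on the input responds maximally; the other decays
# with squared distance.
```

    Unlike the sigmoid, whose response grows monotonically with the weighted input sum, the RBF response is local: it peaks at the center and falls off in every direction, which is what motivates the substitution studied in the paper.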

  4. Hypothalamus-Related Resting Brain Network Underlying Short-Term Acupuncture Treatment in Primary Hypertension

    Directory of Open Access Journals (Sweden)

    Hongyan Chen

    2013-01-01

    The present study explored the modulated hypothalamus-seeded resting brain network underlying the cardiovascular system in primary hypertensive patients after short-term acupuncture treatment. Thirty right-handed patients (14 male) were divided randomly into acupuncture and control groups. The acupuncture group received a continuous five-day acupuncture treatment and underwent three resting-state fMRI scans and 24-hour ambulatory blood pressure monitoring (ABPM) as well as SF-36 questionnaires before, after, and one month after acupuncture treatment. The control group underwent fMRI scans and 24-hour ABPM. For verum acupuncture, average blood pressure (BP) and heart rate (HR) decreased after treatment but showed no statistically significant differences. There were no significant differences in BP and HR between the acupuncture and control groups. Notably, SF-36 indicated that bodily pain decreased (P = 0.005) and vitality increased (P = 0.036) after acupuncture compared to baseline. The hypothalamus-related brain network showed increased functional connectivity with the medulla, brainstem, cerebellum, limbic system, thalamus, and frontal lobes. In conclusion, short-term acupuncture did not decrease BP significantly but appeared to improve bodily pain and vitality. Acupuncture may regulate the cardiovascular system through a complicated brain network spanning the cortical level, the hypothalamus, and the brainstem.

  5. A Neural Network Model to Learn Multiple Tasks under Dynamic Environments

    Science.gov (United States)

    Tsumori, Kenji; Ozawa, Seiichi

    When environments change dynamically for agents, knowledge acquired in one environment might be useless in the future. In such dynamic environments, agents should be able not only to acquire new knowledge but also to modify old knowledge through learning. However, modifying all previously acquired knowledge is not efficient, because knowledge once acquired may be useful again when a similar environment reappears, and some knowledge can be shared among different environments. To learn efficiently in such environments, we propose a neural network model that consists of the following modules: a resource allocating network, long-term and short-term memory, and an environment change detector. We evaluate the model under a class of dynamic environments where multiple function approximation tasks are given sequentially. The experimental results demonstrate that the proposed model possesses stable incremental learning, accurate environmental change detection, proper association and recall of old knowledge, and efficient knowledge transfer.

  6. State-of-the-art of applications of neural networks in the nuclear industry

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Masson, M.H.

    1990-01-01

    Artificial neural net models have been extensively studied for many years in various laboratories to try to simulate with computer programs the performance of the human brain. The first applications were developed in the fields of speech and image recognition. The aim of these studies was mainly to rapidly classify patterns corrupted by noise or partly missing. With the development of new net topologies and algorithms and of parallel computing hardware and software, neural networks are today very promising for applications in many industries. In the introduction, this paper presents the anticipated benefits of the use of neural networks for industrial applications. Then a brief overview of the main neural networks is provided. Finally a short review of neural network applications in the nuclear industry is given. It covers domains such as: predictive maintenance for vibratory surveillance of rotating machinery, signal processing, operator guidance and eddy current inspection. In conclusion, recommendations are made for using neural networks efficiently in practical applications. In particular, the need for supercomputing is pinpointed. (author)

  7. ARTIFICIAL NEURAL NETWORK AND WAVELET DECOMPOSITION IN THE FORECAST OF GLOBAL HORIZONTAL SOLAR RADIATION

    Directory of Open Access Journals (Sweden)

    Luiz Albino Teixeira Júnior

    2015-04-01

    This paper proposes a method (denoted WD-ANN) that combines Artificial Neural Networks (ANN) and Wavelet Decomposition (WD) to generate short-term global horizontal solar radiation forecasts, essential information for evaluating the electrical power generated from the conversion of solar energy into electrical energy. The WD-ANN method consists of two basic steps: first, a level-p decomposition of the time series of interest is performed, generating p + 1 orthonormal wavelet components; second, the p + 1 wavelet components generated in step 1 are inserted simultaneously into an ANN in order to generate the short-term forecast. The results showed that the proposed WD-ANN method substantially improved performance over the traditional ANN method.
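
    The first WD-ANN step relies on a level-p wavelet decomposition yielding p + 1 components. A minimal sketch using the Haar wavelet (the abstract does not specify the wavelet family; Haar is assumed here for simplicity):

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: approximation and detail coefficients."""
    x = x.reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def haar_decompose(x, p):
    """Level-p decomposition: one approximation plus p detail components,
    i.e. the p + 1 components fed jointly to the ANN in the WD-ANN scheme.
    Assumes len(x) is divisible by 2**p."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(p):
        approx, d = haar_step(approx)
        details.append(d)
    return [approx] + details   # p + 1 components

components = haar_decompose(np.arange(8.0), p=2)
```

    Because the Haar transform is orthonormal, the components jointly preserve the energy of the original series, so no information is lost before the ANN stage.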

  8. Solving differential equations with unknown constitutive relations as recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.; Tartakovsky, Alexandre M.

    2017-12-08

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that measurements (time series) of the state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from these data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from the recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
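
    The core construction (a discretized ODE whose unknown sink term is a small neural network, unrolled like a recurrent computation) can be sketched without TensorFlow as follows; the network shape and the explicit Euler scheme are illustrative assumptions, and the parameters here are random rather than trained.

```python
import numpy as np

def nn_rate(s, w1, b1, w2):
    """Tiny feedforward net standing in for the unknown reaction-rate term."""
    return np.tanh(s * w1 + b1) @ w2

def rollout(s0, steps, dt, params):
    """Discretized ODE ds/dt = -rate(s) unrolled in time: each Euler step
    plays the role of one recurrent time step, so gradients with respect
    to the rate parameters could flow through the whole trajectory."""
    s, traj = s0, [s0]
    for _ in range(steps):
        s = s - dt * nn_rate(s, *params)
        traj.append(s)
    return np.array(traj)

rng = np.random.default_rng(1)
params = (rng.normal(size=4), rng.normal(size=4), rng.normal(size=4))
traj = rollout(1.0, steps=50, dt=0.01, params=params)
```

    Training would then minimize the mismatch between `traj` and the partially observed state measurements, exactly as one trains an RNN against target sequences.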

  9. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
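
    The quantity examined in item 9 is the eigenvalue spectrum of the normalized Laplacian. A minimal computation on a toy network (the formula is standard; the 4-node ring is just an example, not one of the paper's connectomes):

```python
import numpy as np

def normalized_laplacian_spectrum(adj):
    """Eigenvalues of L = I - D^(-1/2) A D^(-1/2); the spectrum lies in
    [0, 2] and characterizes the network at a systems level, without
    reference to individual nodes or connections."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(lap))

# Toy undirected network: a 4-node ring.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
spectrum = normalized_laplacian_spectrum(ring)
```

    The multiplicity of small eigenvalues reflects community structure, which is the feature the paper reads off the macaque, cat and C. elegans spectra.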

  10. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  11. Energy efficiency optimisation for distillation column using artificial neural network models

    International Nuclear Information System (INIS)

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort in model development and the large computational effort associated with mechanistic model computation. This issue can be addressed by using neural network models which can be quickly developed from process operation data. The computation time in neural network model evaluation is very short, making them ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying product quality constraints. Applications to binary systems of methanol-water and benzene-toluene separations culminate in reductions of utility consumption of 8.2% and 28.2% respectively. Application to a multi-component separation column also demonstrates the effectiveness of the proposed method, with a 32.4% improvement in exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural networks offer improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption of 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
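
    Bootstrap aggregation, used above for model accuracy and reliability, fits each member model on a resampled copy of the training data and averages the predictions. The sketch below uses least-squares line fits as stand-ins for the neural network members; the data are synthetic.

```python
import numpy as np

def fit_linear(x, y):
    """Stand-in for one neural network member: least-squares line fit."""
    a = np.vstack([x, np.ones_like(x)]).T
    return np.linalg.lstsq(a, y, rcond=None)[0]   # slope, intercept

def bagged_predict(x_train, y_train, x_new, n_models=25, seed=0):
    """Bootstrap aggregation: fit each member on a resampled data set
    and average the member predictions."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x_train), len(x_train))  # resample
        w, b = fit_linear(x_train[idx], y_train[idx])
        preds.append(w * x_new + b)
    # Mean prediction plus spread (a cheap model-confidence measure).
    return np.mean(preds), np.std(preds)

x = np.linspace(0, 1, 40)
y = 2 * x + 1 + np.random.default_rng(3).normal(0, 0.05, x.size)
mean, spread = bagged_predict(x, y, x_new=0.5)
```

    Averaging over resampled fits reduces the variance of any single member, and the spread of the member predictions doubles as a reliability indicator, which is the motivation cited in the abstract.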

  12. Spike timing analysis in neural networks with unsupervised synaptic plasticity

    Science.gov (United States)

    Mizusaki, B. E. P.; Agnes, E. J.; Brunnet, L. G.; Erichsen, R., Jr.

    2013-01-01

    The synaptic plasticity rules that sculpt a neural network architecture are key elements for understanding cortical processing, as they may explain the emergence of stable, functional activity while avoiding runaway excitation. For an associative memory framework, they should be built in such a way as to enable the network to reproduce a robust spatio-temporal trajectory in response to an external stimulus. Still, how these rules may be implemented in recurrent networks, and how they relate to the networks' capacity for pattern recognition, remains unclear. We studied the effects of three phenomenological unsupervised rules in sparsely connected recurrent networks for associative memory: spike-timing-dependent plasticity, short-term plasticity and homeostatic scaling. The stability of the system is monitored during the learning process of the network, as the mean firing rate converges to a value determined by the homeostatic scaling. Afterwards, it is possible to measure the recovery efficiency of the activity following each initial stimulus. This is evaluated by a measure of the correlation between spike firing times, and we analysed the full memory separation capacity and limitations of this system.

  13. Motor Fault Diagnosis Based on Short-time Fourier Transform and Convolutional Neural Network

    Science.gov (United States)

    Wang, Li-Hua; Zhao, Xiao-Ping; Wu, Jia-Xin; Xie, Yang-Yang; Zhang, Yong-Hong

    2017-11-01

    With the rapid development of mechanical equipment, the mechanical health monitoring field has entered the era of big data. However, manual feature extraction has the disadvantages of low efficiency and poor accuracy when handling big data. In this study, the research object was the asynchronous motor in a drivetrain diagnostics simulator system. The vibration signals of different faulty motors were collected. The raw signal was preprocessed using the short-time Fourier transform (STFT) to obtain the corresponding time-frequency map. Then, features of the time-frequency map were adaptively extracted using a convolutional neural network (CNN). The effects of the preprocessing method and of the network hyperparameters on diagnostic accuracy were investigated experimentally. The experimental results showed that the influence of the preprocessing method is small, and that the batch size is the main factor affecting accuracy and training efficiency. By investigating feature visualization, it was shown that, in the case of big data, the extracted CNN features can represent complex mapping relationships between the signal and health status, and can also overcome the prior-knowledge and engineering-experience requirements of feature extraction used by traditional diagnosis methods. This paper proposes a new method, based on STFT and CNN, which can complete motor fault diagnosis tasks more intelligently and accurately.
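
    The preprocessing step (an STFT turning a raw vibration signal into the time-frequency map consumed by the CNN) can be sketched with NumPy; the window length, hop size and test signal are arbitrary choices here, not the paper's settings.

```python
import numpy as np

def stft(signal, win_len=64, hop=32):
    """Magnitude STFT: Hann-windowed overlapping frames, one real FFT
    per frame, returned as a (freq bins, time frames) map."""
    window = np.hanning(win_len)
    frames = [signal[i:i + win_len] * window
              for i in range(0, len(signal) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

fs = 1024                                  # sampling rate, Hz
t = np.arange(fs) / fs
vib = np.sin(2 * np.pi * 100 * t)          # synthetic 100 Hz "fault" tone
tf_map = stft(vib)                         # image-like input for a CNN
peak_bin = tf_map.mean(axis=1).argmax()    # dominant frequency bin
```

    The resulting 2-D magnitude map is what the CNN treats as an image, so spectral fault signatures become spatial patterns it can learn filters for.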

  14. Identification of serial number on bank card using recurrent neural network

    Science.gov (United States)

    Liu, Li; Huang, Linlin; Xue, Jian

    2018-04-01

    Identification of serial numbers on bank cards has many applications. Due to different number printing modes, complex backgrounds, distortions in shape, etc., it is quite challenging to achieve high identification accuracy. In this paper, we propose a method using the Normalization-Cooperated Gradient Feature (NCGF) and a Recurrent Neural Network (RNN) based on Long Short-Term Memory (LSTM) for serial number identification. The NCGF maps the gradient direction elements of the original image to direction planes such that the RNN, with direction planes as input, can recognize numbers more accurately. Taking advantage of NCGF and RNN, we achieve 90% digit string recognition accuracy.
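
    The NCGF itself is specific to the paper, but the general idea of mapping gradient direction elements onto direction planes can be sketched as follows (the 8-direction decomposition and the linear weighting between the two nearest directions are assumptions for illustration, not the authors' exact feature):

```python
import numpy as np

def direction_planes(img, n_dir=8):
    """Split the gradient at each pixel between its two nearest of
    n_dir evenly spaced directions, yielding one plane per direction."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    pos = ang / (2 * np.pi) * n_dir        # fractional direction index
    lo = np.floor(pos).astype(int) % n_dir
    hi = (lo + 1) % n_dir
    w = pos - np.floor(pos)                # weight toward the 'hi' plane
    planes = np.zeros((n_dir,) + img.shape)
    for d in range(n_dir):
        planes[d] = mag * (1 - w) * (lo == d) + mag * w * (hi == d)
    return planes

img = np.zeros((8, 8))
img[:, 4:] = 1.0                           # vertical edge
planes = direction_planes(img)             # all mass lands in direction 0 (+x)
```

    Each plane can then be treated as one input channel of the recognizer, so direction information survives into the network rather than being collapsed into a single gradient image.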

  15. Neural-Network Quantum States, String-Bond States, and Chiral Topological States

    Science.gov (United States)

    Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio

    2018-01-01

    Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
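
    The restricted-Boltzmann-machine Ansatz discussed here has a closed form once the hidden units are traced out. A minimal sketch of evaluating such an unnormalized amplitude for one spin configuration (the small random parameters are illustrative):

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM quantum-state amplitude for spin configuration s:
    psi(s) = exp(a . s) * prod_j 2 cosh(b_j + (s W)_j)."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + s @ W))

rng = np.random.default_rng(0)
n_spins, n_hidden = 4, 8
a = rng.normal(scale=0.1, size=n_spins)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)             # hidden biases
W = rng.normal(scale=0.1, size=(n_spins, n_hidden))  # couplings
s = np.array([1, -1, 1, -1])                         # spins in {-1, +1}
amp = rbm_amplitude(s, a, b, W)
```

    The product over hidden units is what the paper identifies with a product of (possibly nonlocal) string or plaquette factors, which is the source of the tensor-network correspondence.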

  16. EDM - A model for optimising the short-term power operation of a complex hydroelectric network

    International Nuclear Information System (INIS)

    Tremblay, M.; Guillaud, C.

    1996-01-01

    In order to optimize the short-term power operation of a complex hydroelectric network, a new model called EDM was added to PROSPER, a water management analysis system developed by SNC-Lavalin. PROSPER is now divided into three parts: an optimization model (DDDP), a simulation model (ESOLIN), and an economic dispatch model (EDM) for the short-term operation. The operation of the KSEB hydroelectric system (located in southern India) with PROSPER was described. The long-term analysis with monthly time steps is assisted by the DDDP, and the daily analysis with hourly or half-hourly time steps is performed with the EDM model. 3 figs

  17. Neural correlates of auditory short-term memory in rostral superior temporal cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo

    2014-12-01

    Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Neural network classification of gamma-ray bursts

    International Nuclear Information System (INIS)

    Balastegui, A.; Canal, R.

    2005-01-01

    From a cluster analysis it appeared that a three-class classification of GRBs could be preferable to just the classic separation of short/hard and long/soft GRBs (Balastegui A., Ruiz-Lapuente, P. and Canal, R. MNRAS 328 (2001) 283). A new classification of GRBs obtained via a neural network is presented, with a short/hard class, an intermediate-duration/soft class, and a long/soft class, the latter being a brighter and more inhomogeneous class than the intermediate duration one. A possible physical meaning of this new classification is also outlined

  19. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software as well as hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. The researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  20. HIV lipodystrophy case definition using artificial neural network modelling

    DEFF Research Database (Denmark)

    Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew

    2003-01-01

    OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS......: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers...... were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently superior performance than logistic regression in terms of specificity, overall accuracy and area under...

  1. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  2. Neural network to diagnose lining condition

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at the iron and steel works. The authors describe the neural network structure and software that were designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented, and the authors note their low learning and classification errors. To realize the proposed neural network, specialized software has been developed.

  3. Artificial Neural Network for the Prediction of Chromosomal Abnormalities in Azoospermic Males.

    Science.gov (United States)

    Akinsal, Emre Can; Haznedar, Bulent; Baydilli, Numan; Kalinli, Adem; Ozturk, Ahmet; Ekmekçioğlu, Oğuz

    2018-02-04

    To evaluate whether an artificial neural network helps to diagnose chromosomal abnormalities in azoospermic males. The data of azoospermic males attending a tertiary academic referral center were evaluated retrospectively. Height, total testicular volume, follicle stimulating hormone, luteinising hormone, total testosterone and ejaculate volume of the patients were used for the analyses. For the artificial neural network, the data of 310 azoospermic males were used as the training set and 115 as the test set. Logistic regression analyses and discriminant analyses were performed for statistical analyses. The tests were re-analysed with a neural network. Both the logistic regression analyses and the artificial neural network predicted the presence or absence of chromosomal abnormalities with more than 95% accuracy. The use of the artificial neural network model yielded satisfactory results in distinguishing whether patients have any chromosomal abnormality or not.

  4. Statistical Language Modeling for Historical Documents using Weighted Finite-State Transducers and Long Short-Term Memory

    OpenAIRE

    Al Azawi, Mayce

    2015-01-01

    The goal of this work is to develop statistical natural language models and processing techniques based on Recurrent Neural Networks (RNN), especially the recently introduced Long Short-Term Memory (LSTM). Due to their adapting and predicting abilities, these methods are more robust, and easier to train, than traditional methods, i.e., word lists and rule-based models. They improve the output of recognition systems and make them more accessible to users for browsing and reading...

  5. A neural network approach to job-shop scheduling.

    Science.gov (United States)

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  6. Partially overlapping sensorimotor networks underlie speech praxis and verbal short-term memory: evidence from apraxia of speech following acute stroke.

    Science.gov (United States)

    Hickok, Gregory; Rogalsky, Corianne; Chen, Rong; Herskovits, Edward H; Townsley, Sarah; Hillis, Argye E

    2014-01-01

    We tested the hypothesis that motor planning and programming of speech articulation and verbal short-term memory (vSTM) depend on partially overlapping networks of neural regions. We evaluated this proposal by testing 76 individuals with acute ischemic stroke for impairment in motor planning of speech articulation (apraxia of speech, AOS) and vSTM in the first day of stroke, before the opportunity for recovery or reorganization of structure-function relationships. We also evaluated areas of both infarct and low blood flow that might have contributed to AOS or impaired vSTM in each person. We found that AOS was associated with tissue dysfunction in motor-related areas (posterior primary motor cortex, pars opercularis; premotor cortex, insula) and sensory-related areas (primary somatosensory cortex, secondary somatosensory cortex, parietal operculum/auditory cortex); while impaired vSTM was associated with primarily motor-related areas (pars opercularis and pars triangularis, premotor cortex, and primary motor cortex). These results are consistent with the hypothesis, also supported by functional imaging data, that both speech praxis and vSTM rely on partially overlapping networks of brain regions.

  7. An enhanced radial basis function network for short-term electricity price forecasting

    International Nuclear Information System (INIS)

    Lin, Whei-Min; Gow, Hong-Jey; Tsai, Ming-Tang

    2010-01-01

    This paper proposed a price forecasting system for electric market participants to reduce the risk of price volatility. Combining the Radial Basis Function Network (RBFN) and Orthogonal Experimental Design (OED), an Enhanced Radial Basis Function Network (ERBFN) has been proposed for the solving process. The Locational Marginal Price (LMP), system load, transmission flow and temperature of the PJM system were collected and the data clusters were embedded in the Excel Database according to the year, season, workday and weekend. With the OED applied to learning rates in the ERBFN, the forecasting error can be reduced during the training process to improve both accuracy and reliability. This would mean that even the "spikes" could be tracked closely. The Back-propagation Neural Network (BPN), Probability Neural Network (PNN), other algorithms, and the proposed ERBFN were all developed and compared to check the performance. Simulation results demonstrated the effectiveness of the proposed ERBFN to provide quality information in a price volatile environment. (author)

  8. Artificial Neural Networks to Predict the Power Output of a PV Panel

    Directory of Open Access Journals (Sweden)

    Valerio Lo Brano

    2014-01-01

    Full Text Available The paper illustrates an adaptive approach based on different topologies of artificial neural networks (ANNs for the power energy output forecasting of photovoltaic (PV modules. The analysis of the PV module’s power output needed detailed local climate data, which was collected by a dedicated weather monitoring system. The Department of Energy, Information Engineering, and Mathematical Models of the University of Palermo (Italy has built up a weather monitoring system that worked together with a data acquisition system. The power output forecast is obtained using three different types of ANNs: a one hidden layer Multilayer perceptron (MLP, a recursive neural network (RNN, and a gamma memory (GM trained with the back propagation. In order to investigate the influence of climate variability on the electricity production, the ANNs were trained using weather data (air temperature, solar irradiance, and wind speed along with historical power output data available for the two test modules. The model validation was performed by comparing model predictions with power output data that were not used for the network's training. The results obtained bear out the suitability of the adopted methodology for the short-term power output forecasting problem and identified the best topology.

  9. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Probagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  10. Entropy method combined with extreme learning machine method for the short-term photovoltaic power generation forecasting

    International Nuclear Information System (INIS)

    Tang, Pingzhou; Chen, Di; Hou, Yushuo

    2016-01-01

    As the world’s energy problem becomes more severe day by day, photovoltaic power generation has undoubtedly opened a new door for us. It will provide an effective solution to this severe energy problem and meet humanity’s needs for energy if we can apply photovoltaic power generation in real life. Like wind power generation, photovoltaic power generation is uncertain; therefore, the forecast of photovoltaic power generation is very crucial. In this paper, the entropy method and the extreme learning machine (ELM) method were combined to forecast short-term photovoltaic power generation. First, the entropy method is used to process the initial data; the network is then trained with the normalized data and used to forecast electricity generation. Finally, the results obtained through the entropy method with ELM were compared with those generated through the generalized regression neural network (GRNN) and radial basis function neural network (RBF) methods. We found that the entropy method combined with the ELM method achieves higher accuracy and faster calculation.
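
    The core of the ELM half of this approach is simple to sketch: a random, untrained hidden layer followed by a least-squares fit of the output weights. The toy target and the hidden-layer scaling below are illustrative assumptions, and the entropy preprocessing step is omitted:

```python
import numpy as np

def train_elm(X, y, n_hidden=100, seed=0):
    """ELM: random fixed hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=5.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(scale=5.0, size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                                  # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)            # output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy stand-in for a normalized generation curve: one input feature
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
W, b, beta = train_elm(X, y)
pred = predict_elm(X, W, b, beta)
```

    Because only the output layer is solved for, training reduces to a single linear least-squares problem, which is why ELMs train far faster than iteratively trained networks such as GRNN or RBF.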

  11. Neural networks and wavelet analysis in the computer interpretation of pulse oximetry data

    Energy Technology Data Exchange (ETDEWEB)

    Dowla, F.U.; Skokowski, P.G.; Leach, R.R. Jr.

    1996-03-01

    Pulse oximeters determine the oxygen saturation level of blood by measuring the light absorption of arterial blood. The sensor consists of red and infrared light sources and photodetectors. A method based on neural networks and wavelet analysis is developed for improved saturation estimation in the presence of sensor motion. Spectral and correlation functions of the dual channel oximetry data are used by a backpropagation neural network to characterize the type of motion. Amplitude ratios of red to infrared signals as a function of time scale are obtained from the multiresolution wavelet decomposition of the two-channel data. Motion class and amplitude ratios are then combined to obtain a short-time estimate of the oxygen saturation level. A final estimate of oxygen saturation is obtained by applying a 15 s smoothing filter on the short-time measurements based on 3.5 s windows sampled every 1.75 s. The design employs two backpropagation neural networks. The first neural network determines the motion characteristics and the second network determines the saturation estimate. Our approach utilizes waveform analysis in contrast to the standard algorithms that are based on the successful detection of peaks and troughs in the signal. The proposed algorithm is numerically efficient and has stable characteristics with a reduced false alarm rate with a small loss in detection. The method can be rapidly developed on a digital signal processing platform.
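
    The amplitude-ratio idea described above can be sketched with a plain Haar decomposition: the ratio of red to infrared detail-coefficient amplitudes is computed at each time scale. The Haar wavelet, the synthetic two-channel signals, and the 0.6 AC ratio are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def haar_detail_rms(x, levels=4):
    """RMS of Haar wavelet detail coefficients at each decomposition level."""
    x = np.asarray(x, dtype=float)
    rms = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        rms.append(np.sqrt(np.mean(detail ** 2)))
        x = approx
    return np.array(rms)

# synthetic two-channel data: same pulse waveform, different AC amplitude
fs = 64
t = np.arange(0, 4, 1 / fs)                      # 256 samples
ir = 1.0 + 0.10 * np.sin(2 * np.pi * 1.5 * t)    # infrared channel
red = 1.0 + 0.06 * np.sin(2 * np.pi * 1.5 * t)   # red channel
ratios = haar_detail_rms(red) / haar_detail_rms(ir)  # ~0.6 at every scale
```

    The detail coefficients discard the DC level of each channel, so the per-scale ratio isolates the pulsatile (arterial) component that carries the saturation information.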

  12. Practical neural network recipies in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed.

  13. Using neural networks for prediction of nuclear parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pereira Filho, Leonidas; Souto, Kelling Cabral, E-mail: leonidasmilenium@hotmail.com, E-mail: kcsouto@bol.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia do Rio de Janeiro (IFRJ), Rio de Janeiro, RJ (Brazil); Machado, Marcelo Dornellas, E-mail: dornemd@eletronuclear.gov.br [Eletrobras Termonuclear S.A. (GCN.T/ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil). Gerencia de Combustivel Nuclear

    2013-07-01

    The earliest work on artificial neural networks (ANN) dates from 1943, when Warren McCulloch and Walter Pitts studied the behavior of the biological neuron with the goal of creating a mathematical model. Further work followed until the 1980s, which witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization, prediction and others, they have numerous applications in various areas, including nuclear. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform various core evaluations, for example core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type is trained to predict these values. The main objective of this work is the development of a Multi-layer Perceptron (MLP) artificial neural network capable of predicting, in very short time and with good accuracy, two important parameters used in the core reloading problem: Boron Concentration and Power Peaking Factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three network models are constructed using the same input data and providing the following outputs: 1 - Boron Concentration and Power Peaking Factor; 2 - Boron Concentration; and 3 - Power Peaking Factor. (author)

  14. Using neural networks for prediction of nuclear parameters

    International Nuclear Information System (INIS)

    Pereira Filho, Leonidas; Souto, Kelling Cabral; Machado, Marcelo Dornellas

    2013-01-01

    The earliest work on artificial neural networks (ANN) dates from 1943, when Warren McCulloch and Walter Pitts studied the behavior of the biological neuron with the goal of creating a mathematical model. Further work followed until the 1980s, which witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization, prediction and others, they have numerous applications in various areas, including nuclear. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform various core evaluations, for example core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type is trained to predict these values. The main objective of this work is the development of a Multi-layer Perceptron (MLP) artificial neural network capable of predicting, in very short time and with good accuracy, two important parameters used in the core reloading problem: Boron Concentration and Power Peaking Factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three network models are constructed using the same input data and providing the following outputs: 1 - Boron Concentration and Power Peaking Factor; 2 - Boron Concentration; and 3 - Power Peaking Factor. (author)

  15. Exponential stability of delayed fuzzy cellular neural networks with diffusion

    International Nuclear Information System (INIS)

    Huang Tingwen

    2007-01-01

    The exponential stability of delayed fuzzy cellular neural networks (FCNN) with diffusion is investigated. Exponential stability, significant for applications of neural networks, is obtained under conditions that are easily verified by a new approach. Earlier results on the exponential stability of FCNN with time-dependent delay, a special case of the model studied in this paper, are improved without using the time-varying term condition: dτ(t)/dt < μ

  16. Differential neural network configuration during human path integration

    Science.gov (United States)

    Arnold, Aiden E. G. F; Burles, Ford; Bray, Signe; Levy, Richard M.; Iaria, Giuseppe

    2014-01-01

    Path integration is a fundamental skill for navigation in both humans and animals. Despite recent advances in unraveling the neural basis of path integration in animal models, relatively little is known about how path integration operates at a neural level in humans. Previous attempts to characterize the neural mechanisms used by humans to visually path integrate have suggested a central role of the hippocampus in allowing accurate performance, broadly resembling results from animal data. However, in recent years both the central role of the hippocampus and the perspective that animals and humans share similar neural mechanisms for path integration have come into question. The present study uses a data-driven analysis to investigate the neural systems engaged during visual path integration in humans, allowing for an unbiased estimate of neural activity across the entire brain. Our results suggest that humans employ common task control, attention and spatial working memory systems across a frontoparietal network during path integration. However, individuals differed in how these systems are configured into functional networks. High performing individuals were found to more broadly express spatial working memory systems in prefrontal cortex, while low performing individuals engaged an allocentric memory system based primarily in the medial occipito-temporal region. These findings suggest that visual path integration in humans over short distances can operate through a spatial working memory system engaging primarily the prefrontal cortex and that the differential configuration of memory systems recruited by task control networks may help explain individual biases in spatial learning strategies. PMID:24808849

  17. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem™ (SPW™), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  18. A Quantum Implementation Model for Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ammar Daskin

    2018-02-01

    Full Text Available The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas converge to terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speedups over the conventional algorithms for the eigenvalue-related problems. Combining the quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow–Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms. Quanta 2018; 7: 7–18.
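
    For reference, the classical Widrow–Hoff (LMS) learning rule that the quantum model builds on can be sketched in a few lines. The toy data and learning rate are illustrative assumptions; this is the classical rule, not the quantum implementation:

```python
import numpy as np

def widrow_hoff(X, y, lr=0.01, epochs=200):
    """Widrow-Hoff (LMS) rule: w <- w + lr * (y_i - w . x_i) * x_i."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w += lr * (yi - w @ xi) * xi
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w                 # noise-free linear targets
w = widrow_hoff(X, y)          # converges to true_w
```

    On noise-free linear data the rule converges to the generating weights, which illustrates the eigenvalue-flattening behavior the abstract refers to: the error along each principal direction of the input covariance shrinks geometrically.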

  19. Real-time energy resources scheduling considering short-term and very short-term wind forecast

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Marco; Sousa, Tiago; Morais, Hugo; Vale, Zita [Polytechnic of Porto (Portugal). GECAD - Knowledge Engineering and Decision Support Research Center

    2012-07-01

    This paper proposes an energy resources management methodology based on three distinct time horizons: day-ahead scheduling, hour-ahead scheduling, and real-time scheduling. Each scheduling process uses updated generation and consumption operation data and the current status of storage and electric vehicle storage. Besides the new operation conditions, the most accurate forecast values of wind generation and of consumption, obtained from short-term and very short-term methods, are used. A case study considering a distribution network with intensive use of distributed generation and electric vehicles is presented. (orig.)

  20. Probing many-body localization with neural networks

    Science.gov (United States)

    Schindler, Frank; Regnault, Nicolas; Neupert, Titus

    2017-06-01

    We show that a simple artificial neural network trained on entanglement spectra of individual states of a many-body quantum system can be used to determine the transition between a many-body localized and a thermalizing regime. Specifically, we study the Heisenberg spin-1/2 chain in a random external field. We employ a multilayer perceptron with a single hidden layer, which is trained on labeled entanglement spectra pertaining to the fully localized and fully thermal regimes. We then apply this network to classify spectra belonging to states in the transition region. For training, we use a cost function that contains, in addition to the usual error and regularization parts, a term that favors a confident classification of the transition region states. The resulting phase diagram is in good agreement with the one obtained by more conventional methods and can be computed for small systems. In particular, the neural network outperforms conventional methods in classifying individual eigenstates pertaining to a single disorder realization. It allows us to map out the structure of these eigenstates across the transition with spatial resolution. Furthermore, we analyze the network operation using the dreaming technique to show that the neural network correctly learns by itself the power-law structure of the entanglement spectra in the many-body localized regime.

  1. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network. Richard... The parametric model is based on an Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  2. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    Science.gov (United States)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for the automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detecting a vehicle passage through a checkpoint. As input to the classifier we use multidimensional signals from various sensors installed at the checkpoint. The obtained results demonstrate that the previous approach of handcrafting a classifier from a set of deterministic rules can be successfully replaced by automatic RNN training on appropriately labelled data.
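    As an illustration of the building block named above, here is a minimal single-cell LSTM forward pass with a sigmoid readout in plain NumPy; the sensor dimension, hidden size, and random weights are assumptions for the sketch (the paper's actual architecture and training procedure are not specified here):

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_classify(xs, params):
        """Run one LSTM cell over a sequence xs of shape (T, d), return P(passage)."""
        Wf, Wi, Wo, Wc, wy = params
        h = np.zeros(Wf.shape[0])
        c = np.zeros(Wf.shape[0])
        for x in xs:
            z = np.concatenate([h, x])
            f = sigmoid(Wf @ z)               # forget gate
            i = sigmoid(Wi @ z)               # input gate
            o = sigmoid(Wo @ z)               # output gate
            c = f * c + i * np.tanh(Wc @ z)   # cell state
            h = o * np.tanh(c)                # hidden state
        return sigmoid(wy @ h)                # binary readout

    rng = np.random.default_rng(1)
    d, n = 4, 8                               # assumed: 4 sensor channels, 8 hidden units
    params = [rng.standard_normal((n, n + d)) * 0.1 for _ in range(4)]
    params.append(rng.standard_normal(n) * 0.1)
    p = lstm_classify(rng.standard_normal((20, d)), params)
    ```

    A trained classifier would learn these weight matrices from the labelled sensor recordings; here they are random, so `p` is only a well-formed probability, not a meaningful prediction.
    
    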

  3. The Multi-Layered Perceptrons Neural Networks for the Prediction of Daily Solar Radiation

    OpenAIRE

    Radouane Iqdour; Abdelouhab Zeroual

    2007-01-01

    The Multi-Layered Perceptron (MLP) neural networks have been very successful in a number of signal processing applications. In this work we study the possibilities, and the difficulties encountered, in applying MLP neural networks to the prediction of daily solar radiation data. We used the Polak-Ribière algorithm for training the neural networks. A comparison, in terms of statistical indicators, with a linear model widely used in the literature, is also performed...

  4. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons fire spontaneously due to the presence of a noisy current. Such autonomous firings may provide a driving force for network formation, because synaptic connections are modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as new variables, we can express the learning dynamics as if Ising-model spins were interacting with each other, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.
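    A minimal sketch of a temporally asymmetric Hebbian update of the kind discussed above (an STDP-like window): pre-before-post spiking potentiates a synapse, post-before-pre depresses it. The amplitudes and time constant are illustrative, not values from the paper:

    ```python
    import numpy as np

    def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
        """Weight change for a spike-time difference dt = t_post - t_pre (ms).

        Parameters are hypothetical; only the asymmetric shape matters here.
        """
        if dt >= 0:
            return a_plus * np.exp(-dt / tau)    # pre before post: potentiation
        return -a_minus * np.exp(dt / tau)       # post before pre: depression
    ```

    Under such a rule, a reciprocal pair driven by autonomous firings loses balance: whichever direction happens to fire in the favorable order is repeatedly strengthened while the reverse connection weakens, which is the unidirectionality the abstract describes.
    
    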

  5. Qualitative and quantitative estimation of comprehensive synaptic connectivity in short- and long-term cultured rat hippocampal neurons with new analytical methods inspired by Scatchard and Hill plots

    Energy Technology Data Exchange (ETDEWEB)

    Tanamoto, Ryo; Shindo, Yutaka; Niwano, Mariko [Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University (Japan); Matsumoto, Yoshinori [Department of Applied Physics and Physico-Informatics, Faculty of Science and Technology, Keio University (Japan); Miki, Norihisa [Department of Mechanical Engineering, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, Kanagawa, 223-8522 (Japan); Hotta, Kohji [Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University (Japan); Oka, Kotaro, E-mail: oka@bio.keio.ac.jp [Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University (Japan)

    2016-03-18

    To investigate comprehensive synaptic connectivity, we examined Ca²⁺ responses under quantitative electric current stimulation delivered through an indium-tin-oxide (ITO) glass electrode with high transparency and electro-conductivity. The number of neurons with Ca²⁺ responses was low during the application of a stepwise increase of electric current in short-term cultured neurons (less than 17 days in-vitro (DIV)). Neurons cultured over 17 DIV showed two types of responses: S-shaped (sigmoid) and monotonous saturated responses, and Scatchard plots well illustrated the difference between these two responses. Furthermore, sigmoid-like network responses over 17 DIV were altered to monotonous saturated ones by the application of a mixture of AP5 and CNQX, specific blockers of NMDA and AMPA receptors, respectively. This alteration was also characterized by a change in Hill coefficients. These findings indicate that a neural network with sigmoid-like responses has strong synergetic or cooperative synaptic connectivity via excitatory glutamate synapses. - Highlights: • We succeeded in evaluating the maturation of neural networks by Scatchard and Hill plots. • Long-term cultured neurons showed two types of responses: sigmoid and monotonous. • The sigmoid-like increase indicates the cooperativity of neural networks. • Excitatory glutamate synapses cause the cooperativity of neural networks.
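    The Hill-plot analysis mentioned above can be sketched on synthetic data: for a normalized response θ(c), log(θ/(1−θ)) versus log(c) is linear with slope equal to the Hill coefficient, and a slope above 1 signals cooperativity. All values below are synthetic assumptions, not measurements from the paper:

    ```python
    import numpy as np

    # Synthetic sigmoid (Hill) response with a known coefficient n_true = 2.
    n_true, k = 2.0, 1.0
    c = np.logspace(-1, 1, 30)                      # stimulation strength
    theta = c**n_true / (k**n_true + c**n_true)     # normalized response

    # Hill plot: slope of log(theta/(1-theta)) vs log(c) recovers n.
    n_hat, _ = np.polyfit(np.log(c), np.log(theta / (1 - theta)), 1)
    ```

    On noiseless data the fitted slope recovers the generating coefficient exactly; with measured Ca²⁺ responses the same fit yields the Hill coefficient used to characterize cooperativity.
    
    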

  6. Enhanced online convolutional neural networks for object tracking

    Science.gov (United States)

    Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen

    2018-04-01

    In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters directly affect tracking precision. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates them by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the others in terms of AUC and precision.
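    The k-means++ seeding used above to initialize the convolution filters can be sketched as follows: each new center is drawn with probability proportional to its squared distance from the nearest existing center. Treating flattened image patches as the data is an assumption for illustration:

    ```python
    import numpy as np

    def kmeanspp_init(patches, k, rng):
        """k-means++ seeding: spread k initial centers over the patch set."""
        centers = [patches[rng.integers(len(patches))]]
        for _ in range(k - 1):
            # Squared distance of every patch to its nearest chosen center.
            d2 = np.min([np.sum((patches - c) ** 2, axis=1) for c in centers], axis=0)
            probs = d2 / d2.sum()
            centers.append(patches[rng.choice(len(patches), p=probs)])
        return np.array(centers)

    rng = np.random.default_rng(0)
    patches = rng.standard_normal((200, 9))   # e.g. flattened 3x3 patches (assumed)
    filters = kmeanspp_init(patches, 4, rng)
    ```

    Because already-chosen centers have zero distance to themselves, duplicates have zero selection probability, so the seeding yields distinct, well-spread filters.
    
    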

  7. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalmann predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  8. Application of two neural network paradigms to the study of voluntary employee turnover.

    Science.gov (United States)

    Somers, M J

    1999-04-01

    Two neural network paradigms--multilayer perceptron and learning vector quantization--were used to study voluntary employee turnover with a sample of 577 hospital employees. The objectives of the study were twofold. The 1st was to assess whether neural computing techniques offered greater predictive accuracy than did conventional turnover methodologies. The 2nd was to explore whether computer models of turnover based on neural network technologies offered new insights into turnover processes. When compared with logistic regression analysis, both neural network paradigms provided considerably more accurate predictions of turnover behavior, particularly with respect to the correct classification of leavers. In addition, these neural network paradigms captured nonlinear relationships that are relevant for theory development. Results are discussed in terms of their implications for future research.
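    One of the two paradigms, learning vector quantization, can be sketched with the basic LVQ1 update: the nearest prototype is pulled toward a sample of its own class and pushed away otherwise. The synthetic two-cluster data below stand in for the hospital sample; all values are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Two well-separated synthetic classes (e.g. "stayers" vs "leavers").
    X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    protos = np.array([[-1.0, -1.0], [1.0, 1.0]])   # one prototype per class
    labels = np.array([0, 1])

    for epoch in range(20):
        for xi, yi in zip(X, y):
            j = np.argmin(np.sum((protos - xi) ** 2, axis=1))   # winning prototype
            sign = 1.0 if labels[j] == yi else -1.0             # attract or repel
            protos[j] += sign * 0.05 * (xi - protos[j])

    pred = labels[np.argmin(((X[:, None, :] - protos) ** 2).sum(-1), axis=1)]
    acc = (pred == y).mean()
    ```

    The learned prototypes act as class exemplars, which is part of why LVQ models can offer interpretable insight into group structure alongside raw predictive accuracy.
    
    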

  9. Short-term effects of social encouragement on exercise behavior: insights from China's Wanbu network.

    Science.gov (United States)

    Wang, Liuan; Guo, Xitong; Wu, Tianshi; Lv, Lucheng; Zhang, Zhiwei

    2017-07-01

    The objective is to explore the short-term effects of social encouragement on exercise behavior in China. A longitudinal observational study. We collected longitudinal data on exercise and social interactions through public access to the Wanbu network, a large Chinese social network designed to encourage people to walk more. Our data set consisted of 5010 subjects who participated in the network between March 14, 2014, and September 4, 2015, and had at least one social interaction recorded. The data were analyzed using linear regression models relating the number of steps (NS) walked per day to the number of comments (NC), number of thumbs-up (NT), and number of posts (NP) received on the previous day, while adjusting for day of week, quarter of year, and a fixed or random subject effect, with or without a lag term (NS on the previous day) to account for serial correlation. We found that all three social interactions have positive effects on the next day's exercise level. The estimated effect sizes can be ordered as NT > NC > NP for each of the four models considered. The results also indicate that the participants walked less in the first quarter than in the other three quarters and more on weekdays than on weekends, with Monday being the most active day of a week. Social encouragement has positive short-term effects on exercise behavior. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
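    A minimal sketch of the lagged regression described above, with simulated data standing in for the Wanbu records (the coefficients, noise level, and use of only the thumbs-up count are assumptions): today's step count NS is regressed on yesterday's social interaction and yesterday's NS, the lag term accounting for serial correlation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = 2000
    nt = rng.poisson(3, T).astype(float)    # thumbs-up received each day (simulated)
    ns = np.zeros(T)
    for t in range(1, T):
        # Hypothetical data-generating process: NS(t) depends on NT(t-1) and NS(t-1).
        ns[t] = 500 + 30 * nt[t - 1] + 0.5 * ns[t - 1] + rng.normal(0, 10)

    # Design matrix: intercept, NT lagged one day, NS lagged one day.
    Xmat = np.column_stack([np.ones(T - 1), nt[:-1], ns[:-1]])
    beta, *_ = np.linalg.lstsq(Xmat, ns[1:], rcond=None)
    ```

    With enough days, ordinary least squares recovers both the social-encouragement effect and the serial-correlation coefficient; the study's full models additionally adjust for day of week, quarter, and subject effects.
    
    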

  10. Short-term wind power prediction based on LSSVM–GSA model

    International Nuclear Information System (INIS)

    Yuan, Xiaohui; Chen, Chen; Yuan, Yanbin; Huang, Yuehua; Tan, Qingxiong

    2015-01-01

    Highlights: • A hybrid model is developed for short-term wind power prediction. • The model is based on LSSVM and gravitational search algorithm. • Gravitational search algorithm is used to optimize parameters of LSSVM. • Effect of different kernel function of LSSVM on wind power prediction is discussed. • Comparative studies show that prediction accuracy of wind power is improved. - Abstract: Wind power forecasting can improve the economical and technical integration of wind energy into the existing electricity grid. Due to its intermittency and randomness, it is hard to forecast wind power accurately. For the purpose of utilizing wind power to the utmost extent, it is very important to make an accurate prediction of the output power of a wind farm under the premise of guaranteeing the security and the stability of the operation of the power system. In this paper, a hybrid model (LSSVM–GSA) based on the least squares support vector machine (LSSVM) and gravitational search algorithm (GSA) is proposed to forecast the short-term wind power. As the kernel function and the related parameters of the LSSVM have a great influence on the performance of the prediction model, the paper establishes LSSVM model based on different kernel functions for short-term wind power prediction. And then an optimal kernel function is determined and the parameters of the LSSVM model are optimized by using GSA. Compared with the Back Propagation (BP) neural network and support vector machine (SVM) model, the simulation results show that the hybrid LSSVM–GSA model based on exponential radial basis kernel function and GSA has higher accuracy for short-term wind power prediction. Therefore, the proposed LSSVM–GSA is a better model for short-term wind power prediction
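    The LSSVM core of the hybrid model above reduces to solving a single linear system, [[0, 1ᵀ], [1, K + I/γ]] [b; α] = [0; y]. The sketch below uses an RBF kernel and a sine curve as a stand-in for wind-power data, and omits the GSA step that would tune γ and the kernel width:

    ```python
    import numpy as np

    def rbf(A, B, s=1.0):
        """Gaussian (RBF) kernel matrix between row sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * s**2))

    X = np.linspace(0, 6, 40)[:, None]
    y = np.sin(X[:, 0])                      # stand-in for a wind-power series

    gamma, n = 100.0, len(X)                 # regularization (assumed, not GSA-tuned)
    K = rbf(X, X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    b_alpha = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = b_alpha[0], b_alpha[1:]

    y_hat = K @ alpha + b                    # in-sample fit
    ```

    Training an LSSVM is thus a direct linear solve rather than a quadratic program, which is why the kernel choice and the (γ, width) pair optimized by GSA dominate the model's accuracy.
    
    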

  11. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  12. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
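    The modularity index used above for training assessment can be sketched directly from its standard definition, Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m) δ(cᵢ, cⱼ); the toy graph of two triangles joined by a bridge is an assumption for illustration:

    ```python
    import numpy as np

    def modularity(A, comm):
        """Newman modularity of partition `comm` for adjacency matrix A."""
        k = A.sum(axis=1)
        two_m = A.sum()
        same = np.equal.outer(comm, comm)
        return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

    # Two triangles joined by one bridge edge: a clearly modular network.
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1

    Q_good = modularity(A, np.array([0, 0, 0, 1, 1, 1]))   # the natural split
    Q_flat = modularity(A, np.zeros(6))                    # one community: Q = 0
    ```

    A partition that matches the network's community structure scores well above the single-community baseline, which is what makes the index usable as a proxy for the appropriateness of a trained decomposition.
    
    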

  13. Linear matrix inequality approach to exponential synchronization of a class of chaotic neural networks with time-varying delays

    Science.gov (United States)

    Wu, Wei; Cui, Bao-Tong

    2007-07-01

    In this paper, a synchronization scheme for a class of chaotic neural networks with time-varying delays is presented. This class of chaotic neural networks covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory networks. The obtained criteria are expressed in terms of linear matrix inequalities, thus they can be efficiently verified. A comparison between our results and the previous results shows that our results are less restrictive.

  14. Short-term memory in olfactory network dynamics

    Science.gov (United States)

    Stopfer, Mark; Laurent, Gilles

    1999-12-01

    Neural assemblies in a number of animal species display self-organized, synchronized oscillations in response to sensory stimuli in a variety of brain areas. In the olfactory system of insects, odour-evoked oscillatory synchronization of antennal lobe projection neurons (PNs) is superimposed on slower and stimulus-specific temporal activity patterns. Hence, each odour activates a specific and dynamic projection neuron assembly whose evolution during a stimulus is locked to the oscillation clock. Here we examine, using locusts, the changes in population dynamics of projection-neuron assemblies over repeated odour stimulations, as would occur when an animal first encounters and then repeatedly samples an odour for identification or localization. We find that the responses of these assemblies rapidly decrease in intensity, while they show a marked increase in spike time precision and inter-neuronal oscillatory coherence. Once established, this enhanced precision in the representation endures for several minutes. This change is stimulus-specific, and depends on events within the antennal lobe circuits, independent of olfactory receptor adaptation: it may thus constitute a form of sensory memory. Our results suggest that this progressive change in olfactory network dynamics serves to converge, over repeated odour samplings, on a more precise and readily classifiable odour representation, using relational information contained across neural assemblies.

  15. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note provides an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  16. Decoding attended information in short-term memory: an EEG study.

    Science.gov (United States)

    LaRocque, Joshua J; Lewis-Peacock, Jarrod A; Drysdale, Andrew T; Oberauer, Klaus; Postle, Bradley R

    2013-01-01

    For decades it has been assumed that sustained, elevated neural activity--the so-called active trace--is the neural correlate of the short-term retention of information. However, a recent fMRI study has suggested that this activity may be more related to attention than to retention. Specifically, a multivariate pattern analysis failed to find evidence that information that was outside the focus of attention, but nonetheless in STM, was retained in an active state. Here, we replicate and extend this finding by querying the neural signatures of attended versus unattended information within STM with electroencephalography (EEG), a method sensitive to oscillatory neural activity to which the previous fMRI study was insensitive. We demonstrate that in the delay-period EEG activity, there is information only about memory items that are also in the focus of attention. Information about items outside the focus of attention is not detectable. This result converges with the fMRI findings to suggest that, contrary to conventional wisdom, an active memory trace may be unnecessary for the short-term retention of information.

  17. Visual short-term memory load suppresses temporo-parietal junction activity and induces inattentional blindness.

    Science.gov (United States)

    Todd, J Jay; Fougnie, Daryl; Marois, René

    2005-12-01

    The right temporo-parietal junction (TPJ) is critical for stimulus-driven attention and visual awareness. Here we show that as the visual short-term memory (VSTM) load of a task increases, activity in this region is increasingly suppressed. Correspondingly, increasing VSTM load impairs the ability of subjects to consciously detect the presence of a novel, unexpected object in the visual field. These results not only demonstrate that VSTM load suppresses TPJ activity and induces inattentional blindness, but also offer a plausible neural mechanism for this perceptual deficit: suppression of the stimulus-driven attentional network.

  18. Neural networks, nativism, and the plausibility of constructivism.

    Science.gov (United States)

    Quartz, S R

    1993-09-01

    Recent interest in PDP (parallel distributed processing) models is due in part to the widely held belief that they challenge many of the assumptions of classical cognitive science. In the domain of language acquisition, for example, there has been much interest in the claim that PDP models might undermine nativism. Related arguments based on PDP learning have also been given against Fodor's anti-constructivist position--a position that has contributed to the widespread dismissal of constructivism. A limitation of many of the claims regarding PDP learning, however, is that the principles underlying this learning have not been rigorously characterized. In this paper, I examine PDP models from within the framework of Valiant's PAC (probably approximately correct) model of learning, now the dominant model in machine learning, and which applies naturally to neural network learning. From this perspective, I evaluate the implications of PDP models for nativism and Fodor's influential anti-constructivist position. In particular, I demonstrate that, contrary to a number of claims, PDP models are nativist in a robust sense. I also demonstrate that PDP models actually serve as a good illustration of Fodor's anti-constructivist position. While these results may at first suggest that neural network models in general are incapable of the sort of concept acquisition that is required to refute Fodor's anti-constructivist position, I suggest that there is an alternative form of neural network learning that demonstrates the plausibility of constructivism. This alternative form of learning is a natural interpretation of the constructivist position in terms of neural network learning, as it employs learning algorithms that incorporate the addition of structure in addition to weight modification schemes. 
By demonstrating that there is a natural and plausible interpretation of constructivism in terms of neural network learning, the position that nativism is the only plausible model of

  19. Stochastic synchronization of coupled neural networks with intermittent control

    International Nuclear Information System (INIS)

    Yang Xinsong; Cao Jinde

    2009-01-01

    In this Letter, we study the exponential stochastic synchronization problem for coupled neural networks with stochastic noise perturbations. Based on Lyapunov stability theory, inequality techniques, the properties of the Wiener process, and adding different intermittent controllers, several sufficient conditions are obtained to ensure exponential stochastic synchronization of coupled neural networks with or without coupling delays under stochastic perturbations. These stochastic synchronization criteria are expressed in terms of several lower-dimensional linear matrix inequalities (LMIs) and can be easily verified. Moreover, the results of this Letter are applicable to both directed and undirected weighted networks. A numerical example and its simulations are offered to show the effectiveness of our new results.

  20. Artificial Neural Network Analysis System

    Science.gov (United States)

    2001-02-27

    Contract No. DASG60-00-M-0201; purchase request no.: Foot in the Door-01. Title: Artificial Neural Network Analysis System. Company: Atlantic... Author: Powell, Bruce C. Report date: 27-02-2001; dates covered: 28-10-2000 to 27-02-2001.

  1. Coherent oscillatory networks supporting short-term memory retention.

    Science.gov (United States)

    Payne, Lisa; Kounios, John

    2009-01-09

    Accumulating evidence suggests that top-down processes, reflected by frontal-midline theta-band (4-8 Hz) electroencephalogram (EEG) oscillations, strengthen the activation of a memory set during short-term memory (STM) retention. In addition, the amplitude of posterior alpha-band (8-13 Hz) oscillations during STM retention is thought to reflect a mechanism that protects fragile STM activations from interference by gating bottom-up sensory inputs. The present study addressed two important questions about these phenomena. First, why have previous studies not consistently found memory set-size effects on frontal-midline theta? Second, how does posterior alpha participate in STM retention? To answer these questions, large-scale network connectivity during STM retention was examined by computing EEG wavelet coherence during the retention period of a modified Sternberg task using visually-presented letters as stimuli. The results showed (a) increasing theta-band coherence between frontal-midline and left temporal-parietal sites with increasing memory load, and (b) increasing alpha-band coherence between midline parietal and left temporal/parietal sites with increasing memory load. These findings support the view that theta-band coherence, rather than amplitude, is the key factor in selective top-down strengthening of the memory set and demonstrate that posterior alpha-band oscillations associated with sensory gating are involved in STM retention by participating in the STM network.

  2. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    Science.gov (United States)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05-01

    The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is a key problem in precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-processed based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by IGS (International Global Navigation Satellite System Service) to realize the short-time prediction experiment, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-time SCB prediction and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.
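    The core Takagi-Sugeno inference step can be sketched as a membership-weighted blend of local linear models: Gaussian membership functions score how strongly each rule applies, and the output is the normalized weighted sum of the rule consequents. The two rules below are illustrative, not fitted to SCB data:

    ```python
    import numpy as np

    def ts_predict(x, centers, widths, coefs):
        """First-order Takagi-Sugeno inference for a scalar input x."""
        w = np.exp(-((x - centers) ** 2) / (2 * widths**2))   # rule activations
        locals_ = coefs[:, 0] * x + coefs[:, 1]               # linear consequents
        return (w * locals_).sum() / w.sum()                  # normalized blend

    # Two hypothetical rules with Gaussian membership functions.
    centers = np.array([0.0, 1.0])
    widths = np.array([0.5, 0.5])
    coefs = np.array([[1.0, 0.0],    # rule 1: y = x
                      [-1.0, 2.0]])  # rule 2: y = -x + 2
    y_mid = ts_predict(0.5, centers, widths, coefs)
    ```

    Midway between the two rule centers both memberships are equal, so the output is the average of the two local models; fitting the centers, widths, and consequent coefficients to the preprocessed SCB series is what the network training performs.
    
    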

  3. PREDICTION OF DEMAND FOR PRIMARY BOND OFFERINGS USING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Michal Tkac

    2014-12-01

    Full Text Available Purpose: Primary bond markets represent an interesting investment opportunity not only for banks, insurance companies, and other institutional investors, but also for individuals looking for capital gains. Since offered securities vary in terms of their rating, industrial classification, coupon, or maturity, buyers' demand for particular offerings often exceeds the issued volume, and the price of the given bond on the secondary market consequently rises. Investors might be regarded as consumers purchasing a required service according to their specific preferences at a desired price. This paper aims at analyzing demand for bonds on the primary market using artificial neural networks. Design/methodology: We design a multilayered feedforward neural network trained by the Levenberg-Marquardt algorithm in order to estimate demand for individual bonds based on the parameters of particular offerings. Outcomes obtained by the artificial neural network are compared with conventional econometric methods. Findings: Our results indicate that the artificial neural network significantly outperformed standard econometric techniques and, on the examined sample of primary bond offerings, achieved considerably better performance in terms of prediction accuracy and mean squared error. Originality: We show that the proposed neural network is able to successfully predict demand for primary bond offerings based on their specifications. Moreover, we identify relevant parameters of issues which can considerably affect total demand for a given security. Our findings might not only help investors to detect marketable securities, but also enable issuing entities to increase demand for their bonds in order to decrease their offering price.

  4. Exponential stability result for discrete-time stochastic fuzzy uncertain neural networks

    International Nuclear Information System (INIS)

    Mathiyalagan, K.; Sakthivel, R.; Marshal Anthoni, S.

    2012-01-01

    This Letter addresses the stability analysis problem for a class of uncertain discrete-time stochastic fuzzy neural networks (DSFNNs) with time-varying delays. By constructing a new Lyapunov–Krasovskii functional combined with the free weighting matrix technique, a new set of delay-dependent sufficient conditions for the robust exponential stability of the considered DSFNNs is established in terms of Linear Matrix Inequalities (LMIs). Finally, numerical examples with simulation results are provided to illustrate the applicability and usefulness of the obtained theory. -- Highlights: ► Applications of neural networks require the knowledge of dynamic behaviors. ► Exponential stability of discrete-time stochastic fuzzy neural networks is studied. ► Linear matrix inequality optimization approach is used to obtain the result. ► Delay-dependent stability criterion is established in terms of LMIs. ► Examples with simulation are provided to show the effectiveness of the result.

  5. Development of neural network for analysis of local power distributions in BWR fuel bundles

    International Nuclear Information System (INIS)

    Tanabe, Akira; Yamamoto, Toru; Shinfuku, Kimihiro; Nakamae, Takuji.

    1993-01-01

    A neural network model has been developed to learn the local power distributions in a BWR fuel bundle. A two-layer neural network with a total of 128 elements is used for this model. The neural network learns 33 cases of local power peaking factors of fuel rods with given enrichment distributions as the teacher signals, which were calculated by a fuel bundle nuclear analysis code based on precise physical models. The neural network reproduced the teacher signals to within 1% error. It is also able to calculate the local power distributions to within a few percent error for enrichment distributions different from the teacher signals when the average enrichment is close to 2%. This neural network is simple, and its computing speed is 300 times faster than that of the precise nuclear analysis code. The model was applied to survey enrichment distributions meeting a target local power distribution in a fuel bundle, and an enrichment distribution with a flat power shape was obtained within a short computing time. (author)

  6. SHORT-TERM SOLAR FLARE LEVEL PREDICTION USING A BAYESIAN NETWORK APPROACH

    International Nuclear Information System (INIS)

    Yu Daren; Huang Xin; Hu Qinghua; Zhou Rui; Wang Huaning; Cui Yanmei

    2010-01-01

    A Bayesian network approach for short-term solar flare level prediction has been proposed based on three sequences of photospheric magnetic field parameters extracted from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms. The magnetic measures, the maximum horizontal gradient, the length of the neutral line, and the number of singular points do not have determinate relationships with solar flares, so the solar flare level prediction is considered as an uncertainty reasoning process modeled by the Bayesian network. The qualitative network structure, which describes conditional independence relationships among magnetic field parameters, and the quantitative conditional probability tables, which determine the probabilistic values for each variable, are learned from the data set. Seven sequential features-the maximum, the mean, the root mean square, the standard deviation, the shape factor, the crest factor, and the pulse factor-are extracted to reduce the dimensions of the raw sequences. Two Bayesian network models are built using raw sequential data (BN_R) and feature-extracted data (BN_F), respectively. The explanations of these models are consistent with physical analyses by experts. The performances of BN_R and BN_F appear comparable with other methods. More importantly, the comprehensibility of the Bayesian network models is better than that of other methods.
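    The seven sequential features listed above follow the usual signal-processing definitions (shape factor = RMS over mean absolute value, crest factor = peak over RMS, pulse factor = peak over mean absolute value) and can be sketched directly; the paper may normalize differently, so treat the formulas as assumptions:

    ```python
    import numpy as np

    def sequence_features(x):
        """Seven summary features of one magnetic-measure time series."""
        x = np.asarray(x, dtype=float)
        rms = np.sqrt(np.mean(x**2))
        mean_abs = np.mean(np.abs(x))
        peak = np.max(np.abs(x))
        return {
            "max": np.max(x),
            "mean": np.mean(x),
            "rms": rms,
            "std": np.std(x),
            "shape_factor": rms / mean_abs,
            "crest_factor": peak / rms,
            "pulse_factor": peak / mean_abs,
        }

    feats = sequence_features([1.0, 2.0, 3.0, 4.0])
    ```

    Each raw parameter sequence is thus collapsed to seven numbers, which become the discretized variables over which the BN_F model's structure and conditional probability tables are learned.
    
    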

  7. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Science.gov (United States)

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.
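The percentages quoted above are relative RMSE reductions. As a minimal sketch (toy forecast values, not the paper's series or models), the metric and the improvement figure can be computed as:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root Mean Square Error between targets and forecasts.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def improvement(rmse_baseline, rmse_model):
    # Percentage reduction in RMSE relative to a baseline model.
    return 100.0 * (rmse_baseline - rmse_model) / rmse_baseline

# Toy forecasts (illustrative values only, not the paper's data):
y      = [1.0, 2.0, 3.0, 4.0]
pred_a = [1.1, 2.2, 2.8, 4.1]   # stand-in for RPNN output
pred_b = [1.0, 2.1, 2.9, 4.0]   # stand-in for RPNN-EF output
gain = improvement(rmse(y, pred_a), rmse(y, pred_b))
```

The reported 23.34% and 10.74% figures are averages of such per-series gains over the four benchmark datasets.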

  8. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Directory of Open Access Journals (Sweden)

    Waddah Waheeb

    Full Text Available Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  9. Robust stability analysis of switched Hopfield neural networks with time-varying delay under uncertainty

    International Nuclear Information System (INIS)

    Huang He; Qu Yuzhong; Li Hanxiong

    2005-01-01

    With the development of intelligent control, switched systems have been widely studied. Here we introduce some ideas from switched systems into the field of neural networks. In this Letter, a class of switched Hopfield neural networks with time-varying delay is investigated. The parametric uncertainty is considered and assumed to be norm-bounded. Firstly, the mathematical model of the switched Hopfield neural networks is established, in which a set of Hopfield neural networks are used as the individual subsystems and an arbitrary switching rule is assumed. Secondly, robust stability analysis for such switched Hopfield neural networks is addressed based on the Lyapunov-Krasovskii approach. Some criteria are given to guarantee that the switched Hopfield neural networks are globally exponentially stable for all admissible parametric uncertainties. These conditions are expressed in terms of strict linear matrix inequalities (LMIs). Finally, a numerical example is provided to illustrate our results.
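The Lyapunov-Krasovskii machinery behind such LMI criteria typically takes the following generic form (notation mine; this is a standard construction for delayed systems, not the Letter's exact functional):

```latex
% Candidate Lyapunov-Krasovskii functional for a subsystem with
% time-varying delay \tau(t):
V(x_t) = x^{T}(t)\,P\,x(t)
       + \int_{t-\tau(t)}^{t} x^{T}(s)\,Q\,x(s)\,\mathrm{d}s,
\qquad P = P^{T} \succ 0,\quad Q = Q^{T} \succ 0.
% If a common pair (P, Q) makes \dot{V}(x_t) < 0 along the trajectories of
% every individual Hopfield subsystem (one strict LMI per subsystem), the
% switched network is globally exponentially stable under arbitrary switching.
```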

  10. Impact of leakage delay on bifurcation in high-order fractional BAM neural networks.

    Science.gov (United States)

    Huang, Chengdai; Cao, Jinde

    2018-02-01

    The effects of leakage delay on the dynamics of integer-order neural networks have lately received considerable attention. It has been confirmed that fractional-order models more appropriately uncover the dynamical properties of neural networks, but results on fractional neural networks with leakage delay are relatively few. This paper concentrates on the issue of bifurcation for high-order fractional bidirectional associative memory (BAM) neural networks involving leakage delay, making a first attempt to tackle the stability and bifurcation of such systems with time delay in the leakage terms. The conditions for the appearance of bifurcation in the proposed systems with leakage delay are first established by adopting the time delay as a bifurcation parameter. Then, the bifurcation criteria for the system without leakage delay are acquired. Comparative analysis shows that the stability performance of the proposed high-order fractional neural networks is critically weakened by leakage delay, which therefore cannot be overlooked. Numerical examples are exhibited to verify the efficiency of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Streamflow predictions in Alpine Catchments by using artificial neural networks. Application in the Alto Genil Basin (South Spain)

    Science.gov (United States)

    Jimeno-Saez, Patricia; Pegalajar-Cuellar, Manuel; Pulido-Velazquez, David

    2017-04-01

    This study explores techniques for modeling water inflow series, focusing on short-term streamflow prediction. An appropriate estimation of streamflow in advance is necessary to anticipate measures to mitigate the impacts and risks related to drought conditions. This study analyzes the prediction of future streamflow in nineteen subbasins of the Alto Genil basin in Granada (southeast Spain). Some of these basins' streamflows have an important snowmelt component, because part of the system is located in the Sierra Nevada mountain range, the highest mountain range of continental Spain. Streamflow prediction models have been calibrated using time series of historical natural streamflows. The available streamflow measurements have been downloaded from several public data sources. These original data have been preprocessed to restore the natural regime, removing the anthropic effects. The missing values in the adopted horizon period for calibrating the prediction models have been estimated by using a Temez hydrological balance model, approximating the snowmelt processes with a hybrid degree-day method. In the experimentation, ARIMA models are used as the baseline method, and Elman recurrent neural networks and nonlinear autoregressive (NAR) neural networks are used to test whether the prediction accuracy can be improved. After performing multiple experiments with these models, non-parametric statistical tests are applied to select the best of these techniques. In the experiments carried out with ARIMA, it is concluded that ARIMA models are not adequate in this case study due to the existence of a nonlinear component that cannot be modeled. Secondly, the Elman and NAR neural networks are trained with a multi-start procedure for each network structure to deal with the local-optimum problem, since neural network training depends strongly on the initial weights of the network. The obtained results suggest that both neural networks are efficient for the short
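The multi-start idea, repeating training from many random initializations and keeping the best run, can be sketched as follows. The tiny tanh network, toy data, and hyperparameters are illustrative assumptions, not the study's Elman/NAR configurations:

```python
import numpy as np

def train_once(X, y, hidden=5, steps=200, lr=0.1, rng=None):
    # One gradient-descent run of a tiny one-hidden-layer tanh network,
    # starting from random initial weights; returns (final MSE, weights).
    if rng is None:
        rng = np.random.default_rng()
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(steps):
        H = np.tanh(X @ W1)                       # hidden activations
        err = H @ W2 - y                          # output error
        gW2 = H.T @ err / len(X)                  # backprop, output layer
        gW1 = X.T @ ((err @ W2.T) * (1 - H**2)) / len(X)  # hidden layer
        W1 -= lr * gW1
        W2 -= lr * gW2
    mse = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
    return mse, (W1, W2)

def multi_start(X, y, n_starts=10, seed=0):
    # Repeat training from several random initializations and keep the
    # best run, mitigating the strong dependence on initial weights.
    rng = np.random.default_rng(seed)
    runs = [train_once(X, y, rng=rng) for _ in range(n_starts)]
    return min(runs, key=lambda r: r[0])

# Toy target: y = sin(x) on [-2, 2].
X = np.linspace(-2, 2, 40).reshape(-1, 1)
y = np.sin(X)
best_mse, best_weights = multi_start(X, y)
```

Each restart explores a different basin of the nonconvex loss surface; keeping the best of ten runs is a cheap hedge against a bad initialization.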

  12. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.
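The ensemble effect can be illustrated with a hedged toy model: if member networks make roughly independent errors around the target, averaging their outputs reduces the mean squared error (the noisy "members" below are stand-ins for independently trained networks, not the paper's classifiers):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensemble: the true target plus independent noise per
# member, standing in for networks trained from different initial weights.
target = np.sin(np.linspace(0.0, 3.0, 100))
members = [target + rng.normal(scale=0.3, size=target.shape) for _ in range(5)]

mse_member = float(np.mean([(m - target) ** 2 for m in members]))  # avg single-net MSE
ensemble = np.mean(members, axis=0)            # simple output averaging
mse_ensemble = float(np.mean((ensemble - target) ** 2))
# With independent errors, the ensemble MSE is roughly 1/5 of a member's.
```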

  13. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  14. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been subjects of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples), usually one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as the "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbors.
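A minimal sketch of the prototype-learning side, one stored prototype per class, matched against new examples, might look like this (class-mean prototypes and toy data are my assumptions, not the EmNN mechanics):

```python
import numpy as np

def prototypes(X, y):
    # One prototype per class: the mean of that class's training examples.
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def classify(x, classes, protos):
    # Prototype-learning step: match the new example against each stored
    # prototype and return the class of the nearest one.
    d = np.linalg.norm(protos - x, axis=1)
    return classes[int(np.argmin(d))]

# Toy 2-D data: two well-separated clusters.
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.8, 5.0]])
y = np.array([0, 0, 1, 1])
cls, protos = prototypes(X, y)
print(classify(np.array([0.1, 0.2]), cls, protos))   # → 0
```

Adaptive learning, by contrast, would adjust internal weights incrementally over many labelled examples rather than storing one representative per class.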

  15. Comparing various artificial neural network types for water temperature prediction in rivers

    Science.gov (United States)

    Piotrowski, Adam P.; Napiorkowski, Maciej J.; Napiorkowski, Jaroslaw J.; Osuch, Marzena

    2015-10-01

    A number of methods have been proposed for the prediction of streamwater temperature based on various meteorological and hydrological variables. The present study compares a few types of data-driven neural networks (multi-layer perceptron, product-unit, adaptive-network-based fuzzy inference system and wavelet neural networks) and a nearest-neighbour approach for short-term streamwater temperature predictions in two natural catchments (mountainous and lowland) located in the temperate climate zone, with snowy winters and hot summers. To allow wide applicability of such models, autoregressive inputs are not used and only easily available measurements are considered. Each neural network type is calibrated independently 100 times, and the mean, median and standard deviation of the results are used for the comparison. Finally, an ensemble aggregation approach is tested. The results show that the simple and popular multi-layer perceptron neural networks are in most cases not outperformed by more complex and advanced models. The choice of neural network depends on the way the models are compared. This is a warning for anyone who wishes to promote their own models: their superiority should be verified in different ways. The best results are obtained when the mean, maximum and minimum daily air temperatures from the previous days are used as inputs, together with the current runoff and the declination of the Sun from two recent days. The ensemble aggregation approach reduces the mean square error by up to several percent, depending on the case, and noticeably diminishes the differences in modelling performance obtained by the various neural network types.

  16. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  17. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and the pseudo-inverse. This is largely an exercise in understanding how our neural network code works. 1 ref.
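The matrix-inversion view of 1-D deconvolution can be sketched with the pseudo-inverse alternative mentioned above (toy kernel and signal; this is not the report's code):

```python
import numpy as np

def conv_matrix(h, n):
    # Build the (len(h)+n-1) x n full-convolution matrix H so that
    # H @ x equals np.convolve(h, x).
    m = len(h) + n - 1
    H = np.zeros((m, n))
    for j in range(n):
        H[j:j + len(h), j] = h
    return H

h = np.array([1.0, 0.5, 0.25])       # known blurring kernel
x = np.array([0.0, 1.0, 0.0, 2.0])   # "unknown" signal, kept for checking
H = conv_matrix(h, len(x))
y_obs = H @ x                        # observed (convolved) data

# Deconvolution as matrix inversion via the Moore-Penrose pseudo-inverse:
x_hat = np.linalg.pinv(H) @ y_obs
```

A backpropagation network or the LMS algorithm can be trained to approximate the same inverse mapping iteratively, which is the comparison the report makes.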

  18. Neural prediction of cows' milk yield according to environment ...

    African Journals Online (AJOL)

    Medium and maximum air temperatures around the milk cowsheds were measured and these empirical data were used to create a neural prediction model evaluating the cows' milk yield under varying thermal conditions. We found out that artificial neural networks were an effective tool supporting the process of short-term ...

  19. Determining the confidence levels of sensor outputs using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Broten, G S; Wood, H C [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Electrical Engineering

    1996-12-31

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

  20. Convolutional LSTM Networks for Subcellular Localization of Proteins

    DEFF Research Database (Denmark)

    Sønderby, Søren Kaae; Sønderby, Casper Kaae; Nielsen, Henrik

    2015-01-01

    Machine learning is widely used to analyze biological sequence data. Non-sequential models such as SVMs or feed-forward neural networks are often used although they have no natural way of handling sequences of varying length. Recurrent neural networks such as the long short term memory (LSTM) model...

  1. Predicting physical time series using dynamic ridge polynomial neural networks.

    Directory of Open Access Journals (Sweden)

    Dhiya Al-Jumeily

    Full Text Available Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks.

  2. ASSESSMENT OF LIBRARY USERS’ FEEDBACK USING MODIFIED MULTILAYER PERCEPTRON NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    K G Nandha Kumar

    2017-07-01

    Full Text Available An attempt has been made to evaluate the feedback of library users of four different libraries by using neural network based data mining techniques. This paper presents the results of a survey of users' satisfaction levels with four different libraries. The survey was conducted among the users of four libraries of educational institutions of the Kovai Medical Center Research and Educational Trust. Data were collected through questionnaires. Artificial neural network based data mining techniques are proposed and applied to assess the libraries in terms of the level of satisfaction of users. In order to assess the users' satisfaction level, two neural network techniques are proposed: Modified Multilayer Perceptron Network-Supervised and Modified Multilayer Perceptron Network-Unsupervised. The proposed techniques are compared with the conventional classification algorithm, the Multilayer Perceptron Neural Network, and found to perform better overall. It is found that the quality of service provided by the libraries is very good and that users are highly satisfied with various aspects of library service. The Arts and Science College Library secured the highest percentage of user satisfaction, which shows that the users' satisfaction of ASCL is better than that of the other libraries. This study provides an insight into the actual quality and satisfaction levels of library users after proper assessment. It is strongly expected that the results will help library authorities to enhance services and quality in the near future.

  3. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  4. Adaptive Neural Network Sliding Mode Control for Quad Tilt Rotor Aircraft

    Directory of Open Access Journals (Sweden)

    Yanchao Yin

    2017-01-01

    Full Text Available A novel neural network sliding mode control based on a multicommunity bidirectional drive collaborative search algorithm (M-CBDCS) is proposed to design a flight controller for performing the attitude tracking control of a quad tilt rotor aircraft (QTRA). Firstly, the attitude dynamic model of the QTRA concerning propeller tension, channel arm, and moment of inertia is formulated, and the equivalent sliding mode control law is stated. Secondly, an adaptive control algorithm is presented to eliminate the approximation error, where a radial basis function (RBF) neural network is used to online regulate the equivalent sliding mode control law, and the novel M-CBDCS algorithm is developed to uniformly update the unknown neural network weights and essential model parameters adaptively. The nonlinear approximation error is obtained and serves as a novel leakage term in the adaptations to guarantee the sliding surface convergence and eliminate the chattering phenomenon, which benefits the overall attitude control performance of the QTRA. Finally, appropriate comparisons among the novel adaptive neural network sliding mode control, the classical neural network sliding mode control, and dynamic inverse PID control are examined, and comparative simulations are included to verify the efficacy of the proposed control method.

  5. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed.

  6. Neural Networks and Micromechanics

    Science.gov (United States)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  7. Partially Overlapping Sensorimotor Networks Underlie Speech Praxis and Verbal Short-Term Memory: Evidence from Apraxia of Speech Following Acute Stroke

    Directory of Open Access Journals (Sweden)

    Gregory Hickok

    2014-08-01

    Full Text Available We tested the hypothesis that motor planning and programming of speech articulation and verbal short-term memory (vSTM) depend on partially overlapping networks of neural regions. We evaluated this proposal by testing 76 individuals with acute ischemic stroke for impairment in motor planning of speech articulation (apraxia of speech; AOS) and vSTM in the first day of stroke, before the opportunity for recovery or reorganization of structure-function relationships. We also evaluated areas of both infarct and low blood flow that might have contributed to AOS or impaired vSTM in each person. We found that AOS was associated with tissue dysfunction in motor-related areas (posterior primary motor cortex, pars opercularis, premotor cortex, insula) and sensory-related areas (primary somatosensory cortex, secondary somatosensory cortex, parietal operculum/auditory cortex), while impaired vSTM was associated with primarily motor-related areas (pars opercularis and pars triangularis, premotor cortex, and primary motor cortex). These results are consistent with the hypothesis, also supported by functional imaging data, that both speech praxis and vSTM rely on partially overlapping networks of brain regions.

  8. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    Science.gov (United States)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods have been rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables the integration of clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric for that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time point (yielding an AUC = 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
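A hedged sketch of the sequence-modeling step, an LSTM forward pass over per-time-point feature vectors, is shown below; the dimensions, random weights, and readout are illustrative stand-ins for the trained VGGNet features and classifier, not the study's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(seq, Wx, Wh, b):
    # Single-layer LSTM forward pass over a sequence of feature vectors
    # (one vector per DCE-MRI time point).  Gate order in the stacked
    # weights: input, forget, cell candidate, output.
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        z = Wx @ x + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # cell state update
        h = o * np.tanh(c)           # hidden state (bounded in (-1, 1))
    return h

rng = np.random.default_rng(0)
D, H, T = 16, 8, 5          # feature dim, hidden units, time points (toy sizes)
Wx = rng.normal(scale=0.1, size=(4 * H, D))
Wh = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)

features = rng.normal(size=(T, D))   # stand-in for CNN-extracted features
h_final = lstm_forward(features, Wx, Wh, b)
p_malignant = sigmoid(rng.normal(scale=0.1, size=H) @ h_final)  # toy readout
```

In the study, the per-time-point vectors would come from VGGNet layers and the final hidden state would feed the benign/malignant classifier.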

  9. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    Science.gov (United States)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. However, recently, recognition systems have dealt with text blocks and their compound text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is, thus, avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach highly improves performance.
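The sliding-window baseline estimation can be sketched as follows, assuming the text line is summarized by its lower contour (one y-value per horizontal position); the median estimator and the toy skewed contour are my assumptions, not the authors' exact method:

```python
import numpy as np

def sliding_baseline(lower_contour, win=20):
    # Estimate the local baseline height as the median of the line's
    # lower contour inside a window sliding along the writing direction,
    # so no hard segmentation of the line into subparts is needed.
    n = len(lower_contour)
    base = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - win // 2), min(n, i + win // 2 + 1)
        base[i] = np.median(lower_contour[lo:hi])
    return base

# Toy skewed line: the lower contour drifts downward plus jitter.
rng = np.random.default_rng(0)
xs = np.arange(100)
contour = 50.0 + 0.2 * xs + rng.normal(scale=1.0, size=100)

baseline = sliding_baseline(contour)
corrected = contour - baseline   # contour after skew/fluctuation removal
```

Subtracting the locally estimated baseline flattens both global skew and slow fluctuations before the BLSTM recognizer sees the line.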

  10. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for the parameterization of global solar radiation. The available data from twenty-one stations are used for training the neural network, and the data from ten other stations are used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data were not used in the training, in order to demonstrate the performance of the neural network at parameterizing solar radiation for unknown stations. The results indicate a good agreement between the parameterized solar radiation values and the actual measured values.

  11. Patterns of work attitudes: A neural network approach

    Science.gov (United States)

    Mengov, George D.; Zinovieva, Irina L.; Sotirov, George R.

    2000-05-01

    In this paper we introduce a neural networks based approach to analyzing empirical data and models from work and organizational psychology (WOP), and suggest possible implications for the practice of managers and business consultants. With this method it becomes possible to have quantitative answers to a bunch of questions like: What are the characteristics of an organization in terms of its employees' motivation? What distinct attitudes towards the work exist? Which pattern is most desirable from the standpoint of productivity and professional achievement? What will be the dynamics of behavior as quantified by our method, during an ongoing organizational change or consultancy intervention? Etc. Our investigation is founded on the theoretical achievements of Maslow (1954, 1970) in human motivation, and of Hackman & Oldham (1975, 1980) in job diagnostics, and applies the mathematical algorithm of the dARTMAP variation (Carpenter et al., 1998) of the Adaptive Resonance Theory (ART) neural networks introduced by Grossberg (1976). We exploit the ART capabilities to visualize the knowledge accumulated in the network's long-term memory in order to interpret the findings in organizational research.

  12. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster in terms of accuracy in classification and efficiency in computation.
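
    The MapReduce parallelization the abstract describes can be sketched in miniature: each map task computes a partial gradient on its data shard, and a reduce step averages the partials before a single weight update. This is an illustrative stand-in (a single linear neuron, synthetic data, hypothetical function names), not the authors' implementation:

```python
# MapReduce-style data-parallel training sketch (hypothetical, not the
# paper's code): each "map" task computes the mean squared-error gradient
# for a one-neuron model on its shard; "reduce" averages the partials.
import random

def predict(w, b, x):
    return w * x + b

def map_partial_gradient(shard, w, b):
    # Gradient of the mean squared error over one data shard.
    gw = gb = 0.0
    for x, y in shard:
        err = predict(w, b, x) - y
        gw += 2 * err * x
        gb += 2 * err
    n = len(shard)
    return gw / n, gb / n

def reduce_average(partials):
    gws, gbs = zip(*partials)
    return sum(gws) / len(gws), sum(gbs) / len(gbs)

# Synthetic data y = 3x + 1, split into four equal shards as a
# MapReduce job would partition its input.
random.seed(0)
data = [(x, 3 * x + 1) for x in [random.uniform(-1, 1) for _ in range(400)]]
shards = [data[i::4] for i in range(4)]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    partials = [map_partial_gradient(s, w, b) for s in shards]
    gw, gb = reduce_average(partials)
    w, b = w - lr * gw, b - lr * gb

print(round(w, 2), round(b, 2))  # w and b approach the true 3 and 1
```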

  13. Interdisciplinary Approach to the Mental Lexicon: Neural Network and Text Extraction From Long-term Memory

    Directory of Open Access Journals (Sweden)

    Vardan G. Arutyunyan

    2013-01-01

    Full Text Available The paper touches upon the principles of mental lexicon organization in the light of recent research in psycho- and neurolinguistics. As a focal point of discussion two main approaches to mental lexicon functioning are considered: modular or dual-system approach, developed within generativism and opposite single-system approach, representatives of which are the connectionists and supporters of network models. The paper is an endeavor towards advocating the viewpoint that mental lexicon is complex psychological organization based upon specific composition of neural network. In this regard, the paper further elaborates on the matter of storing text in human mental space and introduces a model of text extraction from long-term memory. Based upon data available, the author develops a methodology of modeling structures of knowledge representation in the systems of artificial intelligence.

  14. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  15. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.
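
    The short-term synaptic dynamics named in the abstract (STF and STD) are commonly modeled with Tsodyks-Markram-style variables; a minimal Euler integration, with illustrative parameters not taken from the paper, looks like:

```python
# Euler integration of Tsodyks-Markram-style short-term facilitation (u)
# and depression (x) variables driven by a constant presynaptic rate r.
# All parameter values are illustrative assumptions.
def simulate_stp(r=30.0, U=0.1, tau_f=1.0, tau_d=0.2, T=2.0, dt=0.001):
    u, x = U, 1.0
    for _ in range(int(T / dt)):
        du = (U - u) / tau_f + U * (1.0 - u) * r   # facilitation builds with rate
        dx = (1.0 - x) / tau_d - u * x * r          # resources deplete with use
        u += du * dt
        x += dx * dt
    return u, x  # approximate steady-state facilitation and depression

u_ss, x_ss = simulate_stp()
# Effective synaptic efficacy scales with u * x: facilitation raises u
# above its baseline U while depression pushes x below 1.
print(round(u_ss, 3), round(x_ss, 3))
```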

  16. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  17. Determining the confidence levels of sensor outputs using neural networks

    International Nuclear Information System (INIS)

    Broten, G.S.; Wood, H.C.

    1995-01-01

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with the output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used, with no momentum terms, in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. It has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response, and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to configurations other than arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in
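
    The training setup the abstract reports (a three-layer back-propagation network, learning rate 0.1, no momentum) can be sketched on a toy task; the XOR problem stands in for the sensor-array data, which is not reproduced here:

```python
# Toy three-layer back-propagation network (one hidden layer), trained
# with the learning rate the abstract reports (0.1) and no momentum.
# The XOR task is a stand-in for the sensor confidence-level data.
import math, random

random.seed(1)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

n_in, n_hid = 2, 4
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

lr = 0.1
for _ in range(20000):
    x, t = random.choice(data)
    # Forward pass.
    h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j]) for j in range(n_hid)]
    y = sig(sum(w * hj for w, hj in zip(W2, h)) + b2)
    # Backward pass: squared-error delta through the sigmoid output.
    dy = (y - t) * y * (1 - y)
    for j in range(n_hid):
        dh = dy * W2[j] * h[j] * (1 - h[j])   # use W2[j] before updating it
        W2[j] -= lr * dy * h[j]
        for i in range(n_in):
            W1[j][i] -= lr * dh * x[i]
        b1[j] -= lr * dh
    b2 -= lr * dy

preds = []
for x, t in data:
    h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j]) for j in range(n_hid)]
    preds.append(round(sig(sum(w * hj for w, hj in zip(W2, h)) + b2)))
print(preds)  # ideally [0, 1, 1, 0]
```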

  18. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  19. Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2015-09-01

    Full Text Available The Global Positioning System (GPS) has recently been used widely for structures and other applications. Nevertheless, GPS accuracy still suffers from the errors afflicting the measurements, particularly the short-period displacement of structural components. Previously, a multi-filter method was utilized to remove the displacement errors. This paper aims at a novel application of neural network prediction models to improve GPS monitoring time series data. Four prediction models based on different learning algorithms are applied with neural network solutions: back-propagation, cascade-forward back-propagation, adaptive filtering and extended Kalman filtering, to establish which model can be recommended. Simulated noise and a bridge's short-period GPS monitoring displacement component, sampled at one Hz, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is suggested for de-noising the observations, specifically for the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures with low-frequency responses and measurement contents.

  20. Application of particle swarm optimization to identify gamma spectrum with neural network

    International Nuclear Information System (INIS)

    Shi Dongsheng; Di Yuming; Zhou Chunlin

    2007-01-01

    In applying neural networks to the identification of gamma spectra, the back-propagation (BP) algorithm is usually trapped in a local optimum and has a low speed of convergence, whereas particle swarm optimization (PSO) is advantageous for global optimum searching. In this paper, we propose a new algorithm for neural network training that combines BP and PSO optimization, the PSO-BP algorithm. A practical example shows that the new algorithm can overcome the shortcomings of the BP algorithm, and that the neural network trained by it has a high ability to generalize, with an identification result of 100% correctness. It can be used effectively and reliably to identify gamma spectra. (authors)
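
    The combined PSO-BP idea can be sketched as a two-phase optimization: PSO performs the global search over weight space, then gradient descent (standing in for back-propagation) refines the result. The model, data and PSO constants below are illustrative assumptions:

```python
# Sketch of the PSO-BP combination: particle swarm search provides a
# starting point in weight space, then plain gradient descent refines it.
# A one-neuron model fitting y = 2x - 0.5 stands in for the spectrum net.
import random

random.seed(2)
data = [(x / 10.0, 2.0 * (x / 10.0) - 0.5) for x in range(-10, 11)]

def loss(w, b):
    s = 0.0
    for x, y in data:
        e = w * x + b - y
        s += e * e
    return s / len(data)

# --- PSO phase: global search over (w, b) ---
n, dims = 20, 2
pos = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: loss(*p))[:]
for _ in range(100):
    for i in range(n):
        for d in range(dims):
            vel[i][d] = (0.7 * vel[i][d]                                  # inertia
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if loss(*pos[i]) < loss(*pbest[i]):
            pbest[i] = pos[i][:]
            if loss(*pbest[i]) < loss(*gbest):
                gbest = pbest[i][:]

# --- BP phase: local gradient refinement from the PSO optimum ---
w, b = gbest
for _ in range(200):
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - 0.1 * gw, b - 0.1 * gb

print(round(w, 2), round(b, 2))  # close to the true (2.0, -0.5)
```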

  1. Short-Term Wind Power Forecasting Using the Enhanced Particle Swarm Optimization Based Hybrid Method

    Directory of Open Access Journals (Sweden)

    Wen-Yeau Chang

    2013-09-01

    Full Text Available High penetration of wind power in the electricity system presents many challenges to power system operators, mainly due to the unpredictability and variability of wind power generation. Although wind energy may not be dispatched, an accurate forecasting method for wind speed and power generation can help power system operators reduce the risk of an unreliable electricity supply. This paper proposes an enhanced particle swarm optimization (EPSO) based hybrid forecasting method for short-term wind power forecasting. The hybrid forecasting method combines the persistence method, the back-propagation neural network, and the radial basis function (RBF) neural network. The EPSO algorithm is employed to optimize the weight coefficients in the hybrid forecasting method. To demonstrate its effectiveness, the method is tested on practical wind power generation data from a wind energy conversion system (WECS) installed on the Taichung coast of Taiwan. Comparisons of forecasting performance are made with the individual forecasting methods. Good agreement between the realistic values and the forecast values is obtained; the test results show the proposed forecasting method is accurate and reliable.
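
    The hybrid combination the abstract describes can be illustrated with a weighted blend of three simple predictors: persistence as in the paper, plus two stand-ins for the BP and RBF networks. A coarse grid search takes the place of the enhanced PSO weight optimization:

```python
# Hybrid forecast blending sketch: three predictors combined with
# weights w1 + w2 + w3 = 1, chosen to minimize squared forecast error.
# The series and the two network stand-ins are illustrative.
series = [5.0, 5.5, 6.1, 5.9, 6.4, 7.0, 6.8, 7.3, 7.9, 8.1]

def persistence(hist):          # last observed value (persistence method)
    return hist[-1]

def trend(hist):                # stand-in for one trained network
    return hist[-1] + (hist[-1] - hist[-2])

def mean3(hist):                # stand-in for the other network
    return sum(hist[-3:]) / 3.0

def blended_error(w1, w2, w3):
    err = 0.0
    for t in range(3, len(series)):
        hist = series[:t]
        pred = w1 * persistence(hist) + w2 * trend(hist) + w3 * mean3(hist)
        err += (pred - series[t]) ** 2
    return err

# Coarse grid search over the weight simplex (EPSO stand-in).
best = None
steps = [i / 10.0 for i in range(11)]
for w1 in steps:
    for w2 in steps:
        w3 = 1.0 - w1 - w2
        if w3 < 0:
            continue
        e = blended_error(w1, w2, w3)
        if best is None or e < best[0]:
            best = (e, w1, w2, w3)

print("weights:", best[1:], "sse:", round(best[0], 3))
```

    The blend can do no worse on the tuning window than persistence alone, since the pure-persistence weighting is one of the candidates searched.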

  2. Global exponential synchronization of inertial memristive neural networks with time-varying delay via nonlinear controller.

    Science.gov (United States)

    Gong, Shuqing; Yang, Shaofu; Guo, Zhenyuan; Huang, Tingwen

    2018-06-01

    The paper is concerned with the synchronization problem of inertial memristive neural networks with time-varying delay. First, by choosing a proper variable substitution, inertial memristive neural networks described by second-order differential equations can be transformed into first-order differential equations. Then, a novel controller with a linear diffusive term and a discontinuous sign term is designed. Using this controller, sufficient conditions for assuring the global exponential synchronization of the drive and response neural networks are derived based on Lyapunov stability theory and some inequality techniques. Finally, several numerical simulations are provided to substantiate the effectiveness of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Neural networks at the Tevatron

    International Nuclear Information System (INIS)

    Badgett, W.; Burkett, K.; Campbell, M.K.; Wu, D.Y.; Bianchin, S.; DeNardi, M.; Pauletta, G.; Santi, L.; Caner, A.; Denby, B.; Haggerty, H.; Lindsey, C.S.; Wainer, N.; Dall'Agata, M.; Johns, K.; Dickson, M.; Stanco, L.; Wyss, J.L.

    1992-10-01

    This paper summarizes neural network applications at the Fermilab Tevatron, including the first online hardware application in high energy physics (muon tracking): the CDF and D0 neural network triggers, offline quark/gluon discrimination at CDF, and a new tool for top-to-multijet recognition at CDF.

  4. Short-term memory in the service of executive control functions

    OpenAIRE

    Farshad Alizadeh Mansouri; Marcello eRosa; Nafiseh eAtapour

    2015-01-01

    Short-term memory is a crucial cognitive function for supporting on-going and upcoming behaviours, allowing storage of information across delay periods. The content of this memory may typically include tangible information about features such as the shape, colour or texture of an object, its location and motion relative to the body, or phonological information. The neural correlate of these short-term memories has been found in different brain areas involved in organizing perceptual or motor ...

  5. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  6. Application of neural network to CT

    International Nuclear Information System (INIS)

    Ma, Xiao-Feng; Takeda, Tatsuoki

    1999-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multilayer neural network. Multilayer neural networks have been extensively investigated and practically applied to various problems such as inverse problems and time-series prediction. By learning an input-output mapping from a set of examples, a neural network can be regarded as synthesizing an approximation of a multidimensional function (that is, solving the problem of hypersurface reconstruction, including smoothing and interpolation). From this viewpoint, neural networks are well suited to CT image reconstruction. Although the objective function of a neural network is conventionally composed of a sum of squared errors of the output data, we can instead define an objective function composed of a sum of residues of an integral equation. By employing an appropriate line integral in this integral equation, we can construct a neural network that can be used for CT. We applied this method to some model problems and obtained satisfactory results. Since the integral equation need not be discretized in this reconstruction method, application to problems with complicated geometrical shapes is also feasible. Moreover, because interpolation in neural networks is performed quite smoothly, inverse mapping can be achieved smoothly even in the presence of experimental and numerical errors. However, use of the conventional back-propagation technique for optimization leads to an expensive computational cost. To overcome this drawback, second-order optimization methods or parallel computing will be applied in the future. (J.P.N.)
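
    The paper's objective function (a sum of residues of the line-integral equation) can be illustrated on a tiny 2x2 image; for brevity a direct pixel parameterization replaces the multilayer network here, but the residual loss and its gradient-descent minimization keep the same form:

```python
# Reconstruction from line integrals by minimizing the sum of squared
# residues of the projection equations, on a 2x2 toy image. The pixel
# parameterization is a simplification standing in for the network.
true_img = [1.0, 2.0, 3.0, 4.0]   # pixels [a, b; c, d] flattened

# Rays: two row sums and two column sums; each ray lists the pixels it hits.
rays = [[0, 1], [2, 3], [0, 2], [1, 3]]
measured = [sum(true_img[i] for i in ray) for ray in rays]

img = [0.0] * 4
lr = 0.1
for _ in range(500):
    grad = [0.0] * 4
    for ray, m in zip(rays, measured):
        residual = sum(img[i] for i in ray) - m   # line-integral residue
        for i in ray:
            grad[i] += 2 * residual
    img = [p - lr * g for p, g in zip(img, grad)]

print([round(p, 2) for p in img])  # → [1.0, 2.0, 3.0, 4.0]
```

    Starting from zero keeps the iterate in the row space of the projection operator, so gradient descent recovers the minimum-norm solution, which here coincides with the true image.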

  7. Chromatin accessibility prediction via convolutional long short-term memory networks with k-mer embedding.

    Science.gov (United States)

    Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui

    2017-07-15

    Experimental techniques for measuring chromatin accessibility are expensive and time-consuming, motivating the development of computational approaches to predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, there still lacks a comprehensive framework to integrate useful k-mer co-occurrence information with recent advances in deep learning. We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprised of an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm . Contact: tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn. Supplementary materials are available at
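
    The k-mer preprocessing stage described above can be sketched directly: split a sequence into overlapping k-mers and count co-occurrences within a context window, producing the matrix on which embeddings would be pre-trained. The embedding training and the network itself are omitted, and k and the window size are illustrative:

```python
# k-mer tokenization and co-occurrence counting, the preprocessing step
# before embedding pre-training. Window size and k are assumptions.
from collections import defaultdict

def kmers(seq, k=3):
    # Overlapping k-mers: one token per position.
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def cooccurrence(seq, k=3, window=2):
    toks = kmers(seq, k)
    counts = defaultdict(int)
    for i, a in enumerate(toks):
        for j in range(max(0, i - window), min(len(toks), i + window + 1)):
            if j != i:
                counts[(a, toks[j])] += 1
    return counts

seq = "ACGTACGTGACG"
print(kmers(seq)[:4])          # → ['ACG', 'CGT', 'GTA', 'TAC']
counts = cooccurrence(seq)
print(counts[("ACG", "CGT")])  # → 2
```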

  8. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of the basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks are discussed in detail through illustrative examples, methods and generic applications. The extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank.

  9. Synaptic energy drives the information processing mechanisms in spiking neural networks.

    Science.gov (United States)

    El Laithy, Karim; Bogdan, Martin

    2014-04-01

    Flow of energy and free energy minimization underpins almost every aspect of naturally occurring physical mechanisms. Inspired by this fact this work establishes an energy-based framework that spans the multi-scale range of biological neural systems and integrates synaptic dynamic, synchronous spiking activity and neural states into one consistent working paradigm. Following a bottom-up approach, a hypothetical energy function is proposed for dynamic synaptic models based on the theoretical thermodynamic principles and the Hopfield networks. We show that a synapse exposes stable operating points in terms of its excitatory postsynaptic potential as a function of its synaptic strength. We postulate that synapses in a network operating at these stable points can drive this network to an internal state of synchronous firing. The presented analysis is related to the widely investigated temporal coherent activities (cell assemblies) over a certain range of time scales (binding-by-synchrony). This introduces a novel explanation of the observed (poly)synchronous activities within networks regarding the synaptic (coupling) functionality. On a network level the transitions from one firing scheme to the other express discrete sets of neural states. The neural states exist as long as the network sustains the internal synaptic energy.

  10. Re-evaluation of the AASHTO-flexible pavement design equation with neural network modeling.

    Science.gov (United States)

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model gives the coefficients needed to obtain the actual load values from the AASHTO design values. Thus, those design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the Strategic Highway Research Program (Long-Term Pavement Performance) test sections very accurately, and typically over-estimates the number of equivalent single-axle loads needed to cause a measured loss of the present serviceability index. Here we aim to demonstrate that the proposed neural network model can represent the load values more accurately than the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based nonparametric models of pavement performance.

  11. Selected aspects of modelling of foreign exchange rates with neural networks

    Directory of Open Access Journals (Sweden)

    Václav Mastný

    2005-01-01

    Full Text Available This paper deals with forecasting the high-frequency foreign exchange market with neural networks. The objective is to investigate some aspects of modelling with neural networks (the impact of topology, the size of the training set, and the time horizon of the forecast) on the performance of the network. The data used for the purpose of this paper contain 15-minute time series of the US dollar against other major currencies: the Japanese yen, the British pound and the euro. The results show that the performance of the network, in terms of correct directional change, is negatively influenced by an increasing number of hidden neurons and a decreasing size of the training set. The performance of the network is also influenced by the sampling frequency.
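
    The evaluation criterion mentioned above, the share of correctly predicted directional changes, can be computed as follows; the series values are illustrative, not from the paper:

```python
# Directional-change accuracy: the fraction of steps where the forecast
# moves in the same direction (up/down) as the actual series.
def directional_accuracy(actual, predicted):
    hits = 0
    pairs = 0
    for t in range(1, len(actual)):
        actual_up = actual[t] > actual[t - 1]
        predicted_up = predicted[t] > actual[t - 1]  # direction vs last observation
        hits += actual_up == predicted_up
        pairs += 1
    return hits / pairs

actual = [1.10, 1.12, 1.11, 1.15, 1.14, 1.16]
forecast = [1.10, 1.11, 1.13, 1.14, 1.15, 1.15]
print(round(directional_accuracy(actual, forecast), 2))  # → 0.8
```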

  12. PERFORMANCE EVALUATION OF VARIANCES IN BACKPROPAGATION NEURAL NETWORK USED FOR HANDWRITTEN CHARACTER RECOGNITION

    OpenAIRE

    Vairaprakash Gurusamy *1 & K.Nandhini2

    2017-01-01

    A Neural Network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform "intelligent" tasks similar to those performed by the human brain. Back propagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. The term back pro...

  13. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Full Text Available Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster in terms of accuracy in classification and efficiency in computation.

  14. Qualitative similarities in the visual short-term memory of pigeons and people.

    Science.gov (United States)

    Gibson, Brett; Wasserman, Edward; Luck, Steven J

    2011-10-01

    Visual short-term memory plays a key role in guiding behavior, and individual differences in visual short-term memory capacity are strongly predictive of higher cognitive abilities. To provide a broader evolutionary context for understanding this memory system, we directly compared the behavior of pigeons and humans on a change detection task. Although pigeons had a lower storage capacity and a higher lapse rate than humans, both species stored multiple items in short-term memory and conformed to the same basic performance model. Thus, despite their very different evolutionary histories and neural architectures, pigeons and humans have functionally similar visual short-term memory systems, suggesting that the functional properties of visual short-term memory are subject to similar selective pressures across these distant species.
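
    The abstract does not give the performance model's equations; a capacity estimate commonly used with single-probe change detection is Cowan's K (set size times hit rate minus false-alarm rate), shown here with illustrative numbers:

```python
# Cowan's K capacity estimate for single-probe change detection.
# The example rates are illustrative, not data from the study.
def cowans_k(set_size, hit_rate, false_alarm_rate):
    return set_size * (hit_rate - false_alarm_rate)

# A human-like observer vs. a lower-capacity observer at set size 6.
print(round(cowans_k(6, 0.80, 0.15), 2))  # → 3.9 items
print(round(cowans_k(6, 0.55, 0.25), 2))  # → 1.8 items
```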

  15. An Efficient Neural-Network-Based Microseismic Monitoring Platform for Hydraulic Fracture on an Edge Computing Architecture

    Directory of Open Access Journals (Sweden)

    Xiaopu Zhang

    2018-06-01

    Full Text Available Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. To detect events in an accurate and efficient way, there are two major challenges. One challenge is how to achieve high accuracy due to a poor signal-to-noise ratio (SNR. The other one is concerned with real-time data transmission. Taking these challenges into consideration, an edge-computing-based platform, namely Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center with many edge components. At the data center, a neural network model combined with convolutional neural network (CNN and long short-term memory (LSTM is designed and this model is trained by using previously obtained data. Once the model is fully trained, it is sent to edge components for events detection and data reduction. At each edge component, a probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data is delivered to the data center. Based on experiment results, a high detection accuracy (over 96% with less transmitted data (about 90% was achieved by using the proposed approach on a microseismic monitoring system. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring.

  16. An Efficient Neural-Network-Based Microseismic Monitoring Platform for Hydraulic Fracture on an Edge Computing Architecture.

    Science.gov (United States)

    Zhang, Xiaopu; Lin, Jun; Chen, Zubin; Sun, Feng; Zhu, Xi; Fang, Gengfa

    2018-06-05

    Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. To detect events in an accurate and efficient way, there are two major challenges. One challenge is how to achieve high accuracy due to a poor signal-to-noise ratio (SNR). The other one is concerned with real-time data transmission. Taking these challenges into consideration, an edge-computing-based platform, namely Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center with many edge components. At the data center, a neural network model combined with convolutional neural network (CNN) and long short-term memory (LSTM) is designed and this model is trained by using previously obtained data. Once the model is fully trained, it is sent to edge components for events detection and data reduction. At each edge component, a probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data is delivered to the data center. Based on experiment results, a high detection accuracy (over 96%) with less transmitted data (about 90%) was achieved by using the proposed approach on a microseismic monitoring system. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring.

  17. Neural Computations in a Dynamical System with Multiple Time Scales

    Directory of Open Access Journals (Sweden)

    Yuanyuan Mi

    2016-09-01

    Full Text Available Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at single neurons, and short-term facilitation (STF) and depression (STD) at neuronal synapses. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit for the brain to have such variability in short-term dynamics is. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in their dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  18. Field-theoretic approach to fluctuation effects in neural networks

    International Nuclear Information System (INIS)

    Buice, Michael A.; Cowan, Jack D.

    2007-01-01

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginsburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience

  19. A novel delay-dependent criterion for delayed neural networks of neutral type

    International Nuclear Information System (INIS)

    Lee, S.M.; Kwon, O.M.; Park, Ju H.

    2010-01-01

    This Letter considers a robust stability analysis method for delayed neural networks of neutral type. By constructing a new Lyapunov functional, a novel delay-dependent criterion for stability is derived in terms of LMIs (linear matrix inequalities). A less conservative stability criterion is derived by using nonlinear properties of the activation function of the neural networks. Two numerical examples illustrate the effectiveness of the proposed method.

  20. The effect of the neural activity on topological properties of growing neural networks.

    Science.gov (United States)

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed; it is a source of the complex spatiotemporal patterns of the network's development, and the creation and deletion of connections continues throughout the organism's life. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the neural network growth process from disconnected neurons to fully connected networks. To quantify the influence of the network's activity on its topological properties, we compared the model with a random growth network that does not depend on network activity. Analysis of the network's connection structure with methods from random graph theory shows that growth in neural networks results in the formation of a well-known "small-world" network.
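Small-world structure is usually diagnosed by a high clustering coefficient together with a short average path length. A self-contained sketch of both metrics for an undirected graph given as an adjacency dict (the 4-node example graph is illustrative):

```python
from collections import deque
from itertools import combinations

def clustering(adj):
    """Average local clustering coefficient of an undirected graph."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count edges among v's neighbours
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all connected node pairs (BFS)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# a 4-node ring 0-1-2-3-0 plus the chord 0-2
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
```

For this toy graph the average clustering is 5/6 and the average path length is 7/6; a grown network is "small-world" when its clustering stays high while its path length stays close to that of a matched random graph.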

  1. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANN, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise and it does not depend on sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.
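A NARX model predicts the current output from lagged outputs and lagged exogenous inputs. As a hedged sketch, the example below builds NARX-style lagged regressors and fits them with linear least squares standing in for the trained network; the data are a synthetic ARX system, not BOLD signals.

```python
import numpy as np

def narx_features(u, y, nu=2, ny=2):
    """Stack lagged inputs u[t-nu..t-1] and outputs y[t-ny..t-1] as regressors."""
    rows = []
    lag = max(nu, ny)
    for t in range(lag, len(y)):
        rows.append(np.concatenate([u[t - nu:t], y[t - ny:t]]))
    return np.array(rows), y[lag:]

rng = np.random.default_rng(0)
u = rng.standard_normal(300)
y = np.zeros(300)
for t in range(2, 300):                 # a known stable ARX system to recover
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1]

X, target = narx_features(u, y)
w, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ w
```

Because the toy system is exactly linear in the lagged features, least squares recovers the true coefficients; a NARX network replaces the linear read-out with a nonlinear map over the same lagged inputs.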

  2. Neural network based method for conversion of solar radiation data

    International Nuclear Information System (INIS)

    Celik, Ali N.; Muneer, Tariq

    2013-01-01

    advantage of the neural network approach over the conventional one. In terms of the coefficient of determination, the neural network model results in a value of 0.987 whereas the isotropic and anisotropic approaches result in values of 0.959 and 0.966, respectively. On the other hand, the isotropic and anisotropic approaches give relative mean absolute error values of 11.4% and 11.5%, respectively, while that of the neural network model is 9.1%
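The two figures of merit quoted above can be computed directly; the toy arrays below are illustrative values, not the paper's data.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def relative_mae(y_true, y_pred):
    """Mean absolute error as a percentage of the mean observed value."""
    return 100.0 * np.mean(np.abs(y_true - y_pred)) / np.mean(y_true)

y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 300.0])
# r_squared -> 0.99, relative_mae -> 3.33%
```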

  3. LiteNet: Lightweight Neural Network for Detecting Arrhythmias at Resource-Constrained Mobile Devices

    Directory of Open Access Journals (Sweden)

    Ziyang He

    2018-04-01

    Full Text Available By running applications and services closer to the user, edge processing provides many advantages, such as short response time and reduced network traffic. Deep-learning based algorithms provide significantly better performances than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example to show how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models with equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias, and in its feasibility for use on mobile devices.
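The heterogeneous-filter idea is to apply kernels of several sizes to the same signal and stack the resulting feature maps. A numpy sketch follows; the kernel sizes and random weights are illustrative assumptions, not LiteNet's actual configuration.

```python
import numpy as np

def conv1d_valid(x, k):
    """'valid' 1-D correlation of signal x with kernel k."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def hetero_conv(x, kernel_sizes=(3, 5, 7), rng=None):
    """One feature map per kernel size, zero-padded so all lengths match."""
    rng = rng or np.random.default_rng(0)
    maps = []
    for size in kernel_sizes:
        k = rng.standard_normal(size)
        pad = size // 2                       # 'same' padding for odd sizes
        maps.append(conv1d_valid(np.pad(x, pad), k))
    return np.stack(maps)                     # shape: (num_kernels, len(x))

x = np.sin(np.linspace(0, 6 * np.pi, 128))   # stand-in for one ECG segment
feats = hetero_conv(x)
```

Each row of `feats` responds to structure at a different temporal scale, which is why mixing kernel sizes in one layer yields more varied feature maps than a single fixed size.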

  4. LiteNet: Lightweight Neural Network for Detecting Arrhythmias at Resource-Constrained Mobile Devices.

    Science.gov (United States)

    He, Ziyang; Zhang, Xiaoqing; Cao, Yangjie; Liu, Zhi; Zhang, Bo; Wang, Xiaoyan

    2018-04-17

    By running applications and services closer to the user, edge processing provides many advantages, such as short response time and reduced network traffic. Deep-learning based algorithms provide significantly better performances than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example to show how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models with equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias, and in its feasibility for use on mobile devices.

  5. Artificial Neural Network for Short-Term Load Forecasting in Distribution Systems

    Directory of Open Access Journals (Sweden)

    Luis Hernández

    2014-03-01

    Full Text Available The new paradigms and latest developments in the Electrical Grid are based on the introduction of distributed intelligence at several stages of its physical layer, giving birth to concepts such as Smart Grids, Virtual Power Plants, microgrids, Smart Buildings and Smart Environments. Distributed Generation (DG) is a philosophy in which energy is no longer produced exclusively in huge centralized plants, but also in smaller premises which take advantage of local conditions in order to minimize transmission losses and optimize production and consumption. This represents a new opportunity for renewable energy, because small elements such as solar panels and wind turbines are expected to be scattered along the grid, feeding local installations or selling energy to the grid depending on their local generation/consumption conditions. The introduction of these highly dynamic elements will lead to a substantial change in the curves of demanded energy. The aim of this paper is to apply Short-Term Load Forecasting (STLF) in microgrid environments with similar curves and behaviours, using two different data sets: the first one containing electricity consumption information during four years and six months in a microgrid along with calendar data, while the second one contains just four months of the same parameters along with the solar radiation from the site. For the first set of data different STLF models will be discussed, studying the effect of each variable, in order to identify the best one. That model will be employed with the second set of data, in order to make a comparison with a new model that takes into account the solar radiation, since the photovoltaic installations of the microgrid will cause the power demand to fluctuate depending on the solar radiation.
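The core of an STLF model is regressing the next load value on recent history. A hedged sketch on synthetic load data with a daily cycle (not the paper's microgrid data), comparing a 24-lag least-squares model against the persistence baseline:

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                  # 60 synthetic days, hourly samples
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

L = 24                                      # regressors: the previous 24 hours
X = np.array([load[t - L:t] for t in range(L, load.size)])
y = load[L:]
split = 24 * 50                             # first 50 days for fitting
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)

pred = X[split:] @ w                        # model forecast on held-out days
naive = load[L - 1 + split:-1]              # persistence: last observed value
mae_model = np.mean(np.abs(pred - y[split:]))
mae_naive = np.mean(np.abs(naive - y[split:]))
```

Because the 24-hour lag window spans one full daily cycle, the fitted model captures the periodic shape that persistence misses; a neural STLF model plays the same role with a nonlinear map and extra calendar or radiation inputs.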

  6. Enhancing neural-network performance via assortativity

    International Nuclear Information System (INIS)

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-01-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.
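Degree-degree assortativity is the Pearson correlation of the degrees at the two ends of each edge. A self-contained sketch over an undirected edge list (the star-graph example is illustrative):

```python
def assortativity(edges):
    """Degree assortativity (Pearson r over edge-endpoint degrees).
    Assumes the graph is not degree-regular (variance would be zero)."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    # each undirected edge contributes both (da, db) and (db, da)
    xs, ys = [], []
    for a, b in edges:
        xs += [deg[a], deg[b]]
        ys += [deg[b], deg[a]]
    n = len(xs)
    mx = sum(xs) / n
    cov = sum((x - mx) * (y - mx) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    return cov / var

# a star graph: the hub connects only to leaves -> perfectly disassortative
star = [(0, i) for i in range(1, 5)]
# assortativity(star) -> -1.0
```

Positive values mean hubs preferentially connect to hubs, the regime in which the paper reports enhanced robustness to noise.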

  7. E-I balance emerges naturally from continuous Hebbian learning in autonomous neural networks.

    Science.gov (United States)

    Trapp, Philip; Echeveste, Rodrigo; Gros, Claudius

    2018-06-12

    Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that excitatory (E) and inhibitory (I) drivings in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active and not subject to stochastic external or internal drivings. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, that can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaption of the bias of each neuron's input-output non-linear function. Additionally, networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has been considered hitherto as given, naturally arises in autonomous neural networks when the here considered self-limiting Hebbian synaptic plasticity rule is continuously active.

  8. An Electricity Price Forecasting Model by Hybrid Structured Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Ping-Huan Kuo

    2018-04-01

    Full Text Available Electricity price is a key influencer in the electricity market. Electricity market trades by each participant are based on electricity price. The electricity price adjusted with the change in supply and demand relationship can reflect the real value of electricity in the transaction process. However, for the power generating party, bidding strategy determines the level of profit, and the accurate prediction of electricity price could make it possible to determine a more accurate bidding price. This can not only reduce transaction risk, but also seize opportunities in the electricity market. In order to effectively estimate electricity price, this paper proposes an electricity price forecasting system based on the combination of two deep neural networks, the Convolutional Neural Network (CNN) and the Long Short Term Memory (LSTM). In order to compare the overall performance of each algorithm, the Mean Absolute Error (MAE) and Root-Mean-Square Error (RMSE) evaluating measures were applied in the experiments of this paper. Experiment results show that compared with other traditional machine learning methods, the prediction performance of the estimating model proposed in this paper is proven to be the best. By combining the CNN and LSTM models, the feasibility and practicality of electricity price prediction is also confirmed in this paper.
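The LSTM half of such a hybrid maintains a gated cell state across the price sequence. A single-cell numpy sketch with random, untrained weights (sizes and initialization are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step. W maps [x; h] to the four gate pre-activations."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)       # cell state: gated memory update
    h_new = o * np.tanh(c_new)           # hidden state: gated output
    return h_new, c_new

rng = np.random.default_rng(0)
nx, nh = 3, 5
W = rng.standard_normal((4 * nh, nx + nh)) * 0.1
b = np.zeros(4 * nh)
h = c = np.zeros(nh)
for step_input in rng.standard_normal((10, nx)):   # a toy input sequence
    h, c = lstm_step(step_input, h, c, W, b)
```

In the hybrid architecture, CNN feature maps over a price window would feed sequences like `step_input`, and the final `h` would drive the price read-out.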

  9. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, in which the neural network topology and other parameters were fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
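A genetic algorithm can train a fixed-topology network by evolving its weight vector directly. A minimal mutation-plus-elitism sketch on a toy XOR task (population size, mutation scale and task are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])                 # XOR targets

def forward(w, x):
    """2-2-1 tanh network; w packs both layers' weights and biases (9 numbers)."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1.T + b1)
    return np.tanh(h @ W2 + b2)

def fitness(w):
    return -np.mean((forward(w, X) - T) ** 2)      # higher is better

pop = rng.standard_normal((60, 9))
init_best = max(fitness(w) for w in pop)
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]          # elitism: keep the 10 best
    children = elite[rng.integers(0, 10, 50)] + rng.normal(0, 0.3, (50, 9))
    pop = np.vstack([elite, children])             # mutation-only reproduction
best_fit = max(fitness(w) for w in pop)
```

Elitism guarantees the best fitness never decreases across generations, so the GA needs no gradients at all; the hybrid system in the record combines this global search with gradient-based Back Propagation.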

  10. Graph Theoretical Analysis of Functional Brain Networks: Test-Retest Evaluation on Short- and Long-Term Resting-State Functional MRI Data

    Science.gov (United States)

    Wang, Jin-Hui; Zuo, Xi-Nian; Gohel, Suril; Milham, Michael P.; Biswal, Bharat B.; He, Yong

    2011-01-01

    Graph-based computational network analysis has proven a powerful tool to quantitatively characterize functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term and long-term (5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that reliability of global network metrics was overall low, threshold-sensitive and dependent on several factors of scanning time interval (TI, long-term>short-term), network membership (NM, networks excluding negative correlations>networks including negative correlations) and network type (NT, binarized networks>weighted networks). The dependence was modulated by another factor of node definition (ND) strategy. The local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the above-mentioned four factors. Simulation analysis revealed that global network metrics were extremely sensitive (but to varying degrees) to noise in functional connectivity, and that weighted networks generated numerically more reliable results compared with binarized networks. Nodal network metrics showed high resistance to noise in functional connectivity, and no NT-related differences were found in the resistance. These findings have important implications for how to choose reliable analytical schemes and network metrics of interest. PMID:21818285

  11. Training feed-forward neural networks with gain constraints

    Science.gov (United States)

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
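The penalty idea can be sketched on the simplest possible case: a one-input linear model whose input-output gain is just its slope w. The data pull the gain toward 2, while the penalty term lam * max(0, w - bound)^2 holds it near the bound; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + rng.normal(0, 0.05, 100)    # the data suggest a gain of 2

w, b = 0.0, 0.0
lam, bound, lr = 10.0, 1.0, 0.01          # penalty weight, gain bound, step size
for _ in range(500):
    err = w * x + b - y
    # gradient of MSE plus the penalty lam * max(0, gain - bound)^2,
    # where the model's input-output gain is simply w
    gw = 2 * np.mean(err * x) + 2 * lam * max(0.0, w - bound)
    gb = 2 * np.mean(err)
    w -= lr * gw
    gb_step = lr * gb
    b -= gb_step
# w settles just above the bound, a compromise between data fit and constraint
```

In the paper's setting the gains are partial derivatives of a feedforward network's outputs, but the mechanics are the same: the penalty gradient is added to the data-fit gradient, and its weight must be balanced against the data term when the two conflict.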

  12. Neural Network Based Load Frequency Control for Restructuring ...

    African Journals Online (AJOL)

    Neural Network Based Load Frequency Control for Restructuring Power Industry. ... an artificial neural network (ANN) application of load frequency control (LFC) of a Multi-Area power system by using a neural network controller is presented.

  13. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in the forex business. Forecasting decisions are critical, because predictions help estimate forex values at given future times and can therefore reduce the risk of losses. The aim of this study was to predict the forex market using a neural network model with one-minute time-series data, in order to determine the prediction accuracy and thereby reduce the risk of running a forex business. The research method comprised data collection followed by training, learning and testing using a neural network. After evaluation, the results show that the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk in running a forex business. Keywords: prediction, forex, neural network.

  14. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  15. Web Page Classification Method Using Neural Networks

    Science.gov (United States)

    Selamat, Ali; Omatu, Sigeru; Yanagimoto, Hidekazu; Fujinaka, Toru; Yoshioka, Michifumi

    Automatic categorization is the only viable method to deal with the scaling problem of the World Wide Web (WWW). In this paper, we propose a news web page classification method (WPCM). The WPCM uses a neural network with inputs obtained by both the principal components and class profile-based features (CPBF). Each news web page is represented by the term-weighting scheme. As the number of unique words in the collection set is large, the principal component analysis (PCA) has been used to select the most relevant features for the classification. Then the final output of the PCA is combined with the feature vectors from the class-profile which contains the most regular words in each class before feeding them to the neural networks. We have manually selected the most regular words that exist in each class and weighted them using an entropy weighting scheme. The fixed number of regular words from each class will be used as a feature vectors together with the reduced principal components from the PCA. These feature vectors are then used as the input to the neural networks for classification. The experimental evaluation demonstrates that the WPCM method provides acceptable classification accuracy with the sports news datasets.
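The PCA step can be sketched with numpy's SVD: center the document-term matrix and project onto the leading right singular vectors. The toy matrix below is illustrative, not the news dataset.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components.
    Returns the scores and the singular values (sorted descending)."""
    Xc = X - X.mean(axis=0)                   # center each term (column)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, S

rng = np.random.default_rng(0)
# 20 "documents" over 8 "terms", with most variance in one direction
base = rng.standard_normal((20, 1)) @ rng.standard_normal((1, 8))
X = base + 0.1 * rng.standard_normal((20, 8))
scores, S = pca_reduce(X, 2)
```

The reduced `scores` (here 2 components instead of 8 terms) would then be concatenated with the class-profile feature vector before being fed to the classifier network.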

  16. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  17. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451

  18. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments.
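The link between asymmetric connections and oscillation can be illustrated directly at the linear level: the symmetric part of a weight matrix has purely real eigenvalues (gradient-like relaxation), while the antisymmetric part contributes purely imaginary pairs (rotational flux). A minimal numpy check with a random matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

W_sym = (A + A.T) / 2          # symmetric part: gradient-like dynamics
W_asym = (A - A.T) / 2         # antisymmetric part: rotational flux

eig_sym = np.linalg.eigvals(W_sym)
eig_asym = np.linalg.eigvals(W_asym)
# symmetric: all eigenvalues real; antisymmetric: purely imaginary pairs
```

This is why Hopfield's symmetric networks relax to fixed points, while the asymmetric (flux-carrying) component is needed for the coherent oscillations the paper describes.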

  19. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in the artificial neural network designed, trained and tested. The robust design of artificial neural networks methodology was used to design the neural network; this methodology ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time to calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  20. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    International Nuclear Information System (INIS)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R.

    2006-01-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in the artificial neural network designed, trained and tested. The robust design of artificial neural networks methodology was used to design the neural network; this methodology ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time to calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  1. Neural networks within multi-core optic fibers.

    Science.gov (United States)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  2. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, artificial neural networks (ANNs) have made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using genetic algorithms (GAs) to optimize the topology of the ANN, but they suffer from some limitations: (1) the computation time required to train the ANN several times until the required average weights are reached, (2) the slowness of the GA optimization process, and (3) fitness noise appearing in the optimization of the ANN. This research suggests new issues to overcome these limitations for finding optimal neural network architectures to learn particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case of real complex systems. The proposed system has proved its significant performance compared to two common methods used in diagnostic applications.

  3. An adaptive short-term prediction scheme for wind energy storage management

    International Nuclear Information System (INIS)

    Blonbou, Ruddy; Monjoly, Stephanie; Dorville, Jean-Francois

    2011-01-01

    Research highlights: → We develop a real time algorithm for grid-connected wind energy storage management. → The method aims to guarantee, with ±5% error margin, the power sent to the grid. → Dynamic scheduling of energy storage is based on short-term energy prediction. → Accurate predictions reduce the need for storage capacity. -- Abstract: An efficient forecasting scheme that includes some information on the likelihood of the forecast, and is based on a better knowledge of the wind variation characteristics along with their influence on power output variation, is of key importance for the optimal integration of wind energy in an island's power system. In the Guadeloupean archipelago (French West-Indies), with a total wind power capacity of 25 MW, wind energy can represent up to 5% of the instantaneous electricity production. At this level, wind energy contribution can be equivalent to the current network primary control reserve, which makes balancing difficult. The share of wind energy is due to grow even further since the objective is set to reach 118 MW by 2020. For the network operator it is absolutely evident that, due to security concerns of the electrical grid, the share of wind generation should not increase unless solutions are found to solve the prediction problem. The University of French West-Indies and Guyana has developed a short-term wind energy prediction scheme that uses artificial neural networks and adaptive learning procedures based on a Bayesian approach and Gaussian approximation. This paper reports the results of the evaluation of the proposed approach; the improvement with respect to the simple persistent prediction model was globally good. A discussion on how such a tool combined with energy storage capacity could help to smooth the wind power variation and improve the wind energy penetration rate into the island utility network is also proposed.

  4. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. At the same time, growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  5. Estimation of break location and size for loss of coolant accidents using neural networks

    International Nuclear Information System (INIS)

    Na, Man Gyun; Shin, Sun Ho; Jung, Dong Won; Kim, Soong Pyung; Jeong, Ji Hwan; Lee, Byung Chul

    2004-01-01

    In this work, a probabilistic neural network (PNN), which has been applied successfully to classification problems, is used to identify the break locations of loss of coolant accidents (LOCA), such as hot-leg, cold-leg and steam generator tubes. Also, a fuzzy neural network (FNN) is designed to estimate the break size. The inputs to the PNN and FNN are time-integrated values obtained by integrating measurement signals during a short time interval after reactor scram. An automatic structure constructor for the fuzzy neural network automatically selects the input variables from the time-integrated values of many measured signals, and optimizes the number of rules and their related parameters. It is verified that the proposed algorithm identifies the break locations of LOCAs very well and also estimates their break sizes accurately.
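
    A PNN of the kind described above is, in essence, a Parzen-window classifier: each class score is an average of Gaussian kernels centered on that class's training vectors, and the class with the highest score wins. A minimal sketch (the 2-D feature vectors and class labels below are invented toys, not the paper's actual reactor signals):

    ```python
    import math

    def pnn_classify(x, train, sigma=0.5):
        """Classify x by the per-class average of Gaussian kernels (Parzen estimate)."""
        scores = {}
        for label, samples in train.items():
            s = 0.0
            for p in samples:
                d2 = sum((xi - pi) ** 2 for xi, pi in zip(x, p))
                s += math.exp(-d2 / (2 * sigma ** 2))
            scores[label] = s / len(samples)          # class-conditional density estimate
        return max(scores, key=scores.get)            # maximum a posteriori class

    # toy "break location" training patterns: two classes of 2-D feature vectors
    train = {"hot-leg": [(0.0, 0.0), (0.1, 0.2)],
             "cold-leg": [(1.0, 1.0), (0.9, 1.1)]}
    print(pnn_classify((0.05, 0.1), train))  # → hot-leg
    ```

    The smoothing width `sigma` is the only free parameter; in practice it is tuned on held-out data.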

  6. Memory and pattern storage in neural networks with activity dependent synapses

    Science.gov (United States)

    Mejias, J. F.; Torres, J. J.

    2009-01-01

    We present recently obtained results on the influence of the interplay between several activity dependent synaptic mechanisms, such as short-term depression and facilitation, on the maximum memory storage capacity in an attractor neural network [1]. In contrast with the case of synaptic depression, which drastically reduces the capacity of the network to store and retrieve activity patterns [2], synaptic facilitation is able to enhance the memory capacity in different situations. In particular, we find that a convenient balance between depression and facilitation can enhance the memory capacity, reaching maximal values similar to those obtained with static synapses, that is, without activity-dependent processes. We also argue, employing simple arguments, that this level of balance is compatible with experimental data recorded from some cortical areas, where depression and facilitation may play an important role for both memory-oriented tasks and information processing. We conclude that depressing synapses with a certain level of facilitation make it possible to recover the good retrieval properties of networks with static synapses while maintaining the nonlinear properties of dynamic synapses, which are convenient for information processing and coding.
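
    Interacting depression and facilitation of this kind are commonly modeled with Tsodyks-Markram-style dynamics: a utilization variable u jumps up (facilitates) on each spike, a resource variable x is depleted, and both relax between spikes. A rough sketch (parameter values are illustrative, not taken from the cited work):

    ```python
    import math

    def tm_synapse(spike_times, U=0.2, tau_d=0.5, tau_f=1.0):
        """Per-spike synaptic efficacies u*x for a depressing/facilitating synapse."""
        u, x, last = U, 1.0, None
        out = []
        for t in spike_times:
            if last is not None:
                dt = t - last
                x = 1.0 - (1.0 - x) * math.exp(-dt / tau_d)  # resources recover toward 1
                u = U + (u - U) * math.exp(-dt / tau_f)      # facilitation decays toward U
            u = u + U * (1.0 - u)   # facilitation jump on spike arrival
            out.append(u * x)       # released fraction = momentary efficacy
            x = x * (1.0 - u)       # resource depletion (depression)
            last = t
        return out

    eff = tm_synapse([0.0, 0.05, 0.1, 0.15])  # 20 Hz spike train
    print(eff)  # depression dominates at this rate: efficacies shrink spike by spike
    ```

    With other parameter choices (larger tau_f, smaller U) the same equations produce net facilitation, which is the balance the abstract exploits.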

  7. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

    In this paper, a method for fault location in power distribution networks is presented. The proposed method uses an artificial neural network. In order to train the neural network, a series of specific characteristics is extracted from the fault signals recorded in the relay. These characteristics...... components of the sequences as well as three-phase signals could be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the obtained inputs for the training of the neural network strongly depend on the fault angle, fault...... resistance, and fault location, the training data should be selected such that these differences are properly represented, so that the neural network does not face any issues in identification. Therefore, selecting the signal processing function, data spectrum and subsequently, statistical parameters...

  8. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  9. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan

    2016-11-02

    In neuromorphic circuits, stochasticity in the cortex can be mapped onto the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being studied extensively using resistive memories, or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from size and area, the main points of comparison between the two approaches, and for deciding where the memristor best fits in, are their impact on system performance in terms of accuracy, recognition rates, and learning.

  10. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the framework of neurobiology, introducing the concept of the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classification, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  11. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. In light of the well-known disconnection theory, geriatric depression could be a useful model for studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks, as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  12. Application of neural networks in coastal engineering - An overview

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Manjunatha, Y.R.; Hegde, A.V.

    Artificial Neural Network (ANN) is being applied to solve a wide variety of coastal/ocean engineering problems. In practical terms ANNs are non-linear modeling tools and they can be used to model complex relationship between the input and output...

  13. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  14. Human Age Recognition by Electrocardiogram Signal Based on Artificial Neural Network

    Science.gov (United States)

    Dasgupta, Hirak

    2016-12-01

    The objective of this work is to build a neural-network function-approximation model to detect human age from the electrocardiogram (ECG) signal. The input vectors of the neural network are the Katz fractal dimension of the ECG signal, the frequencies in the QRS complex, sex (male or female, represented by a numeric constant) and the average of successive R-R peak distances of a particular ECG signal. The QRS complex has been detected by a short-time Fourier transform algorithm. Successive R peaks have been detected by first cutting the signal into periods using an autocorrelation method and then finding the absolute maximum in each period. The neural network used in this problem consists of two layers, with sigmoid neurons in the input layer and a linear neuron in the output layer. The results show mean errors of -0.49, 1.03 and 0.79 years and error standard deviations of 1.81, 1.77 and 2.70 years during training, cross-validation and testing with unknown data sets, respectively.
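
    The Katz fractal dimension used as an input feature can be computed directly from the waveform: it compares the curve's total path length L with the maximum distance d from the first sample. A sketch of one common formulation, treating the signal as a planar curve with unit sample spacing (a convention choice, not necessarily the paper's):

    ```python
    import math

    def katz_fd(signal):
        """Katz fractal dimension of a 1-D waveform (e.g. an ECG trace)."""
        n = len(signal) - 1  # number of steps along the curve
        # total path length: sum of distances between successive samples
        L = sum(math.hypot(1.0, signal[i + 1] - signal[i]) for i in range(n))
        # planar "diameter": farthest sample from the first one
        d = max(math.hypot(i, signal[i] - signal[0]) for i in range(1, n + 1))
        return math.log10(n) / (math.log10(n) + math.log10(d / L))

    wave = [math.sin(0.3 * i) for i in range(200)]
    print(round(katz_fd(wave), 3))  # > 1 for any curve that is not a straight line
    ```

    A straight line gives exactly 1 (since d = L), and the dimension grows as the waveform becomes more convoluted relative to its extent.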

  15. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W → qq̄, where W bosons are produced in pp̄ reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using 'intelligent' variables in instances when the number of training instances is limited. (orig.)

  16. Short-term depression and transient memory in sensory cortex.

    Science.gov (United States)

    Gillary, Grant; Heydt, Rüdiger von der; Niebur, Ernst

    2017-12-01

    Persistent neuronal activity is usually studied in the context of short-term memory localized in central cortical areas. Recent studies show that early sensory areas also can have persistent representations of stimuli which emerge quickly (over tens of milliseconds) and decay slowly (over seconds). Traditional positive feedback models cannot explain sensory persistence for at least two reasons: (i) They show attractor dynamics, with transient perturbations resulting in a quasi-permanent change of system state, whereas sensory systems return to the original state after a transient. (ii) As we show, those positive feedback models which decay to baseline lose their persistence when their recurrent connections are subject to short-term depression, a common property of excitatory connections in early sensory areas. Dual time constant network behavior has also been implemented by nonlinear afferents producing a large transient input followed by much smaller steady state input. We show that such networks require unphysiologically large onset transients to produce the rise and decay observed in sensory areas. Our study explores how memory and persistence can be implemented in another model class, derivative feedback networks. We show that these networks can operate with two vastly different time courses, changing their state quickly when new information is coming in but retaining it for a long time, and that these capabilities are robust to short-term depression. Specifically, derivative feedback networks with short-term depression that acts differentially on positive and negative feedback projections are capable of dynamically changing their time constant, thus allowing fast onset and slow decay of responses without requiring unrealistically large input transients.

  17. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    Directory of Open Access Journals (Sweden)

    Eduard eGrinke

    2015-10-01

    Full Text Available Walking animals such as insects can perform complex behaviors effectively with little neural computation. They can walk around their environment, escape from corners and deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with unknown situations. As a consequence, they successfully navigate through their complex environments. These versatile and adaptive abilities are the result of the integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that these ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle to avoid obstacles in different situations, as well as escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.
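
    Correlation-based learning with synaptic scaling, as used above, can be caricatured as a Hebbian weight update followed by a multiplicative renormalization that keeps the summed weight fixed, so correlated inputs gain weight only at the expense of uncorrelated ones. A simplified sketch (learning rate, inputs, and the fixed-sum target are invented for illustration):

    ```python
    def correlation_learning(pre, post, w, eta=0.1, w_total=1.0):
        """One step of correlation (Hebbian) learning followed by synaptic scaling."""
        w = [wi + eta * p * post for wi, p in zip(w, pre)]  # Hebbian: dw ∝ pre * post
        s = sum(w)
        return [wi * w_total / s for wi in w]               # scale weights to a fixed sum

    w = [0.5, 0.5]
    for _ in range(20):
        w = correlation_learning([1.0, 0.2], 1.0, w)  # input 1 is more correlated with output
    print([round(wi, 2) for wi in w])  # first weight grows at the expense of the second
    ```

    The scaling step is what keeps the plain Hebbian rule from growing weights without bound, stabilizing the two-neuron recurrent network's dynamics.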

  18. Inversion of a lateral log using neural networks

    International Nuclear Information System (INIS)

    Garcia, G.; Whitman, W.W.

    1992-01-01

    In this paper a technique using neural networks is demonstrated for the inversion of a lateral log. The lateral log is simulated by a finite difference method, which in turn is used as input to a backpropagation neural network. An initial-guess earth model is generated from the neural network, which is then input to a Marquardt inversion. The neural network reacts to gross and subtle data features in actual logs and produces a response inferred from the knowledge stored in the network during a training process. The neural network inversion of lateral logs is tested on synthetic and field data. Tests using field data resulted in a final earth model whose simulated lateral log is in good agreement with the actual log data

  19. Variation in Parasympathetic Dysregulation Moderates Short-term Memory Problems in Childhood Attention-Deficit/Hyperactivity Disorder.

    Science.gov (United States)

    Ward, Anthony R; Alarcón, Gabriela; Nigg, Joel T; Musser, Erica D

    2015-11-01

    Although attention deficit/hyperactivity disorder (ADHD) is associated with impairment in working memory and short-term memory, up to half of individual children with ADHD perform within a normative range. Heterogeneity in other ADHD-related mechanisms, which may compensate for or combine with cognitive weaknesses, is a likely explanation. One candidate is the robustness of parasympathetic regulation (as indexed by respiratory sinus arrhythmia; RSA). Theory and data suggest that a common neural network is likely tied to both heart-rate regulation and certain cognitive functions (including aspects of working and short-term memory). Cardiac-derived indices of parasympathetic reactivity were collected during short-term memory (STM) storage and rehearsal tasks from 243 children (116 ADHD, 127 controls). ADHD was associated with lower STM performance, replicating previous work. In addition, RSA reactivity moderated the association between STM and ADHD, both as a category and as a dimension, independent of comorbidity. Specifically, conditional effects revealed that high levels of parasympathetic withdrawal were associated with weakened STM, whereas high levels of augmentation moderated a positive association predicting ADHD. Thus, variations in parasympathetic reactivity may help explain neuropsychological heterogeneity in ADHD.

  20. Nonlinear Time Series Prediction Using Chaotic Neural Networks

    Science.gov (United States)

    Li, Ke-Ping; Chen, Tian-Lun

    2001-06-01

    When a nonlinear feedback term is introduced into the weight-update equation of the backpropagation algorithm for a neural network, the network becomes a chaotic one. To investigate how different feedback terms affect the processes of learning and forecasting, we use the model to forecast the nonlinear time series produced by the Mackey-Glass equation. By selecting a suitable feedback term, the system can escape from local minima and converge to the global minimum or its approximate solutions, and the forecasting results are better than those of the standard backpropagation algorithm. The project was supported by the National Basic Research Project "Nonlinear Science" and the National Natural Science Foundation of China under Grant No. 60074020.
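
    The Mackey-Glass benchmark series referred to above comes from the delay-differential equation dx/dt = βx(t−τ)/(1 + x(t−τ)^10) − γx(t). A minimal Euler-integration sketch, using the parameter set commonly seen in neural-network forecasting benchmarks (β = 0.2, γ = 0.1, τ = 17, unit time step; the paper's exact settings are not stated in this abstract):

    ```python
    def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, dt=1.0, x0=1.2):
        """Generate n points of the Mackey-Glass chaotic series by Euler integration."""
        hist = [x0] * (tau + 1)  # constant history for t <= 0
        out = []
        for _ in range(n):
            x, x_tau = hist[-1], hist[-(tau + 1)]   # current and delayed values
            x_new = x + dt * (beta * x_tau / (1.0 + x_tau ** 10) - gamma * x)
            hist.append(x_new)
            out.append(x_new)
        return out

    series = mackey_glass(500)
    print(len(series), min(series) > 0)  # bounded, positive, aperiodic trajectory
    ```

    For τ > 16.8 the trajectory is chaotic, which is what makes one-step-ahead forecasting of this series a meaningful test for a learning algorithm.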

  1. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural-network-based technique. Our method for constructing a collision-free path for a moving robot among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next section of the robot's path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed technique will be presented.

  2. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  3. Models of neural networks temporal aspects of coding and information processing in biological systems

    CERN Document Server

    Hemmen, J; Schulten, Klaus

    1994-01-01

    Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregatio...

  4. Artificial intelligence. Application of the Statistical Neural Networks computer program in nuclear medicine

    International Nuclear Information System (INIS)

    Stefaniak, B.; Cholewinski, W.; Tarkowska, A.

    2005-01-01

    Artificial Neural Networks (ANN) may be a tool alternative and complementary to typical statistical analysis. However, despite the many ready-to-use computer implementations of various ANN algorithms, artificial intelligence is relatively rarely applied to data processing. In this paper, practical aspects of the scientific application of ANN in medicine using the Statistical Neural Networks computer program are presented. Several steps of data analysis with the above ANN software package are discussed briefly, from material selection and its division into groups to the types of results obtained. Typical problems connected with assessing scintigrams by ANN are also described. (author)

  5. Mitigation of short-term disturbance negative impacts in the agent-based model of a production companies network

    Science.gov (United States)

    Shevchuk, G. K.; Berg, D. B.; Zvereva, O. M.; Medvedeva, M. A.

    2017-11-01

    This article is devoted to the study of the impact of a supply-chain disturbance on manufacturing volumes in a production system network. Each network agent's product can be used as a resource by other system agents (manufacturers). A supply-chain disturbance can lead to a cessation of operation of the entire network. The authors suggest short-term partial resource reservation to mitigate the negative consequences of such disturbances. An agent-based model was engineered with a reservation algorithm compatible with resource-procurement strategies under financial constraints. This model works in accordance with Leontief's static input-output model. The results can be used to choose ways of improving the system's stability and protecting it from various disturbances and imbalance.
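
    Leontief's static input-output model, which the agent-based model above follows, determines gross outputs x from the balance x = Ax + d, where A holds the technical coefficients (resource use per unit of output) and d is final demand. A sketch solving the balance by fixed-point iteration (the 2×2 coefficients and demands are toy values, not the paper's network):

    ```python
    def leontief_output(A, d, iters=200):
        """Gross outputs x solving x = A*x + d by fixed-point iteration.

        Converges when the spectral radius of A is below 1 (a productive economy).
        """
        n = len(d)
        x = list(d)
        for _ in range(iters):
            x = [d[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        return x

    A = [[0.2, 0.3],   # A[i][j]: units of good i consumed per unit of good j produced
         [0.4, 0.1]]
    d = [10.0, 5.0]    # final demand for each good
    x = leontief_output(A, d)
    print([round(v, 2) for v in x])  # → [17.5, 13.33]
    ```

    The iteration has an economic reading: each pass adds the next round of intermediate demand, so x accumulates the full chain of inter-agent resource requirements, which is exactly what a disturbance in one supply link propagates through.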

  6. Interpretable neural networks with BP-SOM

    NARCIS (Netherlands)

    Weijters, A.J.M.M.; Bosch, van den A.P.J.; Pobil, del A.P.; Mira, J.; Ali, M.

    1998-01-01

    Artificial Neural Networks (ANNs) are used successfully in industry and commerce. This is not surprising, since neural networks are especially competitive for complex tasks for which insufficient domain-specific knowledge is available. However, interpretation of models induced by ANNs is often

  7. Automatic Seismic-Event Classification with Convolutional Neural Networks.

    Science.gov (United States)

    Bueno Rodriguez, A.; Titos Luzón, M.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.

    2017-12-01

    Active volcanoes exhibit a wide range of seismic signals, providing vast amounts of unlabelled volcano-seismic data that can be analyzed through the lens of artificial intelligence. However, obtaining high-quality labelled data is time-consuming and expensive. Deep neural networks can process data in their raw form, compute high-level features and provide a better representation of the input data distribution. These systems can be deployed to classify seismic data at scale, enhance current early-warning systems and build extensive seismic catalogs. In this research, we aim to classify spectrograms from seven different seismic events registered at "Volcán de Fuego" (Colima, Mexico) during four eruptive periods. Our approach is based on convolutional neural networks (CNNs), a sub-type of deep neural networks that can exploit the grid structure of the data. Volcano-seismic signals can be mapped into a grid-like structure using the spectrogram: a representation of the temporal evolution in terms of time and frequency. Spectrograms were computed from the data using Hamming windows of 4 seconds length, 2.5 seconds overlap and 128-point FFT resolution. Results are compared to deep neural networks, random forests and SVMs. Experiments show that CNNs can exploit temporal and frequency information, attaining a classification accuracy of 93%, similar to deep networks (91%) but outperforming SVMs and random forests. These results empirically show that CNNs are powerful models for classifying a wide range of volcano-seismic signals, and achieve good generalization. Furthermore, volcano-seismic spectrograms contain useful discriminative information for the CNN, as higher layers of the network combine high-level features computed for each frequency band, helping to detect simultaneous events in time.
Being at the intersection of deep learning and geophysics, this research enables future studies of how CNNs can be used in volcano monitoring to accurately determine the detection and
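
    The spectrogram front end described in the abstract can be reproduced with windowed FFTs. The sketch below uses NumPy only and assumes a 25 Hz sampling rate (not stated in the abstract) so that a 4 s Hamming window fits within the 128-point FFT; the sine input is a stand-in for a real seismic trace:

    ```python
    import numpy as np

    def spectrogram(x, fs, win_s=4.0, overlap_s=2.5, nfft=128):
        """Hamming-windowed spectrogram: |FFT|^2 of overlapping segments."""
        nwin = int(win_s * fs)                   # samples per window
        step = nwin - int(overlap_s * fs)        # hop between windows
        win = np.hamming(nwin)
        segs = [x[i:i + nwin] * win for i in range(0, len(x) - nwin + 1, step)]
        S = np.abs(np.fft.rfft(segs, n=nfft, axis=1)) ** 2  # zero-padded to nfft
        freqs = np.fft.rfftfreq(nfft, 1 / fs)
        return freqs, S.T                        # rows: frequency bins, cols: time frames

    fs = 25.0                                    # assumed sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    x = np.sin(2 * np.pi * 5 * t)                # stand-in for a volcano-seismic trace
    f, S = spectrogram(x, fs)
    print(S.shape, f[S.mean(axis=1).argmax()])   # peak power lands near 5 Hz
    ```

    The resulting frequency-by-time grid is exactly the kind of image-like input a CNN's convolutional layers are built to exploit.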

  8. Learning in neural networks based on a generalized fluctuation theorem

    Science.gov (United States)

    Hayakawa, Takashi; Aoyagi, Toshio

    2015-11-01

    Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuation they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.

  9. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural...... network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high speed performance, but the neural network cannot be used to extract...... knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model....

  10. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  11. Dynamic artificial neural networks with affective systems.

    Directory of Open Access Journals (Sweden)

    Catherine D Schuman

    Full Text Available Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long-term potentiation (LTP) and long-term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole-balancing example and briefly discuss the effect of affective systems on network performance.

  12. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
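
    The gradual, activity-dependent inhibition that defines SFA is easy to illustrate. The following minimal sketch (a single rate neuron, not the ring network of the record above) shows activity peaking after stimulus onset and then relaxing to a lower adapted level under sustained input; all parameter values are illustrative assumptions.

```python
# Single rate neuron with spike-frequency adaptation (SFA): sustained input
# drives the rate up, and the activity-dependent adaptation variable then
# gradually pulls it back down. Parameters are illustrative, not from the paper.

def simulate_sfa(steps=2000, dt=0.01, tau_r=0.1, tau_a=1.0, g_sfa=0.5, inp=1.0):
    r, a = 0.0, 0.0                                # firing rate, adaptation variable
    trace = []
    for _ in range(steps):
        drive = inp - g_sfa * a                    # adaptation subtracts from the input
        r += dt / tau_r * (-r + max(drive, 0.0))   # rectified rate dynamics
        a += dt / tau_a * (-a + r)                 # adaptation tracks recent activity
        trace.append(r)
    return trace

trace = simulate_sfa()
peak, final = max(trace), trace[-1]
print(peak > final)   # activity is gradually inhibited under sustained input
```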

  13. Forecasting influenza-like illness dynamics for military populations using neural networks and social media.

    Directory of Open Access Journals (Sweden)

    Svitlana Volkova

    Full Text Available This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data and the state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTMs) units capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data, thus

  14. Forecasting influenza-like illness dynamics for military populations using neural networks and social media.

    Science.gov (United States)

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D

    2017-01-01

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data and the state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTMs) units capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data, thus, signals from
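
    For readers unfamiliar with the LSTM units these models are built on, here is a hedged single-cell forward pass in plain Python with random weights. It only illustrates the gating that lets such models carry a signal across time steps; it is not the paper's architecture, and the toy input sequence stands in for the real ILI/social-media features.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    # W maps the concatenated [x, h] vector to the four gate pre-activations.
    z = [x] + h
    def gate(name, squash):
        return [squash(sum(w * v for w, v in zip(row, z))) for row in W[name]]
    i = gate("i", sigmoid)           # input gate
    f = gate("f", sigmoid)           # forget gate
    o = gate("o", sigmoid)           # output gate
    g = gate("g", math.tanh)         # candidate cell update
    c_new = [fj * cj + ij * gj for fj, ij, gj, cj in zip(f, i, g, c)]
    h_new = [oj * math.tanh(cj) for oj, cj in zip(o, c_new)]
    return h_new, c_new

random.seed(0)
hidden = 4
W = {k: [[random.uniform(-0.5, 0.5) for _ in range(1 + hidden)]
         for _ in range(hidden)] for k in "ifog"}
h, c = [0.0] * hidden, [0.0] * hidden
for x in [0.1, 0.4, 0.9, 0.3]:       # a toy weekly signal, not real ILI data
    h, c = lstm_step(x, h, c, W)
print(len(h), all(-1.0 < v < 1.0 for v in h))
```

    The forget gate `f` decides how much of the cell state `c` survives each step, which is what gives LSTMs their longer memory than a plain RNN.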

  15. A fuzzy neural network for sensor signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2000-01-01

    In this work, a fuzzy neural network is used to estimate a relevant sensor signal from other sensor signals. Noise components in the input signals to the fuzzy neural network are removed through the wavelet denoising technique. Principal component analysis (PCA) is used to reduce the dimension of the input space without losing a significant amount of information. A lower dimensional input space will also usually reduce the time necessary to train a fuzzy neural network. The principal component analysis also simplifies the selection of the input signals for the fuzzy neural network. The fuzzy neural network parameters are optimized by two learning methods: a genetic algorithm is used to optimize the antecedent parameters of the fuzzy neural network, and a least-squares algorithm is used to solve for the consequent parameters. The proposed algorithm was verified through application to the pressurizer water level and hot-leg flowrate measurements in pressurized water reactors.
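
    The PCA step described above can be sketched as follows: two synthetic "sensor" signals driven by one underlying source are projected onto the leading principal component, found here by power iteration. The data and the 2-to-1 reduction are illustrative stand-ins, not reactor measurements.

```python
import random

# Two noisy sensors driven by the same hidden signal: PCA's leading component
# recovers the shared direction, so one coordinate retains most information.
random.seed(1)
n = 500
base = [random.gauss(0, 1) for _ in range(n)]
data = [(b + random.gauss(0, 0.1), 2 * b + random.gauss(0, 0.1)) for b in base]

mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
cxx = sum((x - mx) ** 2 for x, _ in data) / n
cyy = sum((y - my) ** 2 for _, y in data) / n
cxy = sum((x - mx) * (y - my) for x, y in data) / n

v = [1.0, 1.0]                       # power iteration for the leading eigenvector
for _ in range(100):
    w = [cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1]]
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = [w[0] / norm, w[1] / norm]

# the leading component should point along the (1, 2) sensor direction
ratio = v[1] / v[0]
print(round(ratio, 1))
```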

  16. Multistability in bidirectional associative memory neural networks

    International Nuclear Information System (INIS)

    Huang Gan; Cao Jinde

    2008-01-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of them are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers, respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  17. Multistability in bidirectional associative memory neural networks

    Science.gov (United States)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of them are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers, respectively. Finally, two numerical examples are presented to illustrate the validity of our results.
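
    The equilibrium count can be checked numerically on the smallest case. The hedged toy below uses n = m = 1, a saturating activation f(u) = clip(u, -1, 1), and gain-2 cross connections (all illustrative choices, not the Letter's examples): the system has 3^1 = 3 equilibria, of which 2^1 = 2 are stable, and only the stable pair is reached from a generic grid of initial states.

```python
def f(u):
    return max(-1.0, min(1.0, u))   # saturating activation

def step(x, y):
    # discretized BAM dynamics: each layer relaxes toward the other's output
    return x + 0.1 * (-x + 2.0 * f(y)), y + 0.1 * (-y + 2.0 * f(x))

attractors = set()
for x0 in (-3.0, -1.0, 1.0, 3.0):
    for y0 in (-2.5, -0.5, 0.5, 2.5):   # offset grid avoids the saddle's stable manifold
        x, y = x0, y0
        for _ in range(500):
            x, y = step(x, y)
        attractors.add((round(x, 3), round(y, 3)))

# The third equilibrium, (0, 0), is unstable: a tiny perturbation grows
# until the state reaches one of the two stable equilibria.
x, y = 1e-3, 1e-3
for _ in range(500):
    x, y = step(x, y)
print(sorted(attractors), (round(x, 3), round(y, 3)))
```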

  18. Pseudo dynamic transitional modeling of building heating energy demand using artificial neural network

    NARCIS (Netherlands)

    Paudel, S.; Elmtiri, M.; Kling, W.L.; Corre, le O.; Lacarriere, B.

    2014-01-01

    This paper presents a building heating demand prediction model with occupancy profile and operational heating power level characteristics over a short time horizon (a couple of days) using an artificial neural network. In addition, a novel pseudo dynamic transitional model is introduced, which consider

  19. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we supervisedly train neural networks to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show a remarkable success that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
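
    The discrete winding number formula mentioned above can be written down directly. The sketch below accumulates the angle swept by the Hamiltonian vector h(k) of an SSH-like chiral model around the Brillouin zone; the model and parameter values are standard textbook choices, not taken from the Letter.

```python
import math

# Winding number of h(k) = (t1 + t2*cos k, t2*sin k) around the origin:
# sum the wrapped phase increments over a discretized Brillouin zone.
def winding_number(t1, t2, n_k=400):
    total = 0.0
    prev = math.atan2(0.0, t1 + t2)
    for j in range(1, n_k + 1):
        k = 2 * math.pi * j / n_k
        ang = math.atan2(t2 * math.sin(k), t1 + t2 * math.cos(k))
        d = ang - prev
        while d <= -math.pi:     # wrap the phase increment into (-pi, pi]
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
        prev = ang
    return round(total / (2 * math.pi))

print(winding_number(1.0, 0.5), winding_number(0.5, 1.0))  # trivial vs topological
```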

  20. Market data analysis and short-term price forecasting in the Iran electricity market with pay-as-bid payment mechanism

    International Nuclear Information System (INIS)

    Bigdeli, N.; Afshar, K.; Amjady, N.

    2009-01-01

    Market data analysis and short-term price forecasting in the Iran electricity market, a market with a pay-as-bid payment mechanism, are considered in this paper. The data analysis procedure includes both correlation and predictability analysis of the most important load and price indices. The employed data are experimental time series from the Iran electricity market at its real size, and are long enough to make it possible to take properties such as the non-stationarity of the market into account. For predictability analysis, the bifurcation diagrams and recurrence plots of the data have been investigated. The results of these analyses indicate the existence of deterministic chaos in addition to the non-stationarity of the system, which implies short-term predictability. In the next step, two artificial neural networks have been developed for forecasting the two price indices in Iran's electricity market. The models' input sets are selected with regard to four aspects: the correlation properties of the available data, the characteristics of Iran's electricity market, a proper convergence rate in case of sudden variations in the market price behavior, and the omission of cumulative forecasting errors. The simulation results based on experimental data from the Iran electricity market demonstrate the good performance of the developed neural networks in tracking and forecasting the market behavior, even in the case of severe volatility in the market price indices. (author)
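
    A recurrence plot of the kind used in the predictability analysis can be computed in a few lines: mark the time pairs (i, j) whose delay-embedded states lie within a threshold. The series, embedding, and threshold below are synthetic stand-ins for the market data.

```python
import math

# Recurrence plot of a deterministic signal: delay-embed, then mark close pairs.
series = [math.sin(0.3 * t) for t in range(200)]   # synthetic stand-in series
delay, eps = 5, 0.1

states = [(series[t], series[t + delay]) for t in range(len(series) - delay)]
n = len(states)
recur = [[1 if math.dist(states[i], states[j]) < eps else 0
          for j in range(n)] for i in range(n)]

density = sum(map(sum, recur)) / (n * n)
diagonal = all(recur[i][i] == 1 for i in range(n))   # every state recurs with itself
print(diagonal, 0.0 < density < 1.0)
```

    A structured pattern of diagonal lines (rather than uniform speckle) is the recurrence-plot signature of deterministic, short-term-predictable dynamics.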

  1. Static Voltage Stability Analysis by Using SVM and Neural Network

    Directory of Open Access Journals (Sweden)

    Mehdi Hajian

    2013-01-01

    Full Text Available Voltage stability is an important problem in power system networks. In this paper, static voltage stability is addressed, and the application of Neural Networks (NN) and Support Vector Machines (SVM) for estimating the voltage stability margin (VSM) and predicting voltage collapse has been investigated. This paper considers voltage stability in the power system in two parts. The first part calculates the static voltage stability margin by a Radial Basis Function Neural Network (RBFNN). The advantage of the used method is its high accuracy in online detection of the VSM. In the second part, voltage collapse analysis of the power system is performed by a Probabilistic Neural Network (PNN) and SVM. The results obtained in this paper indicate that the training time and the number of training samples of the SVM are less than for the NN. In this paper, a new model of training samples for the detection system, using the normal distribution load curve at each load feeder, has been used. Voltage stability is estimated by the well-known L and VSM indices. To demonstrate the validity of the proposed methods, the IEEE 14-bus grid and the actual network of Yazd Province are used.

  2. A step beyond local observations with a dialog aware bidirectional GRU network for Spoken Language Understanding

    OpenAIRE

    Vukotic , Vedran; Raymond , Christian; Gravier , Guillaume

    2016-01-01

    International audience; Architectures of Recurrent Neural Networks (RNN) have recently become a very popular choice for Spoken Language Understanding (SLU) problems; however, they represent a big family of different architectures that can furthermore be combined to form more complex neural networks. In this work, we compare different recurrent networks, such as simple Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU) and their bidirectional versions,...

  3. Particle identification with neural networks using a rotational invariant moment representation

    International Nuclear Information System (INIS)

    Sinkus, R.

    1997-01-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions, which are invariant under rotation. The multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies. (orig.)
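
    The rotation invariance that motivates the Zernike-moment representation can be demonstrated with a toy angular moment: its magnitude is unchanged when the whole "shower" is rotated. The cells below are made-up (r, theta, energy) samples, not calorimeter data, and the moment omits the full Zernike radial polynomial for brevity.

```python
import cmath

def moment_magnitude(cells, m):
    # cells: list of (r, theta, energy); angular moment of order m
    return abs(sum(e * r ** m * cmath.exp(-1j * m * th) for r, th, e in cells))

cells = [(0.2, 0.1, 1.0), (0.5, 1.3, 2.0), (0.8, 2.9, 0.7), (0.4, 4.2, 1.5)]
alpha = 0.77                                   # arbitrary rotation angle
rotated = [(r, th + alpha, e) for r, th, e in cells]

for m in (1, 2, 3):
    # rotating the shower multiplies the moment by exp(-i*m*alpha),
    # so its magnitude is unchanged
    print(m, abs(moment_magnitude(cells, m) - moment_magnitude(rotated, m)) < 1e-9)
```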

  4. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    A hybrid of the two called the Elman-Jordan (or Multi-recurrent) neural network is also being used. In this study, we evaluated the performance of these neural networks on three established benchmark time series prediction problems. Results from the experiments showed that the Jordan neural network performed significantly ...

  5. Quantum neural networks: Current status and prospects for development

    Science.gov (United States)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.

  6. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost nonlinear neural networks provide us with improved reconstruction and prediction capabilities for the near wall velocity fields. Based on these results advantages and drawbacks of both approaches are discussed with an outlook toward the development of near wall models for turbulence modeling and control

  7. Application of neural networks in CRM systems

    Directory of Open Access Journals (Sweden)

    Bojanowska Agnieszka

    2017-01-01

    Full Text Available The central aim of this study is to investigate how to apply artificial neural networks in Customer Relationship Management (CRM). The paper presents several business applications of neural networks in software systems designed to aid CRM, e.g. in deciding on the profitability of building a relationship with a given customer. Furthermore, a framework for a neural-network based CRM software tool is developed. Building beneficial relationships with customers is generating considerable interest among various businesses, and is often mentioned as one of the crucial objectives of enterprises, next to their key aim: to bring satisfactory profit. There is a growing tendency among businesses to invest in CRM systems, which together with an organisational culture of a company aid managing customer relationships. It is the sheer amount of gathered data as well as the need for constant updating and analysis of this breadth of information that may imply the suitability of neural networks for the application in question. Neural networks exhibit considerably higher computational capabilities than sequential calculations because the solution to a problem is obtained without the need for developing a special algorithm. In the majority of the CRM applications presented, neural networks serve as a managerial decision-making optimisation tool.

  8. Quantum Entanglement in Neural Network States

    Directory of Open Access Journals (Sweden)

    Dong-Ling Deng

    2017-05-01

    Full Text Available Machine learning, one of today’s most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling, while at the same time strongly deviating from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and exhibits Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum.

  9. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  10. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.
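
    The flavor of the main result, that attractor stability can be read off a low-order linearization, can be shown on a one-unit recurrent system x' = -x + w*tanh(x). This is only an illustration, not the paper's mean-field theory: the fixed point is stable when the linearized coefficient -1 + w*(1 - tanh(x*)^2) is negative; the w values are arbitrary.

```python
import math

def fixed_point_and_slope(w):
    x = 2.0
    for _ in range(200):                 # fixed-point iteration x = w * tanh(x)
        x = w * math.tanh(x)
    # coefficient of the linearization of x' = -x + w*tanh(x) at x*
    slope = -1.0 + w * (1.0 - math.tanh(x) ** 2)
    return x, slope

for w in (0.5, 2.0):
    x_star, slope = fixed_point_and_slope(w)
    print(w, round(x_star, 3), slope < 0.0)
```

    For weak feedback (w = 0.5) the only fixed point is the origin; for strong feedback (w = 2.0) a nonzero attractor appears, and in both cases the negative linearized coefficient certifies stability.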

  11. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    Science.gov (United States)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles.
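
    The online correlation-based learning with synaptic scaling can be caricatured in a few lines. In the hedged sketch below (a single linear neuron, not the robot's two-neuron sensory network), the Hebbian term alone would grow the weight without bound, while the scaling term keeps it at a finite value; all rates are illustrative assumptions.

```python
import random

# Correlation-based (Hebbian) learning bounded by synaptic scaling.
random.seed(2)
w = 0.1                      # synaptic weight
eta, scale = 0.05, 0.02      # Hebbian rate and scaling rate (illustrative)
for _ in range(2000):
    pre = random.uniform(0.0, 1.0)
    post = w * pre                      # simple linear neuron
    w += eta * pre * post               # Hebbian: pre/post correlation (grows w)
    w += scale * w * (1.0 - post)       # scaling: pulls output toward a target
print(round(w, 2), 1.0 < w < 8.0)
```

    Without the scaling line, the weight is multiplied by a factor greater than one on every step and diverges; with it, the two terms balance at a finite weight.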

  12. Using deep recurrent neural network for direct beam solar irradiance cloud screening

    Science.gov (United States)

    Chen, Maosi; Davis, John M.; Liu, Chaoshun; Sun, Zhibin; Zempila, Melina Maria; Gao, Wei

    2017-09-01

    Cloud screening is an essential procedure for in-situ calibration and atmospheric property retrieval with the (UV-)MultiFilter Rotating Shadowband Radiometer [(UV-)MFRSR]. A previous study explored a cloud screening algorithm for direct-beam (UV-)MFRSR voltage measurements based on a stability assumption over a long time period (typically a half day or a whole day). Designing such an algorithm requires in-depth understanding of radiative transfer and delicate data manipulation. Recent rapid developments in deep neural networks and computation hardware have opened a window for modeling complicated end-to-end systems with a standardized strategy. In this study, a multi-layer dynamic bidirectional recurrent neural network is built for determining the cloudiness at each time point with a 17-year training dataset and tested with another 1-year dataset. The dataset consists of the daily 3-minute cosine-corrected voltages, airmasses, and the corresponding cloud/clear-sky labels at two stations of the USDA UV-B Monitoring and Research Program. The results show that the optimized neural network model (3 layers, 250 hidden units, and 80 epochs of training) has an overall test accuracy of 97.87% (97.56% for the Oklahoma site and 98.16% for the Hawaii site). Generally, the neural network model grasps the key concept of the original model to use data from the entire day, rather than short nearby measurements, to perform cloud screening. A scrutiny of the logits layer suggests that the neural network model automatically learns a way to calculate a quantity similar to total optical depth and finds an appropriate threshold for cloud screening.

  13. Semantic and phonological contributions to short-term repetition and long-term cued sentence recall.

    Science.gov (United States)

    Meltzer, Jed A; Rose, Nathan S; Deschamps, Tiffany; Leigh, Rosie C; Panamsky, Lilia; Silberberg, Alexandra; Madani, Noushin; Links, Kira A

    2016-02-01

    The function of verbal short-term memory is supported not only by the phonological loop, but also by semantic resources that may operate on both short and long time scales. Elucidation of the neural underpinnings of these mechanisms requires effective behavioral manipulations that can selectively engage them. We developed a novel cued sentence recall paradigm to assess the effects of two factors on sentence recall accuracy at short-term and long-term stages. Participants initially repeated auditory sentences immediately following a 14-s retention period. After this task was complete, long-term memory for each sentence was probed by a two-word recall cue. The sentences were either concrete (high imageability) or abstract (low imageability), and the initial 14-s retention period was filled with either an undemanding finger-tapping task or a more engaging articulatory suppression task (Exp. 1, counting backward by threes; Exp. 2, repeating a four-syllable nonword). Recall was always better for the concrete sentences. Articulatory suppression reduced accuracy in short-term recall, especially for abstract sentences, but the sentences initially recalled following articulatory suppression were retained better at the subsequent cued-recall test, suggesting that the engagement of semantic mechanisms for short-term retention promoted encoding of the sentence meaning into long-term memory. These results provide a basis for using sentence imageability and subsequent memory performance as probes of semantic engagement in short-term memory for sentences.

  14. Mode Choice Modeling Using Artificial Neural Networks

    OpenAIRE

    Edara, Praveen Kumar

    2003-01-01

    Artificial intelligence techniques have produced excellent results in many diverse fields of engineering. Techniques such as neural networks and fuzzy systems have found their way into transportation engineering. In recent years, neural networks are being used instead of regression techniques for travel demand forecasting purposes. The basic reason lies in the fact that neural networks are able to capture complex relationships and learn from examples and also able to adapt when new data becom...

  15. Neutron spectrometry with artificial neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A.; Iniguez de la Torre Bayo, M.P.; Barquero, R.; Arteaga A, T.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors; spectra from mathematical functions; as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
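
    The unfolding problem the network replaces can be stated compactly: count rates are the response matrix applied to the spectrum, c = R s, and unfolding inverts this. The toy below uses an invented, well-conditioned 3x3 response matrix; the real UTA4-based problem is much larger and ill-conditioned, which is what motivates the neural approach.

```python
# Forward model: count rates c = R s; unfolding solves the linear system for s.
# R and the 3-group spectrum are invented for illustration (not the UTA4 matrix).
R = [[0.9, 0.3, 0.1],
     [0.4, 0.8, 0.3],
     [0.1, 0.4, 0.9]]
spectrum = [1.0, 2.0, 0.5]                      # "true" 3-group spectrum
counts = [sum(R[i][j] * spectrum[j] for j in range(3)) for i in range(3)]

def solve3(A, b):
    # Gaussian elimination with partial pivoting on a 3x3 system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            M[r] = [a - fac * b_ for a, b_ in zip(M[r], M[col])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

unfolded = solve3(R, counts)
print([round(v, 6) for v in unfolded])   # recovers the input spectrum
```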

  16. Short-term plasticity as a neural mechanism supporting memory and attentional functions

    OpenAIRE

    Jääskeläinen, Iiro P.; Ahveninen, Jyrki; Andermann, Mark L.; Belliveau, John W.; Raij, Tommi; Sams, Mikko

    2011-01-01

    Based on behavioral studies, several relatively distinct perceptual and cognitive functions have been defined in cognitive psychology such as sensory memory, short-term memory, and selective attention. Here, we review evidence suggesting that some of these functions may be supported by shared underlying neuronal mechanisms. Specifically, we present, based on an integrative review of the literature, a hypothetical model wherein short-term plasticity, in the form of transient center-excitatory ...

  17. Short-term memory of TiO2-based electrochemical capacitors: empirical analysis with adoption of a sliding threshold

    International Nuclear Information System (INIS)

    Lim, Hyungkwang; Kim, Inho; Kim, Jin-Sang; Jeong, Doo Seok; Seong Hwang, Cheol

    2013-01-01

    Chemical synapses are important components of the large-scale neural network in the hippocampus of the mammalian brain, and changes in their weights are thought to underlie learning and memory. Thus, the realization of artificial chemical synapses is of crucial importance in achieving artificial neural networks that emulate the brain's functionalities to some extent; this kind of research is often referred to as neuromorphic engineering. In this study, we report short-term memory behaviours of electrochemical capacitors (ECs) utilizing a TiO2 mixed ionic-electronic conductor and various reactive electrode materials, e.g. Ti, Ni, and Cr. The experiments showed that the potentiation behaviours did not represent unlimited growth of synaptic weight. Instead, they exhibited limited synaptic weight growth that can be understood by means of an empirical equation similar to the Bienenstock-Cooper-Munro rule, employing a sliding threshold. The observed potentiation behaviours were analysed using this empirical equation and the differences between the ECs were parameterized. (paper)
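    The "limited growth with a sliding threshold" the record describes can be illustrated with a minimal BCM-style update. The rule and all numbers here are generic textbook choices, not the paper's fitted empirical equation:

```python
# BCM-style update: the threshold theta slides toward the running <y^2>,
# so potentiation stalls instead of growing without bound.
eta, tau = 0.02, 5.0
w, theta = 0.5, 0.0          # synaptic weight and sliding threshold
x = 1.0                      # constant presynaptic drive
history = []
for _ in range(3000):
    y = w * x                          # postsynaptic response
    theta += (y ** 2 - theta) / tau    # threshold slides toward <y^2>
    w += eta * x * y * (y - theta)     # growth stops once theta catches up
    history.append(w)
# w saturates (near 1 with these toy numbers) rather than diverging
```

With a fixed threshold the same loop would diverge; the sliding threshold is what bounds the weight, which is the qualitative behaviour the paper reports for its ECs.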

  18. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)]; and others

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  19. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain neutron doses using only the count rates of a Bonner spheres spectrometer. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, in which the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  20. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A.; Gallego, E.; Lorente, A.

    2005-01-01

    An artificial neural network has been designed to obtain neutron doses using only the count rates of a Bonner spheres spectrometer. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, in which the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
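    The construction of one training pair in this dosimetry scheme can be sketched in a few lines: a response matrix maps a group-fluence spectrum to count rates (the network's input), while fluence-to-dose coefficients give the dose (the target). All numbers are toy values, not the UTA4 matrix or real coefficients:

```python
# Toy example with 3 energy groups and 2 spheres.
fluence = [1.0, 2.0, 0.5]                    # group fluences (toy)
response = [[0.2, 0.1, 0.0],                 # 2 spheres x 3 groups (toy)
            [0.1, 0.3, 0.2]]
dose_coeff = [0.1, 0.2, 0.4]                 # fluence-to-dose coeffs (toy)

# Count rates: response matrix applied to the fluence spectrum.
count_rates = [sum(r * f for r, f in zip(row, fluence)) for row in response]
# Dose: weighted sum of group fluences and dose coefficients.
dose = sum(c * f for c, f in zip(dose_coeff, fluence))
# (count_rates, dose) would form one input/output pair for training
```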

  1. Comparison of Back propagation neural network and Back propagation neural network Based Particle Swarm intelligence in Diagnostic Breast Cancer

    Directory of Open Access Journals (Sweden)

    Farahnaz SADOUGHI

    2014-03-01

    Full Text Available Breast cancer is the most commonly diagnosed cancer and a leading cause of cancer death in women worldwide. Use of computer technology to support breast cancer diagnosis is now widespread and pervasive across a broad range of medical areas. Early diagnosis of this disease can greatly enhance the chances of long-term survival of breast cancer victims. Artificial Neural Networks (ANNs), as a primary method, play an important role in the early diagnosis of breast cancer. This paper studies a Levenberg-Marquardt backpropagation (LMBP) neural network and Levenberg-Marquardt backpropagation based Particle Swarm Optimization (LMBP-PSO) for the diagnosis of breast cancer. The obtained results show that both the LMBP and the LMBP-PSO systems provide high classification efficiency, but LMBP-PSO requires the least training and testing time. This helps in developing a Medical Decision System (MDS) for breast cancer diagnosis, which can also be used as a secondary observer in clinical decision making.

  2. Kalman-fuzzy algorithm in short term load forecasting

    International Nuclear Information System (INIS)

    Shah Baki, S.R.; Saibon, H.; Lo, K.L.

    1996-01-01

    A combination of Kalman filtering, fuzzy logic and neural networks is developed to forecast the next 24 hours' load. The input data fed to the neural network consist of a training data set composed of historical load data, weather, day of the week, month of the year and holidays. The load data are passed through the Kalman-fuzzy filter before being applied to the neural network for training. With this technique the neural network converges faster and the mean percentage error of the predicted load is reduced compared to the classical ANN technique
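    The filtering stage can be illustrated with a scalar Kalman filter smoothing noisy historical load before it reaches the network. The noise model and gains below are illustrative assumptions, not the paper's filter:

```python
import random

random.seed(2)

def kalman_smooth(zs, q=1e-4, r=0.5):
    """Scalar Kalman filter: state x, variance p, process/measurement
    noise q and r. Returns the filtered estimate for each measurement."""
    x, p = zs[0], 1.0
    out = []
    for z in zs:
        p += q                      # predict step
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct with measurement z
        p *= (1 - k)
        out.append(x)
    return out

# Noisy stand-in for a flat 100 MW historical load series.
raw = [100.0 + random.gauss(0, 5) for _ in range(200)]
smoothed = kalman_smooth(raw)
```

The smoothed series has much lower variance than the raw one, which is the property that lets the downstream network converge faster.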

  3. Neural-network-based depth computation for blind navigation

    Science.gov (United States)

    Wong, Farrah; Nagarajan, Ramachandran R.; Yaacob, Sazali

    2004-12-01

    Research undertaken to help blind people navigate autonomously or with minimum assistance is termed "blind navigation". In this research, an aid that could help blind people in their navigation is proposed. Distance serves as an important clue during navigation. A stereovision navigation aid, implemented with two digital video cameras that are spaced apart and fixed on a headgear to obtain distance information, is presented. In this paper, a neural network methodology is used to obtain the required parameters of the cameras, a process known as camera calibration. These parameters are not known a priori but are obtained by adjusting the weights in the network. The inputs to the network consist of the matching features in the stereo image pair. A back-propagation network with 16 input neurons, 3 hidden neurons and 1 output neuron, which gives depth, is created. The distance information is incorporated into the final processed image as four gray levels: white, light gray, dark gray and black. Preliminary results have shown that the percentage errors fall below 10%. It is envisaged that the distance provided by the neural network will enable blind individuals to approach and pick up an object of interest.
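    The 16-3-1 topology and the four-gray-level depth coding can be sketched as follows. The weights are random (untrained) and the depth thresholds are invented for illustration; only the topology and the quantization idea come from the record:

```python
import math, random

random.seed(0)

def forward(x, w1, b1, w2, b2):
    """16-3-1 back-propagation topology: tanh hidden layer, linear output."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(col, x)) + b)
              for col, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2   # depth estimate

w1 = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(3)]
b1 = [0.0] * 3
w2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 2.0

def gray_level(depth_m):
    """Quantize depth into the paper's four shades (thresholds made up)."""
    if depth_m < 1.0:
        return "white"          # nearest objects
    if depth_m < 2.0:
        return "light gray"
    if depth_m < 4.0:
        return "dark gray"
    return "black"              # farthest / background

features = [random.uniform(0, 1) for _ in range(16)]  # matched stereo features
depth = forward(features, w1, b1, w2, b2)
shade = gray_level(depth)
```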

  4. Artificial neural networks for plasma spectroscopy analysis

    International Nuclear Information System (INIS)

    Morgan, W.L.; Larsen, J.T.; Goldstein, W.H.

    1992-01-01

    Artificial neural networks have been applied to a variety of signal processing and image recognition problems. Of the several common neural models the feed-forward, back-propagation network is well suited for the analysis of scientific laboratory data, which can be viewed as a pattern recognition problem. The authors present a discussion of the basic neural network concepts and illustrate its potential for analysis of experiments by applying it to the spectra of laser produced plasmas in order to obtain estimates of electron temperatures and densities. Although these are high temperature and density plasmas, the neural network technique may be of interest in the analysis of the low temperature and density plasmas characteristic of experiments and devices in gaseous electronics

  5. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  6. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed
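    The "conventional track fit" the chip's outputs were compared against is an ordinary least-squares straight-line fit to the per-layer hit positions. A minimal sketch, with made-up layer geometry:

```python
def fit_track(layer_z, hit_x):
    """Least-squares slope and intercept of a straight track through
    hits at positions hit_x measured in layers at coordinates layer_z."""
    n = len(layer_z)
    zbar = sum(layer_z) / n
    xbar = sum(hit_x) / n
    slope = (sum((z - zbar) * (x - xbar) for z, x in zip(layer_z, hit_x))
             / sum((z - zbar) ** 2 for z in layer_z))
    intercept = xbar - slope * zbar
    return slope, intercept
```

The trained network approximates this mapping from (drift-time-derived) positions to (slope, intercept) in a single analog pass, which is what makes it attractive for triggers.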

  7. Global existence of periodic solutions of BAM neural networks with variable coefficients

    International Nuclear Information System (INIS)

    Guo Shangjiang; Huang Lihong; Dai Binxiang; Zhang Zhongzhi

    2003-01-01

    In this Letter, we study BAM (bidirectional associative memory) networks with variable coefficients. By some spectral theorems and a continuation theorem based on coincidence degree, we not only obtain new sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the periodic solution but also estimate the exponential convergence rate. Our results are less restrictive than previously known criteria and can be applied to neural networks with a broad range of activation functions, assuming neither differentiability nor strict monotonicity. Moreover, these conclusions are presented in terms of system parameters and can be easily verified when the activation functions are globally Lipschitz and the spectral radius is less than 1. Therefore, our results should be useful in the design and application of periodic oscillatory neural circuits for neural networks with delays

  8. A novel neural-wavelet approach for process diagnostics and complex system modeling

    Science.gov (United States)

    Gao, Rong

    Neural networks have been effective in several engineering applications because of their learning abilities and robustness. However, certain shortcomings, such as slow convergence and local minima, are always associated with neural networks, especially when they are applied to highly nonlinear and non-stationary problems. These problems can be effectively alleviated by integrating a powerful new tool, wavelets, into conventional neural networks. The multi-resolution analysis and feature localization capabilities of the wavelet transform offer neural networks new possibilities for learning. The neural-wavelet network approach developed in this thesis enjoys a fast convergence rate with little chance of being caught in a local minimum; it combines the localization properties of wavelets with the learning abilities of neural networks. Two different testbeds are used to test the efficiency of the new approach. The first is magnetic flowmeter-based process diagnostics: here we extend previous work, which demonstrated that wavelet groups contain process information, to more general process diagnostics. A loop at the Applied Intelligent Systems Lab (AISL) is used for collecting and analyzing data through the neural-wavelet approach. This research is important for thermal-hydraulic processes in nuclear and other engineering fields. The neural-wavelet approach is also tested with data from the electric power grid; more specifically, it is used to perform short-term and mid-term prediction of power load demand. In addition, the feasibility of determining the type of load using the proposed neural-wavelet approach is examined. The notion of the cross scale product has been developed as an expedient yet reliable discriminator of loads. Theoretical issues involved in the integration of wavelets and neural networks are discussed and future work outlined.
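    The cross-scale idea can be illustrated with a Haar-style decomposition: detail coefficients are multiplied across adjacent scales, so the product is large only where a sharp feature (e.g. an abrupt load change) shows up at both scales. The pairing used for the product below is our illustrative definition, not the thesis's exact cross scale product:

```python
def haar_step(x):
    """One Haar-style stage: pairwise averages (coarse) and differences
    (detail) of an even-length sequence."""
    averages = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    details = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return averages, details

signal = [1, 1, 1, 5, 5, 5, 5, 5]      # a step: an abrupt load change
a1, d1 = haar_step(signal)             # scale-1 details
a2, d2 = haar_step(a1)                 # scale-2 details
# Cross-scale product: each scale-2 detail times the two scale-1 details
# it covers; large only where an edge is seen at both scales.
csp = [d1[2 * i] * d2[i] + d1[2 * i + 1] * d2[i] for i in range(len(d2))]
```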

  9. Inverting radiometric measurements with a neural network

    Science.gov (United States)

    Measure, Edward M.; Yee, Young P.; Balding, Jeff M.; Watkins, Wendell R.

    1992-02-01

    A neural network scheme for retrieving remotely sensed vertical temperature profiles was applied to observed ground based radiometer measurements. The neural network used microwave radiance measurements and surface measurements of temperature and pressure as inputs. Because the microwave radiometer is capable of measuring 4 oxygen channels at 5 different elevation angles (9, 15, 25, 40, and 90 degs), 20 microwave measurements are potentially available. Because these measurements have considerable redundancy, a neural network was experimented with, accepting as inputs microwave measurements taken at 53.88 GHz, 40 deg; 57.45 GHz, 40 deg; and 57.45, 90 deg. The primary test site was located at White Sands Missile Range (WSMR), NM. Results are compared with measurements made simultaneously with balloon borne radiosonde instruments and with radiometric temperature retrievals made using more conventional retrieval algorithms. The neural network was trained using a Widrow-Hoff delta rule procedure. Functions of date to include season dependence in the retrieval process and functions of time to include diurnal effects were used as inputs to the neural network.
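    The Widrow-Hoff delta rule named in this record is the least-mean-squares update w ← w + lr·(target − w·x)·x. A minimal sketch recovering a toy 3-weight linear map from noiseless input/target pairs (the data are invented; the retrieval network itself mapped radiances to temperature profiles):

```python
import random

random.seed(0)

true_w = [2.0, -1.0, 0.5]    # the linear map to be recovered (toy)
w = [0.0, 0.0, 0.0]
lr = 0.05
for _ in range(2000):
    x = [random.uniform(-1.0, 1.0) for _ in range(3)]
    target = sum(ti * xi for ti, xi in zip(true_w, x))   # teacher output
    y = sum(wi * xi for wi, xi in zip(w, x))             # current output
    # Widrow-Hoff delta rule: step each weight along the error gradient.
    w = [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]
```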

  10. Evaluating Lyapunov exponent spectra with neural networks

    International Nuclear Information System (INIS)

    Maus, A.; Sprott, J.C.

    2013-01-01

    Highlights: • Cross-correlation is employed to remove spurious Lyapunov exponents from a spectrum. • Neural networks are shown to accurately model Lyapunov exponent spectra. • Neural networks compare favorably to local linear fits in modeling Lyapunov exponents. • Numerical experiments are performed with time series of varying length and noise. • Methods perform reasonably well on discrete time series. -- Abstract: A method using discrete cross-correlation for identifying and removing spurious Lyapunov exponents when embedding experimental data in a dimension greater than the original system is introduced. The method uses a distribution of calculated exponent values produced by modeling a single time series many times or multiple instances of a time series. For this task, global models are shown to compare favorably to local models traditionally used for time series taken from the Hénon map and delayed Hénon map, especially when the time series are short or contaminated by noise. An additional merit of global modeling is its ability to estimate the dynamical and geometrical properties of the original system, such as the attractor dimension, entropy, and lag space, although the time required to train the global models must be taken into account

  11. Stochastic stability analysis for delayed neural networks of neutral type with Markovian jump parameters

    International Nuclear Information System (INIS)

    Lou Xuyang; Cui Baotong

    2009-01-01

    In this paper, the problem of stochastic stability for a class of delayed neural networks of neutral type with Markovian jump parameters is investigated. The jumping parameters are modelled as a continuous-time, discrete-state Markov process. A sufficient condition guaranteeing the stochastic stability of the equilibrium point is derived for the Markovian jumping delayed neural networks (MJDNNs) of neutral type. The stability criterion not only eliminates the differences between excitatory and inhibitory effects on the neural networks, but can also be conveniently checked. The sufficient condition obtained can essentially be solved in terms of a linear matrix inequality. A numerical example is given to show the effectiveness of the obtained results.

  12. A TLD dose algorithm using artificial neural networks

    International Nuclear Information System (INIS)

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-01-01

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of a functional link network (FLN). A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input) many times. The algorithm, trained in this way, is eventually capable of producing its own unique solution to similar (but not exactly the same) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters
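    The defining feature of a functional link network is that the input vector is expanded with fixed nonlinear terms, so a single trainable layer can represent nonlinear input/output relationships. The specific expansion terms below are a common textbook choice, not the actual FLN used for the Type 8825 algorithm:

```python
import math

def functional_link(x):
    """Expand raw element responses with fixed nonlinear terms."""
    expanded = list(x)                                       # raw responses
    # All pairwise products, including squares.
    expanded += [xi * xj for i, xi in enumerate(x) for xj in x[i:]]
    # Periodic terms, another common functional-link choice.
    expanded += [math.sin(math.pi * xi) for xi in x]
    return expanded
```

The expanded vector, rather than the raw TL readings, is then fed to a single trainable layer.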

  13. Artificial Astrocytes Improve Neural Network Performance

    Science.gov (United States)

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analyses of the performance of NN with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  14. Artificial astrocytes improve neural network performance.

    Directory of Open Access Journals (Sweden)

    Ana B Porto-Pazos

    Full Text Available Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analyses of the performance of NN with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  15. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analyses of the performance of NN with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  16. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I did not manage to get the calorimeter data and I just used the jet data from miniAODSIM (ak4 CHS). The jet data were not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on non-physics data such as MNIST digits and the CIFAR-10 dataset, and part two is about the jet data.
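    The core operation of the convolutional layers used in such work is a 2-D "valid" convolution (really a cross-correlation, as in most deep-learning libraries). A minimal, framework-free sketch:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a nested-list image with a
    nested-list kernel; returns the resulting feature map."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # Sum of the elementwise product over the current window.
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out
```

A real CNN stacks many such filters with nonlinearities and pooling; this is just the sliding-window primitive.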

  17. Enhancement of signal sensitivity in a heterogeneous neural network refined from synaptic plasticity

    Energy Technology Data Exchange (ETDEWEB)

    Li Xiumin; Small, Michael, E-mail: ensmall@polyu.edu.h, E-mail: 07901216r@eie.polyu.edu.h [Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong)

    2010-08-15

    Long-term synaptic plasticity induced by neural activity is of great importance in informing the formation of neural connectivity and the development of the nervous system. It is reasonable to consider self-organized neural networks instead of prior imposition of a specific topology. In this paper, we propose a novel network evolved from two stages of the learning process, which are respectively guided by two experimentally observed synaptic plasticity rules, i.e. the spike-timing-dependent plasticity (STDP) mechanism and the burst-timing-dependent plasticity (BTDP) mechanism. Due to the existence of heterogeneity in neurons that exhibit different degrees of excitability, a two-level hierarchical structure is obtained after the synaptic refinement. This self-organized network shows higher sensitivity to afferent current injection compared with alternative archetypal networks with different neural connectivity. Statistical analysis also demonstrates that it has the small-world properties of small shortest path length and high clustering coefficients. Thus the selectively refined connectivity enhances the ability of neuronal communications and improves the efficiency of signal transmission in the network.

  18. Enhancement of signal sensitivity in a heterogeneous neural network refined from synaptic plasticity

    International Nuclear Information System (INIS)

    Li Xiumin; Small, Michael

    2010-01-01

    Long-term synaptic plasticity induced by neural activity is of great importance in informing the formation of neural connectivity and the development of the nervous system. It is reasonable to consider self-organized neural networks instead of prior imposition of a specific topology. In this paper, we propose a novel network evolved from two stages of the learning process, which are respectively guided by two experimentally observed synaptic plasticity rules, i.e. the spike-timing-dependent plasticity (STDP) mechanism and the burst-timing-dependent plasticity (BTDP) mechanism. Due to the existence of heterogeneity in neurons that exhibit different degrees of excitability, a two-level hierarchical structure is obtained after the synaptic refinement. This self-organized network shows higher sensitivity to afferent current injection compared with alternative archetypal networks with different neural connectivity. Statistical analysis also demonstrates that it has the small-world properties of small shortest path length and high clustering coefficients. Thus the selectively refined connectivity enhances the ability of neuronal communications and improves the efficiency of signal transmission in the network.
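    The STDP mechanism named in these two records is commonly modelled with the canonical exponential window; the parameter values below are typical textbook choices, not the ones used in the paper:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a spike pair with dt_ms = t_post - t_pre.
    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses,
    both decaying exponentially with the timing difference."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # LTP branch
    return -a_minus * math.exp(dt_ms / tau_ms)       # LTD branch
```

Applied over many spike pairs, this rule is what drives the synaptic refinement stage described above; the BTDP stage uses burst timing instead of single-spike timing.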

  19. Exponential stability of uncertain stochastic neural networks with mixed time-delays

    International Nuclear Information System (INIS)

    Wang Zidong; Lauria, Stanislao; Fang Jian'an; Liu Xiaohui

    2007-01-01

    This paper is concerned with the global exponential stability analysis problem for a class of stochastic neural networks with mixed time-delays and parameter uncertainties. The mixed delays comprise discrete and distributed time-delays, the parameter uncertainties are norm-bounded, and the neural networks are subjected to stochastic disturbances described in terms of a Brownian motion. The purpose of the stability analysis problem is to derive easy-to-test criteria under which the delayed stochastic neural network is globally, robustly, exponentially stable in the mean square for all admissible parameter uncertainties. By resorting to the Lyapunov-Krasovskii stability theory and the stochastic analysis tools, sufficient stability conditions are established by using an efficient linear matrix inequality (LMI) approach. The proposed criteria can be checked readily by using recently developed numerical packages, where no tuning of parameters is required. An example is provided to demonstrate the usefulness of the proposed criteria

  20. A neuromorphic circuit mimicking biological short-term memory.

    Science.gov (United States)

    Barzegarjalali, Saeid; Parker, Alice C

    2016-08-01

    Research shows that the way we remember things for a few seconds relies on a different mechanism from the way we remember things for a longer time. Short-term memory is based on persistently firing neurons, whereas storing information for a longer time is based on strengthening synapses or even forming new neural connections. Information about the location and appearance of an object is segregated and processed by separate neurons. Furthermore, neurons can continue firing using different mechanisms. Here, we have designed a biomimetic neuromorphic circuit that mimics short-term memory with firing neurons, using biological mechanisms to remember the location and shape of an object. Our neuromorphic circuit has a hybrid architecture: neurons are designed in 45 nm CMOS technology and synapses are designed with carbon nanotubes (CNTs).