Sample records for Elman neural network

  1. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli


    In recent years, forecasting financial market dynamics has been a focus of economic research. To predict stock market price indices, we developed an architecture that combines Elman recurrent neural networks with a stochastic time effective function. The proposed model is analyzed with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) methods and compared with the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN); the empirical results show that the proposed network performs best among these networks in financial time series forecasting. The predictive performance of the established model is further tested on the SSE, TWSE, KOSPI, and Nikkei225 indices, and the corresponding statistical comparisons of these market indices are also presented. The experimental results show that the approach performs well in predicting stock market index values.
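
    The records in this collection all build on the Elman (simple recurrent) architecture: a hidden layer whose previous activations are copied into context units and fed back as extra inputs at the next step. As a point of reference, the sketch below is a minimal plain-NumPy forward pass of that standard cell with illustrative layer sizes and activations; it does not reproduce the stochastic time effective weighting proposed in this particular paper.

    import numpy as np

    # Minimal Elman forward pass: the hidden state is copied into a context
    # layer and fed back as additional input at the next time step.
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 1, 8, 1

    W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
    W_ch = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context -> hidden
    W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output
    b_h, b_y = np.zeros(n_hidden), np.zeros(n_out)

    def elman_forward(series):
        """Run the untrained network over a 1-D series, one-step-ahead outputs."""
        context = np.zeros(n_hidden)              # context units start at zero
        outputs = []
        for x_t in series:
            x = np.atleast_1d(x_t)
            hidden = np.tanh(W_xh @ x + W_ch @ context + b_h)
            outputs.append(W_hy @ hidden + b_y)
            context = hidden.copy()               # context stores previous hidden state
        return np.array(outputs)

    print(elman_forward(np.sin(np.linspace(0, 6, 50))).shape)  # (50, 1)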

  2. Speed-Sensorless Control Using Elman Neural Network


    This paper describes a modified speed-sensorless control scheme for an induction motor (IM) based on space vector pulse width modulation and a neural network. An Elman ANN method to identify the IM speed is proposed, with the IM parameters employed as associated elements. The BP algorithm is used to provide an adaptive estimation of the motor speed. The effectiveness of the proposed method is verified by simulation results, and an implementation on a TMS320F240 fixed-point DSP is provided.

  3. Method of gear fault diagnosis based on EEMD and improved Elman neural network

    Zhang, Qi; Zhao, Wei; Xiao, Shungen; Song, Mengmeng


    Because the fault information produced by gear defects such as cracks and wear is weak and therefore difficult to diagnose, a gear fault diagnosis method based on the fusion of EEMD and an improved Elman neural network is proposed. A number of IMF components are obtained by decomposing the denoised fault signals with EEMD, and pseudo IMF components are eliminated with the correlation coefficient method to obtain the effective IMF components. The energy of each effective component is calculated and used as the input feature vector of the Elman neural network; the improved Elman neural network extends the standard network by adding a feedback factor. Fault data for normal, broken-tooth, cracked and worn gears were collected in the field and analyzed with the proposed method. The results show that, compared with the standard Elman neural network, the improved Elman neural network achieves higher diagnostic efficiency.
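
    As a rough illustration of the feature construction step described above, the hypothetical helper below computes normalized IMF energy features, assuming the EEMD decomposition (e.g. from a library such as PyEMD) is already available as an array imfs with one IMF per row; the 0.3 correlation threshold is an assumption, not a value from the paper.

    import numpy as np

    def imf_energy_features(signal, imfs, corr_threshold=0.3):
        """Energy feature vector of the 'effective' IMFs (hypothetical helper)."""
        feats = []
        for imf in imfs:
            r = np.corrcoef(signal, imf)[0, 1]   # correlation with the fault signal
            if abs(r) >= corr_threshold:         # drop pseudo (weakly correlated) IMFs
                feats.append(np.sum(imf ** 2))   # energy of the effective component
        feats = np.asarray(feats, dtype=float)
        return feats / feats.sum() if feats.size else feats

    # Toy usage with a synthetic signal and stand-in "IMFs":
    t = np.linspace(0, 1, 1024)
    signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
    imfs = np.vstack([np.sin(2 * np.pi * 50 * t),
                      0.3 * np.sin(2 * np.pi * 7 * t),
                      0.01 * np.random.default_rng(0).normal(size=t.size)])
    print(imf_energy_features(signal, imfs))     # inputs for the Elman network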

  4. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Zhisheng Zhang


    A short-term load forecasting model based on quantum Elman neural networks was constructed in this paper. Quantum computation and the Elman feedback mechanism are integrated into the quantum Elman neural network; quantum computation can effectively improve the approximation capability and the information processing ability of the network. Quantum Elman neural networks have not only feedforward connections but also feedback connections: the feedback between the hidden nodes and the context nodes is a state feedback internal to the system, which provides a specific dynamic memory capability. Phase space reconstruction theory is the theoretical basis for constructing the forecasting model, and the training samples are formed by means of a K-nearest neighbor approach. The simulation results show that the model based on quantum Elman neural networks outperforms models based on the quantum feedforward neural network, the conventional Elman neural network, and the conventional feedforward neural network, so the proposed model can effectively improve prediction accuracy. This work lays a theoretical foundation for the practical engineering application of short-term load forecasting models based on quantum Elman neural networks.
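
    The phase space reconstruction mentioned above amounts to delay-coordinate embedding of the load series. The sketch below is a hypothetical helper that builds (input, target) pairs this way; the embedding dimension and delay are illustrative assumptions, and the paper's K-nearest-neighbour sample selection is not reproduced.

    import numpy as np

    def delay_embed(series, m=4, tau=24):
        """Delay embedding: rows of m values spaced tau apart, plus the next value."""
        series = np.asarray(series, dtype=float)
        n = len(series) - (m - 1) * tau - 1
        X = np.stack([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
        y = series[(m - 1) * tau + 1:(m - 1) * tau + 1 + n]   # value to predict
        return X, y

    load = np.sin(np.linspace(0, 20 * np.pi, 2000))           # toy hourly load curve
    X, y = delay_embed(load, m=4, tau=24)
    print(X.shape, y.shape)                                   # (1927, 4) (1927,)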

  5. Real-Time Control Strategy of Elman Neural Network for the Parallel Hybrid Electric Vehicle

    Ruijun Liu


    By studying the instantaneous control strategy and the Elman neural network, this paper established equivalent fuel consumption functions for the charging and discharging conditions of the power battery, derived the optimal control objective function of instantaneous equivalent consumption, established the instantaneous optimal control model, and designed an Elman neural network controller. Based on the ADVISOR 2002 platform, the instantaneous optimal control strategy and the Elman neural network control strategy were simulated on a parallel HEV, and the simulation results were analyzed. The contribution of the paper is that the trained Elman neural network control strategy reduces simulation time by 96% and improves the real-time performance of energy control while maintaining good power performance and fuel economy.

  6. Dynamic recurrent Elman neural network based on immune clonal selection algorithm

    Wang, Limin; Han, Xuming; Li, Ming; Sun, Haibo; Li, Qingzhao


    Because the immune clonal selection algorithm combined with a dynamic threshold strategy is well suited to optimizing multiple parameters, this paper proposes a novel approach in which that algorithm is used to optimize a dynamic recurrent Elman neural network. The concrete structure of the recurrent network, the connection weights and the initial values of the context units, among other parameters, are obtained automatically through evolutionary training and learning, which makes it possible to construct and design dynamic recurrent Elman neural networks. This provides a new and effective way to optimize dynamic recurrent neural networks with the immune clonal selection algorithm.

  7. Short-term load forecasting study of wind power based on Elman neural network

    Tian, Xinran; Yu, Jing; Long, Teng; Liu, Jicheng


    Since wind power is intermittent, irregular and volatile, improving the accuracy of wind power load forecasting has a significant influence on controlling wind power systems and guaranteeing stable operation of power grids. This paper constructs a short-term wind farm load forecasting model based on the Elman neural network and presents a numerical example analysis. The examples show that using the input-delayed feedback of the Elman neural network better reflects the inherent laws of wind load behaviour, and thus offers a new approach to short-term load forecasting of wind power.

  8. Temperature drift modeling of MEMS gyroscope based on genetic-Elman neural network

    Chong, Shen; Rui, Song; Jie, Li; Xiaoming, Zhang; Jun, Tang; Yunbo, Shi; Jun, Liu; Huiliang, Cao


    In order to improve the temperature drift modeling precision of a tuning fork micro-electromechanical system (MEMS) gyroscope, a novel multiple-input/single-output model based on a genetic algorithm (GA) and an Elman neural network (Elman NN) is proposed. First, a temperature experiment on the MEMS gyroscope is carried out and the outputs of the gyroscope and the temperature sensors are collected; then a temperature drift model based on temperature, temperature variation rate and their coupling term is proposed, with the Elman NN employed to guarantee the generalization ability of the model; finally, the genetic algorithm is used to tune the parameters of the Elman NN to improve the modeling precision. The Allan variance analysis validates that, compared with the traditional single-input/single-output model, the novel multiple-input/single-output model achieves highly accurate fitting because it provides richer controllable information. Moreover, the generalization ability of the Elman neural network is improved significantly because its parameters are optimized by the genetic algorithm.
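
    The GA tuning step described above searches over real-valued network parameters. The sketch below shows a minimal real-coded genetic algorithm of that kind; the fitness function is a placeholder quadratic, which in the paper's setting would be replaced by the negative modeling error of the Elman network whose flattened weights form the candidate vector. Population size, mutation rate and the other settings are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(w):
        return -np.sum((w - 0.5) ** 2)      # placeholder for -model_error(w)

    def ga_optimize(dim, pop_size=30, generations=100, mut_rate=0.1, mut_scale=0.05):
        pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
        for _ in range(generations):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]  # keep the best half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                alpha = rng.random(dim)
                child = alpha * a + (1 - alpha) * b                  # arithmetic crossover
                mask = rng.random(dim) < mut_rate                    # Gaussian mutation
                child = child + mask * rng.normal(scale=mut_scale, size=dim)
                children.append(child)
            pop = np.vstack([parents, np.array(children)])
        return pop[np.argmax([fitness(ind) for ind in pop])]

    print(ga_optimize(dim=10)[:3])          # first entries of the best vector found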

  9. Multicomponent Kinetic Determination by Wavelet Packet Transform Based Elman Recurrent Neural Network Method

    REN Shou-xin; GAO Ling


    This paper presents a novel method named wavelet packet transform based Elman recurrent neural network (WPTERNN) for the simultaneous kinetic determination of periodate and iodate. Wavelet packet representations of signals provide a local time-frequency description, so the quality of noise removal can be improved in the wavelet packet domain. The Elman recurrent network was applied to non-linear multivariate calibration. In this case, the wavelet function, decomposition level and number of hidden nodes for the WPTERNN method were selected by optimization as D4, 5 and 5, respectively. A program, PWPTERNN, was written to perform the multicomponent kinetic determination. The relative standard error of prediction (RSEP) over all components was 3.23% for WPTERNN, 11.8% for the Elman RNN and 10.9% for PLS. The experimental results show that the proposed method outperforms the others.
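
    For reference, the relative standard error of prediction quoted above is conventionally defined (the standard chemometric formula; the record itself does not restate it) as

        \mathrm{RSEP}(\%) = 100 \times \sqrt{\frac{\sum_{i=1}^{n}(\hat{c}_i - c_i)^2}{\sum_{i=1}^{n} c_i^2}}

    where c_i are the actual concentrations and \hat{c}_i the predicted ones.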

  10. A maze learning comparison of Elman, long short-term memory, and Mona neural networks.

    Portegys, Thomas E


    This study compares the maze learning performance of three artificial neural network architectures: an Elman recurrent neural network, a long short-term memory (LSTM) network, and Mona, a goal-seeking neural network. The mazes are networks of distinctly marked rooms randomly interconnected by doors that open probabilistically. The mazes are used to examine two important problems related to artificial neural networks: (1) the retention of long-term state information and (2) the modular use of learned information. For the former, mazes impose a context learning demand: at the beginning of the maze, an initial door choice forms a context that must be remembered until the end of the maze, where the same numbered door must be chosen again in order to reach the goal. For the latter, the effect of modular and non-modular training is examined. In modular training, the door associations are trained in separate trials from the intervening maze paths, and only presented together in testing trials. All networks performed well on mazes without the context learning requirement. The Mona and LSTM networks performed well on context learning with non-modular training; the Elman performance degraded as the task length increased. Mona also performed well for modular training; both the LSTM and Elman networks performed poorly with modular training.

  11. Optimization of matrix tablets controlled drug release using Elman dynamic neural networks and decision trees.

    Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica


    The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using the formulation composition, the compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for the hydrophilic and lipid matrix tablets, respectively, and the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by direct compression and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. The calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of the test matrix tablet formulations indicate that Elman dynamic neural networks, as well as decision trees, are capable of accurate predictions of both hydrophilic and lipid matrix tablet dissolution profiles. The Elman neural networks were compared with the most frequently used static network, the multilayer perceptron, and the superiority of the Elman networks was demonstrated. The developed methods allow a simple yet very precise way of predicting drug release for both hydrophilic and lipid controlled-release matrix tablets.
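
    For reference, the difference and similarity factors mentioned above are conventionally defined (the standard dissolution-profile comparison formulas; the record itself does not restate them) as

        f_1 = \frac{\sum_{t=1}^{n} |R_t - T_t|}{\sum_{t=1}^{n} R_t} \times 100,
        \qquad
        f_2 = 50 \cdot \log_{10}\!\left\{\left[1 + \frac{1}{n}\sum_{t=1}^{n}(R_t - T_t)^2\right]^{-1/2} \times 100\right\}

    where R_t and T_t are the reference and test percent dissolved at time point t and n is the number of time points.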

  12. Maximus-AI: Using Elman Neural Networks for Implementing a SLMR Trading Strategy

    Marques, Nuno C.; Gomes, Carlos

    This paper presents a stop-loss/maximum return (SLMR) trading strategy based on improving the classic moving average technical indicator with neural networks. We propose improving the efficiency of the long-term moving average by using the limited recursion in Elman neural networks, jointly with a hybrid neuro-symbolic neural network, while fully keeping all the learning capabilities of the non-recursive parts of the network. Simulations using the Eurostoxx50 financial index illustrate the potential of such a strategy for avoiding negative asset returns and decreasing investment risk.


    Suhartono Suhartono


    The neural network (NN) is one of many methods used to predict hourly electricity consumption in many countries. The NN type used in many previous studies is the feed-forward neural network (FFNN), or autoregressive neural network (AR-NN). An AR-NN model is not able to capture and explain the effect of moving average (MA) orders in a time series. This research reviews the application of another type of NN, the Elman recurrent neural network (Elman-RNN), which can capture the MA order effect, and compares its prediction accuracy with multiple seasonal ARIMA (autoregressive integrated moving average) models. As a case study, we used hourly electricity consumption data from Mengare, Gresik. The analysis shows that the best double seasonal ARIMA model for short-term forecasting of these data is ARIMA([1,2,3,4,6,7,9,10,14,21,33],1,8)(0,1,1)^24(1,1,0)^168. This model produces white-noise residuals, but they are not normally distributed owing to suspected outliers; iterative outlier detection finds 14 innovation outliers. Four Elman-RNN input configurations were examined and tested for forecasting the data: the lags of the ARIMA model; the ARIMA lags plus 14 outlier dummies; lag multiples of 24 up to lag 480; and lag 1 together with lag multiples of 24 plus 1. All four networks use one hidden layer with a tangent sigmoid activation function and one output with a linear function. Comparison of out-of-sample MAPE values shows that the fourth network, Elman-RNN(22,3,1), is the best model for short-term forecasting of hourly electricity consumption in Mengare, Gresik.
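
    As a sketch of the fourth input scheme described above (lag 1 plus lags that are multiples of 24 plus 1, giving the 22 inputs of the Elman-RNN(22,3,1) model), the hypothetical helper below builds a lagged input matrix from an hourly series; the exact lag list is inferred from the description in the record.

    import numpy as np

    def lagged_inputs(series, lags):
        """Stack the given lags of a series as input columns, next value as target."""
        series = np.asarray(series, dtype=float)
        start = max(lags)
        X = np.column_stack([series[start - lag:len(series) - lag] for lag in lags])
        y = series[start:]
        return X, y

    lags = [1] + [24 * k + 1 for k in range(1, 22)]      # 22 lags: 1, 25, 49, ..., 505
    hourly_load = np.random.default_rng(2).random(2000)  # stand-in for the Gresik data
    X, y = lagged_inputs(hourly_load, lags)
    print(X.shape, y.shape)                              # (1495, 22) (1495,)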

  14. Temperature prediction and analysis based on BP and Elman neural network for cement rotary kiln

    Yang, Baosheng; Ma, Xiushui


    In order to reduce energy consumption and improve the production stability of the cement burning system, it is necessary to analyze the system in depth and control its operating state and behaviour. Since the rotary kiln consumes most of the fuel, we establish a simulation model of the cement kiln to find effective control methods. Because of its complex parameters, it is difficult to construct a mathematical model of the rotary cement kiln, so we use a neural network to build the simulation model directly, choosing reasonable state and control variables and collecting actual operating data to train the network weights. We first analyze the process mechanism and the correlations among working parameters in depth to determine the factors affecting yield and quality, which serve as the model input variables; we then construct cement kiln models based on BP and Elman networks, both of which achieve good fitting results. The Elman network model has faster convergence, higher precision and better generalization ability, so it can be used as a simulation model of the cement rotary kiln for exploring new control methods.


  16. Discrimination of neutrons and γ-rays in liquid scintillator based on Elman neural network

    Zhang, Cai-Xun; Zhao, Jian-Ling; Wang, Li; Yu, Xun-Zhen; Zhu, Jing-Jun; Xing, Hao-Yang


    A new neutron and γ (n/γ) discrimination method based on an Elman Neural Network (ENN) was put forward to improve the n/γ discrimination performance of liquid scintillator (LS). In this study, neutron and γ data acquired from an EJ-335 detector exposed in an Am-Be radiation field were discriminated using the ENN. Compared with a Back Propagation Neural Network (BPNN), the ENN improves n/γ discrimination, increasing the Figure of Merit (FOM) from 0.907 to 0.953.

  17. Novel Modified Elman Neural Network Control for PMSG System Based on Wind Turbine Emulator

    Chih-Hong Lin


    A novel modified Elman neural network (NN) controlled permanent magnet synchronous generator (PMSG) system, directly driven by a permanent magnet synchronous motor (PMSM) based on a wind turbine emulator, is proposed to control the output of the rectifier (AC/DC power converter) and inverter (DC/AC power converter) in this study. First, a closed-loop PMSM drive control based on the wind turbine emulator is designed to generate power for the PMSG system according to different wind speeds. Then, the rotor speed of the PMSG and the voltage and current of the power converter are measured simultaneously to yield better power output from the converter. Because the PMSG system is nonlinear and time-varying, two sets of online-trained modified Elman NN controllers are developed as tracking controllers of the DC bus power and the AC power to improve the output performance of the rectifier and inverter. Finally, experimental results verify the effectiveness of the proposed control scheme.

  18. A Pressure Control Method for Emulsion Pump Station Based on Elman Neural Network

    Chao Tan


    In order to realize pressure control of the emulsion pump station, a key piece of equipment for safe coal mine production, the control requirements were analyzed and a pressure control method based on an Elman neural network was proposed. The key techniques, including the system framework, the pressure prediction model, the pressure control model, and the flowchart of the proposed approach, are presented. Finally, a simulation example was carried out, and the comparison results indicate that the proposed approach is feasible and efficient and outperforms the others.

  19. Rolling Bearing Fault Detection Based on the Teager Energy Operator and Elman Neural Network

    Hongmei Liu


    This paper presents an approach to bearing fault diagnosis based on the Teager energy operator (TEO) and an Elman neural network. The TEO can estimate the total mechanical energy required to generate a signal, giving good time resolution and self-adaptability to transient signals, which is advantageous for detecting signal impact characteristics. To detect the impact characteristics of bearing fault vibration signals, we used the TEO to extract the cyclical impacts caused by bearing failure and applied the wavelet packet transform to reduce the noise of the Teager energy signal. This approach also enabled the extraction of bearing fault feature frequencies, which were identified using the fast Fourier transform of the Teager energy. The feature frequencies of the inner and outer race faults, as well as the ratio of resonance frequency band energy to total energy in the Teager spectrum, were extracted as feature vectors. In order to avoid frequency leakage error, the weighted Teager spectrum around the fault frequency was extracted as a feature vector. These vectors were then used to train the Elman neural network and improve the robustness of the diagnostic algorithm. Experimental results indicate that the proposed approach effectively detects bearing faults under variable conditions.
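
    For reference, the discrete Teager energy operator used above is ψ[x](n) = x(n)² − x(n−1)·x(n+1). The sketch below applies it to a toy impact-like signal; the sampling rate and signal are illustrative, not the paper's data.

    import numpy as np

    def teager_energy(x):
        """Discrete Teager energy operator: x(n)^2 - x(n-1)*x(n+1)."""
        x = np.asarray(x, dtype=float)
        return x[1:-1] ** 2 - x[:-2] * x[2:]

    fs = 12000                                        # assumed sampling rate (Hz)
    t = np.arange(0, 0.1, 1.0 / fs)
    # A resonance burst gated by a low-frequency impact train (toy bearing signal)
    signal = np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * 30 * t) > 0.99)
    teo = teager_energy(signal)
    print(teo.shape, float(teo.max()))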

  20. Actuator fault diagnosis of autonomous underwater vehicle based on improved Elman neural network

    孙玉山; 李岳明; 张国成; 张英浩; 吴海波


    Autonomous underwater vehicles (AUVs) work in a complex marine environment, so system reliability and autonomous fault diagnosis are particularly important and provide the basis for an underwater vehicle to adopt an appropriate safety policy in the event of a failure. Because the underwater vehicle is an uncertain system that is difficult to model, an improved Elman neural network is introduced and applied to motion modeling of the vehicle. By designing fixed-gain self-feedback connections for the context units and adding feedback from the output-layer node, the improved Elman network achieves faster convergence and better generalization, and has stronger identification ability for high-order nonlinear systems. First, the residual is calculated by comparing the output of the underwater vehicle model (the estimated motion state) with the actual measured values. Second, the characteristics of the residual are analyzed against fault judgment criteria. Finally, actuator fault diagnosis of the autonomous underwater vehicle is carried out. The results of the simulation experiment show that the method is effective.




    Epileptic seizures are detected largely through analysis of electroencephalogram (EEG) signals. EEG recordings generate very bulky data that require skilled and careful analysis; this task can be automated with an Elman neural network using a time-frequency domain characteristic of the EEG signal called approximate entropy (ApEn). The method consists of EEG data collection, feature extraction and classification. EEG data from normal persons and from persons affected by epilepsy were collected, digitized and then fed into the Elman neural network. The proposed system is a neural-network-based automated epileptic EEG detection system that uses ApEn as the input feature. ApEn [1] is a statistical parameter that measures the predictability of the current amplitude values of a physiological signal based on its previous amplitude values. It is known that the value of ApEn drops sharply during an epileptic seizure [2], and this fact is used in the proposed system. The experimental results show that the proposed approach efficiently detects the presence of epileptic seizures [3] in EEG signals with reasonable accuracy.
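
    For reference, approximate entropy can be computed directly from an amplitude series. The sketch below is a plain-NumPy ApEn(m, r) implementation with the common choices m = 2 and r = 0.2·std, which are assumptions rather than values taken from this record; a regular signal scores low and a noisy one high.

    import numpy as np

    def approximate_entropy(x, m=2, r_factor=0.2):
        """ApEn(m, r) with r = r_factor * std(x), self-matches included."""
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)

        def phi(m):
            n = len(x) - m + 1
            templates = np.array([x[i:i + m] for i in range(n)])
            # Chebyshev distance between every pair of length-m templates
            dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            counts = np.sum(dist <= r, axis=1) / n
            return np.mean(np.log(counts))

        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(3)
    regular = np.sin(np.linspace(0, 20 * np.pi, 1000))   # predictable signal
    noisy = rng.normal(size=1000)                        # unpredictable signal
    print(approximate_entropy(regular), approximate_entropy(noisy))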

  2. Discrimination of neutrons and γ-rays in liquid scintillator based on Elman neural network

    Zhang, Cai-Xun; Lin, Shin-Ted; Zhao, Jian-Ling; Yu, Xun-Zhen; Wang, Li; Zhu, Jing-Jun; Xing, Hao-Yang


    In this work, a new neutron and γ (n/γ) discrimination method based on an Elman Neural Network (ENN) is proposed to improve the discrimination performance of liquid scintillator (LS) detectors. Neutron and γ data were acquired from an EJ-335 LS detector exposed in a 241Am-9Be radiation field. Neutron and γ events were discriminated using two artificial neural network methods, the ENN and a typical Back Propagation Neural Network (BPNN) as a control. The results show that the two methods have different n/γ discrimination performance: compared with the BPNN, the ENN provides an improved Figure of Merit (FOM) in n/γ discrimination, with the FOM increasing from 0.907 ± 0.034 to 0.953 ± 0.037. The proposed ENN-based n/γ discrimination method provides a new choice of pulse shape discrimination in neutron detection. Supported by National Natural Science Foundation of China (11275134, 11475117).
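
    For context, the figure of merit quoted here is conventionally computed from the distribution of the pulse-shape discrimination parameter as

        \mathrm{FOM} = \frac{S}{\mathrm{FWHM}_n + \mathrm{FWHM}_\gamma}

    where S is the separation between the neutron and γ peak centroids and the FWHM terms are the full widths at half maximum of the two peaks (this is the standard definition; the record does not restate it).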

  3. Fault diagnosis for manifold absolute pressure sensor (MAP) of diesel engine based on Elman neural network observer

    Wang, Yingmin; Zhang, Fujun; Cui, Tao; Zhou, Jinlong


    The intake system of a diesel engine is strongly nonlinear and difficult to model accurately, so bias faults and precision degradation faults of the manifold absolute pressure (MAP) sensor cannot easily be diagnosed with model-based methods. Thus, a fault diagnosis method based on an Elman neural network observer is proposed. Comparing simulation results for intake pressure obtained with a BP network and an Elman neural network, the Elman network gives a lower sampling error magnitude and a less volatile error; forecast accuracy lies between 0.015 and 0.0175 and the sample error is kept within 0-0.07. Considering output stability and solution complexity together, an Elman neural network with a single hidden layer of 44 nodes is chosen as the intake system observer. By comparing the confidence intervals of the residual between measured and predicted values, the error variance, and the failure signatures of the various fault types, four typical MAP sensor faults of the diesel engine can be diagnosed: complete failure, bias, precision degradation and drift. The simulation results show that the intake pressure is observable and that reasonable selection of the diagnostic strategy parameters increases diagnostic accuracy; the proposed method depends only on the data and on the structural parameters of the observer, not on a nonlinear model of the intake system. Thus a fault diagnosis method is proposed that observes intake pressure without a system model, and bias and precision degradation faults of the diesel engine MAP sensor can be diagnosed from the residuals.


  5. Enhanced Dynamic Model of Pneumatic Muscle Actuator with Elman Neural Network

    Alexander Hošovský


    To make effective use of model-based control system design techniques, one needs a good model that captures the system's dynamic properties in the range of interest. Here an analytical model of a pneumatic muscle actuator with two pneumatic artificial muscles driving a rotational joint is developed. Using an analytical model makes it possible to retain the physical interpretation of the model, and the model is validated using open-loop responses. Since it was considered important to design a robust controller based on this model, the effect of a changed moment of inertia (as a representation of an uncertain parameter) was taken into account and compared with the nominal case. To improve the accuracy of the model, these effects are treated as a disturbance modeled using a recurrent (Elman) neural network. A recurrent neural network was preferred over a feedforward type due to its better long-term prediction capabilities, well suited to simulation use of the model. The results confirm that this method improves the model performance (tested for five measured variables: joint angle, muscle pressures, and muscle forces) while retaining its physical interpretation.


    ZHANG Hongyan; ZHAO Dingxuan; TANG Xinxing; Ding Chunfeng


    From the viewpoint of energy saving and improving transmission efficiency, the ZL50E wheel loader is taken as the study object, and the system model is analyzed based on the transmission system of the construction vehicle. A new four-parameter shift schedule is presented which keeps the torque converter working in its high-efficiency region. A control algorithm based on the Elman recurrent neural network is applied, and a four-parameter control system based on an industrial computer is developed. The system is used to collect data accurately and to control the shifting of the 4D180 power-shift gearbox of the ZL50E wheel loader in a timely manner. An experiment on an automatic transmission test bed indicates that the control system works reliably and safely and improves the efficiency of the hydraulic torque converter. The four-parameter shift strategy, which takes into account the power consumed by the working pump, has important operational significance and reflects the actual working conditions of the construction vehicle.

  7. Mobile robot nonlinear feedback control based on Elman neural network observer

    Khaled Al-Mutib


    This article presents a new approach to controlling a wheeled mobile robot without velocity measurement. The controller is based on the kinematic model as well as the dynamic model, so as to take the dynamic parameters into account; these parameters, related to the dynamic equations, are identified using a proposed methodology. Input-output feedback linearization is considered, with a slight modification of the mathematical expressions, to implement the dynamic controller and analyze the nonlinear internal behavior. The developed controllers require sensors to obtain the states needed for the closed-loop system. However, some states may not be available owing to the absence of sensors because of cost, weight limitation, reliability, induction of errors, failure, and so on. In particular, for velocity measurements the required accuracy may not be achieved in practical applications because of significant errors induced by stochastic or cyclical noise. In this article, an Elman neural network is proposed to work as an observer estimating the velocity needed to complete the full state required for closed-loop control and to account for all disturbances and model parameter uncertainties. Different simulations are carried out to demonstrate the feasibility of the approach in tracking different reference trajectories, in comparison with other paradigms.

  8. Intelligent Control for USV Based on Improved Elman Neural Network with TSK Fuzzy

    Shang-Jen Chuang


    In recent years, driven by rising global demand for personal safety and by human resource costs, the development of unmanned vehicles that replace manpower in high-risk operations has been increasing. In order to acquire useful resources from the marine environment, a large boat was implemented as an unmanned surface vehicle (USV). The USV is equipped with automatic navigation features and completely substitutes for manual operation. This USV system for exploring the marine environment has greater carrying capacity, and its measurement system can be self-designed through a modular approach according to the needs of various environmental conditions, which makes investigation work more flexible. A catamaran hull is adopted for automatic navigation tests with a CompactRIO embedded system. Through GPS and a direction sensor, the current location of the boat is known, and the distance to a predetermined position and the heading angle difference can be calculated immediately. In this paper, automatic navigation is designed according to an improved Elman neural network (ENN) algorithm: Takagi-Sugeno-Kang (TSK) fuzzy control and the improved ENN are applied to adjust the required power and steering, which allows the hull to move straight toward a predetermined target position, free from outside influence, and realizes the purpose of automatic navigation.
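
    The navigation geometry described above (distance to the waypoint and the heading angle difference) can be computed directly from two GPS fixes and the compass heading. The sketch below uses the haversine distance and the initial-bearing formula; the coordinates and heading in the example are arbitrary illustrative values.

    import math

    def distance_and_heading_error(lat1, lon1, lat2, lon2, current_heading_deg):
        """Great-circle distance (m) and signed steering error (deg) to a waypoint."""
        R = 6371000.0                                   # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        distance = 2 * R * math.asin(math.sqrt(a))      # haversine distance
        y = math.sin(dlmb) * math.cos(p2)               # initial bearing to target
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
        bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
        error = (bearing - current_heading_deg + 180.0) % 360.0 - 180.0  # in [-180, 180)
        return distance, error

    print(distance_and_heading_error(22.60, 120.28, 22.61, 120.30, 45.0))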

  9. Determining the amount of anesthetic medicine to be applied by using Elman's recurrent neural networks via resilient back propagation.

    Güntürkün, Rüştü


    In this study, Elman recurrent neural networks trained with resilient backpropagation are used to determine the depth of anesthesia during the maintenance stage and to estimate the amount of anesthetic medicine to be applied at that moment. From 30 patients, 57 distinct EEG recordings were collected prior to and during anaesthesia at different levels. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function was used in the hidden layer and the output layer. Prediction was made by means of the ANN, with the previous anaesthetic amount, the total power/normal power ratio and the total power/previous power ratio used for training and testing. The system produced correct, purposeful responses with an average accuracy of 95%. The method is also computationally fast, and acceptable real-time clinical performance was obtained.

  10. Traffic Prediction with a New Chaotic Elman Neural Network

    党小超; 郝占军; 门健


    Based on network traffic data measured on an actual network, this paper proposes an improved Elman neural network model, the Seasonal Input Multilayer Feedback Elman (SIMF Elman) network. A chaotic search mechanism is introduced into the training of the network weights, using the ergodicity of the Tent map to search the chaotic variables, which reduces data redundancy and addresses the local convergence problem. Experimental results show that the model and its algorithm effectively improve the training speed and the forecasting accuracy of network traffic.
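
    The chaotic search described above draws candidate perturbations from a Tent map sequence. The sketch below simply iterates the map; the parameter is set slightly below 2 to avoid the floating-point degeneration of the ideal tent map, the starting value and length are arbitrary, and the mapping of chaotic values onto weight updates is not reproduced here.

    import numpy as np

    def tent_map_sequence(x0=0.37, length=100, mu=1.99):
        """Iterate the tent map x -> mu*x (x < 0.5) or mu*(1 - x) (x >= 0.5)."""
        xs = np.empty(length)
        x = x0
        for i in range(length):
            x = mu * x if x < 0.5 else mu * (1.0 - x)
            xs[i] = x
        return xs

    seq = tent_map_sequence()
    print(seq[:5], float(seq.min()), float(seq.max()))   # values remain in (0, 1)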

  11. Application of an Elman neural network to the problem of predicting the throughput of a petroleum collecting station; Previsao da vazao de uma estacao coletora de petroleo utilizando redes neurais de Elman

    Paula, Wesley R. de [Universidade Federal de Campina Grande (UFCG), PB (Brazil). Curso de Pos-Graduacao em Informatica; Sousa, Andre G. de [Universidade Federal de Campina Grande (UFCG), PB (Brazil). Curso de Ciencia da Computacao; Gomes, Herman M.; Galvao, Carlos de O. [Universidade Federal de Campina Grande (UFCG), PB (Brazil)


    The objective of this paper is to present an initial study on the application of an Elman neural network to the problem of predicting the throughput of a petroleum collecting station. This study is part of a wider project which aims at producing an automatic real-time system to remotely control a petroleum distribution pipeline, in such a way that optimum efficiency can be assured in terms of: (I) maximizing the volume of oil transported; and (II) minimizing energy consumption, risks of failures and damage to the environment. Experiments were carried out to determine the neural network parameters and to examine its performance for varying prediction horizons. Promising results (with low MSE) have been obtained for predictions up to 10 minutes into the future.

  12. Intelligent nonsingular terminal sliding-mode control using MIMO Elman neural network for piezo-flexural nanopositioning stage.

    Lin, Faa-Jeng; Lee, Shih-Yang; Chou, Po-Huan


    The objective of this study is to develop an intelligent nonsingular terminal sliding-mode control (INTSMC) system using an Elman neural network (ENN) for the three-dimensional motion control of a piezo-flexural nanopositioning stage (PFNS). First, the dynamic model of the PFNS is derived in detail. Then, to achieve robust, accurate trajectory-tracking performance, a nonsingular terminal sliding-mode control (NTSMC) system is proposed for the tracking of the reference contours. The steady-state response of the control system can be improved effectively because of the nonsingularity added in the NTSMC. Moreover, to relax the requirement for the bounds and to discard the switching function in NTSMC, an INTSMC system using a multi-input-multi-output (MIMO) ENN estimator is proposed to improve the control performance and robustness of the PFNS. The ENN estimator estimates the hysteresis phenomenon and the lumped uncertainty, including the system parameters and external disturbance of the PFNS, online. Furthermore, the adaptive learning algorithms for online training of the ENN parameters are derived using the Lyapunov stability theorem. In addition, two robust compensators are proposed to confront the minimum reconstructed errors in INTSMC. Finally, experimental results for the tracking of various contours are given to demonstrate the validity of the proposed INTSMC system for the PFNS.

  13. Robust Kalman Filtering Cooperated Elman Neural Network Learning for Vision-Sensing-Based Robotic Manipulation with Global Stability

    Xungao Zhong


    In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF) in conjunction with Elman neural network (ENN) learning techniques. The global mapping between the vision space and the robotic workspace is learned using an ENN, and this learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is obtained using a robust KF to improve the ENN learning result, so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using new input-output data pairs obtained from the KF cycle to ensure globally stable manipulation. Thus our method, which requires neither camera nor model parameters, avoids the degraded performance caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with an eye-in-hand configuration.

  14. Multistep Wind Speed Forecasting Using a Novel Model Hybridizing Singular Spectrum Analysis, Modified Intelligent Optimization, and Rolling Elman Neural Network

    Zhongshan Yang


    High-accuracy wind speed forecasting, an important part of electrical system monitoring and control, is essential to safe wind power utilization. However, wind speed signals are intermittent and intrinsically complex and are therefore difficult to forecast accurately. Many traditional wind speed forecasting studies have focused on single models, which leads to poor prediction accuracy. In this paper, a new hybrid model is proposed to overcome the shortcomings of single models by combining singular spectrum analysis, modified intelligent optimization, and a rolling Elman neural network. In this model, besides the multiple seasonal patterns used to reduce interference in the original data, a rolling mechanism is utilized to forecast the wind speed multiple steps ahead. To verify the forecasting ability of the proposed hybrid model, 10 min and 60 min wind speed data from the province of Shandong, China, were used as the case study. Compared with the other models, the proposed hybrid model forecasts the wind speed with higher accuracy.

  15. Telephone Traffic Forecasting Based on an Elman Neural Network Optimized by the SAPSO Algorithm

    俞秀婷; 覃锡忠; 贾振红; 傅云瑾; 曹传玲; 常春


    This paper presents a hybrid algorithm that combines the simulated annealing (SA) algorithm with particle swarm optimization (PSO) to optimize the weights and thresholds of an Elman neural network. The approach exploits the global search ability of PSO and, when PSO stagnates in a local optimum, employs SA at the best position found to jump out of the local optimum and continue searching for the global optimum. The hybrid algorithm is used to train the dynamically recurrent Elman neural network, and the approach is applied to busy-hour telephone traffic forecasting. The experimental results show that the SAPSO-Elman neural network has better precision and adaptability than the traditional Elman and PSO-Elman neural networks.

  16. Applied Research on Deformation Prediction Based on the Elman Neural Network Method

    白雪武; 梁东伟; 马友利


    As a rapidly developing branch of nonlinear science, neural networks show unique advantages when dealing with information whose background is unclear and extremely complex. This article applies an Elman neural network to landslide deformation monitoring to establish a forecasting model; the program is implemented with the MATLAB neural network toolbox and applied to a concrete example. The prediction accuracy of the model verifies the feasibility of the Elman neural network model for landslide monitoring and forecasting.

  17. Short-Term Combination Forecasting of Wind Speed Based on Wavelet Transform and Elman Neural Network

    姚传安; 姬少龙; 余泳昌


    Accurate forecasting of wind speed is important for the economic and secure operation of wind power generation systems. In order to overcome the strong randomness of wind and improve the accuracy of short-term wind speed forecasting, a combination forecasting model based on the wavelet transform and an Elman neural network is presented. The model consists of a wavelet pre-processing module and a neural network prediction module. First, using the wavelet transform, the wind speed time series is decomposed and reconstructed into sub-series in different frequency bands; these sub-series are then fed into separate Elman networks for training and prediction. Results on actual wind speed data show that, compared with a single Elman network and the ARMA method, the prediction accuracy of the combination forecasting model is greatly improved, so it can be used for short-term wind speed prediction.
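
    A minimal version of the wavelet pre-processing module described above can be written with PyWavelets: decompose the wind speed series and rebuild one sub-series per frequency band, each of which would then be fed to its own Elman network. The 'db4' wavelet and 3 decomposition levels are assumptions; the record does not state them.

    import numpy as np
    import pywt

    def wavelet_subseries(series, wavelet="db4", level=3):
        """Band-limited sub-series whose sum reconstructs the original signal."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        subseries = []
        for k in range(len(coeffs)):
            kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
            subseries.append(pywt.waverec(kept, wavelet)[:len(series)])
        return subseries                    # [approximation, detail_L, ..., detail_1]

    rng = np.random.default_rng(4)
    wind = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.2 * rng.normal(size=512)
    bands = wavelet_subseries(wind)
    print(len(bands), bands[0].shape)       # 4 sub-series, each the original length
    print(np.allclose(sum(bands), wind))    # the bands sum back to the signal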

  18. Competition and Collaboration in Cooperative Coevolution of Elman Recurrent Neural Networks for Time-Series Prediction.

    Chandra, Rohitash


    Collaboration enables weak species to survive in an environment where different species compete for limited resources. Cooperative coevolution (CC) is a nature-inspired optimization method that divides a problem into subcomponents and evolves them while genetically isolating them. Problem decomposition is an important aspect in using CC for neuroevolution. CC employs different problem decomposition methods to decompose the neural network training problem into subcomponents. Different problem decomposition methods have features that are helpful at different stages in the evolutionary process. Adaptation, collaboration, and competition are needed for CC, as multiple subpopulations are used to represent the problem. It is important to add collaboration and competition in CC. This paper presents a competitive CC method for training recurrent neural networks for chaotic time-series prediction. Two different instances of the competitive method are proposed that employs different problem decomposition methods to enforce island-based competition. The results show improvement in the performance of the proposed methods in most cases when compared with standalone CC and other methods from the literature.

  19. Application of an Elman Neural Network Model Based on Genetic Algorithms to Dam Deformation Prediction

    刘雄峰; 李博; 李俊


    In view of the complexity and time variability of dam deformation forecasting and the shortcomings of traditional prediction models, a GA-Elman model was built by combining the global random search ability of the genetic algorithm (GA) with the nonlinear mapping, dynamic feedback and memory characteristics of the Elman neural network. Compared with the Elman neural network, the GA-Elman model has global convergence and overcomes the Elman network's tendency to fall into local minima. The model was used to forecast measured deformation data of a dam at a hydropower station. The results show that the GA-Elman model has high forecasting precision and is practical for dam deformation prediction.


    李翔; 陈增强; 袁著祉


    The dynamical modeling capability of Elman networks is discussed first. Based on the Elman network's unique structure, a weight training algorithm is designed and a nonlinear adaptive controller is constructed. Without the persistent excitation (PE) assumption, the closed-loop properties of the neural network controller are studied and the passivity of the whole Elman network is demonstrated.

  1. Elman neural network for the early identification of cognitive impairment in Alzheimer’s disease

    Bertè, Francesco; Lamponi, Giuseppe; Calabrò, Rocco Salvatore; Bramanti, Placido


    Early detection of dementia can be useful to delay progression of the disease and to raise awareness of the condition. Alterations in temporal and spatial EEG markers have been found in patients with Alzheimer’s disease (AD) and mild cognitive impairment (MCI). Herein, we propose an automatic recognition method of cognitive impairment evaluation based on EEG analysis using an artificial neural network (ANN) combined with a genetic algorithm (GA). The EEGs of 43 AD and MCI patients (aged between 62 and 88 years) were recorded, analyzed and correlated with their MMSE scores. Quantitative EEGs were calculated using discrete wavelet transform. The data obtained were analyzed by the means of the combined use of ANN and GA to determine the degree of cognitive impairment. The good recognition rate of ANN fed with these inputs suggests that the combined GA/ANN approach may be useful for early detection of AD and could be a valuable tool to support physicians in clinical practice. PMID:25014050

  2. Short-Term Wind Power Forecasting Based on the Elman Neural Network

    张靠社; 杨剑


    In order to improve the precision of wind farm output power forecasting, an artificial neural network (ANN) approach for power forecasting is proposed. Based on historical data from an operating wind farm, such as wind speed, wind direction and wind power, a short-term wind power forecasting model based on the Elman neural network is established. The multilayer Elman model is used for actual 1 h and 24 h wind power prediction for a wind farm in the Northwest region and compared with a BP neural network model. Simulation and analysis show that the Elman model has higher forecasting precision, with the three-hidden-layer Elman neural network giving the best prediction. This demonstrates that modeling wind power with the Elman recurrent neural network is feasible and can effectively improve the precision of power forecasting.

  3. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    Güntürkün, Rüştü


    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm are used to determine the depth of anesthesia during the maintenance stage and to estimate the amount of anesthetic medicine to be applied at that moment. Feed-forward neural networks are also used for comparison, and the conjugate gradient algorithm is compared with back propagation (BP) for training the networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function was used in the hidden layer and the output layer. EEG data were recorded with a Nihon Kohden 9200 22-channel EEG device; the international 8-channel bipolar 10-20 montage system (8 TB-b system) was used for placement of the recording electrodes, and the EEG was sampled once every 2 milliseconds. The artificial neural network was designed with 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The inputs are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the current segment in that range to the total PSD power of an EEG segment taken prior to anaesthesia.
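
    The PSD-based inputs described above can be sketched with a Welch estimate: total power of a 10-second EEG segment in the 1-50 Hz band and its ratio to the power of a reference (pre-anaesthesia) segment. The 500 Hz sampling rate follows the 2 ms sampling stated in the record; the synthetic segments below are placeholders for real EEG.

    import numpy as np
    from scipy import signal

    fs = 500.0
    rng = np.random.default_rng(5)
    segment = rng.normal(size=int(10 * fs))          # 10 s of "EEG" during anaesthesia
    reference = 1.5 * rng.normal(size=int(10 * fs))  # pre-anaesthesia segment

    def band_power(x, fs, f_lo=1.0, f_hi=50.0):
        """Integrate the Welch PSD of x over the [f_lo, f_hi] band."""
        freqs, psd = signal.welch(x, fs=fs, nperseg=1024)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return np.trapz(psd[band], freqs[band])

    p_now = band_power(segment, fs)
    p_ref = band_power(reference, fs)
    print(p_now, p_now / p_ref)                      # power and current/reference ratio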

  4. Short-Term Wind Speed Forecasting Based on the Elman Neural Network Prediction Model

    孙斌; 姚海涛; 齐城龙


    In order to improve the accuracy of short-term wind speed forecasting, this paper proposes an Elman neural network prediction model. The phase space of the chaotic wind speed time series is reconstructed by calculating the embedding dimension and the delay time of the series, and the Elman neural network is then used to forecast the reconstructed series. The results show that the Elman model meets the accuracy requirements. A BP neural network prediction model is also applied to the same series for comparison. The simulation results show that the Elman neural network model performs short-term wind speed prediction accurately and is of practical value in engineering applications.

  5. Thermal Error Modeling for Machine Tools Based on a Genetic Algorithm Optimized Elman Neural Network

    黄玉春; 田建平; 杨海栗; 胡勇; 张良栋


    In order to improve the prediction accuracy of CNC machine tool thermal error models, a vertical machining center was taken as the research object. The temperature measuring points of the machine are optimized using a combination of fuzzy clustering and grey comprehensive relational degree, reducing the number of measuring points from 8 to 3. A prediction model of the spindle thermal drift error was then established based on an Elman neural network optimized by a genetic algorithm (GA). The predictive performance of the GA-Elman model and an ordinary Elman neural network model were compared on an example. The results show that, compared with the ordinary Elman model, the GA-Elman model has higher fitting accuracy, smaller residual error and better generalization for the spindle axial thermal drift error.

  6. Comparative study of Elman and BP neural networks used for pattern classification

    丁硕; 常晓恒; 巫庆辉; 杨友林; 胡庆功


    To study whether the Elman neural network or the standard BPNN is more effective for pattern classification, two classification models, one based on the Elman neural network and one based on the standard BPNN, are established. The classification of two-dimensional vector patterns in a plane is taken as an example to train the two models and to test their generalization abilities. The simulation results show that, with the same number of training samples and a small or medium network size, the Elman neural network has higher classification accuracy and faster convergence than the BPNN, which makes it more suitable for solving pattern classification problems.

  7. Medical Image Classification Using Genetic Optimized Elman Network

    T. Baranidharan


    Problem statement: Advancements in the internet and digital imaging have resulted in huge databases of images. Most current web search engines depend only on metadata to retrieve images, which generates many unwanted results. Content-Based Image Retrieval (CBIR) applies computer vision techniques to the image retrieval problem, that is, searching for and retrieving the right digital images from a large database using a query image. CBIR finds extensive application in medicine, as it helps medical professionals in diagnosis and treatment planning. Approach: Various methods have been proposed for CBIR using low-level image features such as histogram, color, texture and shape, and various classification algorithms such as the Naive Bayes classifier, Support Vector Machines, decision tree induction algorithms and neural-network-based classifiers have been studied extensively. In this study it is proposed to extract global features using the Hilbert Transform (HT), select features based on the correlation of the extracted vectors with the class label, and apply an enhanced Elman neural network, the Genetic Algorithm Optimized Elman (GAOE) neural network. Results and Conclusion: The proposed feature extraction method and classification algorithm were tested on a dataset consisting of 180 medical images, and a classification accuracy of 92.22% was obtained.

  8. Short-Term Photovoltaic Power Forecasting Based on Elman Neural Network with Fruit Fly Optimization Algorithm

    韩伟; 王宏华; 杜炜


    A model based on the Elman neural network (NN) with the fruit fly optimization algorithm (FOA) is proposed to forecast short-term photovoltaic (PV) power. Using the dynamic recurrent Elman NN enhances the reasoning and generalization capacity of the PV power forecasting model and ensures forecasting accuracy. The human body amenity index is introduced to reduce the number of input vectors. The FOA is used to train the Elman NN, which makes full use of the global optimization performance of the FOA and overcomes the defects of conventional learning algorithms such as convergence to local optima, slow convergence speed and complex programming. Finally, comparison with the simulation results of a conventional Elman NN verifies the effectiveness and correctness of the proposed model.

  9. Microgrid Short-term Load Forecasting Based on Elman Neural Network Optimized by FOA

    赵敏; 尤冬梅


    To meet the load forecasting efficiency and accuracy requirements brought by the construction and development of microgrids, and in view of the characteristics of microgrid load such as a small base load, high intermittency and strong randomness, a microgrid short-term load forecasting model based on an Elman neural network optimized by the fruit fly optimization algorithm (FOA) is proposed. Considering that microgrid load is influenced by the cumulative effect of meteorological factors, the human body amenity index is introduced to reduce the input vector dimension. To overcome the defects of conventional learning algorithms, such as slow convergence, convergence to local optima and complex programming, the FOA, which possesses global optimization capability, is used to optimize the structure, weights and thresholds of the Elman neural network. Taking a domestic microgrid demonstration project as an example, the FOA-Elman neural network is applied to microgrid short-term load forecasting. The simulation results show that the proposed forecasting model outperforms the conventional Elman neural network model and has greater application value.

  10. Application of Elman Recursive Neural Network to Recognition of Vehicle License Plate

    杨晓艳; 李飞; 白艳萍


    In order to improve the speed and accuracy of an automatic vehicle license plate recognition system, an adaptable 13-feature extraction method is used to extract character features, and the resulting feature vectors are used as the network inputs. An Elman recurrent neural network with a context layer acting as a one-step delay operator, which has good dynamic modelling properties, is selected for training. When updating its weights, this network considers not only the current gradient direction but also the gradient direction at the previous time step, which reduces the sensitivity of the network performance to parameter adjustment and effectively suppresses local minima. Finally, comparison with BP network training shows that the Elman recurrent neural network is superior in both recognition speed and accuracy.

  11. Short-term wind speed forecasting using Elman neural network based on rough set theory and principal components analysis

    尹东阳; 盛义发; 蒋明洁; 李永胜; 谢曲天


    Because traditional static feed-forward neural networks (FNN) easily fall into local optima and lack dynamic performance in short-term wind speed prediction, a wind speed prediction model using the Elman neural network (ElmanNN) is established. Principal component analysis (PCA) is used to extract features from the raw wind speed data and thereby optimize the inputs of the ElmanNN, and the activation function and network structure are improved to seek the best trade-off between convergence rate and prediction accuracy. To address the large errors and fluctuating prediction accuracy of the ElmanNN model at wind speed peaks, rough set theory is used to correct and compensate the predicted values, further improving the results. Experimental results show that the proposed method effectively improves prediction accuracy and enhances the generalization ability of the ElmanNN model, giving it good practical value.
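
    A minimal sketch of the PCA input-reduction step mentioned above is given below (an illustration only, not the authors' code; the 95% variance threshold and the toy data are assumptions):

        # Project the raw feature matrix onto the leading principal components before
        # feeding a recurrent model, keeping enough components to explain most variance.
        import numpy as np

        def pca_reduce(X, var_ratio=0.95):
            """X: (n_samples, n_features). Returns scores on the leading principal components."""
            Xc = X - X.mean(axis=0)
            U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
            explained = (S ** 2) / np.sum(S ** 2)
            n_keep = int(np.searchsorted(np.cumsum(explained), var_ratio)) + 1
            return Xc @ Vt[:n_keep].T                     # reduced inputs for the recurrent model

        rng = np.random.default_rng(2)
        raw = rng.normal(size=(500, 12))                  # e.g. 500 samples, 12 raw wind features (toy data)
        reduced = pca_reduce(raw)
        print(reduced.shape)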

  12. Application of Elman feedback neural network model to predict the incidence of hemorrhagic fever with renal syndrome

    吴伟; 郭军巧; 安淑一; 关鹏; 周宝森


    Objective: To describe the procedure for building an Elman neural network model and explore its potential application value. Methods: The monthly incidence of hemorrhagic fever with renal syndrome (HFRS) in China from 2004 to 2013 was used to build an Elman neural network model and a SARIMA model, which were used to forecast the monthly incidence of HFRS in China from January to September 2014. The fitting and prediction performance of the two models were compared. Results: For the training sample, the MAE, MAPE and RMSE of the Elman neural network were 0.0088, 0.1191 and 0.0127 respectively, versus 0.0111, 0.1268 and 0.0206 for the SARIMA model. For the prediction sample, the MAE, MAPE and RMSE of the Elman neural network were 0.0079, 0.1180 and 0.0096 respectively, versus 0.0178, 0.2778 and 0.1861 for the SARIMA model. Conclusion: The Elman neural network fits and forecasts the HFRS incidence trend in China well, outperforming the SARIMA model, and is of great application value for the prevention and control of hemorrhagic fever with renal syndrome.
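
    For reference, the three comparison metrics quoted above can be computed as follows (standard definitions shown as a sketch, not code from the study; the example arrays are invented):

        import numpy as np

        def mae(y_true, y_pred):
            return np.mean(np.abs(y_true - y_pred))

        def mape(y_true, y_pred):
            return np.mean(np.abs((y_true - y_pred) / y_true))   # assumes y_true has no zeros

        def rmse(y_true, y_pred):
            return np.sqrt(np.mean((y_true - y_pred) ** 2))

        y_true = np.array([0.12, 0.10, 0.08, 0.09])               # e.g. monthly incidence values (toy data)
        y_pred = np.array([0.11, 0.11, 0.07, 0.10])
        print(mae(y_true, y_pred), mape(y_true, y_pred), rmse(y_true, y_pred))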

  13. Approximation Property of the Modified Elman Network

    任雪梅; 陈杰; 龚至豪; 窦丽华


    A new type of recurrent neural network is discussed, which provides the potential for modelling unknown nonlinear systems. The proposed network is a generalization of the network described by Elman and has three layers: the input layer, the hidden layer and the output layer. The input layer is composed of two groups of neurons, the external input neurons and the internal context neurons. Since arbitrary connections are allowed from the hidden layer to the context layer, the modified Elman network has more memory with which to represent dynamic systems than the standard Elman network. In addition, it is proved that the proposed network, with an appropriate number of context neurons, can approximate the trajectory of a given dynamical system over any fixed finite length of time. A dynamic backpropagation algorithm is used to estimate the weights of both the feedforward and feedback connections. The method has been successfully applied to the modelling of nonlinear plants.
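
    The modification described above can be illustrated by contrasting the standard Elman context update (a one-step copy of the hidden state) with trainable hidden-to-context connections. The sketch below is an illustration under assumed layer sizes, not the paper's implementation:

        import numpy as np

        rng = np.random.default_rng(3)
        n_in, n_hidden = 1, 6

        W_in = rng.normal(scale=0.3, size=(n_hidden, n_in))
        W_ctx = rng.normal(scale=0.3, size=(n_hidden, n_hidden))   # context -> hidden
        W_h2c = rng.normal(scale=0.3, size=(n_hidden, n_hidden))   # hidden -> context (modified Elman only)

        def step(x, context, modified=True):
            hidden = np.tanh(W_in @ x + W_ctx @ context)
            if modified:
                context = W_h2c @ hidden      # arbitrary (trainable) connections into the context layer
            else:
                context = hidden              # standard Elman: plain one-step-delayed copy
            return hidden, context

        ctx = np.zeros(n_hidden)
        for x in rng.normal(size=(4, n_in)):
            h, ctx = step(x, ctx, modified=True)
        print(h.shape, ctx.shape)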

  14. Model of Cement Rotary Kiln Based on Elman Neural Network and Design of DHP Controller

    黄清宝; 林小峰; 宋绍剑; 佘乾仲; 杨宝生


    The calcination process of cement clinker is a complex multi-variable, strongly disturbed, nonlinear system involving mass transfer, heat transfer, and physical and chemical reactions. In order to reduce energy consumption and ensure the burning quality of the cement clinker, it is necessary to explore new optimal control methods to stabilize the rotary kiln temperature. Approximate Dynamic Programming (ADP), which integrates neural networks, reinforcement learning and dynamic programming techniques, is such an optimization method. Dual Heuristic dynamic Programming (DHP) is an ADP algorithm whose critic network outputs the partial derivative of the cost function with respect to the state; it offers good dynamic behaviour, fast convergence and high control precision. Based on a detailed analysis of the rotary kiln process, a model of the kiln system was established with an Elman neural network, and a temperature optimization controller was designed using DHP. The simulation results show that, after fluctuations in the early control period, the burning-zone temperature of the rotary kiln gradually stabilizes, achieving simulated control of the cement rotary kiln.

  15. Fuzzy Shape Control Based on Elman Dynamic Recursion Network Prediction Model

    JIA Chun-yu; LIU Hong-min


    In the strip rolling process, the shape control system possesses the characteristics of nonlinearity, strong coupling, time delay and time variation. Based on a self-adapting Elman dynamic recursion network prediction model, a fuzzy control method was used to control the strip shape on a four-high cold mill. The simulation results showed that the system can be applied to real-time online shape control.

  16. An sEMG approach to recognize the body language of the head based on the GGA-Elman network

    杨钟亮; 陈育苗


    In order to improve the recognition of the "agreement" and "disagreement" attitudes expressed by head movements, a surface electromyography (sEMG) approach combining the greedy genetic algorithm (GGA) and the Elman neural network is proposed. The sEMG signals of the neck muscles were recorded while eight participants nodded and shook their heads in a pilot experiment. Using the Wilcoxon signed-rank test, ten sEMG time-domain features with significant differences were extracted. A body language recognition model was then constructed based on an Elman network optimized by the GGA. Experimental results show that the model can successfully recognize the "agreement" and "disagreement" attitudes spontaneously expressed by different head movements. Compared with recognition models using the standard Elman and BP networks, the proposed model has a higher correlation coefficient, a smaller mean squared error, and a test-set recognition rate that is higher by more than 3.2%, demonstrating the reliability of the approach.

  17. A Neural Network Approach for Misuse and Anomaly Intrusion Detection

    YAO Yu; YU Ge; GAO Fu-xiang


    An MLP (Multi-Layer Perceptron)/Elman neural network is proposed in this paper, which realizes classification with memory of past events by combining the real-time classification of the MLP with the memory capability of the Elman network. The system's sensitivity to the memory of past events can easily be reconfigured without retraining the whole network. This approach can be used for both misuse and anomaly detection systems. Intrusion detection systems (IDSs) using the hybrid MLP/Elman neural network are evaluated on the intrusion detection evaluation data provided by the U.S. Defense Advanced Research Projects Agency (DARPA). The experimental results are presented as Receiver Operating Characteristic (ROC) curves and show that the ability of these IDSs to identify Denial of Service (DoS) and probing attacks is enhanced.

  18. Application of a genetic algorithm Elman network in temperature drift modeling for a fiber-optic gyroscope.

    Chen, Xiyuan; Song, Rui; Shen, Chong; Zhang, Hong


    The fiber-optic gyroscope (FOG) has been widely used as a satellite and automobile attitude sensor in many industrial and defense fields such as navigation and positioning. Based on the fact that the FOG is sensitive to temperature variation, a novel (to our knowledge) error-processing technique for the FOG, derived from a set of temperature experiments and error analysis, is presented. The method contains two parts: denoising, and modeling and compensation. After the denoising stage, a novel modeling method based on a dynamic modified Elman neural network (ENN) is proposed. In order to obtain the optimum parameters of the ENN, a genetic algorithm (GA) is applied, with the optimization objective function defined as the difference between the predicted data and the real data. The modeling and compensation results indicate that the drift caused by varying temperature can be reduced and compensated effectively by the proposed model; the prediction accuracy of the GA-ENN is improved by 20% over the ENN.
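
    The optimization step described above, searching for network parameters that minimise the difference between predicted and real data, can be sketched with a generic real-coded genetic algorithm acting on a flat parameter vector. The code below is a stand-in illustration, not the authors' GA-ENN; the population size, mutation scale and the toy fitness function are assumptions:

        import numpy as np

        rng = np.random.default_rng(4)

        def prediction_error(params, X, y):
            """Toy stand-in for 'run the model with these parameters and measure the error'."""
            w = params[:X.shape[1]]
            b = params[X.shape[1]]
            return np.mean((X @ w + b - y) ** 2)

        def genetic_search(fitness, dim, pop_size=30, generations=50, mut_scale=0.1):
            pop = rng.normal(size=(pop_size, dim))
            for _ in range(generations):
                scores = np.array([fitness(ind) for ind in pop])
                order = np.argsort(scores)                       # lower error = fitter
                parents = pop[order[: pop_size // 2]]
                # crossover: average random pairs of parents, then apply Gaussian mutation
                idx_a = rng.integers(0, len(parents), size=pop_size)
                idx_b = rng.integers(0, len(parents), size=pop_size)
                children = 0.5 * (parents[idx_a] + parents[idx_b])
                children += rng.normal(scale=mut_scale, size=children.shape)
                children[0] = parents[0]                         # elitism: keep the best individual
                pop = children
            scores = np.array([fitness(ind) for ind in pop])
            return pop[np.argmin(scores)]

        X = rng.normal(size=(100, 3)); y = X @ np.array([0.5, -1.0, 2.0]) + 0.3
        best = genetic_search(lambda p: prediction_error(p, X, y), dim=4)
        print(prediction_error(best, X, y))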

  19. Performance Analysis of Software Effort Estimation Models Using Neural Networks



    Full Text Available Software effort estimation involves estimating the effort required to develop software. Cost and schedule overruns occur in software development because of wrong estimates made during the initial stages, so proper estimation is essential for the successful completion of software development. Many estimation techniques are available, among which neural network based techniques play a prominent role. The back-propagation network is the most widely used architecture, and the Elman neural network, a recurrent network, can be used on a par with it. For a good predictor, the difference between estimated effort and actual effort should be as low as possible. Data from historic NASA projects are used for training and testing. The experimental results confirm that the back-propagation algorithm is more efficient than the Elman neural network.

  20. Multi-agent reinforcement learning using modular neural network Q-learning algorithms

    YANG Yin-xian; FANG Kai


    Reinforcement learning is an effective approach used in artificial intelligence, automatic control, and other fields. However, ordinary reinforcement learning algorithms, such as Q-learning with a lookup table, cannot cope with extremely complex and dynamic environments because of the huge state space. To reduce the state space, a modular neural network Q-learning algorithm is proposed, which combines Q-learning with neural networks and a modular method. Feed-forward, Elman and radial basis function neural networks are each employed to construct such an algorithm. It is shown that the Elman neural network Q-learning algorithm has the best performance when the same network training method, gradient descent error back-propagation, is applied.
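
    The core idea of replacing the Q-learning lookup table with a neural network approximator can be sketched as follows. This is an illustration under assumptions (a toy chain environment and a single small feed-forward Q-network), not the modular architecture evaluated above:

        import numpy as np

        rng = np.random.default_rng(5)
        n_states, n_actions, n_hidden = 5, 2, 16
        gamma, lr, eps = 0.9, 0.05, 0.1

        W1 = rng.normal(scale=0.1, size=(n_hidden, n_states))
        W2 = rng.normal(scale=0.1, size=(n_actions, n_hidden))

        def q_values(s):
            x = np.eye(n_states)[s]              # one-hot state encoding
            h = np.tanh(W1 @ x)
            return W2 @ h, h, x

        def env_step(s, a):
            """Toy chain environment (assumption): action 1 moves right, reward at the last state."""
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            return s2, r, s2 == n_states - 1

        for episode in range(200):
            s = 0
            for _ in range(20):
                q, h, x = q_values(s)
                a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))
                s2, r, done = env_step(s, a)
                target = r if done else r + gamma * np.max(q_values(s2)[0])
                td_err = target - q[a]
                # gradient step on 0.5 * td_err**2 for the chosen action only
                W2[a] += lr * td_err * h
                W1 += lr * td_err * np.outer(W2[a] * (1 - h ** 2), x)
                s = s2
                if done:
                    break

        print(np.argmax(q_values(0)[0]))          # should usually prefer action 1 (move right)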

  1. Wind Resource Assessment and Forecast Planning with Neural Networks

    Nicolus K. Rotich


    Full Text Available In this paper we built three types of artificial neural networks, namely feed-forward networks, Elman networks and cascade-forward networks, for forecasting wind speeds and directions. A similar network topology was used for all forecast horizons, regardless of the model type. All the models were trained with real wind speed and direction data collected over a period of two years in the municipality of Puumala, Finland. Up to the 70th percentile of the data was used for training, validation and testing, while the 71st–85th percentile was presented to the trained models for validation. The model outputs were then compared with the last 15% of the original data by measuring the statistical errors between them. The feed-forward networks returned the lowest errors for wind speeds, the cascade-forward networks gave the lowest errors for wind directions, and the Elman networks returned the lowest errors when used for short-term forecasting.

  2. International Neural Network Society Annual Meeting (1994) Held in San Diego, California on 5-9 June 1994. Volume 3.


    In this paper we explore the Elman recurrent network by constructing and identifying finite state automata (FSA) for the addition task.

  3. EMP response modeling of TVS based on the recurrent neural network

    Zhiqiang JI


    Full Text Available Because of the large workload involved and the poor consistency between test results and actual conditions when using transmission line pulse (TLP) testing methods, a modeling method based on recurrent neural networks is proposed for EMP response forecasting. Based on the TLP testing system, two additional categories of EMP are considered: machine-model ESD EMP and human-metal-model ESD EMP. An Elman neural network, a Jordan neural network and their combination, the Elman-Jordan neural network, are established to model the response of the NUP2105L transient voltage suppressor (TVS) and forecast its response under different EMPs. The simulation results show that the recurrent neural networks have satisfactory modeling performance and high computational efficiency.

  4. Predicting Model for Complex Production Process Based on Dynamic Neural Network


    Based on a comparison of several time series prediction methods, this paper points out that it is necessary to use a dynamic neural network when modeling a complex production process. Because self-feedback and mutual feedback are adopted among nodes in the same layer of an Elman network, it has a stronger ability for dynamic approximation and can describe any nonlinear dynamic system. After giving the structure and its mathematical description, a dynamic back-propagation (BP) algorithm for training the weights of the Elman neural network is derived. Finally, the network is used to predict the ash content of black amber in a jigging production process. The results show that this neural network is powerful in prediction and suitable for modeling, predicting and controlling complex production processes.

  5. Neural Networks

    Schwindling Jerome


    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using the mathematical approach in more detail, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools for event classification, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  6. Self-organising T-S fuzzy Elman network based on EKF%基于EKF的自组织T-S模糊Elman网络

    乔俊飞; 袁喜春; 韩红桂


    To address the design of the fuzzy neural network architecture and the deficiency of fuzzy sets in semantic description, a self-organising T-S fuzzy Elman network (SOTSFEN) based on the extended Kalman filter (EKF) is proposed, and its training algorithm is derived. Recursive least squares (RLS) and the EKF are used to update the linear and nonlinear parameters respectively. A criterion for rule generation is given, and the error reduction ratio (ERR) is used as the fuzzy rule pruning strategy. Finally, simulation results on system identification and sewage treatment modeling show that SOTSFEN maintains precision and generalization ability while achieving a simpler network architecture.

  7. Evaluation of Neural Networks Performance in Active Cancellation of Acoustic Noise

    Mehrshad Salmasi,


    Full Text Available Active Noise Control (ANC) works on the principle of destructive interference between the primary disturbance field, heard as undesired noise, and a secondary field generated by control actuators. In the simplest system, the disturbance field is a sine wave and the secondary field is the same sine wave 180 degrees out of phase. This research investigates the use of different types of neural networks in active noise control. The performance of the multilayer perceptron (MLP), Elman and generalized regression neural networks (GRNN) in active cancellation of acoustic noise signals is investigated and compared. Acoustic noise signals are selected from the Signal Processing Information Base (SPIB) database. In order to compare the networks fairly, similar structures and similar training and test samples are used for all of them. The simulation results show that the MLP, GRNN and Elman neural networks all perform well in active cancellation of acoustic noise, with the Elman and MLP networks attenuating noise better than the GRNN. The designed ANC system achieves good noise reduction at low frequencies.
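
    The destructive-interference principle stated above can be illustrated in a few lines (a sketch only; the 50 Hz tone is an arbitrary choice):

        import numpy as np

        t = np.linspace(0, 1, 1000, endpoint=False)
        primary = np.sin(2 * np.pi * 50 * t)                  # 50 Hz disturbance (assumed)
        secondary = np.sin(2 * np.pi * 50 * t + np.pi)        # same wave, 180 degrees out of phase
        residual = primary + secondary

        print(np.max(np.abs(residual)))                       # ~0 up to floating-point error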

  8. Neural Network Applications

    Vonk, E.; Jain, L.C.; Veelenturf, L.P.J.


    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  9. Artificial Neural Network Model for Monitoring Oil Film Regime in Spur Gear Based on Acoustic Emission Data

    Yasir Hassan Ali


    Full Text Available The thickness of an oil film lubricant can contribute to less gear tooth wear and surface failure. The purpose of this research is to use artificial neural network (ANN) computational modelling to correlate spur gear data from acoustic emissions, lubricant temperature, and specific film thickness (λ). The approach uses an algorithm to monitor the oil film thickness and to detect which lubrication regime the gearbox is running in: hydrodynamic, elastohydrodynamic, or boundary. This monitoring can aid identification of fault development. Feed-forward and recurrent Elman neural network algorithms were used to develop ANN models, which were subjected to a training, testing, and validation process. The Levenberg-Marquardt back-propagation algorithm was applied to reduce errors. Log-sigmoid and purelin were identified as suitable transfer functions for the hidden and output nodes. The methods used in this paper show accurate predictions from the ANN, and the feed-forward network's performance is superior to that of the Elman neural network.

  10. Soft Sensor of Vehicle State Estimation Based on the Kernel Principal Component and Improved Neural Network

    Haorui Liu


    Full Text Available In car control systems, some key vehicle states are hard to measure directly and accurately when running on the road, and the cost of measurement is high as well. To address these problems, a vehicle state estimation method based on kernel principal component analysis and an improved Elman neural network is proposed. Combined with a nonlinear vehicle model with three degrees of freedom (3 DOF: longitudinal, lateral, and yaw motion), this paper applies the method to soft sensing of the vehicle states. The simulation results of a double lane change tested by Matlab/SIMULINK co-simulation prove the KPCA-IENN algorithm (kernel principal component analysis and improved Elman neural network) to be quick and precise when tracking the vehicle states within the nonlinear region. The method can meet the software performance requirements of vehicle state estimation in precision, tracking speed, noise suppression, and other aspects.

  11. Rotor Resistance Online Identification of Vector Controlled Induction Motor Based on Neural Network

    Bo Fan


    Full Text Available Rotor resistance identification has been well recognized as one of the most critical factors affecting both the theoretical study and the application of high-performance variable-frequency speed control of AC motors. This paper proposes a novel model for rotor resistance identification based on Elman neural networks. The Elman recurrent neural network is capable of performing nonlinear function approximation and can adapt to time-varying characteristics. The factors influencing the specified parameter are analyzed, and various operating states are covered to ensure the completeness of the training samples. Through signal preprocessing of the samples and training dataset, identification with different input parameters using one network is compared and analyzed. The trained Elman neural network, applied in the identification model, is able to predict the rotor resistance efficiently and with high accuracy. The simulation and experimental results show that the proposed method has broad adaptability and performs very well when applied to a vector-controlled induction motor, enhancing the performance of the motor's variable-frequency speed regulation.

  12. Morphological neural networks

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)


    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron, or in performing the next-layer computation, involves the linear operation of multiplying neural values by their synaptic strengths and adding the results; thresholding usually follows this linear operation to provide the nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide particular examples of morphological neural networks.
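
    The replacement of multiply-and-add by add-and-maximum described above can be shown directly (an illustrative sketch with arbitrary weights, not code from the paper):

        import numpy as np

        rng = np.random.default_rng(6)
        x = rng.normal(size=4)                     # input neuron values
        W = rng.normal(size=(3, 4))                # synaptic strengths for 3 output neurons

        conventional = W @ x                       # sum over i of W[j, i] * x[i]
        morphological = np.max(W + x, axis=1)      # max over i of (W[j, i] + x[i]); nonlinear before thresholding

        print(conventional, morphological)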

  13. Discriminating lysosomal membrane protein types using dynamic neural network.

    Tripathi, Vijay; Gupta, Dwijendra Kumar


    This work presents a dynamic artificial neural network methodology that classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. A neural network based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, comprising seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, principal component analysis is applied. The probabilistic neural network, generalized regression neural network, and Elman recurrent neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e. their output depends not only on the current input but also on previous outputs. The accuracy of the LRN classifier is the highest among all the artificial neural networks considered, with an overall jackknife cross-validation accuracy of 93.2% on the dataset. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and that the protein sequence representation can better reflect the core features of membrane proteins than the classical AA composition.

  14. Constructive neural network learning

    Lin, Shaobo; Zeng, Jinshan; Zhang, Xiaoqin


    In this paper, we aim at developing scalable neural network-type learning systems. Motivated by the idea of "constructive neural networks" in approximation theory, we focus on "constructing" rather than "training" feed-forward neural networks (FNNs) for learning, and propose a novel FNNs learning system called the constructive feed-forward neural network (CFN). Theoretically, we prove that the proposed method not only overcomes the classical saturation problem for FNN approximation, but also ...

  15. Generalized classifier neural network.

    Ozyildirim, Buse Melis; Avci, Mutlu


    In this work a new radial basis function based classification neural network, named the generalized classifier neural network, is proposed. The proposed generalized classifier neural network has five layers, unlike other radial basis function based neural networks such as the generalized regression neural network and the probabilistic neural network: input, pattern, summation, normalization and output layers. In addition to this topological difference, the proposed neural network includes gradient descent based optimization of the smoothing parameter and a diverge effect term added to the calculation. The diverge effect term is an improvement to the summation layer calculation that supplies additional separation ability and flexibility. The performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 data sets containing only two classes, in the MATLAB environment. Better classification performance, up to 89%, is observed. The improved classification performance demonstrates the effectiveness of the proposed neural network.

  16. Chaotic diagonal recurrent neural network

    Wang Xing-Yuan; Zhang Yi


    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and design its structure and learning algorithm. The multilayer feedforward neural network, the diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks.

  17. Artificial Neural Networks

    Chung-Ming Kuan


    Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANN using familiar econometric terminology and provide an overview of ANN modeling approach and its implementation methods.

  18. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood


    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  19. Neural Networks: Implementations and Applications

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.


    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.


  1. Hidden neural networks

    Krogh, Anders Stærmose; Riis, Søren Kamaric


    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  2. Neural Network Ensembles

    Hansen, Lars Kai; Salamon, Peter


    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.

  3. Critical Branching Neural Networks

    Kello, Christopher T.


    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…


  5. Dual extended Kalman filtering in recurrent neural networks(1).

    Leung, Chi-Sing; Chan, Lai-Wan


    In the classical deterministic Elman model, the parameter estimates must be very accurate; otherwise the system performance is very poor. To improve performance, a Kalman filtering algorithm can be used to guide the operation of a trained recurrent neural network (RNN). In this case, during training, we need to estimate the state of the hidden layer as well as the weights of the RNN. This paper discusses how to use dual extended Kalman filtering (DEKF) for this dual estimation and how to use the proposed DEKF to remove unimportant weights from a trained RNN. In our approach, one Kalman algorithm is used for estimating the state of the hidden layer, and one recursive least squares (RLS) algorithm is used for estimating the weights. After training, we use the error covariance matrix of the RLS algorithm to remove unimportant weights. Simulations showed that our approach is an effective joint learning-and-pruning method for RNNs under online operation.

  6. Neural networks and graph theory

    许进; 保铮


    The relationships between artificial neural networks and graph theory are considered in detail. The applications of artificial neural networks to many difficult problems of graph theory, especially NP-complete problems, and the applications of graph theory to artificial neural networks are discussed. For example, graph theory is used to study the pattern classification problem for discrete-type feedforward neural networks and the stability analysis of feedback artificial neural networks.

  7. Neural networks in seismic discrimination

    Dowla, F.U.


    Neural networks are powerful and elegant computational tools that can be used in the analysis of geophysical signals. At Lawrence Livermore National Laboratory, we have developed neural networks to solve problems in seismic discrimination, event classification, and seismic and hydrodynamic yield estimation. Other researchers have used neural networks for seismic phase identification. We are currently developing neural networks to estimate depths of seismic events using regional seismograms. In this paper different types of network architecture and representation techniques are discussed. We address the important problem of designing neural networks with good generalization capabilities. Examples of neural networks for treaty verification applications are also described.

  8. Real-time multi-step-ahead water level forecasting by recurrent neural networks for urban flood control

    Chang, Fi-John; Chen, Pin-An; Lu, Ying-Ray; Huang, Eric; Chang, Kai-Yao


    Urban flood control is a crucial task, which commonly faces fast rising peak flows resulting from urbanization. To mitigate future flood damages, it is imperative to construct an on-line accurate model to forecast inundation levels during flood periods. The Yu-Cheng Pumping Station located in Taipei City of Taiwan is selected as the study area. Firstly, historical hydrologic data are fully explored by statistical techniques to identify the time span of rainfall affecting the rise of the water level in the floodwater storage pond (FSP) at the pumping station. Secondly, effective factors (rainfall stations) that significantly affect the FSP water level are extracted by the Gamma test (GT). Thirdly, one static artificial neural network (ANN) (backpropagation neural network-BPNN) and two dynamic ANNs (Elman neural network-Elman NN; nonlinear autoregressive network with exogenous inputs-NARX network) are used to construct multi-step-ahead FSP water level forecast models through two scenarios, in which scenario I adopts rainfall and FSP water level data as model inputs while scenario II adopts only rainfall data as model inputs. The results demonstrate that the GT can efficiently identify the effective rainfall stations as important inputs to the three ANNs; the recurrent connections from the output layer (NARX network) impose more effects on the output than those of the hidden layer (Elman NN) do; and the NARX network performs the best in real-time forecasting. The NARX network produces coefficients of efficiency within 0.9-0.7 (scenario I) and 0.7-0.5 (scenario II) in the testing stages for 10-60-min-ahead forecasts accordingly. This study suggests that the proposed NARX models can be valuable and beneficial to the government authority for urban flood control.

  9. Fuzzy Multiresolution Neural Networks

    Ying, Li; Qigang, Shang; Na, Lei

    A fuzzy multi-resolution neural network (FMRANN) based on a particle swarm algorithm is proposed to approximate arbitrary nonlinear functions. The activation functions of the FMRANN consist not only of wavelet functions but also of scaling functions, whose translation and dilation parameters are adjustable. A set of fuzzy rules is involved in the FMRANN; each rule corresponds either to a subset consisting of scaling functions or to a sub-wavelet neural network consisting of wavelets with the same dilation parameters. By incorporating the time-frequency localization and multi-resolution properties of wavelets with the self-learning ability of fuzzy neural networks, the approximation ability of the FMRANN can be remarkably improved. A particle swarm algorithm is adopted to learn the translation and dilation parameters of the wavelets and to adjust the shape of the membership functions. Simulation examples are presented to validate the effectiveness of the FMRANN.

  10. Learning character-wise text representations with Elman nets

    Chrupala, Grzegorz


    Simple recurrent networks (SRNs) were introduced by Elman (1990) in order to model temporal structure in general and sequential structure in language in particular. More recently, SRN-based language models have become practical to train on large datasets and have been shown to outperform n-gram language models.

  11. Research on Mobile Robot Obstacle Avoidance in Unknown Environments Based on Elman Network Force Control

    温淑慧; 郑维


    Collision avoidance has always been a difficulty in mobile robot path planning. A neural network based dynamic obstacle avoidance method for robots in dynamic environments is presented, and hybrid force/position control is applied to mobile robot obstacle avoidance control. The force control algorithm forms a virtual force field between the mobile robot and the obstacles and tunes it so that the desired distance between the two can be maintained. However, the uncertainty of the mobile robot's dynamic model and of the obstacles affects the obstacle avoidance performance. Therefore, an Elman neural network is used to compensate for the uncertainty caused by the environment while tuning the exact distance between the mobile robot and the obstacles. Simulation results show that the dynamic obstacle avoidance algorithm is effective.

  12. Rule Extraction:Using Neural Networks or for Neural Networks?

    Zhi-Hua Zhou


    In the research on rule extraction from neural networks, fidelity describes how well the rules mimic the behavior of a neural network, while accuracy describes how well the rules generalize. This paper identifies the fidelity-accuracy dilemma. It argues that rule extraction using neural networks and rule extraction for neural networks should be distinguished according to their different goals, with fidelity and accuracy respectively excluded from the corresponding rule quality evaluation framework.

  13. Introduction to Artificial Neural Networks

    Larsen, Jan


    This note gives an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  14. Sistem Evaluasi Kelayakan Mahasiswa Magang Menggunakan Elman Recurrent Neural Network (Evaluation System of Internship Student Eligibility Using an Elman Recurrent Neural Network)

    Agus Aan Jiwa Permana; Widodo Prijodiprodjo


    Abstract: Artificial neural networks (ANN) can be used to solve particular problems such as prediction, classification, data processing, and robotics. On that basis, this study applies an ANN to handle problems currently faced in the internship programme, as part of efforts to improve students' competence and experience and to train their soft skills. The system that was developed can be used to evaluate students' eligibility for the internship progr...

  15. Compressing Convolutional Neural Networks

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin


    Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected layers...

  16. Artificial neural network modelling

    Samarasinghe, Sandhya


    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  17. Critical branching neural networks.

    Kello, Christopher T


    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates observed scaling laws as pervasive to neural and behavioral activity. These scaling laws are related to neural and cognitive functions, in that critical branching is shown to yield spiking activity with maximal memory and encoding capacities when analyzed using reservoir computing techniques. The model is also shown to account for findings of pervasive 1/f scaling in speech and cued response behaviors that are difficult to explain by isolable causes. Issues and questions raised by the model and its results are discussed from the perspectives of physics, neuroscience, computer and information sciences, and psychological and cognitive sciences.

  18. Generalized Adaptive Artificial Neural Networks

    Tawel, Raoul


    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  19. Quantum Neural Networks

    Gupta, S; Gupta, Sanjay


    This paper initiates the study of quantum computing within the constraints of using a polylogarithmic ($O(\log^k n)$, $k \geq 1$) number of qubits and a polylogarithmic number of computation steps. The current research in the literature has focussed on using a polynomial number of qubits. A new mathematical model of computation called Quantum Neural Networks (QNNs) is defined, building on Deutsch's model of quantum computational network. The model introduces a nonlinear and irreversible gate, similar to the speculative operator defined by Abrams and Lloyd. The precise dynamics of this operator are defined and while giving examples in which nonlinear Schrödinger's equations are applied, we speculate on its possible implementation. The many practical problems associated with the current model of quantum computing are alleviated in the new model. It is shown that QNNs of logarithmic size and constant depth have the same computational power as threshold circuits, which are used for modeling neural network...

  20. Interval probabilistic neural network.

    Kowalski, Piotr A; Kulczycki, Piotr


    Automated classification systems have allowed for the rapid development of exploratory data analysis. Such systems reduce the need for human intervention in obtaining the analysis results, especially when inaccurate information is under consideration. The aim of this paper is to present a novel neural network approach for use in classifying interval information. The presented neural methodology is a generalization of the probabilistic neural network for interval data processing. The simple structure of this neural classification algorithm makes it applicable for research purposes. The procedure is based on the Bayes approach, ensuring minimal potential losses with regard to classification errors. In this article, the topological structure of the network and the learning process are described in detail. The correctness of the proposed procedure has been verified by way of numerical tests, including examples with both synthetic data and benchmark instances. The results of numerical verification, carried out for different shapes of data sets, as well as a comparative analysis with other methods of similar conditioning, have validated both the concept presented here and its positive features.

  1. Artificial Neural Network

    Kapil Nahar


    Full Text Available An artificial neural network is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example.

  2. Neural networks for triggering

    Denby, B. (Fermi National Accelerator Lab., Batavia, IL (USA)); Campbell, M. (Michigan Univ., Ann Arbor, MI (USA)); Bedeschi, F. (Istituto Nazionale di Fisica Nucleare, Pisa (Italy)); Chriss, N.; Bowers, C. (Chicago Univ., IL (USA)); Nesti, F. (Scuola Normale Superiore, Pisa (Italy))


    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.


  4. Application of feedback connection artificial neural network to seismic data filtering

    Djarfour, Noureddine; Baddari, Kamel; Mihoubi, Abdelhafid; Ferahtia, Jalal; 10.1016/j.crte.2008.03.003


    The Elman artificial neural network (ANN) (a feedback-connection network) was used for seismic data filtering. The recurrent connection that characterizes this network offers the advantage of storing values from the previous time step for use in the current time step. The proposed structure has the advantage of simple training by a back-propagation (steepest descent) algorithm. Several trials were run on synthetic seismic data (with 10% and 50% random and Gaussian noise) and on real seismic data, using 10 to 30 neurons and a minimum of 60 neurons in the hidden layer, respectively. An iteration count of up to 4000 together with stopping criteria was used to obtain satisfactory performance. Application of such networks to real data shows that the filtering of the seismic section was effective. An adequate cross-validation test was done to ensure the performance of the network on new data sets.


  5. Voltage Compensation Using Artificial Neural Network: A Case Study of Rumuola Distribution Network

    The artificial neural network controller is engaged in controlling the dynamic voltage ...

  6. Trimaran Resistance Artificial Neural Network


    11th International Conference on Fast Sea Transportation, FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network. Richard... Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to ...

  7. [Artificial neural networks in Neurosciences].

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María


    This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease of neurotransmitters on the behaviour of old people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network and its operation inspired by the nervous system and the way the inputs are coded and the process of orthogonalization of patterns.

  8. via dynamic neural networks

    J. Reyes-Reyes


    Full Text Available In this paper, an adaptive technique is suggested to provide the passivity property for a class of partially known SISO nonlinear systems. A simple Dynamic Neural Network (DNN), containing only two neurons and without any hidden layers, is used to identify the unknown nonlinear system. By means of a Lyapunov-like analysis, a new learning law for this DNN, guaranteeing both successful identification and passivation effects, is derived. Based on this adaptive DNN model, an adaptive feedback controller, serving a wide class of nonlinear systems with an a priori incomplete model description, is designed. Two typical examples illustrate the effectiveness of the suggested approach.

  9. Analysis of neural networks

    Heiden, Uwe


    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover, there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is written...

  10. Neural Networks for Optimal Control

    Sørensen, O.


    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  12. Neural networks in astronomy.

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo


    In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread also in the astronomical community which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases which is foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects is, however, posing unprecedented data mining and visualization problems which will find a rather natural and user-friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and therefore will be structured as follows: after giving a short introduction to the subject, we shall summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  13. Logic Mining Using Neural Networks

    Sathasivam, Saratha


    Knowledge could be gained from experts, specialists in the area of interest, or it can be gained by induction from sets of data. Automatic induction of knowledge from data sets, usually stored in large databases, is called data mining. Data mining methods are important in the management of complex systems. There are many technologies available to data mining practitioners, including Artificial Neural Networks, Regression, and Decision Trees. Neural networks have been successfully applied in a wide range of supervised and unsupervised learning applications. Neural network methods are not commonly used for data mining tasks, because they often produce incomprehensible models and require long training times. One way in which the collective properties of a neural network may be used to implement a computational task is by way of the concept of energy minimization. The Hopfield network is a well-known example of such an approach. The Hopfield network is useful as a content-addressable memory or an analog computer for s...

  14. Neural Networks in Control Applications

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  15. Medical diagnosis using neural network

    Kamruzzaman, S M; Siddiquee, Abu Bakar; Mazumder, Md Ehsanul Hoque


    This research searches for alternatives for the resolution of complex medical diagnosis where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm with backpropagation offers an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to obtain an optimal size of the neural network. The MFNNCA was tested on several benchmark classification problems including cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce optimal neural networ...
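
    As a hedged illustration of the constructive idea above (growing the single hidden layer one unit at a time until accuracy stops improving), the sketch below retrains a small scikit-learn MLP for each candidate size. The retrain-from-scratch loop, the breast-cancer stand-in dataset and the stopping tolerance are simplifications for illustration, not the published MFNNCA.

        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        # Grow the hidden layer one unit at a time; keep the smallest size whose
        # validation accuracy no longer improves by a small tolerance.
        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

        best_acc, best_size = 0.0, None
        for hidden in range(1, 21):
            net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                                random_state=0)
            net.fit(X_tr, y_tr)
            acc = net.score(X_va, y_va)
            if acc > best_acc + 1e-3:
                best_acc, best_size = acc, hidden
            else:
                break                      # adding more units stopped helping
        print(best_size, best_acc)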

  16. Artificial Neural Network Analysis System


    Contract No. DASG60-00-M-0201. Purchase request no.: Foot in the Door-01. Title: Artificial Neural Network Analysis System. Company: Atlantic... Author: Powell, Bruce C. Report type: N/A. Dates covered: 28-10-2000 to 27-02-2001.

  17. Modular, Hierarchical Learning By Artificial Neural Networks

    Baldi, Pierre F.; Toomarian, Nikzad


    A modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks that are more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  18. Neural networks and statistical learning

    Du, Ke-Lin


    Providing a broad yet in-depth introduction to neural networks and machine learning in a statistical framework, this book is a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered, with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  19. Neural Networks in Control Applications

    Sørensen, O.

    examined, and it appears that considering 'normal' neural network models with, say, 500 samples, the problem of over-fitting is negligible, and therefore it is not taken into consideration afterwards. Numerous model types, often met in control applications, are implemented as neural network models....... - Control concepts including parameter estimation - Control concepts including inverse modelling - Control concepts including optimal control For each of the three groups, different control concepts and specific training methods are described in detail. Further, all control concepts are tested on the same......The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...

  20. The holographic neural network: Performance comparison with other neural networks

    Klepko, Robert


    The artificial neural network shows promise for use in recognition of high resolution radar images of ships. The holographic neural network (HNN) promises a very large data storage capacity and excellent generalization capability, both of which can be achieved with only a few learning trials, unlike most neural networks which require on the order of thousands of learning trials. The HNN is specially designed for pattern association storage, and mathematically realizes the storage and retrieval mechanisms of holograms. The pattern recognition capability of the HNN was studied, and its performance was compared with five other commonly used neural networks: the Adaline, Hamming, bidirectional associative memory, recirculation, and back propagation networks. The patterns used for testing represented artificial high resolution radar images of ships, and appear as a two dimensional topology of peaks with various amplitudes. The performance comparisons showed that the HNN does not perform as well as the other neural networks when using the same test data. However, modification of the data to make it appear more Gaussian distributed, improved the performance of the network. The HNN performs best if the data is completely Gaussian distributed.

  1. Neural Network Communications Signal Processing


    Technical Information Report for the Neural Network Communications Signal Processing Program, CDRL A003, 31 March 1993. Software Development Plan for... track changing jamming conditions to provide the decoder with the best log-likelihood ratio metrics at a given time. As part of our development plan we... Artificial Neural Networks (ICANN-91), Volume 2, June 24-28, 1991, pp. 1677-1680. Kohonen, Teuvo; Raivio, Kimmo; Simula, Olli; Venta, Olli; Henriksson

  2. What are artificial neural networks?

    Krogh, Anders


    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: 2008-Feb

  3. Design on the HCB Based on IGCT and Neural Network Current Detection

    Yue Feng


    Full Text Available Nowadays, the traditional circuit breaker, with its slow action and poor reliability, cannot meet the requirements of large grid interconnection and flexible AC transmission. To address these shortcomings of the traditional circuit breaker, an idea is put forward in which the traditional mechanical circuit breaker is combined with a power electronic switch (IGCT) to build a new type of hybrid circuit breaker device (Hybrid Circuit Breaker, shortened to HCB). Based on natural converter circuit principles, when a grid line fault occurs, the novel device adopts an Elman neural network to detect the short-circuit fault current and can disconnect quickly thanks to the rapidity of the IGCT, ensuring the safety of the power grid and improving the switching speed and service life of the mechanical switch. This has very important significance for fast switching in power systems.

  4. VLSI implementation of neural networks.

    Wilamowski, B M; Binfet, J; Kaynak, M O


    Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and hard to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are also some problems that have to be solved before the networks can be implemented on VLSI chips. First, an approximation function needs to be developed because CMOS neural networks have an activation function different from any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 µm technology. Using adequate approximation functions solved the problem of the activation function. With this approach, trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, errors were increased by an order of magnitude. However, even though the errors were enlarged, the results obtained from neural network hardware implementations were superior to the results obtained with the fuzzy system approach.
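
    A rough numerical illustration of the quantization effect mentioned above (not the chip design itself): the weights of a small trained approximator are rounded to a few discrete levels, mimicking the discrete transistor geometries, and the approximation error grows. The radial-basis model and the coarse quantization step are assumptions made for the sketch.

        import numpy as np

        # Fit a small RBF approximator, then quantise its weights to coarse levels.
        x = np.linspace(-1, 1, 200)
        target = np.sin(np.pi * x)
        centers = np.linspace(-1, 1, 12)
        Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.05)
        w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

        def rms(e):
            return np.sqrt(np.mean(e ** 2))

        step = np.max(np.abs(w)) / 4          # roughly 3-bit weight quantisation
        w_q = np.round(w / step) * step
        print(rms(Phi @ w - target), rms(Phi @ w_q - target))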

  5. Complex-Valued Neural Networks

    Hirose, Akira


    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  6. Antenna analysis using neural networks

    Smith, William T.


    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern
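
    A hedged sketch of the mapping described above (pattern samples in, element excitations out). Here the training pairs come from random complex excitations of a 20-element, half-wavelength-spaced array rather than from Woodward-Lawson synthesis, and the network size is arbitrary; only the 41-input/40-output shape follows the text.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Random excitations of a 20-element array and their far-field magnitudes.
        rng = np.random.default_rng(1)
        N, n_angles = 20, 41
        theta = np.linspace(0.0, np.pi, n_angles)
        n_idx = np.arange(N)

        def pattern(excite):                 # |AF(theta)| for half-wavelength spacing
            steer = np.exp(1j * np.pi * np.outer(np.cos(theta), n_idx))
            return np.abs(steer @ excite)

        excitations = rng.normal(size=(500, N)) + 1j * rng.normal(size=(500, N))
        patterns = np.stack([pattern(a) for a in excitations])
        targets = np.hstack([excitations.real, excitations.imag])   # 20 + 20 outputs

        net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)
        net.fit(patterns, targets)           # 41 pattern samples -> 40 excitations

    Once trained, feeding a desired pattern to net.predict gives the 40 real numbers that are reassembled into 20 complex element excitations.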

  7. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu


    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It therefore naturally makes one ponder how to generalize the first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  8. Spatial interpolation and radiological mapping of ambient gamma dose rate by using artificial neural networks and fuzzy logic methods.

    Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkın, Halim; Çevik, Uğur


    The aim of this study was to determine spatial risk dispersion of ambient gamma dose rate (AGDR) by using both artificial neural network (ANN) and fuzzy logic (FL) methods, compare the performances of methods, make dose estimations for intermediate stations with no previous measurements and create dose rate risk maps of the study area. In order to determine the dose distribution by using artificial neural networks, two main networks and five different network structures were used; feed forward ANN; Multi-layer perceptron (MLP), Radial basis functional neural network (RBFNN), Quantile regression neural network (QRNN) and recurrent ANN; Jordan networks (JN), Elman networks (EN). In the evaluation of estimation performance obtained for the test data, all models appear to give similar results. According to the cross-validation results obtained for explaining AGDR distribution, Pearson's r coefficients were calculated as 0.94, 0.91, 0.89, 0.91, 0.91 and 0.92 and RMSE values were calculated as 34.78, 43.28, 63.92, 44.86, 46.77 and 37.92 for MLP, RBFNN, QRNN, JN, EN and FL, respectively. In addition, spatial risk maps showing distributions of AGDR of the study area were created by all models and results were compared with geological, topological and soil structure. Copyright © 2017 Elsevier Ltd. All rights reserved.
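
    The two comparison statistics quoted above are straightforward to reproduce; the sketch below computes Pearson's r and RMSE for a hypothetical pair of measured and predicted dose-rate vectors (the numbers are made up, not data from the study).

        import numpy as np

        # Cross-validation comparison metrics: Pearson's r and RMSE.
        def pearson_r(y_true, y_pred):
            return np.corrcoef(y_true, y_pred)[0, 1]

        def rmse(y_true, y_pred):
            return np.sqrt(np.mean((y_true - y_pred) ** 2))

        measured = np.array([120.0, 95.0, 140.0, 80.0, 110.0])    # illustrative values
        predicted = np.array([115.0, 101.0, 133.0, 86.0, 108.0])
        print(pearson_r(measured, predicted), rmse(measured, predicted))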

  9. Multigradient for Neural Networks for Equalizers

    Chulhee Lee


    Full Text Available Recently, a new training algorithm, multigradient, has been published for neural networks, and it is reported that the multigradient outperforms backpropagation when neural networks are used as a classifier. When neural networks are used as an equalizer in communications, they can be viewed as a classifier. In this paper, we apply the multigradient algorithm to train the neural networks that are used as equalizers. Experiments show that the neural networks trained using the multigradient noticeably outperform the neural networks trained by backpropagation.

  10. Relations Between Wavelet Network and Feedforward Neural Network

    刘志刚; 何正友; 钱清泉


    A comparison of construction forms and base functions is made between the feedforward neural network and the wavelet network. The relations between them are studied by constructing wavelet functions or dilation functions in the wavelet network from different activation functions in the feedforward neural network. It is concluded that certain wavelet functions are equal to linear combinations of several neurons in a feedforward neural network.

  11. Plant Growth Models Using Artificial Neural Networks

    Bubenheim, David


    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  12. Ocean wave forecasting using recurrent neural networks

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  13. Generalization performance of regularized neural network models

    Larsen, Jan; Hansen, Lars Kai


    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  14. Improved transformer protection using probabilistic neural network ...


    This article presents a novel technique to distinguish between magnetizing inrush ... Protective relaying, Probabilistic neural network, Active power relays, Power ... Forward Neural Network (MFFNN) with back-propagation learning technique.

  15. Neural Network for Sparse Reconstruction

    Qingfa Li


    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a kind of sparse reconstruction problem. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and the element in the set-valued map of the differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, some numerical experiments show the effectiveness of the proposed network in this paper.
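
    A hedged sketch of the underlying idea (not the authors' network): a smoothed l1 penalty, sqrt(x^2 + mu^2) ~ |x|, makes the sparse-reconstruction objective differentiable, and a projected gradient iteration is then applied. The random test problem, box projection and step size are illustrative assumptions.

        import numpy as np

        # Projected gradient on a smoothed l1-regularised least-squares objective.
        rng = np.random.default_rng(2)
        m, n, k = 40, 100, 5
        A = rng.normal(size=(m, n))
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        b = A @ x_true

        lam, mu = 0.05, 1e-3
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(n)
        for _ in range(2000):
            grad = A.T @ (A @ x - b) + lam * x / np.sqrt(x ** 2 + mu ** 2)
            x = np.clip(x - step * grad, -10.0, 10.0)     # projection onto a box
        print(np.linalg.norm(x - x_true))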

  16. The Physics of Neural Networks

    Gutfreund, Hanoch; Toulouse, Gerard

    The following sections are included: * Introduction * Historical Perspective * Why Statistical Physics? * Purpose and Outline of the Paper * Basic Elements of Neural Network Models * The Biological Neuron * From the Biological to the Formal Neuron * The Formal Neuron * Network Architecture * Network Dynamics * Basic Functions of Neural Network Models * Associative Memory * Learning * Categorization * Generalization * Optimization * The Hopfield Model * Solution of the Model * The Merit of the Hopfield Model * Beyond the Standard Model * The Gardner Approach * A Microcanonical Formulation * The Case of Biased Patterns * A Canonical Formulation * Constraints on the Synaptic Weights * Learning with Errors * Learning with Noise * Hierarchically Correlated Data and Categorization * Hierarchical Data Structures * Storage of Hierarchical Data Structures * Categorization * Generalization * Learning a Classification Task * The Reference Perceptron Problem * The Contiguity Problem * Discussion - Issues of Relevance * The Notion of Attractors and Modes of Computation * The Nature of Attractors * Temporal versus Spatial Coding * Acknowledgements * References

  17. Performance evaluation of neural network and linear predictors for near-lossless compression of EEG signals.

    Sriraam, N; Eswaran, C


    This paper presents a comparison of the performances of neural network and linear predictors for near-lossless compression of EEG signals. Three neural network predictors, namely, single-layer perceptron (SLP), multilayer perceptron (MLP), and Elman network (EN), and two linear predictors, namely, autoregressive model (AR) and finite-impulse response filter (FIR), are used. For all the predictors, uniform quantization is applied on the residue signals obtained as the difference between the original and the predicted values. The maximum allowable reconstruction error delta is varied to determine the theoretical bound delta_0 for near-lossless compression and the corresponding bit rate r_p. It is shown that among all the predictors, the SLP yields the best results in achieving the lowest values for delta_0 and r_p. The corresponding values of the fidelity parameters, namely, percent root-mean-square difference, peak SNR and cross correlation, are also determined. A compression efficiency of 82.8% is achieved using the SLP with a near-lossless bound delta_0 = 3, with the diagnostic quality of the reconstructed EEG signal preserved. Thus, the proposed near-lossless scheme facilitates transmission of real-time as well as offline EEG signals over a network to a remote interpretation center economically, with less bandwidth utilization compared to other known lossless and near-lossless schemes.
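
    A hedged sketch of the near-lossless mechanism described above: a predictor produces a residue that is uniformly quantized with step 2*delta + 1, so the reconstruction error never exceeds delta. A trivial previous-sample predictor stands in here for the SLP/MLP/Elman predictors, and the synthetic integer signal is not real EEG.

        import numpy as np

        def near_lossless_encode(signal, delta):
            # Closed-loop predictive coding with uniform residue quantisation.
            signal = np.asarray(signal, dtype=np.int64)
            codes, recon, prev = [], [], 0
            for s in signal:
                residue = s - prev                       # prediction residue
                q = int(np.round(residue / (2 * delta + 1)))
                codes.append(q)                          # would go to the entropy coder
                prev = prev + q * (2 * delta + 1)        # decoder-side reconstruction
                recon.append(prev)
            return np.array(codes), np.array(recon)

        eeg = np.random.default_rng(3).integers(-200, 200, size=1000)  # fake samples
        codes, recon = near_lossless_encode(eeg, delta=3)
        assert np.max(np.abs(recon - eeg)) <= 3

    Setting delta = 0 recovers a lossless scheme; larger delta trades fidelity for a lower bit rate of the codes passed to the entropy coder.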

  18. Neural networks and applications tutorial

    Guyon, I.


    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real-world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  19. Meta-Learning Evolutionary Artificial Neural Networks

    Abraham, Ajith


    In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial Neural Network), an automatic computational framework for the adaptive optimization of artificial neural networks wherein the neural network architecture, activation function, connection weights, learning algorithm and its parameters are adapted according to the problem. We explored the performance of MLEANN and conventionally designed artificial neural networks for function approximation problems. To evaluate the compara...

  20. Building a Chaotic Proved Neural Network

    Bahi, Jacques M; Salomon, Michel


    Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit chaotic behavior.

  1. Comprehensive Development and Comparison of two Feed Forward Back Propagation Neural Networks for Forward and Reverse Modeling of Aluminum Alloy AA5083; H111 TIG Welding Process



    Full Text Available The development of an intelligent system for establishing the relationship between input parameters and responses, utilizing both reverse and forward modeling with artificial neural networks, is the main objective of the present research work. Prediction of quality characteristics such as front width, back width, front height and back height of the weld bead geometry in the Tungsten Inert Gas welding process of AA5083; H111 aluminum alloy is the aim of forward modeling, from a known set of process parameters such as current, %balance, welding speed, arc gap, gas flow rate, and frequency. Reverse modeling meets the industrial requirements of automatic welding by predicting the recommended process parameters from the weld bead geometry characteristics. A comprehensive approach for the development of two back-propagation networks, viz. feed forward back propagation (FFBP) and Elman back propagation (EBP) neural networks, is adopted. 212 face-centered central composite design based experimental data are utilized for the development of both supervised learning networks with a batch-mode training approach. A comparison of the performance of the FFBP and EBP neural networks is made with that of stepwise multiple regression statistical modeling. Analysis of the results showed that both neural network models outperformed the statistical approach in making better predictions, and the models are efficient in selecting parameters effectively for the desired responses. The FFBP performance was found to be marginally better than that of the EBP neural network. Also, the forward modeling performance was better than that of reverse modeling in both neural networks.

  2. Move Ordering using Neural Networks

    Kocsis, L.; Uiterwijk, J.; Van Den Herik, J.


    © Springer-Verlag Berlin Heidelberg 2001. The efficiency of alpha-beta search algorithms heavily depends on the order in which the moves are examined. This paper focuses on using neural networks to estimate the likelihood of a move being the best in a certain position. The moves considered more like

  3. Neural Network based Consumption Forecasting

    Madsen, Per Printz


    This paper describes a Neural Network based method for consumption forecasting. This work has been financed by the ENCOURAGE project. The aim of the ENCOURAGE project is to develop embedded intelligence and integration technologies that will directly optimize energy use in buildings and enable...

  4. Spin glasses and neural networks

    Parga, N. (Comision Nacional de Energia Atomica, San Carlos de Bariloche (Argentina). Centro Atomico Bariloche; Universidad Nacional de Cuyo, San Carlos de Bariloche (Argentina). Inst. Balseiro)


    The mean-field theory of spin glass models has been used as a prototype of systems with frustration and disorder. Among the most interesting related systems are models of associative memories. In these lectures we review the main concepts developed to solve the Sherrington-Kirkpatrick model and its application to neural networks. (orig.).

  5. Artificial neural networks in medicine

    Keller, P.E.


    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  6. Competition Based Neural Networks for Assignment Problems

    李涛; LuyuanFang


    Competition based neural networks have been used to solve the generalized assignment problem and the quadratic assignment problem. Both problems are very difficult and are ε-approximation complete. The neural network approach has yielded highly competitive performance for the generalized assignment problem and good performance for the quadratic assignment problem. These neural networks are guaranteed to produce feasible solutions.

  7. Analysis of neural networks through base functions

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  8. Simplified LQG Control with Neural Networks

    Sørensen, O.


    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalman predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  9. Analysis of Neural Networks through Base Functions

    Zwaag, van der B.J.; Slump, C.H.; Spaanenburg, L.


    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  10. Streamflow predictions in Alpine Catchments by using artificial neural networks. Application in the Alto Genil Basin (South Spain)

    Jimeno-Saez, Patricia; Pegalajar-Cuellar, Manuel; Pulido-Velazquez, David


    This study explores techniques for modeling water inflow series, focusing on techniques for short-term streamflow prediction. An appropriate estimation of streamflow in advance is necessary to anticipate measures to mitigate the impacts and risks related to drought conditions. This study analyzes the prediction of future streamflow in nineteen subbasins of the Alto Genil basin in Granada (southeast Spain). Some of these basins' streamflows have an important snowmelt component because part of the system is located in the Sierra Nevada mountain range, the highest mountains of continental Spain. Streamflow prediction models have been calibrated using time series of historical natural streamflows. The available streamflow measurements have been downloaded from several public data sources. These original data have been preprocessed to return them to the natural regime, removing anthropic effects. The missing values in the horizon period adopted to calibrate the prediction models have been estimated by using a Temez hydrological balance model, approximating the snowmelt processes with a hybrid degree-day method. In the experimentation, ARIMA models are used as the baseline method, and Elman recurrent neural networks and nonlinear autoregressive (NAR) neural networks are used to test whether the prediction accuracy can be improved. After performing multiple experiments with these models, non-parametric statistical tests are applied to select the best of these techniques. In the experiments carried out with ARIMA, it is concluded that ARIMA models are not adequate in this case study due to the existence of a nonlinear component that cannot be modeled. Secondly, multi-start training is performed with each ELMAN and NAR network structure to deal with the local optimum problem, since in neural network training there is a very strong dependence on the initial weights of the network. The obtained results suggest that both neural networks are efficient for the short
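
    A hedged sketch of the multi-start strategy mentioned above: the same nonlinear autoregressive network is trained from several random initializations and the run with the lowest validation error is kept. The synthetic flow series, lag order and network size are illustrative assumptions, not the study's data or configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic seasonal "streamflow" series and a lagged (NAR-style) dataset.
        rng = np.random.default_rng(4)
        flow = 50 + 20 * np.sin(np.arange(300) / 12.0) + rng.normal(0, 2, 300)

        lags = 4
        X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])
        y = flow[lags:]
        split = int(0.8 * len(y))
        X_tr, y_tr, X_va, y_va = X[:split], y[:split], X[split:], y[split:]

        best_err, best_net = np.inf, None
        for seed in range(10):                  # multi-start over initial weights
            net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                               random_state=seed)
            net.fit(X_tr, y_tr)
            err = np.mean((net.predict(X_va) - y_va) ** 2)
            if err < best_err:
                best_err, best_net = err, net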

  11. Direct and inverse neural networks modelling applied to study the influence of the gas diffusion layer properties on PBI-based PEM fuel cells

    Lobato, Justo; Canizares, Pablo; Rodrigo, Manuel A.; Linares, Jose J. [Chemical Engineering Department, University of Castilla-La Mancha, Campus Universitario s/n, 13004 Ciudad Real (Spain)]; Piuleac, Ciprian-George; Curteanu, Silvia [Faculty of Chemical Engineering and Environmental Protection, Department of Chemical Engineering, 'Gh. Asachi' Technical University Iasi, Bd. D. Mangeron, No. 71A, 700050 Iasi (Romania)]


    This article shows the application of a very useful mathematical tool, artificial neural networks, to predict fuel cell results (the value of the tortuosity and the cell voltage, at a given current density, and therefore the power) on the basis of several properties that define a gas diffusion layer: Teflon content, air permeability, porosity, mean pore size, and hydrophobicity level. Four neural network types (multilayer perceptron, generalized feedforward network, modular neural network, and Jordan-Elman neural network) have been applied, with a good fit between the predicted and the experimental values in the polarization curves. A simple feedforward neural network with one hidden layer proved to be an accurate model with good generalization capability (error about 1% in the validation phase). A procedure based on inverse neural network modelling was able to determine, with small errors, the initial conditions leading to imposed values for characteristics of the fuel cell. In addition, the use of this tool has proved to be very attractive in order to predict the cell performance, and more interestingly, the influence of the properties of the gas diffusion layer on the cell performance, allowing possible enhancements of this material by changing some of its properties. (author)

  12. Quality-on-Demand Compression of EEG Signals for Telemedicine Applications Using Neural Network Predictors

    N. Sriraam


    Full Text Available A telemedicine system using communication and information technology to deliver medical signals such as ECG and EEG for long-distance medical services has become a reality. In either urgent treatment or ordinary healthcare, it is necessary to compress these signals for efficient use of bandwidth. This paper discusses quality-on-demand compression of EEG signals using neural network predictors for telemedicine applications. The objective is to obtain greater compression gains at a low bit rate while preserving the clinical information content. A two-stage compression scheme with a predictor and an entropy encoder is used. The residue signals obtained after prediction are first thresholded using various threshold levels, further quantized, and then encoded using an arithmetic encoder. Three neural network models, single-layer and multi-layer perceptrons and the Elman network, are used and the results are compared with linear predictors such as FIR filters and AR modeling. The fidelity of the reconstructed EEG signal is assessed quantitatively using parameters such as PRD, SNR, cross correlation and power spectral density. It is found from the results that the quality of the reconstructed signal is preserved at a low PRD, thereby yielding better compression results compared to those obtained using a lossless scheme.
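
    The fidelity measures quoted above (PRD, SNR and cross correlation) can be computed as below; the original and reconstructed signals here are synthetic stand-ins, not EEG records from the study.

        import numpy as np

        def prd(orig, recon):                 # percent root-mean-square difference
            return 100.0 * np.sqrt(np.sum((orig - recon) ** 2) / np.sum(orig ** 2))

        def snr_db(orig, recon):
            return 10.0 * np.log10(np.sum(orig ** 2) / np.sum((orig - recon) ** 2))

        def cross_corr(orig, recon):
            return np.corrcoef(orig, recon)[0, 1]

        rng = np.random.default_rng(5)
        orig = rng.normal(size=2000)                     # stand-in for an EEG segment
        recon = orig + rng.normal(scale=0.05, size=2000)
        print(prd(orig, recon), snr_db(orig, recon), cross_corr(orig, recon))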

  13. Quantum computing in neural networks

    Gralewicz, P


    According to the statistical interpretation of quantum theory, quantum computers form a distinguished class of probabilistic machines (PMs) by encoding n qubits in 2^n pbits. This raises the possibility of large-scale quantum computing using PMs, especially with neural networks which have the innate capability for probabilistic information processing. Restricting ourselves to a particular model, we construct and numerically examine the performance of neural circuits implementing universal quantum gates. A discussion on the physiological plausibility of the proposed coding scheme is also provided.

  14. Discontinuities in recurrent neural networks.

    Gavaldá, R; Siegelmann, H T


    This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous, but they boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model, when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN that are known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.

  15. Fuzzy logic systems are equivalent to feedforward neural networks



    Fuzzy logic systems and feedforward neural networks are equivalent in essence. First, interpolation representations of fuzzy logic systems are introduced and several important conclusions are given. Three important kinds of neural networks are then defined, i.e. linear neural networks, rectangle wave neural networks and nonlinear neural networks, and it is proved that nonlinear neural networks can be represented by rectangle wave neural networks. Based on the results mentioned above, the equivalence between fuzzy logic systems and feedforward neural networks is proved, which will be very useful for theoretical research or applications of fuzzy logic systems or neural networks by means of combining fuzzy logic systems with neural networks.

  16. Fiber optic Adaline neural networks

    Ghosh, Anjan K.; Trepka, Jim; Paparao, Palacharla


    Optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators has been discussed recently. We describe the design of a single layer fiber optic Adaline neural network which can be used as a bit pattern classifier. In our realization we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The new optical neural network described in this paper is designed for optical processing of guided lightwave signals, not electronic signals. We analyzed the convergence or learning characteristics of the optically implemented Adaline in the presence of errors in the hardware, and we studied methods for improving the convergence rate of the Adaline.
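
    A hedged numerical sketch of an Adaline used as a bit-pattern classifier, trained with the LMS (Widrow-Hoff) rule; the optical tapped-delay-line realization described above is replaced by plain NumPy, and the 8-bit patterns and reference weights are made up for illustration.

        import numpy as np

        # Adaline: single linear unit trained with the LMS (Widrow-Hoff) rule.
        rng = np.random.default_rng(6)
        patterns = rng.integers(0, 2, size=(200, 8)) * 2 - 1      # +/-1 bit patterns
        w_true = np.array([1.0, -1.0, 2.0, 0.5, -2.0, 1.0, 0.3, -0.5])
        labels = np.where(patterns @ w_true >= 0, 1.0, -1.0)

        w, b, eta = np.zeros(8), 0.0, 0.01
        for _ in range(50):
            for x, d in zip(patterns, labels):
                y = w @ x + b                 # linear output before thresholding
                w += eta * (d - y) * x        # LMS weight update
                b += eta * (d - y)
        accuracy = np.mean(np.sign(patterns @ w + b) == labels)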

  17. Neural Networks Methodology and Applications

    Dreyfus, Gérard


    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  18. Neural Networks for Speech Application.


    ...operation and neuroscience theories of how neurons process information in the brain. Early studies by McCulloch and Pitts during the forties led to... developed the commercially available Mark III and Mark IV neurocomputers that model neural networks and run... established by McCulloch and Pitts. References cited include Lashley, K., Brain Mechanisms and Intelligence (1929), and McCulloch, W. and Pitts, W., 'A Logical Calculus of the...

  19. Analog electronic neural network circuits

    Graf, H.P.; Jackel, L.D. (AT and T Bell Labs., Holmdel, NJ (USA))


    The large interconnectivity and moderate precision required in neural network models present new opportunities for analog computing. This paper discusses analog circuits for a variety of problems such as pattern matching, optimization, and learning. Most of the circuits built so far are relatively small, exploratory designs. The most mature circuits are those for template matching. Chips performing this function are now being applied to pattern recognition problems.

  20. The LILARTI neural network system

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.


    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system, in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  1. Process Neural Networks Theory and Applications

    He, Xingui


    "Process Neural Networks - Theory and Applications" proposes the concept and model of a process neural network for the first time, showing how it expands the mapping relationship between the input and output of traditional neural networks, and enhancing the expression capability for practical problems, with broad applicability to solving problems relating to process in practice. Some theoretical problems such as continuity, functional approximation capability, and computing capability, are strictly proved. The application methods, network construction principles, and optimization alg

  2. Neural network subtyping of depression.

    Florio, T M; Parker, G; Austin, M P; Hickie, I; Mitchell, P; Wilhelm, K


    To examine the applicability of a neural network classification strategy for assessing the independent contribution of psychomotor disturbance (PMD) and endogeneity symptoms to the DSM-III-R definition of melancholia. We studied 407 depressed patients with a clinical dataset comprising 17 endogeneity symptoms and the 18-item CORE measure of behaviourally rated PMD. A multilayer perceptron neural network was used to fit non-linear models of varying complexity. A linear discriminant function analysis was also used to generate a model for comparison with the non-linear models. Models (linear and non-linear) using PMD items only and endogeneity symptoms only had similar rates of successful classification, while non-linear models combining both PMD and symptom scores achieved the best classifications. Our current non-linear model was superior to a linear analysis, a finding which may have wider application to psychiatric classification. Our non-linear analysis of depressive subtypes supports the binary view that melancholic and non-melancholic depression are separate clinical disorders rather than different forms of the same entity. This study illustrates how non-linear modelling with neural networks is a potentially fruitful approach to the study of the diagnostic taxonomy of psychiatric disorders and to clinical decision-making.

  3. Novel quantum inspired binary neural network algorithm



    In this paper, a quantum based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding the weights and the separability parameter in a quantum based manner. The quantum computing concept represents the solution probabilistically and gives a large search space in which to find optimal values of the required parameters using a Gaussian random number generator. The neural network structure is formed constructively with three layers: an input layer, a hidden layer and an output layer. This constructive way of deciding the network eliminates unnecessary training of the neural network. A new parameter, the quantum separability parameter (QSP), is introduced here, which finds an optimal separability plane to classify input samples. During learning, it searches for an optimal separability plane. This parameter is taken as the threshold of the neuron for learning of the neural network. The algorithm is tested with three benchmark datasets and produces better results than existing quantum inspired and other classification approaches.

  4. Practical neural network recipies in C++



    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assum

  5. Understanding Neural Networks for Machine Learning using Microsoft Neural Network Algorithm

    Nagesh Ramprasad


    .... In this research, focus is on the Microsoft Neural System Algorithm. The Microsoft Neural System Algorithm is a simple implementation of the adaptable and popular neural networks that are used in the machine learning...

  6. Neural network modeling of emotion

    Levine, Daniel S.


    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.


    Artur Popko


    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is dedicated to investigators in visual pattern recognition, Artificial Neural Networking and related disciplines. The document also describes the MemBrain application environment as a powerful and easy to use neural network editor and simulator supporting ANN.

  8. Salience-Affected Neural Networks

    Remmelzwaal, Leendert A; Ellis, George F R


    We present a simple neural network model which combines a locally-connected feedforward structure, as is traditionally used to model inter-neuron connectivity, with a layer of undifferentiated connections which model the diffuse projections from the human limbic system to the cortex. This new layer makes it possible to model global effects such as salience, at the same time as the local network processes task-specific or local information. This simple combination network displays interactions between salience and regular processing which correspond to known effects in the developing brain, such as enhanced learning as a result of heightened affect. The cortex biases neuronal responses to affect both learning and memory, through the use of diffuse projections from the limbic system to the cortex. Standard ANNs do not model this non-local flow of information represented by the ascending systems, which are a significant feature of the structure of the brain, and although they do allow associational learning with...

  9. Dynamic Analysis of Structures Using Neural Networks

    N. Ahmadi


    Full Text Available In recent years, neural networks have been considered the best candidate for fast approximation with arbitrary accuracy in time-consuming problems. Dynamic analysis of structures under earthquake loading is a time-consuming process. We employed two kinds of neural networks, the Generalized Regression neural network (GR) and the Back-Propagation Wavenet neural network (BPW), for approximating the dynamic time-history response of frame structures. GR is a traditional radial basis function neural network, while BPW is categorized as a wavelet neural network. In BPW, sigmoid activation functions of hidden layer neurons are substituted with wavelets, and weight training is achieved using the Scaled Conjugate Gradient (SCG) algorithm. Comparison of the results of BPW with those of GR in the dynamic analysis of an eight-story steel frame indicates that the accuracy of the properly trained BPW was better than that of GR and therefore, BPW can be efficiently used for approximate dynamic analysis of structures.
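
    A hedged sketch of the GR (generalized regression) network mentioned above: it reduces to a Nadaraya-Watson estimator with Gaussian kernels centred on the training samples. The one-dimensional target and spread value are illustrative, not the frame-structure responses of the study, and the BPW side is omitted.

        import numpy as np

        def grnn_predict(X_train, y_train, X_query, spread=0.1):
            # Gaussian kernel weights between query points and training samples.
            d2 = (X_query[:, None] - X_train[None, :]) ** 2
            w = np.exp(-d2 / (2.0 * spread ** 2))
            return (w @ y_train) / np.sum(w, axis=1)

        x = np.linspace(0, 1, 50)
        y = np.sin(2 * np.pi * x)              # stand-in for a time-history response
        x_query = np.linspace(0, 1, 200)
        y_hat = grnn_predict(x, y, x_query)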

  10. Fast Algorithms for Convolutional Neural Networks

    Lavin, Andrew; Gray, Scott


    Deep convolutional neural networks take GPU days of compute time to train on large data sets. Pedestrian detection for self driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT based convolution is fast for large filters, but state of the art convolutional neural networks use small, 3x3 filters. We ...
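
    A small illustration of the convolution trade-off stated above (not the Winograd algorithm of the paper): a direct sliding-window 3x3 convolution and an FFT-based one produce the same result, but the FFT route pays a transform overhead that only pays off for large filters. Image and kernel sizes are arbitrary.

        import numpy as np

        def direct_conv2d(image, kernel):
            # Valid cross-correlation with an explicit sliding window.
            kh, kw = kernel.shape
            oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
            out = np.zeros((oh, ow))
            for i in range(oh):
                for j in range(ow):
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
            return out

        def fft_conv2d(image, kernel):
            # Convolution with the flipped kernel equals cross-correlation.
            kflip = kernel[::-1, ::-1]
            s = (image.shape[0] + kernel.shape[0] - 1,
                 image.shape[1] + kernel.shape[1] - 1)
            full = np.fft.irfft2(np.fft.rfft2(image, s) * np.fft.rfft2(kflip, s), s)
            return full[kernel.shape[0] - 1:image.shape[0],
                        kernel.shape[1] - 1:image.shape[1]]

        img = np.random.default_rng(7).normal(size=(64, 64))
        k = np.random.default_rng(8).normal(size=(3, 3))
        assert np.allclose(direct_conv2d(img, k), fft_conv2d(img, k))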

  11. Modelling Microwave Devices Using Artificial Neural Networks

    Andrius Katkevičius


    Full Text Available Artificial neural networks (ANN) have recently gained attention as fast and flexible tools for modelling and designing microwave devices. The paper reviews the opportunities to use them for analysis and synthesis tasks. The article focuses on which tasks might be solved using neural networks, what challenges might arise when using artificial neural networks for carrying out tasks on microwave devices, and discusses problem-solving techniques for microwave devices with intermittent characteristics. Article in Lithuanian.

  12. Rule Extraction using Artificial Neural Networks


    Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can...

  13. Adaptive optimization and control using neural networks

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.


    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  14. Forecasting Exchange Rate Using Neural Networks

    Raksaseree, Sukhita


    Artificial neural network models have become increasingly popular among researchers and investors, since many studies have shown that they have superior performance over traditional statistical models. This paper aims to investigate the performance of neural networks in forecasting foreign exchange rates based on the backpropagation algorithm. Forecasts of the Thai Baht against seven currencies are conducted to observe the performance of the neural network models using the performance criteria for both ...

  15. Semantic Interpretation of An Artificial Neural Network


    ARTIFICIAL NEURAL NETWORK .7,’ THESIS Stanley Dale Kinderknecht Captain, USAF 770 DEAT7ET77,’H IR O C 7... ARTIFICIAL NEURAL NETWORK THESIS Stanley Dale Kinderknecht Captain, USAF AFIT/GCS/ENG/95D-07 Approved for public release; distribution unlimited The views...Government. AFIT/GCS/ENG/95D-07 SEMANTIC INTERPRETATION OF AN ARTIFICIAL NEURAL NETWORK THESIS Presented to the Faculty of the School of Engineering of

  16. Feature Weight Tuning for Recursive Neural Networks


    This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, to perform "weight tuning" for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to the higher-level representation. The proposed model can be viewed as incorporating a more powerful c...

  17. Fuzzy neural network theory and application

    Liu, Puyin


    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  18. Neural networks for nuclear spectroscopy

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States)] [and others]


    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
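
    The optimal linear associative memory mentioned above reduces to a least-squares (pseudo-inverse) mapping from whole spectra to isotope fractions. A minimal sketch with synthetic spectra; the library shapes, channel count, and noise level are invented for illustration and are not the detectors or isotopes used in the paper:

        import numpy as np

        rng = np.random.default_rng(2)

        n_channels, n_isotopes = 256, 4
        chan = np.arange(n_channels)

        # Synthetic library: each isotope is a few Gaussian photo-peaks on a continuum.
        def spectrum(peaks):
            s = 0.02 * np.ones(n_channels)
            for center, width, height in peaks:
                s += height * np.exp(-0.5 * ((chan - center) / width) ** 2)
            return s / s.sum()

        library = np.stack([
            spectrum([(40, 3, 1.0), (90, 4, 0.5)]),
            spectrum([(60, 3, 0.8)]),
            spectrum([(120, 5, 1.0), (200, 4, 0.7)]),
            spectrum([(160, 3, 0.9)]),
        ])                                        # (n_isotopes, n_channels)

        # Training set: random mixtures of the library spectra plus a little noise.
        n_train = 500
        fractions = rng.dirichlet(np.ones(n_isotopes), size=n_train)   # targets
        spectra = fractions @ library + rng.normal(0, 1e-3, (n_train, n_channels))

        # OLAM: one linear layer whose weights are the least-squares (pseudo-inverse) solution.
        W = np.linalg.pinv(spectra) @ fractions                        # (channels, isotopes)

        # Identify an unknown mixture from its whole spectrum.
        true_mix = np.array([0.5, 0.1, 0.3, 0.1])
        unknown = true_mix @ library + rng.normal(0, 1e-3, n_channels)
        print("estimated composition:", np.round(unknown @ W, 3))
        print("true composition     :", true_mix)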

  19. Neural Networks for Rapid Design and Analysis

    Sparks, Dean W., Jr.; Maghami, Peiman G.


    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays as inputs, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.

  20. Assessment of Global Voltage Stability Margin through Radial Basis Function Neural Network

    Akash Saxena


    Full Text Available Dynamic operating conditions along with contingencies often present formidable challenges to power engineers. Decisions pertaining to the control strategies taken by the system operators at the energy management centre are based on information about the system's behavior. The application of ANN as a tool for voltage stability assessment is appealing because of its ability to do parallel data processing with high accuracy, fast response, and capability to model dynamic, nonlinear, and noisy data. This paper presents an effective methodology based on the Radial Basis Function Neural Network (RBFN) to predict the Global Voltage Stability Margin (GVSM) for any unseen loading condition of the system. GVSM is used to assess the overall voltage stability status of the power system. A comparative analysis of different topologies of ANN, namely, Feedforward Backprop (FFBP), Cascade Forward Backprop (CFB), Generalized Regression (GR), Layer Recurrent (LR), Nonlinear Autoregressive Exogenous (NARX), Elman Backprop, and Feedforward Distributed Time Delay Network (FFDTDN), is carried out on the basis of their capability to predict the GVSM. The efficacy of RBFN is better than that of the other networks, which is validated by taking the predictions of GVSM at different levels of Additive White Gaussian Noise (AWGN) in the input features. The results obtained from ANNs are validated through the offline Newton-Raphson (N-R) method. The proposed methodology is tested over the IEEE 14-bus, IEEE 30-bus, and IEEE 118-bus test systems.
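
    A minimal sketch of a radial basis function network of the kind favored above: Gaussian hidden units centred on a random subset of training points and a closed-form (ridge) least-squares output layer. The two-input synthetic "loading condition to margin" function below is purely illustrative, since the abstract does not reproduce the GVSM data:

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic "loading condition -> stability margin" data (illustrative only).
        X = rng.uniform(0.0, 1.0, size=(400, 2))
        y = np.exp(-3.0 * X[:, 0]) * np.cos(4.0 * X[:, 1]) + 0.02 * rng.normal(size=400)

        def rbf_design(X, centers, sigma):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma**2))

        # Hidden layer: Gaussian units placed on a subset of the training points.
        centers = X[rng.choice(len(X), size=30, replace=False)]
        sigma = 0.25
        Phi = np.column_stack([rbf_design(X, centers, sigma), np.ones(len(X))])

        # Output layer: regularized least squares (ridge) for the linear weights.
        lam = 1e-4
        w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

        X_test = rng.uniform(0.0, 1.0, size=(100, 2))
        y_test = np.exp(-3.0 * X_test[:, 0]) * np.cos(4.0 * X_test[:, 1])
        Phi_test = np.column_stack([rbf_design(X_test, centers, sigma), np.ones(len(X_test))])
        print("test RMS error:", np.sqrt(np.mean((Phi_test @ w - y_test) ** 2)))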

  1. Systolic implementation of neural networks

    De Groot, A.J.; Parker, S.R.


    The backpropagation algorithm for error gradient calculations in multilayer, feed-forward neural networks is derived in matrix form involving inner and outer products. It is demonstrated that these calculations can be carried out efficiently using systolic processing techniques, particularly using the SPRINT, a 64-element systolic processor developed at Lawrence Livermore National Laboratory. This machine contains one million synapses, and forward-propagates 12 million connections per second, using 100 watts of power. When executing the algorithm, each SPRINT processor performs useful work 97% of the time. The theory and applications are confirmed by some nontrivial examples involving seismic signal recognition. 4 refs., 7 figs.
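
    The matrix form referred to above, with forward passes written as matrix-vector (inner) products and weight gradients accumulated as outer products of layer error signals with layer activations, can be sketched in a few lines. This is a generic two-layer backpropagation pass on a toy XOR task, not the SPRINT-specific derivation from the report:

        import numpy as np

        rng = np.random.default_rng(4)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Two-layer network x -> h -> y trained on the XOR problem.
        H = 8
        W1 = rng.normal(scale=0.5, size=(H, 2)); b1 = np.zeros(H)
        W2 = rng.normal(scale=0.5, size=(1, H)); b2 = np.zeros(1)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        T = np.array([[0], [1], [1], [0]], float)
        eta = 0.5

        for epoch in range(10000):
            for x, t in zip(X, T):
                # Forward pass: inner products.
                h = sigmoid(W1 @ x + b1)
                y = sigmoid(W2 @ h + b2)
                # Backward pass: error signals propagated through W2^T.
                delta2 = (y - t) * y * (1.0 - y)
                delta1 = (W2.T @ delta2) * h * (1.0 - h)
                # Weight gradients: outer products of deltas with activations.
                W2 -= eta * np.outer(delta2, h); b2 -= eta * delta2
                W1 -= eta * np.outer(delta1, x); b1 -= eta * delta1

        # Outputs should be close to the XOR targets 0, 1, 1, 0 after training.
        out = sigmoid(W2 @ sigmoid(W1 @ X.T + b1[:, None]) + b2[:, None])
        print(np.round(out.ravel(), 2))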

  2. Magnitude Sensitive Competitive Neural Networks

    Pelayo Campillos, Enrique; Buldain Pérez, David; Orrite Uruñuela, Carlos


    This thesis presents a set of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). They are a set of Competitive Learning algorithms that include a magnitude term as a modulation factor of the distance used in the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that they are represented with high ...

  3. The Laplacian spectrum of neural networks.

    de Lange, Siemon C; de Reus, Marcel A; van den Heuvel, Martijn P


    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these "conventional" graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks.
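
    For readers who want to reproduce the basic quantity, the eigenvalue spectrum of the normalized Laplacian L = I - D^(-1/2) A D^(-1/2) of an undirected connectivity matrix can be computed directly. Here a toy two-module network stands in for the macaque, cat, and C. elegans matrices analysed in the paper:

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy undirected "connectome": two densely connected modules, few bridges.
        n = 40
        A = np.zeros((n, n))
        for block in (slice(0, 20), slice(20, 40)):
            A[block, block] = rng.random((20, 20)) < 0.4
        A[rng.integers(0, 20, 6), rng.integers(20, 40, 6)] = 1   # inter-module links
        A = np.triu(A, 1); A = A + A.T                           # symmetric, no self-loops

        deg = A.sum(1)
        deg[deg == 0] = 1.0                                      # guard isolated nodes
        D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
        L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt              # normalized Laplacian

        eigvals = np.linalg.eigvalsh(L)                          # spectrum lies in [0, 2]
        # Additional eigenvalues close to zero indicate community structure.
        print("smallest eigenvalues:", np.round(eigvals[:4], 3))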

  4. The Laplacian spectrum of neural networks

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.


    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286

  5. Neural Network Controlled Visual Saccades

    Johnson, Jeffrey D.; Grogan, Timothy A.


    The paper to be presented will discuss research on a computer vision system controlled by a neural network capable of learning through classical (Pavlovian) conditioning. Through the use of unconditional stimuli (reward and punishment) the system will develop scan patterns of eye saccades necessary to differentiate and recognize members of an input set. By foveating only those portions of the input image that the system has found to be necessary for recognition the drawback of computational explosion as the size of the input image grows is avoided. The model incorporates many features found in animal vision systems, and is governed by understandable and modifiable behavior patterns similar to those reported by Pavlov in his classic study. These behavioral patterns are a result of a neuronal model, used in the network, explicitly designed to reproduce this behavior.

  6. The Prediction of the Risk Level of Pulmonary Embolism and Deep Vein Thrombosis through Artificial Neural Network.

    Agharezaei, Laleh; Agharezaei, Zhila; Nemati, Ali; Bahaadinbeigy, Kambiz; Keynia, Farshid; Baneshi, Mohammad Reza; Iranpour, Abedin; Agharezaei, Moslem


    Venous thromboembolism is a common cause of mortality among hospitalized patients, and yet it is preventable through detection of the precipitating factors and a prompt diagnosis by specialists. The present study was carried out in order to assist specialists in the diagnosis and prediction of the risk level of pulmonary embolism in patients by means of artificial neural networks. Thirty-one risk factors were used in this study in order to evaluate the conditions of 294 patients hospitalized in 3 educational hospitals affiliated with Kerman University of Medical Sciences. Two types of artificial neural networks, namely Feed-Forward Back Propagation and Elman Back Propagation, were compared in this study. Through an optimized artificial neural network model, an accuracy and risk level index of 93.23 percent was achieved and, subsequently, the results were compared with those obtained from the perfusion scans of the patients. 86.61 percent of the high-risk patients diagnosed through the perfusion scan diagnostic method were also diagnosed correctly through the method proposed in the present study. The results of this study can be a good resource for physicians, medical assistants, and healthcare staff to diagnose high-risk patients more precisely and prevent mortality. Additionally, expenses and other unnecessary diagnostic methods such as perfusion scans can be efficiently reduced.

  7. Video Traffic Prediction Using Neural Networks

    Miloš Oravec


    Full Text Available In this paper, we consider video stream prediction for application in services like video-on-demand, videoconferencing, video broadcasting, etc. The aim is to predict the video stream for an efficient bandwidth allocation of the video signal. Efficient prediction of traffic generated by multimedia sources is an important part of traffic and congestion control procedures at the network edges. As a tool for the prediction, we use neural networks: the multilayer perceptron (MLP), radial basis function (RBF) networks and backpropagation through time (BPTT) neural networks. At first, we briefly introduce the theoretical background of neural networks, the prediction methods and the difference between them. We also propose video time-series processing using moving averages. Simulation results for each type of neural network together with final comparisons are presented. For comparison purposes, conventional (non-neural) prediction is also included. The purpose of our work is to construct suitable neural networks for variable bit rate video prediction and evaluate them. We use video traces from [1].

  8. Neural networks with discontinuous/impact activations

    Akhmet, Marat


    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning for models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  9. Neural Networks for Emotion Classification

    Sun, Yafei


    It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. This thesis describes a neural network-based approach for emotion classification. We learn a classifier that can recognize six basic emotions with an average accuracy of 77% over the Cohn-Kanade database. The novelty of this work is that instead of empirically selecting the parameters of the neural network, i.e. the learning rate, activation function parameter, momentum number, the number of nodes in one layer, etc., we developed a strategy that can automatically select a comparatively better combination of these parameters. We also introduce another way to perform backpropagation. Instead of using the partial differential of the error function, we use an optimization algorithm, namely Powell's direction set method, to minimize the error function. We were also interested in constructing an authentic emotion database. This...

  10. Artificial neural networks in neurosurgery.

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali


    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the relevant published articles that focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANNs in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) use in biomechanical assessments of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery.

  11. Optimizing neural network forecast by immune algorithm

    YANG Shu-xia; LI Xiang; LI Ning; YANG Shang-dong


    Considering multi-factor influence, a forecasting model was built. The structure of a BP neural network was designed, and an immune algorithm was applied to optimize its network structure and weights. After training on power demand data for China from 1980 to 2005, a nonlinear network model was obtained for the relationship between power demand and the factors that influence it, and thus the proposed method was verified. Meanwhile, the results were compared with those of a neural network optimized by a genetic algorithm. The results show that this method is superior to the genetically optimized neural network and is an effective approach to time series forecasting.

  12. Optimising the topology of complex neural networks

    Jiang, Fei; Schoenauer, Marc


    In this paper, we study instances of complex neural networks, i.e. neural networks with complex topologies. We use Self-Organizing Map neural networks whose neighbourhood relationships are defined by a complex network, to classify handwritten digits. We show that topology has a small impact on performance and robustness to neuron failures, at least at long learning times. Performance may however be increased (by almost 10%) by artificial evolution of the network topology. In our experimental conditions, the evolved networks are more random than their parents, but display a more heterogeneous degree distribution.

  13. A new formulation for feedforward neural networks.

    Razavi, Saman; Tolson, Bryan A


    The feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, two training methods are employed in this paper, one involving a derivative-based optimization algorithm (a variation of backpropagation) and one a derivative-free optimization algorithm. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization.

  14. Drift chamber tracking with neural networks

    Lindsey, C.S.; Denby, B.; Haggerty, H.


    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  15. Coherence resonance in bursting neural networks.

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J


    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal-a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  16. Prediction of SYM-H index by NARX neural network from IMF and solar wind data


    SYM-H is one of the important indices for space weather. It indicates the intensity of a magnetic storm, similarly to the Dst index but with much higher time resolution. In this paper an artificial neural network (ANN) of the Nonlinear Auto Regressive with eXogenous inputs (NARX) type has been developed to predict the SYM-H index from solar wind and IMF data. In comparison with the usual BP and Elman networks, the new NARX model shows much better prediction capability. For 15 tested great storms, including 5 super-storms of Min. SYM-H < -200 nT, the cross-correlation between NARX-predicted and observed SYM-H indices is 0.91 as a whole. For the 5 individual super-storms, the lowest coefficient is 0.91, relating to the super-storm of March 2001 with Min. SYM-H of -434 nT, while for the two super-storms with Min. SYM-H ranging from -300 nT to -400 nT, the correlations reach as high as 0.93 and 0.96 respectively. The remarkable improvement of the model performance can be attributed to a key feedback from the network output of SYM-H with a suitable length (about 120 min) to the input, which implies that information on the quasi-real-time ring current with a proper length of history does its work in the prediction. It tells us that, in addition to the direct driving by the solar wind and IMF, the ring current's own status plays an important role in its evolution, especially during the recovery phase, and must be properly considered in storm-time SYM-H prediction by ANN. The NARX neural network model developed in this paper provides an effective way to achieve this.
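
    A minimal sketch of the NARX regressor structure described above: exogenous drivers plus a window of recent output fed back as additional inputs. Real solar wind, IMF and SYM-H data are not reproduced here, so a synthetic driver/response pair stands in, and a linear least-squares readout replaces the full neural network; the point is the construction of the delayed-input and output-feedback design matrix:

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic "solar-wind driver" u and "SYM-H-like" response y (illustrative only).
        T = 2000
        u = rng.normal(size=T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = 0.95 * y[t - 1] - 0.3 * min(u[t - 1], 0.0) + 0.02 * rng.normal()

        d_u, d_y = 5, 8              # delay depths for the driver and the fed-back output

        def regressor(u_hist, y_hist):
            # One NARX input vector: recent driver samples plus recent output samples.
            return np.concatenate([u_hist[-d_u:], y_hist[-d_y:], [1.0]])

        start = max(d_u, d_y)
        X = np.array([regressor(u[:t], y[:t]) for t in range(start, T)])
        w = np.linalg.lstsq(X, y[start:], rcond=None)[0]   # linear stand-in for the network

        # One-step-ahead prediction feeding back the model's own output history.
        y_hat = list(y[:start])
        for t in range(start, T):
            y_hat.append(regressor(u[:t], np.array(y_hat)) @ w)
        y_hat = np.array(y_hat)
        # (A real NARX network would also capture the rectified-driver nonlinearity
        #  that this linear stand-in misses.)
        print("correlation(observed, predicted):",
              np.round(np.corrcoef(y[start:], y_hat[start:])[0, 1], 3))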

  17. Neural network classification - A Bayesian interpretation

    Wan, Eric A.


    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.
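
    The heart of that relationship is that the squared-error-minimizing function of the inputs is the conditional expectation E[y|x], which for 0/1 class labels equals the posterior P(C1|x); a sufficiently flexible network trained on squared error therefore estimates the Bayes posterior. A quick numerical check of the identity itself, using a kernel-smoothed conditional mean of the labels in place of a trained network (the class-conditional Gaussians are an illustrative choice):

        import numpy as np

        rng = np.random.default_rng(7)

        # Two 1-D classes with known Gaussian likelihoods and equal priors.
        n = 5000
        labels = rng.integers(0, 2, n)                      # y in {0, 1}
        x = np.where(labels == 1, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

        def gauss(x, mu):
            return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

        grid = np.linspace(-3, 3, 7)
        bayes_posterior = gauss(grid, 1.0) / (gauss(grid, 1.0) + gauss(grid, -1.0))

        # Kernel-smoothed conditional mean of the 0/1 labels = empirical E[y|x].
        h = 0.3
        K = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
        cond_mean = (K * labels).sum(1) / K.sum(1)

        for g, p, m in zip(grid, bayes_posterior, cond_mean):
            print(f"x={g:+.1f}  Bayes P(C1|x)={p:.3f}  E[y|x] estimate={m:.3f}")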

  18. Adaptive Neurons For Artificial Neural Networks

    Tawel, Raoul


    Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of the neurons (in addition to the connection strengths, also called weights, of the synapses) is varied during the supervised-learning phase of operation according to a mathematical formalism rather than a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  19. Isolated Speech Recognition Using Artificial Neural Networks


    In this project Artificial Neural Networks are used as a research tool to accomplish Automated Speech Recognition of normal speech. A small size...the first stage of this work are satisfactory and thus the application of artificial neural networks in conjunction with cepstral analysis in isolated word recognition holds promise.

  20. Neural Network Algorithm for Particle Loading

    J. L. V. Lewandowski


    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  1. Medical image analysis with artificial neural networks.

    Jiang, J; Trundle, P; Ren, J


    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Creativity in design and artificial neural networks

    Neocleous, C.C.; Esat, I.I. [Brunel Univ. Uxbridge (United Kingdom); Schizas, C.N. [Univ. of Cyprus, Nicosia (Cyprus)


    The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons which are relevant to designing artificial neural networks manifesting aspects of creativity are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal is proposed.

  3. Neural Networks for Non-linear Control

    Sørensen, O.


    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  4. Application of Neural Networks for Energy Reconstruction

    Damgov, Jordan


    The possibility of using Neural Networks for reconstruction of the energy deposited in the calorimetry system of the CMS detector is investigated. It is shown that using a feed-forward neural network, good linearity, Gaussian energy distribution and good energy resolution can be achieved. Significant improvement of the energy resolution and linearity is reached in comparison with other weighting methods for energy reconstruction.

  5. Neural Networks for Non-linear Control

    Sørensen, O.


    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  6. Introduction to Concepts in Artificial Neural Networks

    Niebur, Dagmar


    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  7. Rule Extraction using Artificial Neural Networks

    Kamruzzaman, S M


    Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can gain a better understanding of the solution. This paper presents an efficient algorithm to extract rules from artificial neural networks. We use a two-phase training algorithm for backpropagation learning. In the first phase, the number of hidden nodes of the network is determined automatically in a constructive fashion by adding nodes one after another based on the performance of the network on training data. In the second phase, the number of relevant input units of the network is determined using a pruning algorithm. The ...

  8. International Conference on Artificial Neural Networks (ICANN)

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics


    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  9. The neural networks based modeling of a polybenzimidazole-based polymer electrolyte membrane fuel cell: Effect of temperature

    Lobato, Justo; Cañizares, Pablo; Rodrigo, Manuel A.; Linares, José J.; Piuleac, Ciprian-George; Curteanu, Silvia

    Neural network models are an important Artificial Intelligence tool for fuel cell researchers: they help to elucidate the processes within the cells by allowing optimization of materials, cells, stacks, and systems, and they support control systems. In this work three types of neural networks that share supervised learning as a common characteristic (Multilayer Perceptron, Generalized Feedforward Network, and Jordan and Elman Network) have been designed to model the performance of a polybenzimidazole polymer electrolyte membrane fuel cell operating over a temperature range of 100-175 °C. The influence of temperature in two periods was studied: the temperature in the conditioning period and the temperature when the fuel cell was operating. Three input variables, the conditioning temperature, the operating temperature and the current density, were taken into account in order to evaluate their influence upon the potential, the cathode resistance and the ohmic resistance. The Multilayer Perceptron model provides good predictions for different values of operating temperatures and potential and, hence, it is the best choice among the studied models, recommended for investigating the influence of process variables of PEMFCs.

  10. The neural networks based modeling of a polybenzimidazole-based polymer electrolyte membrane fuel cell: Effect of temperature

    Lobato, Justo; Canizares, Pablo; Rodrigo, Manuel A.; Linares, Jose J. [Chemical Engineering Department, University of Castilla-La Mancha, Campus Universitario s/n. 13004, Ciudad Real (Spain)]; Piuleac, Ciprian-George; Curteanu, Silvia [Gh. Asachi Technical University Iasi, Department of Chemical Engineering (Romania)]


    Neural network models are an important Artificial Intelligence tool for fuel cell researchers: they help to elucidate the processes within the cells by allowing optimization of materials, cells, stacks, and systems, and they support control systems. In this work three types of neural networks that share supervised learning as a common characteristic (Multilayer Perceptron, Generalized Feedforward Network, and Jordan and Elman Network) have been designed to model the performance of a polybenzimidazole polymer electrolyte membrane fuel cell operating over a temperature range of 100-175 °C. The influence of temperature in two periods was studied: the temperature in the conditioning period and the temperature when the fuel cell was operating. Three input variables, the conditioning temperature, the operating temperature and the current density, were taken into account in order to evaluate their influence upon the potential, the cathode resistance and the ohmic resistance. The Multilayer Perceptron model provides good predictions for different values of operating temperatures and potential and, hence, it is the best choice among the studied models, recommended for investigating the influence of process variables of PEMFCs. (author)

  11. Wavelet Neural Networks for Adaptive Equalization

    JIANG Minghu; DENG Beixing; GIELEN Georges; ZHANG Bo


    A structure based on Wavelet neural networks (WNNs) is proposed for nonlinear channel equalization in a digital communication system. The construction algorithm of the Minimum error probability (MEP) is presented and applied as a performance criterion to update the parameter matrix of the wavelet networks. Our experimental results show that the proposed wavelet-network-based equalizer can significantly improve the neural modeling accuracy, performs quite well in compensating the nonlinear distortion introduced by the channel, and outperforms conventional neural networks in signal-to-noise ratio and channel non-linearity.

  12. Subspace learning of neural networks

    Cheng Lv, Jian; Zhou, Jiliu


    Preface. Chapter 1. Introduction: 1.1 Introduction (1.1.1 Linear Neural Networks, 1.1.2 Subspace Learning); 1.2 Subspace Learning Algorithms (1.2.1 PCA Learning Algorithms, 1.2.2 MCA Learning Algorithms, 1.2.3 ICA Learning Algorithms); 1.3 Methods for Convergence Analysis (1.3.1 SDT Method, 1.3.2 DCT Method, 1.3.3 DDT Method); 1.4 Block Algorithms; 1.5 Simulation Data Set and Notation; 1.6 Conclusions. Chapter 2. PCA Learning Algorithms with Constant Learning Rates: 2.1 Oja's PCA Learning Algorithms (2.1.1 The Algorithms, 2.1.2 Convergence Issue); 2.2 Invariant Sets (2.2.1 Properties of Invariant Sets, 2.2.2 Conditions for Invariant Sets); 2...

  13. Neural networks for damage identification

    Paez, T.L.; Klenke, S.E.


    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
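
    A minimal sketch of the first (classical) probabilistic neural network described above: a Gaussian-kernel (Parzen) density is formed from the exemplars of each class, and a new response measurement is assigned to the class whose prior-weighted density is larger. The two-dimensional vibration features and kernel width below are synthetic placeholders, not the aerospace component data:

        import numpy as np

        rng = np.random.default_rng(8)

        # Synthetic 2-D response features for "undamaged" and "damaged" exemplars.
        undamaged = rng.normal([1.0, 1.0], 0.3, size=(60, 2))
        damaged = rng.normal([1.8, 0.4], 0.3, size=(60, 2))
        sigma = 0.3                                        # Parzen kernel width

        def class_density(x, exemplars, sigma):
            d2 = ((exemplars - x) ** 2).sum(1)
            return np.mean(np.exp(-d2 / (2.0 * sigma**2)))

        def classify(x, prior_damaged=0.5):
            p_u = (1.0 - prior_damaged) * class_density(x, undamaged, sigma)
            p_d = prior_damaged * class_density(x, damaged, sigma)
            return ("damaged" if p_d > p_u else "undamaged"), p_d / (p_u + p_d)

        for test in ([1.1, 0.9], [1.7, 0.5], [1.4, 0.7]):
            label, prob = classify(np.array(test))
            print(test, "->", label, f"(P(damaged)={prob:.2f})")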

  14. Nonlinear programming with feedforward neural networks.

    Reifman, J.


    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  15. Learning Processes of Layered Neural Networks

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.


    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  16. Learning Algorithms of Multilayer Neural Networks

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.


    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward multilayer neural network, with far interlayer synaptic connections, and we obtain a learning rule similar to that of the Boltzmann machine on the same multilayer structure. By applying a mean field approximation to the stochastic feed-forward neural network, the generalized error back-propagation learning rule is derived for a deterministic analog feed-forward multilayer network with the far interlay...

  17. Research of The Deeper Neural Networks

    Xiao You Rong


    Full Text Available Neural networks (NNs) have powerful computational abilities and could be used in a variety of applications; however, training these networks is still a difficult problem. With different network structures, many neural models have been constructed. In this report, a deeper neural networks (DNNs) architecture is proposed. The training algorithm of the deeper neural network involves searching for the global optimal point on the actual error surface. Before the training algorithm is designed, the error surface of the deeper neural network is analyzed from simple to complicated cases, and the features of the error surface are obtained. Based on these characteristics, the initialization method and training algorithm of DNNs are designed. For the initialization, a block-uniform design method is proposed which separates the error surface into blocks and finds the optimal block using the uniform design method. For the training algorithm, an improved gradient-descent method is proposed which adds a penalty term to the cost function of the plain gradient descent method. This algorithm gives the network a strong approximation ability and keeps the network state stable. All of these improve the practicality of the neural network.

  18. Acute appendicitis diagnosis using artificial neural networks.

    Park, Sung Yun; Kim, Sung Min


    Artificial neural networks are pattern analysis methods that are being rapidly applied in the biomedical field. The aim of this research was to propose an appendicitis diagnosis system using artificial neural networks (ANNs). Data from 801 patients of the Dongguk University hospital were used to construct artificial neural networks for diagnosing appendicitis and acute appendicitis. A radial basis function neural network structure (RBF), a multilayer neural network structure (MLNN), and a probabilistic neural network structure (PNN) were used for the artificial neural network models. The Alvarado clinical scoring system was used for comparison with the ANNs. The accuracy of the RBF, PNN, MLNN, and Alvarado was 99.80%, 99.41%, 97.84%, and 72.19%, respectively. The area under the ROC (receiver operating characteristic) curve of the RBF, PNN, MLNN, and Alvarado was 0.998, 0.993, 0.985, and 0.633, respectively. The proposed models using ANNs for diagnosing appendicitis showed good performances, and were significantly better than the Alvarado clinical scoring system (p < 0.001). With cooperation among facilities, the accuracy for diagnosing this serious health condition can be improved.

  19. Mobility Prediction in Wireless Ad Hoc Networks using Neural Networks

    Kaaniche, Heni


    Mobility prediction allows estimating the stability of paths in mobile wireless Ad Hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural network based method for mobility prediction in Ad Hoc networks. This method consists of a multi-layer, recurrent neural network using the backpropagation through time algorithm for training.

  20. Neural network regulation driven by autonomous neural firings

    Cho, Myoung Won


    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.
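
    A minimal sketch of the mechanism in the first half of the abstract: under a temporally asymmetric (STDP-like) Hebbian rule driven by noisy spontaneous firings, the two weights of a reciprocally connected pair are pushed in opposite directions whenever near-coincident spikes occur, so their difference tends to drift away from zero and the pair becomes effectively unidirectional. The Poisson-like firing model, causal drive, and rule parameters are illustrative assumptions, not the paper's formalism:

        import numpy as np

        rng = np.random.default_rng(9)

        steps = 50000
        window = 5                     # STDP interaction window (time steps)
        A_plus = A_minus = 0.01        # potentiation / depression increments
        w12 = w21 = 0.5                # reciprocal weights, initially balanced
        base = 0.03                    # autonomous (spontaneous) firing probability

        last = {1: -10**9, 2: -10**9}  # last spike time of each neuron
        for t in range(steps):
            # Autonomous noisy firing plus a small causal drive through the synapses.
            p1 = base + (0.3 * w21 if t - last[2] == 1 else 0.0)
            p2 = base + (0.3 * w12 if t - last[1] == 1 else 0.0)
            s1, s2 = rng.random() < p1, rng.random() < p2

            if s1:                                   # neuron 1 fires
                if 0 < t - last[2] <= window:        # neuron 2 fired shortly before
                    w21 += A_plus                    # pre-before-post: potentiate 2 -> 1
                    w12 -= A_minus                   # post-before-pre: depress 1 -> 2
                last[1] = t
            if s2:                                   # neuron 2 fires
                if 0 < t - last[1] <= window:
                    w12 += A_plus
                    w21 -= A_minus
                last[2] = t
            w12, w21 = np.clip([w12, w21], 0.0, 1.0)

        print(f"w12={w12:.2f}  w21={w21:.2f}  difference={w12 - w21:+.2f}")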

  1. Prediction of SYM-H index during large storms by NARX neural network from IMF and solar wind data

    L. Cai


    Full Text Available Similar to the Dst index, the SYM-H index may also serve as an indicator of magnetic storm intensity, but with the distinct advantage of higher time resolution. In this study the NARX neural network has been used for the first time to predict SYM-H index from solar wind (SW) and IMF parameters. In total 73 time intervals of great storm events with IMF/SW data available from ACE satellite during 1998 to 2006 are used to establish the ANN model. Out of them, 67 are used to train the network and the other 6 samples for test. Additionally, the NARX prediction model is also validated using IMF/SW data from WIND satellite for 7 great storms during 1995–1997 and 2005, as well as for the July 2000 Bastille day storm and November 2001 superstorm using Geotail and OMNI data at 1 AU, respectively. Five interplanetary parameters of IMF Bz, By and total B components along with proton density and velocity of solar wind are used as the original external inputs of the neural network to predict the SYM-H index about one hour ahead. For the 6 test storms registered by ACE including two super-storms of min. SYM-H<−200 nT, the correlation coefficient between observed and NARX network predicted SYM-H is 0.95 as a whole, even as high as 0.95 and 0.98 with average relative variance of 13.2% and 7.4%, respectively, for the two super-storms. The prediction for the 7 storms with WIND data is also satisfactory, showing averaged correlation coefficient about 0.91 and RMSE of 14.2 nT. The newly developed NARX model shows much better capability than Elman network for SYM-H prediction, which can partly be attributed to a key feedback to the input layer from the output neuron with a suitable length (about 120 min). This feedback means that nearly real information of the ring current status is effectively directed to take part in the prediction of SYM-H index by ANN. The proper history length of the output-feedback may mainly reflect

  2. Genetic algorithm for neural networks optimization

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta


    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
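
    A minimal sketch of the Genetic Algorithm half of that comparison: a fixed-topology one-hidden-layer network whose flattened weight vector is evolved by truncation selection, uniform crossover, and Gaussian mutation against a mean-squared-error fitness. The toy exchange-rate-like series, population size, and mutation scale are illustrative choices, not the paper's settings:

        import numpy as np

        rng = np.random.default_rng(10)

        # Toy "exchange rate" series; task: predict x[t] from the previous 3 values.
        t = np.arange(300)
        series = np.sin(0.1 * t) + 0.3 * np.sin(0.017 * t) + 0.02 * rng.normal(size=300)
        X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
        y = series[3:]

        n_in, n_hid = 3, 6
        n_w = n_hid * n_in + n_hid + n_hid + 1        # W1, b1, w2, b2 flattened

        def predict(theta, X):
            W1 = theta[:n_hid * n_in].reshape(n_hid, n_in)
            b1 = theta[n_hid * n_in:n_hid * n_in + n_hid]
            w2 = theta[-n_hid - 1:-1]
            b2 = theta[-1]
            return np.tanh(X @ W1.T + b1) @ w2 + b2

        def fitness(theta):
            return -np.mean((predict(theta, X) - y) ** 2)  # higher is better

        pop = rng.normal(scale=0.5, size=(60, n_w))
        for gen in range(200):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][:20]]   # truncation selection
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(0, 20, 2)]
                mask = rng.random(n_w) < 0.5               # uniform crossover
                children.append(np.where(mask, a, b)
                                + rng.normal(scale=0.05, size=n_w))   # mutation
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("best MSE:", -fitness(best))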

  3. Neural networks techniques applied to reservoir engineering

    Flores, M. [Gerencia de Proyectos Geotermoelectricos, Morelia (Mexico); Barragan, C. [RockoHill de Mexico, Indiana (Mexico)


    Neural Networks are considered the greatest technological advance since the transistor. They are expected to be a common household item by the year 2000. An attempt has been made to apply Neural Networks to an important geothermal problem: predicting well production and well completion during drilling in a geothermal field. This was done in the Los Humeros geothermal field, using two common types of Neural Network models available in commercial software. The results show the learning capacity of the developed model and its precision in the predictions that were made.

  4. Assessing Landslide Hazard Using Artificial Neural Network

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin


    neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The Artificial neural network model of this research uses slope characteristics as input and leads to the output in the form of the probability of failure and factor of safety. It can be stated that the trained neural networks are capable of predicting the stability of slopes and the safety factor of landslide hazard in the study area with an acceptable level of confidence. Landslide hazard analysis and mapping can provide useful information for catastrophic loss...

  5. Estimation of Conditional Quantile using Neural Networks

    Kulczycki, P.; Schiøler, Henrik


    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
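
    A minimal sketch of conditional-quantile estimation with a small network trained on the "pinball" (tilted absolute) loss, whose minimizer is the conditional tau-quantile. The heteroscedastic toy data, network size, and plain-gradient-descent settings are illustrative assumptions and not the kernel-based construction developed in the paper:

        import numpy as np

        rng = np.random.default_rng(11)

        # Heteroscedastic toy data: noise spread grows with x.
        n = 2000
        x = rng.uniform(0.0, 1.0, n)
        y = np.sin(2 * np.pi * x) + (0.1 + 0.4 * x) * rng.normal(size=n)
        X = x[:, None]

        tau = 0.9                      # estimate the conditional 90% quantile
        H = 16
        W1 = rng.normal(size=(H, 1)); b1 = rng.normal(size=H)
        w2 = rng.normal(scale=0.1, size=H); b2 = 0.0
        lr = 0.02

        for step in range(4000):
            h = np.tanh(X @ W1.T + b1)               # (n, H)
            e = y - (h @ w2 + b2)
            # Subgradient of the pinball loss w.r.t. the prediction.
            dpred = np.where(e > 0, -tau, 1.0 - tau) / n
            w2 -= lr * (h.T @ dpred)
            b2 -= lr * dpred.sum()
            dh = np.outer(dpred, w2) * (1.0 - h**2)  # backprop into the hidden layer
            W1 -= lr * (dh.T @ X)
            b1 -= lr * dh.sum(0)

        # Roughly a fraction tau of the observations should lie below the fitted curve.
        h = np.tanh(X @ W1.T + b1)
        print("fraction below fitted quantile curve:", np.mean(y <= h @ w2 + b2))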

  6. Estimation of Conditional Quantile using Neural Networks

    Kulczycki, P.; Schiøler, Henrik


    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.

  7. Convolutional Neural Network for Image Recognition

    Seifnashri, Sahand


    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I did not manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data were not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  8. Threshold control of chaotic neural network.

    He, Guoguang; Shrimali, Manish Dev; Aihara, Kazuyuki


    The chaotic neural network constructed with chaotic neurons exhibits rich dynamic behaviour with a nonperiodic associative memory. In the chaotic neural network, however, it is difficult to distinguish the stored patterns in the output patterns because of the chaotic state of the network. In order to apply the nonperiodic associative memory to information search, pattern recognition, etc., it is necessary to control chaos in the chaotic neural network. We have studied the chaotic neural network with threshold-activated coupling, which provides a controlled network with associative memory dynamics. The network converges to one of its stored patterns and/or reverse patterns, whichever has the smallest Hamming distance from the initial state of the network. The range of the threshold applied to control the neurons in the network depends on the noise level in the initial pattern and decreases with the increase of noise. Chaos control in the chaotic neural network by threshold-activated coupling at varying time intervals provides controlled output patterns with different temporal periods which depend upon the control parameters.

  9. Nonequilibrium landscape theory of neural networks.

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin


    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.

  10. Nonequilibrium landscape theory of neural networks

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin


    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451

  11. Character Recognition Using Novel Optoelectronic Neural Network


    Covers the ADALINE neuron and linear separability, which provides a justification for multilayer networks, and the MADALINE (many-ADALINE) multilayer network. The ADALINE functions as an adaptive threshold logic element used in many neural networks.

  12. Neural Network for Estimating Conditional Distribution

    Schiøler, Henrik; Kulczycki, P.

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within...... statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given...

  13. Nonlinear System Control Using Neural Networks

    Jaroslava Žilková


    Full Text Available The paper is focused especially on presenting possibilities of applying off-line trained artificial neural networks to creating the system inverse models that are used in designing the control algorithm for a non-linear dynamic system. The ability of cascade feedforward neural networks to model arbitrary non-linear functions and their inverses is exploited. This paper presents a quasi-inverse neural model, which works as a speed controller of an induction motor. The neural speed controller consists of two cascade feedforward neural network subsystems. The first subsystem provides the desired stator current components for the control algorithm and the second subsystem provides the corresponding voltage components for the PWM converter. The availability of the proposed controller is verified through MATLAB simulation. The effectiveness of the controller is demonstrated for different operating conditions of the drive system.

  14. Recognition of Telugu characters using neural networks.

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K


    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
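    A compact sketch of the Hopfield associative-memory recall that the record builds on; random bipolar vectors stand in for the Telugu character bitmaps, and the multiple-network MNNAM scheme itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100                                  # pixels per (flattened) character bitmap
patterns = rng.choice([-1, 1], size=(5, N)).astype(float)   # stand-in "characters"

# Hebbian storage rule.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(probe, steps=5 * N):
    """Asynchronous Hopfield updates until the state has (approximately) settled."""
    s = probe.copy()
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Corrupt a stored pattern with 20% pixel noise and try to recover it.
noisy = patterns[2].copy()
flip = rng.random(N) < 0.2
noisy[flip] *= -1
recovered = recall(noisy)
print("overlap with the true character:", float(patterns[2] @ recovered) / N)
```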

  15. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Kim, Jun W.; Tyler, Richard S.


    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  16. Neural Networks for Dynamic Flight Control


    Uses the Adaline (22) model for development of the neural networks. Neural Graphics and other AFIT applications use a slightly different model. The primary difference in the Nguyen application is that the Adaline uses the nonlinear function f(a) = tanh(a), where standard backprop uses the sigmoid

  17. Neural networks convergence using physicochemical data.

    Karelson, Mati; Dobchev, Dimitar A; Kulshyn, Oleksandr V; Katritzky, Alan R


    An investigation of neural network convergence and prediction based on three optimization algorithms, namely, Levenberg-Marquardt, conjugate gradient, and the delta rule, is described. Several simulated neural networks built using the above three algorithms indicated that the Levenberg-Marquardt optimizer implemented as a back-propagation neural network converged faster than the other two algorithms and provided better prediction in most cases. These conclusions are based on eight physicochemical data sets, each with a significant number of compounds, comparable to that usually used in QSAR/QSPR modeling. The superiority of the Levenberg-Marquardt algorithm is revealed in terms of the functional dependence of the change of the neural network weights with respect to the gradient of the error propagation, as well as the distribution of the weight values. The prediction of the models is assessed by the error on the validation sets not used in the training process.
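    A toy illustration of the comparison the record describes, fitting the weights of a small one-hidden-layer network with SciPy's Levenberg-Marquardt least-squares solver versus a conjugate-gradient minimizer from the same starting point; the data set, network size and stopping rules are invented for illustration and are not those of the paper.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, 200)
y = np.sin(2 * X) + 0.05 * rng.standard_normal(200)       # toy data set

H = 6                                   # hidden units of a 1-H-1 tanh network
n_params = 3 * H + 1                    # input weights, hidden biases, output weights, output bias

def predict(p, x):
    w1, b1, w2, b2 = p[:H], p[H:2 * H], p[2 * H:3 * H], p[-1]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def residuals(p):
    return predict(p, X) - y

def sse(p):
    return float(np.sum(residuals(p) ** 2))

p0 = 0.5 * rng.standard_normal(n_params)

# Levenberg-Marquardt on the residual vector.
fit_lm = least_squares(residuals, p0, method="lm")
# Conjugate gradient on the summed squared error, from the same starting point.
fit_cg = minimize(sse, p0, method="CG")

print("Levenberg-Marquardt final SSE:", round(sse(fit_lm.x), 4))
print("Conjugate gradient  final SSE:", round(sse(fit_cg.x), 4))
```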

  18. Application of neural networks in coastal engineering

    Mandal, S.

    methods. That is why it is becoming popular in various fields including coastal engineering. Waves and tides will play important roles in coastal erosion or accretion. This paper briefly describes the back-propagation neural networks and its application...

  19. Neural Network Based 3D Surface Reconstruction

    Vincy Joseph


    Full Text Available This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach.

  20. Control of autonomous robot using neural networks

    Barton, Adam; Volna, Eva


    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  1. Additive Feed Forward Control with Neural Networks

    Sørensen, O.


    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...
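    A rough sketch of the additive feed-forward concept on a toy first-order plant, assuming a fixed PI feedback controller that is kept unchanged and an already-trained inverse model whose output is simply added to the feedback signal; the plant, gains and the analytic 'inverse model' stand in for the trained neural network of the paper.

```python
import numpy as np

# Toy first-order plant y[k+1] = a*y[k] + b*u[k] (stand-in for the noisy process).
a, b = 0.9, 0.1

# Existing feedback controller (a simple PI), retained without modification.
class PI:
    def __init__(self, kp=2.0, ki=0.5):
        self.kp, self.ki, self.acc = kp, ki, 0.0
    def __call__(self, err):
        self.acc += err
        return self.kp * err + self.ki * self.acc

# Stand-in for the trained inverse-model network: the exact steady-state inverse
# u = (1 - a) * r / b, which is what such a network would be trained to approximate.
def inverse_model(r):
    return (1.0 - a) * r / b

rng = np.random.default_rng(3)
ref = np.where(np.arange(200) < 100, 1.0, 0.5)     # step reference ("shaping" filter omitted)

for use_ff in (False, True):
    y, ctrl, errs = 0.0, PI(), []
    for k in range(200):
        u = ctrl(ref[k] - y)                       # feedback part
        if use_ff:
            u += inverse_model(ref[k])             # additive feed-forward part
        y = a * y + b * u + 0.01 * rng.standard_normal()
        errs.append(ref[k] - y)
    label = "feedback + feed-forward" if use_ff else "feedback only         "
    print(label, "mean |error|:", round(float(np.mean(np.abs(errs))), 4))
```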




    Full Text Available Recent studies have shown the classification and prediction power of Neural Networks. It has been demonstrated that a NN can approximate any continuous function. Neural networks have been successfully used for forecasting financial data series. The classical methods used for time series prediction, like Box-Jenkins or ARIMA, assume that there is a linear relationship between inputs and outputs. Neural networks have the advantage that they can approximate nonlinear functions. In this paper we compared the performances of different feedforward and recurrent neural networks and training algorithms for predicting the exchange rates EUR/RON and USD/RON. We used data series with daily exchange rates starting from 2005 until 2013.

  3. Artificial neural networks a practical course

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco


    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  4. Additive Feed Forward Control with Neural Networks

    Sørensen, O.


    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  5. Artificial neural network and medicine.

    Khan, Z H; Mohapatra, S K; Khodiar, P K; Ragu Kumar, S N


    The introduction of human brain functions such as perception and cognition into the computer has been made possible by the use of Artificial Neural Networks (ANNs). ANNs are computer models inspired by the structure and behavior of neurons. Like the brain, ANNs can recognize patterns, manage data and, most significantly, learn. This learning ability, not seen in other computer models simulating human intelligence, constantly improves their functional accuracy as they keep on performing. Experience is as important for an ANN as it is for man. ANNs are being increasingly used to supplement and even (maybe) replace experts in medicine. However, there is still scope for improvement in some areas. Their ability to classify and interpret various forms of medical data comes as a helping hand to clinical decision making in both diagnosis and treatment. Treatment planning in medicine, radiotherapy, rehabilitation, etc. is being done using ANNs. Morbidity and mortality prediction by ANNs in different medical situations can be very helpful for hospital management. ANNs have a promising future in fundamental research, medical education and surgical robotics.

  6. Neural network for image segmentation

    Skourikhine, Alexei N.; Prasad, Lakshman; Schlei, Bernd R.


    Image analysis is an important requirement of many artificial intelligence systems. Though great effort has been devoted to inventing efficient algorithms for image analysis, there is still much work to be done. It is natural to turn to mammalian vision systems for guidance because they are the best known performers of visual tasks. The pulse-coupled neural network (PCNN) model of the cat visual cortex has proven to have interesting properties for image processing. This article describes the PCNN application to the processing of images of heterogeneous materials; specifically, PCNN is applied to image denoising and image segmentation. Our results show that PCNNs do well at segmentation if we perform image smoothing prior to segmentation. We use PCNN for both smoothing and segmentation. Combining smoothing and segmentation enables us to eliminate PCNN sensitivity to the setting of the various PCNN parameters, whose optimal selection can be difficult and can vary even for the same problem. This approach makes image processing based on PCNN more automatic in our application and also results in better segmentation.

  7. Pattern Recognition Using Neural Networks

    Santaji Ghorpade


    Full Text Available Face Recognition has been identified as one of the attracting research areas and it has drawn the attention of many researchers due to its varying applications such as security systems, medical systems, entertainment, etc. Face recognition is the preferred mode of identification by humans: it is natural, robust and non-intrusive. A wide variety of systems requires reliable personal recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that the rendered services are accessed only by a legitimate user and no one else. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones, and ATMs. In the absence of robust personal recognition schemes, these systems are vulnerable to the wiles of an impostor. In this paper we have developed and illustrated a recognition system for human faces using a novel retrieval system based on the Kohonen self-organizing map (SOM, or Self-Organizing Feature Map, SOFM). SOM has good feature extracting property due to its topological ordering. The Facial Analytics results for the 400 images of the AT&T database reflect that the face recognition rate using the SOM neural network algorithm is 85.5% for 40 persons.

  8. Applications of Pulse-Coupled Neural Networks

    Ma, Yide; Wang, Zhaobin


    "Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Sci

  9. NARX neural networks for sequence processing tasks

    Hristev, Eugen


    This project aims at researching and implementing a neural network architecture system for the NARX (Nonlinear AutoRegressive with eXogenous inputs) model, used in sequence processing tasks and particularly in time series prediction. The model can fall back to different types of architectures, including time-delay neural networks and the multilayer perceptron. The NARX simulator tests and compares the different architectures for both synthetic and real data, including the time series o...

  10. Neural network models of protein domain evolution

    Sylvia Nagl


    Protein domains are complex adaptive systems, and here a novel procedure is presented that models the evolution of new functional sites within stable domain folds using neural networks. Neural networks, which were originally developed in cognitive science for the modeling of brain functions, can provide a fruitful methodology for the study of complex systems in general. Ethical implications of developing complex systems models of biomolecules are discussed, with particular reference to molecu...

  11. Neural network segmentation of magnetic resonance images

    Frederick, Blaise


    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network; by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. A neural network classifier for image segmentation was implemented on a Sun 4/60 and was tested on the task of classifying tissues of canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone, and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier. The classifier's performance was evaluated as a function of network size, number of network layers, and length of training. A single layer neural network performed quite well at

  12. Logarithmic learning for generalized classifier neural network.

    Ozyildirim, Buse Melis; Avci, Mutlu


    Generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from a convergence problem and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operation range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative have continuous values. This makes it possible to exploit the advantage of fast logarithmic convergence with the proposed learning method. Due to the fast convergence of the logarithmic cost function, training time is decreased by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered as an efficient way of reducing the time requirement problem of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
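    A generic illustration of why a logarithmic cost can converge faster than the squared error, using logistic regression trained by identical gradient descent under the two costs; this is not the generalized classifier neural network or the paper's training procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((400, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)      # linearly separable toy labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(cost, iters=300, lr=0.5):
    w = np.zeros(2)
    for _ in range(iters):
        p = sigmoid(X @ w)
        if cost == "squared":
            # gradient of the mean squared error (up to a constant factor)
            grad = X.T @ ((p - y) * p * (1 - p)) / len(y)
        else:
            # gradient of the mean logarithmic (cross-entropy) cost
            grad = X.T @ (p - y) / len(y)
        w -= lr * grad
    p = sigmoid(X @ w)
    err = float(np.mean((p > 0.5) != y))
    ce = float(-np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))
    return err, ce

for cost in ("squared", "log"):
    err, ce = train(cost)
    print(f"{cost:8s} cost: error rate = {err:.3f}, remaining cross-entropy = {ce:.3f}")
```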

  13. Diabetic retinopathy screening using deep neural network.

    Ramachandran, Nishanthan; Chiong, Hong Sheng; Sime, Mary Jane; Wilson, Graham A


    Importance: There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. The objective was to determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist, from a local diabetic retinal screening programme and an international database. Design: Retrospective audit. Samples: Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curves were used to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Main Outcome Measures: Area under the receiver operating characteristic curve, sensitivity, and specificity. Results: For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% CI, 0.807-0.995) with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% CI, 0.973-0.986) with 96.0% sensitivity and 90.0% specificity for Messidor. Conclusions and Relevance: This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. This article is protected by copyright. All rights reserved.

  14. Neural networks for segmentation, tracking, and identification

    Rogers, Steven K.; Ruck, Dennis W.; Priddy, Kevin L.; Tarr, Gregory L.


    The main thrust of this paper is to encourage the use of neural networks to process raw data for subsequent classification. This article addresses neural network techniques for processing raw pixel information. For this paper the definition of neural networks includes the conventional artificial neural networks such as the multilayer perceptrons and also biologically inspired processing techniques. Previously, we have successfully used the biologically inspired Gabor transform to process raw pixel information and segment images. In this paper we extend those ideas to both segment and track objects in multiframe sequences. It is also desirable for the neural network processing the data to learn features for subsequent recognition. A common first step for processing raw data is to transform the data and use the transform coefficients as features for recognition. For example, handwritten English characters become linearly separable in the feature space of the low-frequency Fourier coefficients. Much of human visual perception can be modelled by assuming the low-frequency Fourier coefficients as the feature space used by the human visual system. The optimum linear transform, with respect to reconstruction, is the Karhunen-Loeve transform (KLT). It has been shown that some neural network architectures can compute approximations to the KLT. The KLT coefficients can be used for recognition as well as for compression. We tested the use of the KLT on the problem of interfacing a nonverbal patient to a computer. The KLT uses an optimal basis set for object reconstruction. For object recognition, the KLT may not be optimal.
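    A minimal sketch of the KLT-feature idea mentioned in the record: compute the Karhunen-Loeve basis from the training covariance matrix, project the raw pixels onto the leading components, and recognize with a nearest-neighbour rule; the random 'character' images are stand-ins for real data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in "raw pixel" data: 3 classes of 16x16 images built from class templates plus noise.
templates = rng.standard_normal((3, 256))
def make(label, n):
    return templates[label] + 0.8 * rng.standard_normal((n, 256))

Xtr = np.vstack([make(c, 40) for c in range(3)]); ytr = np.repeat(np.arange(3), 40)
Xte = np.vstack([make(c, 10) for c in range(3)]); yte = np.repeat(np.arange(3), 10)

# Karhunen-Loeve transform: eigenvectors of the training covariance matrix.
mean = Xtr.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov((Xtr - mean).T))
klt = eigvec[:, np.argsort(eigval)[::-1][:10]]        # keep the 10 leading components

def features(X):
    return (X - mean) @ klt

# Nearest-neighbour recognition in the KLT coefficient space.
Ftr, Fte = features(Xtr), features(Xte)
dists = ((Fte[:, None, :] - Ftr[None, :, :]) ** 2).sum(-1)
pred = ytr[np.argmin(dists, axis=1)]
print("recognition accuracy on KLT coefficients:", float(np.mean(pred == yte)))
```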

  15. Hopfield neural network based on ant system

    洪炳镕; 金飞虎; 郭琦


    The Hopfield neural network is a single-layer recurrent neural network. The Hopfield network requires some control parameters to be carefully selected, else the network is apt to converge to a local minimum. An ant system is a nature-inspired metaheuristic algorithm. It has been applied to several combinatorial optimization problems such as the Traveling Salesman Problem, scheduling problems, etc. This paper shows that an ant system may be used to tune the network control parameters by a group of cooperating ants. The major advantage of this network is that it adjusts the network parameters automatically, avoiding a blind search for the set of control parameters. This network was tested on two TSP problems, with 5 cities and 10 cities. The results have shown an obvious improvement.

  16. Forecast and restoration of geomagnetic activity indices by using the software-computational neural network complex

    Barkhatov, Nikolay; Revunov, Sergey


    It is known that the currently used indices of geomagnetic activity to some extent reflect the physical processes occurring in the interaction of the perturbed solar wind with Earth's magnetosphere. Therefore, they are connected to each other and to the parameters of near-Earth space. The establishment of such nonlinear connections is of interest. For such purposes, when the physical problem is complex or has many parameters, the technology of artificial neural networks is applied. This approach is used to develop an automated method for forecasting and restoring geomagnetic activity indices, implemented as a software-computational neural network complex. Each neural network experiment carried out with this complex aims to find a specific nonlinear relation between the analyzed indices and parameters. At the core of the program is a scheme combining artificial neural networks (ANNs) of different types: a backpropagation Elman network, a feedforward network, a fuzzy logic network, and a Kohonen classification layer. The main window of the complex (the application) allows the settings used by the neural networks to be changed: the number of hidden layers, the number of neurons per layer, the input and target data, and the number of training cycles. The process and quality of ANN training are shown as a dynamic plot of the training error, and the result of training is a plot comparing the network response with the test sequence. The last trained neural network, with its established nonlinear connection, can be run again for repeated numerical experiments. In that case no additional training is executed; the previously trained network acts as a filter through which the input parameters are passed, and the output parameters are compared with the test event. For running a large number of different experiments, the ability to run the program in a "batch" mode is provided. For this purpose the user...

  17. Neural-Network Object-Recognition Program

    Spirkovska, L.; Reid, M. B.


    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  18. Hidden neural networks: application to speech recognition

    Riis, Søren Kamaric


    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...... (HNNs) with much fewer parameters than conventional HMMs and other hybrids can obtain comparable performance, and for the broad class task it is illustrated how the HNN can be applied as a purely transition based system, where acoustic context dependent transition probabilities are estimated by neural...

  19. Matrix representation of a Neural Network

    Christensen, Bjørn Klint

    This paper describes the implementation of a three-layer feedforward backpropagation neural network. The paper does not explain feedforward, backpropagation or what a neural network is. It is assumed, that the reader knows all this. If not please read chapters 2, 8 and 9 in Parallel Distributed...... Processing, by David Rummelhart (Rummelhart 1986) for an easy-to-read introduction. What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation. The matrix representation is introduced in (Rummelhart 1986, chapter 9), but only for a two-layer linear...... network and the feedforward algorithm. This paper develops the idea further to three-layer non-linear networks and the backpropagation algorithm. Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes and K output nodes all indexed from 0. Bias-node for the hidden...
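    A compact numpy version of the matrix formulation the record describes: the weights of a three-layer network as two matrices, the forward pass as matrix products, and backpropagation as a few matrix operations; the XOR task and layer sizes are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# XOR task: I = 2 input nodes, J = 3 hidden nodes, K = 1 output node, plus bias terms.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

I, J, K = 2, 3, 1
W1 = rng.standard_normal((I + 1, J))      # input(+bias) -> hidden weight matrix
W2 = rng.standard_normal((J + 1, K))      # hidden(+bias) -> output weight matrix

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(A):
    return np.hstack([A, np.ones((A.shape[0], 1))])

lr = 1.0
for epoch in range(5000):
    # Forward pass: two matrix products.
    H = sigmoid(add_bias(X) @ W1)
    Y = sigmoid(add_bias(H) @ W2)
    # Backward pass: deltas and weight updates, all as matrix operations.
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2[:-1].T) * H * (1 - H)
    W2 -= lr * add_bias(H).T @ dY
    W1 -= lr * add_bias(X).T @ dH

print(np.round(sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2), 2))
```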

  20. Application of Partially Connected Neural Network


    This paper focuses mainly on the application of a Partially Connected Backpropagation Neural Network (PCBP) instead of the typical Fully Connected Neural Network (FCBP). The initial neural network is fully connected; after training with sample data using cross-entropy as the error function, a clustering method is employed to cluster the weights between the inputs and the hidden layer and from the hidden to the output layer, and connections that are relatively unnecessary are deleted, thus the initial network becomes a PCBP network. Then PCBP can be used in prediction or data mining by training PCBP with data that comes from a database. At the end of this paper, several experiments are conducted to illustrate the effects of PCBP using the Iris data set.

  1. On neural networks that design neural associative memories.

    Chan, H Y; Zak, S H


    The design problem of generalized brain-state-in-a-box (GBSB) type associative memories is formulated as a constrained optimization program, and "designer" neural networks for solving the program in real time are proposed. The stability of the designer networks is analyzed using Barbalat's lemma. The analyzed and synthesized neural associative memories do not require symmetric weight matrices. Two types of the GBSB-based associative memories are analyzed, one when the network trajectories are constrained to reside in the hypercube [-1, 1](n) and the other type when the network trajectories are confined to stay in the hypercube [0, 1](n). Numerical examples and simulations are presented to illustrate the results obtained.

  2. Artificial astrocytes improve neural network performance.

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso


    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  3. Hardware implementation of stochastic spiking neural networks.

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni


    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.

  4. Stability prediction of berm breakwater using neural network

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    . In order to allow the network to learn both non-linear and linear relationships between input nodes and output nodes, multiple-layer networks are often used. Among many neural network architectures, the three layers feed forward backpropagation neural...

  5. Pattern Classification using Simplified Neural Networks

    Kamruzzaman, S M


    In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. This paper presents an approach for classifying patterns from simplified NNs. Although the predictive accuracy of ANNs is often higher than that of other methods or human experts, it is often said that ANNs are practically "black boxes", due to the complexity of the networks. In this paper, we have attempted to open up these black boxes by reducing the complexity of the network. The factor that makes this possible is the pruning algorithm. By eliminating redundant weights, redundant input and hidden units are identified and removed from the network. Using the pruning algorithm, we have been able to prune networks such that only a few input units, hidden units and connections are left, yielding a simplified network. Experimental results on several benchmark problems in neural networks show the effectiveness of the proposed approach with good generalization ability.
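    The paper's specific pruning algorithm is not reproduced here; the sketch below uses simple magnitude-based pruning of a trained scikit-learn MLP to show the general workflow of simplifying a network and re-checking its accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=10, n_informative=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(Xtr, ytr)
print("dense network accuracy :", net.score(Xte, yte))

# Magnitude-based pruning: zero the smallest 70% of weights in each layer (in place).
for W in net.coefs_:
    cutoff = np.quantile(np.abs(W), 0.70)
    W[np.abs(W) < cutoff] = 0.0

kept = sum(int(np.count_nonzero(W)) for W in net.coefs_)
total = sum(W.size for W in net.coefs_)
print("pruned network accuracy:", net.score(Xte, yte), f"({kept}/{total} weights kept)")
```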

  6. Artificial neural network intelligent method for prediction

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi


    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data, together with the current day of the week is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  7. Artificial Neural Networks and Instructional Technology.

    Carlson, Patricia A.


    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  8. Learning drifting concepts with neural networks

    Biehl, Michael; Schwarze, Holm


    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using differ

  9. Estimating Conditional Distributions by Neural Networks

    Kulczycki, P.; Schiøler, Henrik


    Neural Networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency property is considered from a mild set of assumptions. A number of applications...

  10. Artificial Neural Networks and Instructional Technology.

    Carlson, Patricia A.


    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  11. Neural networks as perpetual information generators

    Englisch, Harald; Xiao, Yegao; Yao, Kailun


    The information gain in a neural network cannot be larger than the bit capacity of the synapses. It is shown that the equation derived by Engel et al. [Phys. Rev. A 42, 4998 (1990)] for the strongly diluted network with persistent stimuli contradicts this condition. Furthermore, for any time step the correct equation is derived by taking the correlation between random variables into account.

  12. A quantum-implementable neural network model

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo


    A quantum-implementable neural network, namely quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, which can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that much less neuron resources are required in QPNN to obtain a good result than the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  13. Neural Network Approaches to Visual Motion Perception

    郭爱克; 杨先一


    This paper concerns certain difficult problems in image processing and perception: neuro-computation of visual motion information. The first part of this paper deals with the spatial physiological integration by the figure-ground discrimination neural network in the visual system of the fly. We have outlined the fundamental organization and algorithms of this neural network, and mainly concentrated on the results of computer simulations of spatial physiological integration. It has been shown that the gain control mechanism, the nonlinearity of the synaptic transmission characteristic, the interaction between the two eyes, and the directional selectivity of the pool cells play decisive roles in the spatial physiological integration. In the second part, we have presented a self-organizing neural network for the perception of visual motion by using a retinotopic array of Reichardt's motion detectors and Kohonen's self-organizing maps. It has been demonstrated by computer simulations that the network is abl

  14. Improving neural network performance on SIMD architectures

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry


    Neural network calculations for the image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The usage of SIMD extensions is a way to speed up neural network processing available for a number of modern CPUs. In our experiments, we use ARM NEON as SIMD architecture example. The first method deals with half float data type for matrix computations. The second method describes fixed-point data type for the same purpose. The third method considers vectorized activation functions implementation. For each method we set up a series of experiments for convolutional and fully connected networks designed for image recognition task.
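    The ARM NEON intrinsics themselves are not shown here; this numpy sketch only illustrates the fixed-point idea behind the second method: quantize weights and activations to 8-bit integers with a scale factor, perform the matrix product in integer arithmetic, and rescale the result.

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.standard_normal((64, 128)).astype(np.float32)   # a fully connected layer
x = rng.standard_normal(128).astype(np.float32)

def quantize(a, bits=8):
    """Symmetric linear quantization to signed integers with a per-tensor scale."""
    scale = np.max(np.abs(a)) / (2 ** (bits - 1) - 1)
    q = np.round(a / scale).astype(np.int32)             # int32 keeps the accumulation exact
    return q, scale

qW, sW = quantize(W)
qx, sx = quantize(x)

y_float = W @ x                                          # reference float result
y_fixed = (qW @ qx).astype(np.float32) * (sW * sx)       # integer matmul, then rescale

err = np.max(np.abs(y_float - y_fixed)) / np.max(np.abs(y_float))
print("max relative error of the 8-bit fixed-point layer:", round(float(err), 4))
```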

  15. Stability analysis of discrete-time BAM neural networks based on standard neural network models

    ZHANG Sen-lin; LIU Mei-qin


    To facilitate stability analysis of discrete-time bidirectional associative memory (BAM) neural networks, they were converted into novel neural network models, termed standard neural network models (SNNMs), which interconnect linear dynamic systems and bounded static nonlinear operators. By combining a number of different Lyapunov functionals with S-procedure, some useful criteria of global asymptotic stability and global exponential stability of the equilibrium points of SNNMs were derived. These stability conditions were formulated as linear matrix inequalities (LMIs). So global stability of the discrete-time BAM neural networks could be analyzed by using the stability results of the SNNMs. Compared to the existing stability analysis methods, the proposed approach is easy to implement, less conservative, and is applicable to other recurrent neural networks.

  16. Neural-networks-based Modelling and a Fuzzy Neural Networks Controller of MCFC


    Molten carbonate fuel cells (MCFC) are a highly efficient and clean power generation technology which will soon be widely utilized. The temperature characteristics of the MCFC stack are briefly analyzed. A radial basis function (RBF) neural network identification technique is applied to set up a nonlinear temperature model of the MCFC stack, and the identification structure, algorithm and model training process are given in detail. A fuzzy controller of the MCFC stack is designed. In order to improve its online control ability, a neural network trained by the I/O data of the fuzzy controller is designed. The neural network can memorize and extend the inference rules of the fuzzy controller and substitute for the fuzzy controller to control the MCFC stack online. A detailed design of the controller is given. The validity of the MCFC stack modelling based on neural networks and the superior performance of the fuzzy neural network controller are demonstrated by simulations.

  17. Dynamic pricing by hopfield neural network

    Lusajo M Minga; FENG Yu-qiang(冯玉强); LI Yi-jun(李一军); LU Yang(路杨); Kimutai Kimeli


    The increase in the number of shopbot users in e-commerce has triggered flexibility of sellers in their pricing strategies. Sellers see the importance of automated price setting, which provides efficient services to the large number of buyers who are using shopbots. This paper studies the characteristic of decreasing energy with time in a continuous model of a Hopfield neural network, that is, the decrease of errors in the network with respect to time. The characteristic shows that it is possible to use a Hopfield neural network to get the main factor of dynamic pricing, the least variable cost, from production function principles. The least variable cost is obtained by reducing or increasing the input combination factors, and then comparing the network output with the desired output, where the difference between the network output and the desired output decreases in the same manner as the Hopfield neural network energy. The Hopfield neural network thus simplifies the rapid change of prices in e-commerce during transactions, which depends on the demand quantity in a demand-sensitive pricing model.

  18. Neutron spectrometry with artificial neural networks

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico); Iniguez de la Torre Bayo, M.P. [Universidad de Valladolid, Valladolid (Spain); Barquero, R. [Hospital Universitario Rio Hortega, Valladolid (Spain); Arteaga A, T. [Envases de Zacatecas, S.A. de C.V., Zacatecas (Mexico)]. e-mail:


    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors; spectra from mathematical functions; as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the MATLAB program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  19. Neural network technologies for image classification

    Korikov, A. M.; Tungusova, A. V.


    We analyze the classes of problems with an objective necessity to use neural network technologies, i.e. representation and resolution problems in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on information about textural characteristics. These problems occur in aerospace and seismic monitoring, materials science, medicine and other fields. We reviewed different approaches for texture description: statistical, structural, and spectral. We developed a neural network technology for resolving a practical problem of cloud image classification for satellite snapshots from the spectroradiometer MODIS. The cloud texture is described by the statistical characteristics of the GLCM (Gray Level Co-Occurrence Matrix) method. From the range of neural network models that might be applied for image classification, we chose the probabilistic neural network model (PNN) and developed an implementation which performs the classification of the main types and subtypes of clouds. We also chose experimentally the optimal architecture and parameters for the PNN model used for image classification.

  20. Representations in neural network based empirical potentials

    Cubuk, Ekin D.; Malone, Brad D.; Onat, Berk; Waterland, Amos; Kaxiras, Efthimios


    Many structural and mechanical properties of crystals, glasses, and biological macromolecules can be modeled from the local interactions between atoms. These interactions ultimately derive from the quantum nature of electrons, which can be prohibitively expensive to simulate. Machine learning has the potential to revolutionize materials modeling due to its ability to efficiently approximate complex functions. For example, neural networks can be trained to reproduce results of density functional theory calculations at a much lower cost. However, how neural networks reach their predictions is not well understood, which has led to them being used as a "black box" tool. This lack of understanding is not desirable especially for applications of neural networks in scientific inquiry. We argue that machine learning models trained on physical systems can be used as more than just approximations since they had to "learn" physical concepts in order to reproduce the labels they were trained on. We use dimensionality reduction techniques to study in detail the representation of silicon atoms at different stages in a neural network, which provides insight into how a neural network learns to model atomic interactions.

  1. Using neural networks to describe tracer correlations

    D. J. Lary


    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 till the present. The neural network Fortran code used is available for download.
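    A sketch with the same shape of model as in the record (four inputs, one hidden layer of eight nodes, one output) on synthetic data; scikit-learn's standard solver stands in for the Quickprop training used in the study, and the synthetic CH4/N2O relation is invented purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
n = 3000
lat   = rng.uniform(-90, 90, n)
press = rng.uniform(1, 300, n)          # hPa
month = rng.uniform(0, 12, n)
ch4   = rng.uniform(0.6, 1.8, n)        # ppmv, stand-in volume mixing ratios

# Invented smooth tracer-tracer relation standing in for the CH4-N2O correlation.
n2o = (320 * np.tanh(1.5 * ch4) * (1 - 0.001 * np.abs(lat))
       + 0.02 * press + 2 * np.sin(2 * np.pi * month / 12)
       + rng.normal(0, 1, n))

X = np.column_stack([lat, press, month, ch4])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X[:2500], n2o[:2500])

pred = model.predict(X[2500:])
r = np.corrcoef(pred, n2o[2500:])[0, 1]
print("correlation between simulated and target N2O:", round(float(r), 4))
```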

  2. Estimates on compressed neural networks regression.

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing


    When the number of neural elements n of a neural network is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A which does not need to satisfy the condition of the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of the feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound of the excess error is given.
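    A rough sketch of the compressed-domain idea under simplifying assumptions: a random-feature network with more hidden elements than samples has its hidden outputs multiplied by a random Gaussian projection A before the output weights are solved by least squares; the dimensions and the choice of A are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
m, n, k = 80, 400, 40          # samples, hidden neural elements (n > m), compressed size

# Random-feature single-hidden-layer network (hidden layer fixed, output weights learned).
X = rng.uniform(-1, 1, (m, 3))
y = np.sin(X @ np.array([2.0, -1.0, 0.5])) + 0.05 * rng.standard_normal(m)
W_hidden = rng.standard_normal((3, n))
H = np.tanh(X @ W_hidden)                      # m x n hidden outputs, n > m: overfitting risk

# Original domain: least-squares output weights over all n elements.
w_full = np.linalg.lstsq(H, y, rcond=None)[0]

# Compressed domain: project the hidden outputs with a random Gaussian matrix A.
A = rng.standard_normal((n, k)) / np.sqrt(k)
w_comp = np.linalg.lstsq(H @ A, y, rcond=None)[0]

# Compare on fresh data drawn from the same target function.
Xt = rng.uniform(-1, 1, (200, 3))
yt = np.sin(Xt @ np.array([2.0, -1.0, 0.5]))
Ht = np.tanh(Xt @ W_hidden)
print("test RMSE, original   domain:", round(float(np.sqrt(np.mean((Ht @ w_full - yt) ** 2))), 4))
print("test RMSE, compressed domain:", round(float(np.sqrt(np.mean((Ht @ A @ w_comp - yt) ** 2))), 4))
```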

  3. Community structure of complex networks based on continuous neural network

    Dai, Ting-ting; Shan, Chang-ji; Dong, Yan-shou


    As a new subject, the research of complex networks has attracted the attention of researchers from different disciplines. Community structure is one of the key structures of complex networks, so it is a very important task to analyze the community structure of complex networks accurately. In this paper, we study the problem of extracting the community structure of complex networks, and propose a continuous neural network (CNN) algorithm. It is proved that for any given initial value, the continuous neural network algorithm converges to the eigenvector of the maximum eigenvalue of the network modularity matrix. Therefore, according to the signs of the components of the converged network state, the division of the network into two communities can be obtained.
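    The continuous neural network dynamics are not reproduced here; the sketch below only checks the end result the record relies on, namely that the leading eigenvector of the modularity matrix splits a two-community graph by the signs of its components.

```python
import numpy as np

rng = np.random.default_rng(10)

# Build a small two-community graph: dense within blocks, sparse between them.
n_per, p_in, p_out = 15, 0.6, 0.05
n = 2 * n_per
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        same = (i < n_per) == (j < n_per)
        if rng.random() < (p_in if same else p_out):
            A[i, j] = A[j, i] = 1.0

k = A.sum(axis=1)                      # degrees
two_m = k.sum()                        # 2 * number of edges
B = A - np.outer(k, k) / two_m         # modularity matrix

# Leading eigenvector of B; its component signs give the two-community split.
eigval, eigvec = np.linalg.eigh(B)
leading = eigvec[:, np.argmax(eigval)]
labels = (leading > 0).astype(int)

truth = np.array([0] * n_per + [1] * n_per)
agree = max(np.mean(labels == truth), np.mean(labels != truth))   # up to label swap
print("fraction of nodes assigned to the correct community:", float(agree))
```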

  4. Identification and Position Control of Marine Helm using Artificial Neural Network

    Hui ZHU


    Full Text Available If nonlinearities such as saturation of the amplifier gain and motor torque, gear backlash, and shaft compliances - just to name a few - are considered in the position control system of a marine helm, traditional control methods are no longer sufficient to improve the performance of the system. In this paper an alternative approach to traditional control methods - a neural network reference controller - is proposed to establish adaptive control of the position of the marine helm so that the controlled variable tracks the commanded position. This neural network controller comprises two neural networks. One is the plant model network used to identify the nonlinear system and the other is the controller network used to control the output to follow the reference model. The experimental results demonstrate that this adaptive neural network reference controller has much better control performance than is obtained with traditional controllers.

  5. Digital systems for artificial neural networks

    Atlas, L.E. (Interactive Systems Design Lab., Univ. of Washington, WA (US)); Suzuki, Y. (NTT Human Interface Labs. (US))


    A tremendous flurry of research activity has developed around artificial neural systems. These systems have also been tested in many applications, often with positive results. Most of this work has taken place as digital simulations on general-purpose serial or parallel digital computers. Specialized neural network emulation systems have also been developed for more efficient learning and use. The authors discussed how dedicated digital VLSI integrated circuits offer the highest near-term future potential for this technology.

  6. Equivalence of Conventional and Modified Network of Generalized Neural Elements

    E. V. Konovalov


    Full Text Available The article is devoted to the analysis of neural networks consisting of generalized neural elements. The first part of the article proposes a new neural network model — a modified network of generalized neural elements (MGNE-network). This network develops the model of the generalized neural element, whose formal description contains some flaws; in the model of the MGNE-network these drawbacks are overcome. The neural network is introduced all at once, without a preliminary description of the model of a single neural element and of the method of interaction of such elements. The description of the neural network mathematical model is simplified and makes it relatively easy to construct a simulation model on its basis to conduct numerical experiments. The model of the MGNE-network is universal, uniting properties of networks consisting of neurons-oscillators and neurons-detectors. In the second part of the article we prove the equivalence of the dynamics of the two considered neural networks: the network consisting of classical generalized neural elements, and the MGNE-network. We introduce the definition of equivalence in the functioning of the generalized neural element and the MGNE-network consisting of a single element. Then we introduce the definition of the equivalence of the dynamics of the two neural networks in general. The correlation of the different parameters of the two considered neural network models is determined. We discuss the issue of matching the initial conditions of the two considered neural network models. We prove a theorem about the equivalence of the dynamics of the two considered neural networks. This theorem allows us to apply all previously obtained results for networks consisting of classical generalized neural elements to the MGNE-network.

  7. Implementing Signature Neural Networks with Spiking Neurons.

    Carrillo-Medina, José Luis; Latorre, Roberto


    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm - i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data - to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections.

  8. Implementing Signature Neural Networks with Spiking Neurons

    Carrillo-Medina, José Luis; Latorre, Roberto


    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm - i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data - to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections.

  9. Network Traffic Prediction based on Particle Swarm BP Neural Network

    Yan Zhu


    Full Text Available The traditional BP neural network algorithm has shortcomings such as a tendency to fall into local minima and slow convergence. Particle swarm optimization is an evolutionary computation technique based on swarm intelligence which cannot guarantee global convergence. The Artificial Bee Colony algorithm is a global optimization algorithm with advantages such as simplicity, convenience, and strong robustness. In this paper, a new BP neural network based on the Artificial Bee Colony and particle swarm optimization algorithms is proposed, in which these algorithms optimize the weights and threshold values of the BP neural network. Network traffic prediction experiments show that the PSO-ABC-optimized BP network achieves high prediction accuracy and stable prediction performance.
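    As a rough illustrative sketch (not the paper's PSO-ABC hybrid), plain particle swarm optimization can search the weight space of a small feed-forward network directly; in a hybrid scheme such a search would typically supply initial weights that BP then fine-tunes. The synthetic "traffic" data and network size below are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (200, 3))                 # toy inputs (assumption)
        y = np.sin(X.sum(axis=1))                        # toy target series (assumption)

        n_in, n_hid = 3, 8
        dim = n_in * n_hid + n_hid + n_hid + 1           # total number of weights and biases

        def unpack(w):
            i = 0
            W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
            b1 = w[i:i + n_hid]; i += n_hid
            W2 = w[i:i + n_hid]; i += n_hid
            b2 = w[i]
            return W1, b1, W2, b2

        def mse(w):
            W1, b1, W2, b2 = unpack(w)
            h = np.tanh(X @ W1 + b1)
            return np.mean((h @ W2 + b2 - y) ** 2)

        # standard PSO loop over the flattened weight vector
        n_particles, iters = 30, 300
        pos = rng.uniform(-1, 1, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([mse(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([mse(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best MSE found by the swarm:", pbest_val.min())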

  10. Training Deep Spiking Neural Networks Using Backpropagation.

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael


    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
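    A minimal sketch of the underlying idea of backpropagating through spike events by substituting a smooth pseudo-derivative for the non-differentiable threshold; this is a generic surrogate-gradient illustration in PyTorch, not the authors' exact formulation, and the layer size, leak factor, and toy target are assumptions.

        import torch

        class SurrogateSpike(torch.autograd.Function):
            @staticmethod
            def forward(ctx, v):
                ctx.save_for_backward(v)
                return (v > 0).float()                       # hard threshold in the forward pass
            @staticmethod
            def backward(ctx, grad_out):
                (v,) = ctx.saved_tensors
                return grad_out / (1.0 + 10.0 * v.abs()) ** 2  # smooth pseudo-derivative

        spike = SurrogateSpike.apply

        torch.manual_seed(0)
        T, batch, n_in, n_out = 20, 8, 30, 10
        W = (0.5 * torch.randn(n_in, n_out)).requires_grad_()
        x = (torch.rand(T, batch, n_in) < 0.3).float()       # random input spike trains

        v = torch.zeros(batch, n_out)
        spike_count = torch.zeros(batch, n_out)
        for t in range(T):
            v = 0.9 * v + x[t] @ W        # leaky integration of the membrane potential
            s = spike(v - 1.0)            # emit a spike when the potential crosses threshold
            v = v - s                     # soft reset after a spike
            spike_count = spike_count + s

        target = torch.full((batch, n_out), 3.0)   # desired spike counts (toy target)
        loss = ((spike_count - target) ** 2).mean()
        loss.backward()                            # gradients pass through the surrogate
        print(W.grad.abs().mean().item())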

  11. Foreign currency rate forecasting using neural networks

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad


    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP, and DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecasted solely from past exchange rates. This relies on the belief that past and future prices are closely related and interdependent. We present the results of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
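    A minimal sketch of the rate-difference formulation described above: lagged daily changes are used as network inputs to predict the next change. The synthetic series, lag count, and scikit-learn MLP are assumptions standing in for the paper's data and network.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        # synthetic daily changes with a little autocorrelation (stand-in for real USD-GBP quotes)
        diffs = np.zeros(1000)
        noise = 0.001 * rng.standard_normal(1000)
        for t in range(1, 1000):
            diffs[t] = 0.3 * diffs[t - 1] + noise[t]

        lags = 5                                           # use the last 5 daily changes as inputs
        X = np.column_stack([diffs[i:len(diffs) - lags + i] for i in range(lags)])
        y = diffs[lags:]                                   # next daily change to predict

        split = int(0.8 * len(X))
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        model.fit(X[:split], y[:split])
        print("out-of-sample R^2:", model.score(X[split:], y[split:]))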

  12. Training Deep Spiking Neural Networks using Backpropagation

    Jun Haeng Lee


    Full Text Available Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  13. Kannada character recognition system using neural network

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.


    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, which include reading aids for the blind, bank cheque processing, and the conversion of handwritten documents into structured text form. There is not a sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once training is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters are calculated and compared. The results show that the proposed system yields good recognition accuracy rates comparable to those of other handwritten character recognition systems.

  14. Parameter estimation using compensatory neural networks

    M Sinha; P K Kalra; K Kumar


    Proposed here is a new neuron model, a basis for Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has the properties of the basic neuron model as well as the higher-order neuron model (multiplicative aggregation function). It can adapt to the standard neuron and the higher-order neuron, as well as a combination of the two. This approach is found to estimate the orbit with accuracy significantly better than the Kalman Filter (KF) and the Feedforward Multilayer Neural Network (FMNN) (also simply referred to as Artificial Neural Network, ANN) with lambda-gamma learning. Typical simulation runs also bring out the superiority of the proposed scheme over the Kalman filter from the standpoint of computation time and the amount of data needed for the desired degree of estimation accuracy for the specific problem of orbit determination.
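    A minimal sketch of a neuron that blends additive (summing) and multiplicative (higher-order) aggregation, which is the flavour of compensation described above; the particular product form, the blend parameter gamma, and the numbers are assumptions rather than the paper's exact formulation.

        import numpy as np

        def compensatory_neuron(x, w, b, gamma):
            """Blend of a standard summing unit and a higher-order (product) unit."""
            additive = np.dot(w, x) + b                  # classic weighted-sum aggregation
            multiplicative = np.prod(w * x + 1.0)        # one common higher-order aggregation (assumption)
            net = gamma * additive + (1.0 - gamma) * multiplicative
            return np.tanh(net)

        x = np.array([0.2, -0.5, 0.8])
        w = np.array([0.4, 0.3, -0.6])
        print(compensatory_neuron(x, w, b=0.1, gamma=0.7))

    In a full compensatory network the blend parameter would be learned per neuron alongside the ordinary weights.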

  15. Assessing Landslide Hazard Using Artificial Neural Network

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin


    An artificial neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The artificial neural network model of this research uses slope characteristics as input and leads to the output in the form of the probability of failure... and factor of safety. It can be stated that the trained neural networks are capable of predicting the stability of slopes and the safety factor of landslide hazard in the study area with an acceptable level of confidence. Landslide hazard analysis and mapping can provide useful information for catastrophic loss... "failure", which is the main concentration of the current research, and "liquefaction failure". Shear failures along shear planes occur when the shear stress along the sliding surfaces exceeds the effective shear strength. These slides are referred to as landslides. An expert system based on artificial...

  16. Recurrent Neural Network for Computing Outer Inverse.

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin


    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
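    As a simplified illustration of the kind of dynamic equation the abstract refers to, the classical gradient neural network for inverting a nonsingular matrix can be integrated with Euler steps; the paper's networks generalize this idea to outer inverses with prescribed range and null space, which is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # well-conditioned test matrix (assumption)
        X = np.zeros((4, 4))                              # zero initial state, as in the abstract
        gamma, dt = 10.0, 1e-3                            # gain and Euler step size

        for _ in range(20000):
            # dX/dt = -gamma * A^T (A X - I): gradient flow on ||A X - I||_F^2
            X = X + dt * (-gamma * A.T @ (A @ X - np.eye(4)))

        print("max deviation from the true inverse:", np.max(np.abs(X - np.linalg.inv(A))))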

  17. Classification of radar clutter using neural networks.

    Haykin, S; Deng, C


    A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented.

  18. Cotton genotypes selection through artificial neural networks.

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B


    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding to improve fiber quality. To demonstrate the applicability of this approach, this research was carried out using the evaluation data of 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated by means of a weighted average of the scores (1 to 5) determined for each HVI characteristic evaluated, according to its industry standards. The artificial neural networks presented a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index; when using fiber length together with the short fiber index, fiber maturity, and micronaire index, the artificial neural networks presented better results than when using only fiber length and the previous associations. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better classification results. From the results obtained in the present study, it was verified that artificial neural networks have great potential to be used in the different stages of a cotton genetic improvement program, aiming at improving the fiber quality of future cultivars.

  19. Neural networks and particle physics

    Peterson, Carsten


    1. Introduction: Structure of the Central Nervous System, Generics. 2. Feed-forward networks, Perceptrons, Function approximators. 3. Self-organisation, Feature Maps. 4. Feed-back networks, The Hopfield model, Optimization problems, Deformable templates, Graph bisection.

  20. Implementation aspects of Graph Neural Networks

    Barcz, A.; Szymański, Z.; Jankowski, S.


    This article summarises the results of implementation of a Graph Neural Network classifier. The Graph Neural Network model is a connectionist model, capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function being a contraction map, which is assured by imposing a penalty on model weights. This article presents research results concerning the impact of the penalty parameter on the model training process and the practical decisions that were made during the GNN implementation process.

  1. Livermore Big Artificial Neural Network Toolkit


    LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open-source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library that is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  2. Human Face Recognition Using Convolutional Neural Networks

    Răzvan-Daniel Albu


    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  3. Spectral classification using convolutional neural networks

    Hála, Pavel


    There is a great need for accurate and autonomous spectral classification methods in astrophysics. This thesis is about training a convolutional neural network (ConvNet) to recognize an object class (quasar, star, or galaxy) from one-dimensional spectra alone. The author developed several scripts and C programs for dataset preparation, preprocessing, and postprocessing of the data. The EBLearn library (developed by Pierre Sermanet and Yann LeCun) was used to create the ConvNets. Application to a dataset of more than 60000 spectra yielded a success rate of nearly 95%. This thesis demonstrates the great potential of convolutional neural networks and deep learning methods in astrophysics.

  4. Neural networks advances and applications 2

    Gelenbe, E


    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  5. SAR ATR Based on Convolutional Neural Network

    Tian Zhuangzhuang


    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR) image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition SAR datasets prove the validity of this method.
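    A minimal sketch of the two-stage scheme described above: a small CNN is trained, its penultimate-layer activations are taken as features, and a support vector machine classifies them. Random tensors stand in for SAR chips, the class-separability term in the cost function is omitted, and the layer sizes are assumptions.

        import torch
        import torch.nn as nn
        from sklearn.svm import SVC

        torch.manual_seed(0)
        n, classes = 200, 3
        images = torch.randn(n, 1, 32, 32)                 # placeholder SAR chips (assumption)
        labels = torch.randint(0, classes, (n,))

        backbone = nn.Sequential(                          # feature extractor
            nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 5 * 5, 64), nn.ReLU())
        head = nn.Linear(64, classes)                      # temporary head used only for CNN training

        opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-3)
        for _ in range(50):                                # brief full-batch training pass
            loss = nn.functional.cross_entropy(head(backbone(images)), labels)
            opt.zero_grad(); loss.backward(); opt.step()

        with torch.no_grad():
            feats = backbone(images).numpy()               # penultimate-layer features
        svm = SVC(kernel="rbf").fit(feats, labels.numpy()) # SVM replaces the softmax head
        print("train accuracy:", svm.score(feats, labels.numpy()))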

  6. Contractor Prequalification Based on Neural Networks

    ZHANG Jin-long; YANG Lan-rong


    Contractor prequalification involves the screening of contractors by a project owner, according to a given set of criteria, in order to determine their competence to perform the work if awarded the construction contract. This paper introduces the capabilities of neural networks in solving problems related to contractor prequalification. The neural network system for contractor prequalification has an input vector of 8 components and an output vector of 1 component. The output vector represents whether a contractor is qualified or not qualified to submit a bid on a project.

  7. Simulation of photosynthetic production using neural network

    Kmet, Tibor; Kmetova, Maria


    This paper deals with neural network based optimal control synthesis for solving optimal control problems with control and state constraints and discrete time delay. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. This approach is applicable to a wide class of nonlinear systems. The proposed simulation method is illustrated by the optimal control problem of photosynthetic production described by discrete time delay differential equations. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.

  8. Top tagging with deep neural networks [Vidyo

    CERN. Geneva


    Recent literature on deep neural networks for top tagging has focussed on image based techniques or multivariate approaches using high level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  9. Intelligent neural network classifier for automatic testing

    Bai, Baoxing; Yu, Heping


    This paper is concerned with the application of a multilayer feedforward neural network to the visual inspection of industrial images, and introduces a high-performance image processing and recognition system which can be used for real-time detection of blemishes, streaks, cracks, etc. on the inner walls of high-precision pipes. To take full advantage of the capabilities of the artificial neural network, such as distributed information storage, large-scale self-adapting parallel processing, and high fault tolerance, this system uses a multilayer perceptron as a regular detector to extract features of the images to be inspected and classify them.

  10. Speech Recognition Method Based on Multilayer Chaotic Neural Network

    REN Xiaolin; HU Guangrui


    In this paper, speech recognition using neural networks is investigated. In particular, chaotic dynamics is introduced to the neurons, and a multilayer chaotic neural network (MLCNN) architecture is built. A learning algorithm is also derived to train the weights of the network. We apply the MLCNN to speech recognition and compare the performance of the network with those of the recurrent neural network (RNN) and the time-delay neural network (TDNN). Experimental results show that the MLCNN method outperforms the other neural network methods with respect to average recognition rate.

  11. Multiprocessor Realization of Neural Networks


    the unique capabilities of receiving, processing, and transmitting electro-chemical signals. These signals are sent over neural pathways that make up...these switching nodes and a clever arrangement of internode links to guarantee at least one path between each processor and memory. These types of

  12. Optically excited synapse for neural networks.

    Boyd, G D


    What can optics with its promise of parallelism do for neural networks which require matrix multipliers? An all optical approach requires optical logic devices which are still in their infancy. An alternative is to retain electronic logic while optically addressing the synapse matrix. This paper considers several versions of an optically addressed neural network compatible with VLSI that could be fabricated with the synapse connection unspecified. This optical matrix multiplier circuit is compared to an all electronic matrix multiplier. For the optical version a synapse consisting of back-to-back photodiodes is found to have a suitable i-v characteristic for optical matrix multiplication (a linear region) plus a clipping or nonlinear region as required for neural networks. Four photodiodes per synapse are required. The strength of the synapse connection is controlled by the optical power and is thus an adjustable parameter. The synapse network can be programmed in various ways such as a shadow mask of metal, imaged mask (static), or light valve or an acoustooptic scanned laser beam or array of beams (dynamic). A milliwatt from LEDs or lasers is adequate power. The neuron has a linear transfer function and is either a summing amplifier, in which case the synapse signal is current, or an integrator, in which case the synapse signal is charge, the choice of which depends on the programming mode. Optical addressing and settling times of microseconds are anticipated. Electronic neural networks using single-value resistor synapses or single-bit programmable synapses have been demonstrated in the high-gain region of discrete single-value feedback. As an alternative to these networks and the above proposed optical synapses, an electronic analog-voltage vector matrix multiplier is considered using MOSFETS as the variable conductance in CMOS VLSI. It is concluded that a shadow mask addressed (static) optical neural network is promising.

  13. Porosity Log Prediction Using Artificial Neural Network

    Dwi Saputro, Oki; Lazuardi Maulana, Zulfikar; Dzar Eljabbar Latief, Fourier


    Well logging is important in oil and gas exploration. Many physical parameters of the reservoir are derived from well logging measurements. Geophysicists often use well logging to obtain reservoir properties such as porosity, water saturation, and permeability. Most of the time, the measurement of these reservoir properties is considered expensive. One way to substitute for the measurement is to predict the property using an artificial neural network. In this paper, an artificial neural network is used to predict the porosity log from other log data. Three wells from the 'yy' field are used in the prediction experiment. The log data are the sonic, gamma ray, and porosity logs. One of the three wells is used as training data for the artificial neural network, which employs the Levenberg-Marquardt backpropagation algorithm. Through several trials, we found that the optimal training inputs are the sonic log and gamma ray log data with a 10-node hidden layer. The prediction result in well 1 has a correlation of 0.92 and a mean squared error of 5.67 x 10-4. The trained network was then applied to the other wells' data. The results show that the correlations in well 2 and well 3 are 0.872 and 0.9077, respectively, and the mean squared errors are 11 x 10-4 and 9.539 x 10-4. From these results we conclude that the sonic and gamma ray logs are a good combination for predicting porosity with a neural network.
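    A minimal sketch of the workflow described above: a small feed-forward net maps sonic and gamma-ray readings to porosity. Synthetic values stand in for the field data, and scikit-learn's LBFGS solver is used in place of the Levenberg-Marquardt training reported in the paper, which is an assumption of this sketch.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        sonic = rng.uniform(60, 120, 500)                       # placeholder sonic log, us/ft
        gamma = rng.uniform(20, 150, 500)                       # placeholder gamma-ray log, API units
        porosity = 0.002 * sonic - 0.0005 * gamma + 0.02 * rng.standard_normal(500)  # synthetic target

        X = np.column_stack([sonic, gamma])
        model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=5000, random_state=0)
        model.fit(X[:400], porosity[:400])                      # one "well" for training
        pred = model.predict(X[400:])                           # apply to held-out samples
        print("correlation:", np.corrcoef(pred, porosity[400:])[0, 1])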

  14. Autonomous robot behavior based on neural networks

    Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo


    The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this, the robot has to be able to find solutions to unknown situations, to learn from experience (that is, action procedures together with the corresponding knowledge of the workspace structure), and to recognize its working environment. The planning of the intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some of the well-known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. An adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule, and an initialization phase. The developed neural network combines the advantages of networks based on Adaptive Resonance Theory and, using the shadowed hidden layer, provides the ability to recognize slightly translated or rotated obstacles in any direction.

  15. Exploiting network redundancy for low-cost neural network realizations.

    Keegstra, H; Jansen, WJ; Nijhuis, JAG; Spaanenburg, L; Stevens, H; Udding, JT


    A method is presented to optimize a trained neural network for physical realization styles. Target architectures are embedded microcontrollers or standard cell based ASIC designs. The approach exploits the redundancy in the network, required for successful training, to replace the synaptic weighting

  16. Neutron spectrum unfolding using neural networks

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico)]


    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources, and reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the corresponding spectrum was used as output during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer, and 31 neurons in the output layer. After training, the network was tested with the Bonner spheres count rates produced by twelve neutron spectra. The network allows unfolding of the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the testing count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)
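    A minimal sketch of the 7-56-31 architecture described above, mapping seven sphere count rates to a 31-group spectrum; random tensors stand in for the IAEA spectrum compilation, and the activation functions and optimizer are assumptions.

        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        # 7 input nodes, 56 hidden neurons, 31 output neurons, as stated in the abstract
        net = nn.Sequential(nn.Linear(7, 56), nn.Sigmoid(), nn.Linear(56, 31), nn.ReLU())

        count_rates = torch.rand(100, 7)                 # placeholder Bonner-sphere count rates
        spectra = torch.rand(100, 31)                    # placeholder 31-group spectra

        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(500):
            loss = ((net(count_rates) - spectra) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        print("training MSE:", loss.item())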

  17. Analysis of Recurrent Analog Neural Networks

    Z. Raida


    Full Text Available In this paper, an original rigorous analysis of recurrent analog neural networks, which are built from opamp neurons, is presented. The analysis, which starts from an approximate model of the operational amplifier, reveals the causes of possible non-stable states and enables the convergence properties of the network to be determined. The results of the analysis are discussed in order to enable the development of original, robust, and fast analog networks. In the analysis, special attention is paid to examining the influence of real circuit elements and of the statistical parameters of the processed signals on the parameters of the network.

  18. Time-space analysis in photoelasticity images using recurrent neural networks to detect zones with stress concentration

    Briñez de León, Juan C.; Restrepo M., Alejandro; Branch, John W.


    Digital photoelasticity is based on image analysis techniques to describe the stress distribution in birefringent materials subjected to mechanical loads. However, the optical assemblies for capturing the images, the steps to extract the information, and the ambiguities of the results limit the analysis in zones with stress concentrations. These zones contain stress values that could produce a failure, making their identification important. This paper identifies zones with stress concentration in a sequence of photoelasticity images captured from a circular disc under diametral compression. The images were captured by assembling a plane polariscope around the disc, and a digital camera stored the temporal fringe colors generated during load application. Stress concentration zones were identified by modeling the temporal intensities captured by every pixel in the sequence. In this case, an Elman recurrent artificial neural network was trained to model the temporal intensities. Pixel positions near the stress concentration zones yielded different trained network parameters than pixel positions belonging to zones of lower stress concentration.
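    A minimal sketch of fitting an Elman-style recurrent network to a single pixel's temporal intensity signal, as described above; torch.nn.RNN implements the Elman recurrence, and the synthetic sinusoidal signal and network size are assumptions standing in for the fringe data.

        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        t = torch.linspace(0, 8 * 3.14159, 400)
        intensity = 0.5 + 0.5 * torch.sin(t)                  # placeholder temporal intensity of one pixel
        x = intensity[:-1].reshape(1, -1, 1)                  # input sequence
        y = intensity[1:].reshape(1, -1, 1)                   # one-step-ahead target

        rnn = nn.RNN(input_size=1, hidden_size=8, batch_first=True)   # Elman recurrence
        readout = nn.Linear(8, 1)

        opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)
        for _ in range(300):
            hidden_seq, _ = rnn(x)
            pred = readout(hidden_seq)
            loss = ((pred - y) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        print("fit MSE for this pixel:", loss.item())

    Comparing the parameters (or fit error) obtained per pixel is the kind of signal the abstract uses to separate high- and low-stress zones.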

  19. Predicting Water Levels at Kainji Dam Using Artificial Neural Networks

    The aim of this study is to develop artificial neural network models for predicting water levels at Kainji Dam, which supplies water to Nigeria's largest ...

  20. Parameter Identification by Bayes Decision and Neural Networks

    Kulczycki, P.; Schiøler, Henrik


    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  1. Development of programmable artificial neural networks

    Meade, Andrew J.


    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  2. Sparse neural networks with large learning diversity

    Gripon, Vincent


    Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages, much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint in the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple since it is based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory.

  3. The labeled systems of multiple neural networks.

    Nemissi, M; Seridi, H; Akdag, H


    This paper proposes an implementation scheme for the K-class classification problem using systems of multiple neural networks. Usually, a multi-class problem is decomposed into simple sub-problems solved independently using similar single neural networks. Because these sub-problems are not equivalent in their complexity, we propose a system that includes reinforced networks intended to solve the complicated parts of the entire problem. Our approach is inspired by principles of multi-classifier systems and labeled classification, which aim to improve the performance of networks trained by the back-propagation algorithm. We propose two implementation schemes based on both OAO (one-against-one) and OAA (one-against-all). The proposed models are evaluated using the iris and human thigh databases.

  4. Implementing Signature Neural Networks with Spiking Neurons

    José Luis Carrillo-Medina


    Full Text Available Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. Spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm - i.e., neural signatures to identify each unit in the network, local information contextualization during the processing and multicoding strategies for information propagation regarding the origin and the content of the data - to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections. These parameters also...

  5. Performance Comparison of Neural Networks for HRTFs Approximation


    In order to approximate head-related transfer functions (HRTFs), this paper employs and compares three kinds of one-input neural network models, namely multi-layer perceptron (MLP) networks, radial basis function (RBF) networks, and wavelet neural networks (WNN), so as to select the best network model for further HRTF approximation. Experimental results demonstrate that wavelet neural networks are more efficient and useful.

  6. Applications of Neural Networks in Spinning Prediction

    程文红; 陆凯


    The neural network spinning prediction model (BP and RBF networks), trained on data from the mill, can predict yarn qualities and spinning performance. The input parameters of the model are as follows: yarn count, diameter, hauteur, bundle strength, spinning draft, spinning speed, traveler number, and twist. The output parameters are: yarn evenness, thin places, tenacity and elongation, and ends-down. The predicted results match the testing data well.

  7. Temporal association in asymmetric neural networks

    Sompolinsky, H.; Kanter, I.


    A neural network model which is capable of recalling time sequences and cycles of patterns is introduced. In this model, some of the synaptic connections, Jij, between pairs of neurons are asymmetric (Jij≠Jji) and have slow dynamic response. The effects of thermal noise on the generated sequences are discussed. Simulation results demonstrating the performance of the network are presented. The model may be also useful in understanding the generation of rhythmic patterns in biological motor systems.

  8. Incremental construction of LSTM recurrent neural network

    Ribeiro, Evandsa Sabrine Lopes-Lima; Alquézar Mancho, René


    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the net to remember significant events far back in the input sequence, in order to solve long time lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTMs have been compared. These methods include cascade and ...

  9. Stability and Adaptation of Neural Networks



  10. Neural networks of human nature and nurture

    Daniel S. Levine


    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  11. Compressing Neural Networks with the Hashing Trick

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin


    As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to ...
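    A minimal sketch of the hashing trick itself: a large "virtual" weight matrix whose entries are tied, via a cheap hash of their indices, to a much smaller vector of trainable parameters. The hash function, sign trick, and sizes below are illustrative assumptions, not the exact HashedNets construction.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_out, budget = 256, 128, 1000          # 32768 virtual weights share 1000 real parameters
        real_params = 0.05 * rng.standard_normal(budget)

        rows, cols = np.meshgrid(np.arange(n_in), np.arange(n_out), indexing="ij")
        bucket = (rows * 92821 + cols * 31) % budget  # cheap deterministic index hash (assumption)
        sign = 1 - 2 * ((rows + cols) % 2)            # +/-1 sign hash to reduce correlation bias

        W = sign * real_params[bucket]                # virtual (256, 128) matrix with only 1000 distinct values
        x = rng.standard_normal(n_in)
        print((x @ W).shape)                          # behaves like a full layer in the forward pass

    During training, the gradient of every virtual entry in a bucket is accumulated into its single shared parameter, which is what keeps the memory footprint small.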

  12. Neural networks of human nature and nurture

    Daniel S. Levine


    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  13. Auto-associative nanoelectronic neural network

    Nogueira, C. P. S. M.; Guimarães, J. G. [Departamento de Engenharia Elétrica - Laboratório de Dispositivos e Circuito Integrado, Universidade de Brasília, CP 4386, CEP 70904-970 Brasília DF (Brazil)


    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.

  14. Estimation of concrete compressive strength using artificial neural network

    Kostić, Srđan; Vasović, Dejan


    In the present paper, concrete compressive strength is evaluated using a back-propagation feed-forward artificial neural network. Training of the neural network is performed using the Levenberg-Marquardt learning algorithm for four network architectures, with one, three, eight, and twelve nodes in the hidden layer, in order to avoid the occurrence of overfitting. Training, validation, and testing of the neural network are conducted for 75 concrete samples with distinct w/c ratios and amounts of superp...

  15. Analysis of Wideband Beamformers Designed with Artificial Neural Networks


    TECHNICAL REPORT 0-90-1, Analysis of Wideband Beamformers Designed with Artificial Neural Networks, by Cary Cox, Instrumentation Services Division... included. A brief tutorial on beamformers and neural networks is also provided. The study was conducted under the general supervision of Messrs. George P. Bonner, Chief

  16. Neural network method for solving elastoplastic finite element problems


    A basic optimization principle of artificial neural networks, the Lagrange Programming Neural Network (LPNN) model, is presented for solving elastoplastic finite element problems. The nonlinear problems of mechanics are represented as a neural network based optimization problem by adopting the nonlinear function as the neuron transfer function. Finally, two simple elastoplastic problems are numerically simulated. The LPNN optimization results for the elastoplastic problems are found to be comparable to those of the traditional Hopfield neural network optimization model.

  17. Combining logistic regression and neural networks to create predictive models.

    Spackman, K. A.


    Neural networks are being used widely in medicine and other areas to create predictive models from data. The statistical method that most closely parallels neural networks is logistic regression. This paper outlines some ways in which neural networks and logistic regression are similar, shows how a small modification of logistic regression can be used in the training of neural network models, and illustrates the use of this modification for variable selection and predictive model building wit...

  18. Dynamic Object Identification with SOM-based neural networks

    Aleksey Averkin


    Full Text Available In this article a number of neural networks based on self-organizing maps that can be successfully used for dynamic object identification is described. Unique SOM-based modular neural networks with vector quantized associative memory and recurrent self-organizing maps as modules are presented. The structured algorithms for learning and operation of such SOM-based neural networks are described in detail, and some experimental results and a comparison with other neural networks are given.
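    A minimal sketch of the basic self-organizing-map update rule that such modular networks build on: find the best-matching unit for each input and pull it and its neighbours toward that input. The map size, decay schedules, and random data are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        grid = 8
        weights = rng.random((grid, grid, 2))                 # 8x8 map of 2-D code vectors
        data = rng.random((2000, 2))

        for step, x in enumerate(data):
            lr = 0.5 * np.exp(-step / 1000)                   # decaying learning rate
            sigma = 3.0 * np.exp(-step / 1000)                # decaying neighbourhood radius
            dists = np.linalg.norm(weights - x, axis=2)
            bi, bj = np.unravel_index(dists.argmin(), dists.shape)    # best-matching unit
            ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))  # neighbourhood function
            weights += lr * h[..., None] * (x - weights)      # pull the BMU and its neighbours toward x

        print(weights.reshape(-1, 2)[:3])                     # a few of the learned code vectors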

  19. Remote Sensing Image Segmentation with Probabilistic Neural Networks

    LIU Gang


    This paper focuses on image segmentation with probabilistic neural networks (PNNs). Back-propagation neural networks (BPNNs) and multilayer perceptrons (MLPs) are also considered in this study. In particular, this paper investigates the implementation of PNNs in image segmentation and the optimal processing of image segmentation with a PNN. A comparison between image segmentation with PNNs and with other neural networks is given. The experimental results show that PNNs can be successfully applied to image segmentation with good results.

  20. Optimizing neural network models: motivation and case studies

    Harp, S A; T. Samad


    Practical successes have been achieved  with neural network models in a variety of domains, including energy-related industry. The large, complex design space presented by neural networks is only minimally explored in current practice. The satisfactory results that nevertheless have been obtained testify that neural networks are a robust modeling technology; at the same time, however, the lack of a systematic design approach implies that the best neural network models generally  rem...

  1. Hopfield Neural Network Approach to Clustering in Mobile Radio Networks

    JiangYan; LiChengshu


    In this paper, the Hopfield neural network (NN) algorithm is developed for selecting gateways in cluster linkage. The linked cluster (LC) architecture is assumed to achieve distributed network control in multihop radio networks through local controllers, called clusterheads, and the nodes connecting these clusterheads are defined to be gateways. Since the most critical issue in Hopfield NN models is the determination of the connection weights, we use the approach of Lagrange multipliers (LM) for its dynamic nature.

  2. A Modified Algorithm for Feedforward Neural Networks

    夏战国; 管红杰; 李政伟; 孟斌


    As the most popular learning algorithm for feedforward neural networks, the classic BP algorithm has many shortcomings. To overcome some of them, a modified learning algorithm is proposed in this article. The simulation results illustrate that the modified algorithm is more effective and practicable.

  3. Convolutional Neural Networks for SAR Image Segmentation

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten


    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  4. Psychometric Measurement Models and Artificial Neural Networks

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.


    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  5. Applying Artificial Neural Networks for Face Recognition

    Thai Hoang Le


    Full Text Available This paper introduces some novel models for all steps of a face recognition system. In the face detection step, we propose a hybrid model combining AdaBoost and an Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN are aligned by an Active Shape Model and a Multi Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on a Multi Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: a geometric feature based method and the Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of the human face. The model links many Neural Networks together, so we call it Multi Artificial Neural Network. The MIT+CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the Caltech database show the feasibility of our proposed model.

  6. Artificial neural networks in neutron dosimetry

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)


    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal, and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra, the UTA4 response matrix, and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the chi-squared test, where the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  7. Chaotic behavior of a layered neural network

    Derrida, B.; Meir, R.


    We consider the evolution of configurations in a layered feed-forward neural network. Exact expressions for the evolution of the distance between two configurations are obtained in the thermodynamic limit. Our results show that the distance between two arbitrarily close configurations always increases, implying chaotic behavior, even in the phase of good retrieval.

  8. Visualization of neural networks using saliency maps

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai


    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  9. Towards semen quality assessment using neural networks

    Linneberg, Christian; Salamon, P.; Svarer, C.


    The paper presents the methodology and results from a neural net based classification of human sperm head morphology. The methodology uses a preprocessing scheme in which invariant Fourier descriptors are lumped into “energy” bands. The resulting networks are pruned using optimal brain damage...

  10. Neural Networks for protein Structure Prediction

    Bohr, Henrik


    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  11. Nonlinear Time Series Analysis via Neural Networks

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to make an effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history to adapt our trading system behaviour based on them.

  12. Epileptiform spike detection via convolutional neural networks

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz


    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated fash...

  13. Learning chaotic attractors by neural networks

    Bakker, R; Schouten, JC; Giles, CL; Takens, F; van den Bleek, CM


    An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to short-term predict the time series. At the same time a criterion, developed by Diks, van Zwet, Takens, and de Goede (1996) is monitored th

  14. Neural Networks for protein Structure Prediction

    Bohr, Henrik


    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  15. Binaural Sound Localization Using Neural Networks


    by Brennan, involved the implementation of a neural network to model the ability of a bat to discriminate between a mealworm and an inedible object... locate, identify and capture airborne prey (6:2). The sonar returns were collected from the mealworms, spheres and disks at various rotations (90 to

  16. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding


    This paper studies brain tumor grading using multiphase MRI images and compares the results with various configurations of deep learning structures and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance of Convolutional Neural Networks, based on sensitivity and specificity, compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from the Convolutional Neural Networks.

  17. Neural networks in economic modelling : An empirical study

    Verkooijen, W.J.H.


    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a sta

  18. Combining neural networks for protein secondary structure prediction

    Riis, Søren Kamaric


    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designe...... is better than most secondary structure prediction methods based on single sequences even though this model contains much fewer parameters...

  19. Extracting Knowledge from Supervised Neural Networks in Image Processing

    Zwaag, van der Berend Jan; Slump, Kees; Spaanenburg, Lambert; Jain, R.; Abraham, A.; Faucher, C.; Zwaag, van der B.J.


    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a my

  20. Analysis of Neural Networks in Terms of Domain Functions

    Zwaag, van der Berend Jan; Slump, Cees; Spaanenburg, Lambert


    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a my

  1. Recognition of Continuous Digits by Quantum Neural Networks


    This paper describes a new kind of neural network, the Quantum Neural Network (QNN), and its application to the recognition of continuous digits. QNN combines the advantages of neural modeling and fuzzy theoretic principles. Experimental results show that an error reduction of more than 15 percent is achieved on a speaker-independent continuous digit recognition task compared with BP networks.



    For redundant manipulators, a neural network is used to tackle the velocity inverse kinematics of robot manipulators. The neural networks utilized are multi-layered perceptrons with a back-propagation training algorithm. A weight table is used to save the weights that solve the inverse kinematics under the different optimization performance criteria. Simulations verify the effectiveness of using the neural network.

  3. A Fuzzy Neural Network for Fault Pattern Recognition


    This paper combines fuzzy set theory with the ART neural network, and demonstrates some important properties of the fuzzy ART neural network algorithm. The results from an application to ball bearing diagnosis indicate that a fuzzy ART neural network achieves fast and stable recognition of fuzzy patterns.

  4. A Direct Feedback Control Based on Fuzzy Recurrent Neural Network

    李明; 马小平


    A direct feedback control system based on a fuzzy-recurrent neural network is proposed, and a method of training the weights of the fuzzy-recurrent neural network is designed by applying a modified contraction mapping genetic algorithm. Computer simulation results indicate that the fuzzy-recurrent neural network controller has excellent dynamic and static performance.

  5. [Application of artificial neural networks in infectious diseases].

    Xu, Jun-fang; Zhou, Xiao-nong


    With the development of information technology, artificial neural networks have been applied to many research fields. Due to special features such as nonlinearity, self-adaptation, and parallel processing, artificial neural networks are applied in medicine and biology. This review summarizes the application of artificial neural networks to the relevant factors, prediction, and diagnosis of infectious diseases in recent years.

  6. Prediction based chaos control via a new neural network

    Shen Liqun [School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001 (China)], E-mail:; Wang Mao [Space Control and Inertia Technology Research Center, Harbin Institute of Technology, Harbin 150001 (China)]; Liu Wanyu [School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001 (China)]; Sun Guanghui [Space Control and Inertia Technology Research Center, Harbin Institute of Technology, Harbin 150001 (China)]


    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, and the difficulty of building and training the neural network is also reduced. Simulation results for the Logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.
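
    The scheme above hinges on first learning a short-term predictor of the chaotic system. As a hedged illustration only (the Letter's architecture is not reproduced here), the sketch below fits a one-hidden-layer network to one-step prediction of the logistic map; the layer size, learning rate and trajectory length are assumed.

```python
# Illustrative sketch only (the Letter's architecture is not reproduced here): a small
# one-hidden-layer network trained by gradient descent to make one-step predictions of
# the logistic map, the short-term predictor on which a prediction-based controller rests.
import numpy as np

rng = np.random.default_rng(2)

# Logistic-map trajectory x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(2000); x[0] = 0.3
for t in range(1999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
X, Y = x[:-1, None], x[1:, None]

H = 16
W1, b1 = rng.normal(size=(1, H)), np.zeros(H)
W2, b2 = 0.1 * rng.normal(size=(H, 1)), np.zeros(1)

lr = 0.05
for _ in range(5000):
    Z = np.tanh(X @ W1 + b1)
    P = Z @ W2 + b2
    err = P - Y
    dZ = (err @ W2.T) * (1 - Z ** 2)
    W2 -= lr * Z.T @ err / len(X); b2 -= lr * err.mean(axis=0)
    W1 -= lr * X.T @ dZ / len(X);  b1 -= lr * dZ.mean(axis=0)

print("one-step prediction RMSE:", round(float(np.sqrt(np.mean((P - Y) ** 2))), 4))
```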

  7. From Designing A Single Neural Network to Designing Neural Network Ensembles

    Liu Yong; Zou Xiu-fer


    This paper introduces the supervised learning model and surveys related research work. The paper is organised as follows. A supervised learning model is first described. The bias-variance trade-off is then discussed for the supervised learning model. Based on the bias-variance trade-off, both single neural network approaches and neural network ensemble approaches are overviewed, and problems with the existing approaches are indicated. Finally, the paper concludes by specifying potential future research directions.

  8. A Fuzzy Quantum Neural Network and Its Application in Pattern Recognition

    MIAO Fuyou; XIONG Yan; CHEN Huanhuan; WANG Xingfu


    This paper proposes a fuzzy quantum neural network model combining a quantum neural network and fuzzy logic, which applies fuzzy logic to design the collapse rules of the quantum neural network, and solves the character recognition problem. Theoretical analysis and experimental results show that the fuzzy quantum neural network achieves higher recognition accuracy than the traditional neural network and the quantum neural network.

  9. Optical implementation of neural networks

    Yu, Francis T. S.; Guo, Ruyan


    An adaptive optical neuro-computing (ONC) system using inexpensive pocket-size liquid crystal televisions (LCTVs) has been developed by the graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computer has only 8×8=64 neurons, it can easily be extended to 16×20=320 neurons. The major advantages of this LCTV architecture, as compared with other reported ONCs, are low cost and operational flexibility. To test the performance, several neural net models are used. These models are Interpattern Association, Hetero-association and unsupervised learning algorithms. The system design considerations and experimental demonstrations are also included.

  10. Distribution network planning algorithm based on Hopfield neural network

    GAO Wei-xin; LUO Xian-jue


    This paper presents a new algorithm based on the Hopfield neural network to find the optimal solution for an electric distribution network. This algorithm transforms the distribution power network-planning problem into a directed graph-planning problem. The Hopfield neural network is designed to decide the in-degree of each node and is used in combination with an energy function. The new algorithm does not need to code city streets or normalize data, so the program is easier to implement. A case study applying the method to a district of 29 streets proved that an optimal solution for the planning of such a power system could be obtained in only 26 iterations. The energy function and algorithm developed in this work have the following advantages over many existing algorithms for electric distribution network planning: fast convergence and no need to code all possible lines.

  11. Neural networks in windprofiler data processing

    Weber, H.; Richner, H.; Kretzschmar, R.; Ruffieux, D.


    Wind profilers are basically Doppler radars yielding 3-dimensional wind profiles that are deduced from the Doppler shift caused by turbulent elements in the atmosphere. These signals can be contaminated by other airborne elements such as birds or hydrometeors. Using a feed-forward neural network with one hidden layer and one output unit, birds and hydrometeors can be successfully identified in non-averaged single spectra; these are subsequently removed in the wind computation. An infrared camera was used to identify birds in one of the beams of the wind profiler. After training the network with about 6000 contaminated data sets, it was able to identify contaminated data in a test data set with a reliability of 96 percent. The assumption was made that the neural network parameters obtained in the beam for which bird data was collected can be transferred to the other beams (at least three beams are needed for computing wind vectors). Comparing the evolution of a wind field with and without the neural network shows a significant improvement in wind data quality. Current work concentrates on training the network also for hydrometeors. It is hoped that the instrument's capability can thus be expanded to not only measure correct winds, but also observe bird migration, estimate precipitation and -- by combining precipitation information with vertical velocity measurement -- monitor the height of the melting layer.

  12. Color control of printers by neural networks

    Tominaga, Shoji


    A method is proposed for solving the mapping problem from the 3D color space to the 4D CMYK space of printer ink signals by means of a neural network. The CIE-L*a*b* color system is used as the device-independent color space. The color reproduction problem is considered as the problem of controlling an unknown static system with four inputs and three outputs. A controller determines the CMYK signals necessary to produce the desired L*a*b* values with a given printer. Our solution method for this control problem is based on a two-phase procedure which eliminates the need for UCR and GCR. The first phase determines a neural network as a model of the given printer, and the second phase determines the combined neural network system by combining the printer model and the controller in such a way that it represents an identity mapping in the L*a*b* color space. Then the network of the controller part realizes the mapping from the L*a*b* space to the CMYK space. Practical algorithms are presented in the form of multilayer feedforward networks. The feasibility of the proposed method is shown in experiments using a dye sublimation printer and an ink jet printer.

  13. Computationally Efficient Neural Network Intrusion Security Awareness

    Todd Vollmer; Milos Manic


    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  14. Reconstruction of periodic signals using neural networks

    José Danilo Rairán Antolines


    Full Text Available In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of a signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists in minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents a mathematical proof about the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation, thus making the algorithm suitable for real-time implementations.
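
    To make the two-stage idea concrete, the sketch below estimates the period of a synthetic signal and then recovers its Fourier coefficients. For brevity it replaces the backpropagation step over a sine-transfer neuron with a direct least-squares scan over candidate periods, so it illustrates the strategy rather than the authors' algorithm; the signal, period grid and number of harmonics are all assumptions.

```python
# Hedged sketch of the two-stage idea: estimate the period, then recover Fourier
# coefficients. The backpropagation step over a sine-transfer neuron is replaced here by
# a least-squares scan over candidate periods; signal, grid and harmonic count are assumed.
import numpy as np

t = np.linspace(0.0, 10.0, 1000)
true_period = 2.3
signal = 1.5 * np.sin(2 * np.pi * t / true_period) + 0.5 * np.cos(4 * np.pi * t / true_period)

def residual(period):
    # best-fitting fundamental sine/cosine pair at this trial period (least squares)
    A = np.column_stack([np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)])
    coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return np.mean((A @ coeffs - signal) ** 2)

candidates = np.linspace(0.5, 5.0, 2000)
period = candidates[np.argmin([residual(p) for p in candidates])]
print("estimated period:", round(float(period), 3))

# With the period fixed, the Fourier coefficients follow from ordinary least squares.
k = np.arange(1, 4)                                        # first three harmonics (assumed)
B = np.hstack([np.sin(2 * np.pi * np.outer(t, k) / period),
               np.cos(2 * np.pi * np.outer(t, k) / period)])
coeffs, *_ = np.linalg.lstsq(B, signal, rcond=None)
print("sine/cosine coefficients:", np.round(coeffs, 2))
```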

  15. Computationally Efficient Neural Network Intrusion Security Awareness

    Todd Vollmer; Milos Manic


    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  16. The Stellar parametrization using Artificial Neural Network

    Giridhar, Sunetra; Kunder, Andrea; Muneer, S; Kumar, G Selva


    An update on recent methods for automated stellar parametrization is given. We present preliminary results of the ongoing program for rapid parametrization of field stars using medium resolution spectra obtained with the Vainu Bappu Telescope at VBO, Kavalur, India. We have used Artificial Neural Networks for estimating the temperature, gravity, metallicity and absolute magnitude of the field stars. The network for each parameter is trained independently using a large number of calibrating stars. The trained network is used for estimating atmospheric parameters of unexplored field stars.

  17. Neural networks: Application to medical imaging

    Clarke, Laurence P.


    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  18. a Heterosynaptic Learning Rule for Neural Networks

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects further remote synapses of the pre- and postsynaptic neuron. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.

  19. Neural network for sonogram gap filling

    Klebæk, Henrik; Jensen, Jørgen Arendt; Hansen, Lars Kai


    In duplex imaging both an anatomical B-mode image and a sonogram are acquired, and the time for data acquisition is divided between the two images. This gives problems when rapid B-mode image display is needed, since there is not time for measuring the velocity data. Gaps then appear in the sonogram and in the audio signal, rendering the audio signal useless, thus making diagnosis difficult. The current goal for ultrasound scanners is to maintain a high refresh rate for the B-mode image and at the same time attain a high maximum velocity in the sonogram display. This precludes the intermixing... The neural network is trained on part of the data and the network is pruned by the optimal brain damage procedure in order to reduce the number of parameters in the network, and thereby reduce the risk of overfitting. The neural predictor is compared to using a linear filter for the mean and variance time...

  20. Fuzzy logic and neural network technologies

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.


    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  1. Design of Robust Neural Network Classifiers

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads


    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We suggest to adapt the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...

  2. The loading problem for recursive neural networks.

    Gori, Marco; Sperduti, Alessandro


    The present work deals with one of the major and not yet completely understood topics of supervised connectionist models. Namely, it investigates the relationships between the difficulty of a given learning task and the chosen neural network architecture. These relationships have been investigated and nicely established for some interesting problems in the case of neural networks used for processing vectors and sequences, but only a few studies have dealt with loading problems involving graphical inputs. In this paper, we present sufficient conditions which guarantee the absence of local minima of the error function in the case of learning directed acyclic graphs with recursive neural networks. We introduce topological indices which can be directly calculated from the given training set and which allow us to design a neural architecture with a local-minima-free error function. In particular, we conceive a reduction algorithm that involves both the information attached to the nodes and the topology, which significantly enlarges the class of problems with unimodal error function previously proposed in the literature.

  3. Inference and contradictory analysis for binary neural networks

    郭宝龙; 郭雷


    A weak-inference theory and a contradictory analysis for binary neural networks (BNNs) are presented. The analysis indicates that the essential reason why a neural network changes its states is the existence of superior contradiction inside the network, and that the process by which a neural network seeks a solution corresponds to eliminating the superior contradiction. Different from general constraint satisfaction networks, the solutions found by BNNs may contain inferior contradiction but not superior contradiction.

  4. Clustering in mobile ad hoc network based on neural network

    CHEN Ai-bin; CAI Zi-xing; HU De-wen


    An on-demand distributed clustering algorithm based on a neural network was proposed. The system parameters and the combined weight for each node were computed, and cluster-heads were chosen using the weighted clustering algorithm; then a training set was created and a neural network was trained. In this algorithm, several system parameters were taken into account, such as the ideal node-degree, the transmission power, the mobility and the battery power of the nodes. The algorithm can be used directly to test whether a node is a cluster-head or not. Moreover, cluster re-creation can be sped up.

  5. Pruning Neural Networks with Distribution Estimation Algorithms

    Cantu-Paz, E


    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feed-forward neural network trained with standard back propagation and public-domain and artificial data sets. The pruned networks seemed to have accuracy better than or equal to that of the original fully-connected networks. Only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but found important differences in the execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.

  6. Phase Diagram of Spiking Neural Networks

    Hamed Seyed-Allaei


    Full Text Available In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then I try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks of best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks with different parameters. This phase diagram gives an insight into the space of parameters -- excitatory to inhibitory ratio, sparseness of connections and synaptic weights. It may serve as a guideline for deciding the values of parameters in a simulation of a spiking neural network.
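
    The sketch below builds a random network with the connection probability and excitatory/inhibitory split quoted above and drives it with an impulse; the binary-threshold dynamics, network size and threshold are simplifying assumptions, not the spiking model used in the paper.

```python
# Sketch of the network construction quoted above (2% connection probability, 80%
# excitatory / 20% inhibitory neurons). The binary-threshold cascade, network size and
# threshold are crude simplifications, not the paper's spiking model.
import numpy as np

rng = np.random.default_rng(4)
N, p_conn, frac_exc = 1000, 0.02, 0.8

sign = np.where(rng.random(N) < frac_exc, 1.0, -1.0)       # excitatory (+) or inhibitory (-)
mask = rng.random((N, N)) < p_conn                         # sparse random connectivity
W = mask * sign[:, None] * rng.random((N, N))              # row i holds neuron i's outgoing weights

# Impulse response: activate a few neurons and let a threshold rule propagate activity.
active = np.zeros(N)
active[rng.choice(N, 10, replace=False)] = 1.0
for step in range(20):
    drive = active @ W                                     # summed input to each neuron
    active = (drive > 0.5).astype(float)
    print(step, int(active.sum()))
```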

  7. Gait Recognition Based on Convolutional Neural Networks

    Sokolova, A.; Konushin, A.


    In this work we investigate the problem of people recognition by their gait. For this task, we implement deep learning approach using the optical flow as the main source of motion information and combine neural feature extraction with the additional embedding of descriptors for representation improvement. In order to find the best heuristics, we compare several deep neural network architectures, learning and classification strategies. The experiments were made on two popular datasets for gait recognition, so we investigate their advantages and disadvantages and the transferability of considered methods.

  8. Fuzzy logic and neural networks basic concepts & application

    Alavala, Chennakesava R


    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of the basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks are discussed in detail through illustrative examples, methods and generic applications. Extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank

  9. Cancer classification based on gene expression using neural networks.

    Hu, H P; Niu, Z J; Bai, Y P; Tan, X H


    Based on gene expression, we have classified 53 colon cancer patients with UICC II into two groups: relapse and no relapse. Samples were taken from each patient, and gene information was extracted. Of the 53 samples examined, 500 genes were considered proper through analyses by S-Kohonen, BP, and SVM neural networks. Classification accuracy obtained by S-Kohonen neural network reaches 91%, which was more accurate than classification by BP and SVM neural networks. The results show that S-Kohonen neural network is more plausible for classification and has a certain feasibility and validity as compared with BP and SVM neural networks.

  10. Functional expansion representations of artificial neural networks

    Gray, W. Steven


    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight into architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  11. Convolutional Neural Network Based dem Super Resolution

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang


    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples. Meanwhile, a nonlocal algorithm was introduced to deal with it, and many experiments show that the strategy is feasible. In our publication, the learning examples are defined as parts of the original DEM and their related high-resolution measurements, since this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain. Yet, this may cause problems of incompatibility and a lack of robustness. To overcome this, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model is adopted. The first layer is used to detect features from the input, the second integrates the detected features into some compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, some learning DEMs are used to train it. Specifically, the designed network is optimized by minimizing the error between the output and the expected high resolution DEM. In practical applications, a testing DEM is input to the convolutional neural network and a super resolution DEM is obtained. Many experiments show that the CNN based method can obtain better reconstructions than many classic interpolation methods.
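
    A structural sketch of the three-layer design described above (feature detection, feature compression, reconstruction). The filter sizes, channel counts and input are assumptions for illustration, and the weights are random and untrained; training would minimize the error between the network output and the reference high-resolution DEM.

```python
# Structural sketch of a three-layer convolutional model (feature detection, feature
# compression, reconstruction). Filter sizes, channel counts and the input are assumed,
# and the weights are random and untrained.
import numpy as np

def conv2d(img, kernels, bias):
    """Valid-mode 2D convolution of an (H, W, C_in) array with (k, k, C_in, C_out) kernels."""
    k = kernels.shape[0]
    H, W, _ = img.shape
    out = np.zeros((H - k + 1, W - k + 1, kernels.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2])) + bias
    return out

rng = np.random.default_rng(5)
low_res_dem = rng.random((40, 40, 1))            # stand-in for an upsampled low-resolution DEM

k1 = rng.normal(scale=0.1, size=(9, 9, 1, 16))   # layer 1: detect local terrain features
k2 = rng.normal(scale=0.1, size=(1, 1, 16, 8))   # layer 2: compress the features
k3 = rng.normal(scale=0.1, size=(5, 5, 8, 1))    # layer 3: reconstruct the DEM

h1 = np.maximum(conv2d(low_res_dem, k1, np.zeros(16)), 0)
h2 = np.maximum(conv2d(h1, k2, np.zeros(8)), 0)
sr_dem = conv2d(h2, k3, np.zeros(1))
print("output patch shape:", sr_dem.shape)
```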

  12. Toward implementation of artificial neural networks that "really work".

    Leon, M. A.; Keller, J.


    Artificial neural networks are established analytical methods in biomedical research. They have repeatedly outperformed traditional tools for pattern recognition and clinical outcome prediction while assuring continued adaptation and learning. However, successful experimental neural network systems seldom reach a production state; that is, they are not incorporated into clinical information systems. It could be speculated that neural networks simply must undergo a lengthy acceptance process before they become part of the day-to-day operations of health care systems. However, our experience trying to incorporate experimental neural networks into information systems leads us to believe that there are technical and operational barriers that greatly hinder neural network implementation. A solution for these problems may be the delineation of policies and procedures for neural network implementation and the development of a new class of neural network client/server applications that fit the needs of current clinical information systems. PMID:9357613

  13. Evolving Chart Pattern Sensitive Neural Network Based Forex Trading Agents

    Sher, Gene I


    Though machine learning has been applied to the foreign exchange market for quite some time now, and neural networks have been shown to yield good results, in modern approaches neural network systems are optimized through traditional methods, and their input signals are vectors containing prices and other indicator elements. The aim of this paper is twofold: the presentation and testing of the application of topology and weight evolving artificial neural network (TWEANN) systems to automated currency trading, and the use of chart images as input to geometrical-regularity-aware, indirectly encoded neural network systems. This paper presents the benchmark results of neural network based automated currency trading systems evolved using TWEANNs, and compares the generalization capabilities of the directly encoded neural networks which use the standard price vector inputs, and the indirectly (substrate) encoded neural networks which use chart images as input. The TWEANN algorithm used to evolve these currency t...

  14. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    Liu, Qingshan; Wang, Jun


    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network in this paper is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameter. Moreover, the neural network has lower model complexities, the number of state variables of which is equal to that of the dimension of the optimization problems. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.

  15. Blood Glucose Prediction Using Artificial Neural Networks Trained with the AIDA Diabetes Simulator: A Proof-of-Concept Pilot Study

    Gavin Robertson


    Full Text Available Diabetes mellitus is a major, and increasing, global problem. However, it has been shown that, through good management of blood glucose levels (BGLs), the associated and costly complications can be reduced significantly. In this pilot study, Elman recurrent artificial neural networks (ANNs) were used to make BGL predictions based on a history of BGLs, meal intake, and insulin injections. Twenty-eight datasets (from a single case scenario) were compiled from the freeware mathematical diabetes simulator, AIDA. It was found that the most accurate predictions were made during the nocturnal period of the 24 hour daily cycle. The accuracy of the nocturnal predictions was measured as the root mean square error over five test days (RMSE5 day) not used during ANN training. For BGL predictions of up to 1 hour, an RMSE5 day of 0.15±0.04 mmol/L (±SD) was observed. For BGL predictions up to 10 hours, an RMSE5 day of 0.14±0.16 mmol/L (±SD) was observed. Future research will investigate a wider range of AIDA case scenarios, real-patient data, and data relating to other factors influencing BGLs. ANN paradigms based on real-time recurrent learning will also be explored to accommodate dynamic physiology in diabetes.
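
    For readers unfamiliar with the Elman architecture used here, the sketch below shows its defining feature: the hidden state is copied into context units and fed back at the next time step. The input layout, layer sizes and data are placeholders, and the network is left untrained; prediction quality is summarized with an RMSE, as in the study.

```python
# Minimal Elman-style recurrent network sketch: the hidden state is copied into context
# units and fed back at the next step. Input layout (e.g. past BGL, meal, insulin), layer
# sizes and the target curve are placeholders; the network is left untrained.
import numpy as np

rng = np.random.default_rng(6)
n_in, n_hidden = 3, 8
Wx = rng.normal(scale=0.3, size=(n_in, n_hidden))
Wc = rng.normal(scale=0.3, size=(n_hidden, n_hidden))      # context (recurrent) weights
Wo = rng.normal(scale=0.3, size=n_hidden)

def elman_forward(seq):
    context = np.zeros(n_hidden)
    outputs = []
    for x in seq:
        context = np.tanh(x @ Wx + context @ Wc)           # Elman feedback through context units
        outputs.append(context @ Wo)
    return np.array(outputs)

seq = rng.random((48, n_in))                               # 48 time steps of inputs
target = np.sin(np.linspace(0.0, 6.0, 48))                 # stand-in for a BGL curve
pred = elman_forward(seq)
rmse = np.sqrt(np.mean((pred - target) ** 2))
print("RMSE of the untrained network:", round(float(rmse), 3))
```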

  16. Brain Machine Interface: Analysis of segmented EEG Signal Classification Using Short-Time PCA and Recurrent Neural Networks

    C. R. Hema


    Full Text Available A brain machine interface provides a communication channel between the human brain and an external device. Brain interfaces are studied to provide rehabilitation to patients with neurodegenerative diseases; such patients lose all communication pathways except their sensory and cognitive functions. One of the possible rehabilitation methods for these patients is to provide a brain machine interface (BMI) for communication; the BMI uses the electrical activity of the brain detected by scalp EEG electrodes. Classification of EEG signals extracted during mental tasks is a technique for designing a BMI. In this paper a BMI design using five mental tasks from two subjects was studied, with a combination of two tasks studied per subject. An Elman recurrent neural network is proposed for classification of the EEG signals. Two feature extraction algorithms, using overlapped and non-overlapped signal segments, are analyzed. Principal component analysis is used for extracting features from the EEG signal segments. Classification performance with overlapping EEG signal segments is observed to be better in terms of average classification accuracy, with a range of 78.5% to 100%, while the non-overlapping EEG signal segments show better classification in terms of maximum classification accuracy.
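
    The feature-extraction step described above (overlapping segments followed by principal component analysis) can be sketched as follows; the segment length, overlap, component count and the synthetic single-channel signal are all assumptions made for illustration.

```python
# Sketch of the feature-extraction step: overlapping segments followed by principal
# component analysis. Segment length, overlap, component count and the synthetic
# single-channel signal are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(7)
eeg = rng.normal(size=4096)                                # synthetic single-channel EEG

seg_len, hop = 256, 128                                    # 50% overlap (assumed)
segments = np.array([eeg[i:i + seg_len]
                     for i in range(0, len(eeg) - seg_len + 1, hop)])

# Principal component analysis of the segment matrix via the SVD.
centered = segments - segments.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
n_components = 10
features = centered @ Vt[:n_components].T                  # per-segment features for the classifier
print("feature matrix shape:", features.shape)
```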

  17. Neural network models of categorical perception.

    Damper, R I; Harnad, S R


    Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. In 1977, Macmillan, Kaplan, and Creelman introduced the use of signal detection theory to CP studies. Anderson and colleagues simultaneously proposed the first neural model for CP, yet this line of research has been less well explored. In this paper, we assess the ability of neural-network models of CP to predict the psychophysical performance of real observers with speech sounds and artificial/novel stimuli. We show that a variety of neural mechanisms are capable of generating the characteristics of CP. Hence, CP may not be a special mode of perception but an emergent property of any sufficiently powerful general learning system.

  18. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Christopher Bergmeir


    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator (SNNS). The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  19. Development of Polymer Resins using Neural Networks

    Fabiano A. N. Fernandes


    Full Text Available The development of polymer resins can benefit from the application of neural networks, given their great ability to correlate inputs and outputs. In this work we have developed a procedure that uses neural networks to correlate the end-user properties of a polymer with the polymerization reactor's operational conditions that will produce the desired polymer. This procedure is aimed at speeding up the development of new resins and at helping to find the appropriate operational conditions to produce a given polymer resin, reducing experimentation, pilot plant tests and therefore the time and money spent on development. The procedure shown in this paper can predict the reactor's operational conditions with an error lower than 5%.

  20. Neural network correction of astrometric chromaticity

    Gai, M


    In this paper we deal with the problem of chromaticity, i.e. the apparent position variation of stellar images with their spectral distribution, using neural networks to analyse and process astronomical images. The goal is to remove this relevant source of systematic error in the data reduction of high precision astrometric experiments, like Gaia. This task can be accomplished thanks to the capability of neural networks to solve a nonlinear approximation problem, i.e. to construct a hypersurface that approximates a given set of scattered data couples. Images are encoded by associating each of them with conveniently chosen moments, evaluated along the y axis. The proposed technique, in the current framework, reduces the initial chromaticity of a few milliarcseconds to values of a few microarcseconds.

  1. Design of fiber optic adaline neural networks

    Ghosh, Anjan K.; Trepka, Jim


    Based on possible optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators we describe the design of a single-layer fiber optic Adaline neural network that can be used as a bit pattern classifier. In our design, we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The described new optical neural network design is for optical processing of guided light wave signals, not electronic signals. We analyze the convergence or learning characteristics of the optoelectronic Adaline in the presence of errors in the hardware. We show that with such an optoelectronic Adaline it is possible to detect a desired code word/token/header with good accuracy.

  2. Web Page Categorization Using Artificial Neural Networks

    Kamruzzaman, S M


    Web page categorization is one of the challenging tasks in the world of ever increasing web technologies. There are many ways of categorizing web pages based on different approaches and features. This paper proposes a new dimension in the categorization of web pages using an artificial neural network (ANN) and extracting the features automatically. Here eight major categories of web pages have been selected for categorization; these are business & economy, education, government, entertainment, sports, news & media, job search, and science. The whole process of the proposed system is done in three successive stages. In the first stage, the features are automatically extracted by analyzing the source of the web pages. The second stage includes fixing the input values of the neural network; all the values remain between 0 and 1. The variations in those values affect the output. Finally the third stage determines the class of a certain web page out of eight predefined classes. This stage i...

  3. Neural networks for aerosol particles characterization

    Berdnik, V. V.; Loiko, V. A.


    Multilayer perceptron neural networks with one, two and three inputs are built to retrieve the parameters of a spherical homogeneous nonabsorbing particle. The refractive index ranges from 1.3 to 1.7; the particle radius ranges from 0.251 μm to 56.234 μm. The logarithms of the scattered radiation intensity are used as input signals. The problem of selecting the most informative scattering angles is elucidated. It is shown that polychromatic illumination helps to increase the retrieval accuracy significantly. In the absence of measurement errors the relative error of radius retrieval by the neural network with three inputs is 0.54%, and the relative error of refractive index retrieval is 0.84%. The effect of measurement errors on the result of retrieval is simulated.

  4. Supervised Sequence Labelling with Recurrent Neural Networks

    Graves, Alex


    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  5. Neural Network Program Package for Prosody Modeling

    J. Santarius


    Full Text Available This contribution describes the program package for one part of automatic Text-to-Speech (TTS) synthesis. Some experiments (for example [14]) documented the considerable improvement of the naturalness of synthetic speech, but this approach requires completing the input feature values by hand. This completion takes a lot of time for big files. We need to improve the prosody by other approaches which use only automatically classified features (input parameters). The artificial neural network (ANN) approach is used for the modeling of prosody parameters. The program package contains all modules necessary for text and speech signal pre-processing, neural network training, sensitivity analysis, result processing, and a module for the creation of the input data protocol for the Czech speech synthesizer ARTIC [1].

  6. Face Recognition using Eigenfaces and Neural Networks

    Mohamed Rizon


    Full Text Available In this study, we develop a computational model to identify the face of an unknown person by applying eigenfaces. The eigenfaces have been applied to extract the basic features of human face images. The eigenfaces are then projected onto the human faces to identify unique feature vectors. These significant feature vectors can be used to identify an unknown face by using a backpropagation neural network that utilizes Euclidean distance for classification and recognition. The ORL database used for this investigation consists of 400 face images of 40 people and was used for the learning. The eigenfaces computation, including Jacobi's method for eigenvalues and eigenvectors, has been performed. The classification and recognition using a backpropagation neural network showed impressive positive results in classifying face images.
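
    A hedged sketch of the eigenface pipeline described above, run on synthetic data with assumed image sizes and component counts; the backpropagation classifier the study uses is simplified here to plain Euclidean-distance matching of the projected feature vectors.

```python
# Hedged sketch of the eigenface pipeline on synthetic "images": principal components of
# the training faces give the eigenfaces, each face is projected to a feature vector, and
# an unknown face is matched by Euclidean distance (the paper's backpropagation classifier
# is simplified to nearest-neighbour matching here). Sizes and counts are assumed.
import numpy as np

rng = np.random.default_rng(8)
n_people, imgs_per_person, img_dim = 40, 10, 32 * 32
faces = rng.random((n_people * imgs_per_person, img_dim))  # stand-in for the ORL images
labels = np.repeat(np.arange(n_people), imgs_per_person)

mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:50]                                       # keep 50 eigenfaces (assumed)
train_feats = centered @ eigenfaces.T

def recognize(face):
    feat = (face - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(train_feats - feat, axis=1)
    return int(labels[np.argmin(dists)])

print("predicted identity of training face 0:", recognize(faces[0]))
```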

  7. Hierarchical Neural Network Structures for Phoneme Recognition

    Vasquez, Daniel; Minker, Wolfgang


    In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are evaluated on the phoneme recognition task, where a Hybrid Hidden Markov Model/Artificial Neural Network paradigm is used. The baseline hierarchical scheme consists of two levels, each of which is based on a Multilayer Perceptron. Additionally, the output of the first level serves as the second level's input. The computational speed of the phoneme recognizer can be substantially increased by removing redundant information still contained in the first level output. Several techniques based on temporal and phonetic criteria have been investigated to remove this redundant information. The computational time could be reduced by 57% whilst keeping the system accuracy comparable to the baseline hierarchical approach.

  8. Multi-Dimensional Recurrent Neural Networks

    Graves, Alex; Schmidhuber, Juergen


    Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping, and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.

  9. On analog implementations of discrete neural networks

    Beiu, V.; Moore, K.R.


    The paper will show that in order to obtain minimum size neural networks (i.e., size-optimal) for implementing any Boolean function, the nonlinear activation function of the neurons has to be the identity function. The authors briefly present many results dealing with the approximation capabilities of neural networks, and detail several bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions they show that implementing Boolean functions can be done using neurons having an identity nonlinear function. It follows that size-optimal solutions can be obtained only using analog circuitry. Conclusions and several comments on the required precision end the paper.

  10. Learning in Neural Networks: VLSI Implementation Strategies

    Duong, Tuan Anh


    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  11. Applying neural networks to optimize instrumentation performance

    Start, S.E.; Peters, G.G.


    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  12. Identifying Tracks Duplicates via Neural Network

    Sunjerga, Antonio; CERN. Geneva. EP Department


    The goal of the project is to study the feasibility of state-of-the-art machine learning techniques in track reconstruction. Machine learning techniques provide promising ways to speed up the pattern recognition of tracks by adding more intelligence to the algorithms. The implementation of a neural network for the process of identifying track duplicates will be discussed. Different approaches are shown and the results are compared to the method that is currently in use.

  13. Neural Network-Based Hyperspectral Algorithms


    Walter F. Smith, Jr. and Juanita Sandidge, Naval Research Laboratory, Code 7340, Bldg 1105, Stennis Space... Our effort is the development of robust numerical inversion algorithms, which will retrieve inherent optical properties of the water column as well as... validate the resulting inversion algorithms with in-situ data and provide estimates of the error bounds associated with the inversion algorithm.

  14. Diagnosing process faults using neural network models

    Buescher, K.L.; Jones, R.D.; Messina, M.J.


    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.

  15. Artificial Neural Networks in Stellar Astronomy

    R. K. Gulati


    Full Text Available The next generation of optical spectroscopic surveys, such as the Sloan Digital Sky Survey and the 2 degree Field survey, will provide large stellar databases. New tools will be required to extract useful information from these. We show applications of artificial neural networks to stellar databases. In another application of this method, we predict spectral and luminosity classes from a catalog of spectral indices. We assess the importance of such methods for stellar population studies.

  16. Neural Networks with Complex and Quaternion Inputs

    Rishiyur, Adityan


    This article investigates Kak neural networks, which can be instantaneously trained, for complex and quaternion inputs. The performance of the basic algorithm has been analyzed, showing how it provides a plausible model of human perception and understanding of images. The motivation for studying quaternion inputs is their use in representing spatial rotations that find applications in computer graphics, robotics, global navigation, computer vision and the spatial orientation of instruments. ...

  17. Adaptive Filtering Using Recurrent Neural Networks

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.


    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  18. Neural Networks in Chemical Reaction Dynamics

    Raff, Lionel; Hagan, Martin


    This monograph presents recent advances in neural network (NN) approaches and applications to chemical reaction dynamics. Topics covered include: (i) the development of ab initio potential-energy surfaces (PES) for complex multichannel systems using modified novelty sampling and feedforward NNs; (ii) methods for sampling the configuration space of critical importance, such as trajectory and novelty sampling methods and gradient fitting methods; (iii) parametrization of interatomic potential functions using a genetic algorithm accelerated with a NN; (iv) parametrization of analytic interatomic

  19. A Bionic Neural Network for Fish-Robot Locomotion

    Dai-bing Zhang; De-wen Hu; Lin-cheng Shen; Hai-bin Xie


    A bionic neural network for fish-robot locomotion is presented. The bionic neural network, inspired by the fish neural network, consists of one high level controller and one chain of central pattern generators (CPGs). Each CPG contains a nonlinear neural Zhang oscillator which shows properties similar to the sine-cosine model. Simulation results show that the bionic neural network presents a good performance in controlling the fish-robot to execute various motions such as startup, stop, forward swimming, backward swimming, turn right and turn left.
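
    As a rough, hedged illustration of a CPG chain (not the Zhang-oscillator model of the paper), the sketch below couples simple sine-style phase oscillators with a fixed phase lag so that a travelling body wave suitable for forward swimming emerges; the joint count, frequency, amplitude and coupling constant are assumed values.

```python
# Rough illustration of a chain of central pattern generators: coupled sine-style phase
# oscillators with a fixed phase lag produce a travelling body wave for forward swimming.
# Joint count, frequency, amplitude and coupling are assumed values, and this is not the
# Zhang-oscillator model used in the paper.
import numpy as np

n_joints, dt, steps = 6, 0.01, 1000
freq = 1.0                                                 # Hz (assumed)
phase_lag = 2 * np.pi / n_joints                           # desired lag between neighbouring CPGs
amplitude = 0.4                                            # rad (assumed joint amplitude)
coupling = 2.0

phases = np.zeros(n_joints)
history = []
for _ in range(steps):
    dphi = 2 * np.pi * freq * np.ones(n_joints)
    # each CPG is pulled toward its upstream neighbour's phase shifted by the desired lag
    dphi[1:] += coupling * np.sin(phases[:-1] - phases[1:] - phase_lag)
    phases += dt * dphi
    history.append(amplitude * np.sin(phases))

joint_angles = np.array(history)                           # (steps, n_joints) joint commands
print("final joint angles (rad):", np.round(joint_angles[-1], 2))
```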

  20. Fast implementation of neural network classification

    Seo, Guiwon; Ok, Jiheon; Lee, Chulhee


    Most artificial neural networks use nonlinear activation functions such as the sigmoid and hyperbolic tangent, which incur high complexity costs, particularly during hardware implementation. In this paper, we propose new polynomial approximation methods for nonlinear activation functions that can substantially reduce complexity without sacrificing performance. The proposed approximation methods were applied to pattern classification problems. Experimental results show that the processing time was reduced by up to 50% without any performance degradation in computer simulations.
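
    To illustrate the general idea (the paper's specific polynomials, orders and fitting ranges are not reproduced here), a low-order polynomial can be fitted to tanh over a bounded input range and substituted for the exact function:

```python
# Sketch: approximate tanh on [-4, 4] with a degree-5 polynomial and report the
# worst-case error. The range and polynomial degree are assumed for illustration.
import numpy as np

x = np.linspace(-4.0, 4.0, 2001)
coeffs = np.polyfit(x, np.tanh(x), deg=5)   # least-squares polynomial fit
poly_tanh = np.poly1d(coeffs)

max_err = np.max(np.abs(poly_tanh(x) - np.tanh(x)))
print(f"max |poly - tanh| on [-4, 4]: {max_err:.4f}")
```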

  1. Multilingual Text Detection with Nonlinear Neural Network

    Lin Li


    Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features and combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of text features. We also develop a novel nonlinear network based on the traditional Convolutional Neural Network that is able to detect multilingual text regions in images. The proposed method is evaluated on standard benchmarks and a multilingual dataset and demonstrates improvement over previous work.

  2. Hindcasting of storm waves using neural networks

    Rao, S.; Mandal, S.

    Nomenclature: NN, neural network; net_i, weighted sum of the inputs of neuron i; o_k, network output at the kth output node; P, total number of training patterns; s_i, output of neuron i; t_k, target output at the kth output node; w_ij, weight from neuron j to neuron i; YM, Young's model. Introduction (excerpt): Severe storms occur in the Bay of Bengal... useful in the planning and maintenance of marine activities. Wave hindcasting is a non-real-time application of numerical wave models in the broad field of climatology.

  3. Deep learning in neural networks: an overview.

    Schmidhuber, Jürgen


    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  4. Rule Extraction Algorithm for Deep Neural Networks: A Review

    Hailesilassie, Tameru


    Despite achieving the highest classification accuracy in a wide variety of application areas, artificial neural networks have one disadvantage: the way the network comes to a decision is not easily comprehensible. This lack of explanation ability reduces the acceptability of neural networks in data mining and decision systems, and it is the reason why researchers have proposed many rule extraction algorithms. Recently, Deep Neural Network (DNN) is achieving a profound result ove...

  5. Classification of Respiratory Sounds by Using An Artificial Neural Network


    Sezgin, M.C.; Dokur, Z.; Ölmez, T.; Korürek, M., Department of Electronics and... "...successfully classified by the GAL network." Keywords: Respiratory Sounds, Classification of Biomedical Signals, Artificial Neural Network. Introduction (excerpt): "...process, feature extraction, and classification by the artificial neural network. At first, the RS signal obtained from a real-time measurement equipment is..."

  6. Efficient implementation of neural network deinterlacing

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee


    Interlaced scanning has been widely used in most broadcasting systems. However, there are some undesirable artifacts such as jagged patterns, flickering, and line twitter. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP, and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high resolution video contents such as HDTV, the amount of video data to be processed is very large. As a result, the processing time and hardware complexity become an important issue. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated into hardware implementations.

  7. Functional model of biological neural networks.

    Lo, James Ting-Ho


    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  8. File access prediction using neural networks.

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar


    One of the most vexing issues in the design of a high-speed computer is the wide gap between memory and disk access times. To address this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors based on neural networks that, with proper tuning, significantly improve the accuracy, success-per-reference, and effective-success-rate-per-reference. In particular, we verified that the misprediction rate is reduced from 53.11% to 43.63% when the proposed neural network predictor with a standard configuration is used instead of the recent popularity (RP) method, and manual tuning for each trace improves the misprediction rate and effective-success-rate-per-reference beyond the standard configuration. Simulations on distributed file system (DFS) traces reveal that the exact fit radial basis function (RBF) network gives better predictions in high-end systems, whereas the multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation outperforms it in systems with good computational capability. Probabilistic and competitive predictors are the most suitable for workstations with limited resources, and the former is more efficient than the latter for servers handling the maximum number of system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than the simple perceptron, last successor, stable successor, and best k out of m predictors.

  9. Neural Network Approach for Eye Detection

    Vijayalaxmi,; Sreehari, S


    Driving support systems, such as car navigation systems, are becoming common, and they support the driver in several aspects. Non-intrusive detection of fatigue and drowsiness based on eye-blink count and eye-directed instruction control helps the driver avoid collisions caused by drowsy driving. Eye detection and tracking under various conditions such as illumination, background, face alignment and facial expression make the problem complex. A neural-network-based algorithm is proposed in this paper to detect the eyes efficiently. In the proposed algorithm, the neural network is first trained to reject non-eye regions based on images with eye features and images with non-eye features, using Gabor filters and Support Vector Machines to reduce the dimension and classify efficiently. In the algorithm, the face is first segmented using the L*a*b color space, then the eyes are detected using HSV and a neural network approach. The algorithm is tested on nearly 100 images of different persons under...

  10. Artificial Neural Network Model for Predicting Compressive

    Salim T. Yousif


    Compressive strength is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successfully developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
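
    As a sketch of the kind of back-propagation regression model described, the snippet below trains a small feedforward network on synthetic mix-proportion data; the data-generating rule, feature ranges and network settings are all assumed stand-ins, since the literature data sets used in the study are not reproduced here.

```python
# Sketch: feedforward regression from mix-proportion features to compressive
# strength. The synthetic data below only stand in for the literature data sets
# used in the study; the toy strength rule is not a real concrete model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# features: cement, water, aggregate (kg/m^3), max aggregate size (mm), slump (mm)
X = rng.uniform([250, 140, 1600, 10, 25], [450, 220, 2000, 40, 200], size=(n, 5))
w_c = X[:, 1] / X[:, 0]                        # water/cement ratio
y = 90.0 - 80.0 * w_c + rng.normal(0, 3, n)    # toy strength relation (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```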

  11. The next generation of neural network chips

    Beiu, V.


    There have been many national and international neural networks research initiatives: USA (DARPA, NIBS), Canada (IRIS), Japan (HFSP) and Europe (BRAIN, GALATEA, NERVES, ELENE NERVES 2) -- just to mention a few. Recent developments in the field of neural networks, cognitive science, bioengineering and electrical engineering have made it possible to understand more about the functioning of large ensembles of identical processing elements. There are more research papers than ever proposing solutions, and hardware implementations are by no means an exception. Two fields (computing and neuroscience) are interacting in ways nobody could imagine just several years ago, and -- with the advent of new technologies -- researchers are focusing on trying to copy the Brain. Such an exciting confluence may quite shortly lead to revolutionary new computers, and it is the aim of this invited session to bring to light some of the challenging research aspects dealing with the hardware realizability of future intelligent chips. Present-day (conventional) technology is (still) mostly digital and, thus, occupies wider areas and consumes much more power than the solutions envisaged. The innovative algorithmic and architectural ideas should represent important breakthroughs, paving the way towards making neural network chips available to the industry at competitive prices, in relatively small packages and consuming a fraction of the power required by equivalent digital solutions.

  12. Phase Transitions in Living Neural Networks

    Williams-Garcia, Rashid Vladimir

    Our nervous systems are composed of intricate webs of interconnected neurons interacting in complex ways. These complex interactions result in a wide range of collective behaviors with implications for features of brain function, e.g., information processing. Under certain conditions, such interactions can drive neural network dynamics towards critical phase transitions, where power-law scaling is conjectured to allow optimal behavior. Recent experimental evidence is consistent with this idea and it seems plausible that healthy neural networks would tend towards optimality. This hypothesis, however, is based on two problematic assumptions, which I describe and for which I present alternatives in this thesis. First, critical transitions may vanish due to the influence of an environment, e.g., a sensory stimulus, and so living neural networks may be incapable of achieving "critical" optimality. I develop a framework known as quasicriticality, in which a relative optimality can be achieved depending on the strength of the environmental influence. Second, the power-law scaling supporting this hypothesis is based on statistical analysis of cascades of activity known as neuronal avalanches, which conflate causal and non-causal activity, thus confounding important dynamical information. In this thesis, I present a new method to unveil causal links, known as causal webs, between neuronal activations, thus allowing for experimental tests of the quasicriticality hypothesis and other practical applications.


    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu


    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, 47 from screened coal and the rest from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick-stop training procedure; therefore, the samples were split into training, calibration and prediction subsets. Special techniques using genetic algorithms were developed to split the sample representatively into the three subsets. Two separate approaches were tried. In one approach, the screened and unscreened coal were modeled separately; in the other, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually, i.e., though each individual prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (averages of 6-9 samples), but not at 20 seconds (each prediction).

  14. Identifying Broadband Rotational Spectra with Neural Networks

    Zaleski, Daniel P.; Prozument, Kirill


    A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al. "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer." Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al. "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal." J. Mol. Spectrosc., 2015, 312, 13-21. Bishop. "Neural networks for pattern recognition." Oxford University Press, 1995.

  15. Neural network parameters affecting image classification

    K.C. Tiwari


    This study assesses the behaviour and impact of various neural network parameters on the classification accuracy of remotely sensed images; the work resulted in the successful classification of an IRS-1B LISS II image of Roorkee and its surrounding areas using neural network classification techniques. The method can be applied to various defence applications, such as the identification of enemy troop concentrations and logistical planning in deserts by identifying suitable areas for vehicular movement. Five parameters were selected: training sample size, number of hidden layers, number of hidden nodes, learning rate and momentum factor. In each case, sets of values were decided based on earlier reported work. Neural-network-based classifications were carried out for as many as 450 combinations of these parameters, and a graphical analysis of the results was carried out to understand the relationships among them. A table of recommended values for these parameters for achieving 90 per cent and higher classification accuracy was generated and used in the classification of an IRS-1B LISS II image. The analysis suggests the existence of an intricate relationship among these parameters and calls for a wider series of classification experiments as well as a more detailed analysis of the relationships.
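
    A sketch of how such a parameter sweep could be organized (the parameter values, data and classifier settings below are illustrative, not the 450 combinations used in the study):

```python
# Sketch: sweep hidden-layer size, learning rate and momentum for an MLP
# classifier and record the accuracy of each combination. All values and the
# synthetic data are illustrative only.
import itertools
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

grid = itertools.product([5, 10, 20],     # hidden nodes
                         [0.001, 0.01],   # learning rate
                         [0.5, 0.9])      # momentum factor
for hidden, lr, mom in grid:
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), solver="sgd",
                        learning_rate_init=lr, momentum=mom,
                        max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print(hidden, lr, mom, round(clf.score(X_te, y_te), 3))
```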

  16. Markovian architectural bias of recurrent neural networks.

    Tino, Peter; Cernanský, Michal; Benusková, Lubica


    In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information processing states even prior to training [1], [2]. By concentrating on activation clusters in RNNs, while not throwing away the continuous state space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models, called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during the training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as the "null" base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure. Index Terms-Complex symbolic sequences, information latching problem, iterative function systems, Markov models, recurrent neural networks (RNNs).

  17. Artificial neural network applications in ionospheric studies

    L. R. Cander


    The ionosphere of Earth exhibits considerable spatial changes and has large temporal variability on various timescales related to the mechanisms of creation, decay and transport of space ionospheric plasma. Many techniques for modelling electron density profiles through the entire ionosphere have been developed in order to solve the "age-old problem" of ionospheric physics, which has not yet been fully solved. A new way to address this problem is by applying artificial intelligence methodologies to the current large amounts of solar-terrestrial and ionospheric data. It is the aim of this paper to show, by the most recent examples, that modern development of numerical models for ionospheric monthly median long-term prediction and daily hourly short-term forecasting may proceed successfully by applying artificial neural networks. The performance of these techniques is illustrated with different artificial neural networks developed to model and predict the temporal and spatial variations of the ionospheric critical frequency, foF2, and Total Electron Content (TEC). Comparisons between results obtained by the proposed approaches and measured foF2 and TEC data provide prospects for future applications of artificial neural networks in ionospheric studies.

  18. Improved Extension Neural Network and Its Applications

    Yu Zhou


    Extension neural network (ENN) is a new neural network that combines extension theory and the artificial neural network (ANN). The learning algorithm of ENN is a supervised learning algorithm. One of the important issues in the field of classification and recognition with ENN is how to achieve the best possible classifier with a small number of labeled training data. Training data selection is an effective approach to this issue. In this work, in order to improve the supervised learning performance and expand the engineering application range of ENN, we use a novel data selection method based on shadowed sets to refine the training data set of ENN. Firstly, we use a clustering algorithm to label the data and induce shadowed sets. Then, in the framework of shadowed sets, the samples located around each cluster center (core data) and the borders between clusters (boundary data) are selected as training data. Lastly, we use the selected data to train ENN. Compared with the traditional ENN, the proposed improved ENN (IENN) has better performance. Moreover, IENN is independent of the supervised learning algorithms and initial labeled data. Experimental results verify the effectiveness and applicability of the proposed work.
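
    A rough sketch of the core/boundary selection idea, approximating shadowed sets with simple distance percentiles around k-means centres (this is only an analogy to the authors' shadowed-set formulation, and the thresholds are assumed):

```python
# Sketch: keep "core" samples close to each cluster centre and "boundary"
# samples between clusters as training data. The percentile thresholds are
# assumed; the authors' shadowed-set formulation is not reproduced here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
core = dists < np.percentile(dists, 30)        # near a cluster centre
boundary = dists > np.percentile(dists, 80)    # between clusters
train_idx = np.where(core | boundary)[0]
print("selected", train_idx.size, "of", X.shape[0], "samples for training")
```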

  19. A new approach to artificial neural networks.

    Baptista Filho, B D; Cabral, E L; Soares, A J


    A novel approach to artificial neural networks is presented. The philosophy of this approach is based on two aspects: the design of task-specific networks, and a new neuron model with multiple synapses. The synapses' connective strengths are modified through selective and cumulative processes conducted by axo-axonic connections from a feedforward circuit. This new concept was applied to the position control of a planar two-link manipulator exhibiting excellent results on learning capability and generalization when compared with a conventional feedforward network. In the present paper, the example shows only a network developed from a neuronal reflexive circuit with some useful artifices, nevertheless without the intention of covering all possibilities devised.

  20. Microscopic instability in recurrent neural networks

    Yamanaka, Yuzuru; Amari, Shun-ichi; Shinomoto, Shigeru


    In a manner similar to the molecular chaos that underlies the stable thermodynamics of gases, a neuronal system may exhibit microscopic instability in individual neuronal dynamics while a macroscopic order of the entire population possibly remains stable. In this study, we analyze the microscopic stability of a network of neurons whose macroscopic activity obeys stable dynamics, expressing either a monostable, bistable, or periodic state. We reveal that the network exhibits a variety of dynamical states of microscopic instability residing within a given stable macroscopic dynamics. The presence of a variety of dynamical states in such a simple random network implies more abundant microscopic fluctuations in real neural networks, which consist of more complex and hierarchically structured interactions.

  1. Neural networks optimally trained with noisy data

    Wong, K. Y. Michael; Sherrington, David


    We study the retrieval behaviors of neural networks which are trained to optimize their performance for an ensemble of noisy example patterns. In particular, we consider (1) the performance overlap, which reflects the performance of the network in an operating condition identical to the training condition; (2) the storage overlap, which reflects the ability of the network to merely memorize the stored information; (3) the attractor overlap, which reflects the precision of retrieval for dilute feedback networks; and (4) the boundary overlap, which defines the boundary of the basin of attraction, and hence the associative ability for dilute feedback networks. We find that for sufficiently low training noise, the network optimizes its overall performance by sacrificing the individual performance of a minority of patterns, resulting in a two-band distribution of the aligning fields. For a narrow range of storage level, the network loses and then regains its retrieval capability when the training noise level increases, and we interpret that this reentrant retrieval behavior is related to competing tendencies in structuring the basins of attraction for the stored patterns. Reentrant behavior is also observed in the space of synaptic interactions, in which the replica symmetric solution of the optimal network destabilizes and then restabilizes when the training noise level increases. We summarize these observations by picturing training noises as an instrument for widening the basins of attractions of the stored patterns at the expense of reducing the precision of retrieval.

  2. Fuzzy Neural Network Based Traffic Prediction and Congestion Control in High-Speed Networks

    费翔; 何小燕; 罗军舟; 吴介一; 顾冠群


    Congestion control is one of the key problems in high-speed networks, such as ATM. In this paper, a traffic prediction and preventive congestion control scheme is proposed using a neural network approach. A traditional predictor using a BP neural network suffers from long convergence times and unsatisfactory error. The fuzzy neural network developed in this paper solves these problems satisfactorily. Simulations compare the no-feedback control scheme, the reactive control scheme and the neural-network-based control scheme.

  3. A recurrent neural network approach to quantitatively studying solar wind effects on TEC derived from GPS; preliminary results

    J. B. Habarulema


    This paper attempts to describe the search for the parameter(s) to represent solar wind effects in Global Positioning System total electron content (GPS TEC) modelling using the technique of neural networks (NNs). A study is carried out by including solar wind velocity (Vsw), proton number density (Np) and the Bz component of the interplanetary magnetic field (IMF Bz) obtained from the Advanced Composition Explorer (ACE) satellite as separate inputs to the NN, each along with the day number of the year (DN), hour (HR), a 4-month running mean of the daily sunspot number (R4) and the running mean of the previous eight 3-hourly magnetic A index values (A8). Hourly GPS TEC values derived from a dual frequency receiver located at Sutherland (32.38° S, 20.81° E), South Africa, for 8 years (2000–2007) have been used to train the Elman neural network (ENN), and the result has been used to predict TEC variations for a GPS station located at Cape Town (33.95° S, 18.47° E). Quantitative results indicate that each of the parameters considered may have some degree of influence on GPS TEC at certain periods, although a decrease in prediction accuracy is also observed for some parameters for different days and seasons. It is also evident that there is still a difficulty in predicting TEC values during disturbed conditions. The improvements and degradation in prediction accuracies are both close to the benchmark values, which lends weight to the belief that diurnal, seasonal, solar and magnetic variabilities may be the major determinants of TEC variability.
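
    For readers unfamiliar with the Elman architecture used in this study, a bare-bones NumPy sketch of its forward pass is given below; the layer sizes, weights and inputs are illustrative only and do not correspond to the configuration trained on the Sutherland data.

```python
# Bare-bones Elman (simple recurrent) network forward pass: the hidden state of
# the previous step is fed back through context units. Sizes, weights and
# inputs are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 7, 12, 1          # e.g. time/solar/geomagnetic inputs -> TEC
W_xh = rng.normal(0, 0.1, (n_hidden, n_in))
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))   # context (feedback) weights
W_hy = rng.normal(0, 0.1, (n_out, n_hidden))

def elman_forward(x_seq):
    h = np.zeros(n_hidden)                # context units start at zero
    outputs = []
    for x in x_seq:
        h = np.tanh(W_xh @ x + W_hh @ h)  # new hidden state uses the old one
        outputs.append(W_hy @ h)
    return np.array(outputs)

hourly_inputs = rng.normal(size=(24, n_in))   # one day of hourly input vectors
print(elman_forward(hourly_inputs).shape)     # (24, 1)
```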

  4. Models of neural networks with fuzzy activation functions

    Nguyen, A. T.; Korikov, A. M.


    This paper investigates the application of a new form of neuron activation functions that are based on the fuzzy membership functions derived from the theory of fuzzy systems. On the basis of the results regarding neuron models with fuzzy activation functions, we created models of fuzzy neural networks. These fuzzy neural network models differ from conventional networks that implement fuzzy inference systems with the methods of neural networks: while conventional fuzzy neural networks belong to the first type, the fuzzy neural networks proposed here are defined as second-type models. The simulation results show that the proposed second-type model can successfully solve the problem of property prediction for time-dependent signals. Neural networks with fuzzy impulse activation functions can be widely applied in many fields of science, technology and mechanical engineering to solve problems of classification, prediction, approximation, etc.
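
    As a minimal illustration of a membership-function-shaped activation (my own example, not the authors' model), a Gaussian fuzzy membership function can be used in place of a sigmoid for a single neuron:

```python
# Sketch: a neuron whose activation is a Gaussian fuzzy membership function
# instead of a sigmoid. Centre, width, weights and inputs are illustrative.
import numpy as np

def gaussian_membership(z, centre=0.0, width=1.0):
    """Degree of membership of the weighted input z, a value in [0, 1]."""
    return np.exp(-((z - centre) ** 2) / (2.0 * width ** 2))

w = np.array([0.4, -0.7, 0.2])   # synaptic weights
x = np.array([1.0, 0.5, -1.5])   # input vector
z = w @ x                        # weighted sum of the inputs
print(gaussian_membership(z))
```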

  5. Time Series Prediction based on Hybrid Neural Networks

    S. A. Yarushev


    In this paper, we suggest using a hybrid approach to the time series forecasting problem. In the first part of the paper, we review time series forecasting methods based on hybrid neural networks and neuro-fuzzy approaches. Hybrid neural networks are especially effective for specific types of applications such as forecasting or classification problems, in contrast to traditional monolithic neural networks. These classes of problems include problems with different characteristics in different modules. The main part of the paper gives a detailed overview of the benefits of hybrid networks, their architectures and their performance compared with traditional neural networks. Hybrid neural network models for time series forecasting are discussed in the paper, and experiments with modular neural networks are given.

  6. PSO optimized Feed Forward Neural Network for offline Signature Classification

    Pratik R. Hajare


    The paper is based on feed-forward neural network (FFNN) optimization by particle swarm intelligence (PSI), used to provide initial weights and biases for training the neural network. Once the weights and biases are found using particle swarm optimization (PSO) with the neural network run for a specified number of epochs, they are used to initialize the neural network for the training and classification of benchmark problems. Further, the approach is tested on offline signature classification. A comparison is made between a normal FFNN with random weights and biases and an FFNN with particle-swarm-optimized weights and biases. Firstly, the performance is tested on two benchmark databases for neural networks, the Breast Cancer database and the Diabetes database. Results show that the neural network performs better with initial weights and biases obtained by particle swarm optimization. The network converges faster with PSO-obtained initial weights and biases, and the classification accuracy is increased.
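
    A compact sketch of the general idea, using a toy particle swarm to search the weight vector of a tiny 2-2-1 network on XOR before ordinary training would take over; all hyperparameters are assumed and none of the paper's settings are reproduced:

```python
# Sketch: particle swarm search over the weight vector of a tiny 2-2-1 network;
# the best position found would then seed ordinary backpropagation training.
# All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)            # XOR targets

def unpack(w):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

n_particles, dim = 30, 9
pos = rng.normal(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(200):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("MSE with PSO-found initial weights:", round(loss(gbest), 4))
```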

  7. Runoff Modelling in Urban Storm Drainage by Neural Networks

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld


    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high-speed performance, but the neural network cannot be used to extract knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model.

  8. Detection of Wildfires with Artificial Neural Networks

    Umphlett, B.; Leeman, J.; Morrissey, M. L.


    Currently, fire detection for the National Oceanic and Atmospheric Administration (NOAA) using satellite data is accomplished with algorithms and error checking by human analysts. Artificial neural networks (ANNs) have been shown to be more accurate than algorithms or statistical methods for applications dealing with multiple datasets of complex observed data in the natural sciences. ANNs also deal well with multiple data sources that are not all equally reliable or equally informative to the problem. An ANN was tested to evaluate its accuracy in detecting wildfires utilizing polar orbiter numerical data from the Advanced Very High Resolution Radiometer (AVHRR). Datasets containing locations of known fires were gathered from NOAA's polar orbiting satellites via the Comprehensive Large Array-data Stewardship System (CLASS). The data were then calibrated and navigation-corrected using the Environment for Visualizing Images (ENVI). Fires were located with the aid of shapefiles generated via ArcGIS. Afterwards, several smaller ten pixel by ten pixel datasets were created for each fire (using the ENVI-corrected data). Several datasets were created for each fire in order to vary fire position and avoid training the ANN to look only at fires in the center of an image. Datasets containing no fires were also created. A basic pattern recognition neural network was established with the MATLAB neural network toolbox. The datasets were then randomly separated into categories used to train, validate, and test the ANN. To prevent overfitting of the data, the mean squared error (MSE) of the network was monitored and training was stopped when the MSE began to rise. Networks were tested using each channel of the AVHRR data independently, channels 3a and 3b combined, and all six channels. The number of hidden neurons for each input set was also varied between 5-350 in steps of 5 neurons. Each configuration was run 10 times, totaling about 4,200 individual network evaluations. Thirty

  9. Phase Synchronization in Small World Chaotic Neural Networks

    WANG Qing-Yun; LU Qi-Shao


    To better understand the collective motion of real neural networks, we investigate collective phase synchronization of small-world chaotic Hindmarsh-Rose (HR) neural networks. By numerical simulations, we conclude that small-world chaotic HR neural networks can achieve collective phase synchronization. Furthermore, it is shown that phase synchronization of small-world chaotic HR neural networks depends on the coupling strength, the connection topology (which is determined by the probability p), as well as the coupling number. These phenomena are important in guiding us to understand the synchronization of real neural networks.

  10. Network traffic anomaly prediction using Artificial Neural Network

    Ciptaningtyas, Hening Titi; Fatichah, Chastine; Sabila, Altea


    With the excessive increase of internet usage, malicious software (malware) has also increased significantly. Malware is software developed by hackers for illegal purposes, such as stealing data and identity, causing computer damage, or denying service to other users [1]. Malware that attacks a computer or server often triggers network traffic anomaly phenomena. Based on Sophos's report [2], Indonesia is the country at the highest risk of malware attack, and it also has high network traffic anomaly. This research uses an Artificial Neural Network (ANN) to predict network traffic anomaly based on malware attacks in Indonesia recorded by Id-SIRTII/CC (Indonesia Security Incident Response Team on Internet Infrastructure/Coordination Center). The case study is the highest malware attack (SQL injection), which occurred in three consecutive years: 2012, 2013, and 2014 [4]. The data series is preprocessed first, and then the network traffic anomaly is predicted using an Artificial Neural Network with two weight update algorithms: Gradient Descent and Momentum. The prediction error is calculated using the Mean Squared Error (MSE) [7]. The experimental result shows that the MSE for SQL injection is 0.03856, so this approach can be used to predict network traffic anomaly.
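
    A minimal sketch of the two weight-update rules mentioned (plain gradient descent and gradient descent with momentum) together with the MSE error measure, on a toy one-parameter prediction task; the learning rate, momentum coefficient and data are illustrative only:

```python
# Sketch: plain gradient descent vs. gradient descent with momentum, both
# minimizing MSE on a toy one-parameter prediction task. Learning rate,
# momentum coefficient and data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.5 * x + rng.normal(0, 0.1, size=100)   # toy target series

def mse_and_grad(w):
    pred = w * x
    return np.mean((pred - y) ** 2), 2 * np.mean((pred - y) * x)

w = 0.0                                      # plain gradient descent
for _ in range(100):
    _, g = mse_and_grad(w)
    w -= 0.1 * g

w_m, v = 0.0, 0.0                            # gradient descent with momentum
for _ in range(100):
    _, g = mse_and_grad(w_m)
    v = 0.9 * v - 0.1 * g
    w_m += v

print("MSE (plain gradient descent):", round(mse_and_grad(w)[0], 5))
print("MSE (momentum):              ", round(mse_and_grad(w_m)[0], 5))
```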

  11. Brain tumor segmentation with Deep Neural Networks.

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo


    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.

  12. Sparse coding for layered neural networks

    Katayama, Katsuki; Sakata, Yasuo; Horiguchi, Tsuyoshi


    We investigate the storage capacity of two types of fully connected layered neural networks with sparse coding when binary patterns are embedded into the networks by a Hebbian learning rule. One of them is a layered network, in which the transfer function of even layers is different from that of odd layers. The other is a layered network with intra-layer connections, in which the transfer function of inter-layer connections is different from that of intra-layer connections, and inter-layer neurons and intra-layer neurons are updated alternately. We derive recursion relations for order parameters by means of the signal-to-noise ratio method, and then apply the self-control threshold method proposed by Dominguez and Bollé to both layered networks with monotonic transfer functions. We find that the critical value α_C of the storage capacity is about 0.11 |a ln a|^{-1} (a ≪ 1) for both layered networks, where a is the neuronal activity. It turns out that the basin of attraction is larger for both layered networks when the self-control threshold method is applied.

  13. The effect of the neural activity on topological properties of growing neural networks.

    Gafarov, F M; Gafarova, V R


    The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns of the network's development; the process of creation and deletion of connections continues throughout the whole life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrated the neural network growth process from disconnected neurons to fully connected networks. To quantitatively investigate the influence of the network's activity on its topological properties, we compared it with a randomly growing network that does not depend on the network's activity. Using methods from random graph theory to analyse the network's connection structure, it is shown that growth in neural networks results in the formation of a well-known "small-world" network.

  14. Granular neural networks, pattern recognition and bioinformatics

    Pal, Sankar K; Ganivada, Avatharam


    This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered networks and self-organizing analysis maps, which have strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinf...

  15. Dynamic artificial neural networks with affective systems.

    Schuman, Catherine D; Birdwell, J Douglas


    Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance.

  16. Flood routing modelling with Artificial Neural Networks

    R. Peters


    For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries, the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of streamcourse, an adequate resolution in space requires very small calculation time steps, which are some two orders of magnitude smaller than the input data resolution. This leads to quite high computation requirements, seriously restricting the application, especially when dealing with real-time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANN). First studies show the ability of adequately trained multilayer feedforward networks (MLFN) to reproduce the model performance.

  17. Stability of discrete Hopfield neural networks with delay

    Ma Runnian; Lei Sheping; Liu Naigong


    The discrete Hopfield neural network with delay is an extension of the discrete Hopfield neural network. As is well known, the stability of neural networks is not only the most basic and important problem but also the foundation of the network's applications. The stability of discrete Hopfield neural networks with delay is investigated mainly by using a Lyapunov function. Sufficient conditions for the networks with delay to converge towards a limit cycle of length 4 are obtained. Also, some sufficient criteria are given to ensure that the networks have neither a stable state nor a limit cycle of length 2. The results obtained here generalize previous results on the stability of discrete Hopfield neural networks with and without delay.

  18. Advances in Artificial Neural Networks - Methodological Development and Application

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  19. An evolutionary approach to associative memory in recurrent neural networks

    Fujita, Sh; Fujita, Sh; Nishimura, H


    In this paper, we investigate associative memory in recurrent neural networks, based on the model of evolving neural networks proposed by Nolfi, Miglino and Parisi. The experimentally developed network has highly asymmetric synaptic weights and dilute connections, quite different from those of the Hopfield model. Some results on the effect of learning efficiency on the evolution are also presented.

  20. Solving quadratic programming problems by delayed projection neural network.

    Yang, Yongqing; Cao, Jinde


    In this letter, the delayed projection neural network for solving convex quadratic programming problems is proposed. The neural network is proved to be globally exponentially stable and can converge to an optimal solution of the optimization problem. Three examples show the effectiveness of the proposed network.

  1. The Projection Neural Network for Solving Convex Nonlinear Programming

    Yang, Yongqing; Xu, Xianyun

    In this paper, a projection neural network for solving convex optimization is investigated. Using Lyapunov stability theory and the LaSalle invariance principle, the proposed network is shown to be globally stable and to converge to the exact optimal solution. Two examples show the effectiveness of the proposed neural network model.
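
    A rough numerical sketch of the projection-network idea, Euler-integrating the dynamics dx/dt = -x + P(x - a*grad f(x)) for a small box-constrained quadratic program; the problem data and step sizes are assumed and the paper's exact network model is not reproduced:

```python
# Sketch: projection neural network dynamics for min 0.5*x'Qx + c'x subject to
# box constraints, integrated with a simple Euler scheme. Problem data, step
# size and gain are illustrative only.
import numpy as np

Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite
c = np.array([-3.0, -4.0])
lo, hi = np.zeros(2), 2.0 * np.ones(2)   # feasible box [0, 2]^2

def project(x):
    """Projection onto the feasible box."""
    return np.clip(x, lo, hi)

x = np.array([2.0, 0.0])                 # arbitrary initial state
alpha, dt = 0.2, 0.05
for _ in range(2000):
    grad = Q @ x + c
    x = x + dt * (-x + project(x - alpha * grad))

print("equilibrium (approximate QP solution):", np.round(x, 4))
```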

  2. Prediction of Parametric Roll Resonance by Multilayer Perceptron Neural Network

    Míguez González, M; López Peña, F.; Díaz Casás, V.


    acknowledged in the last few years. This work proposes a prediction system based on a multilayer perceptron (MP) neural network. The training and testing of the MP network is accomplished by feeding it with simulated data of a three degrees-of-freedom nonlinear model of a fishing vessel. The neural network...

  4. Neural network model to control an experimental chaotic pendulum

    Bakker, R; Schouten, JC; Takens, F; vandenBleek, CM


    A feedforward neural network was trained to predict the motion of an experimental, driven, and damped pendulum operating in a chaotic regime. The network learned the behavior of the pendulum from a time series of the pendulum's angle, the single measured variable. The validity of the neural network,

  5. Explicit neural representations, recursive neural networks and conscious visual perception.

    Pollen, Daniel A


    The fundamental question as to whether the neural correlates of any given conscious visual experience are expressed locally within a given cortical area or more globally within some widely distributed network remains unresolved. We inquire as to whether recursive processing-by which we mean the combined flow and integrated outcome of afferent and recurrent activity across a series of cortical areas-is essential for the emergence of conscious visual experience. If so, we further inquire as to whether such recursive processing is essential only for loops between extrastriate cortical areas explicitly representing experiences such as color or motion back to V1 or whether it is processing between still higher levels and the areas computing such explicit representations that is exclusively or additionally essential for visual experience. If recursive processing is not essential for the emergence of conscious visual experience, then it should also be possible to determine whether it is only the intracortical sensory processing within areas computing explicit sensory representations that is required for perceptual experience or whether it is the subsequent processing of the output of such areas within more anterior cortical regions that engenders perception. The present analysis suggests that the questions posed here may ultimately become experimentally resolvable. Whatever the outcome, the results will likely open new approaches to identify the neural correlates of conscious visual perception.

  6. Nonlinear system identification and control based on modular neural networks.

    Puscasu, Gheorghe; Codres, Bogdan


    A new approach for nonlinear system identification and control based on modular neural networks (MNN) is proposed in this paper. The computational complexity of neural identification can be greatly reduced if the whole system is decomposed into several subsystems. This is obtained using a partitioning algorithm. Each local nonlinear model is associated with a nonlinear controller. These are also implemented by neural networks. The switching between the neural controllers is done by a dynamical switcher, also implemented by neural networks, that tracks the different operating points. The proposed multiple modelling and control strategy has been successfully tested on a simulated laboratory-scale liquid-level system.

  7. Three-dimensional thinning by neural networks

    Shen, Jun; Shen, Wei


    3D thinning is widely used in 3D object representation in computer vision and in trajectory planning in robotics to find the topological structure of the free space. In the present paper, we propose a 3D image thinning method based on neural networks. Each voxel in the 3D image corresponds to a set of neurons, called a 3D Thinron, in the network. Taking the 3D Thinron as the elementary unit, the global structure of the network is a 3D array in which each Thinron is connected with the 26 neighbors in its 3 × 3 × 3 neighborhood. Within the Thinron itself, the neurons are organized in multiple layers. In the first layer, we have neurons for boundary analysis, connectivity analysis and connectivity verification, taking as input the voxels in the 3 × 3 × 3 neighborhood and the intermediate outputs of neighboring Thinrons. In the second layer, we have the neurons for synthetical analysis to give the intermediate output of the Thinron. In the third layer, we have the decision neurons whose state determines the final output. All neurons in the Thinron are adaline neurons of Widrow, except the connectivity analysis and verification neurons, which are nonlinear neurons. With the 3D Thinron neural network, the state transition of the network takes place automatically, and the network converges to the final steady state, which gives the resulting medial surface of the 3D objects, preserving the connectivity of the initial image. The method presented is simulated and tested on 3D images, and experimental results are reported.

  8. An introduction to neural network methods for differential equations

    Yadav, Neha; Kumar, Manoj


    This book introduces a variety of neural network methods for solving differential equations arising in science and engineering. The emphasis is placed on a deep understanding of the neural network techniques, which has been presented in a mostly heuristic and intuitive manner. This approach will enable the reader to understand the working, efficiency and shortcomings of each neural network technique for solving differential equations. The objective of this book is to provide the reader with a sound understanding of the foundations of neural networks, and a comprehensive introduction to neural network methods for solving differential equations together with recent developments in the techniques and their applications. The book comprises four major sections. Section I consists of a brief overview of differential equations and the relevant physical problems arising in science and engineering. Section II illustrates the history of neural networks starting from their beginnings in the 1940s through to the renewed...

  9. Visualizing Clusters in Artificial Neural Networks Using Morse Theory

    Paul T. Pearson


    This paper develops a process whereby a high-dimensional clustering problem is solved using a neural network and a low-dimensional cluster diagram of the results is produced using the Mapper method from topological data analysis. The low-dimensional cluster diagram makes the neural network's solution to the high-dimensional clustering problem easy to visualize, interpret, and understand. As a case study, a clustering problem from a diabetes study is solved using a neural network. The clusters in this neural network are visualized using the Mapper method during several stages of the iterative process used to construct the neural network. The neural network and Mapper clustering diagram results for the diabetes study are validated by comparison to principal component analysis.

  10. An introduction to bio-inspired artificial neural network architectures.

    Fasel, B


    In this introduction to artificial neural networks we attempt to give an overview of the most important types of neural networks employed in engineering, briefly explain how they operate, and describe how they relate to biological neural networks. The focus will mainly be on bio-inspired artificial neural network architectures and specifically on neo-perceptions. The latter belong to the family of convolutional neural networks. Their topology is somewhat similar to that of the human visual cortex, and they are based on receptive fields that allow, in combination with sub-sampling layers, for an improved robustness with regard to local spatial distortions. We demonstrate the application of artificial neural networks to face analysis--a domain we human beings are particularly good at, yet which poses great difficulties for digital computers running deterministic software programs.

  11. Modeling of Magneto-Rheological Damper with Neural Network


    With the revival of magnetorheological (MR) technology research in the 1980s, its application in vehicles has increasingly focused on vibration suppression. Given the importance of magnetorheological damper modeling, nonparametric modeling with a neural network, which is a promising development in the semi-active online control of vehicles with MR suspensions, has been carried out in this study. A two-layer neural network with 7 neurons in the hidden layer, 3 inputs and 1 output was established to simulate the behavior of the MR damper at different excitation currents. In the neural network model, the damping force is a function of displacement, velocity and the applied current. An MR damper for vehicles was fabricated and tested by MTS; the data acquired were utilized for neural network training and validation. The application and validation show that the forces predicted by the neural network match the tested forces well, with small variance, which demonstrates the effectiveness and precision of neural network modeling.
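
    A sketch matching the reported shape of the network, three inputs (displacement, velocity, current), seven hidden neurons and one force output, trained here on synthetic data with scikit-learn since the MTS measurements and the authors' training setup are not available:

```python
# Sketch: a 3-input, 7-hidden-neuron, 1-output regression network for damper
# force, matching the architecture described. The data below are synthetic
# stand-ins for the MTS measurements; the toy force rule is not a real MR model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
disp = rng.uniform(-0.02, 0.02, n)        # displacement (m)
vel = rng.uniform(-0.2, 0.2, n)           # velocity (m/s)
cur = rng.uniform(0.0, 2.0, n)            # applied current (A)
force = (400 + 600 * cur) * np.tanh(30 * vel) + 2000 * disp + rng.normal(0, 10, n)

X = np.column_stack([disp, vel, cur])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000,
                                   random_state=0))
model.fit(X, force)
print("training R^2:", round(model.score(X, force), 3))
```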

  12. Geophysical phenomena classification by artificial neural networks

    Gough, M. P.; Bruckner, J. R.


    Space science information systems involve accessing vast data bases. There is a need for an automatic process by which properties of the whole data set can be assimilated and presented to the user. Where data are in the form of spectrograms, phenomena can be detected by pattern recognition techniques. Presented are the first results obtained by applying unsupervised Artificial Neural Networks (ANN's) to the classification of magnetospheric wave spectra. The networks used here were a simple unsupervised Hamming network run on a PC and a more sophisticated CALM network run on a Sparc workstation. The ANN's were compared in their geophysical data recognition performance. CALM networks offer such qualities as fast learning, superiority in generalizing, the ability to continuously adapt to changes in the pattern set, and the possibility to modularize the network to allow the inter-relation between phenomena and data sets. This work is the first step toward an information system interface being developed at Sussex, the Whole Information System Expert (WISE). Phenomena in the data are automatically identified and provided to the user in the form of a data occurrence morphology, the Whole Information System Data Occurrence Morphology (WISDOM), along with relationships to other parameters and phenomena.

  13. SOFM Neural Network Based Hierarchical Topology Control for Wireless Sensor Networks


    Well-designed network topology provides vital support for routing, data fusion, and target tracking in wireless sensor networks (WSNs). The self-organizing feature map (SOFM) neural network is a major branch of artificial neural networks, with self-organizing and self-learning features. In this paper, we propose a cluster-based topology control algorithm for WSNs, named SOFMHTC, which uses an SOFM neural network to form a hierarchical network structure and completes cluster head selection by the...
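
    A minimal self-organizing feature map (SOFM) sketch in NumPy, illustrating the competitive learning that the clustering step above builds on. The grid size, learning-rate and neighborhood schedules, and the use of 2-D sensor positions are illustrative choices, not the SOFMHTC parameters.

```python
import numpy as np

def train_sofm(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.uniform(data.min(), data.max(), size=(rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)
            sigma = sigma0 * (1 - step / n_steps) + 1e-3
            # Best-matching unit (winner) for this input.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Neighborhood-weighted update of all units toward the input.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

# Example: 2-D sensor positions mapped onto a 5x5 grid of cluster-head prototypes.
positions = np.random.default_rng(1).uniform(0, 100, size=(300, 2))
prototypes = train_sofm(positions)
print(prototypes.reshape(-1, 2)[:5])
```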

  14. Classification of Chronic Whiplash Associated Disorders With Artificial Neural Networks


    Different Artificial Neural Networks (ANN) have been developed during the past ...; the question is how to analyse multiple features in an appropriate way. Motion was recorded by sampling, at 60 Hz, IR light reflected by the retro-reflective markers.

  15. Improved Landmine Detection by Complex-Valued Artificial Neural Networks


    Research sponsored by the U.S. Army Research Office. Artificial neural networks are used in conjunction with fuzzy logic for improved system performance, over and above the good results already attained, in detecting mines. One of the more promising avenues of research in this area involves the use of artificial neural networks [3], more specifically complex-valued networks.

  16. An Analysis of Stopping Criteria in Artificial Neural Networks



  17. Position Sensorless Driving of BLDCM Using Neural Networks

    Guo, Hai-Jiao; Sagawa, Seiji; Ichinokura, Osamu

    A sensorless driving method for brushless DC motors (BLDCM) using neural networks is studied in this paper. To account for the nonlinear characteristics and parameter errors of the motor model, neural networks are introduced to estimate the electromotive force (EMF). Simulation and experimental results using offline-trained neural networks show that the proposed method is effective. In addition, robustness with respect to parameter variations is discussed.
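
    A hedged sketch of offline training of an EMF estimator. The choice of inputs (terminal voltage, phase current, and its derivative) and the simple v = R*i + L*di/dt + e relation used to generate training targets are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
R, L = 0.5, 1.2e-3                        # assumed motor resistance (ohm) and inductance (H)
i = rng.uniform(-10, 10, n)               # phase current (A)
di_dt = rng.uniform(-2000, 2000, n)       # current derivative (A/s)
e_true = rng.uniform(-24, 24, n)          # back-EMF to be estimated (V)
v = R * i + L * di_dt + e_true + rng.normal(0, 0.1, n)   # "measured" terminal voltage

X = np.column_stack([v, i, di_dt])
est = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
est.fit(X, e_true)                        # offline training, as in the abstract
print("EMF estimation R^2:", est.score(X, e_true))
```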

  18. Neural Networks for Modeling and Control of Particle Accelerators

    Edelen, A.L.; Chase, B.E.; Edstrom, D.; Milton, S.V.; Stabile, P.


    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  19. A Neural Network-Based Interval Pattern Matcher

    Jing Lu


    Full Text Available Classification is one of the most important tasks in machine learning, and neural networks are widely used classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in this paper. After presenting the theoretical construction of the model, we carry out a simple experiment and a practical weather forecasting experiment, which show that the recognizer reaches 100% accuracy, a promising result.
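
    A hedged sketch of interval classification with a neural network: each interval is encoded by its endpoints (low, high) and fed to an MLP. This encoding and the toy temperature labels are illustrative assumptions, not the matcher construction proposed in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Class 0: "cool" temperature intervals, class 1: "warm" intervals (toy labels).
lows = rng.uniform(-10, 30, 1000)
highs = lows + rng.uniform(1, 10, 1000)
X = np.column_stack([lows, highs])          # interval = (lower bound, upper bound)
y = ((lows + highs) / 2 > 15).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("Training accuracy:", clf.score(X, y))
print("Prediction for interval [12, 20]:", clf.predict([[12, 20]])[0])
```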

  20. A Neural Network-Based Interval Pattern Matcher

    Jing Lu; Shengjun Xue; Xiakun Zhang; Yang Han


    Classification is one of the most important tasks in machine learning, and neural networks are widely used classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in this paper. After presenting the theoretical construction of the model, we carry out a simple experiment and a practical weather forecasting experiment, which show that the recognizer accuracy reaches...

  1. Training product unit neural networks with genetic algorithms

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.


    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and a genetic algorithm trains the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.
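
    A hedged sketch of a product unit network trained with a simple genetic algorithm (truncation selection plus Gaussian mutation). The fitness function, GA settings, and toy regression target are illustrative, not those used for the CMOS transistor-sizing example in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def product_net(params, X, n_units=3):
    # params: exponent matrix W (n_units x n_inputs) followed by output weights v.
    n_in = X.shape[1]
    W = params[: n_units * n_in].reshape(n_units, n_in)
    v = params[n_units * n_in:]
    units = np.exp(np.log(X) @ W.T)        # product units: prod_i x_i ** w_ji (X must be > 0)
    return units @ v

def fitness(params, X, y):
    return -np.mean((product_net(params, X) - y) ** 2)

# Toy regression target on positive inputs.
X = rng.uniform(0.5, 2.0, size=(200, 2))
y = X[:, 0] ** 2 * X[:, 1] + 3.0 * np.sqrt(X[:, 1])

n_params = 3 * 2 + 3
pop = rng.normal(0, 1, size=(60, n_params))
for gen in range(300):
    scores = np.array([fitness(p, X, y) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]                    # keep the best 20
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.1, (40, n_params))
    pop = np.vstack([parents, children])                       # elitism + mutated offspring
print("Best MSE:", -max(fitness(p, X, y) for p in pop))
```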

  2. Analysis of Heart Diseases Dataset using Neural Network Approach

    Rani, K Usha


    Classification is one of the important techniques of data mining. Many real-world problems in fields such as business, science, industry, and medicine can be solved using a classification approach. Neural networks have emerged as an important tool for classification, and their advantages enable efficient classification of the given data. In this study a heart disease dataset is analyzed using a neural network approach. To increase the efficiency of the classification process, a parallel approach is also adopted in the training phase.

  3. A C-LSTM Neural Network for Text Classification

    Zhou, Chunting; Sun, Chonglin; Liu, Zhiyuan; Lau, Francis C. M.


    Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-...
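
    A hedged sketch of a C-LSTM-style model: a 1-D convolution extracts n-gram features from word embeddings and an LSTM consumes the resulting feature sequence before a linear classifier. Layer sizes and the vocabulary are illustrative, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class CLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, n_filters=100,
                 kernel_size=3, hidden_dim=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size)
        self.lstm = nn.LSTM(n_filters, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, tokens):                          # tokens: (batch, seq_len)
        x = self.embed(tokens)                          # (batch, seq_len, embed_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))    # (batch, n_filters, seq_len - k + 1)
        x = x.transpose(1, 2)                           # (batch, steps, n_filters)
        _, (h, _) = self.lstm(x)                        # h: (1, batch, hidden_dim)
        return self.fc(h[-1])                           # class logits

logits = CLSTM()(torch.randint(0, 10000, (4, 50)))
print(logits.shape)                                     # torch.Size([4, 2])
```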

  4. One pass learning for generalized classifier neural network.

    Ozyildirim, Buse Melis; Avci, Mutlu


    Generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient-descent-optimized smoothing parameter value to provide efficient classification. However, this optimization consumes quite a long time, which is a drawback. In this work, one-pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method uses the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the method handles these differences by defining two functions for smoothing parameter calculation, with thresholding applied to determine which function is used. One function is defined for datasets having different ranges of values; it provides balanced smoothing parameters for such datasets through a logarithmic function and by shifting the operating range toward the lower boundary. The other function calculates the smoothing parameter for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets, and the performance of the one-pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and the standard and logarithmic learning generalized classifier neural networks in the MATLAB environment. One-pass learning generalized classifier neural network provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural networks. Due to its classification accuracy and speed, the one-pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase classification performance.
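
    A hedged sketch of the one-pass idea: each class's smoothing parameter is derived directly from that class's standard deviation (a single pass over the data) and used in a kernel-density classifier in the spirit of a generalized classifier / probabilistic neural network. The paper's exact threshold and logarithmic adjustment functions are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# One pass: per-class mean standard deviation as the smoothing parameter.
classes = np.unique(y)
sigma = {c: X[y == c].std(axis=0).mean() + 1e-6 for c in classes}

def predict(x):
    scores = []
    for c in classes:
        d2 = ((X[y == c] - x) ** 2).sum(axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigma[c] ** 2))))
    return classes[int(np.argmax(scores))]

pred = np.array([predict(x) for x in X])
print("Training accuracy:", (pred == y).mean())
```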

  5. An Artificial Neural Network Control System for Spacecraft Attitude Stabilization


    Naval Postgraduate School, Monterey, California, thesis: An Artificial Neural Network Control System for Spacecraft Attitude Stabilization. Approved for public release; distribution is unlimited.

  6. Artificial Neural Network Metamodels of Stochastic Computer Simulations


    Artificial Neural Network Metamodels of Stochastic Computer Simulations, thesis by Robert Allen Kilmer (B.S. in Education Mathematics, Indiana ...).

  7. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    J. Polec


    Full Text Available In this contribution we present transform and neural network approaches to image interpolation. From the transform point of view, the principles from [1] are modified for 1st- and 2nd-order interpolation, and we present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation capabilities of multilayer perceptrons, using various network configurations for 1st- and 2nd-order interpolation. The results are compared by means of tables.

  8. Dissipativity Analysis of Neural Networks with Time-varying Delays

    Yan Sun; Bao-Tong Cui


    A new definition of dissipativity for neural networks is presented in this paper. By constructing proper Lyapunov functionals and using some analytic techniques, sufficient conditions are given to ensure the dissipativity of neural networks with or without time-varying parametric uncertainties and the integro-differential neural networks in terms of linear matrix inequalities. Numerical examples are given to illustrate the effectiveness of the obtained results.

  9. An Approach to Structural Approximation Analysis by Artificial Neural Networks

    陆金桂; 周济; 王浩; 陈新度; 余俊; 肖世德


    This paper proves theoretically that a three-layer neural network can exactly implement the mapping from the design variables of any elastic structure to its stresses and displacements, based on Kolmogorov's mapping neural network existence theorem. A new approach to structural approximation analysis with a global characteristic, based on artificial neural networks, is presented. Computer simulation experiments show that the new approach is effective.

  10. Representational Distance Learning for Deep Neural Networks.

    McClure, Patrick; Kriegeskorte, Nikolaus


    Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains.
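
    A hedged sketch of a representational distance learning (RDL) style auxiliary loss: the student's representational distance matrix (RDM) on a batch is pulled toward the teacher's RDM on the same batch. The Euclidean distance measure and simple mean-squared loss are illustrative simplifications of the method described in the paper.

```python
import torch
import torch.nn.functional as F

def rdm(features):
    """Pairwise Euclidean distance matrix of a (batch, dim) feature tensor."""
    return torch.cdist(features, features)

def rdl_loss(student_feats, teacher_feats):
    """Mean squared difference between student and teacher RDMs."""
    return F.mse_loss(rdm(student_feats), rdm(teacher_feats))

# Toy usage: random "teacher" representations and a trainable student layer.
teacher = torch.randn(32, 256)
student_layer = torch.nn.Linear(784, 256)
images = torch.randn(32, 784)

student = student_layer(images)
loss = rdl_loss(student, teacher)          # would be added to the supervised task loss
loss.backward()
print(float(loss))
```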

  11. Correlated neural variability in persistent state networks.

    Polk, Amber; Litwin-Kumar, Ashok; Doiron, Brent


    Neural activity that persists long after stimulus presentation is a biological correlate of short-term memory. Variability in spiking activity causes persistent states to drift over time, ultimately degrading memory. Models of short-term memory often assume that the input fluctuations to neural populations are independent across cells, a feature that attenuates population-level variability and stabilizes persistent activity. However, this assumption is at odds with experimental recordings from pairs of cortical neurons showing that both the input currents and output spike trains are correlated. It remains unclear how correlated variability affects the stability of persistent activity and the performance of cognitive tasks that it supports. We consider the stochastic long-timescale attractor dynamics of pairs of mutually inhibitory populations of spiking neurons. In these networks, persistent activity was less variable when correlated variability was globally distributed across both populations compared with the case when correlations were locally distributed only within each population. Using a reduced firing rate model with a continuum of persistent states, we show that, when input fluctuations are correlated across both populations, they drive firing rate fluctuations orthogonal to the persistent state attractor, thereby causing minimal stochastic drift. Using these insights, we establish that distributing correlated fluctuations globally as opposed to locally improves the network's performance on a two-interval, delayed response discrimination task. Our work shows that the correlation structure of input fluctuations to a network is an important factor when determining long-timescale, persistent population spiking activity.
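
    A hedged caricature of the reduced rate-model argument above: along the continuum of persistent states the memory variable m = r1 - r2 has no restoring force and integrates the difference of the input fluctuations, so globally correlated (shared) fluctuations largely cancel and cause minimal drift, while independent fluctuations accumulate. All parameters are illustrative; the paper uses spiking networks and a full reduced model.

```python
import numpy as np

def drift_variance(correlation, trials=500, steps=2000, dt=1e-3, sigma=0.5, tau=0.02, seed=0):
    rng = np.random.default_rng(seed)
    shared = rng.normal(size=(trials, steps))
    p1 = rng.normal(size=(trials, steps))
    p2 = rng.normal(size=(trials, steps))
    # Input fluctuations to the two populations with a given correlation.
    n1 = sigma * (np.sqrt(correlation) * shared + np.sqrt(1 - correlation) * p1)
    n2 = sigma * (np.sqrt(correlation) * shared + np.sqrt(1 - correlation) * p2)
    # The memory variable diffuses along the attractor, driven by the difference.
    m_final = ((n1 - n2) * np.sqrt(dt) / tau).sum(axis=1)
    return m_final.var()

print("drift variance, globally correlated input (c=0.9):", drift_variance(0.9))
print("drift variance, independent input (c=0.0):        ", drift_variance(0.0))
```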

  12. Application of Elman recursive neural network to structural analysis%Elman递归神经网络在结构分析中的应用

    雷铁安; 吴作伟; 杨周妮



  13. Application of Elman recursive neural network to dam safety monitoring%Elman回归神经网络在大坝安全监控中的应用

    赖道平; 顾冲时



  14. Advances in Artificial Neural Networks – Methodological Development and Application

    Yanbo Huang


    Full Text Available Artificial neural networks, as a major soft-computing technology, have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred the development of training algorithms for other networks such as radial basis function, recurrent, feedback, and unsupervised Kohonen self-organizing networks. These networks, especially the multilayer perceptron with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis to enhance data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review of the development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks are introduced alongside support vector machines, and limitations of ANNs are identified. The future of artificial neural network development in tandem with support vector machines is discussed in conjunction with further applications to food science and engineering, soil and water relationships for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks are reviewed as well, especially in the fields of agricultural and biological

  15. Digital Watermarking Algorithm Based on Wavelet Transform and Neural Network

    WANG Zhenfei; ZHAI Guangqun; WANG Nengchao


    An effective blind digital watermarking algorithm based on neural networks in the wavelet domain is presented. First, the host image is decomposed by the wavelet transform, and significant wavelet coefficients are selected according to human visual system (HVS) characteristics. Watermark bits are added to these coefficients, and neural networks are then trained to learn the relationship between the embedded watermark and the corresponding coefficients. Because of the learning and adaptive capabilities of neural networks, the trained networks recover the watermark from the watermarked image almost exactly. Experimental results and comparisons with other techniques prove the effectiveness of the new algorithm.
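
    A hedged sketch of the embedding step only: watermark bits are added to the largest detail coefficients of a one-level wavelet decomposition. The coefficient-selection rule and embedding strength here are illustrative; the paper's HVS-based selection and the neural-network extraction stage are not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(256, 256)).astype(float)    # stand-in host image
watermark_bits = rng.integers(0, 2, size=64) * 2 - 1           # +/-1 watermark bits

cA, (cH, cV, cD) = pywt.dwt2(host, "haar")                     # one-level 2-D DWT

# Pick the 64 largest-magnitude horizontal detail coefficients as carriers.
flat = cH.ravel()
carrier_idx = np.argsort(np.abs(flat))[-64:]
alpha = 5.0                                                    # embedding strength
flat[carrier_idx] += alpha * watermark_bits
cH = flat.reshape(cH.shape)

watermarked = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print("Mean absolute change per pixel:", np.abs(watermarked - host).mean())
```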

  16. A hardware implementation of neural network with modified HANNIBAL architecture

    Lee, Bum youb; Chung, Duck Jin [Inha University, Inchon (Korea, Republic of)


    A digital hardware architecture for an artificial neural network with learning capability is described in this paper. It is a modified version of the architecture known as HANNIBAL (Hardware Architecture for Neural Networks Implementing Backpropagation Algorithm Learning). To implement efficient neural network hardware, we analyzed various types of multipliers, which are the major functional blocks of a neuro-processor cell. Based on this analysis, we designed efficient digital neural network hardware using a serial/parallel multiplier and tested its operation. We also analyzed the hardware efficiency with logic-level simulation. (author). 14 refs., 10 figs., 3 tabs.

  17. Power converters and AC electrical drives with linear neural networks

    Cirrincione, Maurizio


    The first book of its kind, Power Converters and AC Electrical Drives with Linear Neural Networks systematically explores the application of neural networks in the field of power electronics, with particular emphasis on the sensorless control of AC drives. It presents the classical theory based on space-vectors in identification, discusses control of electrical drives and power converters, and examines improvements that can be attained when using linear neural networks. The book integrates power electronics and electrical drives with artificial neural networks (ANN). Organized into four parts,

  18. Neural network and its application to CT imaging

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others


    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  19. Neural network for solving convex quadratic bilevel programming problems.

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie


    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), modeled by a nonautonomous differential inclusion. Unlike existing neural networks for CQBPPs, the model has the smallest number of state variables and a simple structure. Based on the theory of nonsmooth analysis, differential inclusions, and a Lyapunov-like method, the sequence of limit equilibrium points of the proposed neural network approximately converges to an optimal solution of the CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network.

  20. Robustness of the ATLAS pixel clustering neural network algorithm

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration


    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.
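
    A hedged, generic illustration of the robustness check described above: a trained classifier is evaluated while its inputs are smeared with increasing variation magnitudes, and the performance degradation is recorded. The classifier, dataset, and variation magnitudes are stand-ins, not the ATLAS pixel clustering network or its detector-condition variations.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
for scale in [0.0, 0.05, 0.1, 0.2]:
    X_var = X * (1 + scale * rng.normal(size=X.shape))   # multiplicative input smearing
    print(f"input variation {scale:.2f} -> accuracy {clf.score(X_var, y):.3f}")
```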