WorldWideScience

Sample records for neural networks trained

  1. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  3. Adaptive training of feedforward neural networks by Kalman filtering

    International Nuclear Information System (INIS)

    Ciftcioglu, Oe.

    1995-02-01

    Adaptive training of feedforward neural networks by Kalman filtering is described. Adaptive training is particularly important for neural network estimation in real-time environments, where the trained network is used for system estimation while it continues to be trained with the information provided by ongoing operation. As a result, the neural network adapts itself to a changing environment and performs its mission without recourse to re-training. The performance of the training method is demonstrated by means of actual process signals from a nuclear power plant. (orig.)
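
    As a rough illustration of the approach described in this record, the sketch below applies an extended-Kalman-filter update to the weights of a tiny feedforward network with a scalar output and a numerically estimated Jacobian. It is a generic sketch, not the author's implementation; the network size, noise covariances and the drifting target are assumptions.

```python
import numpy as np

def mlp(w, x, n_hidden):
    """Tiny one-hidden-layer MLP with scalar output; w is a flat weight vector."""
    n_in = x.size
    W1 = w[:n_hidden * n_in].reshape(n_hidden, n_in)
    b1 = w[n_hidden * n_in:n_hidden * (n_in + 1)]
    W2 = w[n_hidden * (n_in + 1):-1]
    b2 = w[-1]
    return W2 @ np.tanh(W1 @ x + b1) + b2

def ekf_step(w, P, x, y, n_hidden, R=0.01, Q=1e-6):
    """One extended-Kalman-filter update of the weights for the sample (x, y)."""
    eps = 1e-6
    # Numerical Jacobian of the network output with respect to the weights.
    H = np.array([(mlp(w + eps * e, x, n_hidden) - mlp(w, x, n_hidden)) / eps
                  for e in np.eye(w.size)])
    P = P + Q * np.eye(w.size)                       # process noise
    S = H @ P @ H + R                                # innovation variance (scalar)
    K = P @ H / S                                    # Kalman gain
    w = w + K * (y - mlp(w, x, n_hidden))            # weight update
    P = P - np.outer(K, H @ P)                       # covariance update
    return w, P

# Usage: adapt online to samples of a (slightly noisy) target function.
rng = np.random.default_rng(0)
n_in, n_hidden = 2, 5
n_w = n_hidden * (n_in + 1) + n_hidden + 1
w = 0.1 * rng.standard_normal(n_w)
P = np.eye(n_w)
for t in range(500):
    x = rng.standard_normal(n_in)
    y = np.sin(x[0]) + 0.5 * x[1] + 0.01 * rng.standard_normal()
    w, P = ekf_step(w, P, x, y, n_hidden)
```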

  4. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to that of conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
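
    The sketch below illustrates the general idea of backpropagating through spike events by using a smooth pseudo-derivative of the membrane potential (a surrogate-gradient style pass). It is a toy single-layer, rate-matching example under assumed constants, not the authors' exact formulation or benchmarks.

```python
import numpy as np

def surrogate_grad(v, thresh=1.0, beta=10.0):
    """Smooth pseudo-derivative of the spike step function around the threshold."""
    return 1.0 / (1.0 + beta * np.abs(v - thresh)) ** 2

def run_lif_layer(W, inputs, thresh=1.0, decay=0.9):
    """Simulate a leaky integrate-and-fire layer; return spikes and membrane traces."""
    T, _ = inputs.shape
    v = np.zeros(W.shape[0])
    spikes, volts = [], []
    for t in range(T):
        v = decay * v + W @ inputs[t]
        s = (v >= thresh).astype(float)   # non-differentiable spike event
        v = v * (1.0 - s)                 # reset after a spike
        spikes.append(s)
        volts.append(v + s * thresh)      # approximate pre-reset potential for the gradient
    return np.array(spikes), np.array(volts)

# Toy training loop: match a target firing rate using the surrogate gradient.
rng = np.random.default_rng(1)
T, n_in, n_out = 100, 20, 5
W = 0.1 * rng.standard_normal((n_out, n_in))
target_rate = 0.2
for epoch in range(200):
    inputs = (rng.random((T, n_in)) < 0.3).astype(float)   # Poisson-like input spikes
    spikes, volts = run_lif_layer(W, inputs)
    err = spikes.mean(axis=0) - target_rate                 # rate error per neuron
    grad = np.zeros_like(W)
    for t in range(T):
        # propagate the error through the spike nonlinearity via the surrogate derivative
        grad += np.outer(err * surrogate_grad(volts[t]), inputs[t]) / T
    W -= 0.5 * grad
```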

  5. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks, based on gradient descent, has the significant drawback of slow convergence. A Gauss-Newton-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.
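
    A minimal sketch of a recursive least squares (RLS) weight update is shown below, applied only to the linear readout of a fixed random hidden layer. The forgetting factor, network sizes and target function are assumptions; the paper's dynamic error backpropagation for locally recurrent networks is not reproduced here.

```python
import numpy as np

def rls_update(w, P, phi, y, lam=0.99):
    """One recursive-least-squares step for linear-in-the-parameters output weights.

    w   : current weight vector
    P   : inverse correlation matrix estimate
    phi : regressor (e.g. hidden-layer activations)
    y   : target output
    lam : forgetting factor
    """
    k = P @ phi / (lam + phi @ P @ phi)        # gain vector
    e = y - w @ phi                            # a priori error
    w = w + k * e
    P = (P - np.outer(k, phi @ P)) / lam
    return w, P

# Usage: fit the readout of a fixed random hidden layer to a nonlinear target.
rng = np.random.default_rng(2)
n_in, n_hidden = 3, 30
W_hidden = rng.standard_normal((n_hidden, n_in))
w, P = np.zeros(n_hidden), 1e3 * np.eye(n_hidden)
for t in range(1000):
    x = rng.uniform(-1, 1, n_in)
    phi = np.tanh(W_hidden @ x)
    y = np.sin(2 * x[0]) * x[1] + 0.3 * x[2]
    w, P = rls_update(w, P, phi, y)
```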

  6. Neural network training by Kalman filtering in process system monitoring

    International Nuclear Information System (INIS)

    Ciftcioglu, Oe.

    1996-03-01

    A Kalman filtering approach for neural network training is described. Its extended form is used as an adaptive filter in a nonlinear environment, namely a feedforward neural network. The Kalman filtering approach generally provides fast training and avoids excessive learning, which results in enhanced generalization capability. The network is used in a process monitoring application where the inputs are measurement signals. Since the measurement errors are also modelled in the Kalman filter, the approach yields accurate training and hence an accurate neural network model of the input-output relationships in the application. As the process of concern is a dynamic system, the information fed to the neural network is time dependent, so the training algorithm takes an adaptive form suited to real-time operation in the monitoring task. (orig.)

  7. Behaviour in 0 of the Neural Networks Training Cost

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1998-01-01

    We study the behaviour in zero of the derivatives of the cost function used when training non-linear neural networks. It is shown that a fair number of first, second and higher order derivatives vanish in zero, validating the belief that 0 is a peculiar and potentially harmful location. These calculations are related to practical and theoretical aspects of neural networks training.

  8. Parallelization of Neural Network Training for NLP with Hogwild!

    Directory of Open Access Journals (Sweden)

    Deyringer Valentin

    2017-10-01

    Full Text Available Neural Networks are prevalent in today's NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up training Neural Networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.
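
    The sketch below shows the structure of Hogwild!-style training: several threads update a shared parameter vector without any locking. It uses a toy logistic-regression model rather than the NLP architectures above, and because CPython threads are serialized by the GIL it only illustrates the lock-free update pattern, not the reported speedups.

```python
import threading
import numpy as np

# Shared model: logistic-regression weights updated without any locking,
# in the spirit of Hogwild! (a toy illustration with synthetic data).
rng = np.random.default_rng(3)
n_features, n_samples = 50, 10_000
X = rng.standard_normal((n_samples, n_features))
true_w = rng.standard_normal(n_features)
y = (X @ true_w + 0.1 * rng.standard_normal(n_samples) > 0).astype(float)
w = np.zeros(n_features)          # shared parameter vector, no lock

def worker(indices, lr=0.05):
    for i in indices:
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))
        grad = (p - y[i]) * X[i]
        np.subtract(w, lr * grad, out=w)   # lock-free in-place update of shared weights

threads = [threading.Thread(target=worker, args=(part,))
           for part in np.array_split(rng.permutation(n_samples), 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

accuracy = np.mean(((X @ w) > 0).astype(float) == y)
print(f"training accuracy after lock-free updates: {accuracy:.3f}")
```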

  9. Applications of neural networks in training science.

    Science.gov (United States)

    Pfeiffer, Mark; Hohmann, Andreas

    2012-04-01

    Training science views itself as an integrated and applied science, developing practical measures founded on scientific method. Therefore, it demands consideration of a wide spectrum of approaches and methods. Especially in the field of competitive sports, research questions are usually located in complex environments, so that mainly field studies are drawn upon to obtain broad external validity. Here, the interrelations between different variables or variable sets are mostly of a nonlinear character. In these cases, methods like neural networks, e.g., the pattern recognizing methods of Self-Organizing Kohonen Feature Maps or similar instruments to identify interactions might be successfully applied to analyze data. Following on from a classification of data analysis methods in training-science research, the aim of the contribution is to give examples of varied sports in which network approaches can be effectually used in training science. First, two examples are given in which neural networks are employed for pattern recognition. While one investigation deals with the detection of sporting talent in swimming, the other is located in game sports research, identifying tactical patterns in team handball. The third and last example shows how an artificial neural network can be used to predict competitive performance in swimming. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Character Recognition Using Genetically Trained Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the
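
    The following sketch shows a genetic algorithm evolving the weights of a small feedforward network with 64 inputs and five output nodes, mirroring the 8 x 8 bitmap architecture described above. The random "prototype" bitmaps, population size and GA operators are illustrative assumptions, not the Sandia implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-ins for 8x8 character bitmaps: one random prototype per "character",
# each mapped to one of five output nodes.
n_in, n_hidden, n_out = 64, 10, 5
prototypes = (rng.random((n_out, n_in)) > 0.5).astype(float)
targets = np.eye(n_out)

def decode(genome):
    W1 = genome[:n_hidden * n_in].reshape(n_hidden, n_in)
    W2 = genome[n_hidden * n_in:].reshape(n_out, n_hidden)
    return W1, W2

def fitness(genome):
    W1, W2 = decode(genome)
    out = np.tanh(W2 @ np.tanh(W1 @ prototypes.T)).T   # outputs for all prototypes
    return -np.mean((out - targets) ** 2)              # higher is better

# Simple generational GA: tournament selection, uniform crossover, Gaussian mutation.
genome_len = n_hidden * n_in + n_out * n_hidden
pop = 0.5 * rng.standard_normal((60, genome_len))
for gen in range(200):
    scores = np.array([fitness(g) for g in pop])
    new_pop = [pop[np.argmax(scores)].copy()]          # elitism
    while len(new_pop) < len(pop):
        parents = []
        for _ in range(2):                             # binary tournament selection
            i, j = rng.choice(len(pop), 2, replace=False)
            parents.append(pop[i] if scores[i] > scores[j] else pop[j])
        mask = rng.random(genome_len) < 0.5            # uniform crossover
        child = np.where(mask, parents[0], parents[1])
        child = child + 0.05 * rng.standard_normal(genome_len)   # mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(g) for g in pop])]
print("best fitness reached by the GA:", fitness(best))
```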

  11. Modelling electric trains energy consumption using Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez Fernandez, P.; Garcia Roman, C.; Insa Franco, R.

    2016-07-01

    Nowadays there is an evident concern regarding the efficiency and sustainability of the transport sector due to both the threat of climate change and the current financial crisis. This concern explains the growth of railways in recent years, as they present an inherent efficiency compared to other transport means. However, in order to further expand their role, it is necessary to optimise their energy consumption so as to increase their competitiveness. Improving railway energy efficiency requires both reliable data and modelling tools that allow the study of different variables and alternatives. With this need in mind, this paper presents the development of consumption models based on neural networks that calculate the energy consumption of electric trains. These networks have been trained on an extensive set of consumption data measured on line 1 of the Valencia Metro Network. Once trained, the neural networks provide a reliable estimation of the vehicles' consumption along a specific route when fed with input data such as train speed, acceleration or track longitudinal slope. These networks represent a useful modelling tool that may allow a deeper study of railway lines in terms of energy expenditure with the objective of reducing the costs and environmental impact associated with railways. (Author)

  12. Non-Linear State Estimation Using Pre-Trained Neural Networks

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Andersen, Nils Axel; Ravn, Ole

    2010-01-01

    … effecting the transformation. This function is approximated by a neural network using offline training. The training is based on Monte Carlo sampling. A way to obtain parametric distributions of flexible shape to be used easily with these networks is also presented. The method can also be used to improve other parametric methods around regions with strong non-linearities by including them inside the network.

  13. An Improved Neural Network Training Algorithm for Wi-Fi Fingerprinting Positioning

    Directory of Open Access Journals (Sweden)

    Esmond Mok

    2013-09-01

    Full Text Available Ubiquitous positioning provides continuous positional information in both indoor and outdoor environments for a wide spectrum of location based service (LBS) applications. With the rapid development of low-cost, high-speed data communication and of Wi-Fi networks in many metropolitan cities, the strength of signals propagated from Wi-Fi access points (APs), namely the received signal strength (RSS), has been cleverly adopted for indoor positioning. In this paper, a Wi-Fi positioning algorithm based on neural network modeling of Wi-Fi signal patterns is proposed. The algorithm exploits the correlation between the initial parameter setting for neural network training and the resulting mean square error to obtain a better model of the highly nonlinear and complex Wi-Fi signal power propagation surface. The test results show that this neural network based data processing algorithm can significantly improve the trained surface and achieve the highest possible accuracy of the Wi-Fi fingerprinting positioning method.

  14. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    Science.gov (United States)

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

    Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.

  15. Adaptive training of neural networks for control of autonomous mobile robots

    NARCIS (Netherlands)

    Steur, E.; Vromen, T.; Nijmeijer, H.; Fossen, T.I.; Nijmeijer, H.; Pettersen, K.Y.

    2017-01-01

    We present an adaptive training procedure for a spiking neural network, which is used for control of a mobile robot. Because of manufacturing tolerances, any hardware implementation of a spiking neural network has non-identical nodes, which limit the performance of the controller. The adaptive

  16. Training feed-forward neural networks with gain constraints

    Science.gov (United States)

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
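
    A minimal sketch of the penalty-term idea is given below: the training objective is the data misfit plus a penalty on input-output gains that fall outside assumed bounds, minimized here by plain gradient descent with numerically estimated gradients. The bounds, penalty weight and tiny network are assumptions; the paper's adaptive balancing of the objective terms is not included.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: scalar input/output with a known monotonically increasing relationship.
X = np.linspace(-1, 1, 40).reshape(-1, 1)
Y = 0.8 * X + 0.1 * rng.standard_normal(X.shape)

n_hidden = 6
def predict(w, x):
    W1, b1 = w[:n_hidden].reshape(n_hidden, 1), w[n_hidden:2 * n_hidden]
    W2, b2 = w[2 * n_hidden:3 * n_hidden], w[3 * n_hidden]
    return W2 @ np.tanh(W1 @ x + b1) + b2

def gain(w, x, eps=1e-4):
    """Input-output gain dy/dx estimated by central differences."""
    return (predict(w, x + eps) - predict(w, x - eps)) / (2 * eps)

def objective(w, lam=5.0, low=0.0, high=2.0):
    """Data misfit plus a penalty for gains outside the [low, high] bounds."""
    mse = np.mean([(predict(w, x) - y) ** 2 for x, y in zip(X, Y)])
    g = np.array([gain(w, x) for x in X])
    penalty = np.mean(np.maximum(low - g, 0) ** 2 + np.maximum(g - high, 0) ** 2)
    return mse + lam * penalty

# Plain gradient descent on the penalized objective, gradient estimated numerically.
w = 0.1 * rng.standard_normal(3 * n_hidden + 1)
for step in range(200):
    grad = np.zeros_like(w)
    base = objective(w)
    for k in range(w.size):
        dw = np.zeros_like(w)
        dw[k] = 1e-5
        grad[k] = (objective(w + dw) - base) / 1e-5
    w -= 0.1 * grad
```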

  17. A neural network driving curve generation method for the heavy-haul train

    Directory of Open Access Journals (Sweden)

    Youneng Huang

    2016-05-01

    Full Text Available The heavy-haul train has a series of characteristics, such as the locomotive traction properties, the greater length of the train, and the nonlinear train pipe pressure during train braking. When the train is running on a continuous long and steep downgrade railway line, the safety of the train is ensured by cycle braking, which puts high demands on the driving skills of the driver. In this article, a driving curve generation method for the heavy-haul train based on a neural network is proposed. First, in order to describe the nonlinear characteristics of train braking, the neural network model is constructed and trained by practical driving data. In the neural network model, various nonlinear neurons are interconnected to work for information processing and transmission. The target value of train braking pressure reduction and release time is achieved by modeling the braking process. The equation of train motion is computed to obtain the driving curve. Finally, in four typical operation scenarios, the curve data generated by the method are compared with the corresponding practical data of the Shuohuang heavy-haul railway line; the results show that the method is effective.

  18. Improving the Robustness of Deep Neural Networks via Stability Training

    OpenAIRE

    Zheng, Stephan; Song, Yang; Leung, Thomas; Goodfellow, Ian

    2016-01-01

    In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such...

  19. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function, through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on Lp-norm is also proposed in order to take into account the idea of support vectors, however, overcoming the complexity involved in solving a constrained optimization problem, usually in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by the Fisher discriminant analysis. Such algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under ROC curve (AUC) is applied as stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  20. Pap-smear Classification Using Efficient Second Order Neural Network Training Algorithms

    DEFF Research Database (Denmark)

    Ampazis, Nikolaos; Dounias, George; Jantzen, Jan

    2004-01-01

    In this paper we make use of two highly efficient second order neural network training algorithms, namely the LMAM (Levenberg-Marquardt with Adaptive Momentum) and OLMAM (Optimized Levenberg-Marquardt with Adaptive Momentum), for the construction of an efficient pap-smear test classifier.

  1. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    Science.gov (United States)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of ANN classification with conventional supervised classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.

  2. C-RNN-GAN: Continuous recurrent neural networks with adversarial training

    OpenAIRE

    Mogren, Olof

    2016-01-01

    Generative adversarial networks have been proposed as a way of efficiently training deep generative neural networks. We propose a generative adversarial model that works on continuous sequential data, and apply it by training it on a collection of classical music. We conclude that it generates music that sounds better and better as the model is trained, report statistics on generated music, and let the reader judge the quality by downloading the generated songs.

  3. Reward-based training of recurrent neural networks for cognitive and value-based tasks.

    Science.gov (United States)

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2017-01-13

    Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal's internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task.

  4. Supervised learning in spiking neural networks with FORCE training.

    Science.gov (United States)

    Nicola, Wilten; Clopath, Claudia

    2017-12-20

    Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE trained networks reproduce behaviors comparable in complexity to their inspired circuits and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike timing statistics.
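
    For orientation, the sketch below implements FORCE training in its classic rate-based form (recursive least squares on the readout of a chaotic recurrent network with output feedback), rather than the spiking networks used in the paper. Network size, time constants and the sine-wave target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
N, dt, g = 300, 1e-3, 1.5
J = g * rng.standard_normal((N, N)) / np.sqrt(N)    # chaotic recurrent weights
w_fb = 2 * rng.uniform(-1, 1, N)                    # fixed feedback weights
w_out = np.zeros(N)                                 # trained readout
P = np.eye(N)                                       # RLS inverse-correlation matrix

T = 5000
target = np.sin(2 * np.pi * 2 * np.arange(T) * dt)  # 2 Hz sine to be reproduced
x = 0.5 * rng.standard_normal(N)
tau = 10e-3
for t in range(T):
    r = np.tanh(x)
    z = w_out @ r                                   # network output
    x = x + dt / tau * (-x + J @ r + w_fb * z)      # leaky rate dynamics with feedback
    if t % 2 == 0:                                  # RLS (FORCE) update every other step
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P = P - np.outer(k, Pr)
        w_out = w_out - (z - target[t]) * k

print("final absolute output error:", abs(w_out @ np.tanh(x) - target[-1]))
```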

  5. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    Science.gov (United States)

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
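
    The partitioning idea can be sketched with the standard library as follows: the training set is split across worker processes, each returns a partial error and gradient, and the master sums them for a batch update. A logistic model stands in for the neural network, and multiprocessing stands in for the parallel virtual machine used in the paper.

```python
import numpy as np
from multiprocessing import Pool

def partial_grad(args):
    """Error and gradient of a logistic model on one partition of the training set."""
    w, X_part, y_part = args
    p = 1.0 / (1.0 + np.exp(-X_part @ w))
    err = -np.sum(y_part * np.log(p + 1e-12) + (1 - y_part) * np.log(1 - p + 1e-12))
    grad = X_part.T @ (p - y_part)
    return err, grad

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    X = rng.standard_normal((20_000, 30))
    y = (X @ rng.standard_normal(30) > 0).astype(float)
    w = np.zeros(30)
    n_workers = 4
    chunks = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
    with Pool(n_workers) as pool:
        for step in range(50):
            # distributed evaluation of the error function and gradient values
            results = pool.map(partial_grad, [(w, Xc, yc) for Xc, yc in chunks])
            total_err = sum(e for e, _ in results)
            total_grad = sum(g for _, g in results)
            w -= 1e-4 * total_grad      # batch step from the summed partial gradients
    print("final training error:", total_err)
```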

  6. Consistently Trained Artificial Neural Network for Automatic Ship Berthing Control

    Directory of Open Access Journals (Sweden)

    Y.A. Ahmed

    2015-09-01

    Full Text Available In this paper, a consistently trained Artificial Neural Network controller for automatic ship berthing is discussed. A minimum-time course changing manoeuvre is utilised to ensure such consistency, and a new concept named ‘virtual window’ is introduced. Such consistent teaching data are then used to train two separate multi-layered feed forward neural networks for command rudder and propeller revolution output. After proper training, several known and unknown conditions are tested to judge the effectiveness of the proposed controller using Monte Carlo simulations. After getting acceptable percentages of success, the trained networks are implemented in the free running experiment system to judge the network’s real time response for the Esso Osaka 3-m model ship. The network’s behaviour during such experiments is also investigated for possible effects of initial conditions as well as wind disturbances. Moreover, since the final goal point of the proposed controller is set at some distance from the actual pier to ensure safety, a study on automatic tug assistance is also discussed for the final alignment of the ship with the actual pier.

  7. Statistical and optimization methods to expedite neural network training for transient identification

    International Nuclear Information System (INIS)

    Reifman, J.; Vitela, E.J.; Lee, J.C.

    1993-01-01

    Two complementary methods, statistical feature selection and nonlinear optimization through conjugate gradients, are used to expedite feedforward neural network training. Statistical feature selection techniques in the form of linear correlation coefficients and information-theoretic entropy are used to eliminate redundant and non-informative plant parameters to reduce the size of the network. The method of conjugate gradients is used to accelerate the network training convergence and to systematically calculate the learning and momentum constants at each iteration. The proposed techniques are compared with the backpropagation algorithm using the entire set of plant parameters in the training of neural networks to identify transients simulated with the Midland Nuclear Power Plant Unit 2 simulator. By using 25% of the plant parameters and the conjugate gradients, a 30-fold reduction in CPU time was obtained without degrading the diagnostic ability of the network.
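
    A minimal sketch of the statistical screening step is shown below: candidate plant parameters are ranked by absolute linear correlation with the target and only the top-ranked ones are kept. The synthetic data and the choice of keeping ten parameters are assumptions; the entropy-based selection and the conjugate-gradient training itself are not shown.

```python
import numpy as np

def select_features(X, y, k):
    """Rank candidate plant parameters by absolute linear correlation with the target
    and keep the k most informative ones (a stand-in for the statistical screening)."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    keep = np.argsort(corr)[::-1][:k]
    return keep, corr[keep]

# Usage on synthetic "plant parameters": only a few columns actually drive the target.
rng = np.random.default_rng(8)
X = rng.standard_normal((500, 40))
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + 0.1 * rng.standard_normal(500)
keep, scores = select_features(X, y, k=10)
print("selected parameter indices:", keep)
```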

  8. Gradual DropIn of Layers to Train Very Deep Neural Networks

    OpenAIRE

    Smith, Leslie N.; Hand, Emily M.; Doster, Timothy

    2015-01-01

    We introduce the concept of dynamically growing a neural network during training. In particular, an untrainable deep network starts as a trainable shallow network and newly added layers are slowly, organically added during training, thereby increasing the network's depth. This is accomplished by a new layer, which we call DropIn. The DropIn layer starts by passing the output from a previous layer (effectively skipping over the newly added layers), then increasingly including units from the ne...

  9. Training strategy for convolutional neural networks in pedestrian gender classification

    Science.gov (United States)

    Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min

    2017-06-01

    In this work, we studied a strategy for training a convolutional neural network in pedestrian gender classification with limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters to initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results when compared to random weights initialization and slightly more beneficial than merely initializing the first layer filters by unsupervised learning. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy to learn useful features for pedestrian gender classification.

  10. The Analysis of User Behaviour of a Network Management Training Tool using a Neural Network

    Directory of Open Access Journals (Sweden)

    Helen Donelan

    2005-10-01

    Full Text Available A novel method for the analysis and interpretation of data that describes the interaction between trainee network managers and a network management training tool is presented. A simulation based approach is currently being used to train network managers, through the use of a simulated network. The motivation is to provide a tool for exposing trainees to a lifelike situation without disrupting a live network. The data logged by this system describes the detailed interaction between trainee network manager and simulated network. The work presented here provides an analysis of this interaction data that enables an assessment of the capabilities of the trainee network manager as well as an understanding of how the network management tasks are being approached. A neural network architecture is implemented in order to perform an exploratory data analysis of the interaction data. The neural network employs a novel form of continuous self-organisation to discover key features in the data and thus provide new insights into the learning and teaching strategies employed.

  11. Pap-smear Classification Using Efficient Second Order Neural Network Training Algorithms

    DEFF Research Database (Denmark)

    Ampazis, Nikolaos; Dounias, George; Jantzen, Jan

    2004-01-01

    In this paper we make use of two highly efficient second order neural network training algorithms, namely the LMAM (Levenberg-Marquardt with Adaptive Momentum) and OLMAM (Optimized Levenberg-Marquardt with Adaptive Momentum), for the construction of an efficient pap-smear test classifier. The algorithms are methodologically similar, and are based on iterations of the form employed in the Levenberg-Marquardt (LM) method for non-linear least squares problems with the inclusion of an additional adaptive momentum term arising from the formulation of the training task as a constrained optimization…

  12. PARTICLE SWARM OPTIMIZATION (PSO) FOR TRAINING OPTIMIZATION ON CONVOLUTIONAL NEURAL NETWORK (CNN)

    Directory of Open Access Journals (Sweden)

    Arie Rachmad Syulistyo

    2016-02-01

    Full Text Available Neural networks have attracted plenty of researchers lately. A substantial number of renowned universities have developed neural networks for various academic and industrial applications. Neural networks show considerable performance for various purposes. Nevertheless, for complex applications, a neural network's accuracy significantly deteriorates. To tackle this drawback, a lot of research has been undertaken on improving the standard neural network. One of the most promising modifications of the standard neural network for complex applications is the deep learning method. In this paper, we propose the utilization of Particle Swarm Optimization (PSO) in Convolutional Neural Networks (CNNs), which is one of the basic methods in deep learning. The use of PSO in the training process aims to optimize the results of the solution vectors of the CNN in order to improve the recognition accuracy. The data used in this research are handwritten digits from MNIST. The experiments showed that the accuracy attained in 4 epochs is 95.08%. This result was better than the conventional CNN and the DBN. The execution time was also almost similar to that of the conventional CNN. Therefore, the proposed method is a promising one.
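
    The sketch below shows a standard global-best PSO loop optimizing the weight vector of a small fully connected network on a toy regression task, as a stand-in for the CNN training described above. The inertia and acceleration constants, swarm size and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy task standing in for the CNN: fit a small MLP with PSO instead of backprop.
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]
n_hidden = 8
dim = 2 * n_hidden + n_hidden + n_hidden + 1   # W1, b1, W2, b2

def loss(w):
    W1 = w[:2 * n_hidden].reshape(n_hidden, 2)
    b1 = w[2 * n_hidden:3 * n_hidden]
    W2 = w[3 * n_hidden:4 * n_hidden]
    b2 = w[-1]
    pred = np.tanh(X @ W1.T + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

# Global-best PSO: v = w_i*v + c1*r1*(pbest - x) + c2*r2*(gbest - x).
n_particles, w_inertia, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()
for it in range(300):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
print("best training MSE found by PSO:", pbest_val.min())
```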

  13. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalman predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  14. Internal measuring models in trained neural networks for parameter estimation from images

    NARCIS (Netherlands)

    Feng, Tian-Jin; Feng, T.J.; Houkes, Z.; Korsten, Maarten J.; Spreeuwers, Lieuwe Jan

    1992-01-01

    The internal representations of 'learned' knowledge in neural networks are still poorly understood, even for backpropagation networks. The paper discusses a possible interpretation of learned knowledge of a network trained for parameter estimation from images. The outputs of the hidden layer are the

  15. Bayesian model ensembling using meta-trained recurrent neural networks

    NARCIS (Netherlands)

    Ambrogioni, L.; Berezutskaya, Y.; Gü ç lü , U.; Borne, E.W.P. van den; Gü ç lü tü rk, Y.; Gerven, M.A.J. van; Maris, E.G.G.

    2017-01-01

    In this paper we demonstrate that a recurrent neural network meta-trained on an ensemble of arbitrary classification tasks can be used as an approximation of the Bayes optimal classifier. This result is obtained by relying on the framework of ε-free approximate Bayesian inference, where the Bayesian

  16. Planning Training Loads for the 400 M Hurdles in Three-Month Mesocycles using Artificial Neural Networks.

    Science.gov (United States)

    Przednowek, Krzysztof; Iskra, Janusz; Wiktorowicz, Krzysztof; Krzeszowski, Tomasz; Maszczyk, Adam

    2017-12-01

    This paper presents a novel approach to planning training loads in hurdling using artificial neural networks. The neural models performed the task of generating loads for athletes' training for the 400 meters hurdles. All the models were calculated based on the training data of 21 Polish National Team hurdlers, aged 22.25 ± 1.96, competing between 1989 and 2012. The analysis included 144 training plans that represented different stages in the annual training cycle. The main contribution of this paper is to develop neural models for planning training loads for the entire career of a typical hurdler. In the models, 29 variables were used, where four characterized the runner and 25 described the training process. Two artificial neural networks were used: a multi-layer perceptron and a network with radial basis functions. To assess the quality of the models, the leave-one-out cross-validation method was used in which the Normalized Root Mean Squared Error was calculated. The analysis shows that the method generating the smallest error was the radial basis function network with nine neurons in the hidden layer. Most of the calculated training loads demonstrated a non-linear relationship across the entire competitive period. The resulting model can be used as a tool to assist a coach in planning training loads during a selected training period.
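
    The evaluation protocol described above can be sketched as follows: leave-one-out cross-validation with the normalized root mean squared error as the quality measure. A ridge-regularized linear model is used here as a placeholder for the perceptron and RBF networks, and the synthetic data only mirror the 144 plans and 29 variables mentioned in the abstract.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Normalized root mean squared error used as the model-quality criterion."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / (y_true.max() - y_true.min())

def loo_cv(X, y, fit, predict):
    """Leave-one-out cross-validation: train on all plans but one, predict the left-out one."""
    preds = np.empty_like(y, dtype=float)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = fit(X[mask], y[mask])
        preds[i] = predict(model, X[i])
    return nrmse(y, preds)

# Usage with a ridge-regularized linear model as a placeholder load predictor.
rng = np.random.default_rng(10)
X = rng.standard_normal((144, 29))                 # 144 plans, 29 descriptive variables
y = X[:, :4] @ np.array([3.0, -1.0, 2.0, 0.5]) + rng.standard_normal(144)
fit = lambda A, b: np.linalg.solve(A.T @ A + 1.0 * np.eye(A.shape[1]), A.T @ b)
predict = lambda w, x: x @ w
print("leave-one-out NRMSE:", loo_cv(X, y, fit, predict))
```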

  17. Planning Training Loads for The 400 M Hurdles in Three-Month Mesocycles Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Przednowek Krzysztof

    2017-12-01

    Full Text Available This paper presents a novel approach to planning training loads in hurdling using artificial neural networks. The neural models performed the task of generating loads for athletes’ training for the 400 meters hurdles. All the models were calculated based on the training data of 21 Polish National Team hurdlers, aged 22.25 ± 1.96, competing between 1989 and 2012. The analysis included 144 training plans that represented different stages in the annual training cycle. The main contribution of this paper is to develop neural models for planning training loads for the entire career of a typical hurdler. In the models, 29 variables were used, where four characterized the runner and 25 described the training process. Two artificial neural networks were used: a multi-layer perceptron and a network with radial basis functions. To assess the quality of the models, the leave-one-out cross-validation method was used in which the Normalized Root Mean Squared Error was calculated. The analysis shows that the method generating the smallest error was the radial basis function network with nine neurons in the hidden layer. Most of the calculated training loads demonstrated a non-linear relationship across the entire competitive period. The resulting model can be used as a tool to assist a coach in planning training loads during a selected training period.

  18. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices.

    Science.gov (United States)

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enables applicability of the RPU approach to a large class of neural network architectures.

  20. Practical neural network recipes in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed.

  1. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces the training efficiency significantly. For training the hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper.

  2. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  4. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    Science.gov (United States)

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in the model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1, and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Since the weights reflect the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of the deep architecture.
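
    One plausible reading of the Shakeout operation, consistent with the abstract, is sketched below: at training time each connection is either reversed to a small constant of opposite sign or enhanced and rescaled so that its expected value is unchanged, with ordinary dropout recovered as a special case. The constants and the exact scaling are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def shakeout_weights(W, tau=0.3, c=0.1, rng=None):
    """Randomly reverse or enhance each connection's contribution at training time.

    With probability tau a weight is replaced by -c*sign(w) (its contribution is
    reversed); otherwise it is enhanced and rescaled so that the expected effective
    weight equals w. With c = 0 this reduces to ordinary (inverted) dropout on the
    connections. Constants and scaling here are assumptions for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(W.shape) >= tau
    enhanced = (W + c * tau * np.sign(W)) / (1.0 - tau)
    reversed_ = -c * np.sign(W)
    return np.where(keep, enhanced, reversed_)

# Training-time forward pass uses the perturbed weights; test time uses W directly.
rng = np.random.default_rng(11)
W = rng.standard_normal((4, 6))
x = rng.standard_normal(6)
h_train = np.tanh(shakeout_weights(W, rng=rng) @ x)
h_test = np.tanh(W @ x)
print("training and test hidden activations:", h_train, h_test)
```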

  5. On Training Bi-directional Neural Network Language Model with Noise Contrastive Estimation

    OpenAIRE

    He, Tianxing; Zhang, Yu; Droppo, Jasha; Yu, Kai

    2016-01-01

    We propose to train a bi-directional neural network language model (NNLM) with noise contrastive estimation (NCE). Experiments are conducted on a rescoring task on the PTB data set. It is shown that the NCE-trained bi-directional NNLM outperformed the one trained by conventional maximum likelihood training. But still (regretfully), it did not outperform the baseline uni-directional NNLM.

  6. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  7. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

    In this paper, a method for fault location in power distribution networks is presented. The proposed method uses an artificial neural network. In order to train the neural network, a series of specific characteristics are extracted from the fault signals recorded in the relay. These characteristics … components of the sequences as well as the three-phase signals can be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the inputs obtained for training the neural network strongly depend on the fault angle, fault resistance, and fault location, the training data should be selected such that these differences are properly represented, so that the neural network does not face any issues in identification. Therefore, selecting the signal processing function, data spectrum and, subsequently, the statistical parameters…
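
    As an illustration of the kind of features mentioned above, the sketch below estimates phasors from recorded three-phase signals, computes their zero-, positive- and negative-sequence components, and collects a few simple statistics into a feature vector. The specific feature set, sampling rate and synthetic fault are assumptions, not the paper's exact processing chain.

```python
import numpy as np

A = np.exp(2j * np.pi / 3)  # the 'a' operator used in symmetrical-component analysis

def symmetrical_components(va, vb, vc):
    """Zero-, positive- and negative-sequence phasors of a three-phase set."""
    T = np.array([[1, 1, 1],
                  [1, A, A ** 2],
                  [1, A ** 2, A]]) / 3.0
    return T @ np.array([va, vb, vc])

def fault_features(signals, fs=10_000, f0=50.0):
    """Simple statistical features from the sequence components of recorded
    three-phase fault signals (one possible feature set for the neural network)."""
    t = np.arange(signals.shape[1]) / fs
    ref = np.exp(-2j * np.pi * f0 * t)
    phasors = 2 * (signals * ref).mean(axis=1)        # crude full-cycle phasor estimate
    v0, v1, v2 = symmetrical_components(*phasors)
    return np.array([abs(v0), abs(v1), abs(v2),
                     np.angle(v1) - np.angle(v2),
                     signals.std(axis=1).mean()])

# Usage on a synthetic phase-A fault (voltage sag with a decaying DC offset).
fs, f0 = 10_000, 50.0
t = np.arange(0, 0.04, 1 / fs)
va = 0.4 * np.sin(2 * np.pi * f0 * t) + 0.2 * np.exp(-t / 0.01)
vb = 1.0 * np.sin(2 * np.pi * f0 * t - 2 * np.pi / 3)
vc = 1.0 * np.sin(2 * np.pi * f0 * t + 2 * np.pi / 3)
features = fault_features(np.vstack([va, vb, vc]), fs, f0)
print("feature vector for the neural network:", np.round(features, 3))
```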

  8. SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

    OpenAIRE

    Wang, Linnan; Ye, Jinmian; Zhao, Yiyang; Wu, Wei; Li, Ang; Song, Shuaiwen Leon; Xu, Zenglin; Kraska, Tim

    2018-01-01

    Going deeper and wider in neural architectures improves their accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures, or nontrivially dissect a network across multiple GPUs. These distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable the network training far be...

  9. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

    Directory of Open Access Journals (Sweden)

    Tayfun Gokmen

    2017-10-01

    Full Text Available In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enables applicability of the RPU approach to a large class of neural network architectures.

  10. Face recognition based on improved BP neural network

    Directory of Open Access Journals (Sweden)

    Yue Gaili

    2017-01-01

    Full Text Available In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract the features of the histogram-equalized image and obtain its principal components. The BP neural network is then trained on the training samples; an improved BP weight adjustment method is used to train the network because the conventional BP algorithm suffers from slow convergence, a tendency to fall into local minima, and a lengthy training process. Finally, the trained BP neural network is fed the test samples to classify and identify the face images, and the recognition rate is obtained. Through a simulation experiment on face images from the ORL database, the analysis results show that the improved BP neural network face recognition method can effectively improve the recognition rate of face recognition.
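    As a rough illustration of the pipeline described above (histogram equalization, then PCA, then a neural network classifier), the Python sketch below runs end to end on randomly generated placeholder images, since the ORL database is not reproduced here; the layer sizes and the use of scikit-learn's standard MLPClassifier in place of the paper's improved-BP weight adjustment are assumptions made only for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def hist_equalize(img):
    """Classic histogram equalization for an 8-bit grayscale image."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

# Placeholder "faces": 40 subjects x 10 images of 32x32 random pixels.
images = rng.integers(0, 256, size=(400, 32, 32), dtype=np.uint8)
labels = np.repeat(np.arange(40), 10)

equalized = np.array([hist_equalize(im) for im in images]).reshape(400, -1)

# PCA keeps the leading principal components ("eigenfaces").
features = PCA(n_components=50, random_state=0).fit_transform(equalized / 255.0)

# A plain MLP stands in for the improved-BP network of the paper.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(features[::2], labels[::2])                 # every other image for training
print("held-out accuracy on placeholder data:", clf.score(features[1::2], labels[1::2]))
```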

  11. Diagnostics of Nuclear Reactor Accidents Based on Particle Swarm Optimization Trained Neural Networks

    International Nuclear Information System (INIS)

    Abdel-Aal, M.M.Z.

    2004-01-01

    Automation in large, complex systems such as chemical plants, electrical power generation, aerospace and nuclear plants has been steadily increasing in the recent past. Automated diagnosis and control forms a necessary part of these systems, which contain thousands of alarms to be processed in every component, subsystem and system. The accuracy and speed of fault diagnosis are therefore important factors in operating these plants, maintaining their health and continued operation, and reducing repair and recovery time. The use of artificial intelligence facilitates alarm classification and fault diagnosis to control any abnormal events during the operation cycle of the plant. This thesis uses the artificial neural network as a powerful classification tool. The work basically has two components: the first is to effectively train the neural network using particle swarm optimization, which is a non-derivative-based technique, to achieve proper training of the neural network on the fault classification problem and to compare this technique with already existing techniques
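    The following is a minimal, hedged sketch of the core idea, training a classifier's weights with standard global-best particle swarm optimization instead of gradients; the toy data, the linear softmax model standing in for the network, and the PSO coefficients are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 3-class "fault" data; a linear softmax classifier stands in for the network.
X = rng.standard_normal((450, 8))
y = rng.integers(0, 3, 450)
X[np.arange(450), y] += 2.0            # make the classes roughly separable
dim = 8 * 3 + 3                        # weights + biases, flattened into one vector

def loss(vec):
    W = vec[:24].reshape(8, 3); b = vec[24:]
    z = X @ W + b
    z -= z.max(axis=1, keepdims=True)
    p = np.exp(z); p /= p.sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

# Standard global-best PSO: inertia + cognitive + social velocity terms.
n_particles, iters = 30, 300
w_inertia, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best cross-entropy after PSO:", pbest_f.min())
```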

  12. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  13. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.

    Directory of Open Access Journals (Sweden)

    H Francis Song

    2016-02-01

    Full Text Available The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, "trained" networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale's principle, which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural

  14. Comparative Analysis of Neural Network Training Methods in Real-time Radiotherapy

    Directory of Open Access Journals (Sweden)

    Nouri S.

    2017-03-01

    Full Text Available Background: The motions of the body and tumor in some regions such as the chest during radiotherapy treatments are one of the major concerns in protecting normal tissues against high doses. By using the real-time radiotherapy technique, it is possible to increase the accuracy of the delivered dose to the tumor region by means of tracing markers on the body of patients. Objective: This study evaluates the accuracy of some artificial intelligence methods, including a neural network and its combination with a genetic algorithm as well as particle swarm optimization (PSO), in estimating tumor positions in real-time radiotherapy. Method: One hundred recorded signals of three external markers were used as input data. The signals from 3 markers through 10 breathing cycles of a patient treated via a cyber-knife for a lung tumor were used as data input. Then, the neural network method and its combinations with genetic or PSO algorithms were applied to determine the tumor locations using the MATLAB© software program. Results: The accuracies obtained were 0.8%, 12% and 14% for the neural network, genetic and particle swarm optimization algorithms, respectively. Conclusion: The internal target volume (ITV) should be determined based on the neural network algorithm applied in the training steps.

  15. A modified backpropagation algorithm for training neural networks on data with error bars

    International Nuclear Information System (INIS)

    Gernoth, K.A.; Clark, J.W.

    1994-08-01

    A method is proposed for training multilayer feedforward neural networks on data contaminated with noise. Specifically, we consider the case that the artificial neural system is required to learn a physical mapping when the available values of the target variable are subject to experimental uncertainties, but are characterized by error bars. The proposed method, based on the maximum likelihood criterion for parameter estimation, involves simple modifications of the on-line backpropagation learning algorithm. These include incorporation of the error-bar assignments in a pattern-specific learning rate, together with epochal updating of a new measure of model accuracy that replaces the usual mean-square error. The extended backpropagation algorithm is successfully tested on two problems relevant to the modelling of atomic-mass systematics by neural networks. Provided the underlying mapping is reasonably smooth, neural nets trained with the new procedure are able to learn the true function to a good approximation even in the presence of high levels of Gaussian noise. (author). 26 refs, 2 figs, 5 tabs
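    A minimal sketch of the general idea, assuming a single hidden layer and a per-pattern weight proportional to 1/σ² (normalized to mean one) folded into the on-line update; this is one plausible realization of an error-bar-dependent learning rate, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each target value y comes with its own error bar sigma.
x = rng.uniform(-1.0, 1.0, size=(200, 1))
sigma = rng.uniform(0.05, 0.5, size=(200, 1))
y = np.sin(np.pi * x) + sigma * rng.standard_normal((200, 1))

# Per-pattern weights ~ 1/sigma^2, normalised so the average step size is unchanged.
inv_var = 1.0 / sigma ** 2
weights = inv_var / inv_var.mean()

# One hidden layer of tanh units, linear output, trained with on-line updates.
n_hidden = 10
W1 = 0.5 * rng.standard_normal((1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal((n_hidden, 1)); b2 = np.zeros(1)
eta = 0.01

for epoch in range(300):
    for i in rng.permutation(len(x)):
        h = np.tanh(x[i] @ W1 + b1)
        out = h @ W2 + b2
        delta_out = weights[i] * (out - y[i])          # error-bar-weighted error signal
        delta_h = (delta_out @ W2.T) * (1.0 - h ** 2)
        W2 -= eta * np.outer(h, delta_out); b2 -= eta * delta_out
        W1 -= eta * np.outer(x[i], delta_h); b1 -= eta * delta_h

chi2 = np.mean(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) / sigma) ** 2)
print("reduced chi-square on the training data:", chi2)
```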

  16. Internal-state analysis in layered artificial neural network trained to categorize lung sounds

    NARCIS (Netherlands)

    Oud, M

    2002-01-01

    In regular use of artificial neural networks, only input and output states of the network are known to the user. Weight and bias values can be extracted but are difficult to interpret. We analyzed internal states of networks trained to map asthmatic lung sound spectra onto lung function parameters.

  17. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  18. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in the artificial neural network designed, trained and tested. In order to design the neural network, the robust design of artificial neural networks methodology was used, which assures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time to calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  19. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    International Nuclear Information System (INIS)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R.

    2006-01-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in the artificial neural network designed, trained and tested. In order to design the neural network, the robust design of artificial neural networks methodology was used, which assures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time to calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  20. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  1. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for parameterization of global solar radiation. The available data from twenty-one stations is used for training the neural network and the data from other ten stations is used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data was not used in the training to demonstrate the performance of the neural network in unknown stations to parameterize solar radiation. The results indicate a good agreement between the parameterized solar radiation values and actual measured values

  2. Neutron spectrometry with artificial neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A.; Iniguez de la Torre Bayo, M.P.; Barquero, R.; Arteaga A, T.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions as well as a few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of Artificial Neural Networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  3. Accelerating deep neural network training with inconsistent stochastic gradient descent.

    Science.gov (United States)

    Wang, Linnan; Yang, Yi; Min, Renqiang; Chakradhar, Srimat

    2017-09-01

    Stochastic Gradient Descent (SGD) updates a Convolutional Neural Network (CNN) with a noisy gradient computed from a random batch, and each batch evenly updates the network once in an epoch. This model applies the same training effort to each batch, but it overlooks the fact that the gradient variance, induced by Sampling Bias and Intrinsic Image Difference, renders different training dynamics on batches. In this paper, we develop a new training strategy for SGD, referred to as Inconsistent Stochastic Gradient Descent (ISGD), to address this problem. The core concept of ISGD is inconsistent training, which dynamically adjusts the training effort w.r.t. the loss. ISGD models the training as a stochastic process that gradually reduces the mean of the batch loss, and it utilizes a dynamic upper control limit to identify a large-loss batch on the fly. ISGD stays on the identified batch to accelerate the training with additional gradient updates, and it also has a constraint to penalize drastic parameter changes. ISGD is straightforward, computationally efficient and does not require auxiliary memory. A series of empirical evaluations on real world datasets and networks demonstrate the promising performance of inconsistent training. Copyright © 2017 Elsevier Ltd. All rights reserved.
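    A hedged toy sketch of the control-limit idea: track recent batch losses, and when a batch's loss exceeds the running mean by a chosen number of standard deviations, stay on that batch for a few extra gradient steps. The logistic-regression model, the window length, and the threshold rule below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification problem; a linear model stands in for the CNN.
X = rng.standard_normal((2000, 20))
true_w = rng.standard_normal(20)
y = (X @ true_w + 0.5 * rng.standard_normal(2000) > 0).astype(float)
w = np.zeros(20)

def batch_loss_and_grad(w, xb, yb):
    z = np.clip(xb @ w, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -np.mean(yb * np.log(p + 1e-12) + (1 - yb) * np.log(1 - p + 1e-12))
    grad = xb.T @ (p - yb) / len(yb)
    return loss, grad

lr, batch, k_sigma, extra_updates = 0.1, 64, 2.0, 3
recent = []                                   # short history of batch losses

for epoch in range(5):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        idx = order[start:start + batch]
        loss, grad = batch_loss_and_grad(w, X[idx], y[idx])
        w -= lr * grad
        recent.append(loss)
        recent = recent[-50:]
        # "Inconsistent" part: if this batch's loss sits above the upper control
        # limit, stay on the batch for a few additional gradient steps.
        if len(recent) > 10 and loss > np.mean(recent) + k_sigma * np.std(recent):
            for _ in range(extra_updates):
                _, grad = batch_loss_and_grad(w, X[idx], y[idx])
                w -= lr * grad

print("final mean loss:", batch_loss_and_grad(w, X, y)[0])
```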

  4. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  5. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we supervisedly train neural networks to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show a remarkable success that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
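    For readers unfamiliar with the target quantity, the sketch below evaluates the discrete winding number formula that the trained network is said to rediscover, for an SSH-like two-band Hamiltonian h(k) = (t1 + t2 cos k, t2 sin k); the specific model and grid size are illustrative assumptions, not those of the Letter.

```python
import numpy as np

def winding_number(hx, hy):
    """Discrete winding of the vector (hx, hy) as k sweeps the Brillouin zone."""
    theta = np.arctan2(hy, hx)
    dtheta = np.diff(np.concatenate([theta, theta[:1]]))      # close the loop
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi           # wrap increments
    return int(round(dtheta.sum() / (2 * np.pi)))

# SSH-like chain: h(k) = (t1 + t2*cos k, t2*sin k); winding is 1 if t2 > t1, else 0.
k = np.linspace(0, 2 * np.pi, 400, endpoint=False)
for t1, t2 in [(1.0, 0.5), (0.5, 1.0)]:
    w = winding_number(t1 + t2 * np.cos(k), t2 * np.sin(k))
    print(f"t1={t1}, t2={t2} -> winding number {w}")
```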

  6. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework

    Science.gov (United States)

    Wang, Xiao-Jing

    2016-01-01

    The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, “trained” networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale’s principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity

  7. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....
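    A minimal sketch of the ensemble idea using scikit-learn: train several identically structured networks that differ only in their random initialization and average their class-probability outputs. The dataset, ensemble size and hyperparameters are placeholders, not those of the original study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Train several networks that differ only in their random initialization.
members = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=seed).fit(X_tr, y_tr)
    for seed in range(7)
]

# Ensemble prediction: average the class-probability outputs, then take the argmax.
proba = np.mean([m.predict_proba(X_te) for m in members], axis=0)
ensemble_acc = np.mean(proba.argmax(axis=1) == y_te)
single_acc = np.mean([m.score(X_te, y_te) for m in members])
print(f"mean single-net accuracy: {single_acc:.3f}, ensemble accuracy: {ensemble_acc:.3f}")
```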

  8. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  9. Neutron spectrometry using artificial neural networks

    International Nuclear Information System (INIS)

    Vega-Carrillo, Hector Rene; Martin Hernandez-Davila, Victor; Manzanares-Acuna, Eduardo; Mercado Sanchez, Gema A.; Pilar Iniguez de la Torre, Maria; Barquero, Raquel; Palacios, Francisco; Mendez Villafane, Roberto; Arteaga Arteaga, Tarcicio; Manuel Ortiz Rodriguez, Jose

    2006-01-01

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions as well as a few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing were carried out using the Matlab® program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem

  10. A Fast C++ Implementation of Neural Network Backpropagation Training Algorithm: Application to Bayesian Optimal Image Demosaicing

    Directory of Open Access Journals (Sweden)

    Yi-Qing Wang

    2015-09-01

    Full Text Available Recent years have seen a surge of interest in multilayer neural networks fueled by their successful applications in numerous image processing and computer vision tasks. In this article, we describe a C++ implementation of the stochastic gradient descent to train a multilayer neural network, where a fast and accurate acceleration of tanh(·) is achieved with linear interpolation. As an example of application, we present a neural network able to deliver state-of-the-art performance in image demosaicing.
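    The interpolation trick translates directly to a few lines of NumPy: precompute tanh on a uniform grid and evaluate by linear interpolation between the two nearest table entries. The grid range and table size below are assumptions, not the values used in the article's C++ implementation.

```python
import numpy as np

# Precompute tanh on a uniform grid; outside the grid tanh is effectively +/-1.
LO, HI, N = -8.0, 8.0, 4096
TABLE = np.tanh(np.linspace(LO, HI, N))
STEP = (HI - LO) / (N - 1)

def fast_tanh(x):
    """tanh via table lookup + linear interpolation between neighbouring entries."""
    x = np.clip(x, LO, HI - 1e-12)
    pos = (x - LO) / STEP
    i = pos.astype(int)
    frac = pos - i
    return TABLE[i] * (1.0 - frac) + TABLE[i + 1] * frac

x = np.random.default_rng(0).uniform(-10, 10, 100000)
print("max absolute interpolation error:", np.max(np.abs(fast_tanh(x) - np.tanh(x))))
```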

  11. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    Science.gov (United States)

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.

  12. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  13. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    International Nuclear Information System (INIS)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine; Minca, Eugenia; Filip, Florin

    2009-01-01

    In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is then boosted by the Fuzzy Min-Max technique.
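    A hedged sketch of the center-refinement step only: explicit initial centers (standing in for the Fuzzy Min-Max result, which is not reproduced here) are passed to scikit-learn's KMeans, whose refined cluster centers would then serve as the basis-function centers of the RBF prediction layer. The time series, window length and number of centers are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)

# Embed the time series so each sample is a short window of past values.
window = 5
X = np.array([series[i:i + window] for i in range(len(series) - window)])

# Stand-in for the Fuzzy Min-Max initialization: crude initial centers (hypothetical).
init_centers = X[rng.choice(len(X), size=8, replace=False)]

# k-means refines the centers by minimising the within-cluster squared error.
km = KMeans(n_clusters=8, init=init_centers, n_init=1, random_state=0).fit(X)
centers = km.cluster_centers_
widths = np.array([np.mean(np.linalg.norm(X[km.labels_ == j] - c, axis=1))
                   for j, c in enumerate(centers)])
print("refined RBF centres shape:", centers.shape, "mean width:", widths.mean())
```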

  14. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data ...

  15. Online Sequence Training of Recurrent Neural Networks with Connectionist Temporal Classification

    OpenAIRE

    Hwang, Kyuyeon; Sung, Wonyong

    2015-01-01

    Connectionist temporal classification (CTC) based supervised sequence training of recurrent neural networks (RNNs) has shown great success in many machine learning areas including end-to-end speech and handwritten character recognition. For the CTC training, however, it is required to unroll (or unfold) the RNN by the length of an input sequence. This unrolling requires a lot of memory and hinders a small footprint implementation of online learning or adaptation. Furthermore, the length of tr...

  16. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  17. Reconstruction of sparse connectivity in neural networks from spike train covariances

    International Nuclear Information System (INIS)

    Pernice, Volker; Rotter, Stefan

    2013-01-01

    The inference of causation from correlation is in general highly problematic. Correspondingly, it is difficult to infer the existence of physical synaptic connections between neurons from correlations in their activity. Covariances in neural spike trains and their relation to network structure have been the subject of intense research, both experimentally and theoretically. The influence of recurrent connections on covariances can be characterized directly in linear models, where connectivity in the network is described by a matrix of linear coupling kernels. However, as indirect connections also give rise to covariances, the inverse problem of inferring network structure from covariances can generally not be solved unambiguously. Here we study to what degree this ambiguity can be resolved if the sparseness of neural networks is taken into account. To reconstruct a sparse network, we determine the minimal set of linear couplings consistent with the measured covariances by minimizing the L1 norm of the coupling matrix under appropriate constraints. Contrary to intuition, after stochastic optimization of the coupling matrix, the resulting estimate of the underlying network is directed, despite the fact that a symmetric matrix of count covariances is used for inference. The performance of the new method is best if connections are neither exceedingly sparse, nor too dense, and it is easily applicable for networks of a few hundred nodes. Full coupling kernels can be obtained from the matrix of full covariance functions. We apply our method to networks of leaky integrate-and-fire neurons in an asynchronous–irregular state, where spike train covariances are well described by a linear model. (paper)

  18. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultra-violet to the thermal infrared regions. These methods analyze reflectance on a pixel by pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using the bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples for improving the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural network generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a neural network based on directional reflectance spectra to help us understand the data from another perspective. At the end of this paper, we compare the difference among several classifiers and analyze the trade-offs among neural network parameters.
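    The 4-D-to-2-D reformulation amounts to a reshape: flatten the incident-direction indices into image rows and the reflected-direction indices into image columns, keeping channels as the third axis. The angular grid in this sketch is a hypothetical placeholder, not the sampling used in the paper.

```python
import numpy as np

# Hypothetical BRDF samples on a coarse angular grid: 4-D in the angles
# (theta_i, phi_i, theta_r, phi_r), plus a channel axis.
n_theta, n_phi, n_channels = 6, 12, 3
rng = np.random.default_rng(0)
brdf = rng.random((n_theta, n_phi, n_theta, n_phi, n_channels))

# Flatten incident angles to rows and reflected angles to columns, giving an
# "image" of shape (incident direction, reflected direction, channels) that a
# CNN can consume like an ordinary multi-channel picture.
image = brdf.reshape(n_theta * n_phi, n_theta * n_phi, n_channels)
print("directional-reflectance image shape:", image.shape)   # (72, 72, 3)
```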

  19. Application of artificial neural network in radiographic diagnosis

    International Nuclear Information System (INIS)

    Piraino, D.; Amartur, S.; Richmond, B.; Schils, J.; Belhobek, G.

    1990-01-01

    This paper reports on an artificial neural network trained to rate the likelihood of different bone neoplasms when given a standard description of a radiograph. A three-layer back propagation algorithm was trained with descriptions of examples of bone neoplasms obtained from standard radiographic textbooks. Fifteen bone neoplasms obtained from clinical material were used as unknowns to test the trained artificial neural network. The artificial neural network correctly rated the pathologic diagnosis as the most likely diagnosis in 10 of the 15 unknown cases

  20. Deep Convolutional Neural Networks: Structure, Feature Extraction and Training

    Directory of Open Access Journals (Sweden)

    Namatēvs Ivars

    2017-12-01

    Full Text Available Deep convolutional neural networks (CNNs) are aimed at processing data that have a known, network-like topology. They are widely used to recognise objects in images and diagnose patterns in time series data as well as in sensor data classification. The aim of the paper is to present theoretical and practical aspects of deep CNNs in terms of convolution operation, typical layers and basic methods to be used for training and learning. Some practical applications are included for signal and image classification. Finally, the present paper describes the proposed block structure of CNN for classifying crucial features from 3D sensor data.

  1. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  2. Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm

    Directory of Open Access Journals (Sweden)

    Haizhou Wu

    2016-01-01

    Full Text Available Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of the aforementioned method, eight different datasets selected from the UCI machine learning repository are employed for experiments and the results are compared among seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of converging speed. It is also proven that an FNN trained by the method of SOS has better accuracy than those trained by most of the compared algorithms.

  3. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  4. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?

    OpenAIRE

    Tajbakhsh, Nima; Shin, Jae Y.; Gurudu, Suryakanth R.; Hurst, R. Todd; Kendall, Christopher B.; Gotway, Michael B.; Liang, Jianming

    2017-01-01

    Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following centr...

  5. Foreign currency rate forecasting using neural networks

    Science.gov (United States)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP as well as DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecasted solely using past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
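    A hedged sketch of the input-representation comparison on a synthetic series standing in for the USD-GBP data: the same small network is trained once on windows of raw rates and once on windows of rate differences. Note that the two reported errors are in different units (price levels versus daily changes), so a fair comparison would map both back to price levels; the lag length and network size are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rate = 1.5 + np.cumsum(0.001 * rng.standard_normal(1500))   # synthetic USD-GBP stand-in
lags, split = 5, 1000

def make_dataset(series):
    """Past `lags` values as inputs, the next value as the target."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    return X, series[lags:]

# Two ways to present the data to the network: raw rates vs. day-to-day differences.
for name, series in [("raw rates", rate), ("rate differences", np.diff(rate))]:
    X, y = make_dataset(series)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    net.fit(X[:split], y[:split])
    mae = np.mean(np.abs(net.predict(X[split:]) - y[split:]))
    print(f"{name}: held-out MAE (in the presented units) = {mae:.6f}")
```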

  6. Generating Seismograms with Deep Neural Networks

    Science.gov (United States)

    Krischer, L.; Fichtner, A.

    2017-12-01

    The recent surge of successful uses of deep neural networks in computer vision, speech recognition, and natural language processing, mainly enabled by the availability of fast GPUs and extremely large data sets, is starting to see many applications across all natural sciences. In seismology these are largely confined to classification and discrimination tasks. In this contribution we explore the use of deep neural networks for another class of problems: so called generative models.Generative modelling is a branch of statistics concerned with generating new observed data samples, usually by drawing from some underlying probability distribution. Samples with specific attributes can be generated by conditioning on input variables. In this work we condition on seismic source (mechanism and location) and receiver (location) parameters to generate multi-component seismograms.The deep neural networks are trained on synthetic data calculated with Instaseis (http://instaseis.net, van Driel et al. (2015)) and waveforms from the global ShakeMovie project (http://global.shakemovie.princeton.edu, Tromp et al. (2010)). The underlying radially symmetric or smoothly three dimensional Earth structures result in comparatively small waveform differences from similar events or at close receivers and the networks learn to interpolate between training data samples.Of particular importance is the chosen misfit functional. Generative adversarial networks (Goodfellow et al. (2014)) implement a system in which two networks compete: the generator network creates samples and the discriminator network distinguishes these from the true training examples. Both are trained in an adversarial fashion until the discriminator can no longer distinguish between generated and real samples. We show how this can be applied to seismograms and in particular how it compares to networks trained with more conventional misfit metrics. Last but not least we attempt to shed some light on the black-box nature of

  7. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Ideally, there are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to the fact that the learning process is carried out by computer programs. In short, a neural network learns from representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  8. Neutron spectrum unfolding using neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2004-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources, reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the corresponding spectrum was used as output during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer and 31 neurons in the output layer. After training, the network was tested with the Bonner spheres count rates produced by twelve neutron spectra. The network allows unfolding of the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the test count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates obtained from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)
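    A minimal sketch of the 7-56-31 architecture with scikit-learn, using random placeholder data in place of the IAEA spectra and the UTA4 response matrix, which are not reproduced here; folding the spectra through a random stand-in response matrix plays the role of generating the Bonner-sphere count rates used as inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data standing in for the real training set: rows of `spectra` play
# the role of the 31-group re-binned neutron spectra, and `response` stands in
# for the 7 x 31 Bonner-sphere response matrix (the real one is UTA4).
n_spectra, n_groups, n_spheres = 129, 31, 7
spectra = rng.random((n_spectra, n_groups))
spectra /= spectra.sum(axis=1, keepdims=True)          # normalise each spectrum
response = rng.random((n_spheres, n_groups))
count_rates = spectra @ response.T                     # folded count rates: inputs

# 7 inputs -> 56 hidden neurons -> 31 outputs, as in the abstract.
net = MLPRegressor(hidden_layer_sizes=(56,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(count_rates, spectra)

unfolded = net.predict(count_rates[:1])                # unfold one measurement
print("reconstruction error on the placeholder data:", np.abs(unfolded - spectra[:1]).max())
```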

  9. Fast neutron spectra determination by threshold activation detectors using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Koohi-Fayegh, R.; Setayeshi, S.; Ghiassi-Nejad, M.

    2004-01-01

    A neural network method was used for fast neutron spectra unfolding in spectrometry by threshold activation detectors. The input layer of the neural networks consisted of 11 neurons for the specific activities of neutron-induced nuclear reaction products, while the output layers were fast neutron spectra which had been subdivided into 6, 8, 10, 12, 15 and 20 energy bins. Neural network training was performed with 437 fast neutron spectra and the corresponding threshold activation detector readings. The trained neural networks have been applied for unfolding 50 spectra, which were not in the training sets, and the results were compared with the real spectra and with spectra unfolded by SANDII. The best results belong to the 10 energy bin spectra. The neural network was also trained with detector readings with 5% uncertainty, and the response of the trained neural network to detector readings with 5%, 10%, 15%, 20%, 25% and 50% uncertainty was compared with the real spectra. The neural network algorithm, in comparison with other unfolding methods, is very fast, requires neither a detector response matrix nor any prior information about the spectra, and its outputs have low sensitivity to uncertainty in the activity measurements. The results show that the neural network algorithm is useful when a fast response is required with reasonable accuracy

  10. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed

  11. Thermoelastic steam turbine rotor control based on neural network

    Science.gov (United States)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. Such neural networks are algorithms which can be implemented on PLC controllers. This allows for the application of neural networks to control steam turbine stress in industrial power plants.

  12. Artificial Neural Network with Hardware Training and Hardware Refresh

    Science.gov (United States)

    Duong, Tuan A. (Inventor)

    2003-01-01

    A neural network circuit is provided having a plurality of circuits capable of charge storage. Also provided is a plurality of circuits each coupled to at least one of the plurality of charge storage circuits and constructed to generate an output in accordance with a neuron transfer function. Each of a plurality of circuits is coupled to one of the plurality of neuron transfer function circuits and constructed to generate a derivative of the output. A weight update circuit updates the charge storage circuits based upon output from the plurality of transfer function circuits and output from the plurality of derivative circuits. In preferred embodiments, separate training and validation networks share the same set of charge storage circuits and may operate concurrently. The validation network has separate transfer function circuits, each being coupled to the charge storage circuits so as to replicate the training network's coupling of the plurality of charge storage circuits to the plurality of transfer function circuits. The plurality of transfer function circuits may be constructed each having a transconductance amplifier providing differential currents combined to provide an output in accordance with a transfer function. The derivative circuits may have a circuit constructed to generate biased differential currents combined so as to provide the derivative of the transfer function.

  13. LAI inversion from optical reflectance using a neural network trained with a multiple scattering model

    Science.gov (United States)

    Smith, James A.

    1992-01-01

    The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) is considered; inversion errors can exceed 1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.

  14. Estimation of Collapse Moment for Wall Thinned Elbows Using Fuzzy Neural Networks

    International Nuclear Information System (INIS)

    Na, Man Gyun; Kim, Jin Weon; Shin, Sun Ho; Kim, Koung Suk; Kang, Ki Soo

    2004-01-01

    In this work, the collapse moment due to wall-thinning defects is estimated by using fuzzy neural networks. The developed fuzzy neural networks have been applied to the numerical data obtained from the finite element analysis. Principal component analysis is used to preprocess the input signals to the fuzzy neural network to reduce the sensitivity to input changes, and the fuzzy neural networks are trained by using the data set prepared for training (training data) and verified by using another data set different (independent) from the training data. Also, two fuzzy neural networks are trained for two data sets divided into the two classes of extrados and intrados defects, because these classes have different characteristics. The relative 2-sigma errors of the estimated collapse moment are 3.07% for the training data and 4.12% for the test data. It is known from this result that the fuzzy neural networks are sufficiently accurate to be used in the wall-thinning monitoring of elbows

  15. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, as the objective function for the training process of the neural network we employed the residuals of the integral equation or of the differential equations. This is different from conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems, both methods gave satisfactory results, and the methods are considered promising for certain kinds of problems. (author)
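    A hedged toy version of using the equation residual as the training objective: a small tanh network parameterizes a trial solution of y' = -y with y(0) = 1, and the mean squared residual at collocation points is minimized. The trial-solution form and the use of scipy's BFGS optimizer (rather than backpropagation-style training) are assumptions made only to keep the sketch short; they are not the authors' exact setup.

```python
import numpy as np
from scipy.optimize import minimize

# Residual-as-objective toy: solve y' = -y, y(0) = 1 on [0, 2]
# with a trial solution y(x) = 1 + x * N(x), where N is a small tanh network.
H = 8                                   # hidden units
x = np.linspace(0.0, 2.0, 50)           # collocation points

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:]  # w, b, v

def residual(p):
    w, b, v = unpack(p)
    t = np.tanh(np.outer(x, w) + b)     # (50, H) hidden activations
    N = t @ v                           # network output N(x)
    dN = (1.0 - t ** 2) @ (v * w)       # dN/dx by the chain rule
    y = 1.0 + x * N                     # trial solution satisfies y(0) = 1 exactly
    dy = N + x * dN
    return dy + y                       # residual of y' + y = 0 at each point

def objective(p):
    return np.mean(residual(p) ** 2)

rng = np.random.default_rng(0)
result = minimize(objective, 0.1 * rng.standard_normal(3 * H), method="BFGS")
w, b, v = unpack(result.x)
y = 1.0 + x * (np.tanh(np.outer(x, w) + b) @ v)
print("max error vs exact exp(-x):", np.max(np.abs(y - np.exp(-x))))
```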

  16. Neural-Network Object-Recognition Program

    Science.gov (United States)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  17. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  18. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A.; Gallego, E.; Lorente, A.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  19. Training algorithms evaluation for artificial neural network to temporal prediction of photovoltaic generation

    International Nuclear Information System (INIS)

    Arantes Monteiro, Raul Vitor; Caixeta Guimarães, Geraldo; Rocio Castillo, Madeleine; Matheus Moura, Fabrício Augusto; Tamashiro, Márcio Augusto

    2016-01-01

    Current energy policies are encouraging the connection of power generation based on low-polluting technologies, mainly those using renewable sources, to distribution networks. Hence, it becomes increasingly important to understand the technical challenges posed by high penetration of PV systems in the grid, especially considering the effects of the intermittence of this source on the power quality, reliability and stability of the electric distribution system. This intermittence can affect the distribution networks to which the systems are attached, causing overvoltage, undervoltage and frequency oscillations. In order to predict these disturbances, artificial neural networks are used. This article analyzes 3 training algorithms used in artificial neural networks for temporal prediction of the active power generated by photovoltaic panels. It was concluded that the algorithm with the best performance among the 3 analyzed was Levenberg-Marquardt.

  20. On the use of harmony search algorithm in the training of wavelet neural networks

    Science.gov (United States)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2015-10-01

    Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.
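
    The core harmony search loop that such a training scheme relies on can be sketched as follows. The objective function, memory size and parameter values are hypothetical placeholders standing in for a WNN's classification error; the disjoint-partition initialization and the WNN itself are not reproduced here.

      import numpy as np

      # Basic harmony search: memory consideration, pitch adjustment, random
      # selection, and replacement of the worst harmony.
      rng = np.random.default_rng(1)

      def objective(w):                  # hypothetical fitness: lower is better
          return float(np.sum((w - 0.3) ** 2))

      dim, hms, hmcr, par, bw, iters = 10, 20, 0.9, 0.3, 0.05, 500
      memory = rng.uniform(-1, 1, size=(hms, dim))        # harmony memory
      scores = np.array([objective(h) for h in memory])

      for _ in range(iters):
          new = np.empty(dim)
          for j in range(dim):
              if rng.random() < hmcr:                     # memory consideration
                  new[j] = memory[rng.integers(hms), j]
                  if rng.random() < par:                  # pitch adjustment
                      new[j] += rng.uniform(-bw, bw)
              else:                                       # random selection
                  new[j] = rng.uniform(-1, 1)
          worst = int(np.argmax(scores))
          if objective(new) < scores[worst]:              # replace worst harmony
              memory[worst], scores[worst] = new, objective(new)

      print("best fitness found:", float(scores.min()))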

  1. Neural network segmentation of magnetic resonance images

    International Nuclear Information System (INIS)

    Frederick, B.

    1990-01-01

    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network; by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. This paper reports that a neural network classifier for image segmentation was implemented on a Sun 4/60, and was tested on the task of classifying tissues of canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone, and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier

  2. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the neural network attractive. A neural network is an information processing system modeled on the structure of the dynamic process. It can solve the complex/nonlinear problems quickly once trained by operating on problems using an interconnected number...

  3. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way human neurons process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are among the most important factors that influence neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing their performance with that of other classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we examine them on the classification of four opposite binary XOR clusters and on the classification of continuous real data sets such as Iris and Ecoli.
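
    As a rough illustration of evolutionary weight optimization on the XOR task mentioned above, the sketch below uses a simple keep-the-best strategy with Gaussian mutation on a 2-3-1 network; it is not any of the specific algorithms compared in the paper, and all constants are illustrative.

      import numpy as np

      # Evolutionary weight optimisation on XOR with a (10 + 20) keep-the-best
      # strategy and Gaussian mutation.
      rng = np.random.default_rng(2)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0.0, 1.0, 1.0, 0.0])

      def forward(w, x):
          W1, b1 = w[:6].reshape(2, 3), w[6:9]
          W2, b2 = w[9:12], w[12]
          h = np.tanh(x @ W1 + b1)
          return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

      def fitness(w):                        # mean squared error: lower is better
          return float(np.mean((forward(w, X) - y) ** 2))

      pop = rng.normal(size=(30, 13))
      for generation in range(300):
          scores = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(scores)[:10]]          # keep the 10 best
          children = parents[rng.integers(10, size=20)] \
                     + rng.normal(scale=0.2, size=(20, 13))
          pop = np.vstack([parents, children])

      best = pop[np.argmin([fitness(w) for w in pop])]
      print("XOR outputs:", np.round(forward(best, X), 2))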

  4. Modeling of steam generator in nuclear power plant using neural network ensemble

    International Nuclear Information System (INIS)

    Lee, S. K.; Lee, E. C.; Jang, J. W.

    2003-01-01

    Neural networks are now being used in modeling the steam generator, which is known to be difficult due to its reverse dynamics. However, neural networks are prone to the problem of overfitting. This paper investigates the use of neural network combining methods to model steam generator water level and compares them with a single neural network. The results show that the neural network ensemble is an effective tool which can offer improved generalization, lower dependence on the training set and reduced training time
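
    The combining idea can be sketched as follows. To keep the example short, the base learner is a closed-form ridge regressor on synthetic data rather than a neural network trained on steam generator data; only the bagging-and-averaging mechanics are shown.

      import numpy as np

      # Each ensemble member is trained on a bootstrap resample; predictions are
      # averaged at the end.
      rng = np.random.default_rng(3)
      X = rng.random((200, 4))
      y = np.sin(X @ np.array([1.0, -2.0, 0.5, 1.5])) + 0.05 * rng.normal(size=200)

      def train_ridge(Xb, yb, lam=1e-2):
          A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
          return np.linalg.solve(A, Xb.T @ yb)

      members = []
      for _ in range(10):
          idx = rng.integers(len(X), size=len(X))         # bootstrap resample
          members.append(train_ridge(X[idx], y[idx]))

      single   = X @ members[0]
      ensemble = np.mean([X @ w for w in members], axis=0)
      print("single-member MSE:", float(np.mean((single - y) ** 2)))
      print("ensemble MSE:     ", float(np.mean((ensemble - y) ** 2)))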

  5. Neural networks and orbit control in accelerators

    International Nuclear Information System (INIS)

    Bozoki, E.; Friedman, A.

    1994-01-01

    An overview of the architecture, workings and training of Neural Networks is given. We stress the aspects which are important for the use of Neural Networks for orbit control in accelerators and storage rings, especially their ability to cope with the nonlinear behavior of the orbit response to 'kicks' and the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given

  6. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    Science.gov (United States)

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  7. Multi-modular neural networks for the classification of e+e- hadronic events

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Some multi-modular neural network methods of classifying e+e- hadronic events are presented. We compare the performances of the following neural networks: MLP (multilayer perceptron), MLP and LVQ (learning vector quantization) trained sequentially, and MLP and RBF (radial basis function) trained sequentially. We introduce an MLP-RBF cooperative neural network. Our last study is a multi-MLP neural network. (orig.)

  8. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 till the present. The neural network Fortran code used is available for download.
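
    A sketch of the input/output layout described above (latitude, pressure, time of year and CH4 v.m.r. in, N2O out) is shown below, using scikit-learn's MLPRegressor with one hidden layer of eight nodes as a stand-in for the original Quickprop/Fortran implementation; the data are synthetic placeholders.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Synthetic stand-in data; only the input/output layout and the 8-node
      # hidden layer follow the record above.
      rng = np.random.default_rng(4)
      lat   = rng.uniform(-90, 90, 1000)
      pres  = rng.uniform(10, 300, 1000)                  # hPa
      month = rng.uniform(0, 12, 1000)
      ch4   = rng.uniform(0.6, 1.8, 1000)                 # ppmv
      n2o   = 0.18 * ch4 + 1e-4 * pres + 1e-3 * np.cos(2 * np.pi * month / 12)

      X = np.column_stack([lat, pres, month, ch4])
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                         random_state=0))
      model.fit(X, n2o)
      print("R^2 on training data:", round(model.score(X, n2o), 4))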

  9. Deep learning quick reference useful hacks for training and optimizing deep neural networks with TensorFlow and Keras

    CERN Document Server

    Bernico, Michael

    2018-01-01

    This book is a practical guide to applying deep neural networks including MLPs, CNNs, LSTMs, and more in Keras and TensorFlow. Packed with useful hacks to solve real-world challenges along with the supported math and theory around each topic, this book will be a quick reference for training and optimizing your deep neural networks.

  10. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  11. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  12. Wind power forecast using wavelet neural network trained by improved Clonal selection algorithm

    International Nuclear Information System (INIS)

    Chitsaz, Hamed; Amjady, Nima; Zareipour, Hamidreza

    2015-01-01

    Highlights: • Presenting a Morlet wavelet neural network for wind power forecasting. • Proposing improved Clonal selection algorithm for training the model. • Applying Maximum Correntropy Criterion to evaluate the training performance. • Extensive testing of the proposed wind power forecast method on real-world data. - Abstract: With the integration of wind farms into electric power grids, an accurate wind power prediction is becoming increasingly important for the operation of these power plants. In this paper, a new forecasting engine for wind power prediction is proposed. The proposed engine has the structure of Wavelet Neural Network (WNN) with the activation functions of the hidden neurons constructed based on multi-dimensional Morlet wavelets. This forecast engine is trained by a new improved Clonal selection algorithm, which optimizes the free parameters of the WNN for wind power prediction. Furthermore, Maximum Correntropy Criterion (MCC) has been utilized instead of Mean Squared Error as the error measure in training phase of the forecasting model. The proposed wind power forecaster is tested with real-world hourly data of system level wind power generation in Alberta, Canada. In order to demonstrate the efficiency of the proposed method, it is compared with several other wind power forecast techniques. The obtained results confirm the validity of the developed approach

  13. An Evolutionary Optimization Framework for Neural Networks and Neuromorphic Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL; Plank, James [University of Tennessee (UT); Disney, Adam [University of Tennessee (UT); Reynolds, John [University of Tennessee (UT)

    2016-01-01

    As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.

  14. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations without the need for training. Nonlinear flight dynamic systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses the linear system knowledge to speed up the training process. The technique is tested on different flight/space dynamic models and showed promising results.
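
    The "no training needed" idea for linear dynamics can be sketched as a single least-squares solve: if the model is one linear layer mapping the current state and input to the next state, its weights follow directly from simulated data. The two-state system below is a hypothetical example, not one of the paper's test cases.

      import numpy as np

      # Hypothetical 2-state, 1-input linear system: the linear "layer"
      # x_{k+1} = W @ [x_k, u_k, 1] is identified by one least-squares solve,
      # with no iterative training.
      rng = np.random.default_rng(5)
      A = np.array([[0.98, 0.05], [-0.10, 0.95]])         # assumed dynamics
      B = np.array([[0.0], [0.1]])

      X, U = [rng.normal(size=2)], []
      for _ in range(500):
          u = rng.normal(size=1)
          U.append(u)
          X.append(A @ X[-1] + B @ u)
      X, U = np.array(X), np.array(U)

      inputs  = np.hstack([X[:-1], U, np.ones((500, 1))])     # [x_k, u_k, 1]
      targets = X[1:]                                          # x_{k+1}
      W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)     # direct solve

      print("recovered A:\n", W[:2].T.round(3))
      print("recovered B:\n", W[2:3].T.round(3))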

  15. Accident scenario diagnostics with neural networks

    International Nuclear Information System (INIS)

    Guo, Z.

    1992-01-01

    Nuclear power plants are very complex systems. The diagnosis of transients or accident conditions is very difficult because a large amount of information, which is often noisy, intermittent, or even incomplete, needs to be processed in real time. To demonstrate their potential application to nuclear power plants, neural networks are used to monitor the accident scenarios simulated by the training simulator of TVA's Watts Bar Nuclear Power Plant. A self-organization network is used to compress the original data to reduce the total number of training patterns. Different accident scenarios are closely related to different key parameters which distinguish one accident scenario from another. Therefore, the accident scenarios can be monitored by a set of small neural networks, called modular networks, each of which monitors only one assigned accident scenario, to obtain fast training and recall. Sensitivity analysis is applied to select proper input variables for the modular networks

  16. Neutron spectra unfolding in Bonner spheres spectrometry using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Setayeshi, S.; Koohi-Fayegh, R.; Ghiassi-Nejad, M.

    2003-01-01

    The neural network method has been used for the unfolding of neutron spectra in neutron spectrometry by Bonner spheres. A back-propagation algorithm was used for training the neural networks. A 4 mm x 4 mm LiI(Eu) detector, bare and in a polyethylene sphere set of 2, 3, 4, 5, 6, 7, 8, 10, 12 and 18 inch diameters, was used for unfolding the neutron spectra. Neural networks were trained with 199 sets of neutron spectra, which were subdivided into 6, 8, 10, 12, 15 and 20 energy bins, and for each of these an appropriate neural network was designed and trained. The validation was performed with 21 sets of neutron spectra. A neural network with 10 energy bins, which had a mean error of 6% for dose equivalent estimation of the spectra in the validation set, showed the best results. The obtained results show that neural networks can be applied as an effective method for unfolding neutron spectra, especially when the main target is neutron dosimetry. (author)

  17. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  18. Tensor Basis Neural Network v. 1.0 (beta)

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-28

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.

  19. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In the present study, an artificial neural network method has been applied to predict the stability of berm breakwaters. Four neural network models are constructed based on the parameters which influence the stability of breakwater. Training...

  20. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    Directory of Open Access Journals (Sweden)

    Eiji Watanabe

    2018-03-01

    Full Text Available The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  1. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference between this approach and existing ones is that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images, following the same steps used for the artificial circle images.

  2. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed

  3. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W → q q-bar, where W bosons are produced in p p-bar reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using 'intelligent' variables in instances when the amount of training instances is limited. (orig.)

  4. Prediction of proteasome cleavage motifs by neural networks

    DEFF Research Database (Denmark)

    Kesimir, C.; Nussbaum, A.K.; Schild, H.

    2002-01-01

    physiological conditions. Our algorithm has been trained not only on in vitro data, but also on MHC Class I ligand data, which reflect a combination of immunoproteasome and constitutive proteasome specificity. This feature, together with the use of neural networks, a non-linear classification technique, makes...... the prediction of MHC Class I ligand boundaries more accurate: 65% of the cleavage sites and 85% of the non-cleavage sites are correctly determined. Moreover, we show that the neural networks trained on the constitutive proteasome data learn a specificity that differs from that of the networks trained on MHC...

  5. Analysis Resilient Algorithm on Artificial Neural Network Backpropagation

    Science.gov (United States)

    Saputra, Widodo; Tulus; Zarlis, Muhammad; Widia Sembiring, Rahmat; Hartama, Dedy

    2017-12-01

    Prediction is required by decision makers to anticipate future planning. Artificial Neural Network (ANN) Backpropagation is one such method. This method, however, still has a weakness: long training time. This is a reason to improve the method so as to accelerate the training. One variant of the Artificial Neural Network (ANN) Backpropagation method is the resilient method. The resilient method changes the network weights and biases through a direct adaptation process of the weights based on local gradient information from every learning iteration. Prediction results for the Istanbul Stock Exchange training data improve: the Mean Square Error (MSE) value becomes smaller and the accuracy increases.
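
    A minimal sketch of a resilient (Rprop-style) update rule is given below: each weight keeps its own step size, which grows while the gradient sign is stable and shrinks when it flips, and the update uses only the sign of the gradient. The objective and constants are illustrative, and weight-backtracking is omitted.

      import numpy as np

      # Rprop-style update on a simple quadratic stand-in loss.
      rng = np.random.default_rng(6)
      w = rng.normal(size=5)
      step = np.full(5, 0.1)
      prev_grad = np.zeros(5)
      target = np.array([1.0, -2.0, 0.5, 0.0, 3.0])

      def grad(w):                      # gradient of the stand-in loss
          return 2.0 * (w - target)

      for _ in range(100):
          g = grad(w)
          same_sign = prev_grad * g
          step = np.where(same_sign > 0, np.minimum(step * 1.2, 50.0), step)  # accelerate
          step = np.where(same_sign < 0, np.maximum(step * 0.5, 1e-6), step)  # back off
          w -= np.sign(g) * step        # update uses only the sign of the gradient
          prev_grad = g

      print("weights after Rprop-style updates:", w.round(3))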

  6. EEG signal classification using PSO trained RBF neural network for epilepsy identification

    Directory of Open Access Journals (Sweden)

    Sandeep Kumar Satapathy

    Full Text Available The electroencephalogram (EEG is a low amplitude signal generated in the brain, as a result of information flow during the communication of several neurons. Hence, careful analysis of these signals could be useful in understanding many human brain disorder diseases. One such disease topic is epileptic seizure identification, which can be identified via a classification process of the EEG signal after preprocessing with the discrete wavelet transform (DWT. To classify the EEG signal, we used a radial basis function neural network (RBFNN. As shown herein, the network can be trained to optimize the mean square error (MSE by using a modified particle swarm optimization (PSO algorithm. The key idea behind the modification of PSO is to introduce a method to overcome the problem of slow searching in and around the global optimum solution. The effectiveness of this procedure was verified by an experimental analysis on a benchmark dataset which is publicly available. The result of our experimental analysis revealed that the improvement in the algorithm is significant with respect to RBF trained by gradient descent and canonical PSO. Here, two classes of EEG signals were considered: the first being an epileptic and the other being non-epileptic. The proposed method produced a maximum accuracy of 99% as compared to the other techniques. Keywords: Electroencephalography, Radial basis function neural network, Particle swarm optimization, Discrete wavelet transform, Machine learning
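
    The PSO-trains-RBF idea can be sketched as follows on a toy one-dimensional two-class problem; the EEG/DWT pipeline and the paper's modified PSO are not reproduced, only the basic particle update over the flattened RBF parameters.

      import numpy as np

      # PSO fitting a 4-centre RBF classifier on a toy two-class problem.
      rng = np.random.default_rng(7)
      x = np.linspace(-3, 3, 120)
      labels = (np.abs(x) < 1.2).astype(float)          # hypothetical two-class target

      def rbf_out(params, x):
          c = params[:4]; s = np.abs(params[4:8]) + 0.1; w = params[8:12]; b = params[12]
          phi = np.exp(-((x[:, None] - c) ** 2) / (2 * s ** 2))
          return 1.0 / (1.0 + np.exp(-(phi @ w + b)))

      def mse(params):
          return float(np.mean((rbf_out(params, x) - labels) ** 2))

      n_particles, dim = 25, 13
      pos = rng.normal(size=(n_particles, dim))
      vel = np.zeros_like(pos)
      pbest = pos.copy(); pbest_val = np.array([mse(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(200):
          r1, r2 = rng.random((2, n_particles, dim))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos += vel
          vals = np.array([mse(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved] = pos[improved]; pbest_val[improved] = vals[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("best training MSE:", float(pbest_val.min()))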

  7. Landslide Susceptibility Index Determination Using Artificial Neural Network

    Science.gov (United States)

    Kawabata, D.; Bandibas, J.; Urai, M.

    2004-12-01

    The occurrence of landslides is the result of the interaction of complex and diverse environmental factors. Geomorphic features, rock types and geologic structure are especially important base factors of landslide occurrence. Generating a landslide susceptibility index by defining the relationship between landslide occurrence and these base factors using conventional mathematical and statistical methods is very difficult and inaccurate. This study focuses on generating a landslide susceptibility index using artificial neural networks in the Southern Japanese Alps. The training data are geomorphic (e.g. altitude, slope and aspect) and geologic parameters (e.g. rock type, distance from geologic boundary and geologic dip-strike angle) and landslides. The artificial neural network structure and training scheme are formulated to generate the index. Data from areas with and without landslide occurrences are used to train the network. The network is trained to output 1 when the input data are from areas with landslides and 0 when no landslide occurred. The trained network generates an output ranging from 0 to 1 reflecting the possibility of landslide occurrence based on the input data. Output values nearer to 1 mean a higher possibility of landslide occurrence. The artificial neural network model is incorporated into GIS software to generate a landslide susceptibility map.

  8. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than scatter in the data. A method is proposed that provides improvements in the accuracy achieved during training and the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)

  9. Towards dropout training for convolutional neural networks.

    Science.gov (United States)

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
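
    The equivalence described above can be illustrated for a single pooling region: sampling max-pooling dropout at training time, and taking the corresponding multinomial expectation (probabilistic weighted pooling) at test time. The activations and retention probability below are arbitrary placeholders, not values from the paper.

      import numpy as np

      # One pooling region: max-pooling dropout vs. its expectation.
      rng = np.random.default_rng(8)
      acts = np.array([0.2, 0.9, 0.5, 0.7])      # activations in one pooling region
      p = 0.5                                    # retention probability

      def train_time_pool(acts, p):
          mask = rng.random(acts.shape) < p      # drop units, then take the max
          kept = acts * mask
          return kept.max() if mask.any() else 0.0

      # Probability that the i-th largest activation is the max of the retained
      # units: all larger activations dropped, this one kept.
      order = np.argsort(acts)[::-1]
      probs = np.array([p * (1 - p) ** i for i in range(len(acts))])

      def test_time_pool(acts):
          return float(np.sum(acts[order] * probs))   # expectation over dropout masks

      samples = [train_time_pool(acts, p) for _ in range(20000)]
      print("empirical mean of max-pooling dropout:", np.mean(samples))
      print("probabilistic weighted pooling output:", test_time_pool(acts))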

  10. Potential usefulness of an artificial neural network for assessing ventricular size

    International Nuclear Information System (INIS)

    Fukuda, Haruyuki; Nakajima, Hideyuki; Usuki, Noriaki; Saiwai, Shigeo; Miyamoto, Takeshi; Inoue, Yuichi; Onoyama, Yasuto.

    1995-01-01

    An artificial neural network approach was applied to assess ventricular size from computed tomograms. Three-layer, feed-forward neural networks with a back-propagation algorithm were designed to distinguish between three degrees of enlargement of the ventricles on the basis of the patient's age and six items of computed tomographic information. Data for training and testing the neural network were created from computed tomograms of brains selected at random from daily examinations. Four radiologists decided subjectively, by mutual consent and based on their experience, whether the ventricles were within normal limits, slightly enlarged, or enlarged for the patient's age. The data for training were obtained from 38 patients. The data for testing were obtained from 47 other patients. The performance of the trained neural network was evaluated by the rate of correct answers on the data for testing. The ratio of valid solutions in the responses of the trained neural networks to the test data was more than 90% for all conditions in this study. The solutions were completely valid in the neural networks with two or three units in the hidden layer with 2,200 learning iterations, and with two units in the hidden layer with 11,000 learning iterations. The squared error decreased remarkably in the range from 0 to 500 learning iterations, and was nearly constant beyond two thousand learning iterations. The neural network with a hidden layer having two or three units showed high decision performance. The preliminary results strongly suggest that the neural network approach has potential utility in computer-aided estimation of enlargement of the ventricles. (author)

  11. Noise Analysis studies with neural networks

    International Nuclear Information System (INIS)

    Seker, S.; Ciftcioglu, O.

    1996-01-01

    Noise analysis studies with neural networks are presented. Stochastic signals at the input of the network are used to obtain an algorithmic multivariate stochastic signal model. To this end, lattice modeling of a stochastic signal is performed to obtain backward residual noise sources which are uncorrelated among themselves. These are applied, together with an additional input, to the network to obtain an algorithmic model which is used for signal detection of early failure in plant monitoring. The additional input provides the information the network needs to minimize the difference between the signal and the network's one-step-ahead prediction. A stochastic algorithm is used for training, in which the errors reflecting the measurement error during training are also modelled so that fast and consistent convergence of the network's weights is obtained. The lattice structure coupled to the neural network is investigated with measured signals from an actual power plant. (authors)

  12. Pre-Trained Neural Networks used for Non-Linear State Estimation

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Andersen, Nils Axel; Ravn, Ole

    2011-01-01

    of the parameters in the distribution. This transformation is approximated by a neural network using offline training, which is based on Monte Carlo sampling. In the paper, a method is also presented to construct flexible distributions well suited for covering the effect of the non-linearities......The paper focuses on nonlinear state estimation assuming non-Gaussian distributions of the states and the disturbances. The posterior distribution and the a posteriori distribution are described by a chosen family of parametric distributions. The state transformation then results in a transformation

  13. Noise-enhanced categorization in a recurrently reconnected neural network

    International Nuclear Information System (INIS)

    Monterola, Christopher; Zapotocky, Martin

    2005-01-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails

  14. Noise-enhanced categorization in a recurrently reconnected neural network

    Science.gov (United States)

    Monterola, Christopher; Zapotocky, Martin

    2005-03-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.

  15. Can surgical simulation be used to train detection and classification of neural networks?

    Science.gov (United States)

    Zisimopoulos, Odysseas; Flouty, Evangello; Stacey, Mark; Muscroft, Sam; Giataganas, Petros; Nehme, Jean; Chow, Andre; Stoyanov, Danail

    2017-10-01

    Computer-assisted interventions (CAI) aim to increase the effectiveness, precision and repeatability of procedures to improve surgical outcomes. The presence and motion of surgical tools are a key information input for CAI surgical phase recognition algorithms. Vision-based tool detection and recognition approaches are an attractive solution and can be designed to take advantage of the powerful deep learning paradigm that is rapidly advancing image recognition and classification. The challenge for such algorithms is the availability and quality of labelled data used for training. In this Letter, surgical simulation is used to train tool detection and segmentation based on deep convolutional neural networks and generative adversarial networks. The authors experiment with two network architectures for image segmentation in tool classes commonly encountered during cataract surgery. A commercially-available simulator is used to create a simulated cataract dataset for training models prior to performing transfer learning on real surgical data. To the best of the authors' knowledge, this is the first attempt to train deep learning models for surgical instrument detection on simulated data while demonstrating promising results to generalise on real data. Results indicate that simulated data does have some potential for training advanced classification methods for CAI systems.

  16. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  17. Direct adaptive control using feedforward neural networks

    OpenAIRE

    Cajueiro, Daniel Oliveira; Hemerly, Elder Moreira

    2003-01-01

    ABSTRACT: This paper proposes a new scheme for direct neural adaptive control that works efficiently employing only one neural network, used for simultaneously identifying and controlling the plant. The idea behind this structure of adaptive control is to compensate the control input obtained by a conventional feedback controller. The neural network training process is carried out by using two different techniques: backpropagation and extended Kalman filter algorithm. Additionally, the conver...

  18. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision made in predicting is very important, because prediction can help determine the forex value at a certain time in the future and thus reduce the risk of loss. The purpose of this research is to predict the forex business using a neural network model with per-minute time series data, in order to determine the prediction accuracy and thereby reduce the risk in running a forex business. The research method comprises data collection followed by training, learning and testing using a neural network. After evaluation, the results of this research show that the application of the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk in running a forex business. Keywords: prediction, forex, neural network.

  19. Application of neural networks to seismic active control

    International Nuclear Information System (INIS)

    Tang, Yu.

    1995-01-01

    An exploratory study on seismic active control using an artificial neural network (ANN) is presented, in which a single-degree-of-freedom (SDF) structural system is controlled by a trained neural network. A feed-forward neural network and the backpropagation training method are used in the study. In backpropagation training, the learning rate is determined by ensuring the decrease of the error function at each training cycle. The training patterns for the neural net are generated randomly. Then, the trained ANN is used to compute the control force according to the control algorithm. The control strategy proposed herein is to apply the control force at every time step to destroy the build-up of the system response. The ground motions considered in the simulations are the N21E and N69W components of the Lake Hughes No. 12 record from the earthquake that occurred in the San Fernando Valley in California on February 9, 1971. A significant reduction of the structural response by one order of magnitude is observed. Also, it is shown that the proposed control strategy has the ability to reduce the peak that occurs during the first few cycles of the time history. These promising results assert the potential of applying ANNs to active structural control under seismic loads

  20. Interacting neural networks

    Science.gov (United States)

    Metzler, R.; Kinzel, W.; Kanter, I.

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as the decision-making algorithm in a model of a closed market (the El Farol Bar problem, or the Minority Game, in which a set of agents who have to make a binary decision is considered); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random.

  1. Solving differential equations with unknown constitutive relations as recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.; Tartakovsky, Alexandre M.

    2017-12-08

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from the recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
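
    A toy version of the idea, with the discretized ODE embedded in the fitting loop and the unknown sink term left as a trainable function, can be sketched as follows. The "learner" here is a three-term polynomial fitted by finite-difference gradient descent on synthetic data, a stand-in for the authors' recurrent TensorFlow formulation.

      import numpy as np

      # Discretised dx/dt = inflow - sink(x); the sink term is trainable.
      dt, steps, inflow = 0.05, 200, 1.0

      def true_sink(x):                        # unknown in practice; only used to make data
          return 0.8 * x / (0.5 + x)

      x_obs = np.zeros(steps)
      for k in range(1, steps):                # forward-Euler "measurements"
          x_obs[k] = x_obs[k - 1] + dt * (inflow - true_sink(x_obs[k - 1]))

      def simulate(theta):                     # same Euler rollout, trainable sink term
          x = np.zeros(steps)
          for k in range(1, steps):
              xk = x[k - 1]
              sink = theta[0] * xk + theta[1] * xk ** 2 + theta[2] * xk ** 3
              x[k] = np.clip(xk + dt * (inflow - sink), -1e3, 1e3)
          return x

      def loss(theta):
          return float(np.mean((simulate(theta) - x_obs) ** 2))

      theta = np.zeros(3)
      for _ in range(400):                     # normalised finite-difference gradient steps
          g = np.array([(loss(theta + 1e-4 * e) - loss(theta - 1e-4 * e)) / 2e-4
                        for e in np.eye(3)])
          theta -= 0.02 * g / (np.linalg.norm(g) + 1e-12)

      print("fitted sink coefficients:", theta.round(3), "loss:", round(loss(theta), 5))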

  2. Neural-network-directed alignment of optical systems using the laser-beam spatial filter as an example

    Science.gov (United States)

    Decker, Arthur J.; Krasowski, Michael J.; Weiland, Kenneth E.

    1993-01-01

    This report describes an effort at NASA Lewis Research Center to use artificial neural networks to automate the alignment and control of optical measurement systems. Specifically, it addresses the use of commercially available neural network software and hardware to direct alignments of the common laser-beam-smoothing spatial filter. The report presents a general approach for designing alignment records and combining these into training sets to teach optical alignment functions to neural networks and discusses the use of these training sets to train several types of neural networks. Neural network configurations used include the adaptive resonance network, the back-propagation-trained network, and the counter-propagation network. This work shows that neural networks can be used to produce robust sequencers. These sequencers can learn by example to execute the step-by-step procedures of optical alignment and also can learn adaptively to correct for environmentally induced misalignment. The long-range objective is to use neural networks to automate the alignment and operation of optical measurement systems in remote, harsh, or dangerous aerospace environments. This work also shows that when neural networks are trained by a human operator, training sets should be recorded, training should be executed, and testing should be done in a manner that does not depend on intellectual judgments of the human operator.

  3. Inversion of a lateral log using neural networks

    International Nuclear Information System (INIS)

    Garcia, G.; Whitman, W.W.

    1992-01-01

    In this paper a technique using neural networks is demonstrated for the inversion of a lateral log. The lateral log is simulated by a finite difference method which in turn is used as an input to a backpropagation neural network. An initial guess earth model is generated from the neural network, which is then input to a Marquardt inversion. The neural network reacts to gross and subtle data features in actual logs and produces a response inferred from the knowledge stored in the network during a training process. The neural network inversion of lateral logs is tested on synthetic and field data. Tests using field data resulted in a final earth model whose simulated lateral is in good agreement with the actual log data

  4. Improved Extension Neural Network and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2014-01-01

    Full Text Available Extension neural network (ENN) is a new neural network that is a combination of extension theory and artificial neural network (ANN). The learning algorithm of ENN is based on supervised learning. One of the important issues in the field of classification and recognition with ENN is how to achieve the best possible classifier with a small number of labeled training data. Training data selection is an effective approach to solve this issue. In this work, in order to improve the supervised learning performance and expand the engineering application range of ENN, we use a novel data selection method based on shadowed sets to refine the training data set of ENN. Firstly, we use a clustering algorithm to label the data and induce shadowed sets. Then, in the framework of shadowed sets, the samples located around each cluster center (core data) and at the borders between clusters (boundary data) are selected as training data. Lastly, we use the selected data to train ENN. Compared with traditional ENN, the proposed improved ENN (IENN) has a better performance. Moreover, IENN is independent of the supervised learning algorithms and initial labeled data. Experimental results verify the effectiveness and applicability of our proposed work.

  5. Reconstruction of neutron spectra through neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2003-01-01

    A neural network has been used to reconstruct neutron spectra starting from the count rates of the detectors of the Bonner sphere spectrometric system. A group of 56 neutron spectra was selected to calculate the count rates that they would produce in a Bonner sphere system; with these data and the spectra the neural network was trained. To test the performance of the net, 12 spectra were used: 6 were taken from the group used for the training, 3 were obtained from mathematical functions, and the other 3 correspond to real spectra. When comparing the original spectra with those reconstructed by the net we find that the net performs poorly when reconstructing monoenergetic spectra; we attribute this to the characteristics of the spectra used for training the neural network. However, for the other groups of spectra the results of the net agree with expectations. (Author)

  6. Training of reverse propagation neural networks applied to neutron dosimetry

    International Nuclear Information System (INIS)

    Hernandez P, C. F.; Martinez B, M. R.; Leon P, A. A.; Espinoza G, J. G.; Castaneda M, V. H.; Solis S, L. O.; Castaneda M, R.; Ortiz R, M.; Vega C, H. R.; Mendez V, R.; Gallego, E.; De Sousa L, M. A.

    2016-10-01

    Neutron dosimetry is of great importance in radiation protection, as it aims to provide dosimetric quantities to assess the magnitude of detrimental health effects due to exposure to neutron radiation. To quantify the detriment to health it is necessary to evaluate the dose received by occupationally exposed personnel using different detection systems called dosimeters, whose responses depend strongly on the energy distribution of the neutrons. Neutron detection is a much more complex problem than the detection of charged particles, since the neutron does not carry an electric charge, does not cause direct ionization and has a greater penetration power, giving it the possibility of interacting with matter in a different way. Because of this, various neutron detection systems have been developed, among which the Bonner spheres spectrometric system stands out due to the advantages it possesses, such as a wide energy range, high sensitivity and easy operation. However, once the count rates are obtained, the problem lies in the neutron spectrum deconvolution, necessary for the calculation of the doses, using different mathematical methods such as Monte Carlo, maximum entropy and iterative methods, among others, which present various difficulties that have motivated the development of new techniques. Nowadays, methods based on artificial intelligence technologies are being used to perform neutron dosimetry, mainly using the theory of artificial neural networks. In these new methods the need for spectrum reconstruction for the calculation of the doses can be eliminated. In this work a back-propagation artificial neural network was trained for the calculation of 15 equivalent doses from the count rates of the Bonner spheres spectrometric system, using a set of 7 spheres, one of 2 spheres and two of a single sphere of different sizes, testing different error values until finding the most appropriate one. The optimum network topology was obtained through the robust design

  7. Neural networks for sensor validation and plant monitoring

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Eryurek, E.; Mathai, G.

    1990-01-01

    Sensor and process monitoring in power plants require the estimation of one or more process variables. Neural network paradigms are suitable for establishing general nonlinear relationships among a set of plant variables. Multiple-input multiple-output autoassociative networks can follow changes in plant-wide behavior. The backpropagation algorithm has been applied for training feedforward networks. A new and enhanced algorithm for training neural networks (BPN) has been developed and implemented in a VAX workstation. Operational data from the Experimental Breeder Reactor-II (EBR-II) have been used to study the performance of BPN. Several results of application to the EBR-II are presented

  8. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    Generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed to reach the minima. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative have continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost with the proposed learning method. Due to the fast convergence ability of the logarithmic cost function, training time is decreased by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement problem of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. A training rule which guarantees finite-region stability for a class of closed-loop neural-network control systems.

    Science.gov (United States)

    Kuntanapreeda, S; Fullmer, R R

    1996-01-01

    A training method for a class of neural network controllers is presented which guarantees closed-loop system stability. The controllers are assumed to be nonlinear, feedforward, sampled-data, full-state regulators implemented as single hidden-layer neural networks. The controlled systems must be locally hermitian and observable. Stability of the closed-loop system is demonstrated by determining a Lyapunov function, which can be used to identify a finite stability region about the regulator point.

  10. Reducing Wind Tunnel Data Requirements Using Neural Networks

    Science.gov (United States)

    Ross, James C.; Jorgenson, Charles C.; Norgaard, Magnus

    1997-01-01

    The use of neural networks to minimize the amount of data required to completely define the aerodynamic performance of a wind tunnel model is examined. The accuracy requirements for commercial wind tunnel test data are very severe and are difficult to reproduce using neural networks. For the current work, multiple input, single output networks were trained using a Levenberg-Marquardt algorithm for each of the aerodynamic coefficients. When applied to the aerodynamics of a 55% scale model of a U.S. Air Force/NASA generic fighter configuration, this scheme provided accurate models of the lift, drag, and pitching-moment coefficients. Using only 50% of the data acquired during the wind tunnel test, the trained neural network had a predictive accuracy equal to or better than the accuracy of the experimental measurements.

  11. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    Science.gov (United States)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

    A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.
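
    In series-parallel form, a NARX model regresses the next output on lagged outputs and lagged exogenous inputs. The sketch below only illustrates that structure on synthetic crank and pressure signals, and trains it with an ordinary feed-forward fit rather than the paper's fully-recurrent RAGD algorithm.

      # Structural sketch of a NARX regressor: predict cylinder pressure p[k] from
      # lagged pressures and lagged crank kinematics. Signals are synthetic
      # stand-ins and training is a plain feed-forward fit, not recurrent RAGD.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(2)
      k = np.arange(4000)
      crank_vel = 100.0 + 5.0 * np.sin(2 * np.pi * k / 720)          # stand-in input
      crank_acc = np.gradient(crank_vel)
      pressure = 1.0 + 0.8 * np.sin(2 * np.pi * k / 720 + 0.3) ** 2   # stand-in target
      pressure = pressure + 0.01 * rng.standard_normal(k.size)

      def narx_matrix(u1, u2, y, nu=3, ny=3):
          """Stack [y[k-ny..k-1], u1[k-nu..k-1], u2[k-nu..k-1]] to predict y[k]."""
          start = max(nu, ny)
          rows = [np.concatenate([y[i-ny:i], u1[i-nu:i], u2[i-nu:i]])
                  for i in range(start, len(y))]
          return np.array(rows), y[start:]

      X, target = narx_matrix(crank_vel, crank_acc, pressure)
      net = MLPRegressor(hidden_layer_sizes=(15,), max_iter=3000, random_state=0)
      net.fit(X[:3000], target[:3000])
      print("one-step-ahead R^2 on held-out samples:", net.score(X[3000:], target[3000:]))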

  12. Neural network application to diesel generator diagnostics

    International Nuclear Information System (INIS)

    Logan, K.P.

    1990-01-01

    Diagnostic problems typically begin with the observation of some system behavior which is recognized as a deviation from the expected. The fundamental underlying process is one involving pattern matching of observed symptoms to a set of compiled symptoms belonging to a fault-symptom mapping. Pattern recognition is often relied upon for initial fault detection and diagnosis. Parallel distributed processing (PDP) models employing neural network paradigms are known to be good pattern recognition devices. This paper describes the application of neural network processing techniques to the malfunction diagnosis of subsystems within a typical diesel generator configuration. Neural network models employing backpropagation learning were developed to correctly recognize fault conditions from the input diagnostic symptom patterns pertaining to various engine subsystems. The resulting network models proved to be excellent pattern recognizers for malfunction examples within the training set. The motivation for employing network models in lieu of a rule-based expert system, however, is related to the network's potential for generalizing malfunctions outside of the training set, as in the case of noisy or partial symptom patterns

  13. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, and the difficulty in building and training the neural network is also reduced. Simulation results for the Logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network

  14. Training and validation of the ATLAS pixel clustering neural networks

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The high centre-of-mass energy of the LHC gives rise to dense environments, such as the core of high-pT jets, in which the charge clusters left by ionising particles in the silicon sensors of the pixel detector can merge, compromising the tracking and vertexing efficiency. To recover optimal performance, a neural network-based approach is used to separate clusters originating from single and multiple particles and to estimate all hit positions within clusters. This note presents the training strategy employed and a set of benchmark performance measurements on a Monte Carlo sample of high-pT dijet events.

  15. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  16. SOLAR PHOTOVOLTAIC OUTPUT POWER FORECASTING USING BACK PROPAGATION NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    B. Jency Paulin

    2016-01-01

    Full Text Available Solar Energy is an important renewable and unlimited source of energy. Solar photovoltaic power forecasting is an estimation of the expected power production that helps grid operators to better manage the electric balance between power demand and supply. A neural network is a computational model that can predict new outcomes from past trends. The artificial neural network is used for photovoltaic plant energy forecasting. The output power of the solar photovoltaic cell is predicted on an hourly basis. In the historical dataset collection process, two datasets were collected and used for analysis. The dataset was provided with three independent attributes and one dependent attribute. The Artificial Neural Network structure is implemented as a Multilayer Perceptron (MLP) and the training procedure for the neural network uses error Back Propagation (BP). In order to train and test the neural network, the datasets are divided in the ratio 70:30. The accuracy of the prediction is assessed using various error measurement criteria, and the performance of the neural network is reported.
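
    The described workflow (three input attributes, one hourly output, a 70:30 split, an MLP trained by back-propagation) can be sketched as follows; the attribute names, data values and network size are illustrative assumptions rather than the study's datasets.

      # Sketch of the workflow: 3 input attributes -> hourly PV power, 70:30
      # train/test split, MLP trained by back-propagation. Synthetic data only.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(3)
      n = 1000
      irradiance = rng.uniform(0, 1000, n)        # assumed attribute 1 (W/m^2)
      temperature = rng.uniform(5, 40, n)         # assumed attribute 2 (deg C)
      hour = rng.integers(0, 24, n)               # assumed attribute 3
      power = 0.2 * irradiance * (1 - 0.004 * (temperature - 25))
      power *= np.clip(np.sin(np.pi * hour / 24), 0, None)
      power += rng.normal(0, 5, n)

      X = np.column_stack([irradiance, temperature, hour])
      X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.3, random_state=0)

      mlp = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
      mlp.fit(X_tr, y_tr)
      print("MAE on the 30% test split:",
            round(mean_absolute_error(y_te, mlp.predict(X_te)), 2))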

  17. Livermore Big Artificial Neural Network Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-01

    LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically it is optimized for low-latency, high bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library that is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  18. Anomaly detection in an automated safeguards system using neural networks

    International Nuclear Information System (INIS)

    Whiteson, R.; Howell, J.A.

    1992-01-01

    An automated safeguards system must be able to detect an anomalous event, identify the nature of the event, and recommend a corrective action. Neural networks represent a new way of thinking about basic computational mechanisms for intelligent information processing. In this paper, we discuss the issues involved in applying a neural network model to the first step of this process: anomaly detection in materials accounting systems. We extend our previous model to a 3-tank problem and compare different neural network architectures and algorithms. We evaluate the computational difficulties in training neural networks and explore how certain design principles affect the problems. The issues involved in building a neural network architecture include how the information flows, how the network is trained, how the neurons in a network are connected, how the neurons process information, and how the connections between neurons are modified. Our approach is based on the demonstrated ability of neural networks to model complex, nonlinear, real-time processes. By modeling the normal behavior of the processes, we can predict how a system should be behaving and, therefore, detect when an abnormality occurs
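
    The underlying approach, modelling normal process behaviour and flagging departures from the prediction, can be sketched as below. A single synthetic signal stands in for the materials-accounting data, and the lag length, network size and alarm threshold are illustrative assumptions.

      # Sketch of anomaly detection by modelling normal behaviour: predict the next
      # value of a process signal from recent history; a large prediction residual
      # flags an anomaly. The signal and threshold are illustrative stand-ins.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(4)
      t = np.arange(3000)
      signal = np.sin(2 * np.pi * t / 200) + 0.02 * rng.standard_normal(t.size)

      lag = 5
      X = np.column_stack([signal[i:len(signal) - lag + i] for i in range(lag)])
      y = signal[lag:]

      model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
      model.fit(X[:2000], y[:2000])                 # train on normal behaviour only

      # Inject an anomaly (e.g. an unexplained inventory jump) into the test region.
      test_signal = signal.copy()
      test_signal[2500:] += 0.5
      X_test = np.column_stack([test_signal[i:len(test_signal) - lag + i] for i in range(lag)])
      residual = np.abs(model.predict(X_test[2000:]) - test_signal[lag:][2000:])
      print("first flagged time indices:", np.where(residual > 0.2)[0][:5] + 2000 + lag)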

  19. Artificial Neural Networks for Nonlinear Dynamic Response Simulation in Mechanical Systems

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Høgsberg, Jan Becker; Winther, Ole

    2011-01-01

    It is shown how artificial neural networks can be trained to predict the dynamic response of a simple nonlinear structure. Data generated using a nonlinear finite element model of a simplified wind turbine is used to train a one-layer artificial neural network. When trained properly, the network is able to perform accurate response prediction much faster than the corresponding finite element model. Initial results indicate a reduction in CPU time by two orders of magnitude.

  20. Seismic signal auto-detecting from different features by using Convolutional Neural Network

    Science.gov (United States)

    Huang, Y.; Zhou, Y.; Yue, H.; Zhou, S.

    2017-12-01

    We apply Convolutional Neural Networks to detect several features of seismic data and compare their efficiency. The features include whether a signal is a seismic signal or noise, and the arrival times of the P and S phases; each feature corresponds to a Convolutional Neural Network. We first use traditional STA/LTA to recognize some events and then use template matching to find more events as a training set for the neural network. To make the training set more varied, we add some noise to the seismic data and generate some synthetic seismic data and noise. The 3-component raw signal and a time-frequency analysis are used as the input data for our neural network. Training is performed on GPUs to achieve efficient convergence. Our method improved the precision in comparison with STA/LTA and template matching. We will move to recurrent neural networks to see if this kind of network is better at detecting the P and S phases.

  1. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
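
    The reduction of decoding to classification can be illustrated on a much smaller code than a surface code: the sketch below trains a feed-forward classifier to map syndromes of the three-qubit bit-flip repetition code to minimum-weight corrections, purely as a toy analogue of the record's approach.

      # Toy illustration of decoding-as-classification: a feed-forward network maps
      # error syndromes to corrections for the 3-qubit bit-flip repetition code.
      # This is far smaller than the surface codes treated in the record.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(5)
      p = 0.1                                     # assumed physical error rate
      H = np.array([[1, 1, 0],                    # parity checks Z1Z2 and Z2Z3
                    [0, 1, 1]])

      # Random bit-flip patterns and their syndromes.
      n = 20000
      errors = (rng.random((n, 3)) < p).astype(int)
      syndromes = errors @ H.T % 2
      # Label = minimum-weight correction for each syndrome:
      # 0 means no flip, 1..3 mean flip qubit 1..3.
      labels = np.array([{(0, 0): 0, (1, 0): 1, (1, 1): 2, (0, 1): 3}[tuple(s)]
                         for s in syndromes])

      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      clf.fit(syndromes, labels)                  # the network learns the decoder table
      print("decoder accuracy on syndromes:", clf.score(syndromes, labels))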

  2. Neural Network Models for Free Radical Polymerization of Methyl Methacrylate

    International Nuclear Information System (INIS)

    Curteanu, S.; Leon, F.; Galea, D.

    2003-01-01

    In this paper, a neural network modeling of the batch bulk methyl methacrylate polymerization is performed. To obtain conversion, number and weight average molecular weights, three neural networks were built. Each was a multilayer perceptron with one or two hidden layers. The choice of network topology, i.e. the number of hidden layers and the number of neurons in these layers, was based on achieving a compromise between precision and complexity. Thus, it was intended to have an error as small as possible at the end of the back-propagation training phases, while using a network with reduced complexity. The performances of the networks were evaluated by comparing network predictions with training data, validation data (which were not used for training), and with the results of a mechanistic model. The accurate predictions of the neural networks for monomer conversion, number average molecular weight and weight average molecular weight prove that this modeling methodology gives a good representation and generalization of the batch bulk methyl methacrylate polymerization. (author)

  3. Iris double recognition based on modified evolutionary neural network

    Science.gov (United States)

    Liu, Shuai; Liu, Yuan-Ning; Zhu, Xiao-Dong; Huo, Guang; Liu, Wen-Tao; Feng, Jia-Kai

    2017-11-01

    Aiming at multicategory iris recognition under illumination and noise interference, this paper proposes a method of iris double recognition based on a modified evolutionary neural network. Histogram equalization and a Laplacian of Gaussian operator are used to process the iris to suppress illumination and noise interference, and a Haar wavelet is used to convert the iris features to a binary feature encoding. The Hamming distance between the test iris and the template iris is calculated and compared with a classification threshold to determine the type of iris. If the iris cannot be identified in this way, a secondary recognition is needed. The connection weights of the back-propagation (BP) neural network are trained adaptively using the modified evolutionary neural network, which is composed of particle swarm optimization with a mutation operator and the BP neural network. Experimental results on different iris libraries under different circumstances show that, under illumination and noise interference, the correct recognition rate of this algorithm is higher, the ROC curve is closer to the coordinate axes, the training and recognition times are shorter, and the stability and the robustness are better.

  4. Classification of conductance traces with recurrent neural networks

    Science.gov (United States)

    Lauritzen, Kasper P.; Magyarkuti, András; Balogh, Zoltán; Halbritter, András; Solomon, Gemma C.

    2018-02-01

    We present a new automated method for structural classification of the traces obtained in break junction experiments. Using recurrent neural networks trained on the traces of minimal cross-sectional area in molecular dynamics simulations, we successfully separate the traces into two classes: point contact or nanowire. This is done without any assumptions about the expected features of each class. The trained neural network is applied to experimental break junction conductance traces, and it separates the classes as well as the previously used experimental methods. The effect of using partial conductance traces is explored, and we show that the method performs equally well using full or partial traces (as long as the trace just prior to breaking is included). When only the initial part of the trace is included, the results are still better than random chance. Finally, we show that the neural network classification method can be used to classify experimental conductance traces without using simulated results for training, but instead training the network on a few representative experimental traces. This offers a tool to recognize some characteristic motifs of the traces, which can be hard to find by simple data selection algorithms.

  5. DCS-Neural-Network Program for Aircraft Control and Testing

    Science.gov (United States)

    Jorgensen, Charles C.

    2006-01-01

    A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm, implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.

  6. Convolutional neural networks based on augmented training samples for synthetic aperture radar target recognition

    Science.gov (United States)

    Yan, Yue

    2018-03-01

    A synthetic aperture radar (SAR) automatic target recognition (ATR) method based on the convolutional neural networks (CNN) trained by augmented training samples is proposed. To enhance the robustness of CNN to various extended operating conditions (EOCs), the original training images are used to generate the noisy samples at different signal-to-noise ratios (SNRs), multiresolution representations, and partially occluded images. Then, the generated images together with the original ones are used to train a designed CNN for target recognition. The augmented training samples can contrapuntally improve the robustness of the trained CNN to the covered EOCs, i.e., the noise corruption, resolution variance, and partial occlusion. Moreover, the significantly larger training set effectively enhances the representation capability for other conditions, e.g., the standard operating condition (SOC), as well as the stability of the network. Therefore, better performance can be achieved by the proposed method for SAR ATR. For experimental evaluation, extensive experiments are conducted on the Moving and Stationary Target Acquisition and Recognition dataset under SOC and several typical EOCs.
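
    The augmentation step, corrupting each original chip with noise at prescribed SNRs and with partial occlusions before CNN training, is sketched below in plain NumPy. The chip size, SNR values and patch size are illustrative, and the CNN itself is omitted.

      # Sketch of the sample-augmentation step: corrupt each original SAR chip with
      # Gaussian noise at a target SNR and with a random occlusion patch.
      import numpy as np

      rng = np.random.default_rng(6)

      def add_noise_at_snr(img, snr_db):
          """Add white Gaussian noise so the result has approximately the given SNR."""
          signal_power = np.mean(img ** 2)
          noise_power = signal_power / (10.0 ** (snr_db / 10.0))
          return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)

      def occlude(img, patch=16):
          """Zero out a randomly placed square patch to simulate partial occlusion."""
          out = img.copy()
          r = rng.integers(0, img.shape[0] - patch)
          c = rng.integers(0, img.shape[1] - patch)
          out[r:r + patch, c:c + patch] = 0.0
          return out

      original = rng.random((64, 64))              # stand-in for one training chip
      augmented = [original]
      augmented += [add_noise_at_snr(original, snr) for snr in (10, 5, 0)]
      augmented += [occlude(original) for _ in range(2)]
      print("training samples generated from one chip:", len(augmented))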

  7. Probing many-body localization with neural networks

    Science.gov (United States)

    Schindler, Frank; Regnault, Nicolas; Neupert, Titus

    2017-06-01

    We show that a simple artificial neural network trained on entanglement spectra of individual states of a many-body quantum system can be used to determine the transition between a many-body localized and a thermalizing regime. Specifically, we study the Heisenberg spin-1/2 chain in a random external field. We employ a multilayer perceptron with a single hidden layer, which is trained on labeled entanglement spectra pertaining to the fully localized and fully thermal regimes. We then apply this network to classify spectra belonging to states in the transition region. For training, we use a cost function that contains, in addition to the usual error and regularization parts, a term that favors a confident classification of the transition region states. The resulting phase diagram is in good agreement with the one obtained by more conventional methods and can be computed for small systems. In particular, the neural network outperforms conventional methods in classifying individual eigenstates pertaining to a single disorder realization. It allows us to map out the structure of these eigenstates across the transition with spatial resolution. Furthermore, we analyze the network operation using the dreaming technique to show that the neural network correctly learns by itself the power-law structure of the entanglement spectra in the many-body localized regime.

  8. Manifold absolute pressure estimation using neural network with hybrid training algorithm.

    Directory of Open Access Journals (Sweden)

    Mohd Taufiq Muslim

    Full Text Available In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is placed in the intake manifold. This paper presents a more economical approach to estimating the MAP by using only the measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network by combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields a better network performance than the first variant of the hybrid algorithm, LM, LM with BR and PSO by estimating the MAP closely to the simulated MAP values. By using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the other algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions by showing a closer MAP estimation to the actual value.

  9. Manifold absolute pressure estimation using neural network with hybrid training algorithm.

    Science.gov (United States)

    Muslim, Mohd Taufiq; Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli

    2017-01-01

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is placed in the intake manifold. This paper presents a more economical approach to estimating the MAP by using only the measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network by combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields a better network performance than the first variant of the hybrid algorithm, LM, LM with BR and PSO by estimating the MAP closely to the simulated MAP values. By using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the other algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions by showing a closer MAP estimation to the actual value.
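
    Of the three ingredients in the hybrid (LM, BR and PSO), the population-based part is the simplest to sketch: a basic global-best particle swarm searching the weight space of a tiny one-hidden-layer network. Everything below (data, network size, swarm parameters) is an illustrative assumption and does not reproduce the paper's two-stage hybrid.

      # Minimal particle-swarm training of a tiny (throttle, speed) -> MAP network.
      # Data, network size, and PSO parameters are illustrative; the paper combines
      # PSO with Levenberg-Marquardt and Bayesian regularization.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 300
      throttle = rng.uniform(0, 1, n)
      speed = rng.uniform(1000, 6000, n) / 6000.0
      map_kpa = 30 + 70 * throttle * speed + rng.normal(0, 1.0, n)   # stand-in target
      X = np.column_stack([throttle, speed])
      y = (map_kpa - map_kpa.mean()) / map_kpa.std()

      n_hidden = 5
      n_weights = 2 * n_hidden + n_hidden + n_hidden + 1   # W1, b1, W2, b2

      def forward(w, X):
          W1 = w[:2 * n_hidden].reshape(2, n_hidden)
          b1 = w[2 * n_hidden:3 * n_hidden]
          W2 = w[3 * n_hidden:4 * n_hidden]
          b2 = w[-1]
          return np.tanh(X @ W1 + b1) @ W2 + b2

      def mse(w):
          return np.mean((forward(w, X) - y) ** 2)

      # Standard global-best PSO over the flattened weight vector.
      n_particles, iters = 30, 200
      pos = rng.normal(0, 1, (n_particles, n_weights))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(iters):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = pos + vel
          vals = np.array([mse(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("final training MSE (normalised target):", round(float(min(pbest_val)), 4))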

  10. Bio-inspired spiking neural network for nonlinear systems control.

    Science.gov (United States)

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNN) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs make use of temporal spike trains to command inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is not satisfactory or difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. SNNs' inherent binary and temporal way of information codification facilitates their hardware implementation compared to analog neurons. Biological neural networks often require a lower number of neurons compared to other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to perform the control of non-linear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNN exhibits superior performance compared to other approaches based on Neural Networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, the artificial neural network (ANN) has made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using genetic algorithms (GA) to optimize the topology of the ANN, but they suffer from some limitations: (1) the computation time required to train the ANN several times to reach the required average weights, (2) the slowness of the GA optimization process and (3) the fitness noise that appears in the optimization of the ANN. This research suggests new approaches to overcome these limitations and find optimal neural network architectures for learning particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case of real complex systems. The proposed system has demonstrated significant performance compared to two common methods used in diagnostic applications.

  12. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation-learning algorithm is modified and it is used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  13. Influence of the Training Set Value on the Quality of the Neural Network to Identify Selected Moulding Sand Properties

    Directory of Open Access Journals (Sweden)

    Jakubski J.

    2013-06-01

    Full Text Available Artificial neural networks are one of the modern methods of production optimisation. An attempt to apply neural networks to controlling the quality of bentonite moulding sands is presented in this paper. This is a method of assessing sand suitability by detecting correlations between individual parameters. This paper presents the next part of the study on the usefulness of artificial neural networks to support the rebonding of green moulding sand, using selected properties of moulding sands which can be determined quickly. The effect of changes in the training set size on the quality of the network is presented in this article. It has been shown that a small change in the data set can change the quality of the network, and may also make it necessary to change the type of network in order to obtain good results.

  14. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
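
    The idea of learning the local error of a coarse numerical step and applying it as a correction can be sketched on a trivial ODE. The example below uses dy/dt = -y with a deliberately large explicit-Euler step instead of the record's Runge-Kutta/molecular-dynamics setting; all choices are illustrative.

      # Sketch of neural-network error correction for a numerical integrator:
      # learn the difference between a coarse explicit-Euler step and the exact
      # one-step solution of dy/dt = -y, then use the network to correct new steps.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      h = 0.5                                   # deliberately coarse step size
      y0 = np.linspace(0.1, 2.0, 200)           # training states
      euler_step = y0 * (1.0 - h)               # coarse integrator step
      exact_step = y0 * np.exp(-h)              # exact propagation over one step
      error = exact_step - euler_step           # local error the network must learn

      net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
      net.fit(y0.reshape(-1, 1), error)

      # Corrected integration of a trajectory starting from y = 1.5.
      y_coarse, y_corrected = 1.5, 1.5
      for _ in range(10):
          y_coarse = y_coarse * (1.0 - h)
          y_corrected = y_corrected * (1.0 - h) + net.predict([[y_corrected]])[0]
      print("after 10 steps  exact:", round(1.5 * np.exp(-h * 10), 4),
            " euler:", round(y_coarse, 4), " corrected:", round(float(y_corrected), 4))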

  15. Electricity price forecast using Combinatorial Neural Network trained by a new stochastic search method

    International Nuclear Information System (INIS)

    Abedinia, O.; Amjady, N.; Shafie-khah, M.; Catalão, J.P.S.

    2015-01-01

    Highlights: • Presenting a Combinatorial Neural Network. • Suggesting a new stochastic search method. • Adapting the suggested method as a training mechanism. • Proposing a new forecast strategy. • Testing the proposed strategy on real-world electricity markets. - Abstract: Electricity price forecast is key information for successful operation of electricity market participants. However, the time series of electricity price has nonlinear, non-stationary and volatile behaviour and so its forecast method should have high learning capability to extract the complex input/output mapping function of electricity price. In this paper, a Combinatorial Neural Network (CNN) based forecasting engine is proposed to predict the future values of price data. The CNN-based forecasting engine is equipped with a new training mechanism for optimizing the weights of the CNN. This training mechanism is based on an efficient stochastic search method, which is a modified version of chemical reaction optimization algorithm, giving high learning ability to the CNN. The proposed price forecast strategy is tested on the real-world electricity markets of Pennsylvania–New Jersey–Maryland (PJM) and mainland Spain and its obtained results are extensively compared with the results obtained from several other forecast methods. These comparisons illustrate effectiveness of the proposed strategy.

  16. Radial basis function neural network for power system load-flow

    International Nuclear Information System (INIS)

    Karami, A.; Mohammadi, M.S.

    2008-01-01

    This paper presents a method for solving the load-flow problem of the electric power systems using radial basis function (RBF) neural network with a fast hybrid training method. The main idea is that some operating conditions (values) are needed to solve the set of non-linear algebraic equations of load-flow by employing an iterative numerical technique. Therefore, we may view the outputs of a load-flow program as functions of the operating conditions. Indeed, we are faced with a function approximation problem and this can be done by an RBF neural network. The proposed approach has been successfully applied to the 10-machine and 39-bus New England test system. In addition, this method has been compared with that of a multi-layer perceptron (MLP) neural network model. The simulation results show that the RBF neural network is a simpler method to implement and requires less training time to converge than the MLP neural network. (author)
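
    The essential RBF construction, Gaussian hidden units with centres drawn from the data and output weights obtained from a single linear least-squares solve (the fast part of the hybrid training), is sketched below on a stand-in function rather than actual load-flow quantities.

      # Sketch of an RBF network with hybrid training: hidden-layer centres chosen
      # from the data, widths fixed, and output weights from linear least squares.
      # The target function stands in for a load-flow mapping.
      import numpy as np

      rng = np.random.default_rng(8)
      X = rng.uniform(-1, 1, (400, 2))                  # stand-in operating conditions
      y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2      # stand-in load-flow output

      n_centres, width = 25, 0.5
      centres = X[rng.choice(len(X), n_centres, replace=False)]

      def design_matrix(X):
          d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
          return np.exp(-d2 / (2.0 * width ** 2))

      # "Fast" part of the training: output weights from a single least-squares solve.
      Phi = design_matrix(X)
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

      X_test = rng.uniform(-1, 1, (100, 2))
      y_test = np.sin(3 * X_test[:, 0]) + 0.5 * X_test[:, 1] ** 2
      rmse = np.sqrt(np.mean((design_matrix(X_test) @ w - y_test) ** 2))
      print("test RMSE of the RBF approximation:", round(float(rmse), 4))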

  17. A pre-trained convolutional neural network based method for thyroid nodule diagnosis.

    Science.gov (United States)

    Ma, Jinlian; Wu, Fa; Zhu, Jiang; Xu, Dong; Kong, Dexing

    2017-01-01

    In ultrasound images, most thyroid nodules have heterogeneous appearances with various internal components and also have vague boundaries, so it is difficult for physicians to discriminate malignant thyroid nodules from benign ones. In this study, we propose a hybrid method for thyroid nodule diagnosis, which is a fusion of two pre-trained convolutional neural networks (CNNs) with different convolutional layers and fully-connected layers. Firstly, the two networks pre-trained with the ImageNet database are separately trained. Secondly, we fuse the feature maps learned by the trained convolutional filters, pooling and normalization operations of the two CNNs. Finally, with the fused feature maps, a softmax classifier is used to diagnose thyroid nodules. The proposed method is validated on 15,000 ultrasound images collected from two local hospitals. Experiment results show that the proposed CNN based methods can accurately and effectively diagnose thyroid nodules. In addition, the fusion of the two CNN based models leads to significant performance improvement, with an accuracy of 83.02%±0.72%. These results demonstrate the potential clinical applications of this method. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Computer interpretation of thallium SPECT studies based on neural network analysis

    Science.gov (United States)

    Wang, David C.; Karvelis, K. C.

    1991-06-01

    A class of artificial intelligence (AI) programs known as neural networks are well suited to pattern recognition. A neural network is trained rather than programmed to recognize patterns. This differs from "expert system" AI programs in that it is not following an extensive set of rules determined by the programmer, but rather bases its decision on a gestalt interpretation of the image. The "bullseye" images from cardiac stress thallium tests performed on 50 male patients, as well as several simulated images were used to train the network. The network was able to accurately classify all patients in the training set. The network was then tested against 50 unknown patients and was able to correctly categorize 77% of the areas of ischemia and 92% of the areas of infarction. While not yet matching the ability of a trained physician, the neural network shows great promise in this area and has potential application in other areas of medical imaging.

  19. Computer interpretation of thallium SPECT studies based on neural network analysis

    International Nuclear Information System (INIS)

    Wang, D.C.; Karvelis, K.C.

    1991-01-01

    This paper reports that a class of artificial intelligence (AI) programs known as neural-networks are well suited to pattern recognition. A neural network is trained rather than programmed to recognize patterns. This differs from expert system AI programs in that it is not following an extensive set of rules determined by the programmer, but rather bases its decision on a gestalt interpretation of the image. The bullseye images from cardiac stress thallium tests performed on 50 male patients, as well as several simulated images were used to train the network. The network was able to accurately classify all patients in the training set. The network was then tested against 50 unknown patients and was able to correctly categorize 77% of the areas of ischemia and 92% of the areas of infarction. While not yet matching the ability of the trained physician, the neural network shows great promise in this area and has potential application in other areas of medical imaging

  20. A fuzzy neural network for sensor signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2000-01-01

    In this work, a fuzzy neural network is used to estimate the relevant sensor signal using other sensor signals. Noise components in input signals into the fuzzy neural network are removed through the wavelet denoising technique. Principal component analysis (PCA) is used to reduce the dimension of an input space without losing a significant amount of information. A lower dimensional input space will also usually reduce the time necessary to train a fuzzy-neural network. Also, principal component analysis makes the selection of the input signals into the fuzzy neural network easier. The fuzzy neural network parameters are optimized by two learning methods. A genetic algorithm is used to optimize the antecedent parameters of the fuzzy neural network and a least-squares algorithm is used to solve the consequent parameters. The proposed algorithm was verified through the application to the pressurizer water level and the hot-leg flowrate measurements in pressurized water reactors
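
    The dimensionality-reduction step, applying PCA to correlated input signals before the neural estimator, can be sketched as follows. A plain MLP stands in for the fuzzy neural network, and the signals are synthetic stand-ins for the pressurizer level and hot-leg flow measurements.

      # Sketch of the preprocessing idea: reduce correlated input signals with PCA
      # before training a neural estimator of a target sensor. A plain MLP stands in
      # for the fuzzy neural network, and all signals are synthetic stand-ins.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(9)
      t = np.linspace(0, 50, 3000)
      state = np.sin(0.3 * t) + 0.3 * np.sin(1.1 * t)
      # Eight correlated input sensors plus one target sensor to be estimated.
      inputs = np.column_stack([state + 0.05 * rng.standard_normal(t.size) for _ in range(8)])
      target = 0.8 * state + 0.05 * rng.standard_normal(t.size)

      pca = PCA(n_components=2)                         # keep dominant components only
      Z = pca.fit_transform(inputs)
      print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

      Z_tr, Z_te, y_tr, y_te = train_test_split(Z, target, test_size=0.25, random_state=0)
      est = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
      est.fit(Z_tr, y_tr)
      print("R^2 of the reduced-input estimator:", round(est.score(Z_te, y_te), 3))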

  1. An artificial neural network for detection of simulated dental caries

    Energy Technology Data Exchange (ETDEWEB)

    Kositbowornchai, S. [Khon Kaen Univ. (Thailand). Dept. of Oral Diagnosis; Siriteptawee, S.; Plermkamon, S.; Bureerat, S. [Khon Kaen Univ. (Thailand). Dept. of Mechanical Engineering; Chetchotsak, D. [Khon Kaen Univ. (Thailand). Dept. of Industrial Engineering

    2006-08-15

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charged-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was the Learning Vector Quantization (LVQ) used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphic user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' an artificial neural network. After the 'training' process, a separate test-set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using a diagnostic test. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth-detection by the CCD camera and digital radiography was 58 and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make the correct interpretations of dental caries. (orig.)

  2. An artificial neural network for detection of simulated dental caries

    International Nuclear Information System (INIS)

    Kositbowornchai, S.; Siriteptawee, S.; Plermkamon, S.; Bureerat, S.; Chetchotsak, D.

    2006-01-01

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charged-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was the Learning Vector Quantization (LVQ) used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphic user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' an artificial neural network. After the 'training' process, a separate test-set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using a diagnostic test. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth-detection by the CCD camera and digital radiography was 58 and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make the correct interpretations of dental caries. (orig.)

  3. Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control

    Science.gov (United States)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1997-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of previous network. The proposed method should work for applications wherein an arbitrary large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  4. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  5. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    Science.gov (United States)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There are many studies on gas turbine engine identification via dynamic neural network models. The identification process should minimize errors between the model and the real object. Questions about the processing of neural network training data sets are usually overlooked. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine. The thermodynamic model input signal is the fuel consumption and the output signal is the engine rotor rotation frequency. Four types of input signals were used for creating training and testing data sets of the dynamic neural network models - step, fast, slow and mixed. Four dynamic neural networks were created based on these types of training data sets. Each neural network was tested using the four types of test data sets. As a result, 16 transient processes from the four neural networks and four test data sets were compared with the corresponding thermodynamic model solutions. The errors of all neural networks were compared for each test data set, showing the range of error values for each set. The error ranges are small, and therefore the influence of the data set type on identification accuracy is low.

  6. UNMANNED AIR VEHICLE STABILIZATION BASED ON NEURAL NETWORK REGULATOR

    Directory of Open Access Journals (Sweden)

    S. S. Andropov

    2016-09-01

    Full Text Available The problem of stabilizing a multirotor unmanned aerial vehicle in an environment with external disturbances is studied. A classic proportional-integral-derivative controller is analyzed and its flaws are outlined: the inability to respond to changing external conditions and the need for manual adjustment of coefficients. The paper presents an adaptive adjustment method for the coefficients of the proportional-integral-derivative controller based on neural networks. The neural network structure and its input and output data are described. Neural networks with three layers are used to create an adaptive stabilization system for the multirotor unmanned aerial vehicle. Training of the networks is done with the back-propagation method. Each neural network produces the regulator coefficients for one stabilization angle as its output. A method for network training is explained. Several graphs of the transition process at different stages of learning, including processes with external disturbances, are presented. It is shown that the system meets stabilization requirements with a sufficient number of iterations. The described adjustment method for the coefficients can be used in the remote control of unmanned aerial vehicles operating in changing environments.

  7. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  8. Feed forward neural networks modeling for K-P interactions

    International Nuclear Information System (INIS)

    El-Bakry, M.Y.

    2003-01-01

    Artificial intelligence techniques involving neural networks have become vital modeling tools where model dynamics are difficult to track with conventional techniques. The paper makes use of feed forward neural networks (FFNN) to model the charged multiplicity distribution of K-P interactions at high energies. The FFNN was trained using experimental data for the multiplicity distributions at different lab momenta. Results of the FFNN model were compared to those generated using the parton two-fireball model and to the experimental data. The proposed FFNN model results showed a good fit to the experimental data. The neural network model performance was also tested outside the training space and was found to be in good agreement with the experimental data

  9. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    Full Text Available Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
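
    The core ELM recipe, random and sparsely masked input weights imitating receptive fields, a nonlinear hidden layer, and output weights from one regularised least-squares solve, can be sketched as below; the data are random stand-ins rather than MNIST or NORB, and the sparsity level and sizes are illustrative.

      # Sketch of the ELM recipe with receptive-field-style sparse input weights:
      # random, mostly-zero input weights, a tanh hidden layer, and output weights
      # from one ridge-regularised least-squares solve. The "images" and labels
      # below are random stand-ins, not MNIST or NORB.
      import numpy as np

      rng = np.random.default_rng(10)
      n, d, n_hidden, n_classes = 2000, 784, 400, 10
      X = rng.random((n, d))
      proj = rng.standard_normal((d, n_classes))
      labels = np.argmax(X @ proj, axis=1)          # a learnable stand-in labelling
      Y = np.eye(n_classes)[labels]                 # one-hot targets

      # Random input weights; zero out ~90% of entries to mimic randomly sized and
      # positioned receptive fields over the image.
      W_in = rng.standard_normal((d, n_hidden))
      W_in *= rng.random((d, n_hidden)) < 0.10
      bias = rng.standard_normal(n_hidden)
      H = np.tanh(X @ W_in + bias)                  # hidden-layer activations

      # ELM "training": solve for the output weights in one regularised step.
      lam = 1e-2
      W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
      pred = np.argmax(H @ W_out, axis=1)
      print("training accuracy on the stand-in task:",
            round(float((pred == labels).mean()), 3))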

  10. Higher-order neural network software for distortion invariant object recognition

    Science.gov (United States)

    Reid, Max B.; Spirkovska, Lilly

    1991-01-01

    The state-of-the-art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network, and does not have to be learned. Only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.

  11. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network

    Directory of Open Access Journals (Sweden)

    M. Fatih Adak

    2016-02-01

    Full Text Available Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different aroma data (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using the artificial bee colony algorithm (ABC), which is known to be successful in exploration. Test data were given to two different neural networks, each of which were trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data.

  12. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network.

    Science.gov (United States)

    Adak, M Fatih; Yumusak, Nejat

    2016-02-27

    Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different aroma data (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using artificial bee colony algorithm (ABC), which is known to be successful in exploration. Test data were given to two different neural networks, each of which were trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data.

  13. Decorrelated Jet Substructure Tagging using Adversarial Neural Networks

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted Z' decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially-trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a paramet...

  14. Transient analysis for PWR reactor core using neural networks predictors

    International Nuclear Information System (INIS)

    Gueray, B.S.

    2001-01-01

    In this study, transient analysis for a Pressurized Water Reactor core has been performed. A lumped parameter approximation is preferred for that purpose, to describe the reactor core together with the mechanisms which play an important role in dynamic analysis. The dynamic behavior of the reactor core during transients is analyzed considering the transient initiating events, which are an essential part of Safety Analysis Reports. Several transients are simulated based on the employed core model. Simulation results are in accord with physical expectations. A neural network is developed to predict the future response of the reactor core in advance. The neural network is trained using the simulation results of a number of representative transients. The structure of the neural network is optimized by proper selection of transfer functions for the neurons. The trained neural network is used to predict the future responses following an early observation of the changes in system variables. The estimated behaviour using the neural network is in good agreement with the simulation results for various types of transients. Results of this study indicate that the designed neural network can be used as an estimator of the time-dependent behavior of the reactor core under transient conditions

  15. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using an artificial neural network are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, together with the current day of the week is presented. The network developed is used for forecasting movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  16. SCYNet. Testing supersymmetric models at the LHC with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip; Belkner, Sebastian; Hamer, Matthias [Universitaet Bonn, Bonn (Germany); Dercks, Daniel [Universitaet Hamburg, Hamburg (Germany); Keller, Tim; Kraemer, Michael; Sarrazin, Bjoern; Schuette-Engel, Jan; Tattersall, Jamie [RWTH Aachen University, Institute for Theoretical Particle Physics and Cosmology, Aachen (Germany)

    2017-10-15

    SCYNet (SUSY Calculating Yield Net) is a tool for testing supersymmetric models against LHC data. It uses neural network regression for a fast evaluation of the profile likelihood ratio. Two neural network approaches have been developed: one network has been trained using the parameters of the 11-dimensional phenomenological Minimal Supersymmetric Standard Model (pMSSM-11) as an input and evaluates the corresponding profile likelihood ratio within milliseconds. It can thus be used in global pMSSM-11 fits without time penalty. In the second approach, the neural network has been trained using model-independent signature-related objects, such as energies and particle multiplicities, which were estimated from the parameters of a given new physics model. (orig.)

  17. Modeling of quasistatic magnetic hysteresis with feed-forward neural networks

    International Nuclear Information System (INIS)

    Makaveev, Dimitre; Dupre, Luc; De Wulf, Marc; Melkebeek, Jan

    2001-01-01

    A modeling technique for rate-independent (quasistatic) scalar magnetic hysteresis is presented, using neural networks. Based on the theory of dynamic systems and the wiping-out and congruency properties of the classical scalar Preisach hysteresis model, the choice of a feed-forward neural network model is motivated. The neural network input parameters at each time step are the corresponding magnetic field strength and memory state, thereby assuring accurate prediction of the change of magnetic induction. For rate-independent hysteresis, the current memory state can be determined by the last extreme magnetic field strength and induction values, kept in memory. The choice of a network training set is motivated and the performance of the network is illustrated for a test set not used during training. Very accurate prediction of both major and minor hysteresis loops is observed, proving that the neural network technique is suitable for hysteresis modeling. © 2001 American Institute of Physics
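
    The input construction described above can be illustrated with a short sketch (a structural illustration with untrained placeholder weights, not the model fitted in the paper): at each step the network receives the current field strength together with the field and induction values stored at the last reversal point, and its output, interpreted here as dB/dH, is integrated along the field waveform.

    import numpy as np

    rng = np.random.default_rng(1)

    # Small feed-forward net: inputs are the current field strength H and the
    # memory state (H, B at the last reversal point).  Weights are random
    # placeholders; the paper trains them on measured quasistatic loops.
    W1, b1 = rng.normal(scale=0.3, size=(3, 12)), np.zeros(12)
    W2, b2 = rng.normal(scale=0.3, size=(12, 1)), np.zeros(1)

    def predict_dBdH(H, H_rev, B_rev):
        h = np.tanh(np.array([H, H_rev, B_rev]) @ W1 + b1)
        return float(h @ W2 + b2)

    def run_loop(H_sequence):
        # Step through a field waveform, updating B and the reversal memory.
        B, H_rev, B_rev, prev_dH = 0.0, 0.0, 0.0, 0.0
        trajectory = []
        for i in range(1, len(H_sequence)):
            H, dH = H_sequence[i], H_sequence[i] - H_sequence[i - 1]
            if prev_dH * dH < 0:            # sign change of dH: a reversal point
                H_rev, B_rev = H_sequence[i - 1], B
            B += predict_dBdH(H, H_rev, B_rev) * dH
            prev_dH = dH
            trajectory.append((H, B))
        return trajectory

    # An ascending branch followed by a minor loop (descending, then re-ascending).
    H_wave = np.concatenate([np.linspace(-1, 1, 50), np.linspace(1, 0.2, 21)[1:],
                             np.linspace(0.2, 1, 21)[1:]])
    print(run_loop(H_wave)[-1])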

  18. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  19. Identification of Abnormal System Noise Temperature Patterns in Deep Space Network Antennas Using Neural Network Trained Fuzzy Logic

    Science.gov (United States)

    Lu, Thomas; Pham, Timothy; Liao, Jason

    2011-01-01

    This paper presents the development of a fuzzy logic function trained by an artificial neural network to classify the system noise temperature (SNT) of antennas in the NASA Deep Space Network (DSN). The SNT data were classified into normal, marginal, and abnormal classes. The irregular SNT pattern was further correlated with link margin and weather data. A reasonably good correlation is detected among high SNT, low link margin and the effect of bad weather; however we also saw some unexpected non-correlations which merit further study in the future.

  20. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    A deep learning approach has been widely applied in sequence modeling problems. In terms of automatic speech recognition (ASR), its performance has been significantly improved by larger speech corpora and deeper neural networks. In particular, recurrent neural networks and deep convolutional neural networks have been applied to ASR successfully. Given the accompanying problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  1. Quantitative analysis of volatile organic compounds using ion mobility spectra and cascade correlation neural networks

    Science.gov (United States)

    Harrington, Peter DEB.; Zheng, Peng

    1995-01-01

    Ion Mobility Spectrometry (IMS) is a powerful technique for trace organic analysis in the gas phase. Quantitative measurements are difficult, because IMS has a limited linear range. Factors that may affect the instrument response are pressure, temperature, and humidity. Nonlinear calibration methods, such as neural networks, may be ideally suited for IMS. Neural networks have the capability of modeling complex systems. Many neural networks suffer from long training times and overfitting. Cascade correlation neural networks train at very fast rates. They also build their own topology, that is, the number of layers and the number of units in each layer. By controlling the decay parameter in training neural networks, reproducible and general models may be obtained.

  2. Application of artificial neural network for medical image recognition and diagnostic decision making

    International Nuclear Information System (INIS)

    Asada, N.; Eiho, S.; Doi, K.; MacMahon, H.; Montner, S.M.; Giger, M.L.

    1989-01-01

    An artificial neural network has been applied for pattern recognition and used as a tool in an expert system. The purpose of this study is to examine the potential usefulness of the neural network approach in medical applications for image recognition and decision making. The authors designed multilayer feedforward neural networks with a back-propagation algorithm for our study. Using first-pass radionuclide ventriculograms, we attempted to identify the right and left ventricles of the heart and the lungs by training the neural network from patterns of time-activity curves. In a preliminary study, the neural network enabled identification of the lungs and heart chambers once the network was trained sufficiently by means of repeated entries of data from the same case

  3. Quantum neural networks: Current status and prospects for development

    Science.gov (United States)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.

  4. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
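
    The project does not commit to a particular reduction technique in this abstract; as one common illustration (an assumption, not necessarily the method used), principal component analysis projects a high-dimensional training set onto two axes that can be plotted directly.

    import numpy as np

    def pca_2d(X):
        # Project rows of X onto their first two principal components.
        Xc = X - X.mean(axis=0)                       # center each input variable
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:2].T                          # (n_samples, 2) coordinates

    # Example: 500 samples with 20 input parameters, only 3 of which matter.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 3))
    X = latent @ rng.normal(size=(3, 20)) + 0.05 * rng.normal(size=(500, 20))
    coords = pca_2d(X)
    print(coords.shape)        # (500, 2) -- ready to scatter-plot and inspect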

  5. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using

  6. Nonlinear adaptive inverse control via the unified model neural network

    Science.gov (United States)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control via a unified model neural network. In order to overcome nonsystematic design and long training times in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method needs less training time to obtain an inverse model. Finally, we apply the proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.

  7. ACO-Initialized Wavelet Neural Network for Vibration Fault Diagnosis of Hydroturbine Generating Unit

    Directory of Open Access Journals (Sweden)

    Zhihuai Xiao

    2015-01-01

    Full Text Available Considering the drawbacks of the traditional wavelet neural network, such as low convergence speed and high sensitivity to initial parameters, an ant colony optimization (ACO)-initialized wavelet neural network is proposed in this paper for vibration fault diagnosis of a hydroturbine generating unit. In this method, parameters of the wavelet neural network are initialized by the ACO algorithm, and then the wavelet neural network is trained by the gradient descent algorithm. Amplitudes of the frequency components of the hydroturbine generating unit vibration signals are used as feature vectors for wavelet neural network training to realize the mapping relationship from vibration features to fault types. A real vibration fault diagnosis case of a hydroturbine generating unit shows that the proposed method has faster convergence speed and stronger generalization ability than the traditional wavelet neural network and the ACO wavelet neural network. Thus it can provide an effective solution for online vibration fault diagnosis of a hydroturbine generating unit.

  8. A neural network model for credit risk evaluation.

    Science.gov (United States)

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.

  9. A stochastic learning algorithm for layered neural networks

    International Nuclear Information System (INIS)

    Bartlett, E.B.; Uhrig, R.E.

    1992-01-01

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
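
    The basic step of the random optimization method can be sketched as follows (a simplified illustration on toy data; the adaptive OPDF search, stratified sampling, and dynamic node architecture described above are not reproduced): a Gaussian search vector perturbs the current weights, and the move is kept only if the training error decreases.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression task for a tiny fixed-architecture network.
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sin(np.pi * X[:, 0]) * X[:, 1]

    H = 8
    w = rng.normal(scale=0.5, size=2 * H + H + H + 1)   # flat parameter vector

    def predict(w, X):
        W1 = w[:2 * H].reshape(2, H)
        b1 = w[2 * H:3 * H]
        W2 = w[3 * H:4 * H]
        b2 = w[-1]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def error(w):
        return np.mean((predict(w, X) - y) ** 2)

    # Basic random optimization: draw a Gaussian search vector, keep the move
    # only if it lowers the training error.
    best = error(w)
    sigma = 0.1
    for step in range(5000):
        candidate = w + rng.normal(scale=sigma, size=w.size)
        e = error(candidate)
        if e < best:
            w, best = candidate, e

    print(f"final training MSE: {best:.4f}")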

  10. Use of neural networks to monitor power plant components

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Tsoukalas, L.H.

    1992-01-01

    A new methodology is presented for nondestructive evaluation (NDE) of check valve performance and degradation. Artificial neural network (ANN) technology is utilized for processing frequency domain signatures of check valves operating in a nuclear power plant (NPP). Acoustic signatures obtained from different locations on a check valve are transformed from the time domain to the frequency domain and then used as input to a pretrained neural network. The neural network has been trained with data sets corresponding to normal operation, therefore establishing a basis for check valve satisfactory performance. Results obtained from the proposed methodology demonstrate the ability of neural networks to perform accurate and quick evaluations of check valve performance

  11. HIV lipodystrophy case definition using artificial neural network modelling

    DEFF Research Database (Denmark)

    Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew

    2003-01-01

    OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS......: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers...... were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently superior performance than logistic regression in terms of specificity, overall accuracy and area under...

  12. TRIGA control rod position and reactivity transient Monitoring by Neural Networks

    International Nuclear Information System (INIS)

    Rosa, R.; Palomba, M.; Sepielli, M.

    2008-01-01

    Plant sensor drift or malfunction and operator actions in nuclear reactor control can be supported by sensor on-line monitoring and data validation through a soft-computing process. On-line recalibration can often avoid manual calibration or replacement of a drifting component. DSP requires prompt response to the modified conditions. Artificial Neural Networks (ANN) and fuzzy logic ensure: prompt response, a link between field measurements and physical system behaviour, interpretation of incoming data, and detection of discrepancies due to mis-calibration or sensor faults. An ANN (Artificial Neural Network) is a system based on the operation of biological neural networks. Although computing is advancing day by day, there are certain tasks that a program made for a common microprocessor is unable to perform. A software implementation of an ANN can be made, with pros and cons. Pros: a neural network can perform tasks that a linear program cannot; when an element of the neural network fails, it can continue without any problem thanks to its parallel nature; a neural network learns and does not need to be reprogrammed; it can be implemented in any application; it can be implemented without any problem. Cons: the architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated; it requires high processing time for large neural networks; and the neural network needs training to operate. Three possibilities of training exist. Supervised learning: the network is trained by providing input and matching output patterns. Unsupervised learning: input patterns are not a priori classified and the system must develop its own representation of the input stimuli. Reinforcement learning: an intermediate form of the above two types of learning, in which the learning machine performs some action on the environment and gets a feedback response from the environment. Two TRIGA ANN applications are considered: control rod position and fuel temperature. The outcome obtained in this

  13. Neural network tagging in a toy model

    International Nuclear Information System (INIS)

    Milek, Marko; Patel, Popat

    1999-01-01

    The purpose of this study is a comparison of the Artificial Neural Network approach to HEP analysis against the traditional methods. A toy model used in this analysis consists of two types of particles defined by four generic properties. A number of 'events' was created according to the model using standard Monte Carlo techniques. Several fully connected, feed-forward, multilayered Artificial Neural Networks were trained to tag the model events. The performance of each network was compared to the standard analysis mechanisms and significant improvement was observed.

  14. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  15. Training-Image Based Geostatistical Inversion Using a Spatial Generative Adversarial Neural Network

    Science.gov (United States)

    Laloy, Eric; Hérault, Romain; Jacques, Diederik; Linde, Niklas

    2018-01-01

    Probabilistic inversion within a multiple-point statistics framework is often computationally prohibitive for high-dimensional problems. To partly address this, we introduce and evaluate a new training-image based inversion approach for complex geologic media. Our approach relies on a deep neural network of the generative adversarial network (GAN) type. After training using a training image (TI), our proposed spatial GAN (SGAN) can quickly generate 2-D and 3-D unconditional realizations. A key characteristic of our SGAN is that it defines a (very) low-dimensional parameterization, thereby allowing for efficient probabilistic inversion using state-of-the-art Markov chain Monte Carlo (MCMC) methods. In addition, available direct conditioning data can be incorporated within the inversion. Several 2-D and 3-D categorical TIs are first used to analyze the performance of our SGAN for unconditional geostatistical simulation. Training our deep network can take several hours. After training, realizations containing a few millions of pixels/voxels can be produced in a matter of seconds. This makes it especially useful for simulating many thousands of realizations (e.g., for MCMC inversion) as the relative cost of the training per realization diminishes with the considered number of realizations. Synthetic inversion case studies involving 2-D steady state flow and 3-D transient hydraulic tomography with and without direct conditioning data are used to illustrate the effectiveness of our proposed SGAN-based inversion. For the 2-D case, the inversion rapidly explores the posterior model distribution. For the 3-D case, the inversion recovers model realizations that fit the data close to the target level and visually resemble the true model well.

  16. Inverting radiometric measurements with a neural network

    Science.gov (United States)

    Measure, Edward M.; Yee, Young P.; Balding, Jeff M.; Watkins, Wendell R.

    1992-02-01

    A neural network scheme for retrieving remotely sensed vertical temperature profiles was applied to observed ground-based radiometer measurements. The neural network used microwave radiance measurements and surface measurements of temperature and pressure as inputs. Because the microwave radiometer is capable of measuring 4 oxygen channels at 5 different elevation angles (9, 15, 25, 40, and 90 degs), 20 microwave measurements are potentially available. Because these measurements have considerable redundancy, a neural network was tested that accepts as inputs microwave measurements taken at 53.88 GHz, 40 deg; 57.45 GHz, 40 deg; and 57.45 GHz, 90 deg. The primary test site was located at White Sands Missile Range (WSMR), NM. Results are compared with measurements made simultaneously with balloon-borne radiosonde instruments and with radiometric temperature retrievals made using more conventional retrieval algorithms. The neural network was trained using a Widrow-Hoff delta rule procedure. Functions of date, to include season dependence in the retrieval process, and functions of time, to include diurnal effects, were used as inputs to the neural network.
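
    The Widrow-Hoff delta rule mentioned above amounts to a per-sample least-mean-squares update of a linear mapping; the sketch below uses synthetic stand-ins for the radiometer channels and the retrieved temperature profile (the real training data are not reproduced here).

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 5 inputs (3 microwave channels, surface T and p) and
    # a 10-level "temperature profile" target that depends linearly on them.
    X = rng.normal(size=(300, 5))
    true_W = rng.normal(size=(5, 10))
    T = X @ true_W + 0.1 * rng.normal(size=(300, 10))

    W = np.zeros((5, 10))
    eta = 0.01                                  # learning rate

    # Widrow-Hoff delta rule: adjust weights in proportion to the input and to
    # the error between target and current linear output, one sample at a time.
    for epoch in range(50):
        for x, t in zip(X, T):
            error = t - x @ W
            W += eta * np.outer(x, error)

    print("mean absolute error:", np.abs(X @ W - T).mean())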

  17. Application of neural networks to waste site screening

    International Nuclear Information System (INIS)

    Dabiri, A.E.; Garrett, M.; Kraft, T.; Hilton, J.; VanHammersveld, M.

    1993-02-01

    Waste site screening requires knowledge of the actual concentrations of hazardous materials and rates of flow around and below the site over time. The present approach consists primarily of drilling boreholes near contaminated sites, chemically analyzing the extracted physical samples, and processing the data. This is expensive and time consuming. The feasibility of using neural network techniques to reduce the cost of waste site screening was investigated. Two neural network techniques, gradient descent back propagation and fully recurrent back propagation, were utilized. The networks were trained with data received from Westinghouse Hanford Corporation. The results indicate that the network trained with the fully recurrent technique shows satisfactory generalization capability. The predicted results are close to the results obtained from a mathematical flow prediction model. It is possible to develop a new tool to predict the waste plume, thus substantially reducing the number of bore sites and samplings. There are a variety of applications for this technique in environmental site screening and remediation. One of the obvious applications would be optimum well siting. A neural network trained from the existing sampling data could be utilized to decide where the best position for the next bore site would be. Other applications are discussed in the report.

  18. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer of neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
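
    The difference between the two kinds of neuron can be shown in a few lines (a minimal sketch; the example numbers are arbitrary): the classical neuron sums products of inputs and weights, while the morphological neuron takes the maximum of input-plus-weight sums, which is already nonlinear before any thresholding.

    import numpy as np

    def classical_neuron(x, w, b):
        # Usual linear neuron: sum of products, plus a bias, before thresholding.
        return np.dot(w, x) + b

    def morphological_neuron(x, w):
        # Morphological neuron: maximum of sums (dilation form; np.min gives erosion).
        return np.max(x + w)

    x = np.array([0.2, 1.5, -0.7])
    w = np.array([0.5, -0.3, 1.0])
    print(classical_neuron(x, w, b=0.0))   # 0.2*0.5 + 1.5*(-0.3) + (-0.7)*1.0
    print(morphological_neuron(x, w))      # max(0.2+0.5, 1.5-0.3, -0.7+1.0)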

  19. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    Directory of Open Access Journals (Sweden)

    Min-Joo Kang

    Full Text Available A novel intrusion detection system (IDS using a deep neural network (DNN is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN, therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN bus.

  20. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    Science.gov (United States)

    Kang, Min-Joo; Kang, Je-Won

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.

  1. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. In contrast to these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding to improve fiber quality. To demonstrate the applicability of this approach, the research was carried out using the evaluation data of 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated by means of a weighted average of the score (1 to 5) determined for each HVI characteristic evaluated, according to industry standards. The artificial neural networks showed a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index; when fiber length was used together with the short fiber index, fiber maturity, and micronaire index, the networks presented better results than when using only fiber length and the previous associations. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better classification of the genotypes. The results obtained in the present study indicate that artificial neural networks have great potential to be used at the different stages of a cotton breeding program aimed at improving the fiber quality of future cultivars.

  2. Design and Implementation of Behavior Recognition System Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Yu Bo

    2017-01-01

    Full Text Available We build a human behavior recognition system based on a convolutional neural network constructed for specific human behaviors in public places. Firstly, videos from the human behavior data set are segmented into images, and the images are processed by background subtraction to extract the moving foreground of the body. Secondly, the training data sets are fed into the designed convolutional neural network, and the deep learning network is trained by stochastic gradient descent. Finally, the various behaviors in the samples are classified and identified with the obtained network model, and the recognition results are compared with current mainstream methods. The results show that the convolutional neural network can learn human behavior models automatically and identify human behaviors without any manually annotated training data.

  3. Disorder generated by interacting neural networks: application to econophysics and cryptography

    International Nuclear Information System (INIS)

    Kinzel, Wolfgang; Kanter, Ido

    2003-01-01

    When neural networks are trained on their own output signals they generate disordered time series. In particular, when two neural networks are trained on their mutual output they can synchronize; they relax to a time-dependent state with identical synaptic weights. Two applications of this phenomenon are discussed for (a) econophysics and (b) cryptography. (a) When agents competing in a closed market (minority game) are using neural networks to make their decisions, the total system relaxes to a state of good performance. (b) Two partners communicating over a public channel can find a common secret key
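
    The key-exchange idea is commonly illustrated with tree parity machines; the sketch below is such an illustration (an assumption about the specific architecture, not code from the paper): two networks receive the same public random inputs, exchange only their output bits, and update with a Hebbian rule whenever the outputs agree, until their weights, and hence the shared secret key, become identical.

    import numpy as np

    rng = np.random.default_rng(0)
    K, N, L = 3, 10, 3           # hidden units, inputs per unit, weight bound

    def tpm_output(W, X):
        # Tree parity machine: product of the signs of the hidden-unit fields.
        sigma = np.sign(np.sum(W * X, axis=1))
        sigma[sigma == 0] = 1
        return sigma, int(np.prod(sigma))

    def hebbian_update(W, X, sigma, tau):
        # Update only the hidden units that agree with the common output.
        for k in range(K):
            if sigma[k] == tau:
                W[k] = np.clip(W[k] + sigma[k] * X[k], -L, L)

    A = rng.integers(-L, L + 1, size=(K, N))
    B = rng.integers(-L, L + 1, size=(K, N))

    steps = 0
    while not np.array_equal(A, B):
        X = rng.choice([-1, 1], size=(K, N))      # public random input
        sa, ta = tpm_output(A, X)
        sb, tb = tpm_output(B, X)
        if ta == tb:                               # both parties reveal only tau
            hebbian_update(A, X, sa, ta)
            hebbian_update(B, X, sb, tb)
        steps += 1

    print(f"synchronized after {steps} exchanged inputs; the weights are the shared key")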

  4. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  5. Neural Networks through Shared Maps in Mobile Devices

    Directory of Open Access Journals (Sweden)

    William Raveane

    2014-12-01

    Full Text Available We introduce a hybrid system composed of a convolutional neural network and a discrete graphical model for image recognition. This system improves upon traditional sliding window techniques for analysis of an image larger than the training data by effectively processing the full input scene through the neural network in less time. The final result is then inferred from the neural network output through energy minimization to reach a more precise localization than what traditional maximum-value class comparisons yield. These results make the process apt for application in a mobile device for real-time image recognition.

  6. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    Notation: NN, neural network; net_i, weighted sum of the inputs of neuron i; o_k, network output at the kth output node; P, total number of training patterns; s_i, output of neuron i; t_k, target output at the kth output node. Severe storms occur in the Bay of Bengal...; earlier neural network applications include forecasting of runoff (Crespo and Mora, 1993) and concrete strength (Kasperkiewicz et al., 1995). The uses of neural networks in coastal... the wave conditions will change from year to year, thus a proper statistical and climatological treatment requires several...

  7. Combined Ozone Retrieval From METOP Sensors Using META-Training Of Deep Neural Networks

    Science.gov (United States)

    Felder, Martin; Sehnke, Frank; Kaifel, Anton

    2013-12-01

    The newest installment of our well-proven Neural Network Ozone Retrieval System (NNORSY) combines the METOP sensors GOME-2 and IASI with cloud information from AVHRR. Through the use of advanced meta-learning techniques like automatic feature selection and automatic architecture search applied to a set of deep neural networks, having at least two or three hidden layers, we have been able to avoid many technical issues normally encountered during the construction of such a joint retrieval system. This has been made possible by harnessing the processing power of modern consumer graphics cards with high performance graphics processors (GPUs), which decreases training times by about two orders of magnitude. The system was trained on data from 2009 and 2010, including target ozone profiles from ozone sondes, ACE-FTS and MLS-AURA. To make maximum use of tropospheric information in the spectra, the data were partitioned into several sets of different cloud fraction ranges within the GOME-2 FOV, on which specialized retrieval networks are being trained. For the final ozone retrieval processing the different specialized networks are combined. The resulting retrieval system is very stable and does not show any systematic dependence on solar zenith angle, scan angle or sensor degradation. We present several sensitivity studies with regard to cloud fraction and target sensor type, as well as the performance in several latitude bands and with respect to independent validation stations. A visual cross-comparison against high-resolution ozone profiles from the KNMI EUMETSAT Ozone SAF product has also been performed and shows some distinctive features which we will briefly discuss. Overall, we demonstrate that a complex retrieval system can now be constructed with a minimum of machine learning knowledge, using automated algorithms for many design decisions previously requiring expert knowledge. Provided sufficient training data and computation power of GPUs is available, the

  8. A TLD dose algorithm using artificial neural networks

    International Nuclear Information System (INIS)

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-01-01

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of the functional link network (FLN). A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input) many times. The algorithm, being trained this way, is eventually capable of producing its own unique solution to similar (but not exactly the same) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD, and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.
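
    The functional link idea can be sketched briefly (a toy illustration; the element responses, dose components, and chosen expansion terms below are invented stand-ins, not the Harshaw 8825 calibration): the raw inputs are expanded with fixed nonlinear terms, after which a single trainable linear layer can represent a nonlinear dose mapping.

    import numpy as np

    rng = np.random.default_rng(0)

    def functional_links(x):
        # Expand raw inputs with fixed nonlinear terms (products, powers).
        x1, x2, x3, x4 = x
        return np.array([1.0, x1, x2, x3, x4,
                         x1 * x2, x1 * x3, x2 * x4, x3 * x4,
                         x1 ** 2, x2 ** 2, x3 ** 2, x4 ** 2])

    # Toy stand-ins: 4 TL element responses -> 3 dose components
    # (deep, shallow, eye).  Real data would come from calibrated exposures.
    X = rng.uniform(0.5, 2.0, size=(200, 4))
    D = np.column_stack([X[:, 0] * X[:, 1], X[:, 2] ** 2, X[:, 1] + X[:, 3]])

    Phi = np.array([functional_links(x) for x in X])
    W, *_ = np.linalg.lstsq(Phi, D, rcond=None)      # train the single linear layer

    print("max abs error:", float(np.abs(Phi @ W - D).max()))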

  9. SYNAPTIC DEPRESSION IN DEEP NEURAL NETWORKS FOR SPEECH PROCESSING.

    Science.gov (United States)

    Zhang, Wenhao; Li, Hanyu; Yang, Minda; Mesgarani, Nima

    2016-03-01

    A characteristic property of biological neurons is their ability to dynamically change the synaptic efficacy in response to variable input conditions. This mechanism, known as synaptic depression, significantly contributes to the formation of normalized representation of speech features. Synaptic depression also contributes to the robust performance of biological systems. In this paper, we describe how synaptic depression can be modeled and incorporated into deep neural network architectures to improve their generalization ability. We observed that when synaptic depression is added to the hidden layers of a neural network, it reduces the effect of changing background activity in the node activations. In addition, we show that when synaptic depression is included in a deep neural network trained for phoneme classification, the performance of the network improves under noisy conditions not included in the training phase. Our results suggest that more complete neuron models may further reduce the gap between the biological performance and artificial computing, resulting in networks that better generalize to novel signal conditions.
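
    One simple way to model the mechanism (a sketch under assumed dynamics, not the paper's exact formulation) is to scale each hidden unit's output by a resource variable that is depleted by the unit's own recent activity and slowly recovers, so that sustained background activity is attenuated while onsets pass through.

    import numpy as np

    def depressed_layer(inputs, W, b, tau=10.0, usage=0.3):
        # Hidden layer whose outputs are scaled by a slowly recovering resource.
        # Each unit's resource r is depleted in proportion to its own activation
        # and recovers towards 1 with time constant tau, so sustained background
        # activity is attenuated.
        T, n_out = len(inputs), W.shape[1]
        r = np.ones(n_out)                      # available synaptic resource
        outputs = np.zeros((T, n_out))
        for t, x in enumerate(inputs):
            a = np.maximum(0.0, x @ W + b)      # ReLU activation of the frame
            outputs[t] = r * a                  # depressed (normalized) output
            r += (1.0 - r) / tau - usage * r * np.tanh(a)
            r = np.clip(r, 0.0, 1.0)
        return outputs

    # A unit driven by constant background input responds strongly at onset and
    # then adapts, which is the normalization effect described in the abstract.
    rng = np.random.default_rng(0)
    W, b = rng.normal(scale=0.5, size=(4, 6)), np.zeros(6)
    frames = np.tile(rng.normal(size=4), (50, 1))     # constant input for 50 frames
    out = depressed_layer(frames, W, b)
    print(out[0].round(2), out[-1].round(2))          # later frames are attenuated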

  10. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  11. Static human face recognition using artificial neural networks

    International Nuclear Information System (INIS)

    Qamar, R.; Shah, S.H.; Javed-ur-Rehman

    2003-01-01

    This paper presents a novel method of human face recognition using digital computers. A digital PC camera is used to take the BMP images of the human faces. An artificial neural network using Back Propagation Algorithm is developed as a recognition engine. The BMP images of the faces serve as the input patterns for this engine. A software 'Face Recognition' has been developed to recognize the human faces for which it is trained. Once the neural network is trained for patterns of the faces, the software is able to detect and recognize them with success rate of about 97%. (author)

  12. Predicting carbonate permeabilities from wireline logs using a back-propagation neural network

    International Nuclear Information System (INIS)

    Wiener, J.M.; Moll, R.F.; Rogers, J.A.

    1991-01-01

    This paper explores the applicability of using neural networks to aid in the determination of carbonate permeability from wireline logs. Resistivity, interval transit time, neutron porosity, and bulk density logs from Texaco's Stockyard Creek oil field were used as input to a specially designed neural network to predict core permeabilities in this carbonate reservoir. Also of interest was the comparison of the neural network's results to those of standard statistical techniques. The process of developing the neural network for this problem has shown that a good understanding of the data is required when creating the training set from which the network learns. This network was trained to learn core permeabilities from raw and transformed log data using a hyperbolic tangent transfer function and a sum-of-squares global error function. Also, it required two hidden layers to solve this particular problem.
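
    The abstract's ingredients, a hyperbolic tangent transfer function, a sum-of-squares error, and two hidden layers, fit a standard back-propagation loop; the sketch below uses synthetic stand-ins for the four logs and the core permeabilities (the Stockyard Creek data are not reproduced here).

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for the four log inputs (resistivity, transit time,
    # neutron porosity, bulk density) and a permeability-like target.
    X = rng.normal(size=(150, 4))
    y = (0.8 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.3 * X[:, 3] ** 2)[:, None]

    H1, H2, eta = 8, 6, 0.05
    W1, b1 = rng.normal(scale=0.3, size=(4, H1)), np.zeros(H1)
    W2, b2 = rng.normal(scale=0.3, size=(H1, H2)), np.zeros(H2)
    W3, b3 = rng.normal(scale=0.3, size=(H2, 1)), np.zeros(1)

    for epoch in range(2000):
        # Forward pass with tanh hidden units and a linear output.
        h1 = np.tanh(X @ W1 + b1)
        h2 = np.tanh(h1 @ W2 + b2)
        out = h2 @ W3 + b3
        err = out - y                                  # sum-of-squares error gradient
        # Backward pass.
        d3 = err / len(X)
        d2 = (d3 @ W3.T) * (1 - h2 ** 2)
        d1 = (d2 @ W2.T) * (1 - h1 ** 2)
        W3 -= eta * h2.T @ d3; b3 -= eta * d3.sum(axis=0)
        W2 -= eta * h1.T @ d2; b2 -= eta * d2.sum(axis=0)
        W1 -= eta * X.T @ d1;  b1 -= eta * d1.sum(axis=0)

    h1 = np.tanh(X @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    print("final SSE:", float(np.sum((h2 @ W3 + b3 - y) ** 2)))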

  13. Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation

    Directory of Open Access Journals (Sweden)

    M. Agatonović

    2012-12-01

    Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector, while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for the networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.

  14. Application of genetic neural network in steam generator fault diagnosing

    International Nuclear Information System (INIS)

    Lin Xiaogong; Jiang Xingwei; Liu Tao; Shi Xiaocheng

    2005-01-01

    In this paper, a new algorithm in which a neural network and a genetic algorithm are combined is adopted, aiming at the problems of slow convergence and easy trapping in local minima in the training of the traditional BP neural network, and it is applied to the fault diagnosis of a steam generator. The results show that this algorithm can solve the convergence problem in network training effectively. (author)
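
    A minimal sketch of such a hybrid follows (toy two-class data standing in for the steam generator fault symptoms; the paper's exact mixing scheme is not reproduced): a small genetic algorithm evolves whole weight vectors to sidestep poor local minima, and the best individual is then refined with ordinary BP-style gradient descent.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-class "fault" data (stand-in for steam generator symptoms).
    X = rng.normal(size=(200, 6))
    y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(float)[:, None]

    H = 8
    DIM = 6 * H + H + H + 1

    def unpack(w):
        return (w[:6 * H].reshape(6, H), w[6 * H:7 * H],
                w[7 * H:8 * H].reshape(H, 1), w[8 * H:])

    def forward(w, X):
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2))), h

    def loss(w):
        p, _ = forward(w, X)
        return float(np.mean((p - y) ** 2))

    # Genetic phase: evolve whole weight vectors to escape poor local minima.
    POP, GENS = 30, 60
    pop = rng.uniform(-1, 1, size=(POP, DIM))
    for _ in range(GENS):
        fit = np.array([loss(ind) for ind in pop])
        parents = pop[np.argsort(fit)[:POP // 2]]          # truncation selection
        children = []
        for _ in range(POP - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(DIM) < 0.5                   # uniform crossover
            children.append(np.where(mask, a, b) + rng.normal(scale=0.05, size=DIM))
        pop = np.vstack([parents, children])

    w = pop[np.argmin([loss(ind) for ind in pop])]

    # BP phase: refine the best individual with plain gradient descent.
    eta = 0.5
    for _ in range(500):
        p, h = forward(w, X)
        W1, b1, W2, b2 = unpack(w)
        d_out = (p - y) * p * (1 - p) / len(X)
        d_hid = (d_out @ W2.T) * (1 - h ** 2)
        grad = np.concatenate([(X.T @ d_hid).ravel(), d_hid.sum(axis=0),
                               (h.T @ d_out).ravel(), d_out.sum(axis=0)])
        w = w - eta * grad

    print("training accuracy:", float(np.mean((forward(w, X)[0] > 0.5) == y)))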

  15. Influence of the Training Methods in the Diagnosis of Multiple Sclerosis Using Radial Basis Functions Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ángel Gutiérrez

    2015-04-01

    Full Text Available The data available in the average clinical study of a disease is very often small. This is one of the main obstacles in the application of neural networks to the classification of biological signals used for diagnosing diseases. A rule of thumb states that the number of parameters (weights) that can be used for training a neural network should be around 15% of the available data, to avoid overlearning. This condition puts a limit on the dimension of the input space. Different authors have used different approaches to solve this problem, like eliminating redundancy in the data, preprocessing the data to find centers for the radial basis functions, or extracting a small number of features that were used as inputs. It is clear that the classification would be better the more features we could feed into the network. The approach utilized in this paper is incrementing the number of training elements with randomly expanding training sets. This way the number of original signals does not constrain the dimension of the input set in the radial basis network. Then we train the network using the method that minimizes the error function with the gradient descent algorithm and the method that uses the particle swarm optimization technique. A comparison between the two methods showed that, for the same number of iterations, the particle swarm optimization was faster but it was learning to recognize only the sick people. The gradient method, on the other hand, was not as fast but in general better at identifying those people.

  16. Pre-trained convolutional neural networks as feature extractors for tuberculosis detection.

    Science.gov (United States)

    Lopes, U K; Valiati, J F

    2017-10-01

    It is estimated that in 2015, approximately 1.8 million people infected by tuberculosis died, most of them in developing countries. Many of those deaths could have been prevented if the disease had been detected at an earlier stage, but the most advanced diagnosis methods are still cost prohibitive for mass adoption. One of the most popular tuberculosis diagnosis methods is the analysis of frontal thoracic radiographs; however, the impact of this method is diminished by the need for individual analysis of each radiography by properly trained radiologists. Significant research can be found on automating diagnosis by applying computational techniques to medical images, thereby eliminating the need for individual image analysis and greatly diminishing overall costs. In addition, recent improvements on deep learning accomplished excellent results classifying images on diverse domains, but its application for tuberculosis diagnosis remains limited. Thus, the focus of this work is to produce an investigation that will advance the research in the area, presenting three proposals to the application of pre-trained convolutional neural networks as feature extractors to detect the disease. The proposals presented in this work are implemented and compared to the current literature. The obtained results are competitive with published works demonstrating the potential of pre-trained convolutional networks as medical image feature extractors. Copyright © 2017 Elsevier Ltd. All rights reserved.
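
    A common way to apply the idea, assuming PyTorch with a recent torchvision and an ImageNet-pretrained ResNet-18 (which may differ from the specific networks compared in the paper), is to drop the classification head and use the pooled activations as fixed features for a lightweight classifier.

    import torch
    import torchvision
    from torchvision import transforms

    # Load an ImageNet-pretrained ResNet-18 and remove its final classification
    # layer, keeping the global-average-pooled 512-dimensional features.
    backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
    feature_extractor.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def extract_features(pil_images):
        # Return one 512-dim feature vector per radiograph (list of PIL images).
        batch = torch.stack([preprocess(img.convert("RGB")) for img in pil_images])
        with torch.no_grad():
            feats = feature_extractor(batch)          # (N, 512, 1, 1)
        return feats.flatten(1).numpy()               # (N, 512)

    # The extracted features can then be fed to any lightweight classifier,
    # e.g. logistic regression or an SVM, trained to separate TB-positive from
    # TB-negative radiographs.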

  17. Practical Application of Neural Networks in State Space Control

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon

    In the present thesis we address some problems in discrete-time state space control of nonlinear dynamical systems and attempt to solve them using generic nonlinear models based on artificial neural networks. The main aim of the work is to examine how well such control algorithms perform when...... theoretic notions followed by a detailed description of the topology, neuron functions and learning rules of the two types of neural networks treated in the thesis, the multilayer perceptron and the neurofuzzy networks. In both cases, a Least Squares second-order gradient method is used to train...... the networks, although some modifications are needed for the method to apply to the multilayer perceptron network. In connection with the multilayer perceptron networks it is also pointed out how instantaneous, sample-by-sample linearized state space models can be extracted from a trained network, thus opening......

  18. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.

  19. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    Directory of Open Access Journals (Sweden)

    Mehrshad Salmasi

    2012-07-01

    Full Text Available Active noise control is based on destructive interference between the primary noise and the noise generated from the secondary source. An anti-noise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks is evaluated in active cancellation of sound noise. To this end, feedforward and recurrent neural networks are designed and trained. After training, the performances of the feedforward and recurrent networks in noise attenuation are compared. We use an Elman network as the recurrent neural network. For the simulations, noise signals from the SPIB database are used. In order to compare the networks appropriately, equal numbers of layers and neurons are considered for the networks. Moreover, the training and test samples are similar. Simulation results show that feedforward and recurrent neural networks present good performance in noise cancellation. As can be seen, the ability of the recurrent neural network in noise attenuation is better than that of the feedforward network.
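
    The recurrent (Elman) architecture can be sketched as follows (a structural illustration with untrained placeholder weights and a synthetic noise signal; the actual SPIB signals and training procedure are not reproduced): the previous hidden state is fed back through context units together with the current noise sample.

    import numpy as np

    rng = np.random.default_rng(0)

    class ElmanNetwork:
        # Simple recurrent (Elman) network: the hidden state at the previous
        # step is fed back through context units alongside the current input.

        def __init__(self, n_in, n_hidden, n_out):
            self.Wx = rng.normal(scale=0.3, size=(n_in, n_hidden))
            self.Wc = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
            self.Wo = rng.normal(scale=0.3, size=(n_hidden, n_out))
            self.context = np.zeros(n_hidden)

        def step(self, x):
            self.context = np.tanh(x @ self.Wx + self.context @ self.Wc)
            return self.context @ self.Wo

    # Feed a noisy reference signal sample by sample; a trained network would
    # output the anti-noise (equal amplitude, opposite phase).  Weights here are
    # untrained placeholders, so only the data flow is illustrated.
    noise = np.sin(0.1 * np.arange(200)) + 0.2 * rng.normal(size=200)
    net = ElmanNetwork(n_in=1, n_hidden=16, n_out=1)
    anti_noise = np.array([net.step(np.array([s]))[0] for s in noise])
    print(anti_noise[:5])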

  20. Optimization of blanking process using neural network simulation

    International Nuclear Information System (INIS)

    Hambli, R.

    2005-01-01

    The present work describes a methodology using the finite element method and neural network simulation in order to predict the optimum punch-die clearance during sheet metal blanking processes. A damage model is used in order to describe crack initiation and propagation into the sheet. The proposed approach combines predictive finite element and neural network modeling of the leading blanking parameters. Numerical results obtained by finite element computation, including damage and fracture modeling, were utilized to train the developed simulation environment based on back propagation neural network modeling. The comparative study between the numerical and experimental results shows good agreement. (author)

  1. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    A large number of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrains, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in this low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, in which an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as intellectual property. This brings challenges to trajectory modeling for sUAVs: how can a vehicle's trajectory be modeled when its control system is unknown? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once being fully trained, given current vehicle states, winds, and desired future trajectory, the neural

  2. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.

  3. Nuclear power plant fault-diagnosis using artificial neural networks

    International Nuclear Information System (INIS)

    Kim, Keehoon; Aljundi, T.L.; Bartlett, E.B.

    1992-01-01

    Artificial neural networks (ANNs) have been applied to various fields due to their fault and noise tolerance and generalization characteristics. As an application to nuclear engineering, we apply neural networks to the early recognition of nuclear power plant operational transients. If a transient or accident occurs, the network will advise the plant operators in a timely manner. More importantly, we investigate the ability of the network to provide a measure of the confidence level in its diagnosis. In this research an ANN is trained to diagnose the status of the San Onofre Nuclear Generation Station using data obtained from the plant's training simulator. Stacked generalization is then applied to predict the error in the ANN diagnosis. The data used consisted of 10 scenarios that include typical design basis accidents as well as less severe transients. The results show that the trained network is capable of diagnosing all 10 instabilities as well as providing a measure of the level of confidence in its diagnoses

  4. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System......The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: 1) Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. 2) Amongst numerous training algorithms, only the Recursive Prediction Error Method using...

  5. Forecasting Flare Activity Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Hernandez, T.

    2017-12-01

    Current operational flare forecasting relies on human morphological analysis of active regions and the persistence of solar flare activity through time (i.e. that the Sun will continue to do what it is doing right now: flaring or remaining calm). In this talk we present the results of applying deep Convolutional Neural Networks (CNNs) to the problem of solar flare forecasting. CNNs operate by training a set of tunable spatial filters that, in combination with neural layer interconnectivity, allow CNNs to automatically identify significant spatial structures predictive for classification and regression problems. We will start by discussing the applicability and success rate of the approach, the advantages it has over non-automated forecasts, and how mining our trained neural network provides a fresh look into the mechanisms behind magnetic energy storage and release.

  6. Input data preprocessing method for exchange rate forecasting via neural network

    Directory of Open Access Journals (Sweden)

    Antić Dragan S.

    2014-01-01

    Full Text Available The aim of this paper is to present a method for the selection and preprocessing of neural network input parameters. The purpose of this network is to forecast foreign exchange rates using artificial intelligence. Two data sets are formed for two different economic systems. Each system is represented by six categories with 70 economic parameters, which are used in the analysis. Reduction of these parameters within each category was performed using the principal component analysis method. Component interdependencies are established and relations between them are formed. The newly formed relations were used to create the input vectors of a neural network. A multilayer feed-forward neural network is formed and trained using batch training. Finally, simulation results are presented and it is concluded that the proposed input data preparation method is an effective way of preprocessing neural network data. [Projects of the Ministry of Science of the Republic of Serbia, nos. TR 35005, III 43007 and III 44006]

  7. Inverse kinematics problem in robotics using neural networks

    Science.gov (United States)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematic problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
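
    A minimal sketch of the same idea, simplified to a 2-link planar arm rather than the paper's 3-DOF spatial manipulator: joint angles are sampled, forward kinematics generates end-effector positions, and an MLP is trained on the inverse mapping. The elbow angle range is restricted so that the inverse mapping stays single-valued.

```python
# Illustrative sketch: train a feedforward network on (end-effector position
# -> joint angles) for a 2-link planar arm, then check the predicted angles
# by running them back through forward kinematics.
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.7
rng = np.random.default_rng(0)

# Sample joint angles (elbow-up only, to keep the inverse mapping single-valued)
theta = np.column_stack([rng.uniform(0, np.pi, 20000),
                         rng.uniform(0, np.pi, 20000)])
x = L1 * np.cos(theta[:, 0]) + L2 * np.cos(theta[:, 0] + theta[:, 1])
y = L1 * np.sin(theta[:, 0]) + L2 * np.sin(theta[:, 0] + theta[:, 1])
positions = np.column_stack([x, y])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(positions, theta)                     # position -> joint angles

pred = net.predict(positions[:1000])          # query the learned inverse map
x_hat = L1 * np.cos(pred[:, 0]) + L2 * np.cos(pred[:, 0] + pred[:, 1])
y_hat = L1 * np.sin(pred[:, 0]) + L2 * np.sin(pred[:, 0] + pred[:, 1])
err = np.hypot(x_hat - positions[:1000, 0], y_hat - positions[:1000, 1])
print("mean end-effector error:", err.mean())
```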

  8. Evaluation and scoring of radiotherapy treatment plans using an artificial neural network

    International Nuclear Information System (INIS)

    Willoughby, Twyla R.; Starkschall, George; Janjan, Nora A.; Rosen, Isaac I.

    1996-01-01

    Purpose: The objective of this work was to demonstrate the feasibility of using an artificial neural network to predict the clinical evaluation of radiotherapy treatment plans. Methods and Materials: Approximately 150 treatment plans were developed for 16 patients who received external-beam radiotherapy for soft-tissue sarcomas of the lower extremity. Plans were assigned a figure of merit by a radiation oncologist using a five-point rating scale. Plan scoring was performed by a single physician to ensure consistency in rating. Dose-volume information extracted from a training set of 511 treatment plans on 14 patients was correlated to the physician-generated figure of merit using an artificial neural network. The neural network was tested with a test set of 19 treatment plans on two patients whose plans were not used in the training of the neural net. Results: Physician scoring of treatment plans was consistent to within one point on the rating scale 88% of the time. The neural net reproduced the physician scores in the training set to within one point approximately 90% of the time. It reproduced the physician scores in the test set to within one point approximately 83% of the time. Conclusions: An artificial neural network can be trained to generate a score for a treatment plan that can be correlated to a clinically-based figure of merit. The accuracy of the neural net in scoring plans compares well with the reproducibility of the clinical scoring. The system of radiotherapy treatment plan evaluation using an artificial neural network demonstrates promise as a method for generating a clinically relevant figure of merit

  9. IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK FOR FACE RECOGNITION USING GABOR FEATURE EXTRACTION

    Directory of Open Access Journals (Sweden)

    Muthukannan K

    2013-11-01

    Full Text Available Face detection and recognition are the first step for many applications in various fields such as identification, and serve as a key to access various electronic devices, video surveillance, human-computer interfaces, and image database management. This paper focuses on feature extraction from an image using a Gabor filter; the extracted image feature vector is then given as input to a neural network, which is trained with this input data. The Gabor wavelet concentrates on the important components of the face, including the eyes, mouth, nose, and cheeks. The main requirement of this technique is the threshold, which determines the sensitivity. The threshold values are the feature vectors taken from the faces. These feature vectors are fed into a feed-forward neural network to train it. Using the feed-forward neural network as a classifier, recognized and unrecognized faces are separated. This classifier attains a high face detection rate, and training with more input vectors makes the system more effective. The effectiveness of the proposed method is demonstrated by the experimental results.

  10. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
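
    Before any neural encoding, the single step operator itself is easy to state in code. The sketch below is a minimal propositional illustration (not the radial basis function construction of the paper): T_p maps an interpretation I to the set of clause heads whose bodies are satisfied by I, and iterating it reaches a fixed point.

```python
# Minimal propositional illustration of the single-step operator T_p: an atom
# is true in T_p(I) iff some clause head has all its positive body atoms in I
# and none of its negative body atoms in I.
def tp(program, interpretation):
    """program: list of (head, positive_body, negative_body) clauses.
    interpretation: set of atoms currently mapped to true."""
    return {head
            for head, pos, neg in program
            if set(pos) <= interpretation and not (set(neg) & interpretation)}

# Example normal logic program:
#   a.          b :- a, not c.          c :- d.
program = [("a", [], []), ("b", ["a"], ["c"]), ("c", ["d"], [])]

# Iterate T_p from the empty interpretation until a fixed point is reached.
I = set()
while True:
    nxt = tp(program, I)
    if nxt == I:
        break
    I = nxt
print(I)   # {'a', 'b'} is the fixed point of T_p for this program
```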

  11. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  12. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  13. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
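
    A simplified, discrete-time sketch of the idea (the paper's construction and its quality guarantee are continuous-time): a feedforward net is fitted to samples of a known vector field, here the Van der Pol oscillator, and is then iterated in closed loop with Euler steps so that it acts as a recurrent simulator of the original dynamics.

```python
# Fit a feedforward net to samples of a vector field dx/dt = f(x), then drive
# the net with its own output (closed loop) so it behaves as a recurrent
# simulator of the original system. Explicit Euler is used for simplicity.
import numpy as np
from sklearn.neural_network import MLPRegressor

def vdp(x, mu=1.0):                       # Van der Pol vector field
    return np.column_stack([x[:, 1], mu * (1 - x[:, 0] ** 2) * x[:, 1] - x[:, 0]])

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20000, 2))   # states sampled over a box
F = vdp(X)                                # corresponding derivatives

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=600, random_state=0)
net.fit(X, F)                             # learn the vector field

dt, x = 0.01, np.array([[2.0, 0.0]])
trajectory = [x.copy()]
for _ in range(3000):
    x = x + dt * net.predict(x)           # the net now drives its own input
    trajectory.append(x.copy())
trajectory = np.vstack(trajectory)
print(trajectory[-1])                     # should roughly follow the limit cycle
```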

  14. Adaptive model predictive process control using neural networks

    Science.gov (United States)

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  15. Artificial Neural Networks to Detect Risk of Type 2 Diabetes | Baha ...

    African Journals Online (AJOL)

    A multilayer feedforward architecture with backpropagation algorithm was designed using Neural Network Toolbox of Matlab. The network was trained using batch mode backpropagation with gradient descent and momentum. Best performed network identified during the training was 2 hidden layers of 6 and 3 neurons, ...

  16. Designing neural networks that process mean values of random variables

    International Nuclear Information System (INIS)

    Barber, Michael J.; Clark, John W.

    2014-01-01

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence

  17. Designing neural networks that process mean values of random variables

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Michael J. [AIT Austrian Institute of Technology, Innovation Systems Department, 1220 Vienna (Austria); Clark, John W. [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Centro de Ciências Matemáticas, Universidade de Madeira, 9000-390 Funchal (Portugal)

    2014-06-13

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence.

  18. Accelerator and feedback control simulation using neural networks

    International Nuclear Information System (INIS)

    Nguyen, D.; Lee, M.; Sass, R.; Shoaee, H.

    1991-05-01

    Unlike present constant-model feedback systems, neural networks can adapt as the dynamics of the process change with time. Using a process model, the ''Accelerator'' network is first trained to simulate the dynamics of the beam for a given beam line. This ''Accelerator'' network is then used to train a second ''Controller'' network, which performs the control function. In simulation, the networks are used to adjust corrector magnets to control the launch angle and position of the beam, keeping it on the desired trajectory when the incoming beam is perturbed. 4 refs., 3 figs

  19. Neural network based model of an industrial oil ...

    African Journals Online (AJOL)

    ... Neural Network Model, Regression, Mean Square Error, PID controller. ... during the training processes. ... used to carry out simulation studies of the model. ... A two-layer feed-forward neural network with Matlab.

  20. Comparison of four Adaboost algorithm based artificial neural networks in wind speed predictions

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei; Zhang, Lei

    2015-01-01

    Highlights: • Four hybrid algorithms are proposed for the wind speed decomposition. • Adaboost algorithm is adopted to provide a hybrid training framework. • MLP neural networks are built to do the forecasting computation. • Four important network training algorithms are included in the MLP networks. • All the proposed hybrid algorithms are suitable for the wind speed predictions. - Abstract: The technology of wind speed prediction is important to guarantee the safety of wind power utilization. In this paper, four different hybrid methods are proposed for high-precision multi-step wind speed predictions based on the Adaboost (Adaptive Boosting) algorithm and MLP (Multilayer Perceptron) neural networks. In the hybrid Adaboost–MLP forecasting architecture, four important algorithms are adopted for the training and modeling of the MLP neural networks, including the GD-ALR-BP algorithm, GDM-ALR-BP algorithm, CG-BP-FR algorithm and BFGS algorithm. The aim of the study is to investigate the forecasting improvement of the MLP neural networks brought by the Adaboost algorithm's optimization under various training algorithms. The hybrid models in the performance comparison include Adaboost–GD-ALR-BP–MLP, Adaboost–GDM-ALR-BP–MLP, Adaboost–CG-BP-FR–MLP, Adaboost–BFGS–MLP, GD-ALR-BP–MLP, GDM-ALR-BP–MLP, CG-BP-FR–MLP and BFGS–MLP. The experimental results show that: (1) the proposed hybrid Adaboost–MLP forecasting architecture is effective for wind speed predictions; (2) the Adaboost algorithm has improved the forecasting performance of the MLP neural networks considerably; (3) among the proposed Adaboost–MLP forecasting models, the Adaboost–CG-BP-FR–MLP model has the best performance; and (4) the improvement of the MLP neural networks by the Adaboost algorithm decreases step by step in the following sequence of training algorithms: GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS
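
    The boosting wrapper can be sketched with a compact AdaBoost.R2-style loop around MLP regressors; the paper's specific MLP training algorithms (GD-ALR-BP, GDM-ALR-BP, CG-BP-FR, BFGS) are not reproduced, scikit-learn's default solver stands in for them, and a weighted mean replaces the usual weighted-median combination.

```python
# AdaBoost.R2-style loop (simplified) around MLP regressors: each round fits a
# network on a weighted bootstrap resample and re-weights samples by their
# relative error, so later networks focus on the hard-to-predict samples.
import numpy as np
from sklearn.neural_network import MLPRegressor

def adaboost_mlp(X, y, n_rounds=5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.full(n, 1.0 / n)                    # sample weights
    models, betas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, p=w)       # weighted bootstrap resample
        m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=800,
                         random_state=0).fit(X[idx], y[idx])
        err = np.abs(m.predict(X) - y)
        err = err / err.max()                  # relative loss in [0, 1]
        eps = np.sum(w * err)                  # weighted average loss
        if eps >= 0.5 and models:              # learner too weak; stop boosting
            break
        eps = min(eps, 0.499)                  # keep beta < 1
        beta = eps / (1 - eps)
        w = w * beta ** (1 - err)              # shrink weights of well-fit samples
        w = w / w.sum()
        models.append(m)
        betas.append(np.log(1 / beta))
    return models, np.array(betas)

def boosted_predict(models, betas, X):
    preds = np.array([m.predict(X) for m in models])
    return np.average(preds, axis=0, weights=betas)   # weighted mean combination

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 3))                 # stand-in wind-speed features
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
models, betas = adaboost_mlp(X, y)
print("boosted training RMSE:",
      np.sqrt(np.mean((boosted_predict(models, betas, X) - y) ** 2)))
```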

  1. Optimal Parameter for the Training of Multilayer Perceptron Neural Networks by Using Hierarchical Genetic Algorithm

    International Nuclear Information System (INIS)

    Orozco-Monteagudo, Maykel; Taboada-Crispi, Alberto; Gutierrez-Hernandez, Liliana

    2008-01-01

    This paper deals with the controversial topic of the selection of the parameters of a genetic algorithm, in this case hierarchical, used for training multilayer perceptron neural networks for binary classification. The parameters to select are the crossover and mutation probabilities of the control and parametric genes and the permanency percent. The results can be considered a guide for using this kind of algorithm.

  2. ACO-Initialized Wavelet Neural Network for Vibration Fault Diagnosis of Hydroturbine Generating Unit

    OpenAIRE

    Xiao, Zhihuai; He, Xinying; Fu, Xiangqian; Malik, O. P.

    2015-01-01

    Considering the drawbacks of traditional wavelet neural network, such as low convergence speed and high sensitivity to initial parameters, an ant colony optimization- (ACO-) initialized wavelet neural network is proposed in this paper for vibration fault diagnosis of a hydroturbine generating unit. In this method, parameters of the wavelet neural network are initialized by the ACO algorithm, and then the wavelet neural network is trained by the gradient descent algorithm. Amplitudes of the fr...

  3. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity

    Directory of Open Access Journals (Sweden)

    Benjamin eDummer

    2014-09-01

    Full Text Available A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from being Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study self-consistent statistics of the input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has similar statistics to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, J. Comp. Neurosci. 2000) and show that in the asynchronous regime close to a state of balanced synaptic input from the network, our iterative schemes provide excellent approximations to the autocorrelation of spike trains in the recurrent network.

  4. Application of neural network technology to setpoint control of a simulated reactor experiment loop

    International Nuclear Information System (INIS)

    Cordes, G.A.; Bryan, S.R.; Powell, R.H.; Chick, D.R.

    1991-01-01

    This paper describes the design, implementation, and application of artificial neural networks to achieve temperature and flow rate control for a simulation of a typical experiment loop in the Advanced Test Reactor (ATR) located at the Idaho National Engineering Laboratory (INEL). The goal of the project was to research multivariate, nonlinear control using neural networks. A loop simulation code was adapted for the project and used to create a training set and test the neural network controller for comparison with the existing loop controllers. The results for the best neural network design are documented and compared with existing loop controller action. The neural network was shown to be as accurate at loop control as the classical controllers in the operating region represented by the training set. 5 refs., 8 figs., 3 tabs

  5. Detecting and diagnosing SSME faults using an autoassociative neural network topology

    Science.gov (United States)

    Ali, M.; Dietz, W. E.; Kiech, E. L.

    1989-01-01

    An effort is underway at the University of Tennessee Space Institute to develop diagnostic expert system methodologies based on the analysis of patterns of behavior of physical mechanisms. In this approach, fault diagnosis is conceptualized as the mapping or association of patterns of sensor data to patterns representing fault conditions. Neural networks are being investigated as a means of storing and retrieving fault scenarios. Neural networks offer several powerful features in fault diagnosis, including (1) general pattern matching capabilities, (2) resistance to noisy input data, (3) the ability to be trained by example, and (4) the potential for implementation on parallel computer architectures. This paper presents (1) an autoassociative neural network topology, i.e. the network input and output are identical when properly trained, and hence learning is unsupervised; (2) the training regimen used; and (3) the response of the system to inputs representing both previously observed and unknown fault scenarios. The effects of noise on the integrity of the diagnosis are also evaluated.
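
    The autoassociative idea can be illustrated with a small autoencoder-style regressor trained on synthetic "nominal" sensor vectors (this is only a sketch of how the topology is used for novelty detection, not the SSME diagnostic system): after training the network to reproduce its input, a large reconstruction error on new data flags an unfamiliar, possibly faulty, state.

```python
# Autoassociative sketch: train a bottlenecked regressor with input == target,
# then use reconstruction error as an anomaly (fault) score.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
nominal = rng.normal(0, 1, size=(5000, 8))                  # stand-in sensor vectors
nominal[:, 3] = 0.5 * nominal[:, 0] + 0.5 * nominal[:, 1]   # correlated channels

auto = MLPRegressor(hidden_layer_sizes=(4,), max_iter=1000, random_state=0)
auto.fit(nominal, nominal)                                  # input == target (unsupervised)

def anomaly_score(x):
    return np.mean((auto.predict(x) - x) ** 2, axis=1)      # reconstruction error

faulty = nominal[:5].copy()
faulty[:, 3] += 4.0                                         # break the learned correlation
print(anomaly_score(nominal[:5]), anomaly_score(faulty))
```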

  6. Neural-Network Control Of Prosthetic And Robotic Hands

    Science.gov (United States)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  7. Gear Fault Diagnosis Based on BP Neural Network

    Science.gov (United States)

    Huang, Yongsheng; Huang, Ruoshi

    2018-03-01

    Gear transmissions are complex and widely used in machinery, and their failure modes exhibit nonlinear characteristics. This paper uses a BP neural network trained on four typical gear failure modes and achieves satisfactory results. When tested with test data, the results agree with the actual conditions. The results show that the BP neural network can effectively handle the complex fault states encountered in gear fault diagnosis.

  8. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using the data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. At the same time, large errors still existed, which indicated that our approach still needs to be improved.

  9. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While many research problems need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering the fact that manufacturers are not willing to share their control systems. This work proposed to use a neural network approach for modeling a small unmanned vehicle's trajectory without knowing its control system and bypassing exhaustive efforts for aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used the trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise because the trained neural network can predict 4D trajectories accurately, and prediction errors were less than 2.0 meters in both temporal and spatial dimensions.

  10. Analysis of wave directional spreading using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Deo, M.C.; Gondane, D.S.; SanilKumar, V.

    describes how a representative spreading parameter could be arrived at from easily available wave parameters such as significant wave height and average zero-cross wave period, using the technique of neural networks. It is shown that training of the network...

  11. Improved algorithms for circuit fault diagnosis based on wavelet packet and neural network

    International Nuclear Information System (INIS)

    Zhang, W-Q; Xu, C

    2008-01-01

    In this paper, two improved BP neural network algorithms for fault diagnosis of analog circuits are presented, using the optimal wavelet packet transform (OWPT) or the incomplete wavelet packet transform (IWPT) as a preprocessor. The purpose of preprocessing is to reduce the number of nodes in the input and hidden layers of the BP neural network, so that the network gains faster training and convergence speed. First, we apply the OWPT or IWPT to the response signal of the circuit under test (CUT), and then calculate the normalized energy of each frequency band. The normalized energy is used to train the BP neural network to diagnose faulty components in the analog circuit. These two algorithms require a small network size while achieving faster learning and convergence. Finally, simulation results illustrate that the two algorithms are effective for fault diagnosis
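
    The preprocessing step can be sketched as follows, assuming the PyWavelets package is available for the wavelet packet transform and using scikit-learn's MLPClassifier as a stand-in for the BP network; the paper's OWPT/IWPT node-selection refinements are not reproduced, only plain band-energy feature extraction.

```python
# Wavelet packet band energies as features for a small MLP fault classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wp_energy_features(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    return energies / energies.sum()      # normalized energy per band (2**level values)

rng = np.random.default_rng(0)
# Stand-in "circuit responses": two fault classes with different dominant frequencies
def make_signal(f):
    t = np.linspace(0, 1, 512)
    return np.sin(2 * np.pi * f * t) + 0.2 * rng.standard_normal(t.size)

X = np.array([wp_energy_features(make_signal(f)) for f in [10] * 50 + [60] * 50])
y = np.array([0] * 50 + [1] * 50)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```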

  12. Development of an accident diagnosis system using a dynamic neural network for nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jong Hyun; Seong, Poong Hyun

    2004-01-01

    In this work, an accident diagnosis system using a dynamic neural network is developed. To help plant operators quickly identify the problem, perform diagnosis and initiate recovery actions ensuring the safety of the plant, many operator support systems and accident diagnosis systems have been developed. Neural networks have been recognized as a good method for implementing an accident diagnosis system. However, conventional accident diagnosis systems that used neural networks did not sufficiently consider the time factor. If the neural network can be trained according to time, it is possible to perform more efficient and detailed accident analysis. Therefore, this work suggests a dynamic neural network which has different features from existing dynamic neural networks, and a simple accident diagnosis system is implemented in order to validate it. After training of the prototype, several accident diagnoses were performed. The results show that the prototype can detect the accidents correctly with good performance

  13. Neutron spectrometry and dosimetry by means of evolutive neural networks

    International Nuclear Information System (INIS)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R.

    2008-01-01

    Artificial neural networks and genetic algorithms are two relatively new areas of research which have been subject to growing interest during the last years. Both models are inspired by nature; however, neural networks are concerned with the learning of a single individual, which is defined as phenotypic learning, while evolutionary algorithms are concerned with the adaptation of a population to a changing environment, which is defined as genotypic learning. Recently, neural network technology has been applied with success in the area of the nuclear sciences, mainly in neutron spectrometry and dosimetry. The structure (network topology) as well as the learning parameters of a neural network are factors that contribute significantly to its performance; however, it has been observed that researchers in this area select the network parameters through trial and error, which produces neural networks of poor performance and low generalization capacity. From the sources reviewed, it has been observed that the use of evolutionary algorithms, seen as search techniques, has made it possible to evolve and optimize different properties of neural networks, such as the initialization of the synaptic weights, the network architecture or the training algorithms, without human intervention. The objective of the present work is to analyze the intersection of neural networks and evolutionary algorithms, examining how the latter can be used to help in the design and training of a neural network, that is, in the proper selection of the structural and learning parameters, improving its generalization capacity so that the network is able to efficiently reconstruct neutron spectra and to calculate equivalent doses starting from the count rates of a Bonner sphere

  14. Photon spectrometry utilizing neural networks

    International Nuclear Information System (INIS)

    Silveira, R.; Benevides, C.; Lima, F.; Vilela, E.

    2015-01-01

    Considering the time spent on the routine work of characterizing the radiation beams used in an ionizing radiation metrology laboratory, the Metrology Service of the Centro Regional de Ciencias Nucleares do Nordeste - CRCN-NE verified the applicability of artificial intelligence (artificial neural networks) to perform spectrometry in photon fields. For this purpose, a multilayer neural network was developed as an application for the classification of energy patterns, associated with a thermoluminescent dosimetric system (TLD-700 and TLD-600). A set of dosimeters was initially exposed to various well-known mean energies, between 40 keV and 1.2 MeV, coinciding with the beams specified by the ISO 4037 standard, for a dose of 10 mSv in the quantity Hp(10), on a chest phantom (ISO slab phantom), with the purpose of generating a set of training data for the neural network. Subsequently, a new set of dosimeters irradiated at unknown energies was presented to the network in order to test the method. The methodology used in this work was suitable for application in the classification of energy beams, achieving 100% correct classification. (authors)

  15. Artificial neural networks for processing fluorescence spectroscopy data in skin cancer diagnostics

    International Nuclear Information System (INIS)

    Lenhardt, L; Zeković, I; Dramićanin, T; Dramićanin, M D

    2013-01-01

    Over the years various optical spectroscopic techniques have been widely used as diagnostic tools in the discrimination of many types of malignant diseases. Recently, synchronous fluorescent spectroscopy (SFS) coupled with chemometrics has been applied in cancer diagnostics. The SFS method involves simultaneous scanning of both emission and excitation wavelengths while keeping the interval of wavelengths (constant-wavelength mode) or frequencies (constant-energy mode) between them constant. This method is fast, relatively inexpensive, sensitive and non-invasive. Total synchronous fluorescence spectra of normal skin, nevus and melanoma samples were used as input for training of artificial neural networks. Two different types of artificial neural networks were trained, the self-organizing map and the feed-forward neural network. Histopathology results of investigated skin samples were used as the gold standard for network output. Based on the obtained classification success rate of neural networks, we concluded that both networks provided high sensitivity with classification errors between 2 and 4%. (paper)

  16. Artificial neural network based approach to transmission lines protection

    International Nuclear Information System (INIS)

    Joorabian, M.

    1999-05-01

    The aim of this paper is to present an accurate fault detection technique for high speed distance protection using artificial neural networks. The feed-forward multi-layer neural network with the use of supervised learning and the common training rule of error back-propagation is chosen for this study. Information available locally at the relay point is passed to a neural network in order for an assessment of the fault location to be made. However in practice there is a large amount of information available, and a feature extraction process is required to reduce the dimensionality of the pattern vectors, whilst retaining important information that distinguishes the fault point. The choice of features is critical to the performance of the neural network's learning and operation. A significant feature in this paper is that an artificial neural network has been designed and tested to enhance the precision of the adaptive capabilities for distance protection

  17. Wind power prediction based on genetic neural network

    Science.gov (United States)

    Zhang, Suhan

    2017-04-01

    The scale of grid-connected wind farms keeps increasing. To ensure the stability of power system operation, make reasonable scheduling schemes and improve the competitiveness of wind farms in the electricity generation market, it is important to accurately forecast short-term wind power. To reduce the influence of the nonlinear relationship between the disturbance factors and the wind power, an improved prediction model based on a genetic algorithm and a neural network is established. To overcome the shortcomings of the BP neural network, namely its long training time and its tendency to fall into local minima, and to improve its accuracy, a genetic algorithm is adopted to optimize the parameters and topology of the neural network. Historical data are used as input to predict short-term wind power. The effectiveness and feasibility of the method are verified using actual data from a wind farm as an example.
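
    A toy sketch of the genetic optimization idea under assumed, hypothetical settings (the paper's encoding, wind data and fitness definition are not reproduced): each individual encodes a hidden-layer size and a learning rate, and its fitness is the validation error of an MLP trained with those hyperparameters.

```python
# Tiny genetic algorithm over MLP hyperparameters: selection keeps the best
# individuals, children combine and mutate the topology and learning rate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(600, 4))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]          # stand-in wind-power-like target
Xtr, ytr, Xva, yva = X[:400], y[:400], X[400:], y[400:]

def fitness(ind):
    hidden, lr = ind
    m = MLPRegressor(hidden_layer_sizes=(int(hidden),), learning_rate_init=lr,
                     max_iter=400, random_state=0).fit(Xtr, ytr)
    return -np.mean((m.predict(Xva) - yva) ** 2)  # higher is better

pop = [(rng.integers(4, 64), 10 ** rng.uniform(-4, -1)) for _ in range(8)]
for gen in range(5):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:4]                          # selection: keep the best half
    children = []
    for _ in range(4):                            # crossover + mutation
        a, b = rng.choice(len(parents), 2, replace=False)
        hidden = parents[a][0] + rng.integers(-4, 5)       # mutate topology
        lr = parents[b][1] * 10 ** rng.uniform(-0.3, 0.3)  # mutate learning rate
        children.append((max(2, int(hidden)), float(lr)))
    pop = parents + children
print("best individual:", max(pop, key=fitness))
```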

  18. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Zenke, Friedemann; Ganguli, Surya

    2018-04-13

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
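
    The surrogate-gradient ingredient at the core of this approach can be illustrated in a few lines of PyTorch (a minimal sketch only; the full three-factor SuperSpike rule and the feedback-alignment comparisons are not reproduced): spikes are produced by a hard threshold in the forward pass, while the backward pass substitutes a smooth fast-sigmoid derivative.

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()
    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output / (1.0 + 10.0 * v.abs()) ** 2

spike_fn = SurrGradSpike.apply
torch.manual_seed(0)

T, n_in, n_hid = 100, 20, 30
w = (0.1 * torch.randn(n_in, n_hid)).requires_grad_()
inputs = (torch.rand(T, n_in) < 0.1).float()      # Poisson-like input spike trains
target_counts = torch.full((n_hid,), 5.0)         # desired output spike counts

opt = torch.optim.Adam([w], lr=0.05)
for epoch in range(200):
    u = torch.zeros(n_hid)                        # membrane potentials
    counts = torch.zeros(n_hid)
    for t in range(T):
        u = 0.9 * u + inputs[t] @ w               # leaky integration of input current
        s = spike_fn(u - 1.0)                     # spike if membrane crosses threshold
        u = u * (1.0 - s.detach())                # reset membrane after a spike
        counts = counts + s
    loss = ((counts - target_counts) ** 2).mean()
    opt.zero_grad()
    loss.backward()                               # gradients flow through the surrogate
    opt.step()
print("final loss:", loss.item())
```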

  19. Adaptive Control of Nonlinear Discrete-Time Systems by Using OS-ELM Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao-Li Li

    2014-01-01

    Full Text Available As a kind of novel feedforward neural network with a single hidden layer, ELM (extreme learning machine) neural networks are studied for the identification and control of nonlinear dynamic systems. The properties of simple structure and fast convergence of ELM can be shown clearly. In this paper, we are interested in adaptive control of nonlinear dynamic plants by using OS-ELM (online sequential extreme learning machine) neural networks. Based on data scope division, the problem that the training process of the ELM neural network is sensitive to the initial training data is also solved. According to the output range of the controlled plant, the data corresponding to this range will be used to initialize the ELM. Furthermore, due to the drawback of conventional adaptive control, when the OS-ELM neural network is used for adaptive control of a system with jumping parameters, the topological structure of the neural network can be adjusted dynamically by using a multiple model switching strategy, and an MMAC (multiple model adaptive control) scheme will be used to improve the control performance. Simulation results are included to complement the theoretical results.
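
    The batch ELM step that OS-ELM builds on is short enough to sketch directly (the online sequential update and the control loop are omitted): hidden-layer weights and biases are drawn at random and never trained, and only the output weights are obtained by least squares.

```python
# Batch extreme learning machine: random hidden layer, least-squares readout.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]     # stand-in plant mapping

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))       # random input weights (never trained)
b = rng.normal(size=n_hidden)                     # random biases

def hidden(X):
    return np.tanh(X @ W + b)                     # hidden-layer activations H

H = hidden(X)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights by least squares

y_hat = hidden(X) @ beta
print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```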

  20. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Gallego D, E.; Lorente F, A.; Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E.

    2011-01-01

    With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, Regularization, Parametrization, Least-squares, and Maximum Entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence technology have been used. Approaches based on genetic algorithms and artificial neural networks (ANNs) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite its advantages, the ANN approach still has some drawbacks, mainly in the design process of the network, e.g. the optimal selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized. In this work, several ANN topologies were trained and tested using ANNs and genetically evolved artificial neural networks with the aim of unfolding neutron spectra using the count rates of a Bonner sphere spectrometer. Here, a comparative study of both procedures has been carried out. (Author)

  1. The Evaluation on Data Mining Methods of Horizontal Bar Training Based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Zhang Yanhui

    2015-01-01

    Full Text Available With the rapid development of science and technology, data analysis has become an indispensable part of people's work and life. Horizontal bar training has multiple categories, and reducing the number of training and competition categories is an emphasis of related research. The application of data mining methods is discussed for the problem of reducing the categories of horizontal bar training. The BP neural network is applied to cluster analysis and principal component analysis, which are used to evaluate horizontal bar training. The two data mining methods are analyzed from two aspects, namely the operational convenience of data mining and the rationality of the results. It turns out that principal component analysis is more suitable for data processing of horizontal bar training.

  2. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

    This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved day time, yearday, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality when compared with persistence judged on a specific performance measure, hit and false-alarm rates.

  3. Inverse problems in eddy current testing using neural network

    Science.gov (United States)

    Yusa, N.; Cheng, W.; Miya, K.

    2000-05-01

    Reconstruction of cracks in conductive materials is one of the most important issues in the field of eddy current testing. Although many attempts to reconstruct cracks have been made, most of them deal only with artificial cracks machined by electro-discharge. However, in the case of natural cracks like stress corrosion cracking or inter-granular attack, there must be a contact region, and therefore their conductivity is not necessarily zero. In this study, an attempt to reconstruct natural cracks using a neural network is presented. The neural network was trained on numerically simulated data obtained by a fast forward solver that calculated unflawed potential data a priori to save computational time. The solver is based on the A-φ method discretized using FEM-BEM. A natural crack was modeled as an area whose conductivity was less than that of the specimen, and the distribution of conductivity in that area was reconstructed as well. Training the network took considerable time, but reconstruction was extremely fast once the network was trained. The well-trained network gave good reconstruction results.

  4. Application of neural networks to signal prediction in nuclear power plant

    International Nuclear Information System (INIS)

    Wan Joo Kim; Soon Heung Chang; Byung Ho Lee

    1993-01-01

    This paper describes a feasibility study of an artificial neural network for signal prediction. The purpose of signal prediction is to estimate the value of the signal at the next, not yet measured, time step. As the prediction method, based on the idea of autoregression, a few previous signal values are fed to the artificial neural network, and the signal value at the next time step is estimated from the network's output. The artificial neural network can be applied to nonlinear systems and responds quickly. The training algorithm is a modified backpropagation model, which can effectively reduce the training time. The target signal of the simulation is the steam generator water level, which is one of the important parameters in nuclear power plants. The simulation results show that the predicted value follows the real trend well

  5. Fault detection and diagnosis for complex multivariable processes using neural networks

    International Nuclear Information System (INIS)

    Weerasinghe, M.

    1998-06-01

    Development of a reliable fault diagnosis method for large-scale industrial plants is laborious and often difficult to achieve due to the complexity of the targeted systems. The main objective of this thesis is to investigate the application of neural networks to the diagnosis of non-catastrophic faults in an industrial nuclear fuel processing plant. The proposed methods were initially developed by application to a simulated chemical process prior to further validation on real industrial data. The diagnosis of faults at a single operating point is first investigated. Statistical data conditioning methods of data scaling and principal component analysis are investigated to facilitate fault classification and reduce the complexity of neural networks. Successful fault diagnosis was achieved with significantly smaller networks than using all process variables as network inputs. Industrial processes often manufacture at various operating points, but demonstrated applications of neural networks for fault diagnosis usually only consider a single (primary) operating point. Developing a standard neural network scheme for fault diagnosis at all operating points would be usually impractical due to the unavailability of suitable training data for less frequently used (secondary) operating points. To overcome this problem, the application of a single neural network for the diagnosis of faults operating at different points is investigated. The data conditioning followed the same techniques as used for the fault diagnosis of a single operating point. The results showed that a single neural network could be successfully used to diagnose faults at operating points other than that it is trained for, and the data conditioning significantly improved the classification. Artificial neural networks have been shown to be an effective tool for process fault diagnosis. However, a main criticism is that details of the procedures taken to reach the fault diagnosis decisions are embedded in

  6. Process identification through modular neural networks and rule extraction (extended abstract)

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.; Blockeel, Hendrik; Denecker, Marc

    2002-01-01

    Monolithic neural networks may be trained from measured data to establish knowledge about the process. Unfortunately, this knowledge is not guaranteed to be found and – if at all – hard to extract. Modular neural networks are better suited for this purpose. Domain-ordered by topology, rule

  7. Optical Calibration Process Developed for Neural-Network-Based Optical Nondestructive Evaluation Method

    Science.gov (United States)

    Decker, Arthur J.

    2004-01-01

    A completely optical calibration process has been developed at Glenn for calibrating a neural-network-based nondestructive evaluation (NDE) method. The NDE method itself detects very small changes in the characteristic patterns or vibration mode shapes of vibrating structures as discussed in many references. The mode shapes or characteristic patterns are recorded using television or electronic holography and change when a structure experiences, for example, cracking, debonds, or variations in fastener properties. An artificial neural network can be trained to be very sensitive to changes in the mode shapes, but quantifying or calibrating that sensitivity in a consistent, meaningful, and deliverable manner has been challenging. The standard calibration approach has been difficult to implement, where the response to damage of the trained neural network is compared with the responses of vibration-measurement sensors. In particular, the vibration-measurement sensors are intrusive, insufficiently sensitive, and not numerous enough. In response to these difficulties, a completely optical alternative to the standard calibration approach was proposed and tested successfully. Specifically, the vibration mode to be monitored for structural damage was intentionally contaminated with known amounts of another mode, and the response of the trained neural network was measured as a function of the peak-to-peak amplitude of the contaminating mode. The neural network calibration technique essentially uses the vibration mode shapes of the undamaged structure as standards against which the changed mode shapes are compared. The published response of the network can be made nearly independent of the contaminating mode, if enough vibration modes are used to train the net. The sensitivity of the neural network can be adjusted for the environment in which the test is to be conducted. The response of a neural network trained with measured vibration patterns for use on a vibration isolation

  8. Neural networks to predict exosphere temperature corrections

    Science.gov (United States)

    Choury, Anna; Bruinsma, Sean; Schaeffer, Philippe

    2013-10-01

    Precise orbit prediction requires a forecast of the atmospheric drag force with a high degree of accuracy. Artificial neural networks are universal approximators derived from artificial intelligence and are widely used for prediction. This paper presents an artificial neural network method for predicting thermosphere density by forecasting the exospheric temperature, which will be used by the semiempirical thermosphere Drag Temperature Model (DTM) currently under development. Artificial neural networks have been shown to be effective and robust forecasting models for temperature prediction. The proposed model can be used for any mission from which temperature can be deduced accurately, i.e., it does not require mission-specific training. Although the primary goal of the study was to create a model for 1 day ahead forecasts, the proposed architecture has been generalized to 2 and 3 day predictions as well. The impact of the artificial neural network predictions was quantified for the low-orbiting satellite Gravity Field and Steady-State Ocean Circulation Explorer in 2011, and orbit errors an order of magnitude smaller were found when compared with orbits propagated using the thermosphere model DTM2009.

  9. Application of particle swarm optimization to identify gamma spectrum with neural network

    International Nuclear Information System (INIS)

    Shi Dongsheng; Di Yuming; Zhou Chunlin

    2007-01-01

    When neural networks are applied to the identification of gamma spectra, the back-propagation (BP) algorithm is usually trapped in a local optimum and converges slowly, whereas particle swarm optimization (PSO) is advantageous for global optimal searching. In this paper, we propose a new algorithm for neural network training that combines BP and PSO, the PSO-BP algorithm. A practical example shows that the new algorithm can overcome the shortcomings of the BP algorithm and that the neural network trained by it generalizes well, with an identification accuracy of 100%. It can be used effectively and reliably to identify gamma spectra. (authors)
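
    A compact sketch of the hybrid idea, assuming a toy one-hidden-layer classifier on synthetic spectra: PSO searches the flattened weight vector globally, then plain gradient descent (back-propagation) fine-tunes from the best particle. All sizes, swarm parameters and learning rates below are illustrative assumptions, not values from the paper.

```python
# Minimal PSO-then-BP sketch for training a one-hidden-layer classifier.
# The toy "spectra" and every hyperparameter below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 16, 8, 3              # e.g. 16 spectrum channels, 3 nuclides

# Toy data: three noisy template "spectra".
templates = rng.random((n_out, n_in))
y = rng.integers(0, n_out, 600)
X = templates[y] + 0.05 * rng.standard_normal((600, n_in))
Y = np.eye(n_out)[y]                        # one-hot targets

n_w = n_in * n_hid + n_hid + n_hid * n_out + n_out   # flattened weight count

def unpack(w):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:]
    return W1, b1, W2, b2

def loss_and_grad(w):
    W1, b1, W2, b2 = unpack(w)
    H = np.tanh(X @ W1 + b1)
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))
    dZ = (P - Y) / len(X)                              # softmax + cross-entropy gradient
    dW2, db2 = H.T @ dZ, dZ.sum(0)
    dH = dZ @ W2.T * (1 - H ** 2)
    dW1, db1 = X.T @ dH, dH.sum(0)
    return loss, np.concatenate([dW1.ravel(), db1, dW2.ravel(), db2])

# --- Phase 1: PSO global search over the flattened weight vector ---
n_part = 30
pos = rng.standard_normal((n_part, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([loss_and_grad(p)[0] for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((2, n_part, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss_and_grad(p)[0] for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

# --- Phase 2: BP (gradient descent) fine-tuning from the PSO solution ---
w = gbest.copy()
for _ in range(500):
    loss, grad = loss_and_grad(w)
    w -= 0.5 * grad
W1, b1, W2, b2 = unpack(w)
pred = np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1)
print("final loss %.4f, training accuracy %.3f" % (loss, (pred == y).mean()))
```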

  10. Container-code recognition system based on computer vision and deep neural networks

    Science.gov (United States)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules, a detection module and a recognition module. The detection module applies both computer-vision-based and neural-network-based algorithms and combines their results to avoid the drawbacks of either method alone. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module produces a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to handle more situations, and the online training mechanism improves the performance of the neural networks at runtime. The proposed system achieves an overall recognition accuracy of 93%.

  11. Alpha spectral analysis via artificial neural networks

    International Nuclear Information System (INIS)

    Kangas, L.J.; Hashem, S.; Keller, P.E.; Kouzes, R.T.; Troyer, G.L.

    1994-10-01

    An artificial neural network system that assigns quality factors to alpha particle energy spectra is discussed. The alpha energy spectra are used to detect plutonium contamination in the work environment. The quality factors represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with a quality factor by an expert and used in training the artificial neural network expert system. The investigation shows that the expert knowledge of alpha spectra quality factors can be transferred to an ANN system

  12. Neural network approach to radiologic lesion detection

    International Nuclear Information System (INIS)

    Newman, F.D.; Raff, U.; Stroud, D.

    1989-01-01

    An area of artificial intelligence that has gained recent attention is the neural network approach to pattern recognition. The authors explore the use of neural networks in radiologic lesion detection with what is known in the literature as the novelty filter. This filter uses a linear model; images of normal patterns become training vectors and are stored as columns of a matrix. An image of an abnormal pattern is introduced and the abnormality or novelty is extracted. A VAX 750 was used to encode the novelty filter, and two experiments have been examined

  13. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined reside on a low-dimensional manifold, and in principle approximating the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation of the function than approximation with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results by considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
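
    A hedged sketch of the approach: estimate a low-dimensional projection (here PCA) from a sparse, uniformly drawn sample of the data, then train the network in the projected space. The synthetic manifold, its dimension and the sample size are assumptions for illustration.

```python
# Sketch: fit a low-dimensional projection on a sparse sample of the data,
# then train the network in the projected space. Data and dimensions are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# High-dimensional inputs that actually live near a 5-dimensional manifold.
latent = rng.uniform(-1, 1, size=(20000, 5))
A = rng.standard_normal((5, 200))
X = latent @ A + 0.01 * rng.standard_normal((20000, 200))
y = np.sin(latent).sum(axis=1)              # target function defined on the manifold

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# Projection estimated from a sparse, uniformly drawn sample of the training data only.
sample = rng.choice(len(X_tr), size=500, replace=False)
proj = PCA(n_components=5).fit(X_tr[sample])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=1)
net.fit(proj.transform(X_tr), y_tr)
print("R^2 in projected space:", net.score(proj.transform(X_te), y_te))
```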

  14. Tracking and vertex finding with drift chambers and neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.

    1991-09-01

    Finding tracks, track vertices and event vertices with neural networks from drift chamber signals is discussed. Simulated feed-forward neural networks have been trained with back-propagation to give track parameters using Monte Carlo simulated tracks in one case and actual experimental data in another. Effects on network performance of limited weight resolution, noise and drift chamber resolution are given. Possible implementations in hardware are discussed. 7 refs., 10 figs

  15. Development of Artificial Neural Network Model for Diesel Fuel Properties Prediction using Vibrational Spectroscopy.

    Science.gov (United States)

    Bolanča, Tomislav; Marinović, Slavica; Ukić, Sime; Jukić, Ante; Rukavina, Vinko

    2012-06-01

    This paper describes the development of artificial neural network models which can be used to correlate and predict diesel fuel properties from several FTIR-ATR absorbances and Raman intensities as input variables. Multilayer feed-forward and radial basis function neural networks have been used for rapid and simultaneous prediction of the cetane number, cetane index, density, viscosity, distillation temperatures at 10% (T10), 50% (T50) and 90% (T90) recovery, and the contents of total aromatics and polycyclic aromatic hydrocarbons of commercial diesel fuels. In this study, two-phase training procedures were applied for the multilayer feed-forward networks. While the first-phase training algorithm was always back-propagation, two second-phase training algorithms were varied and compared, namely conjugate gradient and quasi-Newton. In the case of the radial basis function network, the radial layer was trained using the K-means radial assignment algorithm and three different radial spread algorithms: explicit, isotropic and K-nearest neighbour. The number of hidden layer neurons and the number of experimental data points used for the training set were optimized for both neural networks in order to ensure good predictive ability while reducing unnecessary experimental work. This work shows that the developed artificial neural network models can determine the main properties of diesel fuels simultaneously, based on a single and fast IR or Raman measurement.

  16. Identification of generalized state transfer matrix using neural networks

    International Nuclear Information System (INIS)

    Zhu Changchun

    2001-01-01

    Research on the identification of the generalized state transfer matrix of a linear time-invariant (LTI) system using neural networks based on the Levenberg-Marquardt (LM) algorithm is introduced. Firstly, the generalized state transfer matrix is defined. The relationship between the identification of the state transfer matrix of structural dynamics and the identification of the weight matrix of a neural network is established in theory. A single-layer neural network, which has parallel distributed processing ability and the property of adaptation or learning, is adopted as a powerful tool to obtain the structural parameters. The constraint condition on the weight matrix of the neural network is deduced so that the learning and training of the designed network can be more effective. The identified neural network can be used to simulate the structural response excited by any other signals. To address its further application to practical problems, some noise (5% and 10%) is assumed to be present in the response measurements. Results from computer simulation studies show that this method is valid and feasible

  17. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    Vega-Carrillo, H. R.; Hernandez-Davila, V. M.; Manzanares-Acuna, E.; Mercado, G. A.; Gallego, E.; Lorente, A.; Perales-Munoz, W. A.; Robles-Rodriguez, J. A.

    2006-01-01

    An artificial neural network (ANN) has been designed to obtain neutron doses using only the count rates of a Bonner spheres spectrometer (BSS). Ambient, personal and effective neutron doses were included. One hundred and eighty-one neutron spectra were utilised to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the BSS and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the MATLAB environment. The impact of uncertainties in BSS count rates upon the dose quantities calculated with the ANN was investigated by modifying by ±5% the BSS count rates used in the training set. The use of ANNs in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (authors)
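
    A hedged sketch of the mapping and the ±5% sensitivity check described above, using a randomly generated stand-in for the spectra, response matrix and dose coefficients (the real work uses 181 measured spectra, the UTA4 matrix and fluence-to-dose coefficients).

```python
# Illustrative sketch: map Bonner sphere count rates to a dose quantity with a small
# network, then probe sensitivity to +/-5% perturbations of the count rates.
# The synthetic "response" below is a stand-in, not the UTA4 matrix or real spectra.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_spectra, n_spheres, n_groups = 181, 7, 31
spectra = rng.random((n_spectra, n_groups))     # stand-in re-binned spectra
response = rng.random((n_groups, n_spheres))    # stand-in response matrix
flu_to_dose = rng.random(n_groups)              # stand-in fluence-to-dose coefficients

counts = spectra @ response                     # BSS count rates (network input)
dose = spectra @ flu_to_dose                    # dose quantity (network output)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                   random_state=2))
model.fit(counts, dose)

# Sensitivity check: perturb count rates by +/-5% and compare predicted doses.
for factor in (0.95, 1.05):
    shift = np.mean(np.abs(model.predict(counts * factor) - model.predict(counts)) / dose)
    print(f"mean relative dose change for factor {factor}: {shift:.3%}")
```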

  18. Forecasting solar proton event with artificial neural network

    Science.gov (United States)

    Gong, J.; Wang, J.; Xue, B.; Liu, S.; Zou, Z.

    Solar proton events (SPEs), relatively rare but more frequent around solar maximum, can create hazardous conditions for spacecraft. An SPE always accompanies a flare, which is then called a proton flare. To produce such an eruptive event, a large amount of energy must be accumulated within the active region. We can therefore investigate the character of an active region and its evolving trend, together with other indicators such as centimetre radio emission and the soft X-ray background, to evaluate the potential for an SPE in a chosen area. To capture the precursors of SPEs hidden in the observed active-region parameters, we employed AI technology. A fully connected neural network was chosen for this task. After constructing the network, we trained it with 13 parameters that characterize active regions and their evolution trend. More than 80 sets of event parameters were used to teach the neural network to identify whether an active region had SPE potential. We then tested the model with a database consisting of SPE and non-SPE cases that were not used to train the neural network. The results showed that 75% of the model's decisions were correct.

  19. Selected aspects of modelling of foreign exchange rates with neural networks

    Directory of Open Access Journals (Sweden)

    Václav Mastný

    2005-01-01

    Full Text Available This paper deals with forecasting of the high-frequency foreign exchange market with neural networks. The objective is to investigate how some aspects of modelling with neural networks (the topology, the size of the training set and the time horizon of the forecast) affect the performance of the network. The data used for the purpose of this paper contain 15-minute time series of the US dollar against other major currencies, the Japanese Yen, British Pound and Euro. The results show that the performance of the network, in terms of correctly predicted directional changes, is negatively influenced by an increasing number of hidden neurons and a decreasing size of the training set. The performance of the network is also influenced by the sampling frequency.

  20. Artificial Neural Network L* from different magnetospheric field models

    Science.gov (United States)

    Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.

    2011-12-01

    The third adiabatic invariant L* plays an important role in modeling and understanding radiation belt dynamics. The popular way to numerically obtain the L* value follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique that can compute the L* value in microseconds without losing much accuracy: artificial neural networks. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information needed to trace the drift shell is required. A series of currently popular empirical magnetic field models is applied to create the L* data pool, using 1 million data samples randomly selected within a solar cycle and within the global magnetosphere. The networks, trained on the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters valid within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and can therefore significantly improve the L* calculation. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle-long studies of radiation belt processes. However, neural networks trained from different magnetic field models can result in different L* values, which could cause misinterpretation of radiation belt dynamics, such as where the source of the radiation belt charged particles is and which mechanism is dominant in accelerating the particles. Such a fact calls for attention to cautiously choose a magnetospheric field model for the L* calculation.

  1. Research of convolutional neural networks for traffic sign recognition

    OpenAIRE

    Stadalnikas, Kasparas

    2017-01-01

    In this thesis the application of convolutional neural networks to traffic sign recognition is analyzed. The thesis describes the basic operations and techniques that are commonly used for image classification with convolutional neural networks. It also describes the data sets used for traffic sign recognition and the problems with them that affect the final training results. The paper reviews the most popular existing technologies – frameworks for developing the solution for traffic sign recogni...

  2. Eddy Current Flaw Characterization Using Neural Networks

    International Nuclear Information System (INIS)

    Song, S. J.; Park, H. J.; Shin, Y. K.

    1998-01-01

    Determination of the location, shape and size of a flaw from its eddy current testing signal is one of the fundamental issues in eddy current nondestructive evaluation of steam generator tubes. Here, we propose an approach to this problem: inversion of the eddy current flaw signal using neural networks trained on finite element model-based synthetic signatures. A total of 216 eddy current signals from four different types of axisymmetric flaws in tubes are generated by finite element models whose accuracy is experimentally validated. From each simulated signature, a total of 24 eddy current features are extracted, and among them 13 features are finally selected for flaw characterization. Based on these features, probabilistic neural networks discriminate flaws into four different types according to location and shape, and subsequently back-propagation neural networks determine the size parameters of the discriminated flaw

  3. Liquefaction Microzonation of Babol City Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Choobbasti, A.J.; Barari, Amin

    2012-01-01

    that will be less susceptible to damage during earthquakes. The scope of the present study is to prepare a liquefaction microzonation map for the city of Babol based on the Seed and Idriss (1983) method using an artificial neural network. Artificial neural network (ANN) is one of the artificial intelligence (AI) approaches...... microzonation map is produced for the research area. Based on the obtained results, it can be stated that the trained neural network is capable of predicting liquefaction potential with an acceptable level of confidence. At the end, zoning of the city is carried out based on the prediction of liquefaction...... that can be classified as machine learning. Simplified methods have been practiced by researchers to assess the nonlinear liquefaction potential of soil. In order to address the collective knowledge built up in conventional liquefaction engineering, an alternative general regression neural network model...

  4. Classification of urine sediment based on convolution neural network

    Science.gov (United States)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper removes the constraints of the original convolutional neural network framework, which requires large training samples of the same size. The input images are shifted and cropped to generate sub-images of the same size. Dropout is then applied to the generated sub-images, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-image set are selected randomly, such that every subset contains the same number of elements but no two subsets are identical. The proper subsets are used as inputs to the convolutional neural network. Through the convolutional layers, pooling layers, fully connected layer and output layer, the classification loss rates of the test set and training set are obtained. In a classification experiment on red blood cells, white blood cells and calcium oxalate crystals, a classification accuracy of 97% or more was achieved.

  5. Predicting Student Academic Performance: A Comparison of Two Meta-Heuristic Algorithms Inspired by Cuckoo Birds for Training Neural Networks

    Directory of Open Access Journals (Sweden)

    Jeng-Fung Chen

    2014-10-01

    Full Text Available Predicting student academic performance with a high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need to propose a model that predicts student performance based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach based on the artificial neural network (ANN) with two meta-heuristic algorithms inspired by cuckoo birds and their lifestyle, namely Cuckoo Search (CS) and the Cuckoo Optimization Algorithm (COA), is proposed. In particular, we used previous exam results and other factors, such as the location of the student's high school and the student's gender, as input variables and predicted the student academic performance. The standard CS and standard COA were separately utilized to train the feed-forward network for prediction. The algorithms optimized the weights between layers and the biases of the neural network. The simulation results were then discussed and analyzed to investigate the prediction ability of the neural network trained by these two algorithms. The findings demonstrated that both CS and COA have potential for training ANNs, and ANN-COA obtained slightly better results for predicting student academic performance in this case. It is expected that this work may be used to support student admission procedures and strengthen the service system in educational institutions.

  6. Design of Jetty Piles Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Yongjei Lee

    2014-01-01

    Full Text Available To overcome the complexity of the jetty pile design process, artificial neural networks (ANN) are adopted. To generate the training samples for the ANN, finite element (FE) analysis was performed 50 times for 50 different design cases. The trained ANN was verified with another FE analysis case and then used as a structural analyzer. A multilayer neural network (MBPNN) with two hidden layers was used as the ANN. The MBPNN was defined with the lateral forces on the jetty structure and the type of piles as inputs and the stress ratio of the piles as the output. The results from the MBPNN agree well with those from the FE analysis. Particularly for more complex models with hundreds of different design cases, the MBPNN could substitute for parametric studies with FE analysis, saving design time and cost.

  7. Neural network based approach for tuning of SNS feedback and feedforward controllers

    International Nuclear Information System (INIS)

    Kwon, Sung-Il; Prokop, Mark S.; Regan, Amy H.

    2002-01-01

    The primary controllers in the SNS low level RF system are proportional-integral (PI) feedback controllers. To obtain the best performance of the linac control systems, approximately 91 individual PI controller gains should be optimally tuned. Tuning is time consuming and requires automation. In this paper, a neural network is used for the controller gain tuning. A neural network can approximate any continuous mapping through learning. In a sense, the cavity loop PI controller is a continuous mapping of the tracking error and its one-sample-delay inputs to the controller output. Also, the monotonic cavity output with respect to its input makes knowing the detailed parameters of the cavity unnecessary. Hence the PI controller is a prime candidate for approximation through a neural network. Using mean square error minimization to train the neural network, along with a continuous mapping of appropriate weights, optimally tuned PI controller gains can be determined. The same neural network approximation property is also applied to enhance the adaptive feedforward controller performance. This is done by adjusting the feedforward controller gains, forgetting factor, and learning ratio. Lastly, the automation of the tuning procedure (data measurement, neural network training, tuning, and loading the controller gains to the DSP) is addressed.

  8. Tomographic image reconstruction using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Paschalis, P.; Giokaris, N.D.; Karabarbounis, A.; Loudos, G.K.; Maintas, D.; Papanicolas, C.N.; Spanoudaki, V.; Tsoumpas, Ch.; Stiliaris, E.

    2004-01-01

    A new image reconstruction technique based on the usage of an Artificial Neural Network (ANN) is presented. The most crucial factor in designing such a reconstruction system is the network architecture and the number of the input projections needed to reconstruct the image. Although the training phase requires a large amount of input samples and a considerable CPU time, the trained network is characterized by simplicity and quick response. The performance of this ANN is tested using several image patterns. It is intended to be used together with a phantom rotating table and the γ-camera of IASA for SPECT image reconstruction

  9. Neural network versus classical time series forecasting models

    Science.gov (United States)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANNs) have an advantage in time series forecasting because they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used for data preprocessing.
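
    A hedged sketch of the preprocessing comparison: the same lag-based network is fitted to the raw series and to its Box-Cox transform (with the forecast inverted back to the original scale), and both are scored by MAPE. The synthetic series, the 12-lag window and the network size are assumptions, not the study's data or settings.

```python
# Sketch comparing a lag-based neural network forecast with and without Box-Cox
# preprocessing. The synthetic "gold price" series and lag choice are assumptions.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(3)
t = np.arange(600)
price = (800 * np.exp(0.001 * t) * (1 + 0.02 * np.sin(2 * np.pi * t / 12))
         + rng.normal(0, 5, size=t.size))                # positive, trending series

def lagged(series, n_lags=12):
    # Each row holds the previous n_lags values; the target is the next value.
    X = np.column_stack([series[i:i - n_lags] for i in range(n_lags)])
    return X, series[n_lags:]

def fit_and_predict(series):
    X, y = lagged(series)
    split = int(0.8 * len(y))
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=3)
    net.fit(X[:split], y[:split])
    return net.predict(X[split:]), y[split:]

# Without preprocessing.
pred, actual = fit_and_predict(price)
print("MAPE raw:", mean_absolute_percentage_error(actual, pred))

# With Box-Cox preprocessing (forecast in transformed space, then invert).
transformed, lam = boxcox(price)
pred_t, actual_t = fit_and_predict(transformed)
print("MAPE Box-Cox:", mean_absolute_percentage_error(inv_boxcox(actual_t, lam),
                                                      inv_boxcox(pred_t, lam)))
```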

  10. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and design its structure and learning algorithm. The multilayer feedforward neural network, the diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  11. Identification of illicit drugs by using SOM neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Liang Meiyan; Shen Jingling; Wang Guangqin [Beijing Key Lab for Terahertz Spectroscopy and Imaging, Key Laboratory of Terahertz Optoelectronics, Ministry of Education, Department of Physics, Capital Normal University, Beijing 100037 (China)], E-mail: liangyan661982@163.com, E-mail: jinglingshen@gmail.com, E-mail: pywgq2004@163.com

    2008-07-07

    Absorption spectra of six illicit drugs were measured by using the terahertz time-domain spectroscopy technique in the range 0.2-2.6 THz and then clustered with self-organization feature map (SOM) artificial neural network. After the network training process, the spectra collected at another time were identified successfully by the well-trained SOM network. An effective distance was introduced as a quantitative criterion to decide which cluster the new spectra were affiliated with.
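
    A minimal from-scratch sketch of the clustering step: a small self-organizing map is trained on toy spectra and a new spectrum is assigned by its distance to the best-matching node, echoing the paper's use of a distance criterion. The grid size, learning schedule and toy spectra are assumptions.

```python
# Minimal self-organizing map sketch for clustering absorption spectra and assigning
# new spectra by distance to the best-matching unit. The toy spectra are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_channels, grid = 60, (4, 4)

# Toy "drug" spectra: six reference shapes plus noisy repeats.
refs = rng.random((6, n_channels))
train = np.repeat(refs, 20, axis=0) + 0.02 * rng.standard_normal((120, n_channels))

# Initialize SOM weights and the grid coordinates of each node.
weights = rng.random((grid[0] * grid[1], n_channels))
coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)

def train_som(data, weights, epochs=50, lr0=0.5, sigma0=2.0):
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                           # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 0.5               # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))                 # neighbourhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

weights = train_som(train, weights)

# Distance-based assignment of a new spectrum to its best-matching node; a threshold
# on this distance would play the role of the paper's "effective distance" criterion.
new = refs[2] + 0.02 * rng.standard_normal(n_channels)
dists = np.linalg.norm(weights - new, axis=1)
print("best-matching node:", dists.argmin(), "distance:", dists.min())
```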

  12. Identification of illicit drugs by using SOM neural networks

    International Nuclear Information System (INIS)

    Liang Meiyan; Shen Jingling; Wang Guangqin

    2008-01-01

    Absorption spectra of six illicit drugs were measured by using the terahertz time-domain spectroscopy technique in the range 0.2-2.6 THz and then clustered with self-organization feature map (SOM) artificial neural network. After the network training process, the spectra collected at another time were identified successfully by the well-trained SOM network. An effective distance was introduced as a quantitative criterion to decide which cluster the new spectra were affiliated with

  13. The application of artificial neural network in radon disaster model of uranium mining

    International Nuclear Information System (INIS)

    Zhu Yufeng; Zhu Guogen; Zhou Shijian

    2012-01-01

    The structural features, data analysis and learning process of the feed-forward neural network (BP ANN) were analyzed first. Radon samples from the Fuzhou Jinan Uranium Industry Limited Company were then used to train the network and make forecasts, and a forecasting model was established for radon disasters in uranium mines. The method and effectiveness of the BP neural network in predicting radon disasters were discussed. Tests on the training samples showed that the BP network gave fairly satisfactory results in predicting mine radon disasters. (authors)

  14. Plant species classification using deep convolutional neural network

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Karstoft, Henrik; Midtiby, Henrik Skov

    2016-01-01

    Information on which weed species are present within agricultural fields is important for site specific weed management. This paper presents a method that is capable of recognising plant species in colour images by using a convolutional neural network. The network is built from scratch trained an...

  15. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    Full Text Available A short-term load forecasting model based on quantum Elman neural networks was constructed in this paper. Quantum computation and the Elman feedback mechanism were integrated into the quantum Elman neural networks. Quantum computation can effectively improve the approximation capability and the information processing ability of the neural networks. Quantum Elman neural networks have not only feedforward connections but also feedback connections. The feedback connections between the hidden nodes and the context nodes belong to state feedback within the internal system, which forms a specific dynamic memory. Phase space reconstruction theory is the theoretical basis for constructing the forecasting model. The training samples are formed by means of a K-nearest neighbour approach. The testing results of the example simulation show that the model based on quantum Elman neural networks is better than the model based on the quantum feedforward neural network, the model based on the conventional Elman neural network, and the model based on the conventional feedforward neural network. The proposed model can therefore effectively improve the prediction accuracy. The research in this paper lays a theoretical foundation for the practical engineering application of short-term load forecasting models based on quantum Elman neural networks.
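
    A brief sketch of the sample-formation step described above (not of the quantum Elman network itself): the load series is embedded by time-delay phase space reconstruction, and the K nearest embedded states to the current state are selected as training samples. The embedding dimension, delay and K are assumptions.

```python
# Sketch of training-sample formation: phase space (time-delay) embedding of the load
# series, then K-nearest-neighbour selection of similar historical states.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
load = 100 + 20 * np.sin(np.arange(1000) * 2 * np.pi / 24) + rng.normal(0, 2, 1000)

m, tau = 6, 1                                  # embedding dimension and delay
idx = np.arange(m) * tau
states = np.array([load[i + idx] for i in range(len(load) - m * tau)])
targets = load[m * tau:]                       # next value after each embedded state

# For the current state, pick the K most similar historical states as training samples.
K = 50
nn = NearestNeighbors(n_neighbors=K).fit(states[:-1])
_, neigh = nn.kneighbors(states[-1:])
X_train, y_train = states[:-1][neigh[0]], targets[:-1][neigh[0]]
print("training set shape:", X_train.shape, y_train.shape)
```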

  16. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    Science.gov (United States)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and a grey algorithm to forecast and study the lidar wind field. To reduce the residual error of the wind field prediction made with the BP neural network and the grey algorithm, the minimum of the residual error function is sought: a BP neural network is trained on the residuals of the grey algorithm, the trained network model is used to forecast the residual sequence, and the predicted residual sequence is used to correct the forecast sequence of the grey algorithm. The test data show that the grey algorithm corrected by the BP neural network can effectively reduce the residual error and improve the prediction precision.
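
    A hedged sketch of the residual-correction scheme: a classical GM(1,1) grey model produces the first forecast, a small network learns its residual sequence, and the predicted residual corrects the grey forecast. The toy series, window length and network size are illustrative assumptions.

```python
# Sketch of the residual-correction idea: GM(1,1) grey forecast + network-predicted
# residual. The toy wind-speed series and all sizes below are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
x0 = 8 + 0.05 * np.arange(80) + 0.5 * np.sin(np.arange(80) / 5) + rng.normal(0, 0.1, 80)

def gm11_fit(x0):
    """Classical GM(1,1): fit a, b on the accumulated series and return a predictor."""
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    def predict(k):                                    # k = 0, 1, 2, ...
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return np.where(k == 0, x0[0], x1_hat - x1_prev)
    return predict

grey = gm11_fit(x0)
k = np.arange(len(x0))
grey_fit = grey(k)
residual = x0 - grey_fit

# Train a BP-style network to predict the residual from a window of previous residuals.
w = 5
X = np.array([residual[i:i + w] for i in range(len(residual) - w)])
y = residual[w:]
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=6).fit(X, y)

# Corrected one-step-ahead forecast = grey forecast + predicted residual.
next_grey = grey(np.array([len(x0)]))[0]
next_resid = net.predict(residual[-w:].reshape(1, -1))[0]
print("grey forecast:", next_grey, "corrected:", next_grey + next_resid)
```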

  17. Neural networks for predicting breeding values and genetic gains

    Directory of Open Access Journals (Sweden)

    Gabi Nunes Silva

    2014-12-01

    Full Text Available Analysis using Artificial Neural Networks has been described as an approach to the decision-making process that, although incipient, has been reported as presenting high potential for use in animal and plant breeding. In this study, we introduce the procedure of using an expanded data set for training the network. We also propose using statistical parameters, in addition to the mean phenotypic value, to estimate the breeding value of genotypes in simulated scenarios with a feed-forward back-propagation multilayer perceptron network. After evaluating artificial neural network configurations, our results showed their superiority to estimates based on linear models, as well as their applicability in the genetic value prediction process. The results further indicated the good generalization performance of the neural network model in several additional validation experiments.

  18. The DSFPN, a new neural network for optical character recognition.

    Science.gov (United States)

    Morns, L P; Dlay, S S

    1999-01-01

    A new type of neural network for recognition tasks is presented in this paper. The network, called the dynamic supervised forward-propagation network (DSFPN), is based on the forward-only version of the counterpropagation network (CPN). The DSFPN trains using a supervised algorithm and can grow dynamically during training, allowing subclasses in the training data to be learnt in an unsupervised manner. It is shown to train in times comparable to the CPN while giving better classification accuracies than the popular backpropagation network. Both Fourier descriptors and wavelet descriptors are used for image preprocessing, and the wavelets are shown to give a far better performance.

  19. Separation prediction in two dimensional boundary layer flows using artificial neural networks

    International Nuclear Information System (INIS)

    Sabetghadam, F.; Ghomi, H.A.

    2003-01-01

    In this article, the ability of artificial neural networks to predict separation in steady two-dimensional boundary layer flows is studied. Data for network training are extracted from the numerical solution of an ODE obtained from the Von Karman integral equation with an approximate one-parameter Pohlhausen velocity profile. As an appropriate neural network, a two-layer radial basis generalized regression artificial neural network is used. The results show good agreement between the overall behavior of the flow fields predicted by the artificial neural network and the actual flow fields for some cases. The method can easily be extended to unsteady separation and to turbulent as well as compressible boundary layer flows. (author)

  20. A mixed-scale dense convolutional neural network for image analysis

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); J.A. Sethian (James)

    2016-01-01

    textabstractDeep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results

  1. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  2. Tests of track segment and vertex finding with neural networks

    International Nuclear Information System (INIS)

    Denby, B.; Lessner, E.; Lindsey, C.S.

    1990-04-01

    Feed forward neural networks have been trained, using back-propagation, to find the slopes of simulated track segments in a straw chamber and to find the vertex of tracks from both simulated and real events in a more conventional drift chamber geometry. Network architectures, training, and performance are presented. 12 refs., 7 figs

  3. Performance of an artificial neural network for vertical root fracture detection: an ex vivo study.

    Science.gov (United States)

    Kositbowornchai, Suwadee; Plermkamon, Supattra; Tangkosol, Tawan

    2013-04-01

    To develop an artificial neural network for vertical root fracture detection. A probabilistic neural network design was used to classify whether a tooth root was sound or had a vertical root fracture. Two hundred images (50 sound and 150 with vertical root fractures) derived from digital radiography were used to train and test the artificial neural network; they were divided into three groups according to the number of training and test data sets: 80/120, 105/95 and 130/70, respectively. Both training and test data were evaluated using grey-scale data along a line passing through the root. These data were normalized to reduce the grey-scale variance and fed as input to the neural network. The variance of the function used on the recognition data was varied between 0 and 1 to select the best-performing neural network. The performance of the neural network was evaluated using a diagnostic test. After testing the data under several variances of the function, we found that the highest sensitivity (98%), specificity (90.5%) and accuracy (95.7%) occurred in Group three, for which the variance of the function on the recognition data was between 0.005 and 0.025. The neural network designed in this study has sufficient sensitivity, specificity and accuracy to be a model for vertical root fracture detection. © 2012 John Wiley & Sons A/S.
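
    A minimal probabilistic-neural-network-style sketch, assuming simulated grey-scale line profiles: each class is scored by a Gaussian kernel sum over its training examples, and the spread parameter is scanned to see its effect on sensitivity and specificity. The data, spread values and feature length are all illustrative assumptions, and the useful spread range depends on the data scale.

```python
# Probabilistic-neural-network-style classifier: class-wise Gaussian kernel sums with
# a scanned spread parameter. The grey-scale line profiles below are simulated.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_feat = 64                                         # normalized grey levels along a line
sound = rng.normal(0.6, 0.05, (50, n_feat))
fractured = rng.normal(0.6, 0.05, (150, n_feat))
fractured[:, 30:34] -= 0.2                          # simulated fracture line
X = np.vstack([sound, fractured])
y = np.array([0] * 50 + [1] * 150)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=70, stratify=y, random_state=7)

def pnn_predict(X_tr, y_tr, X_te, sigma):
    preds = []
    for x in X_te:
        scores = []
        for c in (0, 1):
            d2 = ((X_tr[y_tr == c] - x) ** 2).sum(axis=1)
            scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))   # class kernel sum
        preds.append(int(np.argmax(scores)))
    return np.array(preds)

for sigma in (0.05, 0.1, 0.2, 0.5):
    p = pnn_predict(X_tr, y_tr, X_te, sigma)
    sens = (p[y_te == 1] == 1).mean()
    spec = (p[y_te == 0] == 0).mean()
    print(f"sigma={sigma}: sensitivity={sens:.2f} specificity={spec:.2f}")
```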

  4. Prediction of Aerodynamic Coefficient using Genetic Algorithm Optimized Neural Network for Sparse Data

    Science.gov (United States)

    Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Wind tunnels use scale models to characterize aerodynamic coefficients. Wind tunnel testing can be slow and costly due to high personnel overhead and intensive power utilization. Although manual curve fitting can be done, it is highly efficient to use a neural network to define the complex relationship between variables. Numerical simulation of complex vehicles over the wide range of conditions required for flight simulation requires static and dynamic data. Static data at low Mach numbers and angles of attack may be obtained with simpler Euler codes. Static data for stalled vehicles, where zones of flow separation are usually present at higher angles of attack, require Navier-Stokes simulations, which are costly due to the large processing time required to attain convergence. Preliminary dynamic data may be obtained with simpler methods based on correlations and vortex methods; however, accurate prediction of the dynamic coefficients requires complex and costly numerical simulations. A reliable and fast method of predicting complex aerodynamic coefficients for flight simulation is presented using a neural network. The training data for the neural network are derived from numerical simulations and wind-tunnel experiments. The aerodynamic coefficients are modeled as functions of the flow characteristics and the control surfaces of the vehicle. The basic coefficients of lift, drag and pitching moment are expressed as functions of angle of attack and Mach number. The modeled and training aerodynamic coefficients show good agreement. This method shows excellent potential for rapid development of aerodynamic models for flight simulation. Genetic Algorithms (GA) are used to optimize a previously built Artificial Neural Network (ANN) that reliably predicts aerodynamic coefficients. Results indicate that the GA provided an efficient method of optimizing the ANN model to predict aerodynamic coefficients. The reliability of the ANN using the GA includes prediction of aerodynamic

  5. PEAK TRACKING WITH A NEURAL NETWORK FOR SPECTRAL RECOGNITION

    NARCIS (Netherlands)

    COENEGRACHT, PMJ; METTING, HJ; VANLOO, EM; SNOEIJER, GJ; DOORNBOS, DA

    1993-01-01

    A peak tracking method based on a simulated feed-forward neural network with back-propagation is presented. The network uses the normalized UV spectra and peak areas measured in one chromatogram for peak recognition. It suffices to train the network with only one set of spectra recorded in one

  6. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  7. Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations

    OpenAIRE

    Harradon, Michael; Druce, Jeff; Ruttenberg, Brian

    2018-01-01

    Deep neural networks are complex and opaque. As they enter application in a variety of important and safety critical domains, users seek methods to explain their output predictions. We develop an approach to explaining deep neural networks by constructing causal models on salient concepts contained in a CNN. We develop methods to extract salient concepts throughout a target network by using autoencoders trained to extract human-understandable representations of network activations. We then bu...

  8. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron ...

  9. Particle identification using artificial neural networks at BESIII

    International Nuclear Information System (INIS)

    Qin Gang; Lv Junguang; Bian Jianming; Chinese Academy of Sciences, Beijing

    2008-01-01

    A multilayer perceptron neural network technique has been applied to particle identification at BESIII. The networks are trained at the sub-detector level. The NN outputs of the sub-detectors can be sent to a sequential network or be constructed as PDFs for a likelihood. Good muon-ID, electron-ID and hadron-ID are obtained from the networks using simulated Monte Carlo samples. (authors)

  10. Development and Validation of a Deep Neural Network Model for Prediction of Postoperative In-hospital Mortality.

    Science.gov (United States)

    Lee, Christine K; Hofer, Ira; Gabel, Eilon; Baldi, Pierre; Cannesson, Maxime

    2018-04-17

    The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality. The data used to train and validate the algorithm consist of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed the improvement of the deep neural network from adding the American Society of Anesthesiologists (ASA) Physical Status Classification and the robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and the Risk Stratification Index. In-hospital mortality in the training and test sets was 0.81% and 0.73%, respectively. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristics curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90, 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristics curve, at 0.97 (95% CI, 0.94 to 0.99). Deep neural networks can predict in-hospital mortality based on automatically extractable intraoperative data, but are not (yet) superior to existing methods.
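
    A hedged sketch of the training setup described: a feed-forward classifier with a logistic output fitted by SGD with momentum on an 80/20 split of a synthetic, strongly imbalanced data set, evaluated by ROC AUC. The layer sizes, learning rate and generated features are assumptions, not the study's configuration.

```python
# Feed-forward network with a logistic output, trained by SGD with momentum on an
# 80/20 split and scored by ROC AUC. The feature matrix is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Stand-in for 87 intraoperative features with a rare positive outcome (~0.8%).
X, y = make_classification(n_samples=60000, n_features=87, n_informative=20,
                           weights=[0.992], random_state=8)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=8)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), solver="sgd", momentum=0.9,
                  learning_rate_init=0.01, max_iter=50, random_state=8))
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test ROC AUC: {auc:.3f}")
```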

  11. Using neural networks to infer the hydrodynamic yield of aspherical sources

    International Nuclear Information System (INIS)

    Moran, B.; Glenn, L.

    1993-01-01

    We distinguish two kinds of difficulties with yield determination from aspherical sources. The first kind, the spoofing difficulty, occurs when a fraction of the energy of the explosion is channeled in such a way that it is not detected by the CORRTEX cable. In this case, neither neural networks nor any expert system can be expected to accurately estimate the yield without detailed information about device emplacement within the canister. Numerical simulations, however, can provide an upper bound on the undetected fraction of the explosive energy. In the second instance, the interpretation difficulty, the data appear abnormal when analyzed using similar-explosion scaling and the assumption of a spherical front. The inferred yield varies with time and the confidence in the yield estimate decreases. It is this kind of problem we address in this paper and for which neural networks can make a contribution. We used a back-propagation neural network to infer the hydrodynamic yield of simulated aspherical sources. We trained the network using a subset of simulations from 3 different aspherical sources, with 3 different yields, and 3 satellite offset separations. The trained network was able to predict the yield within 15% in all cases and to identify the correct type of aspherical source in most cases. The predictive capability of the network increased with a larger training set. The neural network approach can easily incorporate information from new calculations or experiments and is therefore flexible and easy to maintain. We describe the potential capabilities and limitations of using such networks for yield estimations

  12. Automatic Classification of volcano-seismic events based on Deep Neural Networks.

    Science.gov (United States)

    Titos Luzón, M.; Bueno Rodriguez, A.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.

    2017-12-01

    Seismic monitoring of active volcanoes is a popular remote sensing technique used to detect seismic activity, often associated with energy exchanges between the volcano and the environment. As a result, seismographs register a wide range of volcano-seismic signals that reflect the nature and underlying physics of volcanic processes. Machine learning and signal processing techniques provide an appropriate framework to analyze such data. In this research, we propose a new classification framework for seismic events based on deep neural networks. Deep neural networks are composed of multiple processing layers and can discover intrinsic patterns from the data itself. Internal parameters can be initialized using a greedy unsupervised pre-training stage, leading to an efficient training of fully connected architectures. We aim to determine the robustness of these architectures as classifiers of seven different types of seismic events recorded at "Volcán de Fuego" (Colima, Mexico). Two deep neural networks with different pre-training strategies are studied: the stacked denoising autoencoder and deep belief networks. Results are compared to existing machine learning algorithms (SVM, Random Forest, Multilayer Perceptron). We used 5 LPC coefficients over three non-overlapping segments as training features in order to characterize temporal evolution, avoid redundancy and encode the signal, regardless of its duration. Experimental results show that deep architectures can classify seismic events with higher accuracy than classical algorithms, attaining up to 92% recognition accuracy. Pre-training initialization helps these models detect events that occur simultaneously in time (such as explosions and rockfalls), increases robustness against noisy inputs, and provides better generalization. These results demonstrate that deep neural networks are robust classifiers and can be deployed in real environments to monitor the seismicity of restless volcanoes.

  13. Neural network evaluation of tokamak current profiles for real time control

    Science.gov (United States)

    Wróblewski, Dariusz

    1997-02-01

    Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of the output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of the safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include the central safety factor, q0, the minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated.

  14. Neural network evaluation of tokamak current profiles for real time control

    International Nuclear Information System (INIS)

    Wroblewski, D.

    1997-01-01

    Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of the output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of the safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include the central safety factor, q0, the minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated. copyright 1997 American Institute of Physics

  15. A novel and generalized approach in the inversion of geoelectrical resistivity data using Artificial Neural Networks (ANN)

    Science.gov (United States)

    Raj, A. Stanley; Srinivas, Y.; Oliver, D. Hudson; Muthuraj, D.

    2014-03-01

    The non-linear apparent resistivity problem in the subsurface study of the earth takes into account the model parameters in terms of the resistivity and thickness of the individual subsurface layers, using synthetic data to train Artificial Neural Networks (ANN). Here we used a single-layer feed-forward neural network with a fast back-propagation learning algorithm. On proper training of the back-propagation network, it gives the resistivity and thickness of the subsurface layer model for the field resistivity data, with reference to the synthetic data on which the network was trained. During training, the weights and biases of the network are iteratively adjusted to minimize the network performance function. With adequate training, errors are minimized and the best result is obtained using the artificial neural networks. The network is trained with a large number of VES data sets, and the trained network is demonstrated on field data. The accuracy of the inversion depends upon the amount of training data. With this novel and specially designed algorithm, the interpretation of the vertical electrical soundings has been carried out successfully, yielding a more accurate layer model.

  16. Target recognition based on convolutional neural network

    Science.gov (United States)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    One of the important parts of object target recognition is feature extraction, which can be divided into manual feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but it carries a high possibility of over-fitting due to its global connections. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, a layer-by-layer trained convolutional neural network (CNN), which can extract features from lower layers to higher layers. The resulting features are more discriminative, which is beneficial to object target recognition.

  17. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University, which contains quite a high degree of variability in expression, pose, and facial details.

  18. Modelling and Prediction of Photovoltaic Power Output Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Aminmohammad Saberian

    2014-01-01

    Full Text Available This paper presents a solar power modelling method using artificial neural networks (ANNs). Two neural network structures, namely, the general regression neural network (GRNN) and the feedforward back propagation (FFBP) network, have been used to model a photovoltaic panel output power and approximate the generated power. Both neural networks have four inputs and one output. The inputs are maximum temperature, minimum temperature, mean temperature, and irradiance; the output is the power. The data used in this paper span January 1, 2006 to December 31, 2010. The five years of data were split into two parts, 2006-2008 and 2009-2010; the first part was used for training and the second part was used for testing the neural networks. A mathematical equation is used to estimate the generated power. In the end, both of these networks showed good modelling performance; however, FFBP showed better performance compared with GRNN.
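    As an illustration of the GRNN idea mentioned above, the sketch below implements a general regression neural network as a kernel-weighted average of stored training targets. The four inputs mirror the paper's choice of weather variables, but the data, the spread parameter, and the target function are synthetic placeholders.

```python
import numpy as np

# Sketch of a general regression neural network (GRNN): the prediction is a
# Gaussian-kernel-weighted average of the stored training targets.
rng = np.random.default_rng(1)
X_train = rng.uniform(size=(365, 4))                 # max/min/mean temperature, irradiance (synthetic)
y_train = X_train @ np.array([0.2, 0.1, 0.3, 1.5])   # stand-in for measured PV power

def grnn_predict(x, X_train, y_train, sigma=0.1):
    d2 = np.sum((X_train - x) ** 2, axis=1)          # squared distance to each stored pattern
    w = np.exp(-d2 / (2 * sigma ** 2))               # pattern-layer activations
    return np.dot(w, y_train) / (np.sum(w) + 1e-12)  # summation / division layers

x_new = rng.uniform(size=4)
print(grnn_predict(x_new, X_train, y_train))
```

    Unlike FFBP, a GRNN needs no iterative weight training; only the spread parameter sigma is tuned, which is one reason the two structures are often compared.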

  19. Applying neural networks to control the TFTR neutral beam ion sources

    International Nuclear Information System (INIS)

    Lagin, L.

    1992-01-01

    This paper describes the application of neural networks to the control of the neutral beam long-pulse positive ion source accelerators on the Tokamak Fusion Test Reactor (TFTR) at Princeton University. Neural networks were used to learn how the operators adjust the control setpoints when running these sources. The data sets used to train these networks were derived from a large database containing actual setpoints and power supply waveform calculations for the 1990 run period. The networks learned what the optimum initial control setpoints should be, based upon the desired accel voltage and perveance levels. Neural networks were also used to predict the divergence of the ion beam.

  20. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    The nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS) which is a substitute for oil fuel, based on physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of parent coal are 3 indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction result than the traditional polynomial regression equation. The BP neural network model with 3 input factors of HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error of 0.40%, which is much lower than that of 1.15% given by the traditional polynomial regression equation. (author)

  1. Neural-network-designed pulse sequences for robust control of singlet-triplet qubits

    Science.gov (United States)

    Yang, Xu-Chen; Yung, Man-Hong; Wang, Xin

    2018-04-01

    Composite pulses are essential for universal manipulation of singlet-triplet spin qubits. In the absence of noise, they are required to perform arbitrary single-qubit operations due to the special control constraint of a singlet-triplet qubit, while in a noisy environment, more complicated sequences have been developed to dynamically correct the error. Tailoring these sequences typically requires numerically solving a set of nonlinear equations. Here we demonstrate that these pulse sequences can be generated by a well-trained, double-layer neural network. For sequences designed for the noise-free case, the trained neural network is capable of producing almost exactly the same pulses known in the literature. For more complicated noise-correcting sequences, the neural network produces pulses with slightly different line shapes, but the robustness against noise remains comparable. These results indicate that the neural network can be a judicious and powerful alternative to existing techniques in developing pulse sequences for universal fault-tolerant quantum computation.

  2. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Although compressing large data sets vastly simplifies both frequentist and Bayesian inference, important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  3. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for the reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  4. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for the reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  5. Induction Motor Speed Control Using a Backpropagation Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    MUHAMMAD RUSWANDI DJALAL

    2017-07-01

    Full Text Available Many artificial-intelligence-based control strategies, such as Fuzzy Logic and Artificial Neural Networks (ANN), have been proposed in the literature. The purpose of this research was to design a controller so that the speed of an induction motor can be adjusted as needed, and to compare the performance of the induction motor with and without control. In this research, an artificial neural network method is proposed to control the speed of a three-phase induction motor. The reference speed of the motor was set to 140 rad/s, 150 rad/s, and 130 rad/s; the speed change was applied at 0.3 s intervals and the maximum simulation time was 0.9 s. Case 1, without control, shows the torque and speed response of the uncontrolled three-phase induction motor: although the commanded speed changes every 0.3 s, the torque is not affected; moreover, the uncontrolled motor performs poorly because its speed cannot be adjusted as required. Case 2, with backpropagation neural network control, shows that the speed changes every 0.3 s likewise do not affect the torque, and the controller performs well because the motor speed can be adjusted as required. Keywords: Backpropagation Neural Network (BPNN), NN Training, NN Testing, Motor.

  6. A 3D Active Learning Application for NeMO-Net, the NASA Neural Multi-Modal Observation and Training Network for Global Coral Reef Assessment

    Science.gov (United States)

    van den Bergh, J.; Schutz, J.; Chirayath, V.; Li, A.

    2017-12-01

    NeMO-Net, the NASA neural multi-modal observation and training network for global coral reef assessment, is an open-source deep convolutional neural network and interactive active learning training software aiming to accurately assess the present and past dynamics of coral reef ecosystems through determination of percent living cover and morphology as well as mapping of spatial distribution. We present an interactive video game prototype for tablet and mobile devices where users interactively label morphology classifications over mm-scale 3D coral reef imagery captured using fluid lensing to create a dataset that will be used to train NeMO-Net's convolutional neural network. The application currently allows for users to classify preselected regions of coral in the Pacific and will be expanded to include additional regions captured using our NASA FluidCam instrument, presently the highest-resolution remote sensing benthic imaging technology capable of removing ocean wave distortion, as well as lower-resolution airborne remote sensing data from the ongoing NASA CORAL campaign. Active learning applications present a novel methodology for efficiently training large-scale Neural Networks wherein variances in identification can be rapidly mitigated against control data. NeMO-Net periodically checks users' input against pre-classified coral imagery to gauge their accuracy and utilizes in-game mechanics to provide classification training. Users actively communicate with a server and are requested to classify areas of coral for which other users had conflicting classifications and contribute their input to a larger database for ranking. In partnering with Mission Blue and IUCN, NeMO-Net leverages an international consortium of subject matter experts to classify areas of confusion identified by NeMO-Net and generate additional labels crucial for identifying decision boundary locations in coral reef assessment.

  7. A 3D Active Learning Application for NeMO-Net, the NASA Neural Multi-Modal Observation and Training Network for Global Coral Reef Assessment

    Science.gov (United States)

    van den Bergh, Jarrett; Schutz, Joey; Li, Alan; Chirayath, Ved

    2017-01-01

    NeMO-Net, the NASA neural multi-modal observation and training network for global coral reef assessment, is an open-source deep convolutional neural network and interactive active learning training software aiming to accurately assess the present and past dynamics of coral reef ecosystems through determination of percent living cover and morphology as well as mapping of spatial distribution. We present an interactive video game prototype for tablet and mobile devices where users interactively label morphology classifications over mm-scale 3D coral reef imagery captured using fluid lensing to create a dataset that will be used to train NeMO-Net's convolutional neural network. The application currently allows for users to classify preselected regions of coral in the Pacific and will be expanded to include additional regions captured using our NASA FluidCam instrument, presently the highest-resolution remote sensing benthic imaging technology capable of removing ocean wave distortion, as well as lower-resolution airborne remote sensing data from the ongoing NASA CORAL campaign. Active learning applications present a novel methodology for efficiently training large-scale Neural Networks wherein variances in identification can be rapidly mitigated against control data. NeMO-Net periodically checks users' input against pre-classified coral imagery to gauge their accuracy and utilizes in-game mechanics to provide classification training. Users actively communicate with a server and are requested to classify areas of coral for which other users had conflicting classifications and contribute their input to a larger database for ranking. In partnering with Mission Blue and IUCN, NeMO-Net leverages an international consortium of subject matter experts to classify areas of confusion identified by NeMO-Net and generate additional labels crucial for identifying decision boundary locations in coral reef assessment.

  8. A study of reactor monitoring method with neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nabeshima, Kunihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The purpose of this study is to investigate a methodology for Nuclear Power Plant (NPP) monitoring with neural networks, which build plant models by learning past normal operation patterns. The concept of this method is to detect the symptoms of small anomalies by monitoring the deviations between the process signals measured from the actual plant and the corresponding output signals of the neural network model, which will differ if abnormal operational patterns are presented at the input of the neural network. An auto-associative network, which reproduces its inputs at its outputs, can detect any kind of anomaly condition using normal operation data only. Monitoring tests of the feedforward neural network with adaptive learning were performed using a PWR plant simulator, with which many kinds of anomaly conditions can easily be simulated. The adaptively trained feedforward network could follow the actual plant dynamics and the changes of plant condition, and found most of the anomalies much earlier than the conventional alarm system during steady-state and transient operations. Off-line and on-line test results during one year of operation at an actual NPP (PWR) then showed that the neural network could detect several small anomalies which the operators and the conventional alarm system did not notice. Furthermore, a sensitivity analysis suggests that the plant models built by the neural networks are appropriate. Finally, the simulation results show that a recurrent neural network with feedback connections could successfully model the slow behavior of the reactor dynamics without adaptive learning. Therefore, the recurrent neural network with adaptive learning will be the best choice for an actual reactor monitoring system. (author)
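    As a minimal illustration of the auto-associative monitoring idea, the sketch below trains a bottlenecked network to reproduce normal-operation signals and flags samples whose reconstruction residual is large. The signal dimensions, the bottleneck size, the threshold rule, and the use of scikit-learn's MLPRegressor are all illustrative assumptions, not the plant model described in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Auto-associative monitoring sketch: train on normal operation data only,
# then monitor deviations between measured signals and the network's output.
rng = np.random.default_rng(2)
normal = rng.normal(size=(2000, 12))                 # 12 plant process signals (synthetic)

autoencoder = MLPRegressor(hidden_layer_sizes=(6,),  # bottleneck forces a compact plant model
                           max_iter=2000, random_state=0)
autoencoder.fit(normal, normal)                      # inputs are reproduced at the outputs

threshold = 3 * np.std(normal - autoencoder.predict(normal))

def check(sample):
    residual = np.abs(sample - autoencoder.predict(sample.reshape(1, -1))[0])
    return "anomaly" if residual.max() > threshold else "normal"

print(check(normal[0]))          # expected: normal
print(check(normal[0] + 5.0))    # a large deviation on every channel -> anomaly
```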

  9. Integration of Neural Networks and Cellular Automata for Urban Planning

    Institute of Scientific and Technical Information of China (English)

    Anthony Gar-on Yeh; LI Xia

    2004-01-01

    This paper presents a new type of cellular automata (CA) model for the simulation of alternative land development using neural networks for urban planning. CA models can be regarded as a planning tool because they can generate alternative urban growth. Alternative development patterns can be formed by using different sets of parameter values in CA simulation. A critical issue is how to define parameter values for realistic and idealized simulation. This paper demonstrates that neural networks can simplify CA models while generating more plausible results. The simulation is based on a simple three-layer network with an output neuron that generates the conversion probability. No transition rules are required for the simulation. Parameter values are automatically obtained by training the network on satellite remote sensing data. The original training data can be assessed and modified according to planning objectives. Alternative urban patterns can be easily formulated by using the modified training data sets rather than by changing the model.

  10. Application of Artificial Neural Networks in the Heart Electrical Axis Position Conclusion Modeling

    Science.gov (United States)

    Bakanovskaya, L. N.

    2016-08-01

    The article addresses the construction of a model for determining the position of the heart's electrical axis using an artificial neural network. The input signals of the neural network are the values of the deflections Q, R and S, and the output signal is the position of the heart's electrical axis. Training of the network is carried out by the error propagation method. The test results allow the conclusion that the created neural network determines the axis position with a high degree of accuracy.

  11. Neural network scatter correction technique for digital radiography

    International Nuclear Information System (INIS)

    Boone, J.M.

    1990-01-01

    This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data on the low-pass filtered open field image and the sparsely sampled scatter image, and then the trained network is used to correct the entire image (pixel by pixel) in a manner which is operationally similar to but potentially more powerful than convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique

  12. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    Full Text Available The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Moreover, the results showed that this technique can be successfully applied in process control algorithms owing to its low processing time and its flexibility in the incorporation of new data.

  13. A Neural Network Approach to Fluid Level Measurement in Dynamic Environments Using a Single Capacitive Sensor

    Directory of Open Access Journals (Sweden)

    Edin TERZIC

    2010-03-01

    Full Text Available A measurement system has been developed using a single-tube capacitive sensor to accurately determine the fluid level in vehicular fuel tanks. A novel approach based on artificial-neural-network-based signal pre-processing and classification is described in this article. A broad investigation of the backpropagation neural network and some selected signal pre-processing filters, namely, Moving Mean, Moving Median, and Wavelet Filter, is also presented. A field drive trial was conducted under normal driving conditions at various fuel volumes ranging from 5 L to 50 L to acquire training samples from the capacitive sensor. A second field trial was conducted to obtain test samples to verify the performance of the neural network. The neural network was trained and verified with 50% of the training and test samples. The results obtained using the neural network approach with different filtration methods are compared with the results obtained using simple Moving Mean and Moving Median functions. It is demonstrated that the backpropagation neural network with the Moving Median filter produced the most accurate outcome compared with the other signal filtration methods.

  14. Using neural networks for prediction of nuclear parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pereira Filho, Leonidas; Souto, Kelling Cabral, E-mail: leonidasmilenium@hotmail.com, E-mail: kcsouto@bol.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia do Rio de Janeiro (IFRJ), Rio de Janeiro, RJ (Brazil); Machado, Marcelo Dornellas, E-mail: dornemd@eletronuclear.gov.br [Eletrobras Termonuclear S.A. (GCN.T/ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil). Gerencia de Combustivel Nuclear

    2013-07-01

    The earliest work on artificial neural networks (ANN) dates from 1943, when Warren McCulloch and Walter Pitts studied the behavior of the biological neuron with the goal of creating a mathematical model. Further work followed until the 1980s, which witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization, prediction and others, they have numerous applications in various areas, including the nuclear field. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform various core evaluations, for example core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type is trained to predict these values. The main objective of this work is the development of an MLP artificial neural network capable of predicting, in a very short time and with good accuracy, two important parameters used in the core reloading problem: Boron Concentration and Power Peaking Factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three network models are constructed using the same input data and providing the following outputs: 1 - Boron Concentration and Power Peaking Factor, 2 - Boron Concentration, and 3 - Power Peaking Factor. (author)

  15. Using neural networks for prediction of nuclear parameters

    International Nuclear Information System (INIS)

    Pereira Filho, Leonidas; Souto, Kelling Cabral; Machado, Marcelo Dornellas

    2013-01-01

    The earliest work on artificial neural networks (ANN) dates from 1943, when Warren McCulloch and Walter Pitts studied the behavior of the biological neuron with the goal of creating a mathematical model. Further work followed until the 1980s, which witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization, prediction and others, they have numerous applications in various areas, including the nuclear field. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform various core evaluations, for example core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type is trained to predict these values. The main objective of this work is the development of an MLP artificial neural network capable of predicting, in a very short time and with good accuracy, two important parameters used in the core reloading problem: Boron Concentration and Power Peaking Factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three network models are constructed using the same input data and providing the following outputs: 1 - Boron Concentration and Power Peaking Factor, 2 - Boron Concentration, and 3 - Power Peaking Factor. (author)

  16. A Fault Diagnosis Approach for the Hydraulic System by Artificial Neural Networks

    OpenAIRE

    Xiangyu He; Shanghong He

    2014-01-01

    Based on artificial neural networks, a fault diagnosis approach for the hydraulic system was proposed in this paper. Normal state samples were used as the training data to develop a dynamic general regression neural network (DGRNN) model. The trained DGRNN model then served as the fault determinant to diagnose test faults and the work condition of the hydraulic system was identified. Several typical faults of the hydraulic system were used to verify the fault diagnosis approach. Experiment re...

  17. Phonematic translation of Polish texts by the neural network

    International Nuclear Information System (INIS)

    Bielecki, A.; Podolak, I.T.; Wosiek, J.; Majkut, E.

    1996-01-01

    Using the back-propagation algorithm, we have trained a feed-forward neural network to pronounce the Polish language, more precisely to translate Polish text into its phonematic counterpart. Depending on the input coding and the network architecture, a translation efficiency of 88%-95% was achieved. (author)

  18. Identifying apple surface defects using principal components analysis and artificial neural networks

    Science.gov (United States)

    Artificial neural networks and principal components were used to detect surface defects on apples in near-infrared images. Neural networks were trained and tested on sets of principal components derived from columns of pixels from images of apples acquired at two wavelengths (740 nm and 950 nm). I...

  19. Prediction of residual stress for dissimilar metals welding at nuclear power plants using fuzzy neural network models

    International Nuclear Information System (INIS)

    Na, Man Gyun; Kim, Jin Weon; Lim, Dong Hyuk

    2007-01-01

    A fuzzy neural network model is presented to predict residual stress for dissimilar metal welding under various welding conditions. The fuzzy neural network model, which consists of a fuzzy inference system and a neuronal training system, is optimized by a hybrid learning method that combines a genetic algorithm to optimize the membership function parameters and a least squares method to solve the consequent parameters. The data from the finite element analysis are divided into four data groups, which are split according to two end-section constraints and two prediction paths. Four fuzzy neural network models were therefore applied to the numerical data obtained from the finite element analysis for the two end-section constraints and the two prediction paths. The fuzzy neural network models were trained with the aid of a data set prepared for training (training data), optimized by means of an optimization data set and verified by means of a test data set that was different (independent) from the training data and the optimization data. The fuzzy neural network models are sufficiently accurate for use in an integrity evaluation by predicting the residual stress of dissimilar metal welding zones.

  20. Application of a neural network to control a pressurized water reactor

    International Nuclear Information System (INIS)

    Lin, C.; Ku, C.C.; Lee, C.S.

    1993-01-01

    A neural network has been trained to control a pressurized water reactor. The inputs of the training patterns are the plant signals, and the outputs are the control rod actions. The training patterns essentially form a lookup table of control actions. The table is designed heuristically, based on the designer's knowledge of the controlled system and on operating experience. This method has two advantages: the controller's performance does not depend on a mathematical model of the plant, and the controller can be nonlinear. The advantages of using neural networks to implement the controller are reduced computing time and tolerance of partial hardware failure.

  1. Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks.

    Science.gov (United States)

    Cheng, Phillip M; Tejura, Tapas K; Tran, Khoa N; Whang, Gilbert

    2018-05-01

    The purpose of this pilot study is to determine whether a deep convolutional neural network can be trained with limited image data to detect high-grade small bowel obstruction patterns on supine abdominal radiographs. Grayscale images from 3663 clinical supine abdominal radiographs were categorized into obstructive and non-obstructive categories independently by three abdominal radiologists, and the majority classification was used as ground truth; 74 images were found to be consistent with small bowel obstruction. Images were rescaled and randomized, with 2210 images constituting the training set (39 with small bowel obstruction) and 1453 images constituting the test set (35 with small bowel obstruction). Weight parameters for the final classification layer of the Inception v3 convolutional neural network, previously trained on the 2014 Large Scale Visual Recognition Challenge dataset, were retrained on the training set. After training, the neural network achieved an AUC of 0.84 on the test set (95% CI 0.78-0.89). At the maximum Youden index (sensitivity + specificity - 1), the sensitivity of the system for small bowel obstruction is 83.8%, with a specificity of 68.1%. The results demonstrate that transfer learning with convolutional neural networks, even with limited training data, may be used to train a detector for high-grade small bowel obstruction gas patterns on supine radiographs.
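    The sketch below illustrates the transfer-learning setup described above: an ImageNet-pretrained Inception v3 is loaded, its feature layers are frozen, and only a new final classification layer for two classes is trained. It assumes a recent torchvision (weights-enum API); the data loading, preprocessing, and class balancing of the actual study are omitted, and the random tensors stand in for real radiographs.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer-learning sketch: retrain only the final layer of a pretrained Inception v3.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                        # freeze the pretrained feature layers
model.fc = nn.Linear(model.fc.in_features, 2)      # new trainable 2-class output layer
model.aux_logits = False                           # drop the auxiliary head used in the
model.AuxLogits = None                             # original ImageNet training

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
images = torch.randn(4, 3, 299, 299)               # Inception v3 expects 299x299 RGB inputs
labels = torch.tensor([0, 1, 0, 0])                # 0 = non-obstructive, 1 = obstructive (illustrative)
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()                                    # gradients flow only into the new final layer
optimizer.step()
```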

  2. Unloading arm movement modeling using neural networks for a rotary hearth furnace

    Directory of Open Access Journals (Sweden)

    Iulia Inoan

    2011-12-01

    Full Text Available Neural networks are applied in many fields of engineering and nowadays have a wide range of applications. They are very useful for modeling dynamic processes whose mathematical models are hard to obtain, or processes that cannot be modeled with mathematical equations. This paper describes the modeling of the unloading arm movement of a rotary hearth furnace using neural networks with the back-propagation algorithm. In this case the designed network was trained using simulation results from a previously calculated mathematical model.

  3. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  4. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  5. History, Application, and Risk Analysis of Neural Networks: A Literature Review

    Directory of Open Access Journals (Sweden)

    Cristina Cristina

    2018-05-01

    Full Text Available A neural network is a form of artificial intelligence that has the ability to learn, grow, and adapt in a dynamic environment. Neural network research began in 1890, when the great American psychologist William James published the book "Principles of Psychology"; James was the first to publish a number of facts related to the structure and function of the brain. The history of neural network development is divided into four epochs: the Camelot era, the Depression, the Renaissance, and the Neoconnectionism era. Neural networks used today are not 100 percent accurate; however, they are still used because they perform better than alternative computing models. Uses of neural networks include pattern recognition, signal analysis, robotics, and expert systems. Risk analysis of a neural network is first performed using hazard and operability studies (HAZOPS). Determining the neural network requirements well will help in determining its contribution to system hazards and in validating the control or mitigation of any hazards. After the first stage (HAZOPS) is completed and the second stage determines the requirements, the next stage is design. The neural network undergoes repeated design-train-test development. At the design stage, the hazard analysis should consider the design aspects of the development, including the neural network architecture, size, intended use, and so on. This continues through the implementation stage, the test phase, the installation and inspection phase, and the operation phase, and ends at the maintenance stage.

  6. An application of neural networks and artificial intelligence for in-core fuel management

    International Nuclear Information System (INIS)

    Miller, L.F.; Algutifan, F.; Uhrig, R.E.

    1992-01-01

    This paper reports on the feasibility of using expert systems in combination with neural networks and neutronics calculations to improve the efficiency of obtaining optimal candidate reload core designs. The general objectives of this research are as follows: (1) generate a suitable database and ancillary software for training neural networks that duplicate neutronics calculations; (2) develop a graphical interface with the neutronics software and neural networks for manual shuffling of reload cores; (3) construct an expert system for shuffling reload cores with specified rules; (4) develop neural networks that capture the nonlinear behavior of fuel depletion; and (5) integrate the neural networks and neutronics software with an expert system to specify reload cores that obtain an appropriate figure of merit.

  7. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  8. Method Accelerates Training Of Some Neural Networks

    Science.gov (United States)

    Shelton, Robert O.

    1992-01-01

    Three-layer networks train faster provided two conditions are satisfied: the numbers of neurons in the layers are such that the majority of the work is done in the synaptic connections between the input and hidden layers, and the number of neurons in the input layer is at least as great as the number of training pairs of input and output vectors. The method is based on a modified version of the back-propagation method.
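    The snippet below is a small worked check of the two conditions, using hypothetical layer sizes and training-set size; the numbers are not taken from the original report.

```python
# Worked check of the two conditions with hypothetical sizes.
n_in, n_hidden, n_out = 64, 32, 4
n_training_pairs = 50

input_to_hidden = n_in * n_hidden    # 2048 synaptic connections
hidden_to_output = n_hidden * n_out  # 128 synaptic connections

# Condition 1: most of the connections (work) lie between the input and hidden layers.
print(input_to_hidden > hidden_to_output)   # True
# Condition 2: the input layer has at least as many neurons as there are training pairs.
print(n_in >= n_training_pairs)             # True
```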

  9. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam

    2013-10-01

    Artificial neural networks have the abilities to learn by example and are capable of solving problems that are hard to solve using ordinary rule-based programming. They have many design parameters that affect their performance such as the number and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design of experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate the experiment configuration multiple times to overcome the randomness in the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that are on the Pareto optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.

  10. The application of artificial neural networks to TLD dose algorithm

    International Nuclear Information System (INIS)

    Moscovitch, M.

    1997-01-01

    We review the application of feed-forward neural networks to the development of multi-element thermoluminescence dosimetry (TLD) dose algorithms. A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on a neural network is a fundamentally different approach from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with a given response of a multi-element dosimeter (input) many times. Being trained in this way, the algorithm eventually is able to produce its own unique solution to similar (but not exactly the same) dose calculation problems. For personnel dosimetry, the output consists of the desired dose components: deep dose, shallow dose, and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. For this application, a neural network architecture was developed based on the concept of the functional link network (FLN). The FLN concept allowed an increase in the dimensionality of the input space and the construction of a neural network without any hidden layers. This simplifies the problem and results in a relatively simple and reliable dose calculation algorithm. Overall, the neural network dose algorithm approach has been shown to significantly improve the precision and accuracy of dose calculations. (authors)
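    The sketch below illustrates the functional link idea mentioned above: the input vector is expanded with nonlinear terms (here squares and pairwise products), after which a single linear output layer suffices and can even be fit in closed form. The four-element readout, three dose components, chosen link functions, and synthetic data are illustrative assumptions, not the published algorithm.

```python
import numpy as np

# Functional link network (FLN) sketch: expand the inputs, then fit one linear layer.
rng = np.random.default_rng(3)
R = rng.uniform(size=(300, 4))             # multi-element TLD readouts (synthetic)
D = rng.uniform(size=(300, 3))             # deep, shallow, eye dose targets (synthetic)

def functional_links(R):
    squares = R ** 2
    cross = np.stack([R[:, i] * R[:, j]
                      for i in range(R.shape[1])
                      for j in range(i + 1, R.shape[1])], axis=1)
    ones = np.ones((R.shape[0], 1))
    return np.hstack([ones, R, squares, cross])   # higher-dimensional input layer

Phi = functional_links(R)
W, *_ = np.linalg.lstsq(Phi, D, rcond=None)       # linear output layer, no hidden layers

dose_estimate = functional_links(R[:1]) @ W       # dose components for one readout
```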

  11. NNSYSID and NNCTRL Tools for system identification and control with neural networks

    DEFF Research Database (Denmark)

    Nørgaard, Magnus; Ravn, Ole; Poulsen, Niels Kjølstad

    2001-01-01

    Two toolsets for use with MATLAB have been developed: the neural network based system identification toolbox (NNSYSID) and the neural network based control system design toolkit (NNCTRL). The NNSYSID toolbox has been designed to assist identification of nonlinear dynamic systems. It contains a number of nonlinear model structures based on neural networks, effective training algorithms and tools for model validation and model structure selection. The NNCTRL toolkit is an add-on to NNSYSID and provides tools for design and simulation of control systems based on neural networks. The user can choose among several designs such as direct inverse control, internal model control, nonlinear feedforward, feedback linearisation, optimal control, gain scheduling based on instantaneous linearisation of neural network models and nonlinear model predictive control. This article gives an overview of the two toolsets.

  12. NNSYSID and NNCTRL Tools for system identification and control with neural networks

    DEFF Research Database (Denmark)

    Nørgaard, Magnus; Ravn, Ole; Poulsen, Niels Kjølstad

    2001-01-01

    Two toolsets for use with MATLAB have been developed: the neural network based system identification toolbox (NNSYSID) and the neural network based control system design toolkit (NNCTRL). The NNSYSID toolbox has been designed to assist identification of nonlinear dynamic systems. It contains a number of nonlinear model structures based on neural networks, effective training algorithms and tools for model validation and model structure selection. The NNCTRL toolkit is an add-on to NNSYSID and provides tools for design and simulation of control systems based on neural networks. The user can choose among several designs such as direct inverse control, internal model control, nonlinear feedforward, feedback linearisation, optimal control, gain scheduling based on instantaneous linearisation of neural network models and nonlinear model predictive control. This article gives an overview of the two toolsets.

  13. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

    Full Text Available One of the essential factors that affect the performance of artificial neural networks is the learning algorithm. The performance of multilayer feed-forward artificial neural networks in image compression using different learning algorithms is examined in this paper. Based on gradient descent, conjugate gradient and quasi-Newton techniques, three different error back-propagation algorithms have been developed for use in training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the quasi-Newton-based algorithm performs better than the other two algorithms.
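    The sketch below compares two of the training techniques named above (conjugate gradient and the quasi-Newton BFGS method) on the same tiny compression network, whose inputs are reconstructed through a narrow hidden layer. The block size, network size, and use of scipy's numerical gradients are illustrative simplifications, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

# Compare optimizers on a tiny image-compression (auto-associative) network.
rng = np.random.default_rng(4)
X = rng.uniform(size=(64, 16))        # 64 blocks of 16 "pixels" (synthetic)
n_in, n_hidden = 16, 4                # 4:1 compression through the hidden layer

def unpack(theta):
    W1 = theta[:n_in * n_hidden].reshape(n_in, n_hidden)
    W2 = theta[n_in * n_hidden:].reshape(n_hidden, n_in)
    return W1, W2

def loss(theta):
    W1, W2 = unpack(theta)
    H = np.tanh(X @ W1)               # compressed representation
    return np.mean((H @ W2 - X) ** 2) # reconstruction error

theta0 = 0.1 * rng.normal(size=2 * n_in * n_hidden)
for method in ("CG", "BFGS"):         # conjugate gradient vs. quasi-Newton
    res = minimize(loss, theta0, method=method, options={"maxiter": 200})
    print(method, res.fun)
```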

  14. Autonomous Navigation Apparatus With Neural Network for a Mobile Vehicle

    Science.gov (United States)

    Quraishi, Naveed (Inventor)

    1996-01-01

    An autonomous navigation system for a mobile vehicle arranged to move within an environment includes a plurality of sensors arranged on the vehicle and at least one neural network including an input layer coupled to the sensors, a hidden layer coupled to the input layer, and an output layer coupled to the hidden layer. The neural network produces output signals representing respective positions of the vehicle, such as the X coordinate, the Y coordinate, and the angular orientation of the vehicle. A plurality of patch locations within the environment are used to train the neural networks to produce the correct outputs in response to the distances sensed.

  15. Machine and component residual life estimation through the application of neural networks

    International Nuclear Information System (INIS)

    Herzog, M.A.; Marwala, T.; Heyns, P.S.

    2009-01-01

    This paper concerns the use of neural networks for predicting the residual life of machines and components. In addition, the advantage of using condition-monitoring data to enhance the predictive capability of these neural networks was also investigated. A number of neural network variations were trained and tested with the data of two different reliability-related datasets. The first dataset represents the renewal case where the failed unit is repaired and restored to a good-as-new condition. Data were collected in the laboratory by subjecting a series of similar test pieces to fatigue loading with a hydraulic actuator. The average prediction error of the various neural networks being compared varied from 431 to 841 s on this dataset, where test pieces had a characteristic life of 8971 s. The second dataset was collected from a group of pumps used to circulate a water and magnetite solution within a plant. The data therefore originated from a repaired system affected by reliability degradation. When optimized, the multi-layer perceptron neural networks trained with the Levenberg-Marquardt algorithm and the general regression neural network produced a sum-of-squares error within 11.1% of each other for the renewal dataset. The small number of inputs and poorly mapped input space on the second dataset meant that much larger errors were recorded on some of the test data. The potential for using neural networks for residual life prediction and the advantage of incorporating condition-based data into the model were nevertheless demonstrated for both examples.

  16. Three neural network based sensor systems for environmental monitoring

    International Nuclear Information System (INIS)

    Keller, P.E.; Kouzes, R.T.; Kangas, L.J.

    1994-05-01

    Compact, portable systems capable of quickly identifying contaminants in the field are of great importance when monitoring the environment. One of the missions of the Pacific Northwest Laboratory is to examine and develop new technologies for environmental restoration and waste management at the Hanford Site. In this paper, three prototype sensing systems are discussed. These prototypes are composed of sensing elements, a data acquisition system, a computer, and a neural network implemented in software, and are capable of automatically identifying contaminants. The first system employs an array of tin-oxide gas sensors and is used to identify chemical vapors. The second system employs an array of optical sensors and is used to identify the composition of chemical dyes in liquids. The third system contains a portable gamma-ray spectrometer and is used to identify radioactive isotopes. In these systems, the neural network is used to identify the composition of the sensed contaminant. With a neural network, the intense computation takes place during the training process. Once the network is trained, operation consists of propagating the data through the network. Since the computation involved during operation consists of vector-matrix multiplication and application of look-up tables, unknown samples can be rapidly identified in the field.

  17. Nonlinear identification of process dynamics using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.F.; Chong, K.T.

    1992-01-01

    In this paper the nonlinear identification of process dynamics encountered in nuclear power plant components is addressed, in an input-output sense, using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the model structure to be identified. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard backpropagation learning algorithm is modified, and it is used for the supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The response of a representative steam generator is predicted using a neural network, and it is compared to the response obtained from a sophisticated computer model based on first principles. The transient responses compare well, although further research is warranted to determine the predictive capabilities of these networks during more severe operational transients and accident scenarios.

  18. Two-Stage Approach to Image Classification by Deep Neural Networks

    Science.gov (United States)

    Ososkov, Gennady; Goncharov, Pavel

    2018-02-01

    The paper demonstrates the advantages of deep learning networks over ordinary neural networks through their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder to first extract the most informative features of the input data for the neural networks that are then compared as classifiers. The main effort in working with deep learning networks is spent on the painstaking work of optimizing the structures of those networks and their components, such as activation functions and weights, as well as the procedures for minimizing their loss functions, in order to improve their performance and speed up their learning time. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional neural networks are also used to solve a topical problem of protein genetics, using the example of durum wheat classification. The results of our comparative study demonstrate the undoubted advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.

  19. T-wave end detection using neural networks and Support Vector Machines.

    Science.gov (United States)

    Suárez-León, Alexander Alexeis; Varon, Carolina; Willems, Rik; Van Huffel, Sabine; Vázquez-Seisdedos, Carlos Román

    2018-05-01

    In this paper we propose a new approach for detecting the end of the T-wave in the electrocardiogram (ECG) using Neural Networks and Support Vector Machines. Both, Multilayer Perceptron (MLP) neural networks and Fixed-Size Least-Squares Support Vector Machines (FS-LSSVM) were used as regression algorithms to determine the end of the T-wave. Different strategies for selecting the training set such as random selection, k-means, robust clustering and maximum quadratic (Rényi) entropy were evaluated. Individual parameters were tuned for each method during training and the results are given for the evaluation set. A comparison between MLP and FS-LSSVM approaches was performed. Finally, a fair comparison of the FS-LSSVM method with other state-of-the-art algorithms for detecting the end of the T-wave was included. The experimental results show that FS-LSSVM approaches are more suitable as regression algorithms than MLP neural networks. Despite the small training sets used, the FS-LSSVM methods outperformed the state-of-the-art techniques. FS-LSSVM can be successfully used as a T-wave end detection algorithm in ECG even with small training set sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Statistical physics of interacting neural networks

    Science.gov (United States)

    Kinzel, Wolfgang; Metzler, Richard; Kanter, Ido

    2001-12-01

    Recent results on the statistical physics of time series generation and prediction are presented. A neural network is trained on quasi-periodic and chaotic sequences, and the overlaps with the sequence generator, as well as the prediction errors, are calculated numerically. For each network there exists a sequence for which it completely fails to make predictions. Two interacting networks show a transition to perfect synchronization. A pool of interacting networks shows good coordination in the minority game, a model of competition in a closed market. Finally, as a demonstration, a perceptron predicts bit sequences produced by human beings.
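    As a toy illustration of the final point, the sketch below runs an online perceptron that predicts the next bit of a sequence from a window of previous bits. The synthetic sequence, window length, and encoding are illustrative; the study used sequences produced by human beings.

```python
import numpy as np

# Online perceptron predicting the next bit from the previous `window` bits.
rng = np.random.default_rng(5)
bits = (rng.uniform(size=500) < 0.5).astype(int)
bits[::3] = 1                                      # inject some regularity to exploit
window = 5

w = np.zeros(window)
correct = 0
for t in range(window, len(bits)):
    x = 2 * bits[t - window:t] - 1                 # encode bits as +/-1 inputs
    pred = 1 if w @ x >= 0 else 0
    correct += int(pred == bits[t])
    if pred != bits[t]:                            # perceptron rule: update only on mistakes
        w += (2 * bits[t] - 1) * x

print("online accuracy:", correct / (len(bits) - window))
```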

  1. Finding strong lenses in CFHTLS using convolutional neural networks

    Science.gov (United States)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg2 of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  2. Bringing Interpretability and Visualization with Artificial Neural Networks

    Science.gov (United States)

    Gritsenko, Andrey

    2017-01-01

    Extreme Learning Machine (ELM) is a training algorithm for the Single-Layer Feed-forward Neural Network (SLFN). What distinguishes ELM in theory from other training algorithms is the existence of an explicitly given solution, owing to the immutability of the initialized weights. In practice, ELMs achieve performance similar to that of other state-of-the-art…

  3. Information Extraction with Character-level Neural Networks and Free Noisy Supervision

    OpenAIRE

    Meerkamp, Philipp; Zhou, Zhengyi

    2016-01-01

    We present an architecture for information extraction from text that augments an existing parser with a character-level neural network. The network is trained using a measure of consistency of extracted data with existing databases as a form of noisy supervision. Our architecture combines the ability of constraint-based information extraction systems to easily incorporate domain knowledge and constraints with the ability of deep neural networks to leverage large amounts of data to learn compl...

  4. Using domain-specific basic functions for the analysis of supervised artificial neural networks

    NARCIS (Netherlands)

    van der Zwaag, B.J.

    2003-01-01

    Since the early development of artificial neural networks, researchers have tried to analyze trained neural networks in order to gain insight into their behavior. For certain applications and in certain problem domains this has been successful, for example by the development of so-called rule

  5. Image Finder Mobile Application Based on Neural Networks

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2017-04-01

    Nowadays taking photos via mobile phone has become a very important part of everyone's life. Almost every person who has a smartphone also has thousands of photos on their mobile device. At times it becomes very difficult and time-consuming to find a particular photo among thousands of photos. This research was done to come up with an innovative solution to this problem. The solution allows the user to find the required photo by simply drawing a sketch of the objects in the required picture, for example a tree or a car. Two types of supervised artificial neural networks are used for this purpose; one is trained to identify the hand-drawn sketches and the other is trained to identify the images. The proposed approach introduces a mechanism to relate the sketches with the images by matching them after training. The experimental results for testing the trained neural networks reached 100% accuracy for the sketches and 84% for the images of two objects used as a case study.

  6. Prediction of composite fatigue life under variable amplitude loading using artificial neural network trained by genetic algorithm

    Science.gov (United States)

    Rohman, Muhamad Nur; Hidayat, Mas Irfan P.; Purniawan, Agung

    2018-04-01

    Neural networks (NN) have been widely used for fatigue life prediction. For polymeric-based composites, an NN model must be developed that copes with the limited fatigue data available and that can predict fatigue life under varying stress amplitudes at different stress ratios. In the present paper, a Multilayer Perceptron (MLP) neural network model is developed, and a Genetic Algorithm is employed to optimize the respective weights of the NN for predicting the fatigue life of polymeric-based composite materials under variable amplitude loading. Simulations were carried out on two different composite systems, E-glass fabrics/epoxy (layup [(±45)/(0)2]S) and E-glass/polyester (layup [90/0/±45/0]S). NN models trained with fatigue data from only two stress ratios, representing limited fatigue data, were able to predict fatigue life for another four and seven stress ratios, respectively, with high accuracy. The accuracy of the NN predictions was quantified by the small value of the mean square error (MSE). When 33% of the total fatigue data was used for training, the NN model produced high accuracy for all stress ratios. When less fatigue data was used for training (22% of the total), the NN model was still able to produce a high coefficient of determination between the predicted results and those obtained by experiment.
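
    A hedged sketch of the idea of training an MLP with a genetic algorithm: each individual is a flattened weight vector of a small fixed 3-H-1 network, and the population is evolved to minimize training mean square error. The data, architecture and GA settings here are illustrative placeholders, not the paper's.

    ```python
    # Genetic-algorithm search over the weight vector of a tiny fixed-architecture MLP.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                 # stand-in for (stress ratio, amplitude, ...) inputs
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]           # stand-in for log fatigue life

    H = 8                                          # hidden units
    n_w = 3 * H + H + H + 1                        # weights + biases of a 3-H-1 MLP

    def mlp(w, X):
        W1 = w[:3 * H].reshape(3, H); b1 = w[3 * H:3 * H + H]
        W2 = w[3 * H + H:3 * H + 2 * H]; b2 = w[-1]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def mse(w):
        return np.mean((mlp(w, X) - y) ** 2)

    pop = rng.normal(size=(60, n_w))
    for gen in range(200):
        fitness = np.array([mse(w) for w in pop])
        parents = pop[np.argsort(fitness)[:20]]                          # truncation selection
        children = []
        for _ in range(40):
            a, b = parents[rng.integers(20, size=2)]
            mask = rng.random(n_w) < 0.5                                 # uniform crossover
            children.append(np.where(mask, a, b) + 0.05 * rng.normal(size=n_w))  # mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmin([mse(w) for w in pop])]
    print("train MSE:", mse(best))
    ```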

  7. Artificial neural network application for predicting soil distribution coefficient of nickel

    International Nuclear Information System (INIS)

    Falamaki, Amin

    2013-01-01

    The distribution (or partition) coefficient (Kd) is an applicable parameter for modeling contaminant and radionuclide transport as well as risk analysis. Selection of this parameter may cause significant error in predicting the impacts of contaminant migration or site-remediation options. In this regard, various models have been presented to predict Kd values for different contaminants, especially heavy metals and radionuclides. In this study, an artificial neural network (ANN) is used to present a simplified model for predicting the Kd of nickel. The main objective is to develop a more accurate model with a minimal number of parameters, which can be determined experimentally or selected from a review of different studies. In addition, the effects of training as well as the type of the network are considered. The Kd values of Ni are strongly dependent on soil pH, and mathematical relationships between pH and the Kd of nickel have been presented recently. In this study, the same database as these published models was used to verify that a neural network may be a more useful tool for predicting Kd. Two different types of ANN, multilayer perceptron and radial basis function, were used to investigate the effect of the network geometry on the results. In addition, each network was trained on 80% and 90% of the data and tested on the remaining 20% and 10%, respectively. The results of the networks were then compared with the results of the mathematical models. Although the networks were trained on only 80% and 90% of the data, the results show that all the networks predict with higher accuracy than the mathematical models, which were derived from 100% of the data. More training of a network increases its accuracy. The multilayer perceptron network used in this study predicts better than the radial basis function network. - Highlights: ► Simplified models for predicting the Kd of nickel are presented using artificial neural networks. ► Multilayer perceptron and radial basis function networks were used to predict the Kd of nickel in

  8. Top tagging with deep neural networks [Vidyo

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image-based techniques or multivariate approaches using high-level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high-level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  9. Moving image compression and generalization capability of constructive neural networks

    Science.gov (United States)

    Ma, Liying; Khorasani, Khashayar

    2001-03-01

    To date numerous techniques have been proposed to compress digital images to ease their storage and transmission over communication channels. Recently, a number of image compression algorithms using neural networks (NNs) have been developed. In particular, several constructive feed-forward neural networks (FNNs) have been proposed by researchers for image compression, and promising results have been reported. At the previous SPIE AeroSense conference (2000), we proposed to use a constructive One-Hidden-Layer Feedforward Neural Network (OHL-FNN) for compressing digital images. In this paper, we first investigate the generalization capability of the proposed OHL-FNN in the presence of additive noise during network training and/or generalization. Extensive experimental results for different scenarios are presented. It is revealed that the constructive OHL-FNN is not as robust to additive noise in the input image as expected. Next, the constructive OHL-FNN is applied to moving images (video sequences). The first, or another specified, frame in a moving image sequence is used to train the network. The remaining moving images that follow are then generalized/compressed by this trained network. Three types of correlation-like criteria measuring the similarity of any two images are introduced. The relationship between the generalization capability of the constructed net and the similarity of images is investigated in some detail. It is shown that the constructive OHL-FNN is promising even for changing images such as those extracted from a football game.

  10. Static Voltage Stability Analysis by Using SVM and Neural Network

    Directory of Open Access Journals (Sweden)

    Mehdi Hajian

    2013-01-01

    Voltage stability is an important problem in power system networks. In this paper, static voltage stability is studied, and the application of Neural Networks (NN) and Support Vector Machines (SVM) for estimating the voltage stability margin (VSM) and predicting voltage collapse is investigated. This paper considers voltage stability in power systems in two parts. The first part calculates the static voltage stability margin with a Radial Basis Function Neural Network (RBFNN). The advantage of this method is its high accuracy in online detection of the VSM. In the second part, voltage collapse analysis of the power system is performed by a Probabilistic Neural Network (PNN) and SVM. The results obtained in this paper indicate that the training time and the number of training samples of the SVM are less than those of the NN. In this paper, a new model of training samples for the detection system, using the normal distribution load curve at each load feeder, has been used. Voltage stability is assessed by the well-known L and VSM indices. To demonstrate the validity of the proposed methods, the IEEE 14-bus grid and the actual network of Yazd Province are used.

  11. An artificial neural network approach to reconstruct the source term of a nuclear accident

    International Nuclear Information System (INIS)

    Giles, J.; Palma, C. R.; Weller, P.

    1997-01-01

    This work makes use of one of the main features of artificial neural networks, which is their ability to 'learn' from sets of known input and output data. Indeed, a trained artificial neural network can be used to make predictions on the input data when the output is known, and this feedback process enables one to reconstruct the source term from field observations. With this aim, artificial neural networks have been trained using the projections of a segmented-plume atmospheric dispersion model at fixed points, simulating a set of gamma detectors located outside the perimeter of a nuclear facility. The resulting set of artificial neural networks was used to determine the release fraction and rate for each of the noble gases, iodines and particulate fission products that could originate from a nuclear accident. Model projections were made using a large data set consisting of effective release height, release fraction of noble gases, iodines and particulate fission products, atmospheric stability, wind speed and wind direction. The model computed nuclide-specific gamma dose rates. The locations of the detectors were chosen taking into account both building shine and wake effects, and varied in distance between 800 and 1200 m from the reactor. The inputs to the artificial neural networks consisted of the measurements from the detector array, atmospheric stability, wind speed and wind direction; the outputs comprised a set of release fractions and heights. Once trained, the artificial neural networks were used to reconstruct the source term from the detector responses for data sets not used in training. The preliminary results are encouraging and show that the noble gas and particulate fission product release fractions are well determined.

  12. Modified-hybrid optical neural network filter for multiple object recognition within cluttered scenes

    Science.gov (United States)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.

    2009-08-01

    Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized for performing best in cluttered scenes of the true-class object. Due to the shift-invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, with a single pass over the input data, simultaneous out-of-plane rotation invariance, shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.

  13. A neural network for locating the primary vertex in a pixel detector

    International Nuclear Information System (INIS)

    Kantowski, R.; Marzban, C.

    1995-01-01

    Using simulated collider data for p+p→2Jets interactions in a two-barrel pixel detector, a neural network is trained to construct the coordinate of the primary vertex to a high degree of accuracy. Three other estimates of this coordinate are also considered and compared to that of the neural network. It is shown that the network can match the best of the traditional estimates. ((orig.))

  14. Aeroelasticity of morphing wings using neural networks

    Science.gov (United States)

    Natarajan, Anand

    In this dissertation, neural networks are designed to effectively model static non-linear aeroelastic problems in adaptive structures and linear dynamic aeroelastic systems with time varying stiffness. The use of adaptive materials in aircraft wings allows for the change of the contour or the configuration of a wing (morphing) in flight. The use of smart materials, to accomplish these deformations, can imply that the stiffness of the wing with a morphing contour changes as the contour changes. For a rapidly oscillating body in a fluid field, continuously adapting structural parameters may render the wing to behave as a time variant system. Even the internal spars/ribs of the aircraft wing which define the wing stiffness can be made adaptive, that is, their stiffness can be made to vary with time. The immediate effect on the structural dynamics of the wing, is that, the wing motion is governed by a differential equation with time varying coefficients. The study of this concept of a time varying torsional stiffness, made possible by the use of active materials and adaptive spars, in the dynamic aeroelastic behavior of an adaptable airfoil is performed here. Another type of aeroelastic problem of an adaptive structure that is investigated here, is the shape control of an adaptive bump situated on the leading edge of an airfoil. Such a bump is useful in achieving flow separation control for lateral directional maneuverability of the aircraft. Since actuators are being used to create this bump on the wing surface, the energy required to do so needs to be minimized. The adverse pressure drag as a result of this bump needs to be controlled so that the loss in lift over the wing is made minimal. The design of such a "spoiler bump" on the surface of the airfoil is an optimization problem of maximizing pressure drag due to flow separation while minimizing the loss in lift and energy required to deform the bump. One neural network is trained using the CFD code FLUENT to

  15. Control of 12-Cylinder Camless Engine with Neural Networks

    OpenAIRE

    Ashhab Moh’d Sami

    2017-01-01

    The 12-cylinder camless engine breathing process is modeled with artificial neural networks (ANNs). The inputs to the net are the intake valve lift (IVL) and intake valve closing timing (IVC), whereas the output of the net is the cylinder air charge (CAC). The ANN is trained with data collected from an engine simulation model which is based on thermodynamic principles and calibrated against real engine data. A method for adapting single-output feed-forward neural networks is proposed and appl...

  16. Impact parameter determination in experimental analysis using neural network

    International Nuclear Information System (INIS)

    Haddad, F.; David, C.; Freslier, M.; Aichelin, J.; Haddad, F.; Hagel, K.; Li, J.; Mdeiwayeh, N.; Natowitz, J.B.; Wada, R.; Xiao, B.

    1997-01-01

    A neural network is used to determine the impact parameter in 40Ca + 40Ca reactions. The effect of the detection efficiency as well as the model dependence of the training procedure have been studied carefully. An overall improvement of the impact parameter determination of 25% is obtained using this technique. The analysis of Amphora 40Ca + 40Ca data at 35 MeV per nucleon using a neural network shows two well separated classes of events among the selected 'complete' events. (authors)

  17. Parameter estimation of breast tumour using dynamic neural network from thermal pattern

    Directory of Open Access Journals (Sweden)

    Elham Saniei

    2016-11-01

    This article presents a new approach for estimating the depth, size, and metabolic heat generation rate of a tumour. For this purpose, the surface temperature distribution of a breast thermal image and a dynamic neural network were used. The research consisted of two steps: forward and inverse. For the forward section, a finite element model was created. The Pennes bio-heat equation was solved to find the surface and depth temperature distributions. Data from the analysis were then used to train the dynamic neural network model (DNN). Results from the DNN training/testing confirmed those of the finite element model. For the inverse section, the trained neural network was applied to estimate the depth temperature distribution (tumour position) from the surface temperature profile extracted from the thermal image. Finally, the tumour parameters were obtained from the depth temperature distribution. Experimental findings (20 patients) were promising in terms of the model's potential for retrieving tumour parameters.

  18. Predicting wettability behavior of fluorosilica coated metal surface using optimum neural network

    Science.gov (United States)

    Taghipour-Gorjikolaie, Mehran; Valipour Motlagh, Naser

    2018-02-01

    The interaction between the variables that affect surface wettability is very complex, which makes predicting the contact angles and sliding angles of liquid drops difficult. In this paper, to address this complexity, artificial neural networks were used to develop reliable models for predicting the angles of liquid drops. Experimental data were divided into training data and testing data. Using the training data, a feed-forward structure for the neural network, and particle swarm optimization for training the neural-network-based models, the optimum models were developed. The obtained results show that the regression indices of the proposed models for the contact angles and sliding angles are 0.9874 and 0.9920, respectively. These values are close to unity, indicating the reliable performance of the models. The results also show that the proposed models perform more reliably than multi-layer perceptron and radial basis function based models.
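
    The sketch below illustrates particle swarm optimization of the weights of a small feed-forward network, as described above; the network size, PSO coefficients and coating data are assumptions.

    ```python
    # PSO over the weight vector of a small 2-H-1 network, minimizing training MSE.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(80, 2))                   # e.g. normalized coating-process variables
    y = 100 + 60 * X[:, 0] - 20 * X[:, 1]           # stand-in for measured contact angle

    H = 6
    dim = 2 * H + H + H + 1                         # weights and biases of a 2-H-1 network

    def predict(w, X):
        W1 = w[:2 * H].reshape(2, H); b1 = w[2 * H:3 * H]
        W2 = w[3 * H:4 * H]; b2 = w[-1]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def cost(w):
        return np.mean((predict(w, X) - y) ** 2)

    n_particles = 30
    pos = rng.normal(size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy(); pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()

    for it in range(300):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved] = pos[improved]; pbest_cost[improved] = costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()

    print("best training MSE:", cost(gbest))
    ```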

  19. Wavelet neural network load frequency controller

    International Nuclear Information System (INIS)

    Hemeida, Ashraf Mohamed

    2005-01-01

    This paper presents the feasibility of applying a wavelet neural network (WNN) approach to the load frequency controller (LFC) to damp the frequency oscillations of two-area power systems due to load disturbances. The present intelligent control system trains the wavelet neural network (WNN) controller online with adaptive learning rates, which are derived in the sense of a discrete-type Lyapunov stability theorem. The WNN controller is designed individually for each area. The proposed technique is applied successfully over a wide range of operating conditions. The time simulation results indicate its superiority and effectiveness over the conventional approach. The effects of considering the governor dead zone on the system performance are studied using the proposed controller and the conventional one.

  20. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    Science.gov (United States)

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

    The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant. Convolutional neural networks may therefore be used to construct effective classifiers for abdominal ultrasound images.
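
    A minimal sketch of the frozen-feature-extractor recipe, using torchvision's VGG16 as a stand-in; here only the final fully connected layer is replaced and retrained for 11 classes, and the ultrasound data pipeline is omitted.

    ```python
    # Freeze the convolutional layers of a pretrained VGG16 and retrain a new 11-class head.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.vgg16(weights="IMAGENET1K_V1")   # weights pretrained on ImageNet
    for p in model.features.parameters():           # freeze all convolutional layers
        p.requires_grad = False
    model.classifier[6] = nn.Linear(4096, 11)       # new final layer for 11 ultrasound categories

    # Only the new head's parameters are passed to the optimizer.
    optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3, momentum=0.9)
    ```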

  1. A Telescopic Binary Learning Machine for Training Neural Networks.

    Science.gov (United States)

    Brunato, Mauro; Battiti, Roberto

    2017-03-01

    This paper proposes a new algorithm based on multiscale stochastic local search with binary representation for training neural networks [binary learning machine (BLM)]. We study the effects of neighborhood evaluation strategies, the effect of the number of bits per weight and that of the maximum weight range used for mapping binary strings to real values. Following this preliminary investigation, we propose a telescopic multiscale version of local search, where the number of bits is increased in an adaptive manner, leading to a faster search and to local minima of better quality. An analysis related to adapting the number of bits in a dynamic way is presented. The control on the number of bits, which happens in a natural manner in the proposed method, is effective to increase the generalization performance. The learning dynamics are discussed and validated on a highly nonlinear artificial problem and on real-world tasks in many application domains; BLM is finally applied to a problem requiring either feedforward or recurrent architectures for feedback control.

  2. Neural network evaluation of tokamak current profiles for real time control (abstract)

    Science.gov (United States)

    Wróblewski, Dariusz

    1997-01-01

    Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated.

  3. Neural network evaluation of tokamak current profiles for real time control (abstract)

    International Nuclear Information System (INIS)

    Wroblewski, D.

    1997-01-01

    Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated. copyright 1997 American Institute of Physics

  4. Using modular neural networks to monitor accident conditions in nuclear power plants

    International Nuclear Information System (INIS)

    Guo, Z.

    1992-01-01

    Nuclear power plants are very complex systems. The diagnosis of transients or accident conditions is very difficult because a large amount of information, which is often noisy, intermittent, or even incomplete, needs to be processed in real time. To demonstrate their potential application to nuclear power plants, neural networks are used to monitor the accident scenarios simulated by the training simulator of TVA's Watts Bar Nuclear Power Plant. A self-organizing network is used to compress the original data to reduce the total number of training patterns. Different accident scenarios are closely related to different key parameters which distinguish one accident scenario from another. Therefore, the accident scenarios can be monitored by a set of small neural networks, called modular networks, each of which monitors only one assigned accident scenario, to obtain fast training and recall. Sensitivity analysis is applied to select proper input variables for the modular networks.
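
    An illustrative sketch (not the paper's code) of the modular idea: one small binary network per accident scenario, each trained to separate its scenario from normal operation; the scenario names and data are hypothetical placeholders.

    ```python
    # One small binary classifier per accident scenario, queried in parallel at monitoring time.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    scenarios = ["LOCA", "SGTR", "MSLB"]              # hypothetical scenario names
    X_normal = rng.normal(0.0, 1.0, size=(300, 12))   # 12 selected plant signals, normal operation

    modules = {}
    for i, name in enumerate(scenarios):
        X_fault = rng.normal(2.0 + i, 1.0, size=(300, 12))   # toy stand-in for simulator data
        X = np.vstack([X_normal, X_fault])
        y = np.r_[np.zeros(300), np.ones(300)]
        modules[name] = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000).fit(X, y)

    def monitor(x):
        """Return the scenarios whose dedicated module fires on the current plant state."""
        return [name for name, net in modules.items() if net.predict(x.reshape(1, -1))[0] == 1]

    print(monitor(rng.normal(2.0, 1.0, size=12)))
    ```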

  5. Study on pattern recognition of Raman spectrum based on fuzzy neural network

    Science.gov (United States)

    Zheng, Xiangxiang; Lv, Xiaoyi; Mo, Jiaqing

    2017-10-01

    Hydatid disease is a serious parasitic disease in many regions worldwide, especially in Xinjiang, China. Raman spectra of the serum of patients with echinococcosis were selected as the research object in this paper. The Raman spectra of blood samples from healthy people and from patients with echinococcosis were measured, and their spectral characteristics were analyzed. A fuzzy neural network not only has the ability of fuzzy logic to deal with uncertain information, but also has the knowledge-storage ability of a neural network, so it is combined with Raman spectroscopy for the disease-diagnosis problem. Firstly, principal component analysis (PCA) is used to extract the principal components of the Raman spectrum, reducing the network input and accelerating the prediction speed and accuracy of the network while retaining the information in the original data. Then, the extracted principal components are used as the inputs of the neural network; the hidden layer of the network performs rule generation and inference, and the output layer of the network gives the fuzzy classification output. Finally, a subset of the samples is randomly selected to train the network, the trained network is used to predict the remaining samples, and the predicted results are compared with those of a general BP neural network to illustrate the feasibility and advantages of the fuzzy neural network. Success in this endeavor would be helpful for research on the spectroscopic diagnosis of disease, and the approach can be applied in practice in many other fields of spectral analysis.
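
    A minimal sketch of the PCA-then-classify pipeline described above; scikit-learn's MLPClassifier stands in for the fuzzy neural network, and the spectra and labels are synthetic placeholders.

    ```python
    # PCA feature reduction followed by a neural classifier on serum Raman spectra (toy data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(120, 1024))        # 120 serum spectra, 1024 wavenumber bins
    labels = rng.integers(0, 2, size=120)         # 0 = healthy, 1 = echinococcosis (toy labels)

    X_train, X_test, y_train, y_test = train_test_split(spectra, labels, test_size=0.3, random_state=0)
    clf = make_pipeline(PCA(n_components=10), MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))
    ```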

  6. Wavelet neural networks with applications in financial engineering, chaos, and classification

    CERN Document Server

    Alexandridis, Antonios K

    2014-01-01

    Through extensive examples and case studies, Wavelet Neural Networks provides a step-by-step introduction to modeling, training, and forecasting using wavelet networks. The acclaimed authors present a statistical model identification framework to successfully apply wavelet networks in various applications, specifically, providing the mathematical and statistical framework needed for model selection, variable selection, wavelet network construction, initialization, training, forecasting and prediction, confidence intervals, prediction intervals, and model adequacy testing. The text is ideal for

  7. A novel approach for voltage secure operation using Probabilistic Neural Network in transmission network

    Directory of Open Access Journals (Sweden)

    Santi Behera

    2016-05-01

    This work proposes a unique approach for improving the voltage stability limit using a Probabilistic Neural Network (PNN) classifier that gives the corrective controls available in the system in contingency scenarios. The sensitivity of the system is analyzed to identify weak buses, with the ENVCI evaluation approaching zero. The inputs to the classifier, termed the voltage stability enhancing neural network (VSENN) classifier, for training are the line flows and bus voltages near the notch point of the P–V curve, and the output of the VSENN is a control variable. For various contingencies, the control action that improves the voltage profile as well as the stability index is identified and trained accordingly. The trained VSENN is finally tested for its robustness in improving the load margin and the ENVCI beyond the trained set of operating conditions of the system and contingencies. The proposed approach is verified on the IEEE 39-bus test system.

  8. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Disney, Adam [University of Tennessee (UT); Reynolds, John [University of Tennessee (UT)

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  9. Action Potential Modulation of Neural Spin Networks Suggests Possible Role of Spin

    CERN Document Server

    Hu, H P

    2004-01-01

    In this paper we show that nuclear spin networks in neural membranes are modulated by action potentials through J-coupling, dipolar coupling and chemical shielding tensors, and perturbed by microscopically strong and fluctuating internal magnetic fields produced largely by paramagnetic oxygen. We suggest that these spin networks could be involved in brain functions, since said modulation inputs information carried by the neural spike trains into them, said perturbation activates various dynamics within them, and the combination of the two likely produces stochastic resonance, thus synchronizing said dynamics to the neural firings. Although quantum coherence is desirable and may indeed exist, it is not required for these spin networks to serve as the subatomic components of conventional neural networks.

  10. Training trajectories by continuous recurrent multilayer networks.

    Science.gov (United States)

    Leistritz, L; Galicki, M; Witte, H; Kochs, E

    2002-01-01

    This paper addresses the problem of training trajectories by means of continuous recurrent neural networks whose feedforward parts are multilayer perceptrons. Such networks can approximate a general nonlinear dynamic system with arbitrary accuracy. The learning process is transformed into an optimal control framework where the weights are the controls to be determined. A training algorithm based upon a variational formulation of Pontryagin's maximum principle is proposed for such networks. Computer examples demonstrating the efficiency of the given approach are also presented.

  11. Neural network approach for the calculation of potential coefficients in quantum mechanics

    Science.gov (United States)

    Ossandón, Sebastián; Reyes, Camilo; Cumsille, Patricio; Reyes, Carlos M.

    2017-05-01

    A numerical method based on artificial neural networks is used to solve the inverse Schrödinger equation for a multi-parameter class of potentials. First, the finite element method was used to repeatedly solve the direct problem for different parametrizations of the chosen potential function. Then, using the attainable eigenvalues as a training set for a direct radial basis neural network, a map to new eigenvalues was obtained. This relationship was later inverted and refined by training an inverse radial basis neural network, allowing the calculation of the unknown parameters and therefore the estimation of the potential function. Three numerical examples are presented in order to prove the effectiveness of the method. The results show that the proposed method has the advantage of using fewer computational resources without a significant loss of accuracy.

  12. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  13. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    Energy Technology Data Exchange (ETDEWEB)

    Wijayasekara, Dumidu, E-mail: wija2589@vandals.uidaho.edu [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Manic, Milos [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Sabharwall, Piyush [Idaho National Laboratory, Idaho Falls, ID (United States); Utgikar, Vivek [Department of Chemical Engineering, University of Idaho, Idaho Falls, ID 83402 (United States)

    2011-07-15

    Highlights: > Performance prediction of PCHE using artificial neural networks. > Evaluating artificial neural network performance for PCHE modeling. > Selection of over-training resilient artificial neural networks. > Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically, published literature has focused on optimizing the ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present the EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over-training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE) and standard deviation of MSE. The method uses k-fold cross validation. Therefore, in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Then each architecture is evaluated according to its generalization capability and its capability to conform to the original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the testing
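
    The selection idea can be sketched as follows, under the assumption of a generic regressor: each candidate architecture is scored by k-fold cross-validation mean square error and its standard deviation rather than by training error alone (synthetic data, scikit-learn MLP as a stand-in).

    ```python
    # Score candidate architectures by k-fold cross-validation MSE (mean and std), not training error.
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 4))                 # small PCHE-like dataset (synthetic)
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=150)

    candidates = [(4,), (8,), (8, 4), (16, 8)]    # hidden-layer layouts to compare
    for arch in candidates:
        errors = []
        for tr, va in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            net = MLPRegressor(hidden_layer_sizes=arch, max_iter=3000).fit(X[tr], y[tr])
            errors.append(np.mean((net.predict(X[va]) - y[va]) ** 2))
        print(arch, "mean MSE %.4f  std %.4f" % (np.mean(errors), np.std(errors)))
    ```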

  14. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    International Nuclear Information System (INIS)

    Wijayasekara, Dumidu; Manic, Milos; Sabharwall, Piyush; Utgikar, Vivek

    2011-01-01

    Highlights: → Performance prediction of PCHE using artificial neural networks. → Evaluating artificial neural network performance for PCHE modeling. → Selection of over-training resilient artificial neural networks. → Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically, published literature has focused on optimizing the ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present the EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over-training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE) and standard deviation of MSE. The method uses k-fold cross validation. Therefore, in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Then each architecture is evaluated according to its generalization capability and its capability to conform to the original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the

  15. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more and more about solving practical issues with information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest in AI is recognition algorithms. In this paper, one of the most common algorithms for image recognition, the Convolutional Neural Network (CNN), is introduced. Understanding its theory and structure is of great significance for every scholar who is interested in this field. A Convolutional Neural Network is an artificial neural network which combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN provides reliable computation speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, by combining the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to learn and train themselves. Basically, BP provides backward feedback of the error for enhancing reliability, and GD is used for the self-training process. This paper mainly discusses CNNs and the related BP and GD algorithms, including the basic structure and function of CNNs, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary at the end.
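
    As a compact illustration of the BP and GD mechanisms discussed above, the sketch below trains a one-hidden-layer network with plain NumPy and squared-error loss; a real CNN adds convolution and pooling layers on top of the same update rule.

    ```python
    # Backpropagation + gradient descent on a one-hidden-layer network (plain NumPy).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 2))
    y = X[:, :1] ** 2 + X[:, 1:]                   # target to regress, shape (256, 1)

    W1, b1 = rng.normal(size=(2, 16)) * 0.5, np.zeros(16)
    W2, b2 = rng.normal(size=(16, 1)) * 0.5, np.zeros(1)
    lr = 0.05

    for step in range(2000):
        h = np.tanh(X @ W1 + b1)                   # forward pass
        pred = h @ W2 + b2
        err = pred - y                             # derivative of the squared-error loss w.r.t. pred
        # backward pass: propagate the error through each layer
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)           # tanh derivative
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        # gradient-descent update
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    print("final MSE:", float(np.mean(err ** 2)))
    ```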

  16. The Multi-Layered Perceptrons Neural Networks for the Prediction of Daily Solar Radiation

    OpenAIRE

    Radouane Iqdour; Abdelouhab Zeroual

    2007-01-01

    The Multi-Layered Perceptron (MLP) neural networks have been very successful in a number of signal processing applications. In this work we study the possibilities and the difficulties encountered in applying MLP neural networks to the prediction of daily solar radiation data. We have used the Polak-Ribière algorithm for training the neural networks. A comparison, in terms of statistical indicators, with a linear model most used in the literature, is also perfo...

  17. Predictive Control of Hydronic Floor Heating Systems using Neural Networks and Genetic Algorithms

    DEFF Research Database (Denmark)

    Vinther, Kasper; Green, Torben; Østergaard, Søren

    2017-01-01

    This paper presents the use of a neural network and a micro genetic algorithm to optimize future set-points in existing hydronic floor heating systems for improved energy efficiency. The neural network can be trained to predict the impact of changes in set-points on future room temperatures. Additio...... space is not guaranteed. Evaluation of the performance of multiple neural networks is performed, using different levels of information, and optimization results are presented on a detailed house simulation model.

  18. Gas Turbine Engine Control Design Using Fuzzy Logic and Neural Networks

    Directory of Open Access Journals (Sweden)

    M. Bazazzadeh

    2011-01-01

    This paper presents a successful approach to designing a Fuzzy Logic Controller (FLC) for a specific jet engine. First, a suitable mathematical model for the jet engine is presented with the aid of SIMULINK. Then, by applying different reasonable fuel flow functions to the engine model, some important engine-transient operation parameters (such as thrust, compressor surge margin, turbine inlet temperature, etc.) are obtained. These parameters provide a valuable database, which is used to train a neural network. In the second step, by designing and training a feedforward multilayer perceptron neural network on this available database, a number of different reasonable fuel flow functions for various engine acceleration operations are determined. These functions are used to define the desired fuzzy fuel functions. Indeed, the neural networks are used as an effective method to define the optimum fuzzy fuel functions. In the next step, we propose an FLC using the engine simulation model and the neural network results. The proposed control scheme is validated by computer simulation using the designed engine model. The simulation results of the engine model with the FLC illustrate that the proposed controller achieves the desired performance and stability.

  19. HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION

    Science.gov (United States)

    Spirkovska, L.

    1994-01-01

    Neural networks have been applied in numerous fields, including transformation invariant object recognition, wherein an object is recognized despite changes in the object's position in the input field, size, or rotation. One of the more successful neural network methods used in invariant object recognition is the higher-order neural network (HONN) method. With a HONN, known relationships are exploited and the desired invariances are built directly into the architecture of the network, eliminating the need for the network to learn invariance to transformations. This results in a significant reduction in the training time required, since the network needs to be trained on only one view of each object, not on numerous transformed views. Moreover, one hundred percent accuracy is guaranteed for images characterized by the built-in distortions, provided noise is not introduced through pixelation. The program HONTIOR implements a third-order neural network having invariance to translation, scale, and in-plane rotation built directly into the architecture. Thus, for 2-D transformation invariance, the network needs to be trained on only one view of each object. HONTIOR can also be used for 3-D transformation invariant object recognition by training the network only on a set of out-of-plane rotated views. Historically, the major drawback of HONNs has been that the size of the input field was limited by the memory required for the large number of interconnections in a fully connected network. HONTIOR solves this problem by coarse coding the input images (coding an image as a set of overlapping but offset coarser images). Using this scheme, large input fields (4096 x 4096 pixels) can easily be represented using very little virtual memory (30 MB). The HONTIOR distribution consists of three main programs. The first program contains the training and testing routines for a third-order neural network. The second program contains the same training and testing procedures as the

  20. PERFORMANCE COMPARISON FOR INTRUSION DETECTION SYSTEM USING NEURAL NETWORK WITH KDD DATASET

    Directory of Open Access Journals (Sweden)

    S. Devaraju

    2014-04-01

    Intrusion detection systems face the challenging task of determining whether a user is a normal user or an attacker in an organization's information systems or in the IT industry. An intrusion detection system is an effective method to deal with this kind of problem in networks. Different classifiers are used to detect the different kinds of attacks in networks. In this paper, the performance of intrusion detection is compared across various neural network classifiers. In the proposed research, the four types of classifiers used are the Feed Forward Neural Network (FFNN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN) and Radial Basis Neural Network (RBNN). The performance on the full-featured KDD Cup 1999 dataset is compared with that on the reduced-featured KDD Cup 1999 dataset. The MATLAB software is used to train and test the dataset, and the efficiency and false alarm rate are measured. It is shown that the reduced dataset performs better than the full-featured dataset.

  1. SU-F-E-09: Respiratory Signal Prediction Based On Multi-Layer Perceptron Neural Network Using Adjustable Training Samples

    Energy Technology Data Exchange (ETDEWEB)

    Sun, W; Jiang, M; Yin, F [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

    Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motion prior to delivery. The displacement of a moving organ may change considerably because the respiratory pattern varies between different periods. This study aims to reduce the influence of those changes using adjustable training samples and a multi-layer perceptron neural network (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer the respiratory position alternately, and the training samples are updated over time. Firstly, a Savitzky-Golay finite impulse response smoothing filter was applied to smooth the respiratory signal. Secondly, two identical MLPs were developed to estimate the respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through the backward propagation method. Finally, MLP 1 was used to predict the 120-150 s respiratory positions using the 0-120 s training signals. At the same time, MLP 2 was trained using the 30-150 s training signals. MLP 2 was then used to predict the 150-180 s positions from the 30-150 s training signals. The respiratory position was predicted in this way until the end of the signal. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For a response time of 1 s ahead, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). In addition, a 30% improvement in mean absolute error between MLP (0.1798 on average) and ASMLP (0.1267 on average) was achieved. For a response time of 2 s ahead, the correlation coefficient was improved from 0.61415 to 0.7098. The mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average). Conclusion: The preliminary results
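
    A hedged sketch of the pipeline described above: Savitzky-Golay smoothing followed by an MLP that predicts the respiratory position a fixed horizon ahead from a window of past positions; the sampling rate, window sizes and signal are assumptions.

    ```python
    # Smooth a respiratory trace, then predict it a fixed horizon ahead from a sliding window.
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.neural_network import MLPRegressor

    t = np.arange(0, 150, 0.1)                                   # 150 s sampled at 10 Hz (assumed rate)
    signal = np.sin(2 * np.pi * t / 4.0) + 0.05 * np.random.default_rng(0).normal(size=t.size)
    smooth = savgol_filter(signal, window_length=21, polyorder=3)

    def windows(x, width=30, horizon=10):
        """Past `width` samples -> position `horizon` samples after the window (1 s at 10 Hz)."""
        X = np.array([x[i:i + width] for i in range(len(x) - width - horizon)])
        y = x[width + horizon:]
        return X, y

    X, y = windows(smooth)
    split = int(0.8 * len(X))                                    # earlier data trains, later data tests
    mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X[:split], y[:split])
    pred = mlp.predict(X[split:])
    print("correlation:", np.corrcoef(pred, y[split:])[0, 1])
    ```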

  2. Identifying Broadband Rotational Spectra with Neural Networks

    Science.gov (United States)

    Zaleski, Daniel P.; Prozument, Kirill

    2017-06-01

    A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al. "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer." Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al. "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal." J. Mol. Spectrosc., 2015, 312, 13-21. Bishop. "Neural networks for pattern recognition." Oxford university press, 1995.

  3. Predicting local field potentials with recurrent neural networks.

    Science.gov (United States)

    Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter

    2016-08-01

    We present a Recurrent Neural Network using LSTM (Long Short-Term Memory) that is capable of modeling and predicting Local Field Potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs for 1, 10, and 100 milliseconds forward in time. Our results show that prediction using LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.
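
    A minimal sketch of an LSTM predictor of this kind in PyTorch; the channel count, sequence length and single-horizon output are assumptions, not the authors' configuration.

    ```python
    # LSTM that maps a window of past multi-channel LFP samples to the value at a chosen horizon.
    import torch
    import torch.nn as nn

    class LFPPredictor(nn.Module):
        def __init__(self, n_channels=8, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_channels)

        def forward(self, x):              # x: (batch, time, channels) of past LFP samples
            out, _ = self.lstm(x)
            return self.head(out[:, -1])   # predicted value per channel at the chosen horizon

    model = LFPPredictor()
    past = torch.randn(4, 200, 8)          # 4 segments, 200 time steps, 8 electrodes
    print(model(past).shape)               # torch.Size([4, 8])
    ```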

  4. On-line plant-wide monitoring using neural networks

    International Nuclear Information System (INIS)

    Turkcan, E.; Ciftcioglu, O.; Eryurek, E.; Upadhyaya, B.R.

    1992-06-01

    The on-line signal analysis system, designed for multi-level mode operation using neural networks, is described. The system is capable of monitoring the plant states by tracking a number of signals, up to 32 simultaneously. The data used for this study were acquired from the Borssele Nuclear Power Plant (PWR type) using the on-line monitoring system. An on-line plant-wide monitoring study using a multilayer neural network model is discussed in this paper. The back-propagation algorithm is used for training the network. The technique assumes that each physical state of the power plant can be represented by a unique pattern of instrument readings which can be related to the condition of the plant. When a disturbance occurs, the sensor readings undergo a transient and form a different set of patterns which represent the new operational status. Diagnosing these patterns can be helpful in identifying this new state of the power plant. To this end, plant-wide monitoring with neural networks is one of the new techniques for real-time applications. (author). 9 refs.; 5 figs

  5. Fault diagnosis method for nuclear power plants based on neural networks and voting fusion

    International Nuclear Information System (INIS)

    Zhou Gang; Ge Shengqi; Yang Li

    2010-01-01

    A new fault diagnosis method based on multiple neural networks (ANNs) and voting fusion for nuclear power plants (NPPs) was proposed in view of the shortcomings of single-neural-network fault diagnosis methods. In this method, multiple neural networks of different types were trained for the fault diagnosis of an NPP. The operating parameters of the NPP which have an important effect on its safety were selected as the input variables of the neural networks. The outputs of the neural networks are the fault patterns of the NPP. The final diagnosis results for the NPP were obtained by fusing the diagnosis results of the different neural networks through voting. Typical operating patterns of the NPP were diagnosed to demonstrate the effectiveness of the proposed method. The results show that the proposed diagnosis method improves the precision and reliability of fault diagnosis for NPPs. (authors)
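
    An illustrative sketch of the voting-fusion step: several independently trained networks each vote for a fault pattern and the majority label is reported as the final diagnosis; here three MLPs of different sizes stand in for the different network types, and the data are placeholders.

    ```python
    # Majority-vote fusion over several independently trained classifiers (toy fault data).
    from collections import Counter
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 10))                      # selected plant operating parameters
    y = rng.integers(0, 3, size=400)                    # 3 toy fault patterns

    members = [
        MLPClassifier(hidden_layer_sizes=(16,), activation="relu", max_iter=1500).fit(X, y),
        MLPClassifier(hidden_layer_sizes=(32, 16), activation="tanh", max_iter=1500).fit(X, y),
        MLPClassifier(hidden_layer_sizes=(8,), activation="logistic", max_iter=1500).fit(X, y),
    ]

    def diagnose(x):
        votes = [int(m.predict(x.reshape(1, -1))[0]) for m in members]
        return Counter(votes).most_common(1)[0][0]      # majority vote decides the fault pattern

    print(diagnose(rng.normal(size=10)))
    ```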

  6. NEURAL NETWORKS AS A CLASSIFICATION TOOL BIOTECHNOLOGICAL SYSTEMS (FOR EXAMPLE FLOUR PRODUCTION

    Directory of Open Access Journals (Sweden)

    V. K. Bitykov

    2015-01-01

    Full Text Available Summary. To date, artificial intelligence systems are the most common type to classify objects of different quality. The proposed modeling technology to predict the quality of flour products by using artificial neural networks allows to solve problems of analysis of the factors determining the quality of the products. Interest in artificial neural networks has grown due to the fact that they can change their behavior depending on external environment. This factor more than any other responsible for the interest that they cause. After the presentation of input signals (possibly together with the desired outputs, they self-configurable to provide the desired reaction. We developed a set of training algorithms, each with their own strengths and weaknesses. The solution to the problem of classification is one of the most important applications of neural networks, which represents a problem of attributing the sample to one of several non-intersecting sets. To solve this problem developed algorithms for synthesis of NA with the use of nonlinear activation functions, the algorithms for training the network. Training the NS involves determining the weights of layers of neurons. Training the NA occurs with the teacher, that is, the network must meet the values of both input and desired output signals, and it is according to some internal algorithm adjusts the weights of their synaptic connections. The work was built an artificial neural network, multilayer perceptron example. With the help of correlation analysis in total sample revealed that the traits are correlated at the significance level of 0.01 with grade quality bread. The classification accuracy exceeds 90%.

  7. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520
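
    A simplified stand-in for the ensemble idea is sketched below. It trains several small backpropagation networks in parallel (via joblib rather than MapReduce/Hadoop) and combines them with accuracy-weighted voting; this mimics the spirit of the strong classifier, not the exact Adaboost weight-update rule or the paper's Pascal VOC2007/Caltech256 pipeline.

```python
# Parallel training of 15 small BP networks and accuracy-weighted voting,
# as a simplified illustration of an Adaboost-style strong classifier.
import numpy as np
from joblib import Parallel, delayed
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

def fit_member(seed):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=seed)
    clf.fit(X, y)
    return clf, clf.score(X, y)

members = Parallel(n_jobs=-1)(delayed(fit_member)(s) for s in range(15))

def predict(batch):
    votes = np.zeros((len(batch), 10))
    for clf, acc in members:
        for i, label in enumerate(clf.predict(batch)):
            votes[i, label] += acc            # accuracy-weighted vote
    return votes.argmax(axis=1)

print("ensemble accuracy:", (predict(X) == y).mean())
```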

  8. Modeling Broadband Microwave Structures by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Otevrel

    2004-06-01

    Full Text Available The paper describes the exploitation of feed-forward neural networks and recurrent neural networks for replacing full-wave numerical models of microwave structures in complex microwave design tools. When building a neural model, attention is turned to the modeling accuracy and to the efficiency of building a model. Dealing with the accuracy, we describe a method of increasing it by successively completing a training set. Neural models are mutually compared in order to highlight their advantages and disadvantages. As a reference model for comparisons, approximations based on standard cubic splines are used. Neural models are used to replace both the time-domain numeric models and the frequency-domain ones.

  9. Fault diagnosis for temperature, flow rate and pressure sensors in VAV systems using wavelet neural network

    Energy Technology Data Exchange (ETDEWEB)

    Du, Zhimin; Jin, Xinqiao; Yang, Yunyu [School of Mechanical Engineering, Shanghai Jiao Tong University, 800, Dongchuan Road, Shanghai (China)

    2009-09-15

    Wavelet neural network, the integration of wavelet analysis and a neural network, is presented to diagnose faults of temperature, flow rate and pressure sensors in variable air volume (VAV) systems, so as to preserve their energy-conservation capability. Wavelet analysis is first used to process the original data collected from the building automation system. With three-level wavelet decomposition, series of characteristic information representing the various operating conditions of the system are obtained. In addition, a neural network is developed to diagnose the source of the fault. To improve diagnosis efficiency, three data groups based on several physical models or balances are classified and constructed. Using the data decomposed by the three-level wavelet, the neural networks can be well trained and a series of convergent networks is obtained. Finally, new measurements to be diagnosed are processed by the wavelet in the same way, and the well-trained convergent neural networks are used to identify the operating condition and isolate the source of the fault. (author)
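
    The wavelet-plus-network pipeline can be sketched as follows. The wavelet family, window length, sub-band summary statistics and fault labels are assumptions for illustration; they are not taken from the paper.

```python
# Hedged sketch of the two-step idea: a three-level wavelet decomposition turns
# each raw sensor window into a compact feature vector, and a small neural
# network is trained on those features to label the fault source.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(window, wavelet="db4", level=3):
    coeffs = pywt.wavedec(window, wavelet, level=level)
    # summarize each sub-band by its energy and standard deviation
    return np.array([f(c) for c in coeffs for f in (lambda a: np.sum(a ** 2), np.std)])

rng = np.random.default_rng(0)
windows = rng.standard_normal((300, 256))           # surrogate sensor windows
labels = rng.integers(0, 3, size=300)                # e.g. normal / bias / drift

X = np.array([wavelet_features(w) for w in windows])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```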

  10. Application of neural networks to multiple alarm processing and diagnosis in nuclear power plants

    International Nuclear Information System (INIS)

    Cheon, Se Woo; Chang Soon Heung; Chung, Hak Yeong

    1992-01-01

    This paper presents feasibility studies of multiple alarm processing and diagnosis using neural networks. The back-propagation neural network model is applied to the training of multiple alarm patterns for the identification of failures in a reactor coolant pump (RCP) system. The general mapping capability of the neural network makes it easy to identify a fault. The case studies are performed with emphasis on the applicability of the neural network to pattern recognition problems. It is revealed that the neural network model can identify the cause of multiple alarms properly, even when untrained or sensor-failed alarm symptoms are given. It is also shown that multiple failures are easily identified using the symptoms of multiple alarms.

  11. Fast Fingerprint Classification with Deep Neural Network

    DEFF Research Database (Denmark)

    Michelsanti, Daniel; Guichi, Yanis; Ene, Andreea-Daniela

    2018-01-01

    In this work we evaluate the performance of two pre-trained convolutional neural networks fine-tuned on the NIST SD4 benchmark database. The obtained results show that this approach is comparable with other results in the literature, with the advantage of a fast feature extraction stage.

  12. Enhanced online convolutional neural networks for object tracking

    Science.gov (United States)

    Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen

    2018-04-01

    In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters can directly affect the precision of object tracking. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters with a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.

  13. Unscented Kalman Filter-Trained Neural Networks for Slip Model Prediction

    Science.gov (United States)

    Li, Zhencai; Wang, Yang; Liu, Zhen

    2016-01-01

    The purpose of this work is to investigate accurate trajectory tracking control of a wheeled mobile robot (WMR) based on slip model prediction. Generally, a nonholonomic WMR faces an increased risk of slippage (such as longitudinal and lateral slippage of the wheels) when traveling on outdoor unstructured terrain. In order to control a WMR stably and accurately under the effect of slippage, an unscented Kalman filter and neural networks (NNs) are applied to estimate the slip model in real time. This method exploits the model-approximating capabilities of a nonlinear state-space NN, and the unscented Kalman filter is used to train the NN's weights online. The slip parameters can be estimated and used to predict the time series of deviation velocity, which can be used to compensate the control inputs of the WMR. The results of numerical simulation show that the desired trajectory tracking control can be performed by predicting the nonlinear slip model. PMID:27467703

  14. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.

    Science.gov (United States)

    Mostafa, Hesham

    2017-08-01

    Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.

  15. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most modern analyses in high energy physics use signal-versus-background classification techniques from machine learning, and neural networks in particular. Deep learning neural networks are the most promising modern technique to separate signal and background, and nowadays they can be widely and successfully implemented as a part of a physics analysis. In this article we compare deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  16. Power plant fault detection using artificial neural network

    Science.gov (United States)

    Thanakodi, Suresh; Nazar, Nazatul Shiema Moh; Joini, Nur Fazriana; Hidzir, Hidzrin Dayana Mohd; Awira, Mohammad Zulfikar Khairul

    2018-02-01

    Faults commonly occur in power plants due to various factors that affect system outages. There are many types of faults in power plants, such as single line-to-ground faults, double line-to-ground faults, and line-to-line faults. The primary aim of this paper is to diagnose faults in a 14-bus power plant by using an Artificial Neural Network (ANN). A Multilayer Perceptron (MLP) network for fault detection was trained offline using methods such as Gradient Descent Backpropagation (GDBP), Levenberg-Marquardt (LM), and Bayesian Regularization (BR). The best method was used to build a Graphical User Interface (GUI). The modelling of the 14-bus power plant, the network training, and the GUI were implemented in MATLAB.

  17. A review and analysis of neural networks for classification of remotely sensed multispectral imagery

    Science.gov (United States)

    Paola, Justin D.; Schowengerdt, Robert A.

    1993-01-01

    A literature survey and analysis of the use of neural networks for the classification of remotely sensed multispectral imagery is presented. As part of a brief mathematical review, the backpropagation algorithm, which is the most common method of training multi-layer networks, is discussed with an emphasis on its application to pattern recognition. The analysis is divided into five aspects of neural network classification: (1) input data preprocessing, structure, and encoding; (2) output encoding and extraction of classes; (3) network architecture; (4) training algorithms; and (5) comparisons to conventional classifiers. The advantages of the neural network method over traditional classifiers are its non-parametric nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, fuzzy output values that can enhance classification, and good generalization for use with multiple images. The disadvantages of the method are slow training time, inconsistent results due to random initial weights, and the requirement of obscure initialization values (e.g., learning rate and hidden layer size). Possible techniques for ameliorating these problems are discussed. It is concluded that, although the neural network method has several unique capabilities, it will become a useful tool in remote sensing only if it is made faster, more predictable, and easier to use.

  18. The parallel implementation of a backpropagation neural network and its applicability to SPECT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, John Patrick [Iowa State Univ., Ames, IA (United States)

    1992-01-01

    The objective of this study was to determine the feasibility of using an Artificial Neural Network (ANN), in particular a backpropagation ANN, to improve the speed and quality of the reconstruction of three-dimensional SPECT (single photon emission computed tomography) images. In addition, since the processing elements (PEs) in each layer of an ANN are independent of each other, the speed and efficiency of the neural network architecture could be better optimized by implementing the ANN on a massively parallel computer. The specific goals of this research were: to implement a fully interconnected backpropagation neural network on a serial computer and a SIMD parallel computer, to identify any reduction in the time required to train these networks on the parallel machine versus the serial machine, to determine if these neural networks can learn to recognize SPECT data by training them on a section of an actual SPECT image, and to determine from the knowledge obtained in this research whether full SPECT image reconstruction by an ANN implemented on a parallel computer is feasible, both in the time required to train the network and in the quality of the images reconstructed.

  19. A Gamma Memory Neural Network for System Identification

    Science.gov (United States)

    Motter, Mark A.; Principe, Jose C.

    1992-01-01

    A gamma neural network topology is investigated for a system identification application. A discrete gamma memory structure is used in the input layer, providing delayed values of both the control inputs and the network output to the input layer. The discrete gamma memory structure implements a tapped dispersive delay line, with the amount of dispersion regulated by a single, adaptable parameter. The network is trained using static backpropagation, but captures significant features of the system dynamics. The system dynamics identified with the network are the Mach number dynamics of the 16-Foot Transonic Tunnel at NASA Langley Research Center, Hampton, Virginia. The training data span an operating range of Mach numbers from 0.4 to 1.3.
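
    The gamma memory itself is a simple recursion, sketched below in plain numpy. The number of taps and the dispersion parameter are illustrative; the sketch only shows the memory structure, not the identification network built on top of it.

```python
# Minimal numpy sketch of a discrete gamma memory: a cascade of K leaky taps
# whose dispersion is set by a single parameter mu (0 < mu <= 1). mu = 1
# reduces the structure to an ordinary tapped delay line.
import numpy as np

def gamma_memory(u, K=4, mu=0.5):
    T = len(u)
    x = np.zeros((T, K + 1))
    x[:, 0] = u                                     # tap 0 is the raw input
    for t in range(1, T):
        for k in range(1, K + 1):
            x[t, k] = (1 - mu) * x[t - 1, k] + mu * x[t - 1, k - 1]
    return x                                         # columns feed the network input layer

taps = gamma_memory(np.sin(0.1 * np.arange(200)))
print(taps.shape)                                    # (200, 5)
```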

  20. Neural networks for combined control of capacitor banks and voltage regulators in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Gu, Z.; Rizy, D.T.

    1996-02-01

    A neural network for controlling shunt capacitor banks and feeder voltage regulators in electric distribution systems is presented. The objective of the neural controller is to minimize total I²R losses and maintain all bus voltages within standard limits. The performance of the neural network for different input selections and training data is discussed and compared. Two different input selections are tried: one using the previous control states of the capacitors and regulator along with measured line flows and voltage, which is equivalent to having feedback, and the other using measured line flows and voltage without the previous control settings. The results indicate that the neural net controller with feedback can outperform the one without. Also, proper selection of a training data set that adequately covers the operating space of the distribution system is important for achieving satisfactory performance with the neural controller. The neural controller is tested on a radially configured distribution system with 30 buses, 5 switchable capacitor banks and one nine-tap line regulator to demonstrate the performance characteristics associated with these principles. Monte Carlo simulations show that a carefully designed and relatively compact neural network with a small but carefully developed training set can perform quite well under slight and extreme variations of loading conditions.

  1. A robust neural network-based approach for microseismic event detection

    KAUST Repository

    Akram, Jubran; Ovcharenko, Oleg; Peter, Daniel

    2017-01-01

    We present an artificial neural network based approach for robust event detection from low S/N waveforms. We use a feed-forward network with a single hidden layer that is tuned on a training dataset and later applied to the entire example dataset.

  2. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10⁴ supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
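
    A minimal sketch of a bidirectional recurrent classifier for padded light curves is given below. It is not the authors' released code (see the repository above); sequence length, feature count and class count are assumptions, and random arrays stand in for the SPCC data.

```python
# Hedged sketch: each time step carries the observation time and filter fluxes,
# zero-padding is masked, and the output is a class probability.
import numpy as np
from tensorflow.keras import layers, models

max_len, n_features, n_classes = 120, 5, 3
model = models.Sequential([
    layers.Input(shape=(max_len, n_features)),
    layers.Masking(mask_value=0.0),                  # ignore zero-padded epochs
    layers.Bidirectional(layers.LSTM(16)),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(256, max_len, n_features).astype("float32")  # toy stand-in
y = np.random.randint(0, n_classes, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```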

  3. On the identification of instabilities with neural networks on JET

    International Nuclear Information System (INIS)

    Murari, A.; Arena, P.; Buscarino, A.; Fortuna, L.; Iachello, M.

    2013-01-01

    JET plasmas are affected by various instabilities, which can be particularly dangerous in high performance discharges. An identification method based on the use of advanced neural networks, called Recurrent Neural Networks (RNNs), has been applied to ELMs. The potential of the recurrent networks to identify the dynamics of the instabilities has first been tested using synthetic data. The networks have then been applied to JET experimental signals. An appropriate selection of the network topology allows the time evolution of the edge temperature and of the magnetic fields, considered the best indicators of the ELMs, to be identified quite well. A quite limited number of periodic oscillations is used to train the networks, which then manage to follow the dynamics of the instabilities quite well, in a recurrent configuration on one of the inputs. The time evolution of the aforementioned signals, including intervals not used in the training and never seen by the networks, is properly reproduced. A careful analysis of the various terms in the RNNs has the potential to give clear indications about the nature of these instabilities and their dynamical behaviour.

  4. Accelerating learning of neural networks with conjugate gradients for nuclear power plant applications

    International Nuclear Information System (INIS)

    Reifman, J.; Vitela, J.E.

    1994-01-01

    The method of conjugate gradients is used to expedite the learning process of feedforward multilayer artificial neural networks and to systematically update both the learning parameter and the momentum parameter at each training cycle. The mechanism for the occurrence of premature saturation of the network nodes observed with the back propagation algorithm is described, suggestions are made to eliminate this undesirable phenomenon, and the reason why this phenomenon is precluded in the method of conjugate gradients is presented. The proposed method is compared with the standard back propagation algorithm in the training of neural networks to classify transient events in nuclear power plants simulated by the Midland Nuclear Power Plant Unit 2 simulator. The comparison results indicate that the rate of convergence of the proposed method is much greater than that of standard back propagation, that it reduces both the number of training cycles and the CPU time, and that it is less sensitive to the choice of initial weights. The advantages of the method are more noticeable and important for problems where the network architecture consists of a large number of nodes, the training database is large, and a tight convergence criterion is desired.
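
    The underlying idea of training a network with conjugate gradients rather than plain gradient descent can be illustrated with a toy example. This is not the paper's algorithm (which also adapts learning and momentum parameters); it simply minimizes the loss of a tiny one-hidden-layer network with SciPy's conjugate-gradient routine, letting SciPy approximate the gradient numerically for brevity.

```python
# Conjugate-gradient training of a tiny feedforward network on toy data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2               # toy regression target

n_in, n_hid = 2, 8
shapes = [(n_in, n_hid), (n_hid,), (n_hid, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    return np.mean((pred - y) ** 2)

theta0 = rng.normal(scale=0.1, size=sum(sizes))
res = minimize(loss, theta0, method="CG", options={"maxiter": 300})
print("final MSE:", res.fun)
```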

  5. Application of neural networks to quantitative spectrometry analysis

    International Nuclear Information System (INIS)

    Pilato, V.; Tola, F.; Martinez, J.M.; Huver, M.

    1999-01-01

    Accurate quantitative analysis of complex spectra (fission and activation products) relies upon experts' knowledge. In some cases several hours, even days, of tedious calculations are needed. This is because current software is unable to solve deconvolution problems when several rays overlap. We have shown that such analysis can be correctly handled by a neural network, and the procedure can be automated with a minimum of laboratory measurements for network training, as long as all the elements of the analysed solution appear in the training set and provided that adequate scaling of the input data is performed. Once the network has been trained, analysis is carried out in a few seconds. In a test between several well-known laboratories, in which unknown quantities of ⁵⁷Co, ⁵⁸Co, ⁸⁵Sr, ⁸⁸Y, ¹³¹I, ¹³⁹Ce and ¹⁴¹Ce present in a sample had to be determined, the results yielded by our network ranked it amongst the best. The method is described, including the experimental device and measurements, training set design, definition of the relevant input parameters, input data scaling and network training. The main results are presented together with a statistical model allowing prediction of network errors.

  6. Training for Micrographia Alters Neural Connectivity in Parkinson's Disease

    Directory of Open Access Journals (Sweden)

    Evelien Nackaerts

    2018-01-01

    Full Text Available Despite recent advances in clarifying the neural networks underlying rehabilitation in Parkinson's disease (PD), the impact of prolonged motor learning interventions on brain connectivity in people with PD is currently unknown. Therefore, the objective of this study was to compare cortical network changes after 6 weeks of visually cued handwriting training (experimental condition) with a placebo intervention to address micrographia, a common problem in PD. Twenty-seven early Parkinson's patients on dopaminergic medication performed a pre-writing task in both the presence and absence of visual cues during behavioral tests and during fMRI. Subsequently, patients were randomized to the experimental (N = 13) or placebo intervention (N = 14), both lasting 6 weeks, after which they underwent the same testing procedure. We used dynamic causal modeling to compare the neural network dynamics in both groups before and after training. Most importantly, intensive writing training propagated connectivity via the left hemispheric visuomotor stream to an increased coupling with the supplementary motor area, not witnessed in the placebo group. Training enhanced communication in the left visuomotor integration system in line with the learned visually steered training. Notably, this pattern was apparent irrespective of the presence of cues, suggesting transfer from cued to uncued handwriting. We conclude that in early PD intensive motor skill learning, which led to clinical improvement, alters cortical network functioning. We showed for the first time in a placebo-controlled design that it remains possible to enhance the drive to the supplementary motor area through motor learning.

  7. Measuring dynamic process of working memory training with functional brain networks

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2015-12-01

    Full Text Available In this paper, we propose a functional brain network and graph theory method to measure the effect of working memory training on neural activity. Twelve subjects were recruited for this study, and they performed the same working memory task before and after training. We constructed functional brain networks based on EEG coherence and calculated network properties to measure neural co-activation and the working memory level of the subjects. As a result, the internal connections in the frontal region decreased after working memory training, but the connections between the frontal region and the top region increased, and a stronger small-world feature was observed after training. These features were observed in the alpha (8-13 Hz) and beta (13-30 Hz) bands. The functional brain networks based on EEG coherence proposed in this paper can be used as an indicator of working memory level.
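
    The coherence-to-graph pipeline can be sketched briefly. Channel count, sampling rate, band and threshold below are illustrative assumptions, and random noise stands in for the EEG; only the construction and two simple graph measures are shown.

```python
# Hedged sketch: pairwise spectral coherence in a chosen band is thresholded
# into a graph, and basic network measures are read off.
import numpy as np
import networkx as nx
from scipy.signal import coherence

fs, n_ch, n_samp = 250, 8, 5000
eeg = np.random.randn(n_ch, n_samp)                  # surrogate EEG segment

G = nx.Graph()
G.add_nodes_from(range(n_ch))
for i in range(n_ch):
    for j in range(i + 1, n_ch):
        f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=512)
        band = cxy[(f >= 8) & (f <= 13)].mean()      # alpha-band coherence
        if band > 0.2:                               # assumed threshold
            G.add_edge(i, j, weight=band)

print("mean clustering:", nx.average_clustering(G))
if G.number_of_nodes() > 1 and nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```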

  8. Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies

    Science.gov (United States)

    Peng, Yefei

    2010-01-01

    An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…

  9. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis

  10. FOREX BUSINESS PREDICTION USING AN ADABOOST-BASED NEURAL NETWORK MODEL WITH 2047 DATA POINTS

    Directory of Open Access Journals (Sweden)

    Suyatno Suyatno

    2016-11-01

    Full Text Available After conducting the research and experiments, the first set of results, obtained with the backpropagation neural network algorithm, showed a prediction error of 0.758619403 for 5-minute-ahead predictions when 268 data points were used, and an error of 0.500161212 when 2047 data points were used. The second set of results, obtained by applying the AdaBoost optimization algorithm in the training process combined with backpropagation neural network learning, showed a 5-minute-ahead prediction error of 0.397014925 with 268 data points and of 0.099951148 with 2047 data points. The first stage of this research, up to and including testing, evaluated prediction error using the mean squared error (MSE) formula and applied the AdaBoost optimization algorithm to address the problem that the error of the backpropagation neural network needed to be lowered in order to increase prediction accuracy; the second stage repeated the experiments with a larger dataset than the first stage. Based on these results, it can be concluded that the neural network algorithm alone is less accurate than the AdaBoost optimization method applied in training combined with the neural network, as shown by the lower MSE of the AdaBoost + neural network method, and also that using a larger amount of data lowers the MSE and thus improves prediction accuracy in forex trading. Keywords: forex, trading, neural network, adaboost, central capital futures.

  11. Evaluation of tactical training in team handball by means of artificial neural networks.

    Science.gov (United States)

    Hassan, Amr; Schrapf, Norbert; Ramadan, Wael; Tilp, Markus

    2017-04-01

    While tactical performance in competition has been analysed extensively, the assessment of training processes for tactical behaviour has rather been neglected in the literature. Therefore, the purpose of this study is to provide a methodology to assess the acquisition and implementation of offensive tactical behaviour in team handball. The use of game analysis software combined with artificial neural network (ANN) software enabled identifying tactical target patterns from high-level junior players based on their positions during offensive actions. These patterns were then trained by an amateur junior handball team (n = 14, age 17 (0.5) years). Following 6 weeks of tactical training, an exhibition game was performed in which the players were advised to use the target patterns as often as possible. Subsequently, the position data of the game were analysed with an ANN. The test revealed that 58% of the played patterns could be related to the trained target patterns. The similarity between executed patterns and target patterns was assessed by calculating the mean distance between key positions of the players in the game and the target pattern, which was 0.49 (0.20) m. In summary, the presented method appears to be a valid instrument to assess tactical training.

  12. Two-Stage Approach to Image Classification by Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Ososkov Gennady

    2018-01-01

    Full Text Available The paper demonstrates the advantages of deep learning networks over ordinary neural networks in a comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are then compared as classifiers. The main effort in working with deep learning networks goes into the painstaking work of optimizing the structures of those networks and their components (activation functions, weights), as well as the procedures for minimizing their loss function, in order to improve their performance and speed up their learning time. It is also shown that deep autoencoders develop a remarkable ability for denoising images after being specially trained. Convolutional neural networks are also used to solve a topical problem in protein genetics, using the classification of durum wheat as an example. The results of our comparative study demonstrate the undoubted advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.
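
    The two-stage scheme can be illustrated with a small sketch: an autoencoder is trained first, then its encoder output feeds a separate classifier. Layer sizes and the random stand-in data are assumptions; the paper's actual networks and datasets differ.

```python
# Stage 1: unsupervised autoencoder; Stage 2: classifier on the encoded features.
import numpy as np
from tensorflow.keras import layers, models

dim_in, dim_code, n_classes = 784, 32, 10
inp = layers.Input(shape=(dim_in,))
code = layers.Dense(dim_code, activation="relu")(inp)
recon = layers.Dense(dim_in, activation="sigmoid")(code)
autoencoder = models.Model(inp, recon)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(512, dim_in).astype("float32")     # toy stand-in images
y = np.random.randint(0, n_classes, size=512)
autoencoder.fit(X, X, epochs=2, batch_size=64, verbose=0)        # stage 1

encoder = models.Model(inp, code)                      # reuses the trained encoder
clf = models.Sequential([layers.Input(shape=(dim_code,)),
                         layers.Dense(64, activation="relu"),
                         layers.Dense(n_classes, activation="softmax")])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
clf.fit(encoder.predict(X, verbose=0), y, epochs=2, verbose=0)   # stage 2
```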

  13. Iris Data Classification Using Quantum Neural Networks

    International Nuclear Information System (INIS)

    Sahni, Vishal; Patvardhan, C.

    2006-01-01

    Quantum computing is a novel paradigm that promises to be the future of computing. The performance of quantum algorithms has proved to be stunning. ANNs within the context of classical computation have been used for approximation and classification tasks with some success. This paper presents an idea of quantum neural networks along with the training algorithm and its convergence property. It synergizes the unique properties of quantum bits, or qubits, with the various techniques in vogue in neural networks. An example application to Fisher's Iris data set, a benchmark classification problem, is also presented. The results obtained amply demonstrate the classification capabilities of the quantum neuron and give an idea of their promising capabilities.

  14. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  15. Crack identification by artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Hwu, C.B.; Liang, Y.C. [National Cheng Kung Univ., Tainan (Taiwan, Province of China). Inst. of Aeronaut. and Astronaut.

    1998-04-01

    In this paper, a most popular artificial neural network, the back propagation neural network (BPN), is employed to achieve an ideal on-line identification of a crack embedded in a composite plate. Different from the usual dynamic estimates, the parameters used for the present crack identification are the strains of static deformation. It is known that crack effects are localized and may not be clearly reflected in the boundary information, especially when the data come from static deformation only. To remedy this, we use data from multiple loading modes, which may include the opening, shearing and tearing modes. The results show that our method for crack identification is always stable and accurate, no matter how far the test data are from the training set. (orig.) 8 refs.

  16. Optimization of multilayer neural network parameters for speaker recognition

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (the wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) needs parameters to be set, such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find parameters for the neural network with the highest precision and shortest validation time. The inputs of the neural network are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing and validation sets of 70%, 15% and 15%. The result of the research described in this article is a parameter setting for the multilayer neural network for four speakers.
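
    The overall MFCC-plus-MLP pipeline can be sketched as follows. The synthetic "speakers", the MFCC count and the network settings are assumptions for illustration; they are not the parameters found in the study, and real use would load recorded audio instead.

```python
# Frame-level speaker identification: MFCC features per frame, classified by an MLP.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

sr = 16000
rng = np.random.default_rng(0)

def fake_speaker(f0, seconds=3.0):
    """Surrogate 'voice': a few harmonics of a speaker-specific pitch plus noise."""
    t = np.arange(int(sr * seconds)) / sr
    return sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3)) \
        + 0.1 * rng.standard_normal(len(t))

X, y = [], []
for speaker, f0 in enumerate((110.0, 150.0, 200.0, 260.0)):      # four "speakers"
    mfcc = librosa.feature.mfcc(y=fake_speaker(f0).astype(np.float32),
                                sr=sr, n_mfcc=13).T               # (frames, 13)
    X.append(mfcc)
    y.append(np.full(len(mfcc), speaker))
X, y = np.vstack(X), np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="logistic",
                    learning_rate_init=0.01, max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("frame-level identification accuracy:", clf.score(X_te, y_te))
```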

  17. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    Science.gov (United States)

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles from quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights, and use them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover searching algorithm finds the optimal parameter setting iteratively, thus making very efficient neural network learning possible. The quantum neurons and weights, along with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training and promising prospects for future application. Simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Rotor Resistance Online Identification of Vector Controlled Induction Motor Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Bo Fan

    2014-01-01

    Full Text Available Rotor resistance identification has been well recognized as one of the most critical factors affecting the theoretical study and applications of AC motor control for high performance variable frequency speed adjustment. This paper proposes a novel model for rotor resistance identification based on Elman neural networks. The Elman recurrent neural network is capable of performing nonlinear function approximation and possesses the ability to adapt to time-varying characteristics. The factors influencing the specified parameter are analyzed, and various operating states are covered to ensure the completeness of the training samples. Through signal preprocessing of the samples and the training dataset, identification with different input parameters using one network is compared and analyzed. The trained Elman neural network, applied in the identification model, is able to efficiently predict the rotor resistance with high accuracy. The simulation and experimental results show that the proposed method has wide adaptability and performs very well in its application to vector controlled induction motors. This identification method is able to enhance the performance of the induction motor's variable-frequency speed regulation.

  19. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks

    DEFF Research Database (Denmark)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl

    2018-01-01

    conditions with regards to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516...... in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species....

  20. Applying neural networks as software sensors for enzyme engineering.

    Science.gov (United States)

    Linko, S; Zhu, Y H; Linko, P

    1999-04-01

    The on-line control of enzyme-production processes is difficult, owing to the uncertainties typical of biological systems and to the lack of suitable on-line sensors for key process variables. For example, intelligent methods to predict the end point of fermentation could be of great economic value. Computer-assisted control based on artificial-neural-network models offers a novel solution in such situations. Well-trained feedforward-backpropagation neural networks can be used as software sensors in enzyme-process control; their performance can be affected by a number of factors.

  1. The use of global image characteristics for neural network pattern recognitions

    Science.gov (United States)

    Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.

    2017-04-01

    A recognition system is considered in which the information is conveyed by images of symbols generated by a television camera. Coefficients of the two-dimensional Fourier transform, generated in a special way, are used as object descriptors. A single-layer neural network trained on reference images is used to solve the classification task. Fast learning of the neural network with single-neuron calculation of the coefficients is applied.
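
    A minimal sketch of the descriptor idea is given below. The image size, the number of retained coefficients and the normalization are assumptions; the record does not specify how the coefficients are "generated in a special way", so this only illustrates the general use of 2-D Fourier magnitudes as shift-tolerant features.

```python
# Low-frequency 2-D Fourier magnitudes as a compact descriptor of a symbol image.
import numpy as np

def fourier_descriptor(img, keep=8):
    spec = np.fft.fftshift(np.fft.fft2(img))
    c = np.array(spec.shape) // 2
    block = spec[c[0] - keep // 2:c[0] + keep // 2,
                 c[1] - keep // 2:c[1] + keep // 2]
    return np.abs(block).ravel() / (np.abs(spec).sum() + 1e-12)  # normalized magnitudes

img = np.zeros((32, 32))
img[8:24, 14:18] = 1.0                                # crude synthetic "symbol"
print(fourier_descriptor(img).shape)                  # (64,) feature vector
```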

  2. Prediction of ferric iron precipitation in bioleaching process using partial least squares and artificial neural network

    Directory of Open Access Journals (Sweden)

    Golmohammadi Hassan

    2013-01-01

    Full Text Available A quantitative structure-property relationship (QSPR) study based on partial least squares (PLS) and an artificial neural network (ANN) was developed for the prediction of ferric iron precipitation in the bioleaching process. The leaching temperature, initial pH, oxidation/reduction potential (ORP), ferrous concentration and particle size of the ore were used as inputs to the network. The output of the model was the ferric iron precipitation. The optimal configuration of the neural network was obtained by adjusting various parameters by trial-and-error. After optimization and training of the network according to the back-propagation algorithm, a 5-5-1 neural network was generated for the prediction of ferric iron precipitation. The root mean square errors of the neural-network-calculated ferric iron precipitation for the training, prediction and validation sets are 32.860, 40.739 and 35.890, respectively, which are smaller than those obtained by the PLS model (180.972, 165.047 and 149.950, respectively). The results obtained reveal the reliability and good predictivity of the neural network model for the prediction of ferric iron precipitation in the bioleaching process.

  3. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    Science.gov (United States)

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between

  4. A neutron spectrum unfolding code based on generalized regression artificial neural networks

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Castaneda M, R.; Solis S, L. O.; Vega C, H. R.

    2015-10-01

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. Novel methods based on artificial neural networks have been widely investigated. In prior works, back-propagation neural networks (BPNN) have been used to solve the neutron spectrometry problem; however, some drawbacks still exist with this kind of neural network, such as the optimum selection of the network topology and the long training time. Compared to a BPNN, a generalized regression neural network (GRNN) is usually much faster to train, mainly because the spread constant is the only parameter used in a GRNN. Another feature is that the network will converge to a global minimum. In addition, GRNNs are often more accurate than BPNNs in prediction. These characteristics make the GRNN of great interest in the neutron spectrometry domain. In this work a computational tool based on a GRNN, capable of solving the neutron spectrometry problem, is presented. This computational code automates the pre-processing, training and testing stages, the statistical analysis and the post-processing of the information, using 7 Bonner sphere count rates as the only input data. The code was designed for a Bonner Sphere System based on a ⁶LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. (Author)

  5. A neutron spectrum unfolding code based on generalized regression artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Martinez B, M. R.; Castaneda M, R.; Solis S, L. O. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico); Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. Novel methods based on artificial neural networks have been widely investigated. In prior works, back-propagation neural networks (BPNN) have been used to solve the neutron spectrometry problem; however, some drawbacks still exist with this kind of neural network, such as the optimum selection of the network topology and the long training time. Compared to a BPNN, a generalized regression neural network (GRNN) is usually much faster to train, mainly because the spread constant is the only parameter used in a GRNN. Another feature is that the network will converge to a global minimum. In addition, GRNNs are often more accurate than BPNNs in prediction. These characteristics make the GRNN of great interest in the neutron spectrometry domain. In this work a computational tool based on a GRNN, capable of solving the neutron spectrometry problem, is presented. This computational code automates the pre-processing, training and testing stages, the statistical analysis and the post-processing of the information, using 7 Bonner sphere count rates as the only input data. The code was designed for a Bonner Sphere System based on a ⁶LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. (Author)
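
    The core of a GRNN, referred to in both records above, is a kernel-weighted average over the training patterns with the spread as the single tunable parameter. The sketch below is a minimal single-output illustration with synthetic data; the actual unfolding code maps 7 count rates to a 60-bin spectrum.

```python
# Minimal numpy sketch of a generalized regression neural network:
# a Nadaraya-Watson style kernel average in which sigma (the spread)
# is the only parameter.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))         # pattern-layer activations
        preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

# toy stand-in for "7 Bonner sphere count rates -> spectrum value" regression
rng = np.random.default_rng(0)
X_train = rng.random((50, 7))
y_train = X_train.sum(axis=1)                        # synthetic target
print(grnn_predict(X_train, y_train, X_train[:3], sigma=0.3))
```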

  6. Constructing general partial differential equations using polynomial and neural networks.

    Science.gov (United States)

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters, with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials are not able to fully capture the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A neural network model for non invasive subsurface stratigraphic identification

    International Nuclear Information System (INIS)

    Sullivan, John M. Jr.; Ludwig, Reinhold; Lai Qiang

    2000-01-01

    Ground-penetrating radar (GPR) is a powerful tool to examine the stratigraphy below the ground surface for remote sensing. Increasingly, GPR has also found applications in microwave NDE as an interrogation tool to assess dielectric layers. Unfortunately, GPR data are characterized by a high degree of uncertainty and natural physical ambiguity, and robust decomposition routines are sparse for this application. We have developed a hierarchical set of neural network modules which split the task of layer profiling into consecutive stages. Successful GPR profiling of the subsurface stratigraphy is of key importance for many remote sensing applications, including microwave NDE. Neural network modules were designed to accomplish the two main processing goals: recognizing the subsurface pattern, followed by identification of the depths of subsurface layers such as permafrost, the groundwater table, and bedrock. We used an adaptive transform technique to transform the raw GPR data into a small feature vector containing the most representative and discriminative features of the signal. This information formed the input for the neural network processing units. This strategy reduced the number of required training samples for the neural network by orders of magnitude. The entire processing system was trained using the adaptively transformed feature vector inputs and tested with real measured GPR data. The successful results of this system establish the feasibility of delineating subsurface layering nondestructively.

  8. Parameter diagnostics of phases and phase transition learning by neural networks

    Science.gov (United States)

    Suchsland, Philippe; Wessel, Stefan

    2018-05-01

    We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.

  9. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    OpenAIRE

    Mehrshad Salmasi; Homayoun Mahdavi-Nasab

    2012-01-01

    Active noise control is based on the destructive interference between the primary noise and the noise generated from the secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks is evaluated in active cancellation of sound noise. For this purpose, feedforward and recurrent neural networks are designed and trained. After training, the performance of the feedforward and recurrent networks in n...

  10. An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.

    Science.gov (United States)

    Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong

    2018-04-11

    In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), to overcome the shortcoming of previous methods, namely their inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and achieve classification with a classifier. A binary skin-and-non-skin classifier is used to detect the skin area, and a seven-class classifier is used to carry out the classification of facial acne vulgaris and healthy skin. In the experiments, we compare the effectiveness of our CNN with the VGG16 neural network pre-trained on the ImageNet data set. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 neural network is effective in extracting features from facial acne vulgaris images, and that these features are very useful for the downstream classifiers. Finally, we apply the classifiers, both based on the pre-trained VGG16 neural network, to assist doctors in facial acne vulgaris diagnosis.
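
    The transfer-learning idea can be sketched with a frozen ImageNet-pretrained VGG16 feature extractor and a small trainable head for the seven-class task. This is not the paper's pipeline: the image size, head layout and the random stand-in batch are assumptions, and the pretrained weights are downloaded on first use.

```python
# Frozen VGG16 backbone plus a small seven-class head.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(224, 224, 3))
base.trainable = False                                # keep pre-trained features fixed

model = models.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),            # assumed seven skin classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# toy stand-in batch; real use would stream labelled facial-skin patches
X = np.random.rand(8, 224, 224, 3).astype("float32")
y = np.random.randint(0, 7, size=8)
model.fit(X, y, epochs=1, verbose=0)
```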

  11. Designing a Pattern Recognition Neural Network with a Reject Output and Many Sets of Weights and Biases

    OpenAIRE

    Dung, Le; Mizukawa, Makoto

    2008-01-01

    Adding a reject output to a pattern recognition neural network is an approach that helps the network classify almost all patterns of a training data set by using many sets of weights and biases, even if the network is small. With a smaller number of neurons, we can implement the neural network on a hardware-based platform more easily and also reduce its response time. With the reject output, the neural network can produce not only right or wrong results but also reject...
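
    A generic reject-option sketch: the classifier abstains when its confidence falls below a threshold. This is a simple stand-in for the multiple-weight-set scheme mentioned above, not a reproduction of it; the data set, network size, and 0.8 threshold are assumptions.

    ```python
    # Sketch: reject low-confidence predictions instead of forcing a class label.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                               n_informative=5, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    clf.fit(X[:500], y[:500])

    proba = clf.predict_proba(X[500:])
    confidence = proba.max(axis=1)
    prediction = proba.argmax(axis=1)

    REJECT = -1
    threshold = 0.8                             # tunable reject threshold (assumption)
    decided = np.where(confidence >= threshold, prediction, REJECT)
    accepted = decided != REJECT
    print("reject rate:", 1 - accepted.mean())
    print("accuracy on accepted patterns:",
          (decided[accepted] == y[500:][accepted]).mean())
    ```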

  12. Airplane detection in remote sensing images using convolutional neural networks

    Science.gov (United States)

    Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei

    2018-03-01

    Airplane detection in remote sensing images remains a challenging problem and has attracted great interest from researchers. In this paper we propose an effective method for detecting airplanes in remote sensing images using convolutional neural networks. With the rise of deep neural networks in target detection, deep learning methods show clear advantages over traditional methods, and we explain why this is the case. To improve detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than a single class, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.

  13. Semi-empirical neural network models of controlled dynamical systems

    Directory of Open Access Journals (Sweden)

    Mihail V. Egorchev

    2017-12-01

    Full Text Available A simulation approach is discussed for maneuverable aircraft motion as a nonlinear controlled dynamical system under multiple and diverse uncertainties, including imperfect knowledge of the simulated plant and of its environment. The suggested approach is based on merging theoretical knowledge of the plant with training tools from the artificial neural network field. The efficiency of this approach is demonstrated using the example of motion modeling and identification of the aerodynamic characteristics of a maneuverable aircraft. A semi-empirical recurrent neural network based model learning algorithm is proposed for the multi-step-ahead prediction problem. This algorithm sequentially states and solves numerical optimization subproblems of increasing complexity, using each solution as the initial guess for the subsequent subproblem. We also consider a procedure for acquiring a representative training set that utilizes multisine control signals.

  14. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  15. The application of particle swarm optimization to identify gamma spectrum with neural network

    International Nuclear Information System (INIS)

    Shi Dongsheng; Di Yuming; Zhou Chunlin

    2006-01-01

    In the application of neural networks to gamma spectrum identification, the BP algorithm suffers from two shortcomings: it is usually trapped in a local optimum and it converges slowly. Exploiting the global search capability of particle swarm optimization, this paper puts forward a new neural network training algorithm that combines the BP algorithm with particle swarm optimization: the mixed PSO-BP algorithm. Applied to gamma spectrum identification, the new algorithm overcomes the tendency of BP to become trapped in local optima, and the neural network trained with it generalizes well, achieving an identification accuracy of one hundred percent. A practical example shows that the mixed PSO-BP algorithm can be used effectively and reliably to identify gamma spectra. (authors)
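
    An illustrative hybrid global/local training sketch in the spirit of PSO-BP: a particle swarm searches the weight space of a tiny network globally, and a gradient-based local optimizer then refines the best particle (used here in place of plain BP). The toy network, data, and PSO constants are assumptions for demonstration only.

    ```python
    # Sketch: PSO over flattened network weights, then local refinement.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.random((200, 8))                        # stand-in "spectral" features
    y = (X[:, :4].sum(axis=1) > X[:, 4:].sum(axis=1)).astype(float)

    n_in, n_hid = 8, 5
    n_w = n_in * n_hid + n_hid                      # input->hidden and hidden->output weights

    def loss(w):
        W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
        w2 = w[n_in * n_hid:]
        h = np.tanh(X @ W1)
        out = 1.0 / (1.0 + np.exp(-(h @ w2)))
        return np.mean((out - y) ** 2)

    # --- particle swarm optimization over the weight vector ---
    n_particles, iters = 30, 100
    pos = rng.uniform(-1, 1, (n_particles, n_w))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    # --- local gradient-based refinement of the global best ---
    refined = minimize(loss, gbest, method="L-BFGS-B")
    print("PSO loss:", loss(gbest), "-> refined loss:", refined.fun)
    ```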

  16. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
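
    A sketch of the consensus idea only: several stage classifiers are trained on different transforms of the same input and their outputs are combined with reliability weights. The transforms used below (raw pixels, PCA, crude binarization) and the training-accuracy weights are placeholders, not the optimized transforms and weights proposed in the paper.

    ```python
    # Sketch: weighted consensus over stage networks trained on transformed inputs.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    pca = PCA(n_components=20, random_state=0).fit(X_tr)
    stage_inputs = [
        (X_tr, X_te),                                          # raw pixels
        (pca.transform(X_tr), pca.transform(X_te)),            # PCA transform
        ((X_tr > 8).astype(float), (X_te > 8).astype(float)),  # crude binarization
    ]

    probas, weights = [], []
    for tr, te in stage_inputs:
        net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        net.fit(tr, y_tr)
        weights.append(net.score(tr, y_tr))        # simple reliability weight (assumption)
        probas.append(net.predict_proba(te))

    weights = np.array(weights) / np.sum(weights)
    consensus = sum(w * p for w, p in zip(weights, probas))
    print("consensual accuracy:", (consensus.argmax(axis=1) == y_te).mean())
    ```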

  17. Neural network cloud top pressure and height for MODIS

    Science.gov (United States)

    Håkansson, Nina; Adok, Claudia; Thoss, Anke; Scheirer, Ronald; Hörnquist, Sara

    2018-06-01

    Cloud top height retrieval from imager instruments is important for nowcasting and for satellite climate data records. A neural network approach for cloud top height retrieval from the imager instrument MODIS (Moderate Resolution Imaging Spectroradiometer) is presented. The neural networks are trained using cloud top layer pressure data from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) dataset. Results are compared with two operational reference algorithms for cloud top height: the MODIS Collection 6 Level 2 height product and the cloud top temperature and height algorithm in the 2014 version of the NWC SAF (EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting) PPS (Polar Platform System). All three techniques are evaluated using both CALIOP and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height. Instruments like AVHRR (Advanced Very High Resolution Radiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) contain fewer channels useful for cloud top height retrievals than MODIS, therefore several different neural networks are investigated to test how infrared channel selection influences retrieval performance. Also a network with only channels available for the AVHRR1 instrument is trained and evaluated. To examine the contribution of different variables, networks with fewer variables are trained. It is shown that variables containing imager information for neighboring pixels are very important. The error distributions of the involved cloud top height algorithms are found to be non-Gaussian. Different descriptive statistic measures are presented and it is exemplified that bias and SD (standard deviation) can be misleading for non-Gaussian distributions. The median and mode are found to better describe the tendency of the error distributions and IQR (interquartile range) and MAE (mean absolute error) are found
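
    A small numeric illustration of the statistical point made above: for a skewed error distribution, bias and SD are dominated by the tail, while median, mode, IQR, and MAE describe the typical error better. The synthetic height errors below are assumed data, and the mode is estimated from a histogram peak (one simple convention).

    ```python
    # Why bias/SD can mislead for non-Gaussian error distributions.
    import numpy as np

    rng = np.random.default_rng(0)
    # Mostly small errors plus a long positive tail of large misses (in metres).
    errors = np.concatenate([rng.normal(0, 300, 9000),
                             rng.normal(4000, 1500, 1000)])

    bias, sd = errors.mean(), errors.std()
    median = np.median(errors)
    mae = np.abs(errors).mean()
    q25, q75 = np.percentile(errors, [25, 75])
    iqr = q75 - q25
    counts, edges = np.histogram(errors, bins=200)
    mode = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])

    print(f"bias={bias:.0f} m, SD={sd:.0f} m      (pulled up by the tail)")
    print(f"median={median:.0f} m, mode={mode:.0f} m, IQR={iqr:.0f} m, MAE={mae:.0f} m")
    ```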

  18. Wind Resource Assessment and Forecast Planning with Neural Networks

    Directory of Open Access Journals (Sweden)

    Nicolus K. Rotich

    2014-06-01

    Full Text Available In this paper we built three types of artificial neural networks, namely feed-forward networks, Elman networks and cascade-forward networks, for forecasting wind speeds and directions. A similar network topology was used for all the forecast horizons, regardless of the model type. All the models were then trained with real data of wind speeds and directions collected over a period of two years in the municipality of Puumala, Finland. The data up to the 70th percentile were used for training, validation and testing, while the 71st–85th percentile range was presented to the trained models for validation. The model outputs were then compared to the last 15% of the original data by measuring the statistical errors between them. The feed-forward networks returned the lowest errors for wind speeds, cascade-forward networks gave the lowest errors for wind directions, and Elman networks returned the lowest errors when used for short-term forecasting.

  19. PID Neural Network Based Speed Control of Asynchronous Motor Using Programmable Logic Controller

    Directory of Open Access Journals (Sweden)

    MARABA, V. A.

    2011-11-01

    Full Text Available This paper deals with the structure and characteristics of the PID neural network controller for single-input single-output systems. The PID neural network is a new kind of controller that combines the advantages of artificial neural networks and the classic PID controller. It operates by updating the controller parameters according to the value extracted from the system output, following the back-propagation algorithm used in artificial neural networks. Parameters obtained by applying the PID neural network training algorithm to the speed model of an asynchronous motor exhibiting second-order linear behavior were used for real-time speed control of the motor. A programmable logic controller (PLC) was used as the real-time controller. The real-time control results show that the reference speed is successfully maintained under various load conditions.

  20. Automatic recognition of alertness and drowsiness from EEG by an artificial neural network.

    Science.gov (United States)

    Vuckovic, Aleksandra; Radivojevic, Vlada; Chen, Andrew C N; Popovic, Dejan

    2002-06-01

    We present a novel method for classifying alert vs drowsy states from 1 s long sequences of full spectrum EEG recordings in an arbitrary subject. This novel method uses time series of interhemispheric and intrahemispheric cross spectral densities of full spectrum EEG as the input to an artificial neural network (ANN) with two discrete outputs: drowsy and alert. The experimental data were collected from 17 subjects. Two experts in EEG interpretation visually inspected the data and provided the necessary expertise for the training of an ANN. We selected the following three ANNs as potential candidates: (1) the linear network with Widrow-Hoff (WH) algorithm; (2) the non-linear ANN with the Levenberg-Marquardt (LM) rule; and (3) the Learning Vector Quantization (LVQ) neural network. We showed that the LVQ neural network gives the best classification compared with the linear network that uses WH algorithm (the worst), and the non-linear network trained with the LM rule. Classification properties of LVQ were validated using the data recorded in 12 healthy volunteer subjects, yet whose EEG recordings have not been used for the training of the ANN. The statistics were used as a measure of potential applicability of the LVQ: the t-distribution showed that matching between the human assessment and the network output was 94.37 ± 1.95%. This result suggests that the automatic recognition algorithm is applicable for distinguishing between alert and drowsy state in recordings that have not been used for the training.
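
    A sketch of the feature idea only: cross spectral densities between channel pairs of a 1 s segment, fed to a simple classifier. The channel pairing, sampling rate, synthetic "alert"/"drowsy" rhythms, and the nearest-centroid stand-in (used here instead of LVQ) are all assumptions.

    ```python
    # Sketch: cross spectral density features from 1 s multichannel segments.
    import numpy as np
    from scipy.signal import csd
    from sklearn.neighbors import NearestCentroid

    fs = 256                          # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)

    def csd_features(segment):
        """Cross spectral density magnitudes for a few channel pairs."""
        pairs = [(0, 1), (2, 3), (0, 2), (1, 3)]   # inter/intra-hemispheric pairs (assumed)
        feats = []
        for a, b in pairs:
            f, pxy = csd(segment[a], segment[b], fs=fs, nperseg=128)
            feats.append(np.abs(pxy[f <= 30]))     # keep 0-30 Hz content
        return np.concatenate(feats)

    def make_segment(freq):
        # Synthetic 4-channel, 1 s segment dominated by one rhythm plus noise.
        t = np.arange(fs) / fs
        return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal((4, fs))

    X = np.array([csd_features(make_segment(10)) for _ in range(40)] +   # "alert"
                 [csd_features(make_segment(4)) for _ in range(40)])     # "drowsy"
    y = np.array([0] * 40 + [1] * 40)

    clf = NearestCentroid().fit(X[::2], y[::2])    # train on every other segment
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
    ```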

  1. Approximate solutions of dual fuzzy polynomials by feed-back neural networks

    Directory of Open Access Journals (Sweden)

    Ahmad Jafarian

    2012-11-01

    Full Text Available Recently, artificial neural networks (ANNs) have been extensively studied and used in different areas such as pattern recognition, associative memory, combinatorial optimization, etc. In this paper, we investigate the ability of fuzzy neural networks to approximate the solution of a dual fuzzy polynomial of the form $a_{1}x + \dots + a_{n}x^{n} = b_{1}x + \dots + b_{n}x^{n} + d$, where $a_{j}, b_{j}, d \in E^{1}$ for $j = 1, \dots, n$. The operation of fuzzy neural networks is based on Zadeh's extension principle. For this purpose we train a five-layer fuzzified neural network, whose connection weights are crisp numbers, with a back-propagation-type learning algorithm. This neural network takes a crisp input signal and calculates the corresponding fuzzy output. The presented method can give a real approximate solution for a given polynomial by using a cost function defined on the level sets of the fuzzy output and the target output. Simulation results are presented to demonstrate the efficiency and effectiveness of the proposed approach.

  2. Artificial neural network modeling of jatropha oil fueled diesel engine for emission predictions

    Directory of Open Access Journals (Sweden)

    Ganapathy Thirunavukkarasu

    2009-01-01

    Full Text Available This paper deals with artificial neural network modeling of a diesel engine fueled with jatropha oil to predict unburned hydrocarbons, smoke, and NOx emissions. Experimental data from the literature have been used as the data base for the proposed neural network model development. For training the networks, the injection timing, injector opening pressure, plunger diameter, and engine load are used as the input layer. The outputs are hydrocarbons, smoke, and NOx emissions. Feed-forward back-propagation learning algorithms with two hidden layers are used in the networks. For each output a different network is developed with the required topology. The artificial neural network models for hydrocarbons, smoke, and NOx emissions gave R2 values of 0.9976, 0.9976, and 0.9984 and mean percent errors smaller than 2.7603, 4.9524, and 3.1136, respectively, for the training data sets, while for the testing data sets the R2 values were 0.9904, 0.9904, and 0.9942, and the mean percent errors were smaller than 6.5557, 6.1072, and 4.4682, respectively. The best linear fit of regression to the artificial neural network models of hydrocarbons, smoke, and NOx emissions gave correlation coefficient values of 0.98, 0.995, and 0.997, respectively.
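
    A hedged sketch of the modeling recipe (feed-forward regression network with the four engine parameters as inputs and an emission quantity as output). The synthetic data and response surface below merely stand in for the experimental data used in the paper.

    ```python
    # Sketch: feed-forward regression of an emission quantity from engine inputs.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n = 300
    # Inputs: injection timing, injector opening pressure, plunger diameter, load.
    X = rng.uniform([10, 180, 8, 0], [30, 240, 12, 100], size=(n, 4))
    # Made-up smooth response standing in for NOx emissions.
    nox = 0.5 * X[:, 0] + 0.01 * X[:, 1] * X[:, 3] / 100 + rng.normal(0, 0.5, n)

    model = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
    model.fit(X[:240], nox[:240])
    print("training R2:", r2_score(nox[:240], model.predict(X[:240])))
    print("testing  R2:", r2_score(nox[240:], model.predict(X[240:])))
    ```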

  3. Application of artificial neural network for NHR fault diagnosis

    International Nuclear Information System (INIS)

    Yu Haitao; Zhang Liangju; Xu Xiangdong

    1999-01-01

    The authors investigate a fault diagnosis system for the 200 MW nuclear heating reactor (NHR) based on artificial neural networks, using the trend values and the real values of the data under accident conditions to train and test two BP networks, respectively. The final diagnostic result is the combination of the results of the two networks. The compound system can enhance the accuracy and adaptability of the diagnosis compared to a single-network system

  4. Comparing Models GRM, Refraction Tomography and Neural Network to Analyze Shallow Landslide

    Directory of Open Access Journals (Sweden)

    Armstrong F. Sompotan

    2011-11-01

    Full Text Available Detailed investigations of landslides are essential to understand fundamental landslide mechanisms. The seismic refraction method has proven to be a useful geophysical tool for investigating shallow landslides. The objective of this study is to introduce a new workflow using neural networks for analyzing seismic refraction data and to compare the results with established methods, namely the general reciprocal method (GRM) and refraction tomography. The GRM is effective when the velocity structure is relatively simple and refractors are gently dipping. Refraction tomography is capable of modeling the complex velocity structures of landslides. Neural networks are particularly attractive where conventional numerical methods are time-consuming and complicated, since they can establish a relationship between an input and an output space for mapping seismic velocity. We therefore made a preliminary attempt to evaluate the applicability of neural networks for determining the velocity and elevation of subsurface synthetic models from arrival times. The training and testing of the neural network were successfully accomplished using the synthetic data. Furthermore, we evaluated the neural network on observed data. The evaluation indicates that the neural network can compute velocity and elevation corresponding to arrival times, and the similarity of the resulting models demonstrates the potential of neural networks as a new alternative for seismic refraction data interpretation.

  5. Temporal neural networks and transient analysis of complex engineering systems

    Science.gov (United States)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
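
    A minimal sketch of the gamma memory referred to above, i.e. the short-term memory structure formed by a cascade of leaky integrators; the recursion follows the standard discrete-time gamma filter, with mu as the memory parameter and the order and input signal chosen for illustration only.

    ```python
    # Sketch: discrete-time gamma memory (cascade of leaky integrators).
    import numpy as np

    def gamma_memory(u, order=4, mu=0.3):
        """Return the gamma-memory taps x_1..x_K for input sequence u."""
        n = len(u)
        x = np.zeros((order + 1, n))
        x[0] = u                                   # tap 0 is the raw input
        for t in range(1, n):
            for k in range(1, order + 1):
                x[k, t] = (1 - mu) * x[k, t - 1] + mu * x[k - 1, t - 1]
        return x[1:]

    u = np.sin(np.linspace(0, 6 * np.pi, 200))
    taps = gamma_memory(u)
    print("tap array shape:", taps.shape)          # (order, time): progressively smoothed, delayed copies
    ```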

  6. Deformable image registration using convolutional neural networks

    Science.gov (United States)

    Eppenhof, Koen A. J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P. W.

    2018-03-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.

  7. Advanced approach to numerical forecasting using artificial neural networks

    Directory of Open Access Journals (Sweden)

    Michael Štencl

    2009-01-01

    Full Text Available The current global market is driven by many factors such as the information age and the amount of information distributed over many data channels; it is practically impossible to analyze all incoming information flows and transform them into data with classical methods. These new requirements can be met by other methods. Once trained on patterns, artificial neural networks can be used for forecasting, and they are able to work with extremely large data sets in reasonable time. The patterns used for the learning process are samples of past data. This paper compares a radial basis function (RBF) neural network with a multilayer perceptron (MLP) network trained with the back-propagation learning algorithm on a prediction task. The task works with a simplified numerical time series and includes forty observations with a prediction for the next five observations. The main topic of the article is the identification of the main differences between the neural network architectures used, together with numerical forecasting. The detected differences are then verified on a practical comparative example.

  8. Evolving RBF neural networks for adaptive soft-sensor design.

    Science.gov (United States)

    Alexandridis, Alex

    2013-12-01

    This work presents an adaptive framework for building soft-sensors based on radial basis function (RBF) neural network models. The adaptive fuzzy means algorithm is utilized in order to evolve an RBF network, which approximates the unknown system based on input-output data from it. The methodology gradually builds the RBF network model, based on two separate levels of adaptation: On the first level, the structure of the hidden layer is modified by adding or deleting RBF centers, while on the second level, the synaptic weights are adjusted with the recursive least squares with exponential forgetting algorithm. The proposed approach is tested on two different systems, namely a simulated nonlinear DC Motor and a real industrial reactor. The results show that the produced soft-sensors can be successfully applied to model the two nonlinear systems. A comparison with two different adaptive modeling techniques, namely a dynamic evolving neural-fuzzy inference system (DENFIS) and neural networks trained with online backpropagation, highlights the advantages of the proposed methodology.
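
    A sketch of the second adaptation level only: recursive least squares with exponential forgetting updating the linear output weights of an RBF model whose centers are assumed to be already fixed (the center-adaptation level is omitted). The forgetting factor, centers, and drifting toy process are assumptions.

    ```python
    # Sketch: recursive least squares with exponential forgetting for RBF output weights.
    import numpy as np

    rng = np.random.default_rng(0)
    centers = rng.uniform(-1, 1, (5, 1))          # fixed RBF centers (assumption)
    width = 0.5

    def phi(x):
        return np.exp(-((x - centers.ravel()) ** 2) / (2 * width ** 2))

    lam = 0.98                                    # forgetting factor
    w = np.zeros(len(centers))
    P = np.eye(len(centers)) * 1e3

    for t in range(2000):
        x = rng.uniform(-1, 1)
        target = np.sin(3 * x) + (0.5 if t > 1000 else 0.0)   # drifting process
        p = phi(x)
        k = P @ p / (lam + p @ P @ p)             # gain vector
        w = w + k * (target - p @ w)              # weight update
        P = (P - np.outer(k, p @ P)) / lam        # covariance update with forgetting

    print("prediction at x=0.5 after the drift:", phi(0.5) @ w)
    ```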

  9. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    are examined. The models are separated into three groups representing input/output descriptions as well as state space descriptions: - Models, where all in- and outputs are measurable (static networks). - Models, where some inputs are non-measurable (recurrent networks). - Models, where some in- and some...... outputs are non-measurable (recurrent networks with incomplete state information). The three groups are ordered in increasing complexity, and for each group it is shown how to solve the problems concerning training and application of the specific model type. Of particular interest are the model types...... Kalmann filter) representing state space description. The potentials of neural networks for control of non-linear processes are also examined, focusing on three different groups of control concepts, all considered as generalizations of known linear control concepts to handle also non-linear processes...

  10. Improved Artificial Fish Algorithm for Parameters Optimization of PID Neural Network

    OpenAIRE

    Jing Wang; Yourui Huang

    2013-01-01

    In order to solve problems of the traditional BP algorithm in optimizing PID neural network parameters, such as the difficulty of determining initial weights and the tendency of training to become trapped in local minima, this paper proposes a new method based on an improved artificial fish algorithm for parameter optimization of the PID neural network. This improved artificial fish algorithm uses a composite adaptive artificial fish algorithm based on optimal artificial fish and nearest artificial fi...

  11. Neural network classification of quark and gluon jets

    International Nuclear Information System (INIS)

    Graham, M.A.; Jones, L.M.; Herbin, S.

    1995-01-01

    We demonstrate that there are characteristics common to quark jets and to gluon jets regardless of the interaction that produced them. The classification technique we use depends on the mass of the jet as well as the center-of-mass energy of the hard subprocess that produces the jet. In addition, we present the quark-gluon separability results of an artificial neural network trained on three-jet e+e- events at the Z0 mass, using a back-propagation algorithm. The inputs to the network are the longitudinal momenta of the leading hadrons in the jet. We tested the network with quark and gluon jets from both e+e- → 3 jets and p̄p → 2 jets. Finally, we compare the performance of the artificial neural network with the results of making well-chosen physical cuts

  12. Different propagation speeds of recalled sequences in plastic spiking neural networks

    Science.gov (United States)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

    Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas, and are crucial, e.g., for coding episodic memory in the hippocampus or for generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation in coordinates of the retinotopically organized neural tissue was constant during retrieval regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other and how network and learning parameters influence retrieval speeds, however, are not well described. Here we theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1 since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow the change in stimulus speeds. This prediction could be tested in
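
    An illustrative nearest-neighbor, weight-dependent ("multiplicative") STDP update for a single synapse. One common convention is used here (potentiation scaled by w_max - w, depression scaled by w); the constants and pairing convention are assumptions rather than the exact rule analyzed in the paper.

    ```python
    # Sketch: nearest-neighbor multiplicative STDP update for one synapse.
    import numpy as np

    def nn_stdp(w, pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_max=1.0):
        """Update weight w from sorted spike-time arrays (ms), nearest-neighbor pairing."""
        for t_post in post_spikes:
            earlier = pre_spikes[pre_spikes < t_post]
            if earlier.size:                                   # pre before post -> potentiation
                dt = t_post - earlier[-1]
                w += a_plus * (w_max - w) * np.exp(-dt / tau)
        for t_pre in pre_spikes:
            earlier = post_spikes[post_spikes < t_pre]
            if earlier.size:                                   # post before pre -> depression
                dt = t_pre - earlier[-1]
                w -= a_minus * w * np.exp(-dt / tau)
        return np.clip(w, 0.0, w_max)

    pre = np.array([10.0, 50.0, 90.0])
    post = np.array([15.0, 48.0, 95.0])
    print("updated weight:", nn_stdp(0.5, pre, post))
    ```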

  13. Scaling of counter-current imbibition recovery curves using artificial neural networks

    Science.gov (United States)

    Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud

    2018-06-01

    Scaling of imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Different parameters such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tension have an effect on the imbibition process. Studies of the scaling of imbibition curves under different assumptions have resulted in various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for both experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to affect the imbibition process and were considered as the inputs for training the network. Using the ‘Bayesian regularization’ training algorithm, the network was trained and tested. The training and testing phases showed superior results in comparison with the other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.

  14. Automatic target recognition using a feature-based optical neural network

    Science.gov (United States)

    Chao, Tien-Hsin

    1992-01-01

    An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. Experimental demonstration of a two-layer neural network for space objects discrimination is also presented.

  15. A neutron spectrum unfolding code based on generalized regression artificial neural networks

    International Nuclear Information System (INIS)

    Rosario Martinez-Blanco, Ma. del

    2016-01-01

    The most delicate part of neutron spectrometry is the unfolding process. Deriving the spectral information is not simple because the unknown is not given directly as a result of the measurements. Novel methods based on artificial neural networks have been widely investigated. In prior works, back-propagation neural networks (BPNN) have been used to solve the neutron spectrometry problem; however, some drawbacks remain with this kind of neural net, i.e. the optimum selection of the network topology and the long training time. Compared to a BPNN, it is usually much faster to train a generalized regression neural network (GRNN), mainly because the spread constant is the only parameter used in a GRNN. Another feature is that the network will converge to a global minimum, provided that the optimal value of the spread has been determined and that the dataset adequately represents the problem space. In addition, GRNNs are often more accurate than BPNNs in prediction. These characteristics make GRNNs of great interest in the neutron spectrometry domain. This work presents a computational tool based on a GRNN capable of solving the neutron spectrometry problem. The computational code automates the pre-processing, training and testing stages (using 3-fold cross validation), the statistical analysis, and the post-processing of the information, using 7 Bonner sphere count rates as the only input data. The code was designed for a Bonner sphere system based on a ⁶LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. - Highlights: • The main drawback of neutron spectrometry with a BPNN is network topology optimization. • Compared to a BPNN, it is usually much faster to train a GRNN. • GRNNs are often more accurate than BPNNs in prediction. These characteristics make GRNNs of great interest. • This computational code automates the pre-processing, training
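
    A minimal GRNN sketch showing why the spread is the only free parameter: the prediction is a Gaussian-kernel weighted average of the training targets. The toy inputs and target stand in for the 7 Bonner-sphere count rates and the unfolded spectrum.

    ```python
    # Sketch: generalized regression neural network (GRNN) prediction.
    import numpy as np

    def grnn_predict(X_train, y_train, x, sigma=0.1):
        d2 = np.sum((X_train - x) ** 2, axis=1)
        weights = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel, spread sigma
        return weights @ y_train / weights.sum()        # weighted average of targets

    rng = np.random.default_rng(0)
    X_train = rng.random((50, 7))               # e.g. 7 count rates per training sample
    y_train = X_train.sum(axis=1) ** 2          # toy target (a spectrum would be a vector)

    x_new = rng.random(7)
    print("GRNN prediction:", grnn_predict(X_train, y_train, x_new))
    print("toy ground truth:", x_new.sum() ** 2)
    ```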

  16. Separating true V0's from combinatoric background with a neural network

    International Nuclear Information System (INIS)

    Justice, M.

    1997-01-01

    A feedforward multilayered neural network has been trained to "recognize" true V0's in the presence of a large combinatoric background using simulated data for 2 GeV/nucleon Ni + Cu interactions. The resulting neural network filter has been applied to actual data from the EOS TPC experiment. An enhancement of signal to background over more traditional selection mechanisms has been observed. (orig.)

  17. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  18. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    Science.gov (United States)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied to both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on a musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformations, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.
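
    A sketch of the "maximize the sum of layer outputs" idea: the input clip is adjusted by gradient ascent on the summed activations of the convolutional layers. The small untrained 1-D convolutional stack, clip length, and optimizer settings below are placeholders; the paper assumes a trained genre classifier.

    ```python
    # Sketch: modify an input by maximizing the summed activations of conv layers.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    layers = nn.Sequential(                     # stand-in for the trained CNN's conv layers
        nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(),
        nn.Conv1d(8, 8, kernel_size=9, padding=4), nn.ReLU(),
        nn.Conv1d(8, 8, kernel_size=9, padding=4), nn.ReLU(),
    )

    clip = torch.randn(1, 1, 4096, requires_grad=True)   # stand-in audio clip
    optimizer = torch.optim.Adam([clip], lr=0.05)

    for step in range(100):
        optimizer.zero_grad()
        x, total = clip, 0.0
        for layer in layers:
            x = layer(x)
            total = total + x.sum()             # accumulate the sum of every layer's outputs
        (-total).backward()                     # gradient ascent on the activation sum
        optimizer.step()

    print("final activation sum:", float(total))
    ```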

  19. Artificial neural network modeling and optimization of ultrahigh pressure extraction of green tea polyphenols.

    Science.gov (United States)

    Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong

    2013-11-01

    In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The optimal architecture of the artificial neural network model was a feed-forward network with three input neurons, one hidden layer with eight neurons and an output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R2 of 0.9571, which implied good agreement between the predicted and actual values and confirmed good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The total phenolic content actually measured under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which matched the predicted value (597.2 mg/g DW) well. This suggests that the artificial neural network model described in this work is an efficient quantitative tool to predict the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
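
    A sketch of the model-then-optimize recipe: a small feed-forward network is fitted to (synthetic) extraction data, and an evolutionary optimizer searches the input space for the conditions maximizing the predicted yield. SciPy's differential evolution is used here in place of a genetic algorithm, and the yield surface and bounds are assumptions.

    ```python
    # Sketch: fit a surrogate network, then evolutionary search for the best inputs.
    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Inputs: pressure (MPa), liquid/solid ratio (mL/g), ethanol concentration (%).
    X = rng.uniform([100, 10, 30], [500, 25, 90], size=(200, 3))
    # Made-up smooth yield surface standing in for measured total phenolic content.
    y = (600 - 0.001 * (X[:, 0] - 480) ** 2 - 2 * (X[:, 1] - 20) ** 2
         - 0.05 * (X[:, 2] - 55) ** 2 + rng.normal(0, 5, 200))

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X, y)

    result = differential_evolution(lambda v: -model.predict(v.reshape(1, -1))[0],
                                    bounds=[(100, 500), (10, 25), (30, 90)],
                                    seed=0)
    print("predicted optimum (pressure, ratio, ethanol %):", result.x)
    print("predicted maximum yield:", -result.fun)
    ```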

  20. Application of CMAC Neural Network to Solar Energy Heliostat Field Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Neng-Sheng Pai

    2013-01-01

    Full Text Available Solar energy heliostat fields comprise numerous sun-tracking platforms. As a result, fault detection is a highly challenging problem. Accordingly, the present study proposes a cerebellar model arithmetic computer (CMAC) neural network for automatically diagnosing faults within the heliostat field based on the rotational speed, vibration, and temperature characteristics of the individual heliostat transmission systems. Compared with a radial basis function (RBF) neural network and a back-propagation (BP) neural network for heliostat field fault diagnosis, the experimental results show that the proposed neural network has a short training time, good robustness, and reliable diagnostic performance. As a result, it provides an ideal solution for fault diagnosis in modern, large-scale heliostat fields.
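
    A minimal CMAC sketch: several offset tilings quantize the (speed, vibration, temperature) measurements, the weights of the active cells are summed, and the cell weights are trained with a simple LMS rule. The numbers of tilings and bins, the [0, 1] input scaling, and the toy fault rule are placeholders, not the heliostat data or the authors' configuration.

    ```python
    # Sketch: tiny CMAC (tile coding + LMS updates) learning a toy fault score.
    import numpy as np

    class TinyCMAC:
        def __init__(self, n_tilings=8, n_bins=10, n_inputs=3, lr=0.1):
            self.n_tilings, self.n_bins, self.lr = n_tilings, n_bins, lr
            self.offsets = np.linspace(0, 1.0 / n_bins, n_tilings, endpoint=False)
            self.weights = np.zeros((n_tilings,) + (n_bins + 1,) * n_inputs)

        def _cells(self, x):                       # x is scaled to [0, 1]
            for t, off in enumerate(self.offsets):
                idx = tuple(np.minimum(((x + off) * self.n_bins).astype(int), self.n_bins))
                yield (t,) + idx                   # one active cell per tiling

        def predict(self, x):
            return sum(self.weights[c] for c in self._cells(x))

        def train(self, x, target):
            err = target - self.predict(x)
            for c in self._cells(x):
                self.weights[c] += self.lr * err / self.n_tilings   # LMS update

    cmac = TinyCMAC()
    rng = np.random.default_rng(0)
    for _ in range(3000):                          # toy rule: fault if speed or temperature is extreme
        x = rng.random(3)
        cmac.train(x, target=float(x[0] > 0.8 or x[2] > 0.9))

    print("healthy sample score:", cmac.predict(np.array([0.4, 0.5, 0.3])))
    print("faulty  sample score:", cmac.predict(np.array([0.95, 0.5, 0.3])))
    ```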