WorldWideScience

Sample records for neural network committees

  1. Neural network committees for finger joint angle estimation from surface EMG signals

    Directory of Open Access Journals (Sweden)

    Reddy Narender P

    2009-01-01

    Full Text Available Abstract Background In virtual reality (VR) systems, the user's finger and hand positions are sensed and used to control the virtual environments. Direct biocontrol of VR environments using surface electromyography (SEMG) signals may be more synergistic and less constraining to the user. The purpose of the present investigation was to develop a technique to predict the finger joint angle from surface EMG measurements of the extensor muscle using neural network models. Methodology SEMG together with the actual joint angle measurements were obtained while the subject was performing flexion-extension rotation of the index finger at three speeds. Several neural networks were trained to predict the joint angle from the parameters extracted from the SEMG signals. The best networks were selected to form six committees. The neural network committees were evaluated using data from new subjects. Results There was hysteresis in the measured SEMG signals during the flexion-extension cycle. However, neural network committees were able to predict the joint angle with reasonable accuracy. RMS errors ranged from 0.085 ± 0.036 for fast-speed finger extension to 0.147 ± 0.026 for slow-speed finger extension, and from 0.098 ± 0.023 for fast-speed finger flexion to 0.163 ± 0.054 for slow-speed finger flexion. Conclusion Although hysteresis was observed in the measured SEMG signals, the committees of neural networks were able to predict the finger joint angle from SEMG signals.
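
    The committee idea in this record lends itself to a short illustration. The following is a hedged sketch, not the authors' code: several small regression networks are trained independently and their predictions averaged; the synthetic features and target merely stand in for SEMG-derived parameters and the measured joint angle.

    ```python
    # Hedged sketch of a neural network committee (ensemble averaging) for regression.
    # The data below are synthetic placeholders, not SEMG recordings.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                       # stand-in for SEMG features
    y = np.tanh(X @ np.array([0.8, -0.5, 0.3, 0.1]))    # stand-in for measured joint angle

    # Train several small networks with different random initializations.
    members = [MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                            random_state=seed).fit(X, y) for seed in range(5)]

    def committee_predict(X_new):
        """Average the member networks' outputs to form the committee estimate."""
        return np.mean([m.predict(X_new) for m in members], axis=0)

    print(committee_predict(X[:3]))
    ```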

  2. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
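
    The replacement of multiply-accumulate by add-and-maximum described above can be written down in a few lines. This is a minimal illustrative sketch, not code from the paper; the shapes and values are arbitrary.

    ```python
    # Hedged sketch of a single morphological "layer": for each output j,
    # compute max_i (x_i + W[j, i]) instead of the usual sum_i x_i * W[j, i].
    import numpy as np

    def morphological_layer(x, W):
        # x: (n,), W: (m, n) -> output: (m,); nonlinear before any thresholding
        return np.max(x[None, :] + W, axis=1)

    x = np.array([1.0, -2.0, 0.5])
    W = np.array([[0.0, 3.0, -1.0],
                  [2.0, 0.0, 0.0]])
    print(morphological_layer(x, W))
    ```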

  3. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Ideally, there are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to computer programs that carry out calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  4. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  5. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  6. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  7. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  8. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods and neural networks in particular. Deep learning neural networks are the most promising modern technique to separate signal from background and nowadays can be widely and successfully implemented as part of a physics analysis. In this article we compare deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  9. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
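
    A minimal sketch of the consensual combination step described above: stage-classifier outputs are weighted and summed before the class decision. The weights and probabilities here are placeholders, not the optimized weights from the paper.

    ```python
    # Hedged sketch: weighted consensus over the outputs of several stage classifiers.
    import numpy as np

    def consensual_decision(stage_probs, weights):
        """stage_probs: list of (n_samples, n_classes) arrays, one per stage network."""
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()                 # normalize the consensus weights
        combined = sum(w * p for w, p in zip(weights, stage_probs))
        return combined.argmax(axis=1)                    # consensual class decision

    p1 = np.array([[0.7, 0.3], [0.4, 0.6]])               # placeholder stage outputs
    p2 = np.array([[0.6, 0.4], [0.2, 0.8]])
    print(consensual_decision([p1, p2], weights=[0.5, 0.5]))
    ```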

  10. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  11. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  12. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural systems depends in part on the emergence of the correct functional connectivity in its input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement for building a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in the establishment of initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which the neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to the fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  14. Artificial neural networks in NDT

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed

    2001-01-01

    Artificial neural networks, simply known as neural networks, have attracted considerable interest in recent years largely because of a growing recognition of the potential of these computational paradigms as powerful alternative models to conventional pattern recognition or function approximation techniques. The neural networks approach is having a profound effect on almost all fields, and has been utilised in fields where experimental inter-disciplinary work is being carried out. Being a multidisciplinary subject with a broad knowledge base, Nondestructive Testing (NDT) or Nondestructive Evaluation (NDE) is no exception. This paper explains typical applications of neural networks in NDT/NDE. Three promising types of neural networks are highlighted, namely, back-propagation, binary Hopfield and Kohonen's self-organising maps. (Author)

  15. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuroscience. It draws its methods in large degree from statistical physics and its potential applications lie mainly in computer science and engineering. Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives a historical presentation of the development of neural networks and of the interest in performing complex tasks. Then, an exhaustive overview of data management and network computation methods is given: supervised learning and the associative memory problem, the capacity of networks, Perceptron networks, functional link networks, Madaline (Multiple Adalines) networks, back-propagation networks, reduced coulomb energy (RCE) networks, unsupervised learning, and competitive learning and vector quantization. An example of application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors at LEAR at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  16. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
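
    As a small illustration of the quantity examined in this record, the following hedged sketch computes the normalized Laplacian spectrum of a toy graph with networkx and NumPy; the graph is only a stand-in, not connectome data.

    ```python
    # Hedged sketch: eigenvalue spectrum of the normalized Laplacian of a toy graph.
    import numpy as np
    import networkx as nx

    G = nx.watts_strogatz_graph(n=50, k=6, p=0.1, seed=0)   # toy small-world graph
    L = nx.normalized_laplacian_matrix(G).toarray()         # symmetric for undirected graphs
    eigenvalues = np.sort(np.linalg.eigvalsh(L))            # spectrum lies in [0, 2]
    print(eigenvalues[:5], eigenvalues[-5:])
    ```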

  17. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  18. Neural network to diagnose lining condition

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at the iron and steel works. The authors describe the neural network structure and software that are designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented. The authors note the low learning and classification errors of the proposed neural networks. To realize the proposed neural network, the specialized software has been developed.

  19. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)
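
    For readers unfamiliar with the spike-timing-dependent plasticity mentioned above, a textbook pair-based STDP window can be sketched as follows; the parameter values are illustrative and are not tied to any specific memristive device.

    ```python
    # Hedged sketch: pair-based STDP weight change as a function of spike-timing difference.
    import numpy as np

    def stdp_weight_change(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        """dt = t_post - t_pre (ms): potentiate if pre precedes post, depress otherwise."""
        return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                        -a_minus * np.exp(dt / tau))

    dts = np.array([-40.0, -10.0, 5.0, 30.0])
    print(stdp_weight_change(dts))
    ```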

  20. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  1. Practical neural network recipies in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed

  2. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem™ (SPW™), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  3. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network. Author: Richard... (standard report documentation form fields omitted) ...Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  4. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  5. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: Explores questions related to the biological underpinning for models of neural networks; Considers neural networks modeling using differential equations with impulsive and piecewise constant argument discontinuities; Provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  6. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalmann predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  7. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  8. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
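
    A much-simplified stand-in for the community-detection idea above: hidden units are clustered by the similarity of their incoming weight vectors. This hedged sketch uses generic hierarchical clustering, not the paper's own method or its modularity index, and the weight matrix is random.

    ```python
    # Hedged sketch: group hidden units with similar connection patterns into clusters.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(1)
    W_in = rng.normal(size=(20, 10))           # toy input-to-hidden weight matrix
    # Each row is one hidden unit's incoming-weight pattern; cluster similar rows.
    Z = linkage(W_in, method="ward")
    communities = fcluster(Z, t=3, criterion="maxclust")
    print(communities)                          # community label per hidden unit
    ```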

  9. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note addresses introduction to signal analysis and classification based on artificial feed-forward neural networks.......The note addresses introduction to signal analysis and classification based on artificial feed-forward neural networks....

  10. Artificial Neural Network Analysis System

    Science.gov (United States)

    2001-02-27

    Contract No. DASG60-00-M-0201 Purchase request no.: Foot in the Door-01 Title Name: Artificial Neural Network Analysis System Company: Atlantic... Artificial Neural Network Analysis System 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) Powell, Bruce C 5d. PROJECT NUMBER 5e. TASK NUMBER...34) 27-02-2001 Report Type N/A Dates Covered (from... to) ("DD MON YYYY") 28-10-2000 27-02-2001 Title and Subtitle Artificial Neural Network Analysis

  11. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  12. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNN research are included, resulting in, e.g., an almost doubled number of references. The parametron invented in 1954 is also referred to, with a discussion of analogy and disparity. Various additional arguments on the advantages of complex-valued neural networks, highlighting the differences from real-valued neural networks, are also given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  13. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.
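
    The prototype-learning theory referred to above can be illustrated, in its simplest form, by a nearest-prototype classifier with one representative example per class. This hedged sketch is only that baseline illustration, not the proposed EmNN model; the class names and vectors are hypothetical.

    ```python
    # Hedged sketch: nearest-prototype classification with one prototype per class.
    import numpy as np

    prototypes = {                       # one representative feature vector per class
        "class_a": np.array([0.0, 0.0]),
        "class_b": np.array([3.0, 3.0]),
    }

    def classify(x):
        """Assign x to the class whose prototype is nearest in Euclidean distance."""
        return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

    print(classify(np.array([0.4, -0.2])), classify(np.array([2.5, 3.1])))
    ```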

  14. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  15. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and the pseudo-inverse. This is largely an exercise in understanding how our neural network code works. 1 ref.
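
    The matrix-inversion view of one-dimensional deconvolution mentioned above can be made concrete with a pseudo-inverse baseline. This hedged sketch shows only that baseline (the report's backpropagation network is not reproduced), and the kernel and signal are arbitrary.

    ```python
    # Hedged sketch: 1-D deconvolution as matrix inversion via the pseudo-inverse.
    import numpy as np
    from scipy.linalg import toeplitz

    kernel = np.array([1.0, 0.6, 0.2])
    x_true = np.array([0.0, 1.0, 0.0, 0.5, 0.0, 0.0])

    # Build the (full) convolution matrix C so that y = C @ x_true.
    first_col = np.r_[kernel, np.zeros(len(x_true) - 1)]
    first_row = np.r_[kernel[0], np.zeros(len(x_true) - 1)]
    C = toeplitz(first_col, first_row)

    y = C @ x_true
    x_est = np.linalg.pinv(C) @ y          # pseudo-inverse deconvolution
    print(np.round(x_est, 3))
    ```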

  16. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  17. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed

  18. Neural Networks and Micromechanics

    Science.gov (United States)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  19. 78 FR 7797 - Homeland Security Information Network Advisory Committee (HSINAC)

    Science.gov (United States)

    2013-02-04

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2013-0005] Homeland Security Information Network... Committee Meeting. SUMMARY: The Homeland Security Information Network Advisory Committee (HSIN AC) will meet... received by the (Homeland Security Information Network Advisory Committee), go to http://www.regulations...

  20. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for the parameterization of global solar radiation. The available data from twenty-one stations are used for training the neural network and the data from ten other stations are used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data were not used in the training, in order to demonstrate the performance of the neural network at unknown stations. The results indicate a good agreement between the parameterized solar radiation values and the actual measured values

  1. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  2. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.......Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process....

  3. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  4. Neural networks at the Tevatron

    International Nuclear Information System (INIS)

    Badgett, W.; Burkett, K.; Campbell, M.K.; Wu, D.Y.; Bianchin, S.; DeNardi, M.; Pauletta, G.; Santi, L.; Caner, A.; Denby, B.; Haggerty, H.; Lindsey, C.S.; Wainer, N.; Dall'Agata, M.; Johns, K.; Dickson, M.; Stanco, L.; Wyss, J.L.

    1992-10-01

    This paper summarizes neural network applications at the Fermilab Tevatron, including the first online hardware application in high energy physics (muon tracking): the CDF and D0 neural network triggers; offline quark/gluon discrimination at CDF; and a new tool for top-to-multijets recognition at CDF

  5. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  6. Application of neural network to CT

    International Nuclear Information System (INIS)

    Ma, Xiao-Feng; Takeda, Tatsuoki

    1999-01-01

    This paper presents a new method for two-dimensional image reconstruction by using a multilayer neural network. Multilayer neural networks are extensively investigated and practically applied to the solution of various problems such as inverse problems or time series prediction problems. By learning an input-output mapping from a set of examples, neural networks can be regarded as synthesizing an approximation of a multidimensional function (that is, solving the problem of hypersurface reconstruction, including smoothing and interpolation). From this viewpoint, neural networks are well suited to the solution of CT image reconstruction. Though the conventionally used objective function of a neural network is composed of a sum of squared errors of the output data, we can define an objective function composed of a sum of residues of an integral equation. By employing an appropriate line integral for this integral equation, we can construct a neural network that can be used for CT. We applied this method to some model problems and obtained satisfactory results. As it is not necessary to discretize the integral equation in this reconstruction method, application to problems with complicated geometrical shapes is also feasible. Moreover, in neural networks interpolation is performed quite smoothly; as a result, inverse mapping can be achieved smoothly even in the presence of experimental and numerical errors. However, use of the conventional back propagation technique for optimization leads to an expensive computation cost. To overcome this drawback, 2nd order optimization methods or parallel computing will be applied in the future. (J.P.N.)

  7. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of the basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks have been discussed in detail through illustrative examples, methods and generic applications. Extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank

  8. The effect of the neural activity on topological properties of growing neural networks.

    Science.gov (United States)

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns of the network's development; the process of creation and deletion of connections is continuous throughout the whole life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. By using a two-dimensional activity-dependent growth model we demonstrated the neural network growth process from disconnected neurons to fully connected networks. To make a quantitative investigation of the influence of the network's activity on its topological properties, we compared it with a randomly growing network that does not depend on the network's activity. Using methods from random graph theory to analyse the structure of the network's connections, it is shown that the growth in neural networks results in the formation of a well-known "small-world" network.
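
    The "small-world" comparison described above can be illustrated by contrasting the clustering and characteristic path length of a structured graph with those of a size- and density-matched random graph. In this hedged sketch a Watts-Strogatz graph merely stands in for the grown network, so the numbers are purely illustrative.

    ```python
    # Hedged sketch: small-world check via clustering coefficient and path length.
    import networkx as nx

    grown = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)               # stand-in "grown" network
    random_ref = nx.gnm_random_graph(n=100, m=grown.number_of_edges(), seed=1)

    for name, G in [("grown", grown), ("random", random_ref)]:
        # Restrict path-length computation to the largest connected component.
        giant = G.subgraph(max(nx.connected_components(G), key=len))
        print(name,
              "clustering:", round(nx.average_clustering(G), 3),
              "path length:", round(nx.average_shortest_path_length(giant), 3))
    ```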

  9. 76 FR 67750 - Homeland Security Information Network Advisory Committee

    Science.gov (United States)

    2011-11-02

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2011-0107] Homeland Security Information Network... Information Network Advisory Committee. SUMMARY: The Secretary of Homeland Security has determined that the renewal of the Homeland Security Information Network Advisory Committee (HSINAC) is necessary and in the...

  10. Enhancing neural-network performance via assortativity

    International Nuclear Information System (INIS)

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-01-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.
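
    Degree-degree correlations of the kind studied above can be measured directly with standard graph tools. This hedged sketch computes the assortativity coefficient of a toy graph; it does not reproduce the paper's noise-robustness analysis.

    ```python
    # Hedged sketch: degree assortativity of a toy network topology.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=200, m=3, seed=0)   # toy scale-free graph (typically disassortative)
    print("degree assortativity:", round(nx.degree_assortativity_coefficient(G), 3))
    ```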

  11. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, in which the neural network topology and other parameters were fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
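
    To make the GA-as-trainer idea above concrete, here is a hedged sketch in which a genetic algorithm searches the weights of a tiny one-hidden-layer network on a toy regression problem. Population size, mutation scale, and the target function are illustrative choices; the paper's Yen/USD data and MATLAB setup are not reproduced.

    ```python
    # Hedged sketch: a minimal genetic algorithm optimizing neural network weights.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(64, 2))
    y = np.sin(X[:, 0]) * X[:, 1]                    # toy regression target

    def predict(w, X):
        # Unpack a flat weight vector into a 2-6-1 network.
        W1 = w[:12].reshape(2, 6); b1 = w[12:18]
        W2 = w[18:24];             b2 = w[24]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def fitness(w):
        return -np.mean((predict(w, X) - y) ** 2)    # higher is better (negative MSE)

    pop = rng.normal(size=(50, 25))
    for _ in range(200):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-10:]]      # keep the 10 fittest individuals
        children = parents[rng.integers(10, size=40)] + 0.1 * rng.normal(size=(40, 25))
        pop = np.vstack([parents, children])         # elitism plus mutated offspring

    best = pop[np.argmax([fitness(w) for w in pop])]
    print("MSE of best individual:", round(-fitness(best), 4))
    ```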

  12. Neural Network Based Load Frequency Control for Restructuring ...

    African Journals Online (AJOL)

    Neural Network Based Load Frequency Control for Restructuring Power Industry. ... an artificial neural network (ANN) application of load frequency control (LFC) of a Multi-Area power system by using a neural network controller is presented.

  13. FOREX PREDICTION USING A NEURAL NETWORK MODEL

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision involved in predicting is very important, because prediction can help determine the forex value at a given future time and thus reduce the risk of loss. The aim of this research is to predict the forex business using a neural network model with per-minute time series data, in order to determine the prediction accuracy and thereby reduce the risk of running a forex business. The research methods in this study comprise data collection followed by training, learning and testing using a neural network. After evaluation, the results of this research show that the application of the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk in running a forex business. Keywords: prediction, forex, neural network.

  14. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  15. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  16. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451
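
    As background for the symmetric Hopfield picture referred to above (Hebbian storage, energy-gradient retrieval), here is a hedged sketch of a tiny Hopfield associative memory with asynchronous sign updates. The nonequilibrium landscape-flux theory itself is not reproduced, and the stored patterns are toy examples.

    ```python
    # Hedged sketch: Hebbian storage and asynchronous recall in a small Hopfield memory.
    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])
    W = patterns.T @ patterns / patterns.shape[1]     # Hebbian (symmetric) weight matrix
    np.fill_diagonal(W, 0)

    def recall(state, sweeps=5):
        state = state.copy()
        for _ in range(sweeps):
            for i in range(len(state)):               # asynchronous updates, one unit at a time
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    noisy = np.array([1, -1, -1, -1, 1, -1])          # corrupted version of the first pattern
    print(recall(noisy))                               # converges back to the stored pattern
    ```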

  17. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.

  18. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used to design, train and test the artificial neural network. The robust design of artificial neural networks methodology was used to design the network, which ensures that the quality of the neural network is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and, at the same time, to calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)
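
    As a rough illustration of the mapping described in this record, the sketch below trains a small feedforward network to go from Bonner-sphere count rates to a 31-group spectrum plus 13 equivalent doses. The data, response matrix and dose coefficients are random placeholders (the IAEA compilation and the UTA4 matrix are not reproduced here), so the score is not meaningful; only the input/output shapes follow the record.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data with the shapes described in the record: count rates from a
# 7-sphere Bonner system as inputs, a 31-group spectrum plus 13 equivalent
# doses as outputs (the real work used 187 IAEA spectra and the UTA4 matrix).
n_spectra, n_spheres, n_groups, n_doses = 187, 7, 31, 13
spectra = rng.random((n_spectra, n_groups))
spectra /= spectra.sum(axis=1, keepdims=True)            # normalised fluence
response = rng.random((n_spheres, n_groups))              # stand-in response matrix
dose_coeff = rng.random((n_doses, n_groups))              # stand-in dose coefficients

count_rates = spectra @ response.T                        # expected sphere readings
targets = np.hstack([spectra, spectra @ dose_coeff.T])    # spectrum + doses

X_tr, X_te, y_tr, y_te = train_test_split(count_rates, targets, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(24,), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
# The score is meaningless on random placeholder data; it only shows the wiring.
print("held-out R^2 on placeholder data:", round(net.score(X_te, y_te), 3))
```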

  19. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    International Nuclear Information System (INIS)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R.

    2006-01-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used to design, train and test the artificial neural network. The robust design of artificial neural networks methodology was used to design the network, which ensures that the quality of the neural network is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and, at the same time, to calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  20. Neural networks within multi-core optic fibers.

    Science.gov (United States)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  1. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, artificial neural networks (ANNs) have made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using genetic algorithms (GA) to optimize the topology of the ANN, but they suffer from some limitations: (1) the computation time required to train the ANN several times to reach the required average weights, (2) the slowness of the GA optimization process, and (3) fitness noise arising in the optimization of the ANN. This research suggests new approaches to overcome these limitations and find optimal neural network architectures for learning particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case study of a real complex system. The proposed system has demonstrated significantly better performance compared to two common methods used in diagnostic applications.
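
    As a sketch of the kind of topology search this record discusses, the snippet below runs a tiny genetic search over a single hidden-layer size, scoring each candidate by cross-validated accuracy. The dataset, population size, mutation step and the use of sklearn's MLPClassifier are illustrative choices; the record's own strategy for avoiding GA slowness and fitness noise is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=12, random_state=1)

def fitness(hidden):
    """Mean cross-validated accuracy of an MLP with the candidate hidden size."""
    clf = MLPClassifier(hidden_layer_sizes=(int(hidden),), max_iter=800,
                        random_state=1)
    return cross_val_score(clf, X, y, cv=3).mean()

# Tiny genetic search over the hidden-layer size (a one-gene genome).
population = [int(h) for h in rng.integers(2, 40, size=4)]
for generation in range(3):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:2]                                          # selection
    children = [max(2, p + int(rng.integers(-4, 5))) for p in parents]  # mutation
    population = parents + children
best = max(population, key=fitness)
print("best hidden-layer size found:", best)
```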

  2. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation has spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers) and parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real-world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
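
    Lecture 2 of the tutorial covers adaptive artificial neurons such as the perceptron and ADALINE and their learning rules. The minimal sketch below illustrates the ADALINE (Widrow-Hoff delta-rule) update on a synthetic linearly separable problem; the data and learning rate are illustrative choices, not material from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable problem: label is the sign of a fixed linear function.
X = rng.normal(size=(200, 2))
y = np.where(X @ np.array([1.5, -2.0]) + 0.3 > 0, 1.0, -1.0)

# ADALINE / Widrow-Hoff delta rule: adapt the weights along the gradient of the
# squared error of the *linear* output; thresholding is applied only afterwards.
w, b, lr = np.zeros(2), 0.0, 0.01
for epoch in range(50):
    for xi, yi in zip(X, y):
        err = yi - (xi @ w + b)       # error of the linear (pre-threshold) output
        w += lr * err * xi            # delta-rule weight update
        b += lr * err

accuracy = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy after delta-rule learning: {accuracy:.2f}")
```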

  3. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

    In this paper, a method for fault location in power distribution networks is presented. The proposed method uses an artificial neural network. In order to train the neural network, a series of specific characteristics are extracted from the fault signals recorded in the relay. These characteristics...... components of the sequences as well as the three-phase signals could be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the obtained inputs for the training of the neural network strongly depend on the fault angle, fault...... resistance, and fault location, the training data should be selected such that these differences are properly represented so that the neural network does not face any issues in identification. Therefore, selecting the signal processing function, data spectrum and, subsequently, statistical parameters...

  4. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  5. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neuro-biology, introducing the concept of the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  6. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  7. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  8. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
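
    A minimal sketch of the control structure this record describes, assuming a scalar plant x_dot = f(x) + u, an RBF approximator for f, and the sigma-modification update law w_dot = gamma*(e*phi(x) - sigma*w). All gains, the plant nonlinearity and the reference signal below are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Scalar plant x_dot = f(x) + u with unknown f, tracking the reference model
# xm_dot = -am*xm + am*r.  An RBF network f_hat(x) = w @ phi(x) compensates f,
# and the weights follow the sigma-modification law w_dot = gamma*(e*phi - sigma*w).
def phi(x, centers, width=0.5):
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))   # RBF features

f_true = lambda x: np.sin(2.0 * x) + 0.5 * x ** 2               # "unknown" nonlinearity
centers = np.linspace(-2.0, 2.0, 9)
w = np.zeros_like(centers)

am, gamma, sigma, dt = 2.0, 20.0, 0.01, 1e-3
x, xm = 0.0, 0.0
for k in range(20000):
    r = np.sin(0.002 * k)                       # slowly varying reference command
    e = x - xm                                  # tracking error
    u = -w @ phi(x, centers) - am * x + am * r  # NN compensation + linear control
    x += dt * (f_true(x) + u)                   # plant (Euler step)
    xm += dt * (-am * xm + am * r)              # reference model
    # Sigma-modification keeps the weight estimates bounded.
    w += dt * gamma * (e * phi(x, centers) - sigma * w)

print(f"final tracking error |x - xm| = {abs(x - xm):.4f}")
```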

  9. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W → q anti-q, where W-bosons are produced in p anti-p reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using 'intelligent' variables when the number of training examples is limited. (orig.)

  10. Inversion of a lateral log using neural networks

    International Nuclear Information System (INIS)

    Garcia, G.; Whitman, W.W.

    1992-01-01

    In this paper a technique using neural networks is demonstrated for the inversion of a lateral log. The lateral log is simulated by a finite difference method, which is in turn used as input to a backpropagation neural network. An initial guess earth model is generated from the neural network, which is then input to a Marquardt inversion. The neural network reacts to gross and subtle data features in actual logs and produces a response inferred from the knowledge stored in the network during a training process. The neural network inversion of lateral logs is tested on synthetic and field data. Tests using field data resulted in a final earth model whose simulated lateral is in good agreement with the actual log data.
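
    The two-stage workflow described here (a neural network supplies the initial earth model, which a Marquardt inversion then refines against the forward model) can be sketched as follows. The smooth placeholder forward model, the network size and the scipy/sklearn calls are assumptions; a real implementation would use the finite-difference lateral-log simulator.

```python
import numpy as np
from scipy.optimize import least_squares
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in forward model: maps a 3-parameter "earth model" (two resistivities
# and a boundary depth) to a 20-sample synthetic log.  The real workflow uses
# a finite-difference lateral-log simulator.
depths = np.linspace(0.0, 1.0, 20)
def forward(model):
    r1, r2, boundary = model
    return r1 + (r2 - r1) / (1.0 + np.exp(-(depths - boundary) * 15.0))

# Train a small network to approximate the inverse mapping log -> model.
models = rng.uniform([1, 1, 0.2], [10, 10, 0.8], size=(500, 3))
logs = np.array([forward(m) for m in models])
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
net.fit(logs, models)

# A noisy "field" log to invert.
true_model = np.array([2.0, 8.0, 0.55])
observed = forward(true_model) + rng.normal(scale=0.05, size=depths.size)

guess = net.predict(observed.reshape(1, -1))[0]           # stage 1: NN initial guess
fit = least_squares(lambda m: forward(m) - observed,      # stage 2: Levenberg-Marquardt
                    guess, method="lm")
print("NN guess:   ", np.round(guess, 2))
print("LM refined: ", np.round(fit.x, 2))
```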

  11. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  12. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural-networks-based technique. Our method of constructing a collision-free path for a moving robot among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next section of the robot's path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed technique are presented.

  13. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  14. Interpretable neural networks with BP-SOM

    NARCIS (Netherlands)

    Weijters, A.J.M.M.; Bosch, van den A.P.J.; Pobil, del A.P.; Mira, J.; Ali, M.

    1998-01-01

    Artificial Neural Networks (ANNs) are used successfully in industry and commerce. This is not surprising since neural networks are especially competitive for complex tasks for which insufficient domain-specific knowledge is available. However, interpretation of models induced by ANNs is often

  15. Adaptive nonlinear control using input normalized neural networks

    International Nuclear Information System (INIS)

    Leeghim, Henzeh; Seo, In Ho; Bang, Hyo Choong

    2008-01-01

    An adaptive feedback linearization technique combined with a neural network is addressed to control uncertain nonlinear systems. Neural network-based adaptive control theory has been widely studied. However, the stability analysis of the closed-loop system with the neural network is rather complicated and difficult to understand, and sometimes unnecessary assumptions are involved. Here, such unnecessary assumptions for stability analysis are avoided by using a neural network with an input normalization technique. The ultimate boundedness of the tracking error is simply proved by Lyapunov stability theory. A new, simple update law for adaptive nonlinear control is derived by simplification of the input-normalized neural network, assuming the variation of the uncertain term is sufficiently small

  16. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural...... network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high speed performance, but the neural network cannot be used to extract...... knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model....

  17. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  18. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
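
    A minimal ring-network rate model with spike-frequency adaptation can illustrate two of the reported effects: the response to a sustained stimulus decays as the adaptation variable builds up, while the equilibrium bump position (the integrated estimate) is unchanged by SFA. The network size, weight profile and time constants below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Ring of N rate neurons with preferred angles theta_i, cosine-shaped recurrent
# weights, and a slow activity-dependent adaptation variable a_i (the SFA term).
N = 60
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
W = (-0.5 + 1.5 * np.cos(theta[:, None] - theta[None, :])) / N
stimulus = np.cos(theta - np.pi / 2)          # sustained input centred at pi/2

def run(sfa_strength, steps=6000, dt=0.01):
    r = np.zeros(N)                           # firing rates
    a = np.zeros(N)                           # adaptation variables
    peak = []
    for _ in range(steps):
        drive = W @ r + stimulus - a
        r += dt * (-r + np.maximum(drive, 0.0)) / 0.1   # fast rate dynamics
        a += dt * (-a + sfa_strength * r) / 2.0          # slow SFA dynamics
        peak.append(r.max())
    bump_angle = np.angle(np.sum(r * np.exp(1j * theta)))
    return peak, bump_angle

for g in (0.0, 0.5):
    peak, angle = run(g)
    print(f"SFA={g:.1f}: early peak {peak[100]:.2f}, late peak {peak[-1]:.2f}, "
          f"bump position {angle:.2f} rad")
```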

  19. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using an artificial neural network are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, and the current day of the week is presented. The developed network is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is the self-determination of the optimal topology of the neural network, which makes it flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
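
    A rough sketch of the described architecture: four technical indicators plus a day-of-week input feed a single-hidden-layer network trained by backpropagation to predict next-day price direction. The synthetic price series and the particular indicator definitions are assumptions, since the record does not name the indicators used.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic daily closing prices stand in for real market data.
prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 600))
window = 10

# Four simple indicators plus a pseudo day-of-week input.
momentum = prices[window:] - prices[:-window]
sma = np.convolve(prices, np.ones(window) / window, mode="valid")[:momentum.size]
volatility = np.array([prices[i - window:i].std() for i in range(window, prices.size)])
rate_of_change = (prices[window:] - prices[:-window]) / prices[:-window]
day_of_week = np.arange(prices.size)[window:] % 5

X = np.column_stack([momentum, sma, volatility, rate_of_change, day_of_week])
y = (np.diff(prices[window:]) > 0).astype(int)      # next-day up/down label
X = X[:-1]                                          # align features with labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("directional accuracy on held-out days:", round(net.score(X_te, y_te), 3))
```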

  20. A fuzzy neural network for sensor signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2000-01-01

    In this work, a fuzzy neural network is used to estimate the relevant sensor signal using other sensor signals. Noise components in the input signals to the fuzzy neural network are removed through the wavelet denoising technique. Principal component analysis (PCA) is used to reduce the dimension of the input space without losing a significant amount of information. A lower-dimensional input space will also usually reduce the time necessary to train a fuzzy neural network. Also, the principal component analysis makes it easier to select the input signals for the fuzzy neural network. The fuzzy neural network parameters are optimized by two learning methods. A genetic algorithm is used to optimize the antecedent parameters of the fuzzy neural network, and a least-squares algorithm is used to solve for the consequent parameters. The proposed algorithm was verified through application to the pressurizer water level and the hot-leg flowrate measurements in pressurized water reactors.
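
    The PCA step of the described pipeline, reducing a redundant set of sensor channels to a few principal components before they enter the fuzzy neural network, can be sketched as below. The synthetic sensor data and the 99% variance threshold are illustrative; the wavelet denoising, genetic-algorithm and least-squares stages are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for correlated plant sensor channels: 200 samples of 12 sensors
# driven by only 3 latent factors plus a little noise.
latent = rng.normal(size=(200, 3))
sensors = latent @ rng.normal(size=(3, 12)) + 0.05 * rng.normal(size=(200, 12))

# PCA compresses the 12 redundant channels into a few principal components,
# which would then serve as inputs to the fuzzy neural network (not shown).
pca = PCA(n_components=0.99)               # keep 99% of the variance
reduced = pca.fit_transform(sensors)
print("input dimension:", sensors.shape[1], "->", reduced.shape[1])
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```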

  1. Multistability in bidirectional associative memory neural networks

    International Nuclear Information System (INIS)

    Huang Gan; Cao Jinde

    2008-01-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of these equilibria are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results.
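
    For readers unfamiliar with the BAM architecture analysed here, the sketch below implements the classical discrete bidirectional associative memory: two layers coupled through one weight matrix, bipolar states, and sign-threshold updates alternating between layers. The stored pattern pairs are arbitrary examples, and the sketch does not reproduce the paper's delay or multistability analysis.

```python
import numpy as np

# Classical discrete BAM: two layers of 5 neurons, bipolar states, a single
# weight matrix built by Hebbian outer products, and alternating sign updates.
X = np.array([[ 1, -1,  1, -1,  1],     # layer-1 patterns
              [-1, -1,  1,  1, -1]])
Y = np.array([[ 1,  1, -1, -1,  1],     # associated layer-2 patterns
              [-1,  1,  1, -1, -1]])
W = X.T @ Y                              # outer-product storage

def recall(x, iterations=10):
    y = np.sign(x @ W)
    for _ in range(iterations):          # iterate until the pair stabilises
        x = np.sign(y @ W.T)
        y = np.sign(x @ W)
    return x, y

noisy = X[0].copy()
noisy[0] *= -1                           # corrupt one bit of the first pattern
x_rec, y_rec = recall(noisy)
print("recovered layer-1 pattern: ", x_rec)
print("associated layer-2 pattern:", y_rec)
```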

  2. Multistability in bidirectional associative memory neural networks

    Science.gov (United States)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of these equilibria are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  3. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we train neural networks, in a supervised fashion, to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show the remarkable success of the neural network in capturing the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
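
    The "discrete version of the winding number formula" that the trained network is reported to rediscover can be written down directly; the sketch below evaluates it for the SSH model, used here as an assumed example of a one-dimensional chiral-symmetric Hamiltonian.

```python
import numpy as np

def winding_number(t1, t2, n_k=401):
    """Discrete winding number of the SSH Hamiltonian h(k) = t1 + t2*exp(ik).

    The winding number is the total angle swept by (h_x, h_y) around the
    origin as k runs over the Brillouin zone, divided by 2*pi.
    """
    k = np.linspace(-np.pi, np.pi, n_k)
    h = (t1 + t2 * np.cos(k)) + 1j * (t2 * np.sin(k))
    phase = np.unwrap(np.angle(h))
    return int(np.rint((phase[-1] - phase[0]) / (2.0 * np.pi)))

for t1, t2 in [(1.5, 1.0), (0.5, 1.0)]:
    print(f"t1={t1}, t2={t2}: winding number = {winding_number(t1, t2)}")
```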

  4. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    A hybrid of the two called Elman-Jordan (or Multi-recurrent) neural network is also being used. In this study, we evaluated the performance of these neural networks on three established bench mark time series prediction problems. Results from the experiments showed that Jordan neural network performed significantly ...

  5. Quantum neural networks: Current status and prospects for development

    Science.gov (United States)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.

  6. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost nonlinear neural networks provide us with improved reconstruction and prediction capabilities for the near wall velocity fields. Based on these results advantages and drawbacks of both approaches are discussed with an outlook toward the development of near wall models for turbulence modeling and control

  7. Application of neural networks in CRM systems

    Directory of Open Access Journals (Sweden)

    Bojanowska Agnieszka

    2017-01-01

    Full Text Available The central aim of this study is to investigate how to apply artificial neural networks in Customer Relationship Management (CRM. The paper presents several business applications of neural networks in software systems designed to aid CRM, e.g. in deciding on the profitability of building a relationship with a given customer. Furthermore, a framework for a neural-network based CRM software tool is developed. Building beneficial relationships with customers is generating considerable interest among various businesses, and is often mentioned as one of the crucial objectives of enterprises, next to their key aim: to bring satisfactory profit. There is a growing tendency among businesses to invest in CRM systems, which together with an organisational culture of a company aid managing customer relationships. It is the sheer amount of gathered data as well as the need for constant updating and analysis of this breadth of information that may imply the suitability of neural networks for the application in question. Neural networks exhibit considerably higher computational capabilities than sequential calculations because the solution to a problem is obtained without the need for developing a special algorithm. In the majority of presented CRM applications neural networks constitute and are presented as a managerial decision-taking optimisation tool.

  8. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  9. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  10. Mode Choice Modeling Using Artificial Neural Networks

    OpenAIRE

    Edara, Praveen Kumar

    2003-01-01

    Artificial intelligence techniques have produced excellent results in many diverse fields of engineering. Techniques such as neural networks and fuzzy systems have found their way into transportation engineering. In recent years, neural networks are being used instead of regression techniques for travel demand forecasting purposes. The basic reason lies in the fact that neural networks are able to capture complex relationships and learn from examples and also able to adapt when new data becom...

  11. Neutron spectrometry with artificial neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A.; Iniguez de la Torre Bayo, M.P.; Barquero, R.; Arteaga A, T.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of Artificial Neural Networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  12. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others]

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  13. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  14. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A.; Gallego, E.; Lorente, A.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  15. Artificial neural networks for plasma spectroscopy analysis

    International Nuclear Information System (INIS)

    Morgan, W.L.; Larsen, J.T.; Goldstein, W.H.

    1992-01-01

    Artificial neural networks have been applied to a variety of signal processing and image recognition problems. Of the several common neural models the feed-forward, back-propagation network is well suited for the analysis of scientific laboratory data, which can be viewed as a pattern recognition problem. The authors present a discussion of the basic neural network concepts and illustrate its potential for analysis of experiments by applying it to the spectra of laser produced plasmas in order to obtain estimates of electron temperatures and densities. Although these are high temperature and density plasmas, the neural network technique may be of interest in the analysis of the low temperature and density plasmas characteristic of experiments and devices in gaseous electronics

  16. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks, based on gradient descent, has the significant drawback of slow convergence. A Gauss-Newton method based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.
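
    The core recursive-least-squares recursion behind such an algorithm can be sketched on a simpler, linear-in-parameters case: the output weights of a fixed random-feature network. The recurrent-network setting and the dynamic error backpropagation of the paper are not reproduced; the feature map, forgetting factor and target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# RLS recursion applied to the linear output weights of a fixed random-feature
# network approximating an unknown map.  The weights converge in a single pass
# over a few thousand samples, illustrating why RLS-type training is much
# faster than plain gradient descent.
n_features, lam = 30, 0.999                 # feature count, forgetting factor
W_in = rng.normal(size=n_features)
b_in = rng.uniform(-2.0, 2.0, size=n_features)
hidden = lambda x: np.tanh(W_in * x + b_in)
target = lambda x: np.sin(3.0 * x) + 0.3 * x

w = np.zeros(n_features)                    # output weights
P = np.eye(n_features) * 1e3                # inverse correlation estimate

for _ in range(2000):
    x = rng.uniform(-2.0, 2.0)
    h = hidden(x)
    err = target(x) - w @ h
    gain = P @ h / (lam + h @ P @ h)        # Kalman-like gain
    w += gain * err                         # parameter update
    P = (P - np.outer(gain, h @ P)) / lam   # covariance update

x_test = np.linspace(-2.0, 2.0, 200)
pred = np.array([w @ hidden(x) for x in x_test])
rmse = float(np.sqrt(np.mean((pred - target(x_test)) ** 2)))
print(f"test RMSE after one RLS pass: {rmse:.4f}")
```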

  17. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  18. Sensitive quantitative predictions of peptide-MHC binding by a 'Query by Committee' artificial neural network approach

    DEFF Research Database (Denmark)

    Buus, S.; Lauemoller, S.L.; Worning, Peder

    2003-01-01

    We have generated Artificial Neural Networks (ANN) capable of performing sensitive, quantitative predictions of peptide binding to the MHC class I molecule, HLA-A*0204. We have shown that such quantitative ANN are superior to conventional classification ANN, that have been trained to predict...

  19. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  20. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.

  1. Inverting radiometric measurements with a neural network

    Science.gov (United States)

    Measure, Edward M.; Yee, Young P.; Balding, Jeff M.; Watkins, Wendell R.

    1992-02-01

    A neural network scheme for retrieving remotely sensed vertical temperature profiles was applied to observed ground based radiometer measurements. The neural network used microwave radiance measurements and surface measurements of temperature and pressure as inputs. Because the microwave radiometer is capable of measuring 4 oxygen channels at 5 different elevation angles (9, 15, 25, 40, and 90 degs), 20 microwave measurements are potentially available. Because these measurements have considerable redundancy, a neural network was experimented with, accepting as inputs microwave measurements taken at 53.88 GHz, 40 deg; 57.45 GHz, 40 deg; and 57.45, 90 deg. The primary test site was located at White Sands Missile Range (WSMR), NM. Results are compared with measurements made simultaneously with balloon borne radiosonde instruments and with radiometric temperature retrievals made using more conventional retrieval algorithms. The neural network was trained using a Widrow-Hoff delta rule procedure. Functions of date to include season dependence in the retrieval process and functions of time to include diurnal effects were used as inputs to the neural network.

  2. A TLD dose algorithm using artificial neural networks

    International Nuclear Information System (INIS)

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-01-01

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of the functional link network (FLN). A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input) many times. The algorithm, trained in this way, eventually becomes capable of producing its own solution to similar (but not exactly the same) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD, and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.

  3. Artificial Astrocytes Improve Neural Network Performance

    Science.gov (United States)

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performances of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  4. Artificial astrocytes improve neural network performance.

    Directory of Open Access Journals (Sweden)

    Ana B Porto-Pazos

    Full Text Available Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performances of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  5. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performances of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  6. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the jet data from MiniAODSIM (AK4 CHS). The jet data were not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the jet data.

  7. A neural network approach to burst detection.

    Science.gov (United States)

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the issue of detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.

  8. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
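
    A minimal sketch of the two-phase idea: a winner-take-all competitive pass positions the RBF centres, after which the RBF output weights are fitted by regularised least squares. The synthetic descriptors, number of centres, learning rate and shared kernel width are assumptions; they stand in for the phenol/ROCK-inhibitor datasets and for whatever centre-update rule the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic descriptors/activities stand in for the phenol and ROCK datasets.
X = rng.normal(size=(300, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

# Phase 1: simple competitive learning (winner-take-all) positions the centres.
n_centres, lr = 12, 0.05
centres = X[rng.choice(len(X), n_centres, replace=False)].copy()
for epoch in range(30):
    for x in X[rng.permutation(len(X))]:
        winner = np.argmin(np.linalg.norm(centres - x, axis=1))
        centres[winner] += lr * (x - centres[winner])   # move the winner only

# Phase 2: RBF design matrix with a shared width, then regularised least squares.
dist = np.linalg.norm(X[:, None] - centres[None], axis=2)
width = dist.mean()
Phi = np.exp(-dist ** 2 / (2.0 * width ** 2))
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(n_centres), Phi.T @ y)

rmse = float(np.sqrt(np.mean((Phi @ w - y) ** 2)))
print(f"training RMSE of the SCL+RBF model: {rmse:.3f}")
```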

  9. Neural network classifier of attacks in IP telephony

    Science.gov (United States)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and hardening of the network. This analysis is typically based on statistical methods, and the article brings a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms. One of them is based on the multilayer perceptron neural network. The article describes the inner structure of the neural network used and also information about the implementation of this network. The learning set for this neural network is based on real attack data collected from the IP telephony honeypot called Dionaea. We prepare the learning set from real attack data after collecting, cleaning and aggregating this information. After proper learning, the neural network is capable of classifying 6 types of the most commonly used VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks which are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precaution steps against attacks.

  10. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available One of the most important tasks in the machine learning area is classification, and neural networks are very important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval pattern matcher in this paper. After summarizing the theoretical construction of the model, we carry out a simple and practical weather forecasting experiment, which shows that the recognizer accuracy reaches 100%, a promising result.

  11. Artificial Neural Networks and the Mass Appraisal of Real Estate

    Directory of Open Access Journals (Sweden)

    Gang Zhou

    2018-03-01

    Full Text Available With the rapid development of computing, artificial intelligence and big data technology, artificial neural networks have become one of the most powerful machine learning algorithms. In practice, most applications of artificial neural networks use the back propagation neural network and its variations. Beyond the standard back propagation network, various neural networks have been developed in order to improve the performance of the standard models. Though neural networks are a well-known method in real estate research, there is enormous scope for future research to enhance their function. Some scholars combine genetic algorithms, geospatial information, support vector machine models or particle swarm optimization with artificial neural networks to appraise real estate, which complements the existing appraisal technology. The mass appraisal of real estate in this paper includes real estate valuation in transactions and tax base valuation for real estate holdings. In this study we focus on the theoretical development of artificial neural networks and the mass appraisal of real estate, artificial neural network model evolution and algorithm improvement, and artificial neural network practice and application, and we review the existing literature on artificial neural networks and the mass appraisal of real estate. Finally, we provide some suggestions for the mass appraisal of China's real estate.

  12. Introduction to neural networks with electric power applications

    International Nuclear Information System (INIS)

    Wildberger, A.M.; Hickok, K.A.

    1990-01-01

    This is an introduction to the general field of neural networks with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in neural networks and to recognize those which might impact on electric power engineering. Beginning with a brief discussion of natural and artificial neurons, the characteristics of neural networks in general and how they learn, neural networks are compared with other modeling tools such as simulation and expert systems in order to provide guidance in selecting appropriate applications. In the power industry, possible applications include plant control, dispatching, and maintenance scheduling. In particular, neural networks are currently being investigated for enhancements to the Thermal Performance Advisor (TPA) which General Physics Corporation (GP) has developed to improve the efficiency of electric power generation

  13. Controlling the dynamics of multi-state neural networks

    International Nuclear Information System (INIS)

    Jin, Tao; Zhao, Hong

    2008-01-01

    In this paper, we first analyze the distribution of local fields (DLF) which is induced by the memory patterns in the Q-Ising model. It is found that the structure of the DLF is closely correlated with the network dynamics and the system performance. However, the design rule adopted in the Q-Ising model, like the other rules adopted for multi-state neural networks with associative memories, cannot be applied to directly control the DLF for a given set of memory patterns, and thus cannot be applied to further study the relationships between the structure of the DLF and the dynamics of the network. We then extend a design rule, which was presented recently for designing binary-state neural networks, to make it suitable for designing general multi-state neural networks. This rule is able to control the structure of the DLF as expected. We show that controlling the DLF not only can affect the dynamic behaviors of the multi-state neural networks for a given set of memory patterns, but also can improve the storage capacity. With the change of the DLF, the network shows very rich dynamic behaviors, such as the 'chaos phase', the 'memory phase', and the 'mixture phase'. These dynamic behaviors are also observed in the binary-state neural networks; therefore, our results imply that they may be the universal behaviors of feedback neural networks

  14. Face recognition based on improved BP neural network

    Directory of Open Access Journals (Sweden)

    Yue Gaili

    2017-01-01

    Full Text Available In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract the features of the equalized image, yielding its principal components. The BP neural network is then trained on the resulting training samples. An improved weight adjustment method is used to train the network, because the conventional BP algorithm suffers from slow convergence and a tendency to fall into local minima during training. Finally, the trained BP neural network is used to classify and identify the test face images, and the recognition rate is obtained. Simulation experiments on the ORL face database show that the improved BP neural network face recognition method can effectively improve the recognition rate.
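
    The pipeline in the abstract (histogram equalization, then PCA, then a trained classifier) can be sketched as follows. The images are random placeholders rather than ORL faces, and a nearest-mean classifier stands in for the improved BP network to keep the example short; only the preprocessing and feature-extraction steps follow the description above.

      import numpy as np

      rng = np.random.default_rng(3)

      def hist_equalize(img):
          """Histogram equalization of an 8-bit grey-level image."""
          hist = np.bincount(img.ravel(), minlength=256)
          cdf = np.cumsum(hist) / img.size
          return (cdf[img] * 255).astype(np.uint8)

      # Placeholder face data: random 32x32 images for 4 subjects (the paper uses ORL).
      images = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
      labels = np.repeat(np.arange(4), 10)

      # Step 1: preprocessing; Step 2: PCA feature extraction via SVD.
      flat = np.array([hist_equalize(im).ravel() for im in images], dtype=float)
      mean = flat.mean(axis=0)
      U, S, Vt = np.linalg.svd(flat - mean, full_matrices=False)
      features = (flat - mean) @ Vt[:20].T        # project onto 20 principal components

      # Step 3: the extracted features would then be fed to a BP neural network;
      # a nearest-mean classifier stands in here to keep the sketch short.
      class_means = np.array([features[labels == c].mean(axis=0) for c in range(4)])
      pred = np.argmin(np.linalg.norm(features[:, None] - class_means, axis=2), axis=1)
      print("training accuracy:", np.mean(pred == labels))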

  15. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  16. Neutron spectrometry using artificial neural networks

    International Nuclear Information System (INIS)

    Vega-Carrillo, Hector Rene; Martin Hernandez-Davila, Victor; Manzanares-Acuna, Eduardo; Mercado Sanchez, Gema A.; Pilar Iniguez de la Torre, Maria; Barquero, Raquel; Palacios, Francisco; Mendez Villafane, Roberto; Arteaga Arteaga, Tarcicio; Manuel Ortiz Rodriguez, Jose

    2006-01-01

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors; spectra based on mathematical functions; as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing were carried out using the Matlab(R) program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem.
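
    The fold/unfold workflow described above can be sketched as follows, assuming a synthetic response matrix in place of UTA4 and a linear least-squares mapping in place of the trained ANN. It only illustrates how count rates are generated by folding spectra with a response matrix and how an inverse mapping can be learned from training pairs; the sizes mirror the abstract (7 spheres, 31 groups, 129 spectra) but the numbers themselves are invented.

      import numpy as np

      rng = np.random.default_rng(4)

      # Illustrative sizes only: 7 Bonner spheres and 31 energy groups; the
      # response matrix here is synthetic, not the UTA4 matrix.
      n_spheres, n_groups, n_train = 7, 31, 129
      R = rng.uniform(0.1, 1.0, size=(n_spheres, n_groups))      # response matrix
      spectra = rng.dirichlet(np.ones(n_groups), size=n_train)   # training spectra
      counts = spectra @ R.T                                      # folded count rates

      # "Training": learn the inverse mapping counts -> spectrum by least squares
      # (a linear stand-in for the ANN used in the paper).
      W, *_ = np.linalg.lstsq(counts, spectra, rcond=None)

      # Unfold a new spectrum from its simulated count rates and compare by RMSE.
      true = rng.dirichlet(np.ones(n_groups))
      unfolded = (true @ R.T) @ W
      print("RMSE:", np.sqrt(np.mean((unfolded - true) ** 2)))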

  17. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  18. Neural Network Algorithm for Particle Loading

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given

  19. Memory in Neural Networks and Glasses

    NARCIS (Netherlands)

    Heerema, M.

    2000-01-01

    The thesis attempts to model a neural network in a way which, at essential points, is biologically realistic. In a biological context, the changes of the synapses of the neural network are most often described by what is called `Hebb's learning rule'. On careful analysis it is, in fact, nothing but a
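
    For reference, the Hebbian update mentioned above is commonly written, for binary ±1 patterns, as Δw_ij = (1/N)·x_i·x_j with no self-connections. A minimal sketch of this standard textbook form (not the thesis's refined formulation) is:

      import numpy as np

      # Hebbian weight update for storing binary (+1/-1) patterns:
      # w_ij += (1/N) * x_i * x_j, with the diagonal (self-connections) set to zero.
      def hebb_weights(patterns):
          N = patterns.shape[1]
          W = sum(np.outer(p, p) for p in patterns) / N
          np.fill_diagonal(W, 0.0)
          return W

      patterns = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])
      print(hebb_weights(patterns))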

  20. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented in circuits and can be seen as an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of handling the set-valued map in the differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, some numerical experiments show the effectiveness of the proposed network.
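
    A discrete-time sketch of the underlying idea, assuming a smoothed l1 penalty sqrt(x² + μ²) and plain gradient descent: the continuous-time network dynamics and the projection operator from the paper are not reproduced, and the problem sizes, penalty weight and step size are arbitrary.

      import numpy as np

      rng = np.random.default_rng(5)

      # Sparse recovery test problem: b = A @ x_true with a 5-sparse x_true.
      m, n = 50, 200
      A = rng.normal(size=(m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
      b = A @ x_true

      # Smoothed l1 penalty: sqrt(x^2 + mu^2) approximates |x| and is differentiable.
      lam, mu, lr = 0.01, 1e-3, 0.1
      x = np.zeros(n)
      for _ in range(3000):
          grad = A.T @ (A @ x - b) + lam * x / np.sqrt(x ** 2 + mu ** 2)
          x -= lr * grad

      print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))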

  1. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  2. Self-organized critical neural networks

    International Nuclear Information System (INIS)

    Bornholdt, Stefan; Roehl, Torsten

    2003-01-01

    A mechanism for self-organization of the degree of connectivity in model neural networks is studied. Network connectivity is regulated locally on the basis of an order parameter of the global dynamics, which is estimated from an observable at the single-synapse level. This principle is studied in a two-dimensional neural network with randomly wired asymmetric weights. In this class of networks, network connectivity is closely related to a phase transition between ordered and disordered dynamics. A slow topology change is imposed on the network through a local rewiring rule motivated by activity-dependent synaptic development: neighbor neurons whose activity is correlated, on average, develop a new connection, while uncorrelated neighbors tend to disconnect. As a result, robust self-organization of the network towards the order-disorder transition occurs. Convergence is independent of initial conditions, robust against thermal noise, and does not require fine tuning of parameters

  3. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  4. Tensor Basis Neural Network v. 1.0 (beta)

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-28

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.

  5. Storage capacity and retrieval time of small-world neural networks

    International Nuclear Information System (INIS)

    Oshima, Hiraku; Odagaki, Takashi

    2007-01-01

    To understand the influence of structure on the function of neural networks, we study the storage capacity and the retrieval time of Hopfield-type neural networks for four network structures: regular, small-world, and random networks generated by the Watts-Strogatz (WS) model, and the neural network of the nematode Caenorhabditis elegans. Using computer simulations, we find that (1) as the randomness of the network is increased, its storage capacity is enhanced; (2) the retrieval time of WS networks does not depend on the network structure, but the retrieval time of the C. elegans neural network is longer than that of WS networks; (3) the storage capacity of the C. elegans network is smaller than that of networks generated by the WS model, though the neural network of C. elegans is considered to be a small-world network
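
    The kind of experiment summarized above can be sketched as follows: build a Watts-Strogatz adjacency matrix, store patterns with a Hebbian rule restricted to existing edges, and test retrieval from a corrupted cue. The network size, rewiring probability and number of stored patterns are illustrative only, not the values used in the paper.

      import numpy as np

      rng = np.random.default_rng(6)

      def watts_strogatz(n, k, p):
          """Adjacency matrix of a WS ring lattice (k neighbours on each side),
          with each right-hand edge rewired with probability p."""
          A = np.zeros((n, n), dtype=int)
          for i in range(n):
              for j in range(1, k + 1):
                  A[i, (i + j) % n] = A[(i + j) % n, i] = 1
          for i in range(n):
              for j in range(1, k + 1):
                  if rng.random() < p:
                      old = (i + j) % n
                      new = rng.choice([m for m in range(n) if m != i and A[i, m] == 0])
                      A[i, old] = A[old, i] = 0
                      A[i, new] = A[new, i] = 1
          return A

      n, k, p = 100, 10, 0.1
      A = watts_strogatz(n, k, p)
      patterns = rng.choice([-1, 1], size=(2, n))
      W = A * sum(np.outer(q, q) for q in patterns) / n       # Hebbian weights on the graph

      # Recall test: flip 10 bits of one stored pattern and iterate the Hopfield update.
      state = patterns[0].copy()
      state[rng.choice(n, 10, replace=False)] *= -1
      for _ in range(20):
          state = np.sign(W @ state + 1e-9)                   # avoid sign(0)
      print("overlap with stored pattern:", np.mean(state == patterns[0]))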

  6. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    Full Text Available One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA on a planar substrate, each electrode being covered and surrounded by a local circularly confined network (“island” of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of neurophobic, polyimide-coated substrate by triblock-copolymer coating enhances neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm, the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV, which is two weeks later than in unpatterned networks.

  7. Complex-valued neural networks advances and applications

    CERN Document Server

    Hirose, Akira

    2013-01-01

    Presents the latest advances in complex-valued neural networks by demonstrating the theory in a wide range of applications Complex-valued neural networks is a rapidly developing neural network framework that utilizes complex arithmetic, exhibiting specific characteristics in its learning, self-organizing, and processing dynamics. They are highly suitable for processing complex amplitude, composed of amplitude and phase, which is one of the core concepts in physical systems to deal with electromagnetic, light, sonic/ultrasonic waves as well as quantum waves, namely, electron and

  8. 78 FR 71631 - Committee Name: Homeland Security Information Network Advisory Committee (HSINAC)

    Science.gov (United States)

    2013-11-29

    ... DEPARTMENT OF HOMELAND SECURITY [DHS-2013-0037] Committee Name: Homeland Security Information.... SUMMARY: The Homeland Security Information Network Advisory Council (HSINAC) will meet December 17, 2013... , Phone: 202-343-4212. SUPPLEMENTARY INFORMATION: The Homeland Security Information Network Advisory...

  9. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... an OCR using Neural Network classifier preceded by a set of preprocessing .... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantages and disadvantages of each technique. In [9], Khemiri ...

  10. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to developing realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm, i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data, to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  11. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is addressed to investigators in visual pattern recognition, Artificial Neural Networks and related disciplines. The document also describes the MemBrain application environment as a powerful and easy-to-use neural network editor and simulator supporting ANNs.

  12. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.

  13. Artificial Neural Networks For Hadron Hadron Cross-sections

    International Nuclear Information System (INIS)

    ELMashad, M.; ELBakry, M.Y.; Tantawy, M.; Habashy, D.M.

    2011-01-01

    In recent years artificial neural networks (ANN) have emerged as a mature and viable framework with many applications in various areas. Artificial neural network theory is sometimes used to refer to a branch of computational science that uses neural networks as models to either simulate or analyze complex phenomena and/or study the principles of operation of neural networks analytically. In this work a model of hadron-hadron collisions using the ANN technique is presented; the ANN-based model calculates the cross sections of hadron-hadron collisions. The results amply demonstrate the feasibility of this new technique in extracting the collision features and prove its effectiveness.

  14. Foreign currency rate forecasting using neural networks

    Science.gov (United States)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks to predicting daily foreign exchange rates between the USD, GBP and DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecast solely from past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison between using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
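
    A minimal sketch of forecasting from rate differences, the input form the abstract recommends: a synthetic series and an ordinary least-squares autoregression stand in for the historical USD-GBP data and the trained neural network, and the lag count and split point are arbitrary.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic stand-in for a daily exchange-rate series: a slow trend plus noise
      # (the paper uses historical USD-GBP rates, which are not reproduced here).
      t = np.arange(1000)
      rate = 0.65 + 0.02 * np.sin(t / 50) + np.cumsum(rng.normal(0, 5e-4, t.size))

      # Inputs are past rate *differences* rather than raw rates.
      diff = np.diff(rate)
      k = 5
      X = np.column_stack([diff[i:len(diff) - k + i] for i in range(k)])
      y = diff[k:]

      split = 800
      W, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)   # linear stand-in for the ANN
      pred = X[split:] @ W
      print("out-of-sample direction accuracy:",
            np.mean(np.sign(pred) == np.sign(y[split:])))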

  15. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  16. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  17. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    Generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic cost and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost in the proposed learning method. Due to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution to the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered as an efficient way of reducing the time requirement problem of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
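
    The claim that a logarithmic cost converges in fewer iterations than squared error can be illustrated generically (this is not the paper's exact cost function): for a sigmoid output unit, the squared-error gradient vanishes as the unit saturates, while the logarithmic (cross-entropy) gradient does not.

      import numpy as np

      # Gradients with respect to the pre-activation z of a sigmoid unit with target t:
      # squared error 0.5*(p - t)^2 versus logarithmic cost -[t*log p + (1-t)*log(1-p)].
      def grads(z, t=1.0):
          p = 1 / (1 + np.exp(-z))
          g_sq = (p - t) * p * (1 - p)     # vanishes when p saturates near 0 or 1
          g_log = p - t                    # stays proportional to the error
          return g_sq, g_log

      for z in (-4.0, -2.0, 0.0):
          print(z, grads(z))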

  18. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Science.gov (United States)

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  19. Thermoelastic steam turbine rotor control based on neural network

    Science.gov (United States)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. Such neural networks are algorithms which can be implemented on PLC controllers. This allows for the application of neural networks to controlling steam turbine stress in industrial power plants.
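
    The NARX structure described above, with y(t) predicted from lagged outputs and lagged exogenous inputs, can be sketched as below. Synthetic data and a linear least-squares regressor stand in for the FE-generated turbine data and the trained NARX network; the variable names and lag orders are assumptions for the example.

      import numpy as np

      rng = np.random.default_rng(8)

      # Synthetic stand-in for turbine data: exogenous input u (e.g. steam temperature)
      # and response y (e.g. a rotor stress indicator) generated by a simple lagged model.
      n = 2000
      u = np.cumsum(rng.normal(0, 0.1, n))
      y = np.zeros(n)
      for t in range(2, n):
          y[t] = 0.6 * y[t - 1] + 0.3 * u[t - 1] - 0.1 * u[t - 2] + rng.normal(0, 0.01)

      # NARX regressor: predict y(t) from na past outputs and nb past exogenous inputs.
      na, nb = 2, 2
      rows = [np.concatenate([y[t - na:t], u[t - nb:t]]) for t in range(max(na, nb), n)]
      X = np.array(rows)
      target = y[max(na, nb):]

      W, *_ = np.linalg.lstsq(X, target, rcond=None)   # linear stand-in for the NARX net
      pred = X @ W
      print("one-step RMSE:", np.sqrt(np.mean((pred - target) ** 2)))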

  20. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the neural network attractive. A neural network is an information processing system modeled on the structure of the dynamic process. It can solve the complex/nonlinear problems quickly once trained by operating on problems using an interconnected number...

  1. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for?

  2. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference from existing approaches is that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.

  3. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  4. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  5. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  6. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitably 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  7. Neural Network to Solve Concave Games

    OpenAIRE

    Liu, Zixin; Wang, Nengfa

    2014-01-01

    The issue of using a neural network method to solve concave games is considered. Combined with variational inequalities, the Ky Fan inequality, and projection equations, concave games are transformed into a neural network model. On the basis of Lyapunov stability theory, some stability results are also given. Finally, simulation results for two classic games are given to illustrate the theoretical results.

  8. Topology influences performance in the associative memory neural networks

    International Nuclear Information System (INIS)

    Lu Jianquan; He Juan; Cao Jinde; Gao Zhiqiang

    2006-01-01

    To explore how topology affects performance within Hopfield-type associative memory neural networks (AMNNs), we studied the computational performance of neural networks with regular lattice, random, small-world, and scale-free structures. In this Letter, we found that the memory performance of neural networks obtained through asynchronous updating from 'larger' nodes to 'smaller' nodes is better than that obtained with asynchronous updating in random order, especially for the scale-free topology. The computational performance of associative memory neural networks linked by the above-mentioned network topologies with the same numbers of nodes (neurons) and edges (synapses) was studied respectively. As topologies become more random and less locally ordered, the performance of the associative memory neural network is considerably improved. By comparison, we show that the regular lattice and the random network form two extremes in terms of pattern stability and retrievability. For a network, its pattern stability and retrievability can be largely enhanced by adding a random component or some shortcuts to its structured component. According to the conclusions of this Letter, we can design associative memory neural networks with high performance and minimal interconnect requirements

  9. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although linear models have been widely used for bushings in vehicle suspension systems, they cannot express the nonlinear characteristics of a bushing in terms of amplitude and frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model was employed to represent linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture the bushing characteristics, where sine excitation with different frequencies and amplitudes was applied. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to those of the linear model under several maneuvers

  10. An introduction to neural network methods for differential equations

    CERN Document Server

    Yadav, Neha; Kumar, Manoj

    2015-01-01

    This book introduces a variety of neural network methods for solving differential equations arising in science and engineering. The emphasis is placed on a deep understanding of the neural network techniques, which has been presented in a mostly heuristic and intuitive manner. This approach will enable the reader to understand the working, efficiency and shortcomings of each neural network technique for solving differential equations. The objective of this book is to provide the reader with a sound understanding of the foundations of neural networks, and a comprehensive introduction to neural network methods for solving differential equations together with recent developments in the techniques and their applications. The book comprises four major sections. Section I consists of a brief overview of differential equations and the relevant physical problems arising in science and engineering. Section II illustrates the history of neural networks starting from their beginnings in the 1940s through to the renewed...

  11. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way human neurons process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are perhaps the most important factor influencing neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing their performance with that of other classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we apply them to the classification of four opposite binary XOR clusters and to the classification of continuous real data sets such as Iris and Ecoli.
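
    As a concrete example of evolutionary weight optimization on the XOR task mentioned above, the sketch below evolves the 9 weights of a 2-2-1 network with a simple (mu + lambda) strategy. This is a generic stand-in rather than any of the specific algorithms compared in the paper, and the population size, mutation scale and generation count are arbitrary.

      import numpy as np

      rng = np.random.default_rng(9)

      # XOR data: the classic non-linearly-separable benchmark.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0, 1, 1, 0], dtype=float)

      def forward(w, X):
          """2-2-1 network; w packs all 9 weights (biases included)."""
          W1, b1 = w[:4].reshape(2, 2), w[4:6]
          W2, b2 = w[6:8], w[8]
          h = np.tanh(X @ W1 + b1)
          return 1 / (1 + np.exp(-(h @ W2 + b2)))

      def loss(w):
          return np.mean((forward(w, X) - y) ** 2)

      # Simple (mu + lambda) evolution strategy over the weight vector.
      pop = rng.normal(0, 1, (30, 9))
      for gen in range(200):
          children = pop[rng.integers(0, len(pop), 60)] + rng.normal(0, 0.3, (60, 9))
          everyone = np.vstack([pop, children])
          fitness = np.array([loss(w) for w in everyone])
          pop = everyone[np.argsort(fitness)[:30]]         # keep the best 30

      best = pop[0]
      print("predictions:", np.round(forward(best, X), 2), "loss:", loss(best))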

  12. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  13. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data ...

  14. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than by scatter in the data. A method is proposed that provides improvements in the accuracy achieved during training and in the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)

  15. Representation of neutron noise data using neural networks

    International Nuclear Information System (INIS)

    Korsah, K.; Damiano, B.; Wood, R.T.

    1992-01-01

    This paper describes a neural network-based method of representing neutron noise spectra using a model developed at the Oak Ridge National Laboratory (ORNL). The backpropagation neural network learned to represent neutron noise data in terms of four descriptors, and the network response matched calculated values to within 3.5 percent. These preliminary results are encouraging, and further research is directed towards the application of neural networks in a diagnostics system for the identification of the causes of changes in structural spectral resonances. This work is part of our current investigation of advanced technologies such as expert systems and neural networks for neutron noise data reduction, analysis, and interpretation. The objective is to improve the state-of-the-art of noise analysis as a diagnostic tool for nuclear power plants and other mechanical systems

  16. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks.  Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concepts of meta-cognition equipped with a self-regulated learning have been known to be the best human learning strategy. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  17. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for its special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.

  18. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  19. Direct adaptive control using feedforward neural networks

    OpenAIRE

    Cajueiro, Daniel Oliveira; Hemerly, Elder Moreira

    2003-01-01

    ABSTRACT: This paper proposes a new scheme for direct neural adaptive control that works efficiently employing only one neural network, used for simultaneously identifying and controlling the plant. The idea behind this structure of adaptive control is to compensate the control input obtained by a conventional feedback controller. The neural network training process is carried out by using two different techniques: backpropagation and extended Kalman filter algorithm. Additionally, the conver...

  20. Neural networks in signal processing

    International Nuclear Information System (INIS)

    Govil, R.

    2000-01-01

    Nuclear Engineering has matured during the last decade. In research and design, control, supervision, maintenance and production, mathematical models and theories are used extensively. In all such applications signal processing is embedded in the process. Artificial Neural Networks (ANN), because of their nonlinear, adaptive nature, are well suited to applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques, which can model an underlying process from example data. They can also adapt their model parameters to statistical changes over time. Algorithms in the framework of neural networks in signal processing have found new application potential in the field of Nuclear Engineering. This paper reviews the fundamentals of neural networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigenstructure-based processing and generalization structures. (orig.)

  1. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNN) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using the bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods that utilize spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural network generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a directional reflectance spectrum based neural network to help us understand the data from another perspective. At the end of this paper, we compare several classifiers and analyze the trade-offs among neural network parameters.

  2. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  3. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    Science.gov (United States)

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization processes, and communication time delays is investigated. The information communication channel between the master chaotic neural network and the slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of the output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy-saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, allowable length of sampling intervals, and upper bound of network-induced delays are derived to ensure the quantized synchronization of the master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  4. Bio-inspired spiking neural network for nonlinear systems control.

    Science.gov (United States)

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNN) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs make use of temporal spike trains to encode inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is not satisfactory or is difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. The inherently binary and temporal way in which SNNs encode information facilitates their hardware implementation compared to analog neurons. Biological neural networks often require fewer neurons than controllers based on conventional artificial neural networks. In this work, these neuronal systems are imitated to perform the control of nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed, with particular attention paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to train the controller. The efficiency of the proposed network has been verified in two examples of dynamic system control. Simulations show that the proposed SNN-based control exhibits superior performance compared to other approaches based on neural networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.
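
    For readers unfamiliar with the spiking units such controllers are built from, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the simplest common spiking model. The time constants, threshold and noisy input drive are illustrative assumptions and are not taken from the paper, which evolves whole networks of such units.

      import numpy as np

      # Minimal leaky integrate-and-fire (LIF) neuron: a generic sketch of the kind of
      # spiking unit an SNN controller is built from (parameters are illustrative only).
      dt, t_end = 1e-3, 0.5                          # time step and duration [s]
      tau_m, v_rest, v_thresh, v_reset = 20e-3, 0.0, 1.0, 0.0
      steps = int(t_end / dt)

      rng = np.random.default_rng(0)
      input_current = 1.2 + 0.5 * rng.standard_normal(steps)   # noisy drive

      v = v_rest
      spike_times = []
      for k in range(steps):
          # Leaky integration of the membrane potential toward the input drive
          v += dt / tau_m * (-(v - v_rest) + input_current[k])
          if v >= v_thresh:                          # threshold crossing -> emit a spike
              spike_times.append(k * dt)
              v = v_reset                            # reset after the spike

      print(f"{len(spike_times)} spikes, mean rate = {len(spike_times) / t_end:.1f} Hz")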

  5. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    Full Text Available In this paper, the adaptive competitive learning (ACL) neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. The algorithm uses a newly proposed criterion referred to as the ACL criterion, which evaluates the different clustering structures produced by the ACL neural network for an input data set and then selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm from the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm, especially with data sets that are sparsely distributed.

  6. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  7. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designed by using a priori knowledge of the mapping between protein building blocks and the secondary structure and by using weight sharing. Since none of the individual networks have more than 600 adjustable weights, over-fitting is avoided. When ensembles of specialized experts are combined, the performance...

  8. Pattern recognition of state variables by neural networks

    International Nuclear Information System (INIS)

    Faria, Eduardo Fernandes; Pereira, Claubia

    1996-01-01

    An artificial intelligence system based on artificial neural networks can be used to classify predefined events and emergency procedures. These systems are being used in different areas. In nuclear reactor safety, the goal is the classification of events whose data can be processed and recognized by neural networks. In this work we present a preliminary, simple system that uses neural networks for the recognition of patterns, i.e. the recognition of the variables which define a situation. (author)

  9. Classification of behavior using unsupervised temporal neural networks

    International Nuclear Information System (INIS)

    Adair, K.L.

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory (Fuzzy ART). The combination yields a neural network capable of quickly clustering sequences of patterns as they are generated. The applicability of the architecture is illustrated through a facility monitoring problem.

  10. Pulsed neural networks consisting of single-flux-quantum spiking neurons

    International Nuclear Information System (INIS)

    Hirose, T.; Asai, T.; Amemiya, Y.

    2007-01-01

    An inhibitory pulsed neural network was developed for brain-like information processing, using single-flux-quantum (SFQ) circuits. It consists of spiking neuron devices that are coupled to each other through all-to-all inhibitory connections, and the network selects neural activity through this mutual inhibition. The operation of the neural network was confirmed by computer simulation. SFQ neuron devices can thus imitate the inhibition phenomenon of neural networks.

  11. The neural network approach to parton fitting

    International Nuclear Information System (INIS)

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-01-01

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits

  12. A study of reactor monitoring method with neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nabeshima, Kunihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The purpose of this study is to investigate a methodology for Nuclear Power Plant (NPP) monitoring with neural networks, which create plant models by learning past normal operation patterns. The concept of this method is to detect the symptoms of small anomalies by monitoring the deviations between the process signals measured from the actual plant and the corresponding output signals from the neural network model, which may differ if abnormal operational patterns are presented at the input of the neural network. An auto-associative network, whose outputs are trained to reproduce its inputs, can detect any kind of anomalous condition by using normal operation data only. Monitoring tests of a feedforward neural network with adaptive learning were performed using a PWR plant simulator with which many kinds of anomaly conditions can be easily simulated. The adaptively trained feedforward network could follow the actual plant dynamics and the changes of plant condition, and then found most of the anomalies much earlier than the conventional alarm system during steady-state and transient operations. Off-line and on-line test results during one year of operation at an actual NPP (PWR) then showed that the neural network could detect several small anomalies which the operators and the conventional alarm system did not notice. Furthermore, a sensitivity analysis suggests that the plant models built by the neural networks are appropriate. Finally, simulation results show that a recurrent neural network with feedback connections could successfully model the slow behavior of the reactor dynamics without adaptive learning. Therefore, the recurrent neural network with adaptive learning will be the best choice for an actual reactor monitoring system. (author)
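
    A minimal sketch of the monitoring idea described above is given below: an auto-associative (identity-mapping) network is trained on normal-operation signals only, and the deviation between measured and reconstructed signals is then tracked as the anomaly symptom. The toy sinusoidal "plant" signals, network size and training settings are assumptions for illustration; the study's actual models use real PWR process signals and adaptive learning.

      import numpy as np

      rng = np.random.default_rng(1)

      # Illustrative "plant" signals: three correlated periodic channels standing in
      # for real process measurements under normal operation.
      def normal_signals(n):
          t = rng.uniform(0, 2 * np.pi, size=(n, 1))
          x = np.hstack([np.sin(t), np.cos(t), 0.5 * np.sin(2 * t)])
          return x + 0.01 * rng.standard_normal(x.shape)

      X = normal_signals(2000)                       # normal-operation training data
      n_in, n_hid = X.shape[1], 2                    # small bottleneck forces a compact model

      # Auto-associative network: the inputs are reproduced at the output.
      W1 = 0.5 * rng.standard_normal((n_in, n_hid)); b1 = np.zeros(n_hid)
      W2 = 0.5 * rng.standard_normal((n_hid, n_in)); b2 = np.zeros(n_in)

      def forward(x):
          h = np.tanh(x @ W1 + b1)
          return h, h @ W2 + b2

      lr = 0.05
      for epoch in range(2000):                      # plain batch gradient descent
          h, y = forward(X)
          err = y - X                                # reconstruction error on normal data
          gW2 = h.T @ err / len(X); gb2 = err.mean(0)
          dh = (err @ W2.T) * (1 - h ** 2)
          gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
          W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

      # Monitoring: a residual well above the normal-operation level is the anomaly symptom.
      baseline = float(np.sqrt(((forward(X)[1] - X) ** 2).mean()))
      x_new = normal_signals(1)
      x_faulty = x_new + np.array([[0.6, 0.0, 0.0]])  # simulated drift on one signal
      for name, x in [("normal sample", x_new), ("faulty sample", x_faulty)]:
          residual = float(np.sqrt(((forward(x)[1] - x) ** 2).mean()))
          print(f"{name}: residual = {residual:.4f}  (training baseline {baseline:.4f})")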

  13. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.

  14. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, the residuals of the integral equation or of the differential equations are employed as the objective functions for the training process of the neural network. This differs from conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems both methods gave satisfactory results, and the methods are considered promising for certain kinds of problems. (author)

  15. Generating Seismograms with Deep Neural Networks

    Science.gov (United States)

    Krischer, L.; Fichtner, A.

    2017-12-01

    The recent surge of successful uses of deep neural networks in computer vision, speech recognition, and natural language processing, mainly enabled by the availability of fast GPUs and extremely large data sets, is starting to see many applications across all natural sciences. In seismology these are largely confined to classification and discrimination tasks. In this contribution we explore the use of deep neural networks for another class of problems: so-called generative models. Generative modelling is a branch of statistics concerned with generating new observed data samples, usually by drawing from some underlying probability distribution. Samples with specific attributes can be generated by conditioning on input variables. In this work we condition on seismic source (mechanism and location) and receiver (location) parameters to generate multi-component seismograms. The deep neural networks are trained on synthetic data calculated with Instaseis (http://instaseis.net, van Driel et al. (2015)) and waveforms from the global ShakeMovie project (http://global.shakemovie.princeton.edu, Tromp et al. (2010)). The underlying radially symmetric or smoothly three-dimensional Earth structures result in comparatively small waveform differences for similar events or at close receivers, and the networks learn to interpolate between training data samples. Of particular importance is the chosen misfit functional. Generative adversarial networks (Goodfellow et al. (2014)) implement a system in which two networks compete: the generator network creates samples and the discriminator network distinguishes these from the true training examples. Both are trained in an adversarial fashion until the discriminator can no longer distinguish between generated and real samples. We show how this can be applied to seismograms and in particular how it compares to networks trained with more conventional misfit metrics. Last but not least we attempt to shed some light on the black-box nature of

  16. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest-digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data were collected from devices attached to the carapaces of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS), which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low-power devices. Given that digging is typically a long activity (up to two hours), the application of the ARS to data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  17. Implementation of neural networks on 'Connection Machine'

    International Nuclear Information System (INIS)

    Belmonte, Ghislain

    1990-12-01

    This report is a first approach to the notion of neural networks and their possible applications within the framework of the artificial intelligence activities of the Department of Applied Mathematics of the Limeil-Valenton Research Center. The first part is an introduction to the field of neural networks; the main neural network models are described in this section. The applications of neural networks to classification have mainly been studied because they could more particularly help to solve some of the decision-support problems dealt with by the C.E.A. As neural networks perform a large number of parallel operations, it was logical to use a parallel-architecture computer: the Connection Machine (which uses 16384 processors and is located at E.T.C.A. Arcueil). The second part presents some generalities on parallelism and the Connection Machine, and two implementations of neural networks on the Connection Machine. The first of these implementations concerns one of the most widely used algorithms for training neural networks: the gradient back-propagation algorithm. The second, less common, concerns a network of neurons intended mainly for pattern recognition: Fukushima's Neocognitron. The latter is studied by the C.E.A. of Bruyeres-le-Chatel in order to realize an embedded system (including hardened circuits) for fast pattern recognition [fr]

  18. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Methods: Artificial neural network (ANN) models, including general regression neural network (GRNN) and multi-layer ... N-hexane (HPLC grade) was purchased from Fisher Scientific. ... Simultaneous Quantification of Seven Flavonoids in ...

  19. Photon spectrometry utilizing neural networks

    International Nuclear Information System (INIS)

    Silveira, R.; Benevides, C.; Lima, F.; Vilela, E.

    2015-01-01

    Considering the time spent on the routine work of characterizing the radiation beams used in an ionizing radiation metrology laboratory, the Metrology Service of the Centro Regional de Ciencias Nucleares do Nordeste - CRCN-NE verified the applicability of artificial intelligence (artificial neural networks) to perform spectrometry in photon fields. For this, a multilayer neural network was developed as an application for the classification of energy patterns, associated with a thermoluminescent dosimetric system (TLD-700 and TLD-600). A set of dosimeters was initially exposed to several well-known mean energies, between 40 keV and 1.2 MeV, coinciding with the beams defined by the ISO 4037 standard, for a dose of 10 mSv in the quantity Hp(10), on a chest phantom (ISO slab phantom), with the purpose of generating a set of training data for the neural network. Subsequently, a new set of dosimeters irradiated at unknown energies was presented to the network in order to test the method. The methodology used in this work proved suitable for the classification of energy beams, achieving 100% success in the classifications performed. (authors)

  20. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  1. Periodicity and stability for variable-time impulsive neural networks.

    Science.gov (United States)

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

    The paper considers a general neural network model with variable-time impulses. Under several new, well-posed assumptions, it is shown that each solution of the system intersects each discontinuity surface exactly once. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to the corresponding neural networks with fixed-time impulses under well-selected conditions; meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solutions of variable-time impulsive neural networks, and to show that variable-time impulsive neural networks share the same stability properties as their fixed-time counterparts. The new criteria are established by applying Schaefer's fixed point theorem combined with the use of inequality techniques. Finally, a numerical example is presented to show the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. A neural network model for credit risk evaluation.

    Science.gov (United States)

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back-propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval dataset. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in the automatic processing of credit applications.

  3. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

    In experimental particle physics, pattern recognition problems well suited to neural network methods occur frequently in track finding and feature extraction. Track finding is a combinatorial optimization problem: given a set of points in Euclidean space, one attempts to reconstruct particle trajectories subject to smoothness constraints. The basic ingredients of a neural network are the N binary neurons and the synaptic strengths connecting them. In our case the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates for each neuron the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments involved in the track reconstruction. An algorithm based on the Hopfield neural network has been developed and tested on track coordinates measured by a silicon microstrip tracking system

  4. Genetic optimization of neural network architecture

    International Nuclear Information System (INIS)

    Harp, S.A.; Samad, T.

    1994-03-01

    Neural networks are now a popular technology for a broad variety of application domains, including the electric utility industry. Yet, as the technology continues to gain increasing acceptance, it is also increasingly apparent that the power that neural networks provide is not an unconditional blessing. Considerable care must be exercised during application development if the full benefit of the technology is to be realized. At present, no fully general theory or methodology for neural network design is available, and application development is a trial-and-error process that is time-consuming and expertise-intensive. Each application demands appropriate selections of the network input space, the network structure, and values of learning algorithm parameters-design choices that are closely coupled in ways that largely remain a mystery. This EPRI-funded exploratory research project was initiated to take the key next step in this research program: the validation of the approach on a realistic problem. We focused on the problem of modeling the thermal performance of the TVA Sequoyah nuclear power plant (units 1 and 2)

  5. Nano-topography Enhances Communication in Neural Cells Networks

    KAUST Repository

    Onesto, V.

    2017-08-23

    Neural cells are the smallest building blocks of the central and peripheral nervous systems. Information in neural networks and cell-substrate interactions have heretofore been studied separately. Understanding whether surface nano-topography can direct nerve cell assembly into computationally efficient networks may provide new tools and criteria for tissue engineering and regenerative medicine. In this work, we used information theory approaches and functional multi-calcium imaging (fMCI) techniques to examine how information flows in neural networks cultured on surfaces with controlled topography. We found that the substrate roughness Sa affects network topology. In the low nanometer range, Sa = 0-30 nm, information increases with Sa. Moreover, we found that the energy density of a network of cells correlates with the topology of that network. This reinforces the view that information, energy and surface nano-topography are tightly inter-connected and should not be neglected when studying cell-cell interaction in neural tissue repair and regeneration.

  6. Polarity-specific high-level information propagation in neural networks.

    Science.gov (United States)

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  7. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  8. Optical resonators and neural networks

    Science.gov (United States)

    Anderson, Dana Z.

    1986-08-01

    It may be possible to implement neural network models using continuous-field optical architectures. These devices offer the inherent parallelism of propagating waves and an information density in principle dictated by the wavelength of light and the quality of the bulk optical elements. Few components are needed to construct a relatively large equivalent network. Various associative memories based on optical resonators have been demonstrated in the literature; a ring resonator design is discussed in detail here. Information is stored in a holographic medium and recalled through a competitive process in the gain medium supplying energy to the ring resonator. The resonator memory is the first realized example of a neural network function implemented with this kind of architecture.

  9. NEURAL NETWORKS FOR STOCK MARKET OPTION PRICING

    Directory of Open Access Journals (Sweden)

    Sergey A. Sannikov

    2017-03-01

    Full Text Available Introduction: The use of neural networks for non-linear models helps to understand where the drawbacks of linear models, caused by their specification, reveal themselves. This paper attempts to find this out. The objective of the research is to determine the meaning of "option price calculation using neural networks". Materials and Methods: We use two kinds of variables: endogenous variables (included in the neural network model) and variables affecting the model (permanent disturbances). Results: All data are divided into 3 sets: learning, affirming and testing. All selected variables are normalised from 0 to 1. Extreme values of income were cut off. Discussion and Conclusions: Using the 33-14-1 neural network with direct links we obtained two sets of forecasts. Optimal criteria for strategies in stock market option pricing were developed.

  10. Region stability analysis and tracking control of memristive recurrent neural network.

    Science.gov (United States)

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Firstly, simulations show that a memristive recurrent neural network has more complex dynamics than the traditional recurrent neural network. Then it is derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. At last, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adapting artificial neural networks to robust fault diagnosis schemes. It presents neural-network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. Part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, and the importance of robustness. The book is of tutorial value and can be perceived as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  12. Artificial neural networks applied to forecasting time series.

    Science.gov (United States)

    Montaño Moreno, Juan J; Palmer Pol, Alfonso; Muñoz Gracia, Pilar

    2011-04-01

    This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved to be useful in time series forecasting, and also a standard procedure for the practical application of ANN in this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by the four neural network models analyzed is less than 10%. In accordance with the interpretation criteria of this performance, it can be concluded that the neural network models show a close fit regarding their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP. The GRNN model is the one with the worst performance. Finally, we analyze the advantages and limitations of ANN, the possible solutions to these limitations, and provide an orientation towards future research.
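
    Of the four models compared, the GRNN is simple enough to sketch in a few lines: it is essentially Nadaraya-Watson kernel regression over stored training patterns. The sketch below applies it to one-step-ahead forecasting of a toy series built from lagged values; the series, lag order and smoothing factor are illustrative assumptions, not the study's 244-point series.

      import numpy as np

      # Generalized Regression Neural Network (GRNN) = kernel-weighted average of the
      # stored training targets, applied here to one-step-ahead forecasting.
      rng = np.random.default_rng(0)
      t = np.arange(300)
      series = np.sin(2 * np.pi * t / 25) + 0.1 * rng.standard_normal(t.size)

      p = 4                                              # number of lagged inputs
      X = np.array([series[i:i + p] for i in range(len(series) - p)])
      y = series[p:]                                     # target: the next value

      X_train, y_train = X[:250], y[:250]
      X_test, y_test = X[250:], y[250:]

      def grnn_predict(x, X_train, y_train, sigma=0.3):
          d2 = ((X_train - x) ** 2).sum(axis=1)          # squared distances to stored patterns
          w = np.exp(-d2 / (2 * sigma ** 2))             # Gaussian kernel weights
          return (w @ y_train) / (w.sum() + 1e-12)       # weighted average of targets

      pred = np.array([grnn_predict(x, X_train, y_train) for x in X_test])
      print("RMSE:", round(float(np.sqrt(np.mean((pred - y_test) ** 2))), 3))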

  13. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.

  14. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  15. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

    Full Text Available In this paper, an entropy term is used in the learning phase of a neural network.  As learning progresses, more hidden nodes get into saturation.  The early creation of such hidden nodes may impair generalisation.  Hence an entropy approach is proposed to dampen the early creation of such nodes.  The entropy learning also helps to increase the importance of relevant nodes while dampening the less important nodes.  At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
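
    One plausible way to realize such an entropy term (the paper's exact formulation may differ) is to treat each sigmoid hidden activation as a Bernoulli probability and subtract its mean entropy, scaled by a small factor, from the training cost: saturated nodes then carry a penalty because their output entropy is near zero. The sketch below illustrates only this cost computation; the activations and the scaling factor lam are illustrative assumptions.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def binary_entropy(a, eps=1e-12):
          # Entropy of a sigmoid activation treated as a Bernoulli probability.
          a = np.clip(a, eps, 1 - eps)
          return -(a * np.log(a) + (1 - a) * np.log(1 - a))

      def cost_with_entropy_term(y_true, y_pred, hidden_act, lam=0.01):
          mse = np.mean((y_pred - y_true) ** 2)
          # Saturated hidden nodes (activation near 0 or 1) have entropy near zero;
          # subtracting the mean entropy therefore penalizes early saturation.
          return mse - lam * np.mean(binary_entropy(hidden_act))

      # Toy illustration: two hidden-activation patterns with identical output error.
      y_true, y_pred = np.array([1.0]), np.array([0.8])
      saturated = sigmoid(np.array([8.0, -9.0, 10.0]))    # nodes driven into saturation
      moderate  = sigmoid(np.array([0.5, -0.3, 0.8]))     # nodes still in their active range
      print("cost, saturated hidden layer:", cost_with_entropy_term(y_true, y_pred, saturated))
      print("cost, moderate hidden layer: ", cost_with_entropy_term(y_true, y_pred, moderate))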

  16. Optimal neural networks for protein-structure prediction

    International Nuclear Information System (INIS)

    Head-Gordon, T.; Stillinger, F.H.

    1993-01-01

    The successful application of neural-network algorithms for prediction of protein structure is stymied by three problem areas: the sparsity of the database of known protein structures, poorly devised network architectures which make the input-output mapping opaque, and a global optimization problem in the multiple-minima space of the network variables. We present a simplified polypeptide model residing in two dimensions with only two amino-acid types, A and B, which allows the determination of the global energy structure for all possible sequences of pentamer, hexamer, and heptamer lengths. This model simplicity allows us to compile a complete structural database and to devise neural networks that reproduce the tertiary structure of all sequences with absolute accuracy and with the smallest number of network variables. These optimal networks reveal that the three problem areas are convoluted, but that thoughtful network designs can actually deconvolute these detrimental traits to provide network algorithms that genuinely impact on the ability of the network to generalize or learn the desired mappings. Furthermore, the two-dimensional polypeptide model shows sufficient chemical complexity so that transfer of neural-network technology to more realistic three-dimensional proteins is evident

  17. Advanced Applications of Neural Networks and Artificial Intelligence: A Review

    OpenAIRE

    Koushal Kumar; Gour Sundar Mitra Thakur

    2012-01-01

    Artificial Neural Networks are a branch of Artificial Intelligence and have been accepted as a new computing technology in computer science fields. This paper reviews the field of Artificial Intelligence, focusing on recent applications which use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the ability to interpret data. Artificial Neural Networks is c...

  18. Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language

    Science.gov (United States)

    Tanadi, Theo

    2018-03-01

    Part-of-speech tagging (POS tagging) is an important part of natural language processing. Many methods have been used to do this task, including neural networks. This paper models a neural network that attempts to do POS tagging. A time series neural network is modelled to solve the problems that a basic neural network faces when attempting to do POS tagging. In order to enable the neural network to take text data as input, the text data is first clustered using Brown Clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tag, suffix, and affix of previous words are also fed to the neural network.

  19. Template measurement for plutonium pit based on neural networks

    International Nuclear Information System (INIS)

    Zhang Changfan; Gong Jian; Liu Suping; Hu Guangchun; Xiang Yongchun

    2012-01-01

    Template measurement for a plutonium pit extracts characteristic data from the gamma-ray spectrum and the neutron counts emitted by the plutonium. The characteristic data of a suspicious object are compared with the data of a declared plutonium pit to verify whether they are of the same type. In this paper, neural networks are introduced as the comparison algorithm for template measurement of plutonium pits. Two kinds of neural networks are created, i.e. BP and LVQ neural networks, and they are applied to different aspects of the template measurement and identification. The BP neural network is used for classification of different types of plutonium pits, which is often needed for the management of nuclear materials. The LVQ neural network is used for comparison of inspected objects with the declared one, which is usually applied in the field of nuclear disarmament and verification. (authors)
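
    As a reminder of how the LVQ comparison works, the sketch below implements the basic LVQ1 update rule: each labelled codebook (template) vector is attracted toward feature vectors of its own class and pushed away from those of other classes, and new objects are then assigned to the nearest template. The two-dimensional random features and class layout are illustrative stand-ins for the spectral and neutron-count features described above.

      import numpy as np

      # LVQ1 sketch: codebook vectors are nudged toward samples of their own class
      # and away from samples of other classes.
      rng = np.random.default_rng(0)

      def make_class(center, n=40):
          return center + 0.3 * rng.standard_normal((n, 2))

      X = np.vstack([make_class(np.array([0.0, 0.0])), make_class(np.array([2.0, 1.5]))])
      y = np.array([0] * 40 + [1] * 40)            # two illustrative "pit type" classes

      protos = np.array([[0.5, 0.5], [1.5, 1.0]])  # one codebook vector per class
      proto_labels = np.array([0, 1])

      alpha = 0.05
      for epoch in range(30):
          for xi, yi in zip(X, y):
              k = np.argmin(((protos - xi) ** 2).sum(axis=1))   # nearest codebook vector
              if proto_labels[k] == yi:
                  protos[k] += alpha * (xi - protos[k])          # attract toward same class
              else:
                  protos[k] -= alpha * (xi - protos[k])          # repel from other class

      pred = proto_labels[((X[:, None, :] - protos[None, :, :]) ** 2).sum(-1).argmin(1)]
      print("training accuracy:", (pred == y).mean())
      print("learned templates:\n", protos.round(2))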

  20. Neutron spectrum unfolding using neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2004-01-01

    An artificial neural network has been designed to obtain neutron spectra from the count rates of a Bonner spheres spectrometer. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources as well as reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distributions and were re-binned into 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the corresponding spectrum was used as output during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer and 31 neurons in the output layer. After training, the network was tested with the Bonner sphere count rates produced by twelve neutron spectra. The network allows unfolding of the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the test count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)

  1. The principles of artificial neural network information processing

    International Nuclear Information System (INIS)

    Dai, Ru-Wei

    1993-01-01

    In this article, the basic structure of an artificial neuron is first introduced. In addition, principles of artificial neural network as well as several important artificial neural models such as Perceptron, Back propagation model, Hopfield net, and ART model are briefly discussed and analyzed. Finally, the application of artificial neural network for Chinese Character Recognition is also given. (author)

  2. The principles of artificial neural network information processing

    International Nuclear Information System (INIS)

    Dai, Ru-Wei

    1993-01-01

    In this article, the basic structure of an artificial neuron is first introduced. In addition, principles of artificial neural network as well as several important artificial neural models such as Perceptron, back propagation model, Hopfield net, and ART model are briefly discussed and analyzed. Finally the application of artificial neural network for Chinese character recognition is also given. (author)

  3. Neural network monitoring of resistive welding

    International Nuclear Information System (INIS)

    Quero, J.M.; Millan, R.L.; Franquelo, L.G.; Canas, J.

    1994-01-01

    Supervision of welding processes is one of the most important and complicated tasks in production lines. Artificial Neural Networks have been applied for the modeling and control of physical processes. In our paper we propose the use of a neural network classifier for on-line non-destructive testing. This system has been developed and installed in a resistive welding station. Results confirm the validity of this novel approach. (Author) 6 refs

  4. Forecasting Zakat collection using artificial neural network

    Science.gov (United States)

    Sy Ahmad Ubaidillah, Sh. Hafizah; Sallehuddin, Roselina

    2013-04-01

    'Zakat', "that which purifies" or "alms", is the giving of a fixed portion of one's wealth to charity, generally to the poor and needy. It is one of the five pillars of Islam, and must be paid by all practicing Muslims who have the financial means (nisab). 'Nisab' is the minimum level to determine whether there is a 'zakat' to be paid on the assets. Today, in most Muslim countries, 'zakat' is collected through a decentralized and voluntary system. Under this voluntary system, 'zakat' committees are established, which are tasked with the collection and distribution of 'zakat' funds. 'Zakat' promotes a more equitable redistribution of wealth, and fosters a sense of solidarity amongst members of the 'Ummah'. The Malaysian government has established a 'zakat' center at every state to facilitate the management of 'zakat'. The center has to have a good 'zakat' management system to effectively execute its functions especially in the collection and distribution of 'zakat'. Therefore, a good forecasting model is needed. The purpose of this study is to develop a forecasting model for Pusat Zakat Pahang (PZP) to predict the total amount of collection from 'zakat' of assets more precisely. In this study, two different Artificial Neural Network (ANN) models using two different learning algorithms are developed; Back Propagation (BP) and Levenberg-Marquardt (LM). Both models are developed and compared in terms of their accuracy performance. The best model is determined based on the lowest mean square error and the highest correlations values. Based on the results obtained from the study, BP neural network is recommended as the forecasting model to forecast the collection from 'zakat' of assets for PZP.

  5. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much is still unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are also developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  6. Improvement of the Hopfield Neural Network by MC-Adaptation Rule

    Science.gov (United States)

    Zhou, Zhen; Zhao, Hong

    2006-06-01

    We show that the performance of Hopfield neural networks, especially the quality of recall and the effective storage capacity, can be greatly improved by making use of a recently presented neural network design method, without altering the overall structure of the network. In the improved neural network, a memory pattern is recalled exactly from initial states having a given degree of similarity with the memory pattern, and thus one can avoid applying the overlap criterion as used in standard Hopfield neural networks.
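
    For context, the sketch below shows the baseline being improved upon: a standard Hopfield associative memory with Hebbian (outer-product) weights, recalling a stored pattern from a corrupted cue by asynchronous sign updates. The network size, number of stored patterns and corruption level are illustrative assumptions; the MC-adaptation rule discussed above would replace the Hebbian weight construction.

      import numpy as np

      # Baseline Hopfield associative memory: Hebbian storage and recall from a noisy cue.
      rng = np.random.default_rng(3)
      N, n_patterns = 64, 3

      patterns = rng.choice([-1, 1], size=(n_patterns, N))
      W = (patterns.T @ patterns) / N                 # Hebbian outer-product rule
      np.fill_diagonal(W, 0.0)

      def recall(cue, steps=20):
          s = cue.copy()
          for _ in range(steps):                      # asynchronous sign updates
              for i in rng.permutation(N):
                  s[i] = 1 if W[i] @ s >= 0 else -1
          return s

      # Corrupt 15% of the bits of the first stored pattern and try to recall it.
      cue = patterns[0].copy()
      flip = rng.choice(N, size=int(0.15 * N), replace=False)
      cue[flip] *= -1

      out = recall(cue)
      print("overlap with stored pattern:", float(out @ patterns[0]) / N)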

  7. Neural networks in continuous optical media

    International Nuclear Information System (INIS)

    Anderson, D.Z.

    1987-01-01

    The authors' interest is to see to what extent neural models can be implemented using continuous optical elements. Thus these optical networks represent a continuous distribution of neuronlike processors rather than a discrete collection. Most neural models have three characteristic features: interconnections; adaptivity; and nonlinearity. In their optical representation the interconnections are implemented with linear one- and two-port optical elements such as lenses and holograms. Real-time holographic media allow these interconnections to become adaptive. The nonlinearity is achieved with gain, for example, from two-beam coupling in photorefractive media or a pumped dye medium. Using these basic optical elements one can in principle construct continuous representations of a number of neural network models. The authors demonstrated two devices based on continuous optical elements: an associative memory which recalls an entire object when addressed with a partial object and a tracking novelty filter which identifies time-dependent features in an optical scene. These devices demonstrate the potential of distributed optical elements to implement more formal models of neural networks

  8. Network traffic anomaly prediction using Artificial Neural Network

    Science.gov (United States)

    Ciptaningtyas, Hening Titi; Fatichah, Chastine; Sabila, Altea

    2017-03-01

    With the excessive increase in internet usage, malicious software (malware) has also increased significantly. Malware is software developed by hackers for illegal purposes, such as stealing data and identities, causing computer damage, or denying service to other users [1]. Malware which attacks computers or servers often triggers network traffic anomaly phenomena. Based on Sophos's report [2], Indonesia is the country most at risk of malware attacks and it also has high levels of network traffic anomalies. This research uses an Artificial Neural Network (ANN) to predict network traffic anomalies based on malware attacks in Indonesia as recorded by Id-SIRTII/CC (Indonesia Security Incident Response Team on Internet Infrastructure/Coordination Center). The case study is the most prevalent malware attack (SQL injection), which occurred in three consecutive years: 2012, 2013, and 2014 [4]. The data series is preprocessed first, then the network traffic anomaly is predicted using an Artificial Neural Network with two weight-update algorithms: Gradient Descent and Momentum. The prediction error is calculated using the Mean Squared Error (MSE) [7]. The experimental results show that the MSE for SQL Injection is 0.03856. So, this approach can be used to predict network traffic anomalies.
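
    The two weight-update rules compared in the study differ only in whether past gradients are accumulated. The sketch below contrasts them on a small least-squares problem standing in for the traffic-anomaly network; the data, learning rate and momentum coefficient are illustrative assumptions.

      import numpy as np

      # Plain gradient descent versus gradient descent with a momentum term,
      # shown on a simple least-squares fit.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 5))
      w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
      y = X @ w_true + 0.05 * rng.standard_normal(200)

      def mse(w):
          return np.mean((X @ w - y) ** 2)

      def grad(w):
          return 2 * X.T @ (X @ w - y) / len(y)

      lr, beta, iters = 0.05, 0.9, 200

      w_gd = np.zeros(5)
      for _ in range(iters):
          w_gd -= lr * grad(w_gd)                    # plain gradient descent

      w_mom, v = np.zeros(5), np.zeros(5)
      for _ in range(iters):
          v = beta * v - lr * grad(w_mom)            # momentum accumulates past gradients
          w_mom += v

      print("MSE, gradient descent:", round(float(mse(w_gd)), 5))
      print("MSE, with momentum:   ", round(float(mse(w_mom)), 5))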

  9. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.

  10. Sensitive quantitative predictions of peptide-MHC binding by a 'Query by Committee' artificial neural network approach

    DEFF Research Database (Denmark)

    Buus, S.; Lauemoller, S.L.; Worning, Peder

    2003-01-01

    We have generated Artificial Neural Networks (ANN) capable of performing sensitive, quantitative predictions of peptide binding to the MHC class I molecule, HLA-A*0204. We have shown that such quantitative ANN are superior to conventional classification ANN that have been trained to predict bind... ...of an iterative feedback loop whereby advanced computational bioinformatics optimizes experimental strategy, and vice versa.
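
    The 'Query by Committee' idea named in the title can be sketched independently of the binding data: several networks trained on the same labelled pool will disagree most where a new measurement would be most informative, so the candidate with the largest committee disagreement is queried next. In the sketch below the committee members are single-hidden-layer networks with random hidden weights and least-squares output weights, and the 1-D "binding affinity" curve is a toy stand-in; both are assumptions for illustration, not the paper's ANN ensemble or data.

      import numpy as np

      # Query-by-Committee sketch: committee disagreement selects the next measurement.
      rng = np.random.default_rng(0)

      def true_affinity(x):                        # hypothetical binding-affinity curve
          return np.sin(3 * x) + 0.5 * x

      X_pool = np.linspace(-2, 2, 200)[:, None]    # candidate "peptides" (1-D stand-in)
      labelled = rng.choice(len(X_pool), size=12, replace=False)
      X_lab = X_pool[labelled]
      y_lab = true_affinity(X_lab[:, 0]) + 0.05 * rng.standard_normal(len(labelled))

      def train_member(X, y, n_hidden=20, seed=0):
          r = np.random.default_rng(seed)
          W = r.standard_normal((X.shape[1], n_hidden))
          b = r.standard_normal(n_hidden)
          H = np.tanh(X @ W + b)
          beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit output weights
          return lambda Xn: np.tanh(Xn @ W + b) @ beta

      committee = [train_member(X_lab, y_lab, seed=s) for s in range(5)]
      preds = np.stack([m(X_pool) for m in committee])   # (members, candidates)

      mean_pred = preds.mean(axis=0)               # committee prediction
      disagreement = preds.std(axis=0)             # where members disagree most

      query = int(disagreement.argmax())           # next candidate to measure
      print("query candidate x =", round(float(X_pool[query, 0]), 2),
            "committee std =", round(float(disagreement[query]), 3))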

  11. Adaptive training of feedforward neural networks by Kalman filtering

    International Nuclear Information System (INIS)

    Ciftcioglu, Oe.

    1995-02-01

    Adaptive training of feedforward neural networks by Kalman filtering is described. Adaptive training is particularly important for estimation by neural networks in real-time environments, where the trained network is used for system estimation while it is further trained by means of the information provided by the ongoing operation. As a result, the neural network adapts itself to a changing environment and performs its mission without recourse to re-training. The performance of the training method is demonstrated by means of actual process signals from a nuclear power plant. (orig.)
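
    The core of Kalman-filter training is to treat the network weights as the state of a slowly drifting random walk and each training sample as a noisy measurement of the network output. The sketch below keeps this exact by estimating only the linear output weights of a fixed random hidden layer; a full extended Kalman filter over all weights follows the same time-update/measurement-update pattern after linearization. The toy target function, noise levels and covariances are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      n_in, n_hid = 2, 30
      Wh = rng.standard_normal((n_in, n_hid))       # fixed random hidden layer
      bh = rng.standard_normal(n_hid)

      def phi(x):                                   # hidden-layer features
          return np.tanh(x @ Wh + bh)

      def target(x):                                # toy process to be identified
          return np.sin(x[:, 0]) * np.cos(x[:, 1])

      w = np.zeros(n_hid)                           # output weights = Kalman state
      P = np.eye(n_hid) * 10.0                      # state covariance
      Q = np.eye(n_hid) * 1e-5                      # small drift allows ongoing adaptation
      R = 0.05 ** 2                                 # measurement-noise variance

      for k in range(3000):                         # on-line, sample-by-sample training
          x = rng.uniform(-2, 2, size=(1, n_in))
          z = target(x)[0] + 0.05 * rng.standard_normal()
          h = phi(x)[0]                             # measurement vector for this sample
          P = P + Q                                 # time update (weights may drift)
          S = h @ P @ h + R                         # innovation variance
          K = P @ h / S                             # Kalman gain
          w = w + K * (z - h @ w)                   # measurement update of the weights
          P = P - np.outer(K, h) @ P

      X_test = rng.uniform(-2, 2, size=(500, n_in))
      err = phi(X_test) @ w - target(X_test)
      print("test RMSE:", round(float(np.sqrt(np.mean(err ** 2))), 3))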

  12. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  13. Neural Networks for Modeling and Control of Particle Accelerators

    Science.gov (United States)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  14. Issues in the use of neural networks in information retrieval

    CERN Document Server

    Iatan, Iuliana F

    2017-01-01

    This book highlights the ability of neural networks (NNs) to be excellent pattern matchers and their importance in information retrieval (IR), which is based on index term matching. The book defines a new NN-based method for learning image similarity and describes how to use fuzzy Gaussian neural networks to predict personality. It introduces the fuzzy Clifford Gaussian network, and two concurrent neural models: (1) concurrent fuzzy nonlinear perceptron modules, and (2) concurrent fuzzy Gaussian neural network modules. Furthermore, it explains the design of a new model of fuzzy nonlinear perceptron based on alpha level sets and describes a recurrent fuzzy neural network model with a learning algorithm based on the improved particle swarm optimization method.

  15. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In the present study, an artificial neural network method has been applied to predict the stability of berm breakwaters. Four neural network models are constructed based on the parameters which influence the stability of the breakwater. Training...

  16. Neural networks. A new analytical tool, applicable also in nuclear technology

    International Nuclear Information System (INIS)

    Stritar, A.

    1992-01-01

    The basic concept of neural networks and the back-propagation learning algorithm are described. The behaviour of a typical neural network is demonstrated on a simple graphical case. A short literature survey about the application of neural networks in nuclear science and engineering is made. The application of the neural network to the probability density calculation is shown. (author)
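
    As a concrete illustration of the back-propagation learning algorithm mentioned above, the following minimal numpy example trains a one-hidden-layer network on the XOR task; the task, layer sizes, and learning rate are illustrative choices and are not taken from the paper.

        # Sketch: one-hidden-layer network trained by back-propagation on XOR (numpy only).
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)
        W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        lr = 0.5
        for step in range(20000):
            # forward pass
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # backward pass (gradients of the squared error w.r.t. pre-activations)
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            # gradient-descent update
            W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

        print(np.round(out, 2))   # typically approaches [0, 1, 1, 0]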

  17. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    Full Text Available A short-term load forecasting model based on quantum Elman neural networks is constructed in this paper. Quantum computation and the Elman feedback mechanism are integrated into the quantum Elman neural network. Quantum computation can effectively improve the approximation capability and the information processing ability of the network. Quantum Elman neural networks have not only feedforward connections but also feedback connections. The feedback connection between the hidden nodes and the context nodes constitutes state feedback within the internal system, which gives the network a specific dynamic memory capability. Phase space reconstruction theory is the theoretical basis for constructing the forecasting model, and the training samples are formed by means of a K-nearest neighbor approach. Simulation results show that the model based on quantum Elman neural networks outperforms the models based on the quantum feedforward neural network, the conventional Elman neural network, and the conventional feedforward neural network, so the proposed model can effectively improve prediction accuracy. This research lays a theoretical foundation for the practical engineering application of the short-term load forecasting model based on quantum Elman neural networks.
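
    A small sketch of how training samples can be formed by phase-space reconstruction (delay embedding) combined with a K-nearest-neighbour search, as the abstract describes; the embedding dimension, delay, and toy load series below are assumptions, and the resulting samples would then be fed to an Elman-type network.

        # Sketch: delay-embed a load series and select the k nearest historical states
        # as training samples for a forecasting network.
        import numpy as np

        def delay_embed(series, dim=4, tau=2):
            # rows are reconstructed state vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]
            n = len(series) - (dim - 1) * tau
            return np.array([series[i:i + dim * tau:tau] for i in range(n)])

        rng = np.random.default_rng(0)
        load = np.sin(np.linspace(0, 40, 400)) + 0.05 * rng.normal(size=400)   # toy "load"

        dim, tau, horizon = 4, 2, 1
        states = delay_embed(load, dim, tau)
        states = states[:-horizon]                          # drop states without a target
        targets = load[(dim - 1) * tau + horizon:]          # one-step-ahead load values

        # K-nearest-neighbour selection of historical states similar to the current one.
        current, k = states[-1], 20
        dists = np.linalg.norm(states[:-1] - current, axis=1)
        idx = np.argsort(dists)[:k]
        X_train, y_train = states[idx], targets[idx]        # samples for an Elman-type network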

  18. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    Science.gov (United States)

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks, resulting in deep neural network models that have no critical points introduced by a hierarchical structure. Such deep neural network models are considered to be well suited for gradient-based optimization. First, we show that a large number of critical points introduced by a hierarchical structure exist in deep neural networks in the form of straight lines, their number depending on the number of hidden layers and hidden neurons. Second, we derive a sufficient condition for deep neural networks to have no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that, for a specific class of deep neural networks, the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of the weight matrices. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called the avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called the avoidant neural network).

  19. Application of artificial neural network in radiographic diagnosis

    International Nuclear Information System (INIS)

    Piraino, D.; Amartur, S.; Richmond, B.; Schils, J.; Belhobek, G.

    1990-01-01

    This paper reports on an artificial neural network trained to rate the likelihood of different bone neoplasms when given a standard description of a radiograph. A three-layer back propagation algorithm was trained with descriptions of examples of bone neoplasms obtained from standard radiographic textbooks. Fifteen bone neoplasms obtained from clinical material were used as unknowns to test the trained artificial neural network. The artificial neural network correctly rated the pathologic diagnosis as the most likely diagnosis in 10 of the 15 unknown cases

  20. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:366–381, 2016, ACML 2016. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems. Young... an unprecedented scale. Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating... Keywords: Recurrent Neural Network, Recommender System, Neural Language Model, Collaborative Filtering. 1. Introduction: As ever larger parts of the population...

  1. Control of beam halo-chaos using neural network self-adaptation method

    International Nuclear Information System (INIS)

    Fang Jinqing; Huang Guoxian; Luo Xiaoshu

    2004-11-01

    Taking advantage of neural network control methods for nonlinear complex systems, control of beam halo-chaos in the periodic focusing channels (networks) of high-intensity accelerators is studied by a feed-forward, back-propagating neural network self-adaptation method. The envelope radius of the high-intensity proton beam is driven to the matched beam radius by suitably selecting the control structure of the neural network and the linear feedback coefficient, and by appropriately adjusting the coefficients of the neural network. The beam halo-chaos is clearly suppressed, and the shaking amplitude is greatly reduced, after the neural network self-adaptation control is applied. (authors)

  2. Use of neural networks to monitor power plant components

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Tsoukalas, L.H.

    1992-01-01

    A new methodology is presented for nondestructive evaluation (NDE) of check valve performance and degradation. Artificial neural network (ANN) technology is utilized for processing frequency domain signatures of check valves operating in a nuclear power plant (NPP). Acoustic signatures obtained from different locations on a check valve are transformed from the time domain to the frequency domain and then used as input to a pretrained neural network. The neural network has been trained with data sets corresponding to normal operation, therefore establishing a basis for check valve satisfactory performance. Results obtained from the proposed methodology demonstrate the ability of neural networks to perform accurate and quick evaluations of check valve performance

  3. Sejarah, Penerapan, dan Analisis Resiko dari Neural Network: Sebuah Tinjauan Pustaka

    Directory of Open Access Journals (Sweden)

    Cristina Cristina

    2018-05-01

    Full Text Available A neural network is a form of artificial intelligence that has the ability to learn, grow, and adapt in a dynamic environment. The history of neural networks begins in 1890, when the great American psychologist William James published the book "Principles of Psychology"; James was the first to publish a number of facts related to the structure and function of the brain. The history of neural network development is divided into four epochs: the Camelot era, the Depression, the Renaissance, and the Neoconnectionism era. Neural networks used today are not 100 percent accurate; however, they are still used because they perform better than alternative computing models. Applications of neural networks include pattern recognition, signal analysis, robotics, and expert systems. Risk analysis of a neural network is first performed using hazard and operability studies (HAZOPS). Determining the neural network requirements well helps in determining its contribution to system hazards and in validating the control or mitigation of any hazards. After the first stage (HAZOPS) is completed and the second stage determines the requirements, the next stage is design. Neural networks undergo repeated design-train-test development. At the design stage, the hazard analysis should consider the design aspects of the development, including the neural network architecture, size, intended use, and so on. The analysis continues through the implementation stage, test phase, installation and inspection phase, and operation phase, and ends at the maintenance stage.

  4. Using neural networks in software repositories

    Science.gov (United States)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  5. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed. And the difficulty in building and training the neural network is also reduced. Simulation results of Logistic map and Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network

  6. Neural Networks for Modeling and Control of Particle Accelerators

    CERN Document Server

    Edelen, A.L.; Chase, B.E.; Edstrom, D.; Milton, S.V.; Stabile, P.

    2016-01-01

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  7. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line of combining findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Deep Learning Neural Networks in Cybersecurity - Managing Malware with AI

    OpenAIRE

    Rayle, Keith

    2017-01-01

    There’s a lot of talk about the benefits of deep learning (neural networks) and how it’s the new electricity that will power us into the future. Medical diagnosis, computer vision and speech recognition are all examples of use-cases where neural networks are being applied in our everyday business environment. This begs the question…what are the uses of neural-network applications for cyber security? How does the AI process work when applying neural networks to detect malicious software bombar...

  9. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding for improved fiber quality. To demonstrate the applicability of this approach, the research used evaluation data from 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated as a weighted average of the score (1 to 5) assigned to each HVI characteristic evaluated, according to industry standards. The artificial neural networks showed a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index: when fiber length was used together with the short fiber index, fiber maturity, and micronaire index, the networks produced better results than when only fiber length or the previous associations were used. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better genotype classification. The results of the present study indicate that artificial neural networks have great potential for use in the different stages of a cotton breeding program aimed at improving the fiber quality of future cultivars.

  10. Noise Analysis studies with neural networks

    International Nuclear Information System (INIS)

    Seker, S.; Ciftcioglu, O.

    1996-01-01

    Noise analysis studies with neural networks are presented. Stochastic signals at the input of the network are used to obtain an algorithmic multivariate stochastic signal model. To this end, lattice modeling of a stochastic signal is performed to obtain backward residual noise sources that are uncorrelated among themselves. These are applied, together with an additional input, to the network to obtain an algorithmic model that is used for the detection of early failures in plant monitoring. The additional input provides the information the network needs to minimize the difference between the signal and the network's one-step-ahead prediction. A stochastic algorithm is used for training, in which the errors reflecting the measurement error during training are also modelled, so that fast and consistent convergence of the network's weights is obtained. The lattice structure coupled to the neural network is investigated with measured signals from an actual power plant. (authors)

  11. Neural networks and its application in biomedical engineering

    International Nuclear Information System (INIS)

    Husnain, S.K.; Bhatti, M.I.

    2002-01-01

    An artificial neural network (ANN) is an information processing system that has certain performance characteristics in common with biological neural networks. An artificial neural network is characterized by the connections between its neurons, the method of determining the weights on the connections, and its activation functions, while a biological neuron has three types of components of particular interest in understanding an artificial neuron: its dendrites, soma, and axon. The action of the chemical transmitter modifies the incoming signal. The study of neural networks is an extremely interdisciplinary field. Computer-based diagnosis is an increasingly used method that tries to improve the quality of health care. Systems based on neural networks have been developed extensively in the last ten years with the hope that medical diagnosis, and therefore medical care, would improve dramatically. The addition of a symbolic processing layer enhances ANNs in a number of ways. It is, for instance, possible to supplement a purely diagnostic network with a level that makes recommendations, or with additional nodes, in order to more closely simulate the nervous system. (author)

  12. Noise suppress or express exponential growth for hybrid Hopfield neural networks

    International Nuclear Information System (INIS)

    Zhu Song; Shen Yi; Chen Guici

    2010-01-01

    In this Letter, we show that noise can make a given hybrid Hopfield neural network whose solution may grow exponentially become a new stochastic hybrid Hopfield neural network whose solution grows at most polynomially. On the other hand, we also show that noise can make a given hybrid Hopfield neural network whose solution grows at most polynomially become a new stochastic hybrid Hopfield neural network whose solution grows exponentially. In other words, we reveal that noise can suppress or express exponential growth for hybrid Hopfield neural networks.

  13. Discriminating lysosomal membrane protein types using dynamic neural network.

    Science.gov (United States)

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology, which classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. In this paper, a neural network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, comprising seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman regression neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e., their output depends not only on the current input but also on previous outputs. The LRN classifier thus achieves the highest accuracy among all the artificial neural networks considered. The overall accuracy under jackknife cross-validation is 93.2% for the data set. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that this protein sequence representation reflects the core features of membrane proteins better than the classical AA composition.
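
    An illustrative sketch of the feature-reduction and classification pipeline described above (principal component analysis followed by a neural classifier), using scikit-learn on synthetic data; the feature matrix, labels, component count, and the use of an MLP in place of the recurrent classifiers compared in the paper are all assumptions.

        # Sketch: reduce a high-dimensional protein feature vector with PCA, then classify.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 420))            # synthetic stand-in for fused sequence features
        y = rng.integers(0, 2, size=300)           # 1 = lysosomal membrane protein, 0 = other (synthetic)

        clf = make_pipeline(PCA(n_components=30),
                            MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
        scores = cross_val_score(clf, X, y, cv=5)  # jackknife in the paper; 5-fold here for brevity
        print(scores.mean())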

  14. Stability of Neutral Fractional Neural Networks with Delay

    Institute of Scientific and Technical Information of China (English)

    LI Yan; JIANG Wei; HU Bei-bei

    2016-01-01

    This paper studies the stability of neutral fractional neural networks with delay. By introducing a norm and using uniform stability, a sufficient condition for the uniform stability of neutral fractional neural networks with delay is obtained.

  15. A novel recurrent neural network with finite-time convergence for linear programming.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.

  16. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models-also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.
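
    The following LaTeX fragment sketches the two ends of the transformation in generic notation; the exact form of the network equations and the symbols are assumed for illustration and are not copied from the paper.

        % Generic continuous-time recurrent neural network (notation assumed):
        \dot{x}_i = -x_i + \sum_{j} w_{ij}\,\sigma(x_j) + I_i ,
        \qquad \sigma(u) = \tanh(u) \ \text{or} \ \frac{1}{1+e^{-u}} .

        % After the change of coordinates through the quasi-monomial form, the dynamics
        % take the Lotka-Volterra (predator-prey) form in new variables y_k, with the
        % first variables reproducing the behaviour of the original network:
        \dot{y}_k = y_k \Bigl( \lambda_k + \sum_{l} A_{kl}\, y_l \Bigr).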

  17. Image Encryption and Chaotic Cellular Neural Network

    Science.gov (United States)

    Peng, Jun; Zhang, Du

    Machine learning has been playing an increasingly important role in information security and assurance. One of the areas of new applications is to design cryptographic systems by using chaotic neural network due to the fact that chaotic systems have several appealing features for information security applications. In this chapter, we describe a novel image encryption algorithm that is based on a chaotic cellular neural network. We start by giving an introduction to the concept of image encryption and its main technologies, and an overview of the chaotic cellular neural network. We then discuss the proposed image encryption algorithm in details, which is followed by a number of security analyses (key space analysis, sensitivity analysis, information entropy analysis and statistical analysis). The comparison with the most recently reported chaos-based image encryption algorithms indicates that the algorithm proposed in this chapter has a better security performance. Finally, we conclude the chapter with possible future work and application prospects of the chaotic cellular neural network in other information assurance and security areas.

  18. Neural networks to predict exosphere temperature corrections

    Science.gov (United States)

    Choury, Anna; Bruinsma, Sean; Schaeffer, Philippe

    2013-10-01

    Precise orbit prediction requires a forecast of the atmospheric drag force with a high degree of accuracy. Artificial neural networks are universal approximators derived from artificial intelligence and are widely used for prediction. This paper presents an artificial neural network method for prediction of thermosphere density by forecasting exospheric temperature, which will be used by the semiempirical thermosphere Drag Temperature Model (DTM) currently under development. Artificial neural networks have been shown to be effective and robust forecasting models for temperature prediction. The proposed model can be used for any mission from which temperature can be deduced accurately, i.e., it does not require specific training. Although the primary goal of the study was to create a model for one-day-ahead forecasts, the proposed architecture has been generalized to two- and three-day predictions as well. The impact of artificial neural network predictions has been quantified for the low-orbiting satellite Gravity Field and Steady-State Ocean Circulation Explorer in 2011, and an order of magnitude smaller orbit errors were found when compared with orbits propagated using the thermosphere model DTM2009.

  19. Integrating neural network technology and noise analysis

    International Nuclear Information System (INIS)

    Uhrig, R.E.; Oak Ridge National Lab., TN

    1995-01-01

    The integrated use of neural network and noise analysis technologies offers advantages not available by the use of either technology alone. The application of neural network technology to noise analysis offers an opportunity to expand the scope of problems where noise analysis is useful and provides unique ways in which the integration of these technologies can be used productively. The two-sensor technique, in which the responses of two sensors to an unknown driving source are related, is used to demonstrate such integration. The relationship between power spectral densities (PSDs) of accelerometer signals is derived theoretically using noise analysis to demonstrate its uniqueness. This relationship is modeled from experimental data using a neural network when the system is working properly, and the actual PSD of one sensor is compared with the PSD of that sensor predicted by the neural network using the PSD of the other sensor as an input. A significant deviation between the actual and predicted PSDs indicates that the system is changing (i.e., failing). Experiments carried out on check valves and bearings illustrate the usefulness of the methodology developed. (Author)
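
    A schematic of the two-sensor idea described above on simulated accelerometer signals; scipy's Welch PSD estimator and a small scikit-learn regressor stand in for whatever spectral estimator and network the author used, and the signal model is an assumption.

        # Sketch: learn the mapping between the PSDs of two sensors driven by a common
        # source, then flag deviations between predicted and measured PSDs.
        import numpy as np
        from scipy.signal import welch
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def psd_pair(n=4096, fs=1000.0, fault=False):
            drive = rng.normal(size=n)                     # unknown common driving source
            s1 = np.convolve(drive, [1.0, 0.5, 0.2], mode="same")
            path = [0.8, -0.3, 0.1] if not fault else [0.8, -0.3, 0.6]   # a fault changes the path
            s2 = np.convolve(drive, path, mode="same")
            f, p1 = welch(s1, fs=fs, nperseg=256)
            _, p2 = welch(s2, fs=fs, nperseg=256)
            return np.log(p1), np.log(p2)

        # Train on healthy operation: PSD of sensor 1 -> PSD of sensor 2.
        train = [psd_pair() for _ in range(100)]
        X = np.array([a for a, _ in train]); Y = np.array([b for _, b in train])
        net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0).fit(X, Y)

        # Monitoring: a large residual suggests the system is changing.
        a, b = psd_pair(fault=True)
        residual = np.mean((net.predict(a.reshape(1, -1))[0] - b) ** 2)
        print(residual)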

  20. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: 1) Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. 2) Amongst numerous training algorithms, only the Recursive Prediction Error Method using a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing an input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent system identification.
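
    A small system-identification sketch in the spirit of the restrictions listed above: a feed-forward network is fitted to lagged inputs and outputs of a simulated plant. This is a NARX approximation (the noise-term regressors that make the model NARMAX, and the Recursive Prediction Error training, are omitted); the plant, lag orders, and network size are assumptions.

        # Sketch: neural NARX identification of a simulated nonlinear plant.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        N = 2000
        u = rng.uniform(-1, 1, N)                       # excitation input
        y = np.zeros(N)
        for t in range(2, N):                           # "true" plant, used only to generate data
            y[t] = 0.6 * y[t-1] - 0.1 * y[t-2] + np.tanh(u[t-1]) + 0.3 * u[t-2] + 0.01 * rng.normal()

        # Regressor vector: [y(t-1), y(t-2), u(t-1), u(t-2)] -> y(t)
        X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
        target = y[2:]

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        model.fit(X[:1500], target[:1500])
        print(model.score(X[1500:], target[1500:]))     # one-step-ahead fit on held-out data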

  1. Open quantum generalisation of Hopfield neural networks

    Science.gov (United States)

    Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.

    2018-03-01

    We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.

  2. Stock market index prediction using neural networks

    Science.gov (United States)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index has been used as a benchmark in our experiments, in which Radial Basis Function based neural networks have been designed to model the index over the period from January 1988 to December 1992. Notable success has been achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network represents an excellent candidate for predicting stock market indices.
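
    A compact sketch of a Radial Basis Function network of the kind used in the study, with k-means centres and a least-squares output layer; the synthetic series, number of centres, and width are illustrative assumptions, not the paper's configuration.

        # Sketch: RBF network for one-step-ahead prediction of a (synthetic) index series.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        series = np.cumsum(rng.normal(size=400)) + 100.0      # toy "monthly index"

        # Inputs: the last 3 values; target: the next value.
        X = np.column_stack([series[2:-1], series[1:-2], series[:-3]])
        y = series[3:]

        centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
        width = np.mean(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2))

        def rbf_features(X):
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            return np.exp(-(d / width) ** 2)

        Phi = np.column_stack([rbf_features(X), np.ones(len(X))])   # RBF layer plus bias
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                 # linear output layer
        pred = Phi @ w
        print(np.mean(np.abs(pred - y)))                            # in-sample mean absolute error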

  3. Ideomotor feedback control in a recurrent neural network.

    Science.gov (United States)

    Galtier, Mathieu

    2015-06-01

    The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should predict reliably its incoming stimuli; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show a promising performance of the approach, which can be turned into a local and better "biologically plausible" algorithm.

  4. Discrete-time BAM neural networks with variable delays

    Science.gov (United States)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  5. Discrete-time BAM neural networks with variable delays

    International Nuclear Information System (INIS)

    Liu Xinge; Tang Meilan; Martin, Ralph; Liu Xinbi

    2007-01-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development

  6. Designing neural networks that process mean values of random variables

    International Nuclear Information System (INIS)

    Barber, Michael J.; Clark, John W.

    2014-01-01

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence

  7. Designing neural networks that process mean values of random variables

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Michael J. [AIT Austrian Institute of Technology, Innovation Systems Department, 1220 Vienna (Austria); Clark, John W. [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Centro de Ciências Matemáticas, Universidade de Madeira, 9000-390 Funchal (Portugal)

    2014-06-13

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence.

  8. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable with respect to the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
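
    A schematic of the Kalman-filtering half of the pipeline described above, applied to a synthetic autoregressive signal; here the AR coefficients are obtained by ordinary least squares as a stand-in for the paper's recurrent-network estimator, and all model orders and noise levels are assumptions.

        # Sketch: enhance a noisy AR signal with a Kalman filter built from estimated AR parameters.
        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 2000, 2
        clean = np.zeros(n)
        for t in range(p, n):                              # synthetic "speech" as an AR(2) process
            clean[t] = 1.3 * clean[t-1] - 0.6 * clean[t-2] + 0.1 * rng.normal()
        noisy = clean + 0.3 * rng.normal(size=n)

        # AR coefficients by least squares (stand-in for the RNN estimator in the paper).
        A = np.column_stack([noisy[p-1:-1], noisy[p-2:-2]])
        a = np.linalg.lstsq(A, noisy[p:], rcond=None)[0]

        # State-space form: x_t = F x_{t-1} + w,  observation z_t = H x_t + v.
        F = np.array([[a[0], a[1]], [1.0, 0.0]])
        H = np.array([[1.0, 0.0]])
        Q = np.diag([0.01, 0.0]); R = np.array([[0.09]])
        x = np.zeros((2, 1)); P = np.eye(2)
        enhanced = np.zeros(n)
        for t in range(n):
            x = F @ x; P = F @ P @ F.T + Q                 # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
            x = x + K @ (noisy[t] - H @ x)                 # update with the noisy observation
            P = (np.eye(2) - K @ H) @ P
            enhanced[t] = x[0, 0]

        print(np.mean((enhanced - clean) ** 2), np.mean((noisy - clean) ** 2))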

  9. Multiple simultaneous fault diagnosis via hierarchical and single artificial neural networks

    International Nuclear Information System (INIS)

    Eslamloueyan, R.; Shahrokhi, M.; Bozorgmehri, R.

    2003-01-01

    Process fault diagnosis involves interpreting the current status of the plant given sensor readings and process knowledge. There has been considerable work done in this area, with a variety of approaches being proposed for process fault diagnosis. Neural networks have been used to solve process fault diagnosis problems in chemical processes, as they are well suited for recognizing multi-dimensional nonlinear patterns. In this work, the use of Hierarchical Artificial Neural Networks in diagnosing multiple faults of a chemical process is discussed and compared with that of Single Artificial Neural Networks. The lower efficiency of Hierarchical Artificial Neural Networks, in comparison to Single Artificial Neural Networks, in process fault diagnosis is elaborated and analyzed. Also, the concept of a multi-level selection switch is presented and developed to improve the performance of hierarchical artificial neural networks. Simulation results indicate that application of the multi-level selection switch increases the performance of hierarchical artificial neural networks considerably

  10. A fuzzy Hopfield neural network for medical image segmentation

    International Nuclear Information System (INIS)

    Lin, J.S.; Cheng, K.S.; Mao, C.W.

    1996-01-01

    In this paper, an unsupervised parallel segmentation approach using a fuzzy Hopfield neural network (FHNN) is proposed. The main purpose is to embed fuzzy clustering into neural networks so that on-line learning and parallel implementation for medical image segmentation are feasible. The idea is to cast the clustering problem as a minimization problem in which the criterion for the optimum segmentation is the minimization of the Euclidean distance between samples and class centers. In order to generate feasible results, a fuzzy c-means clustering strategy is included in the Hopfield neural network to eliminate the need to find weighting factors in the energy function, which is formulated based on a basic concept commonly used in pattern classification: the within-class scatter matrix principle. The suggested fuzzy c-means clustering strategy has also been proven to be convergent and to allow the network to learn more effectively than the conventional Hopfield neural network. The fuzzy Hopfield neural network based on the within-class scatter matrix shows promising results in comparison with the hard c-means method
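
    The clustering criterion embedded in the FHNN is essentially that of fuzzy c-means; the following numpy sketch applies plain fuzzy c-means to the intensities of a synthetic two-class image as an illustration of that criterion, not of the parallel Hopfield implementation itself (cluster count, fuzzifier, and data are assumptions).

        # Sketch: fuzzy c-means on pixel intensities, the criterion embedded in the FHNN.
        import numpy as np

        rng = np.random.default_rng(0)
        image = np.concatenate([rng.normal(0.2, 0.05, 2000),      # synthetic two-tissue image
                                rng.normal(0.7, 0.05, 2000)])
        x = image.reshape(-1, 1)

        c, m = 2, 2.0                                              # clusters and fuzzifier
        u = rng.random((len(x), c)); u /= u.sum(axis=1, keepdims=True)
        for _ in range(100):
            centers = (u ** m).T @ x / (u ** m).sum(axis=0)[:, None]
            d = np.abs(x - centers.T) + 1e-12                      # distances to each centre
            u = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))

        labels = u.argmax(axis=1)                                  # hard segmentation from memberships
        print(centers.ravel())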

  11. Identification of generalized state transfer matrix using neural networks

    International Nuclear Information System (INIS)

    Zhu Changchun

    2001-01-01

    Research on the identification of the generalized state transfer matrix of a linear time-invariant (LTI) system using neural networks based on the Levenberg-Marquardt (LM) algorithm is introduced. Firstly, the generalized state transfer matrix is defined. The relationship between the identification of the state transfer matrix in structural dynamics and the identification of the weight matrix of a neural network is established in theory. A single-layer neural network is adopted to obtain the structural parameters, as a powerful tool with parallel distributed processing ability and the property of adaptation or learning. The constraint condition on the weight matrix of the neural network is deduced so that learning and training of the designed network can be more effective. The identified neural network can be used to simulate the structural response excited by any other signals. In order to address its further application to practical problems, some noise (5% and 10%) is assumed to be present in the response measurements. Results from computer simulation studies show that this method is valid and feasible

  12. A one-layer recurrent neural network for constrained nonsmooth optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-10-01

    This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.

  13. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others:MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  14. Artificial neural network based approach to transmission lines protection

    International Nuclear Information System (INIS)

    Joorabian, M.

    1999-05-01

    The aim of this paper is to present an accurate fault detection technique for high-speed distance protection using artificial neural networks. The feed-forward multi-layer neural network with supervised learning and the common training rule of error back-propagation is chosen for this study. Information available locally at the relay point is passed to a neural network in order for an assessment of the fault location to be made. However, in practice there is a large amount of information available, and a feature extraction process is required to reduce the dimensionality of the pattern vectors, whilst retaining important information that distinguishes the fault point. The choice of features is critical to the performance of the neural network's learning and operation. A significant feature of this paper is that an artificial neural network has been designed and tested to enhance the precision of the adaptive capabilities for distance protection

  15. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  16. Stability of a neural network model with small-world connections

    International Nuclear Information System (INIS)

    Li Chunguang; Chen Guanrong

    2003-01-01

    Small-world networks are highly clustered networks with small distances among the nodes. There are many biological neural networks that present this kind of connection. There are no special weightings in the connections of most existing small-world network models. However, this kind of simply connected model cannot characterize biological neural networks, in which there are different weights in synaptic connections. In this paper, we present a neural network model with weighted small-world connections and further investigate the stability of this model

  17. Neural networks and their potential application to nuclear power plants

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1991-01-01

    A network of artificial neurons, usually called an artificial neural network, is a data processing system consisting of a number of highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks exhibit characteristics and capabilities not provided by any other technology. Neural networks may be designed so as to classify an input pattern as one of several predefined types or to create, as needed, categories or classes of system states which can be interpreted by a human operator. Neural networks have the ability to recognize patterns, even when the information comprising these patterns is noisy, sparse, or incomplete. Thus, systems of artificial neural networks show great promise for use in environments in which robust, fault-tolerant pattern recognition is necessary in a real-time mode, and in which the incoming data may be distorted or noisy. The application of neural networks, a rapidly evolving technology used extensively in defense applications, alone or in conjunction with other advanced technologies, to some of the problems of operating nuclear power plants has the potential to enhance the safety, reliability and operability of nuclear power plants. The potential applications of neural networking include, but are not limited to, diagnosing specific abnormal conditions, identification of nonlinear dynamics and transients, detection of the change of mode of operation, control of temperature and pressure during start-up, signal validation, plant-wide monitoring using autoassociative neural networks, monitoring of check valves, modeling of the plant thermodynamics, emulation of core reload calculations, analysis of temporal sequences in NRC's "licensee event reports," and monitoring of plant parameters

  18. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Gallego D, E.; Lorente F, A.; Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E.

    2011-01-01

    With the Bonner spheres spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, Regularization, Parametrization, Least-squares, and Maximum Entropy are some of the techniques utilized for unfolding. In the last decade, methods based on Artificial Intelligence technology have been used. Approaches based on Genetic Algorithms and Artificial Neural Networks (ANNs) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANNs, they still have some drawbacks, mainly in the design process of the network, e.g., the optimum selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized to address this. In this work, several ANN topologies were trained and tested using ANNs and Genetically Evolved Artificial Neural Networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. Here, a comparative study of both procedures has been carried out. (Author)

  19. Anomaly detection in an automated safeguards system using neural networks

    International Nuclear Information System (INIS)

    Whiteson, R.; Howell, J.A.

    1992-01-01

    An automated safeguards system must be able to detect an anomalous event, identify the nature of the event, and recommend a corrective action. Neural networks represent a new way of thinking about basic computational mechanisms for intelligent information processing. In this paper, we discuss the issues involved in applying a neural network model to the first step of this process: anomaly detection in materials accounting systems. We extend our previous model to a 3-tank problem and compare different neural network architectures and algorithms. We evaluate the computational difficulties in training neural networks and explore how certain design principles affect the problems. The issues involved in building a neural network architecture include how the information flows, how the network is trained, how the neurons in a network are connected, how the neurons process information, and how the connections between neurons are modified. Our approach is based on the demonstrated ability of neural networks to model complex, nonlinear, real-time processes. By modeling the normal behavior of the processes, we can predict how a system should be behaving and, therefore, detect when an abnormality occurs

  20. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.

  1. Introduction to spiking neural networks: Information processing, learning and applications.

    Science.gov (United States)

    Ponulak, Filip; Kasinski, Andrzej

    2011-01-01

    The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.

  2. Neural-Network Object-Recognition Program

    Science.gov (United States)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  3. Character recognition from trajectory by recurrent spiking neural networks.

    Science.gov (United States)

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks is still a problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed, in which varying time ranges of input streams are used in different recurrent layers. This is able to improve the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from University of Edinburgh. The results show that our method can achieve a higher average recognition accuracy than existing methods.

  4. Efficient Cancer Detection Using Multiple Neural Networks.

    Science.gov (United States)

    Shell, John; Gregory, William D

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNN). A data set consisting of 180 malignant and 180 benign breast tissue data files in an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, were utilized as a neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results with consistent sensitivity of 100% and a specificity of 100%. This implementation successfully relied solely on statistical variation between the benign and malignant impedance data and intricate neural network configuration. This device and BNN implementation provides a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast non-invasive accurate assessment of biopsied or sectioned excised tissue in various clinical settings.
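
    A simplified sketch of the multi-network consensus idea (several back-propagation classifiers voting on a malignant/benign label), using scikit-learn on synthetic impedance-like features; the number and sizes of the networks, the simple majority vote in place of the tiered select-four scheme, and the data are all assumptions.

        # Sketch: a committee of backpropagation networks voting on a binary tissue label.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 1.0, (180, 47)),     # synthetic "benign" impedance spectra
                       rng.normal(0.7, 1.0, (180, 47))])    # synthetic "malignant" spectra
        y = np.array([0] * 180 + [1] * 180)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

        committee = [MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=s).fit(X_tr, y_tr)
                     for s, h in enumerate([8, 12, 16, 20, 24, 32])]   # six differently configured BNNs

        votes = np.array([net.predict(X_te) for net in committee])
        consensus = (votes.mean(axis=0) >= 0.5).astype(int)  # simple majority in place of the tiered scheme
        print((consensus == y_te).mean())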

  5. Potential applications of neural networks to nuclear power plants

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1991-01-01

    Application of neural networks to the operation of nuclear power plants is being investigated under a US Department of Energy sponsored program at the University of Tennessee. Projects include the feasibility of using neural networks for the following tasks: diagnosing specific abnormal conditions, detection of the change of mode of operation, signal validation, monitoring of check valves, plant-wide monitoring using autoassociative neural networks, modeling of the plant thermodynamics, emulation of core reload calculations, monitoring of plant parameters, and analysis of plant vibrations. Each of these projects and its status are described briefly in this article. The objective of each of these projects is to enhance the safety and performance of nuclear plants through the use of neural networks

  6. Neural network models: from biology to many - body phenomenology

    International Nuclear Information System (INIS)

    Clark, J.W.

    1993-01-01

    This article surveys the current surge of research on the practical side of neural networks and their utility in memory storage/recall, pattern recognition, and classification. The initial attraction of neural networks as dynamical and statistical systems is also examined. From the viewpoint of a many-body theorist, the neurons may be thought of as particles, and the weighted connections between the units as the interactions between these particles. Finally, the author notes that the impressive capabilities of artificial neural networks in pattern recognition and classification may be exploited to solve data management problems in experimental physics and to aid the discovery of radically new theoretical descriptions of physical problems, showing that neural networks can be used in physics. (A.B.)

  7. Neural network for solving convex quadratic bilevel programming problems.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Neural network application to diesel generator diagnostics

    International Nuclear Information System (INIS)

    Logan, K.P.

    1990-01-01

    Diagnostic problems typically begin with the observation of some system behavior which is recognized as a deviation from the expected. The fundamental underlying process is one involving pattern matching of observed symptoms to a set of compiled symptoms belonging to a fault-symptom mapping. Pattern recognition is often relied upon for initial fault detection and diagnosis. Parallel distributed processing (PDP) models employing neural network paradigms are known to be good pattern recognition devices. This paper describes the application of neural network processing techniques to the malfunction diagnosis of subsystems within a typical diesel generator configuration. Neural network models employing backpropagation learning were developed to correctly recognize fault conditions from the input diagnostic symptom patterns pertaining to various engine subsystems. The resulting network models proved to be excellent pattern recognizers for malfunction examples within the training set. The motivation for employing network models in lieu of a rule-based expert system, however, is related to the network's potential for generalizing to malfunctions outside of the training set, as in the case of noisy or partial symptom patterns
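
    A minimal sketch of the symptom-pattern-to-fault mapping idea, assuming invented binary symptom vectors and fault labels; scikit-learn's MLPClassifier stands in for the backpropagation networks described above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Rows are binary symptom patterns (e.g. high exhaust temperature, low oil
# pressure, abnormal vibration, ...); labels are fault classes.  Both are
# invented here purely for illustration.
X = np.array([
    [1, 0, 0, 1],   # symptoms typical of fault A
    [1, 0, 1, 1],
    [0, 1, 0, 0],   # symptoms typical of fault B
    [0, 1, 1, 0],
    [0, 0, 1, 1],   # symptoms typical of fault C
    [0, 0, 1, 0],
])
y = ["A", "A", "B", "B", "C", "C"]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
clf.fit(X, y)

# A noisy / partial symptom pattern: the network generalizes rather than
# requiring an exact match, which is the motivation given in the abstract.
print(clf.predict([[1, 0, 0, 0]]))
```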

  9. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  10. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  11. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
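
    A small sketch of a map-based pair of neurons coupled by a fast-threshold-modulation-style synapse. The chaotic Rulkov map is used as the neuron model, and the parameter values, threshold and coupling form are illustrative assumptions rather than those of the reviewed models.

```python
import numpy as np

def rulkov_step(x, y, alpha=4.3, mu=0.001, sigma=0.1, I=0.0):
    """One iteration of a (chaotic) Rulkov map-based neuron: x is the fast
    spiking variable, y the slow variable, I an external/synaptic input."""
    x_new = alpha / (1.0 + x * x) + y + I
    y_new = y - mu * (x + 1.0) + mu * sigma
    return x_new, y_new

# Neuron 0 drives neuron 1; the synapse is active only while the
# presynaptic fast variable exceeds a threshold (FTM-style switching).
theta, g_syn, E_syn = 0.0, 0.2, 1.0          # illustrative values
x0, y0, x1, y1 = -1.0, -2.9, -1.2, -2.9
trace = []
for _ in range(5000):
    I1 = g_syn * (E_syn - x1) if x0 > theta else 0.0
    x0, y0 = rulkov_step(x0, y0)
    x1, y1 = rulkov_step(x1, y1, I=I1)
    trace.append((x0, x1))
trace = np.array(trace)                      # two fast-variable time series
```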

  12. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of models for individual components and the overall system has been verified. The system response against given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability whereas the reactivity estimation capability is not significantly affected.

  13. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of models for individual components and the overall system has been verified. The system response against given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability whereas the reactivity estimation capability is not significantly affected

  14. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    An original application of a radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The property of massive parallelism of neural networks is employed for this. The application of the RBF neural network for state estimation is investigated by testing its applicability on an IEEE 14 bus ...

  15. Wind power prediction based on genetic neural network

    Science.gov (United States)

    Zhang, Suhan

    2017-04-01

    The scale of grid connected wind farms keeps increasing. To ensure stable power system operation, produce a reasonable scheduling scheme and improve the competitiveness of wind farms in the electricity generation market, it is important to forecast short-term wind power accurately. To reduce the influence of the nonlinear relationship between the disturbance factors and the wind power, an improved prediction model based on a genetic algorithm and a neural network is established. To overcome the long training time of the BP neural network and its tendency to fall into local minima, and to improve the accuracy of the neural network, a genetic algorithm is adopted to optimize the parameters and topology of the neural network. Historical data are used as input to predict short-term wind power. The effectiveness and feasibility of the method are verified using actual data from a wind farm as an example.
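
    A minimal numpy sketch of the genetic idea: a population of weight vectors for a small network is evolved against a toy wind-speed-to-power curve. The paper also optimizes the network topology, which is omitted here, and all data, fitness choices and GA parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "wind speed -> power" data standing in for historical wind-farm records.
speed = np.linspace(0.0, 1.0, 40)
power = np.clip(speed ** 3, 0.0, 0.7) + 0.02 * rng.normal(size=speed.size)

def predict(w, x, n_hidden=6):
    """Tiny 1-6-1 network; w is the flat parameter vector evolved by the GA."""
    w1, b1 = w[:n_hidden], w[n_hidden:2 * n_hidden]
    w2, b2 = w[2 * n_hidden:3 * n_hidden], w[-1]
    h = np.tanh(np.outer(x, w1) + b1)
    return h @ w2 + b2

def fitness(w):
    return -np.mean((predict(w, speed) - power) ** 2)    # GA maximizes fitness

n_params = 3 * 6 + 1
pop = rng.normal(scale=0.5, size=(60, n_params))
for generation in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]              # keep the 20 fittest
    children = parents[rng.integers(0, 20, size=40)] \
        + rng.normal(scale=0.05, size=(40, n_params))    # mutation only
    pop = np.vstack([parents, children])                 # elitism + offspring

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("final mean squared error:", -fitness(best))
```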

  16. Introduction to neural networks in high energy physics

    International Nuclear Information System (INIS)

    Therhaag, J.

    2013-01-01

    Artificial neural networks are a well established tool in high energy physics, playing an important role in both online and offline data analysis. Nevertheless they are often perceived as black boxes which perform obscure operations beyond the control of the user, resulting in a skepticism against any results that may be obtained using them. The situation is not helped by common explanations which try to draw analogies between artificial neural networks and the human brain, for the brain is an even more complex black box itself. In this introductory text, I will take a problem-oriented approach to neural network techniques, showing how the fundamental concepts arise naturally from the demand to solve classification tasks which are frequently encountered in high energy physics. Particular attention is devoted to the question how probability theory can be used to control the complexity of neural networks. (authors)

  17. Neural networks for event filtering at D0

    International Nuclear Information System (INIS)

    Cutts, D.; Hoftun, J.S.; Sornborger, A.; Johnson, C.R.; Zeller, R.T.

    1989-01-01

    Neural networks may provide important tools for pattern recognition in high energy physics. We discuss an initial exploration of these techniques, presenting the result of network simulations of several filter algorithms. The D0 data acquisition system, a MicroVAX farm, will perform critical event selection; we describe a possible implementation of neural network algorithms in this system. 7 refs., 4 figs

  18. Neural network segmentation of magnetic resonance images

    International Nuclear Information System (INIS)

    Frederick, B.

    1990-01-01

    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network; by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. This paper reports that a neural network classifier for image segmentation was implemented on a Sun 4/60, and was tested on the task of classifying tissues of canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone, and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier
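
    A brief sketch of the input construction described above: four co-registered images of the same slice are turned into one four-dimensional intensity vector per pixel, a small preclassified subset is used for training, and the trained classifier labels every pixel. The arrays, labels and classifier settings are placeholders, not the study's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Four co-registered images of the same slice (three spin-echo, one
# inversion recovery in the study); random arrays stand in for real data.
rng = np.random.default_rng(0)
images = [rng.random((256, 256)) for _ in range(4)]

# Each pixel becomes a 4-dimensional feature vector of its intensities
# across the four acquisitions.
X = np.stack([im.ravel() for im in images], axis=1)     # shape (256*256, 4)

# The study used 691 expert-labeled pixels; indices and labels here are
# placeholders for that training set (5 tissue classes).
train_idx = rng.choice(X.shape[0], size=691, replace=False)
train_labels = rng.integers(0, 5, size=691)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
clf.fit(X[train_idx], train_labels)
segmentation = clf.predict(X).reshape(256, 256)         # label image
```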

  19. The application of artificial neural networks to TLD dose algorithm

    International Nuclear Information System (INIS)

    Moscovitch, M.

    1997-01-01

    We review the application of feed forward neural networks to multi element thermoluminescence dosimetry (TLD) dose algorithm development. A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on a neural network is a fundamentally different approach from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with a given response of a multi-element dosimeter (input) many times. The algorithm, trained in this way, eventually is able to produce its own unique solution to similar (but not exactly the same) dose calculation problems. For personnel dosimetry, the output consists of the desired dose components: deep dose, shallow dose, and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. For this application, a neural network architecture was developed based on the concept of the functional link network (FLN). The FLN concept allowed an increase in the dimensionality of the input space and construction of a neural network without any hidden layers. This simplifies the problem and results in a relatively simple and reliable dose calculation algorithm. Overall, the neural network dose algorithm approach has been shown to significantly improve the precision and accuracy of dose calculations. (authors)
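
    A hedged sketch of the functional link idea: the raw multi-element response is expanded with nonlinear terms so that a single layer with no hidden units can be trained on the enhanced input space. The particular expansion, the least-squares fit standing in for gradient training, and all data below are assumptions for illustration.

```python
import numpy as np

def functional_link_features(x):
    """Expand a raw TL-response vector with simple nonlinear terms.

    The expansion below (squares and pairwise products) is only one
    possible choice; the actual link functions used in the TLD algorithm
    are not reproduced here.
    """
    x = np.asarray(x, dtype=float)
    squares = x ** 2
    cross = np.array([x[i] * x[j]
                      for i in range(len(x)) for j in range(i + 1, len(x))])
    return np.concatenate([[1.0], x, squares, cross])    # bias + expanded inputs

# With the enhanced input space, a single linear layer (no hidden layer)
# can be trained; ordinary least squares stands in for gradient training.
rng = np.random.default_rng(1)
raw = rng.random((50, 4))                                # 4-element dosimeter readings (fake)
doses = rng.random((50, 3))                              # deep, shallow, eye dose (fake)
Phi = np.array([functional_link_features(r) for r in raw])
W, *_ = np.linalg.lstsq(Phi, doses, rcond=None)
predicted = Phi @ W
```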

  20. Vibration monitoring with artificial neural networks

    International Nuclear Information System (INIS)

    Alguindigue, I.

    1991-01-01

    Vibration monitoring of components in nuclear power plants has been used for a number of years. This technique involves the analysis of vibration data coming from vital components of the plant to detect features which reflect the operational state of machinery. The analysis leads to the identification of potential failures and their causes, and makes it possible to perform efficient preventive maintenance. Early detection is important because it can decrease the probability of catastrophic failures, reduce forced outages, maximize utilization of available assets, increase the life of the plant, and reduce maintenance costs. This paper documents our work on the design of a vibration monitoring methodology based on neural network technology. This technology provides an attractive complement to traditional vibration analysis because of the potential of neural networks to operate in real-time mode and to handle data which may be distorted or noisy. Our efforts have been concentrated on the analysis and classification of vibration signatures collected from operating machinery. Two neural network algorithms were used in our project: the Recirculation algorithm for data compression and the Backpropagation algorithm to perform the actual classification of the patterns. Although this project is in the early stages of development, it indicates that neural networks may provide a viable methodology for monitoring and diagnostics of vibrating components. Our results to date are very encouraging

  1. Neural networks and orbit control in accelerators

    International Nuclear Information System (INIS)

    Bozoki, E.; Friedman, A.

    1994-01-01

    An overview of the architecture, workings and training of Neural Networks is given. We stress the aspects which are important for the use of Neural Networks for orbit control in accelerators and storage rings, especially its ability to cope with the nonlinear behavior of the orbit response to 'kicks' and the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given

  2. A comparative study of two neural networks for document retrieval

    International Nuclear Information System (INIS)

    Hui, S.C.; Goh, A.

    1997-01-01

    In recent years there has been specific interest in adopting advanced computer techniques in the field of document retrieval. This interest is generated by the fact that classical methods such as the Boolean search, the vector space model or even probabilistic retrieval cannot handle the increasing demands of end-users in satisfying their needs. The most recent attempt is the application of the neural network paradigm as a means of providing end-users with a more powerful retrieval mechanism. Neural networks are not only good pattern matchers but also highly versatile and adaptable. In this paper, we demonstrate how to apply two neural networks, namely Adaptive Resonance Theory and Fuzzy Kohonen Neural Network, for document retrieval. In addition, a comparison of these two neural networks based on performance is also given

  3. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered-a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses connection weights formed in the preliminary learning process of a neural network classifier. In the experiments using the MNIST database of handwritten digits, the feature selection procedure allows reduction of feature number (from 60 000 to 7000) preserving comparable recognition capability while accelerating computations. Experimental comparison between the LiRA perceptron and the modular assembly neural network is accomplished, which shows that recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. 23rd Workshop of the Italian Neural Networks Society (SIREN)

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2014-01-01

    This volume collects a selection of contributions which have been presented at the 23rd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare, Salerno, Italy during May 23-24, 2013. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book, as well as the workshop, is organized in two main components: a special session and a group of regular sessions featuring different aspects and points of view on artificial neural networks, artificial and natural intelligence, as well as psychological and cognitive theories for modeling human behaviors and human machine interactions, including Information Communication applications of compelling interest.

  5. A Recurrent Neural Network for Nonlinear Fractional Programming

    Directory of Open Access Journals (Sweden)

    Quan-Ju Zhang

    2012-01-01

    Full Text Available This paper presents a novel recurrent time continuous neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to demonstrate further the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.

  6. Sonar discrimination of cylinders from different angles using neural networks

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whitlow; Larsen, Jan

    1999-01-01

    This paper describes an underwater object discrimination system applied to recognize cylinders of various compositions from different angles. The system is based on a new combination of simulated dolphin clicks, simulated auditory filters and artificial neural networks. The model demonstrates its...

  7. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.

  8. Neutron spectra unfolding in Bonner spheres spectrometry using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Setayeshi, S.; Koohi-Fayegh, R.; Ghiassi-Nejad, M.

    2003-01-01

    The neural network method has been used for the unfolding of neutron spectra in neutron spectrometry by Bonner spheres. A back-propagation algorithm was used for training of the neural networks. A 4 mm x 4 mm bare LiI(Eu) detector and a polyethylene sphere set of 2, 3, 4, 5, 6, 7, 8, 10, 12 and 18 inch diameters have been used for unfolding of neutron spectra. Neural networks were trained by 199 sets of neutron spectra, which were subdivided into 6, 8, 10, 12, 15 and 20 energy bins, and for each of them an appropriate neural network was designed and trained. The validation was performed by 21 sets of neutron spectra. A neural network with 10 energy bins, which had a mean error of 6% for dose equivalent estimation of the spectra in the validation set, showed the best results. The obtained results show that neural networks can be applied as an effective method for unfolding neutron spectra, especially when the main target is neutron dosimetry. (author)

  9. SOLAR PHOTOVOLTAIC OUTPUT POWER FORECASTING USING BACK PROPAGATION NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    B. Jency Paulin

    2016-01-01

    Full Text Available Solar Energy is an important renewable and unlimited source of energy. Solar photovoltaic power forecasting is an estimation of the expected power production that helps grid operators to better manage the electric balance between power demand and supply. A neural network is a computational model that can predict new outcomes from past trends. The artificial neural network is used for photovoltaic plant energy forecasting. The output power of the solar photovoltaic cell is predicted on an hourly basis. In the historical dataset collection process, two datasets were collected and used for analysis. The datasets provide three independent attributes and one dependent attribute. The artificial neural network structure is implemented as a multilayer perceptron (MLP) and the training procedure for the neural network is error back propagation (BP). In order to train and test the neural network, the datasets are divided in the ratio 70:30. Prediction accuracy is assessed using various error measurement criteria to characterize the performance of the neural network.
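
    A short sketch of the workflow described above (three inputs, one output, 70:30 split, MLP trained by backpropagation), using synthetic data and scikit-learn as stand-ins for the actual dataset and implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the hourly dataset: three independent attributes
# (e.g. irradiance, temperature, hour) and one dependent attribute (power).
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (0.8 * X[:, 0] + 0.1 * X[:, 1]
     + 0.05 * np.sin(2 * np.pi * X[:, 2])
     + 0.02 * rng.normal(size=500))

# 70:30 train/test split as described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
mlp.fit(X_tr, y_tr)
pred = mlp.predict(X_te)
print("test RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))
```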

  10. Modeling of steam generator in nuclear power plant using neural network ensemble

    International Nuclear Information System (INIS)

    Lee, S. K.; Lee, E. C.; Jang, J. W.

    2003-01-01

    Neural networks are now being used to model the steam generator, which is known to be difficult to model due to its reverse dynamics. However, neural networks are prone to the problem of overfitting. This paper investigates the use of neural network combining methods to model the steam generator water level and compares them with a single neural network. The results show that a neural network ensemble is an effective tool which can offer improved generalization, lower dependence on the training set and reduced training time
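
    A minimal sketch of one common combining scheme: several networks are trained on bootstrap resamples and their predictions averaged. The data and the particular ensemble rule are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Fake steam-generator-like data: four inputs -> water level (illustration only).
X = rng.random((300, 4))
y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + 0.05 * rng.normal(size=300)

members = []
for k in range(5):                       # train each member on a bootstrap resample
    idx = rng.integers(0, len(X), size=len(X))
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=k)
    members.append(net.fit(X[idx], y[idx]))

def ensemble_predict(X_new):
    # Simple averaging; weighted combinations are another common choice.
    return np.mean([m.predict(X_new) for m in members], axis=0)

print(ensemble_predict(X[:3]))
```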

  11. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  12. Neural Networks through Shared Maps in Mobile Devices

    Directory of Open Access Journals (Sweden)

    William Raveane

    2014-12-01

    Full Text Available We introduce a hybrid system composed of a convolutional neural network and a discrete graphical model for image recognition. This system improves upon traditional sliding window techniques for analysis of an image larger than the training data by effectively processing the full input scene through the neural network in less time. The final result is then inferred from the neural network output through energy minimization to reach a more precise localization than what traditional maximum value class comparisons yield. These results make the process suitable for real-time image recognition on a mobile device.

  13. Active Engine Mounting Control Algorithm Using Neural Network

    Directory of Open Access Journals (Sweden)

    Fadly Jashi Darsivan

    2009-01-01

    Full Text Available This paper proposes the application of a neural network as a controller to isolate engine vibration in an active engine mounting system. It has been shown that the NARMA-L2 neurocontroller has the ability to reject disturbances from a plant. The disturbances are assumed to be both impulse and sinusoidal disturbances induced by the engine. The performance of the neural network controller is compared with conventional PD and PID controllers tuned using the Ziegler-Nichols method. Simulation results show that the neural network controller isolates the engine vibration better than the conventional controllers.

  14. Toward automatic time-series forecasting using neural networks.

    Science.gov (United States)

    Yan, Weizhong

    2012-07-01

    Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, which does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.
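
    The GRNN at the core of the scheme is essentially a kernel-weighted average of training targets with a single smoothing parameter. The bare-bones sketch below shows that estimator on a toy lagged series; the fusion of multiple GRNNs and the competition setup are not reproduced.

```python
import numpy as np

def grnn_predict(X_train, y_train, x_query, sigma=0.1):
    """Generalized regression neural network: a Gaussian-kernel-weighted
    average of the training targets with one smoothing parameter sigma."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sum(w * y_train) / (np.sum(w) + 1e-12)

# Toy one-step-ahead forecast from lagged values of a synthetic series.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + 0.05 * rng.normal(size=200)
lags = 4
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
print(grnn_predict(X[:-1], y[:-1], X[-1], sigma=0.2), "vs actual", y[-1])
```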

  15. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.

  16. Fast neutron spectra determination by threshold activation detectors using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Koohi-Fayegh, R.; Setayeshi, S.; Ghiassi-Nejad, M.

    2004-01-01

    The neural network method was used for fast neutron spectra unfolding in spectrometry by threshold activation detectors. The input layer of the neural networks consisted of 11 neurons for the specific activities of neutron-induced nuclear reaction products, while the output layers were fast neutron spectra which had been subdivided into 6, 8, 10, 12, 15 and 20 energy bins. Neural network training was performed with 437 fast neutron spectra and corresponding threshold activation detector readings. The trained neural networks have been applied for unfolding 50 spectra that were not in the training sets, and the results were compared with the real spectra and with spectra unfolded by SANDII. The best results were obtained for the 10 energy bin spectra. The neural network was also trained by detector readings with 5% uncertainty, and the response of the trained neural network to detector readings with 5%, 10%, 15%, 20%, 25% and 50% uncertainty was compared with the real spectra. The neural network algorithm, in comparison with other unfolding methods, is very fast, requires neither a detector response matrix nor any prior information about the spectra, and its outputs have low sensitivity to uncertainty in the activity measurements. The results show that the neural network algorithm is useful when a fast response is required with reasonable accuracy

  17. Modeling and Speed Control of Induction Motor Drives Using Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Jamuna

    2010-08-01

    Full Text Available Speed control of induction motor drives using neural networks is presented. The mathematical model of a single phase induction motor is developed. A new Simulink model for a neural network-controlled bidirectional chopper fed single phase induction motor is proposed. Under normal operation, the true drive parameters are identified in real time and converted into the controller parameters through multilayer forward computation by neural networks. A comparative study has been made between the conventional and neural network controllers. It is observed that the neural network controlled drive system has better dynamic performance, reduced overshoot and faster transient response than the conventionally controlled system.

  18. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology.

  19. Neural networks for sensor validation and plant-wide monitoring

    International Nuclear Information System (INIS)

    Eryurek, E.

    1991-08-01

    The feasibility of using neural networks to characterize one or more variables as a function of other related variables has been studied. Neural network or parallel distributed processing is found to be highly suitable for the development of relationships among various parameters. Sensor failure detection is studied, and it is shown that neural network models can be used to estimate the sensor readings during the absence of a sensor. (author). 4 refs.; 3 figs

  20. Optimum Neural Network Architecture for Precipitation Prediction of Myanmar

    OpenAIRE

    Khaing Win Mar; Thinn Thu Naing

    2008-01-01

    Nowadays, precipitation prediction is required for proper planning and management of water resources. Prediction with neural network models has received increasing interest in various research and application domains. However, it is difficult to determine the best neural network architecture for prediction since it is not immediately obvious how many input or hidden nodes are used in the model. In this paper, neural network model is used as a forecasting tool. The major aim is to evaluate a s...

  1. Neural network tagging in a toy model

    International Nuclear Information System (INIS)

    Milek, Marko; Patel, Popat

    1999-01-01

    The purpose of this study is a comparison of the Artificial Neural Network approach to HEP analysis against traditional methods. A toy model used in this analysis consists of two types of particles defined by four generic properties. A number of 'events' were created according to the model using standard Monte Carlo techniques. Several fully connected, feed-forward, multi-layered Artificial Neural Networks were trained to tag the model events. The performance of each network was compared to the standard analysis mechanisms and significant improvement was observed
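
    A toy reconstruction of the setup, assuming the two particle types are Gaussian clusters in the four generic properties (the actual toy-model distributions are not public); a small feed-forward network is trained to tag the Monte Carlo events.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Monte Carlo "events": two particle types, four generic properties each,
# drawn from overlapping Gaussians invented for this illustration.
rng = np.random.default_rng(0)
n = 2000
type_a = rng.normal(loc=[0.0, 1.0, 0.5, -0.5], scale=1.0, size=(n, 4))
type_b = rng.normal(loc=[0.8, 0.2, -0.3, 0.4], scale=1.0, size=(n, 4))
X = np.vstack([type_a, type_b])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tagger = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
tagger.fit(X_tr, y_tr)
print("tagging accuracy on held-out events:", tagger.score(X_te, y_te))
```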

  2. Optimal Brain Surgeon on Artificial Neural Networks in

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Job, Jonas Hultmann; Klyver, Katrine

    2012-01-01

    It is shown how the procedure known as optimal brain surgeon can be used to trim and optimize artificial neural networks in nonlinear structural dynamics. Besides optimizing the neural network, and thereby minimizing computational cost in simulation, the surgery procedure can also serve as a quick...

  3. Neural networks and their application to nuclear power plant diagnosis

    International Nuclear Information System (INIS)

    Reifman, J.

    1997-01-01

    The authors present a survey of artificial neural network-based computer systems that have been proposed over the last decade for the detection and identification of component faults in thermal-hydraulic systems of nuclear power plants. The capabilities and advantages of applying neural networks as decision support systems for nuclear power plant operators and their inherent characteristics are discussed along with their limitations and drawbacks. The types of neural network structures used and their applications are described and the issues of process diagnosis and neural network-based diagnostic systems are identified. A total of thirty-four publications are reviewed

  4. Deep Neural Network-Based Chinese Semantic Role Labeling

    Institute of Scientific and Technical Information of China (English)

    ZHENG Xiaoqing; CHEN Jun; SHANG Guoqiang

    2017-01-01

    A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with its application on message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smart phone applications. Experiment results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance with minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications or applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.

  5. Iris double recognition based on modified evolutionary neural network

    Science.gov (United States)

    Liu, Shuai; Liu, Yuan-Ning; Zhu, Xiao-Dong; Huo, Guang; Liu, Wen-Tao; Feng, Jia-Kai

    2017-11-01

    Aiming at multicategory iris recognition under illumination and noise interference, this paper proposes a method of iris double recognition based on a modified evolutionary neural network. Histogram equalization and a Laplacian of Gaussian operator are used to process the iris to suppress illumination and noise interference, and a Haar wavelet converts the iris features to a binary feature encoding. The Hamming distance between the test iris and the template iris is calculated and compared with a classification threshold to determine the type of iris. If the iris cannot be distinguished in this way, a secondary recognition is needed. The connection weights of the back-propagation (BP) neural network are adaptively trained using the modified evolutionary neural network. The modified neural network is composed of particle swarm optimization with a mutation operator and a BP neural network. Experimental results on different iris libraries under different circumstances show that, under illumination and noise interference, the correct recognition rate of this algorithm is higher, the ROC curve is closer to the coordinate axis, the training and recognition times are shorter, and the stability and robustness are better.
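
    A sketch of the first, Hamming-distance stage of the double-recognition scheme on binary iris codes. The threshold value, code length and the hand-over rule to the secondary network are illustrative assumptions.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes."""
    code_a, code_b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    return np.count_nonzero(code_a ^ code_b) / code_a.size

def classify(test_code, template_codes, threshold=0.32):
    """First-stage decision; in the paper an evolutionary-trained BP
    network performs the secondary recognition when this stage fails."""
    distances = {label: hamming_distance(test_code, tpl)
                 for label, tpl in template_codes.items()}
    label, d = min(distances.items(), key=lambda kv: kv[1])
    if d <= threshold:
        return label                      # confidently matched in stage one
    return None                           # hand over to the secondary network

rng = np.random.default_rng(0)
templates = {"subject_1": rng.integers(0, 2, 256),
             "subject_2": rng.integers(0, 2, 256)}
probe = templates["subject_1"].copy()
probe[rng.choice(256, 20, replace=False)] ^= 1          # flip 20 noisy bits
print(classify(probe, templates))
```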

  6. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    A deep learning approach has been widely applied in sequence modeling problems. In terms of automatic speech recognition (ASR), performance has been significantly improved by larger speech corpora and deeper neural networks. In particular, recurrent neural networks and deep convolutional neural networks have been applied successfully in ASR. Given the arising problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  7. Classification of Urinary Calculi using Feed-Forward Neural Networks

    African Journals Online (AJOL)

    NJD

    Genetic algorithms were used for optimization of neural networks and for selection of the ... Urinary calculi, infrared spectroscopy, classification, neural networks, variable ..... note that the best accuracy is obtained for whewellite, weddellite.

  8. ECO INVESTMENT PROJECT MANAGEMENT THROUGH TIME APPLYING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Tamara Gvozdenović

    2007-06-01

    Full Text Available The concept of project management expresses an indispensable approach to investment projects. Time is often the most important factor in these projects. An artificial neural network is a data processing paradigm inspired by the biological brain, and it is used in numerous different fields, among which is project management. This research is oriented to the application of artificial neural networks in managing the time of an investment project. The artificial neural networks are used to define the optimistic, the most probable and the pessimistic time in the PERT method. The program package Matlab: Neural Network Toolbox is used in data simulation. The feed-forward back propagation network is chosen.
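
    For reference, the classical PERT formulas that the network-estimated times would feed into; the three input durations below are hypothetical network outputs, not values from the study.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classical PERT expected duration and variance for one activity.

    In the approach described above, the three inputs would come from a
    trained feed-forward network rather than from expert judgement.
    """
    expected = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
    variance = ((pessimistic - optimistic) / 6.0) ** 2
    return expected, variance

# Hypothetical network outputs for one activity, in days
print(pert_estimate(4.0, 6.0, 10.0))    # -> (6.333..., 1.0)
```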

  9. Forecasting PM10 in metropolitan areas: Efficacy of neural networks

    International Nuclear Information System (INIS)

    Fernando, H.J.S.; Mammarella, M.C.; Grandoni, G.; Fedele, P.; Di Marco, R.; Dimitrova, R.; Hyde, P.

    2012-01-01

    Deterministic photochemical air quality models are commonly used for regulatory management and planning of urban airsheds. These models are complex, computer intensive, and hence are prohibitively expensive for routine air quality predictions. Stochastic methods are becoming increasingly popular as an alternative, which relegate decision making to artificial intelligence based on Neural Networks that are made of artificial neurons or ‘nodes’ capable of ‘learning through training’ via historic data. A Neural Network was used to predict particulate matter concentration at a regulatory monitoring site in Phoenix, Arizona; its development, efficacy as a predictive tool and performance vis-à-vis a commonly used regulatory photochemical model are described in this paper. It is concluded that Neural Networks are much easier, quicker and economical to implement without compromising the accuracy of predictions. Neural Networks can be used to develop rapid air quality warning systems based on a network of automated monitoring stations. Highlights: ► Neural Networks are an alternative technique to photochemical modelling. ► Neural Networks can be as effective as traditional air photochemical modelling. ► Neural Networks are much easier and quicker to implement in a health warning system. - Neural networks are as effective as photochemical modelling for air quality predictions, but are much easier, quicker and more economical to implement in air pollution (or health) warning systems.

  10. q-state Potts-glass neural network based on pseudoinverse rule

    International Nuclear Information System (INIS)

    Xiong Daxing; Zhao Hong

    2010-01-01

    We study the q-state Potts-glass neural network with the pseudoinverse (PI) rule. Its performance is investigated and compared with that of the counterpart network with the Hebbian rule instead. We find that there exists a critical point of q, i.e., q_cr = 14, below which the storage capacity and the retrieval quality can be greatly improved by introducing the PI rule. We show that the dynamics of the neural networks constructed with the two learning rules are quite different; however, regardless of the learning rule, in the q-state Potts-glass neural networks with q≥3 there is a common novel dynamical phase in which the spurious memories are completely suppressed. This property has never been noticed in the symmetric feedback neural networks. Freedom from spurious memories implies that the multistate Potts-glass neural networks would not be trapped in the metastable states, which is a favorable property for their applications.
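
    A sketch of the pseudoinverse (projection) learning rule for the familiar two-state case; the q-state Potts-glass network discussed above builds on the same projection idea with more general units, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, 15                                  # neurons, stored patterns
Xi = rng.choice([-1.0, 1.0], size=(N, p))       # binary patterns as columns

# Pseudoinverse (projection) rule: W projects onto the pattern subspace.
W = Xi @ np.linalg.pinv(Xi)
np.fill_diagonal(W, 0.0)                        # no self-coupling

def recall(state, steps=20):
    """Synchronous retrieval dynamics from a (possibly noisy) state."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0                 # break ties deterministically
    return state

noisy = Xi[:, 0].copy()
noisy[rng.choice(N, 10, replace=False)] *= -1   # corrupt 10 of 100 bits
print("overlap with stored pattern:", np.mean(recall(noisy) == Xi[:, 0]))
```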

  11. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  12. Global Robust Stability of Switched Interval Neural Networks with Discrete and Distributed Time-Varying Delays of Neural Type

    Directory of Open Access Journals (Sweden)

    Huaiqin Wu

    2012-01-01

    Full Text Available By combining the theories of switched systems and interval neural networks, the mathematical model of switched interval neural networks with discrete and distributed time-varying delays of neural type is presented. A set of interval parameter uncertainty neural networks with discrete and distributed time-varying delays of neural type are used as the individual subsystems, and an arbitrary switching rule is assumed to coordinate the switching between these networks. By applying the augmented Lyapunov-Krasovskii functional approach and linear matrix inequality (LMI) techniques, a delay-dependent criterion is obtained, in terms of LMIs, which ensures that such switched interval neural networks are globally asymptotically robustly stable. The unknown gain matrix is determined by solving these delay-dependent LMIs. Finally, an illustrative example is given to demonstrate the validity of the theoretical results.

  13. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  14. Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); S.M. Bohte (Sander)

    2016-01-01

    textabstractBiological neurons communicate with a sparing exchange of pulses - spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using only so very few spikes to communicate. Building on

  15. A recurrent neural network based on projection operator for extended general variational inequalities.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde

    2010-06-01

    Based on the projection operator, a recurrent neural network is proposed for solving extended general variational inequalities (EGVIs). Sufficient conditions are provided to ensure the global convergence of the proposed neural network based on Lyapunov methods. Compared with the existing neural networks for variational inequalities, the proposed neural network is a modified version of the general projection neural network existing in the literature and capable of solving the EGVI problems. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed neural network.

  16. Neutron spectrometry and dosimetry by means of evolutive neural networks

    International Nuclear Information System (INIS)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R.

    2008-01-01

    Artificial neural networks and genetic algorithms are two relatively new areas of research which have been the subject of growing interest in recent years. Both models are inspired by nature; however, neural networks are concerned with the learning of a single individual (phenotypic learning), while evolutionary algorithms are concerned with the adaptation of a population to a changing environment (genotypic learning). Recently, neural network technology has been applied with success in the nuclear sciences, mainly in the areas of neutron spectrometry and dosimetry. The structure (network topology) and the learning parameters of a neural network are factors that contribute significantly to its performance; however, it has been observed that investigators in this area select the network parameters by trial and error, which produces neural networks of poor performance and low generalization capacity. From the reviewed sources, it has been observed that evolutionary algorithms, seen as search techniques, make it possible to evolve and optimize different properties of neural networks, such as the initialization of the synaptic weights, the network architecture or the training algorithms, without human intervention. The objective of the present work is to analyze the intersection of neural networks and evolutionary algorithms, examining how the latter can help in the design and training of a neural network, that is, in the good selection of the structural and learning parameters, improving its generalization capacity so that it is able to reconstruct neutron spectra efficiently and to calculate equivalent doses from the count rates of a Bonner sphere

  17. Neural Network for Optimization of Existing Control Systems

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1995-01-01

    The purpose of this paper is to develop methods to use Neural Network based Controllers (NNC) as an optimization tool for existing control systems.

  18. assessment of neural networks performance in modeling rainfall ...

    African Journals Online (AJOL)

    Sholagberu

    neural network architecture for precipitation prediction of Myanmar, World Academy of Science, Engineering and Technology, 48, pp. 130-134. Kumarasiri, A.D. and Sonnadara, D.U.J. (2006). Rainfall forecasting: an artificial neural network approach, Proceedings of the Technical Sessions, 22, pp. 1-13, Institute of Physics ...

  19. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

  20. Runoff Calculation by Neural Networks Using Radar Rainfall Data

    OpenAIRE

    岡田, 晋作; 四俵, 正俊

    1997-01-01

    Neural networks are used to calculate runoff from weather radar data and ground rain gauge data. Compared to usual runoff models, it is easier to use radar data in neural network runoff calculation. Basically, the radar data can be used directly, without transforming them into rainfall, as the input of the neural network. A situation in which ground measurement is difficult is assumed. To cover the area lacking ground rain gauges, radar data are used. In case that the distribution of grou...

  1. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.

    Science.gov (United States)

    Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang

    2011-05-01

    A recurrent neural network is proposed for solving the non-smooth convex optimization problem with the convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, the Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network is convergent to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show the satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.

  2. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

    This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved day time, yearday, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality when compared with persistence, judged on specific performance measures (hit and false-alarm rates).

  3. Accident scenario diagnostics with neural networks

    International Nuclear Information System (INIS)

    Guo, Z.

    1992-01-01

    Nuclear power plants are very complex systems. The diagnosis of transients or accident conditions is very difficult because a large amount of information, which is often noisy, intermittent, or even incomplete, needs to be processed in real time. To demonstrate their potential application to nuclear power plants, neural networks are used to monitor the accident scenarios simulated by the training simulator of TVA's Watts Bar Nuclear Power Plant. A self-organization network is used to compress the original data to reduce the total number of training patterns. Different accident scenarios are closely related to different key parameters which distinguish one accident scenario from another. Therefore, the accident scenarios can be monitored by a set of small neural networks, called modular networks, each of which monitors only one assigned accident scenario, to obtain fast training and recall. Sensitivity analysis is applied to select proper input variables for the modular networks

  4. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    Notation: NN, neural network; net_i, weighted sum of the inputs of neuron i; o_k, network output at the kth output node; P, total number of training patterns; s_i, output of neuron i; t_k, target output at the kth output node. 1. Introduction: Severe storms occur in the Bay of Bengal ...; forecasting of runoff (Crespo and Mora, 1993); concrete strength (Kasperkiewicz et al., 1995). The uses of neural networks in the coastal ... the wave conditions will change from year to year, thus a proper statistical and climatological treatment requires several ...

  5. Kontrol Kecepatan Motor Induksi menggunakan Algoritma Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    MUHAMMAD RUSWANDI DJALAL

    2017-07-01

    Full Text Available ABSTRACT Many artificial intelligence-based control strategies, such as Fuzzy Logic and Artificial Neural Networks (ANN), have been proposed in research. The purpose of this research was to design a controller so that the speed of an induction motor can be regulated as required, and to compare the performance of the induction motor without and with control. In this research, an artificial neural network method is proposed to control the speed of a three-phase induction motor. The motor reference speed was set to 140 rad/s, 150 rad/s, and 130 rad/s. The speed changes were applied at intervals of 0.3 s, and the maximum simulation time was 0.9 s. Case 1, without control, shows the torque and speed response of the three-phase induction motor without a controller. Although the commanded speed of the three-phase induction motor changes every 0.3 s, this does not affect the torque; moreover, the uncontrolled three-phase induction motor performs poorly because its speed cannot be regulated as required. In Case 2, with the backpropagation neural network controller, the speed changes every 0.3 s likewise do not affect the torque, and the controller performs well because the induction motor speed can be regulated as required. Keywords: Backpropagation Neural Network (BPNN), NN Training, NN Testing, Motor.

  6. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  7. Optimization of blanking process using neural network simulation

    International Nuclear Information System (INIS)

    Hambli, R.

    2005-01-01

    The present work describes a methodology using the finite element method and neural network simulation in order to predict the optimum punch-die clearance during sheet metal blanking processes. A damage model is used in order to describe crack initiation and propagation into the sheet. The proposed approach combines predictive finite element and neural network modeling of the leading blanking parameters. Numerical results obtained by finite element computation, including damage and fracture modeling, were utilized to train the developed simulation environment based on back propagation neural network modeling. The comparative study between the numerical results and the experimental ones shows good agreement. (author)

  8. Beneficial role of noise in artificial neural networks

    International Nuclear Information System (INIS)

    Monterola, Christopher; Saloma, Caesar; Zapotocky, Martin

    2008-01-01

    We demonstrate enhancement of the efficacy of neural networks to recognize frequency-encoded signals and/or to categorize spatial patterns of neural activity as a result of noise addition. For temporal information recovery, noise directly added to the receiving neurons allows instantaneous improvement of the signal-to-noise ratio [Monterola and Saloma, Phys. Rev. Lett. 2002]. For spatial patterns, however, recurrence is necessary to extend and homogenize the operating range of a feed-forward neural network [Monterola and Zapotocky, Phys. Rev. E 2005]. Finally, using the size of the basin of attraction of the network's learned patterns (dynamical fixed points), a procedure for estimating the optimal noise is demonstrated

  9. Transient analysis for PWR reactor core using neural networks predictors

    International Nuclear Information System (INIS)

    Gueray, B.S.

    2001-01-01

    In this study, a transient analysis for a Pressurized Water Reactor core has been performed. A lumped parameter approximation is preferred for that purpose, to describe the reactor core together with the mechanisms which play an important role in dynamic analysis. The dynamic behavior of the reactor core during transients is analyzed considering the transient initiating events, which are an essential part of Safety Analysis Reports. Several transients are simulated based on the employed core model. Simulation results are in accord with physical expectations. A neural network is developed to predict the future response of the reactor core in advance. The neural network is trained using the simulation results of a number of representative transients. The structure of the neural network is optimized by proper selection of transfer functions for the neurons. The trained neural network is used to predict the future responses following an early observation of the changes in system variables. The estimated behaviour using the neural network is in good agreement with the simulation results for various types of transients. Results of this study indicate that the designed neural network can be used as an estimator of the time-dependent behavior of the reactor core under transient conditions

  10. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed eSeyed-Allaei

    2015-03-01

    Full Text Available In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then I try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks of best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks with different parameters. This phase diagram gives an insight into the space of parameters -- excitatory to inhibitory ratio, sparseness of connections and synaptic weights. It may serve as a guideline for deciding on the values of parameters in a simulation of a spiking neural network.
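
    A minimal Python sketch (not from the paper) of how one such network parameterization might be set up, using only the "common values" quoted above; the network size and the synaptic weight magnitudes are assumptions added for illustration:

    import numpy as np

    rng = np.random.default_rng(42)

    n_neurons = 1000          # network size: an illustrative choice, not from the paper
    p_connect = 0.02          # 2% connection probability quoted in the abstract
    frac_inhibitory = 0.20    # 20% inhibitory, 80% excitatory
    w_exc, w_inh = 0.5, -2.0  # synaptic weight magnitudes: assumed values

    # Label each neuron excitatory (+1) or inhibitory (-1).
    sign = np.where(rng.random(n_neurons) < frac_inhibitory, -1.0, 1.0)

    # Sparse random connectivity: each directed pair is connected with probability p_connect.
    mask = rng.random((n_neurons, n_neurons)) < p_connect
    np.fill_diagonal(mask, False)

    # Outgoing weights take the sign of the presynaptic neuron (Dale's principle).
    weights = mask * np.where(sign[:, None] > 0, w_exc, w_inh)

    print("mean out-degree:", mask.sum(axis=1).mean())
    print("fraction inhibitory:", np.mean(sign < 0))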

  11. Neural networks, D0, and the SSC

    International Nuclear Information System (INIS)

    Barter, C.; Cutts, D.; Hoftun, J.S.; Partridge, R.A.; Sornborger, A.T.; Johnson, C.T.; Zeller, R.T.

    1989-01-01

    We outline several exploratory studies involving neural network simulations applied to pattern recognition in high energy physics. We describe the D0 data acquisition system and a natural means by which algorithms derived from neural network techniques may be incorporated into recently developed hardware associated with the D0 MicroVAX farm nodes. Such applications to the event filtering needed by SSC detectors look interesting. 10 refs., 11 figs

  12. Multi-modular neural networks for the classification of e+e- hadronic events

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Some multi-modular neural network methods of classifying e + e - hadronic events are presented. We compare the performances of the following neural networks: MLP (multilayer perceptron), MLP and LVQ (learning vector quantization) trained sequentially, and MLP and RBF (radial basis function) trained sequentially. We introduce a MLP-RBF cooperative neural network. Our last study is a multi-MLP neural network. (orig.)

  13. Linear programming based on neural networks for radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Xingen Wu; Limin Luo

    2000-01-01

    In this paper, we propose a neural network model for linear programming that is designed to optimize radiotherapy treatment planning (RTP). This kind of neural network can be easily implemented by using a kind of 'neural' electronic system in order to obtain an optimization solution in real time. We first give an introduction to the RTP problem and construct a non-constraint objective function for the neural network model. We adopt a gradient algorithm to minimize the objective function and design the structure of the neural network for RTP. Compared to traditional linear programming methods, this neural network model can reduce the time needed for convergence, the size of problems (i.e., the number of variables to be searched) and the number of extra slack and surplus variables needed. We obtained a set of optimized beam weights that result in a better dose distribution as compared to that obtained using the simplex algorithm under the same initial condition. The example presented in this paper shows that this model is feasible in three-dimensional RTP. (author)
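
    The abstract describes building an unconstrained objective for the beam weights and minimizing it with a gradient algorithm. The Python sketch below is a rough stand-in for that idea, not the paper's model: the dose-influence matrix A, the prescription d, the step size and the projection used to keep beam weights nonnegative are all assumptions added for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dose-influence matrix: dose[i] = sum_j A[i, j] * beam_weight[j].
    n_voxels, n_beams = 200, 8
    A = rng.uniform(0.0, 1.0, size=(n_voxels, n_beams))
    d = rng.uniform(40.0, 60.0, size=n_voxels)    # prescribed dose per voxel (made up)

    w = np.zeros(n_beams)                         # beam weights to be optimized
    lr = 1e-4                                     # gradient step size

    # Gradient descent on the least-squares surrogate ||A w - d||^2, with
    # nonnegative beam weights enforced by projection after every step.
    for _ in range(5000):
        grad = 2.0 * A.T @ (A @ w - d)
        w = np.maximum(w - lr * grad, 0.0)

    print("optimized beam weights:", np.round(w, 3))
    print("RMS dose error:", np.sqrt(np.mean((A @ w - d) ** 2)))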

  14. Chaotic Hopfield Neural Network Swarm Optimization and Its Application

    Directory of Open Access Journals (Sweden)

    Yanxia Sun

    2013-01-01

    Full Text Available A new neural network based optimization algorithm is proposed. The presented model is a discrete-time, continuous-state Hopfield neural network and the states of the model are updated synchronously. The proposed algorithm combines the advantages of traditional PSO, chaos and Hopfield neural networks: particles learn from their own experience and the experiences of surrounding particles, their search behavior is ergodic, and convergence of the swarm is guaranteed. The effectiveness of the proposed approach is demonstrated using simulations and typical optimization problems.

  15. Sequential and parallel image restoration: neural network implementations.

    Science.gov (United States)

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high dimension convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
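
    For reference, the convex objective referred to above (a quadratic data term for linear blur with additive white Gaussian noise, plus a quadratic smoothness prior) can be minimized directly by gradient iterations. The Python sketch below illustrates that objective on a toy 1-D signal; it is not the paper's Hopfield-network implementation, and the blur kernel, regularization weight and step size are assumed values.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D "image" degraded by a linear blur H and additive white Gaussian noise.
    n = 128
    x_true = np.zeros(n)
    x_true[40:80] = 1.0
    kernel = (0.25, 0.5, 0.25)
    H = sum(c * np.eye(n, k=k) for k, c in zip((-1, 0, 1), kernel))   # blur as a matrix
    y = H @ x_true + 0.01 * rng.normal(size=n)

    # MAP / regularized objective: ||y - H x||^2 + lam * ||D x||^2, D = first difference.
    D = np.eye(n) - np.eye(n, k=1)
    lam = 0.05

    x = np.zeros(n)
    step = 0.2
    for _ in range(2000):
        grad = 2.0 * H.T @ (H @ x - y) + 2.0 * lam * D.T @ (D @ x)
        x -= step * grad

    print("restoration RMSE:", np.sqrt(np.mean((x - x_true) ** 2)))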

  16. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics which are grouped together into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and  bio-inspired memristor-based networks.  Providing insights into the latest research interest from a pool of international experts coming from different research fields, the volume becomes valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  17. Diagnostic Neural Network Systems for the Electronic Circuits

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2014-01-01

    Neural networks are one of the most important artificial intelligence approaches for solving diagnostic processes. This research concerns the use of neural networks for the diagnosis of electronic circuits. Modern electronic systems contain both analog and digital circuits, but diagnosis of the analog circuits suffers from great complexity due to their nonlinearity. To overcome this problem, the proposed system introduces a diagnostic system that uses a neural network to diagnose both digital and analog circuits, so it can meet the requirements of modern electronic systems. A fault dictionary method was implemented in the system. Experimental results are presented on three electronic systems: an artificial kidney, a wireless network and a personal computer system. The proposed system has improved the performance of the diagnostic systems when applied to these practical cases

  18. Reducing Wind Tunnel Data Requirements Using Neural Networks

    Science.gov (United States)

    Ross, James C.; Jorgenson, Charles C.; Norgaard, Magnus

    1997-01-01

    The use of neural networks to minimize the amount of data required to completely define the aerodynamic performance of a wind tunnel model is examined. The accuracy requirements for commercial wind tunnel test data are very severe and are difficult to reproduce using neural networks. For the current work, multiple-input, single-output networks were trained using a Levenberg-Marquardt algorithm for each of the aerodynamic coefficients. When applied to the aerodynamics of a 55% scale model of a U.S. Air Force/NASA generic fighter configuration, this scheme provided accurate models of the lift, drag, and pitching-moment coefficients. Using only 50% of the data acquired during the wind tunnel test, the trained neural network had a predictive accuracy equal to or better than the accuracy of the experimental measurements.
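
    A minimal Python sketch of the training setup described above: a multiple-input, single-output network for one aerodynamic coefficient fitted with a Levenberg-Marquardt solver (here SciPy's least_squares with method="lm"). The synthetic angle-of-attack/Mach data and the network size are assumptions for illustration; they are not the wind tunnel data used in the paper.

    import numpy as np
    from scipy.optimize import least_squares

    # Synthetic stand-in data: [angle of attack (deg), Mach] -> lift coefficient.
    rng = np.random.default_rng(0)
    X = rng.uniform([-5.0, 0.3], [20.0, 0.9], size=(200, 2))
    y = 0.1 * X[:, 0] * (1.0 - 0.3 * X[:, 1]) + 0.02 * rng.normal(size=200)

    n_in, n_hidden = 2, 8   # one small multiple-input, single-output network

    def unpack(p):
        i = 0
        W1 = p[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
        b1 = p[i:i + n_hidden]; i += n_hidden
        W2 = p[i:i + n_hidden]; i += n_hidden
        return W1, b1, W2, p[i]

    def residuals(p):
        W1, b1, W2, b2 = unpack(p)
        pred = np.tanh(X @ W1 + b1) @ W2 + b2
        return pred - y

    p0 = 0.1 * rng.normal(size=n_in * n_hidden + 2 * n_hidden + 1)
    fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt training
    print("final RMS error:", np.sqrt(np.mean(fit.fun ** 2)))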

  19. UNMANNED AIR VEHICLE STABILIZATION BASED ON NEURAL NETWORK REGULATOR

    Directory of Open Access Journals (Sweden)

    S. S. Andropov

    2016-09-01

    Full Text Available The problem of stabilizing a multirotor unmanned aerial vehicle in an environment with external disturbances is studied. A classic proportional-integral-derivative controller is analyzed and its flaws are outlined: the inability to respond to changing external conditions and the need for manual adjustment of coefficients. The paper presents an adaptive adjustment method for the coefficients of the proportional-integral-derivative controller based on neural networks. The neural network structure and its input and output data are described. Neural networks with three layers are used to create an adaptive stabilization system for the multirotor unmanned aerial vehicle. Training of the networks is done with the back propagation method. Each neural network produces regulator coefficients for each angle of stabilization as its output. A method for network training is explained. Several graphs of the transition process at different stages of learning, including processes with external disturbances, are presented. It is shown that the system meets stabilization requirements with a sufficient number of iterations. The described adjustment method for coefficients can be used in the remote control of unmanned aerial vehicles operating in a changing environment.
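
    A rough Python sketch of the structure described above: a small network maps the tracking error terms to PID gains, which then drive the controller. The toy plant, the network weights and the gain ranges below are placeholders invented for illustration; in the paper's scheme the network weights would be adapted online by backpropagation.

    import numpy as np

    rng = np.random.default_rng(3)

    # A small three-layer network mapping (error, error derivative, error integral)
    # to the PID gains. The weights are random placeholders, not trained values.
    W1 = rng.normal(scale=0.3, size=(3, 8))
    W2 = rng.normal(scale=0.3, size=(8, 3))

    def nn_gains(e, de, ie):
        h = np.tanh(np.array([e, de, ie]) @ W1)
        s = 1.0 / (1.0 + np.exp(-(h @ W2)))       # squash outputs to (0, 1)
        kp = 0.5 + 2.0 * s[0]                     # gain ranges are assumed, not from the paper
        ki = 0.05 + 0.5 * s[1]
        kd = 0.01 + 0.05 * s[2]
        return kp, ki, kd

    # Toy first-order plant x' = -x + u tracking a unit step reference.
    dt, x, ref = 0.01, 0.0, 1.0
    e_prev, e_int = ref - x, 0.0
    for _ in range(500):
        e = ref - x
        e_int += e * dt
        de = (e - e_prev) / dt
        kp, ki, kd = nn_gains(e, de, e_int)
        u = kp * e + ki * e_int + kd * de         # PID law with network-supplied gains
        x += dt * (-x + u)
        e_prev = e

    print("final tracking error:", round(ref - x, 4))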

  20. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  1. Optimization of multilayer neural network parameters for speaker recognition

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, which means that the voice of an unknown speaker (wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, i.e., to classify the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) needs parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by the parameter settings, and different roles require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find the parameters for the neural network with the highest precision and shortest validation time. The input data of the neural networks are Mel-frequency cepstral coefficients (MFCC), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing and validation sets of 70, 15 and 15%. The result of the research described in this article is a parameter setting for the multilayer neural network for four speakers.
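
    A small Python sketch of this kind of parameter comparison using scikit-learn. The 13-dimensional feature vectors below are random stand-ins for MFCC vectors of four speakers (real MFCCs would be extracted from the recorded audio); the grid of settings and the data split are illustrative choices, not the paper's.

    import time
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Stand-in data: 13-dimensional "MFCC" vectors for 4 speakers (random placeholders).
    rng = np.random.default_rng(0)
    n_per_speaker, n_mfcc = 300, 13
    X = np.vstack([rng.normal(loc=s, scale=1.0, size=(n_per_speaker, n_mfcc)) for s in range(4)])
    y = np.repeat(np.arange(4), n_per_speaker)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    # Compare a few parameter settings: activation, hidden-layer size, learning rate.
    for activation in ("logistic", "tanh", "relu"):
        for hidden in ((16,), (64,)):
            for lr in (1e-3, 1e-2):
                clf = MLPClassifier(hidden_layer_sizes=hidden, activation=activation,
                                    learning_rate_init=lr, max_iter=500, random_state=0)
                clf.fit(Xtr, ytr)
                t0 = time.perf_counter()
                acc = clf.score(Xte, yte)                 # validation accuracy
                val_time = time.perf_counter() - t0       # validation time
                print(f"{activation:8s} hidden={hidden} lr={lr:.3f} "
                      f"acc={acc:.3f} val_time={val_time * 1e3:.1f} ms")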

  2. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest of AI is recognition algorithms. In this paper, one of the most common algorithms, the Convolutional Neural Network (CNN), will be introduced for image recognition. Understanding its theory and structure is of great significance for every scholar who is interested in this field. A Convolutional Neural Network is an artificial neural network which combines the mathematical method of convolution with a neural network. The hierarchical structure of a CNN provides reliable computing speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, combining the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-learn and perform in-depth learning. Basically, BP provides backward feedback for enhancing reliability and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary in the end.
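
    A minimal PyTorch sketch of the pieces named above: convolution layers with shared weights for feature extraction, pooling for dimension reduction, a fully connected classifier, one backpropagation pass and one gradient-descent update. The layer sizes and the random stand-in batch are arbitrary choices for illustration.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # A minimal CNN: convolution for feature extraction (shared weights),
    # pooling for dimension reduction, and a fully connected classifier head.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),  # weight sharing across the image
        nn.ReLU(),
        nn.MaxPool2d(2),                            # dimension reduction
        nn.Conv2d(8, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 7 * 7, 10),
    )

    # Random stand-in batch (28x28 grayscale images with arbitrary labels), just to
    # show one backpropagation + gradient-descent step.
    images = torch.randn(32, 1, 28, 28)
    labels = torch.randint(0, 10, (32,))

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # plain gradient descent

    logits = model(images)            # forward pass
    loss = criterion(logits, labels)
    loss.backward()                   # backpropagation computes the gradients
    optimizer.step()                  # gradient-descent update of all weights
    print("loss after one update step:", loss.item())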

  3. Deep Neural Network Detects Quantum Phase Transition

    Science.gov (United States)

    Arai, Shunta; Ohzeki, Masayuki; Tanaka, Kazuyuki

    2018-03-01

    We detect the quantum phase transition of a quantum many-body system by mapping the observed results of the quantum state onto a neural network. In the present study, we utilized the simplest case of a quantum many-body system, namely a one-dimensional chain of Ising spins with the transverse Ising model. We prepared several spin configurations, which were obtained using repeated observations of the model for a particular strength of the transverse field, as input data for the neural network. Although the proposed method can be employed using experimental observations of quantum many-body systems, we tested our technique with spin configurations generated by a quantum Monte Carlo simulation without initial relaxation. The neural network successfully identified the strength of the transverse field only from the spin configurations, leading to consistent estimations of the critical point of our model Γc = J.

  4. Eddy Current Flaw Characterization Using Neural Networks

    International Nuclear Information System (INIS)

    Song, S. J.; Park, H. J.; Shin, Y. K.

    1998-01-01

    Determination of the location, shape and size of a flaw from its eddy current testing signal is one of the fundamental issues in eddy current nondestructive evaluation of steam generator tubes. Here, we propose an approach to this problem: an inversion of the eddy current flaw signal using neural networks trained by finite element model-based synthetic signatures. A total of 216 eddy current signals from four different types of axisymmetric flaws in tubes are generated by finite element models whose accuracy is experimentally validated. From each simulated signature, a total of 24 eddy current features are extracted, and among them 13 features are finally selected for flaw characterization. Based on these features, probabilistic neural networks discriminate flaws into four different types according to location and shape, and subsequently back propagation neural networks determine the size parameters of the discriminated flaw

  5. Liquefaction Microzonation of Babol City Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Choobbasti, A.J.; Barari, Amin

    2012-01-01

    that will be less susceptible to damage during earthquakes. The scope of present study is to prepare the liquefaction microzonation map for the Babol city based on Seed and Idriss (1983) method using artificial neural network. Artificial neural network (ANN) is one of the artificial intelligence (AI) approaches...... microzonation map is produced for research area. Based on the obtained results, it can be stated that the trained neural network is capable in prediction of liquefaction potential with an acceptable level of confidence. At the end, zoning of the city is carried out based on the prediction of liquefaction...... that can be classified as machine learning. Simplified methods have been practiced by researchers to assess nonlinear liquefaction potential of soil. In order to address the collective knowledge built-up in conventional liquefaction engineering, an alternative general regression neural network model...

  6. Higher-order neural network software for distortion invariant object recognition

    Science.gov (United States)

    Reid, Max B.; Spirkovska, Lilly

    1991-01-01

    The state of the art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network and does not have to be learned; only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.

  7. Neural networks and their potential application in nuclear power plants

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1991-01-01

    A neural network is a data processing system consisting of a number of simple, highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks have emerged in the past few years as an area of unusual opportunity for research, development and application to a variety of real world problems. Indeed, neural networks exhibit characteristics and capabilities not provided by any other technology. Examples include reading Japanese Kanji characters and human handwriting, reading a typewritten manuscript aloud, compensating for alignment errors in robots, interpreting very noisy signals (e.g., electroencephalograms), modeling complex systems that cannot be modeled mathematically, and predicting whether proposed loans will be good or will fail. This paper presents a brief tutorial on neural networks and describes research on their potential applications to nuclear power plants

  8. Radial basis function neural network for power system load-flow

    International Nuclear Information System (INIS)

    Karami, A.; Mohammadi, M.S.

    2008-01-01

    This paper presents a method for solving the load-flow problem of the electric power systems using radial basis function (RBF) neural network with a fast hybrid training method. The main idea is that some operating conditions (values) are needed to solve the set of non-linear algebraic equations of load-flow by employing an iterative numerical technique. Therefore, we may view the outputs of a load-flow program as functions of the operating conditions. Indeed, we are faced with a function approximation problem and this can be done by an RBF neural network. The proposed approach has been successfully applied to the 10-machine and 39-bus New England test system. In addition, this method has been compared with that of a multi-layer perceptron (MLP) neural network model. The simulation results show that the RBF neural network is a simpler method to implement and requires less training time to converge than the MLP neural network. (author)
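
    A compact Python sketch of an RBF network with the kind of fast hybrid training described above: unsupervised selection of the hidden-layer centers (k-means here) followed by a linear least-squares solve for the output weights. The operating-condition data below are made-up placeholders, not load-flow results for the New England system.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical stand-in data: operating conditions -> one load-flow output (e.g. a bus voltage).
    rng = np.random.default_rng(1)
    X = rng.uniform(0.8, 1.2, size=(300, 4))          # e.g. load levels at 4 buses (made up)
    y = 1.0 - 0.05 * X.sum(axis=1) + 0.01 * rng.normal(size=300)

    # Hybrid training step 1: unsupervised center selection.
    centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
    width = np.mean(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2))

    def rbf_design(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        Phi = np.exp(-d2 / (2.0 * width ** 2))
        return np.hstack([Phi, np.ones((len(X), 1))])  # add a bias column

    # Hybrid training step 2: linear least squares for the output weights.
    w, *_ = np.linalg.lstsq(rbf_design(X), y, rcond=None)
    print("training RMSE:", np.sqrt(np.mean((rbf_design(X) @ w - y) ** 2)))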

  9. A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.

    Science.gov (United States)

    Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen

    2018-03-01

    In this paper, based on calculus and penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization. It is proved that for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and converges to an optimal solution of the constrained complex-variable convex optimization finally. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has a lower model complexity and better convergence. Some numerical examples and application are presented to substantiate the effectiveness of the proposed neural network.

  10. Hidden Neural Networks: A Framework for HMM/NN Hybrids

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric; Krogh, Anders Stærmose

    1997-01-01

    This paper presents a general framework for hybrids of hidden Markov models (HMM) and neural networks (NN). In the new framework called hidden neural networks (HNN) the usual HMM probability parameters are replaced by neural network outputs. To ensure a probabilistic interpretation the HNN is nor...... HMMs on TIMIT continuous speech recognition benchmarks. On the task of recognizing five broad phoneme classes an accuracy of 84% is obtained compared to 76% for a standard HMM. Additionally, we report a preliminary result of 69% accuracy on the TIMIT 39 phoneme task...

  11. Optical Neural Network Classifier Architectures

    National Research Council Canada - National Science Library

    Getbehead, Mark

    1998-01-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and classification of high-dimensional data for Air...

  12. Daily Nigerian peak load forecasting using artificial neural network ...

    African Journals Online (AJOL)

    A daily peak load forecasting technique that uses artificial neural network with seasonal indices is presented in this paper. A neural network of relatively smaller size than the main prediction network is used to predict the daily peak load for a period of one year over which the actual daily load data are available using one ...

  13. Neural Network Models for Time Series Forecasts

    OpenAIRE

    Tim Hill; Marcus O'Connor; William Remus

    1996-01-01

    Neural networks have been advocated as an alternative to traditional statistical forecasting methods. In the present experiment, time series forecasts produced by neural networks are compared with forecasts from six statistical time series methods generated in a major forecasting competition (Makridakis et al. [Makridakis, S., A. Anderson, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, R. Winkler. 1982. The accuracy of extrapolation (time series) methods: Results of a ...

  14. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  15. Neural network training by Kalman filtering in process system monitoring

    International Nuclear Information System (INIS)

    Ciftcioglu, Oe.

    1996-03-01

    A Kalman filtering approach for neural network training is described. Its extended form is used as an adaptive filter in a nonlinear environment in the form of a feedforward neural network. The Kalman filtering approach generally provides fast training as well as avoiding excessive learning, which results in enhanced generalization capability. The network is used in a process monitoring application where the inputs are measurement signals. Since the measurement errors are also modelled in the Kalman filter, the approach yields accurate training, with the implication of an accurate neural network model representing the input and output relationships in the application. As the process of concern is a dynamic system, the input source of information to the neural network is time dependent, so the training algorithm takes an adaptive form for real-time operation in the monitoring task. (orig.)

  16. Dynamical networks: Finding, measuring, and tracking neural population activity using network science

    Directory of Open Access Journals (Sweden)

    Mark D. Humphries

    2017-12-01

    Full Text Available Systems neuroscience is in a headlong rush to record from as many neurons at the same time as possible. As the brain computes and codes using neuron populations, it is hoped these data will uncover the fundamentals of neural computation. But with hundreds, thousands, or more simultaneously recorded neurons come the inescapable problems of visualizing, describing, and quantifying their interactions. Here I argue that network science provides a set of scalable, analytical tools that already solve these problems. By treating neurons as nodes and their interactions as links, a single network can visualize and describe an arbitrarily large recording. I show that with this description we can quantify the effects of manipulating a neural circuit, track changes in population dynamics over time, and quantitatively define theoretical concepts of neural populations such as cell assemblies. Using network science as a core part of analyzing population recordings will thus provide both qualitative and quantitative advances to our understanding of neural computation.
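
    A brief Python sketch of the node-and-link description advocated here, using NetworkX: synthetic spike counts stand in for a population recording, pairwise correlations above a threshold become weighted links, and a standard community-detection routine recovers the built-in groups. The data, threshold and group structure are all invented for illustration.

    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a population recording: spike counts of 60 neurons in
    # 500 time bins, with two built-in co-active groups ("cell assemblies").
    n_neurons, n_bins = 60, 500
    shared_drive = np.repeat(rng.poisson(2.0, size=(2, n_bins)), 30, axis=0)
    counts = rng.poisson(1.0, size=(n_neurons, n_bins)) + shared_drive

    # Treat neurons as nodes and strong pairwise correlations as weighted links.
    corr = np.corrcoef(counts)
    G = nx.Graph()
    G.add_nodes_from(range(n_neurons))
    for i in range(n_neurons):
        for j in range(i + 1, n_neurons):
            if corr[i, j] > 0.2:                  # link threshold: an arbitrary choice
                G.add_edge(i, j, weight=corr[i, j])

    print("links:", G.number_of_edges())
    print("mean degree:", 2 * G.number_of_edges() / n_neurons)
    groups = greedy_modularity_communities(G, weight="weight")
    print("detected group sizes:", [len(g) for g in groups])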

  17. Modular Neural Tile Architecture for Compact Embedded Hardware Spiking Neural Network

    NARCIS (Netherlands)

    Pande, Sandeep; Morgan, Fearghal; Cawley, Seamus; Bruintjes, Tom; Smit, Gerardus Johannes Maria; McGinley, Brian; Carrillo, Snaider; Harkin, Jim; McDaid, Liam

    2013-01-01

    Biologically-inspired packet switched network on chip (NoC) based hardware spiking neural network (SNN) architectures have been proposed as an embedded computing platform for classification, estimation and control applications. Storage of large synaptic connectivity (SNN topology) information in

  18. Fundamental study of interpretation technique for 3-D magnetotelluric data using neural networks; Neural network wo mochiita sanjigen MT ho data kaishaku gijutsu no kisoteki kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, T; Fukuoka, K; Shima, H [Oyo Corp., Tokyo (Japan); Mogi, T [Kyushu University, Fukuoka (Japan). Faculty of Engineering; Spichak, V

    1997-05-27

    Research and development have been conducted to apply neural networks to an interpretation technique for 3-D MT data. In this study, a database of various data was made from the numerical modeling of a 3-D fault model, and a database management system was constructed. In addition, an unsupervised neural network for treating noise and a supervised neural network for estimating fault parameters such as dip, strike and specific resistance were made, and a basic neural network system was constructed. As a result of the application to the various data, basically sufficient performance for estimating the fault parameters was confirmed. Thus, the optimum MT data for this system were selected. In the future, it is necessary to investigate the optimum model and the number of models for training these neural networks. 3 refs., 5 figs., 2 tabs.

  19. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520
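
    A serial Python sketch of the Adaboost-BP idea: SAMME-style boosting of 15 small MLP "weak" networks whose weighted votes form the strong classifier. It is a stand-in for, not a reproduction of, the paper's MapReduce-parallelized system; weighted resampling is used because scikit-learn's MLPClassifier does not accept sample weights, and the digits dataset replaces the image features used in the paper.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    n = len(Xtr)
    w = np.full(n, 1.0 / n)                   # AdaBoost sample weights
    learners, alphas = [], []
    K = len(np.unique(y))

    for m in range(15):                       # 15 weak BP networks, as in the abstract
        idx = rng.choice(n, size=n, p=w)      # weighted resampling stands in for weighted fitting
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=m)
        clf.fit(Xtr[idx], ytr[idx])
        pred = clf.predict(Xtr)
        err = np.clip(np.sum(w * (pred != ytr)) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = np.log((1 - err) / err) + np.log(K - 1)   # SAMME learner weight
        w *= np.exp(alpha * (pred != ytr))
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)

    # Weighted vote of the 15 networks forms the strong classifier.
    votes = np.zeros((len(Xte), K))
    for clf, a in zip(learners, alphas):
        votes[np.arange(len(Xte)), clf.predict(Xte)] += a
    print("ensemble accuracy:", np.mean(votes.argmax(axis=1) == yte))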

  20. A one-layer recurrent neural network for constrained nonconvex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2015-01-01

    In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any neural state of the proposed neural network is convergent to its equilibrium point set which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performances of the proposed neural network.

  1. The networks scale and coupling parameter in synchronization of neural networks with diluted synapses

    International Nuclear Information System (INIS)

    Li Yanlong; Ma Jun; Chen Yuhong; Xu Wenke; Wang Yinghai

    2008-01-01

    In this paper, the influence of the network scale on the coupling parameter in the synchronization of neural networks with diluted synapses is investigated. Using numerical simulations, an exponential decay form is observed in the extreme case of global coupling among networks and full connection within each network; the larger the linked degree becomes, the larger the critical coupling intensity becomes; and oscillation phenomena are found in the relationship between the critical coupling intensity and the number of neural network layers in the case of small-scale networks

  2. Neural Network-Based Resistance Spot Welding Control and Quality Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D., Jr.; Ivezic, N.D.; Zacharia, T.

    1999-07-10

    This paper describes the development and evaluation of neural network-based systems for industrial resistance spot welding process control and weld quality assessment. The developed systems utilize recurrent neural networks for process control and both recurrent networks and static networks for quality prediction. The first section describes a system capable of both welding process control and real-time weld quality assessment. The second describes the development and evaluation of a static neural network-based weld quality assessment system that relied on experimental design to limit the influence of environmental variability. Relevant data analysis methods are also discussed. The weld classifier resulting from the analysis successfully balances predictive power and simplicity of interpretation. The results presented for both systems demonstrate clearly that neural networks can be employed to address two significant problems common to the resistance spot welding industry: control of the process itself and non-destructive determination of resulting weld quality.

  3. Adaptive Filtering Using Recurrent Neural Networks

    Science.gov (United States)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  4. Rotation Invariance Neural Network

    OpenAIRE

    Li, Shiyuan

    2017-01-01

    Rotation invariance and translation invariance have great value in image recognition tasks. In this paper, we introduce a new architecture for convolutional neural networks (CNN), named the cyclic convolutional layer, to achieve rotation invariance in 2-D symbol recognition. We can also obtain the position and orientation of the 2-D symbol from the network, to achieve detection of multiple non-overlapping targets. Last but not least, this architecture can achieve one-shot learning in some cases using thos...

  5. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  6. SORN: a self-organizing recurrent neural network

    Directory of Open Access Journals (Sweden)

    Andreea Lazar

    2009-10-01

    Full Text Available Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.

  7. Detecting atrial fibrillation by deep convolutional neural networks.

    Science.gov (United States)

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT, presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
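
    As a small illustration of the preprocessing step described above, the Python sketch below turns a 5-second, synthetic ECG-like segment into the 2-D time-frequency matrix that such a convolutional network would take as input, using SciPy's short-time Fourier transform. The sampling rate, window length and the toy signal are assumptions, not the paper's settings.

    import numpy as np
    from scipy.signal import stft

    fs = 300                                      # sampling rate in Hz (assumed)
    t = np.arange(0, 5.0, 1.0 / fs)

    # Synthetic stand-in for a 5-second ECG segment: a periodic spike train plus noise.
    rng = np.random.default_rng(0)
    ecg = np.zeros_like(t)
    ecg[::int(0.8 * fs)] = 1.0                    # crude "R peaks" every 0.8 s
    ecg += 0.05 * rng.normal(size=t.size)

    # The short-term Fourier transform turns the 1-D segment into a 2-D
    # time-frequency matrix suitable as input to a deep convolutional network.
    freqs, frames, Z = stft(ecg, fs=fs, nperseg=64, noverlap=32)
    spectrogram = np.abs(Z)

    print("2-D input shape (frequency bins x time frames):", spectrogram.shape)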

  8. Modelling and Prediction of Photovoltaic Power Output Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Aminmohammad Saberian

    2014-01-01

    Full Text Available This paper presents a solar power modelling method using artificial neural networks (ANNs). Two neural network structures, namely, a general regression neural network (GRNN) and a feedforward back propagation (FFBP) network, have been used to model a photovoltaic panel's output power and approximate the generated power. Both neural networks have four inputs and one output. The inputs are maximum temperature, minimum temperature, mean temperature, and irradiance; the output is the power. The data used in this paper span January 1, 2006, to December 31, 2010. The five years of data were split into two parts: 2006-2008 and 2009-2010; the first part was used for training and the second part for testing the neural networks. A mathematical equation is used to estimate the generated power. In the end, both networks showed good modelling performance; however, FFBP showed better performance compared with GRNN.
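
    A minimal Python sketch of the GRNN part: a general regression neural network is essentially a kernel-weighted average of the training targets, here with the paper's four inputs (maximum, minimum and mean temperature, and irradiance). The training data, the target relation and the spread parameter are fabricated placeholders, not the 2006-2010 measurements used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Fabricated training data with the four inputs used in the paper:
    # [max temperature, min temperature, mean temperature, irradiance] -> power.
    X_train = np.column_stack([
        rng.uniform(25, 38, 500),     # max temperature (degC)
        rng.uniform(15, 26, 500),     # min temperature (degC)
        rng.uniform(20, 32, 500),     # mean temperature (degC)
        rng.uniform(0, 1000, 500),    # irradiance (W/m^2)
    ])
    y_train = 0.15 * X_train[:, 3] * (1 - 0.004 * (X_train[:, 2] - 25)) + rng.normal(0, 5, 500)

    def grnn_predict(x, X, y, sigma=0.5):
        """GRNN prediction = Gaussian-kernel-weighted average of training targets."""
        mu, sd = X.mean(axis=0), X.std(axis=0)    # scale features so one spread fits all inputs
        d2 = ((((x - mu) / sd) - ((X - mu) / sd)) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y) / w.sum()

    x_new = np.array([33.0, 22.0, 27.0, 750.0])
    print("predicted output power (arbitrary units):", round(grnn_predict(x_new, X_train, y_train), 2))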

  9. Research on Fault Diagnosis Method Based on Rule Base Neural Network

    Directory of Open Access Journals (Sweden)

    Zheng Ni

    2017-01-01

    Full Text Available The relationship between fault phenomena and fault causes is always nonlinear, which influences the accuracy of fault location, and neural networks are effective in dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of the BP neural network is built and the learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods for the node function and membership function are also given. Simulation results confirm the effectiveness of this method.

  10. Nonlinear identification of process dynamics using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.F.; Chong, K.T.

    1992-01-01

    In this paper the nonlinear identification of process dynamics encountered in nuclear power plant components is addressed, in an input-output sense, using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the model structure to be identified. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard backpropagation learning algorithm is modified, and it is used for the supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The response of representative steam generator is predicted using a neural network, and it is compared to the response obtained from a sophisticated computer model based on first principles. The transient responses compare well, although further research is warranted to determine the predictive capabilities of these networks during more severe operational transients and accident scenarios

  11. Global dissipativity of continuous-time recurrent neural networks with time delay

    International Nuclear Information System (INIS)

    Liao Xiaoxin; Wang Jun

    2003-01-01

    This paper addresses the global dissipativity of a general class of continuous-time recurrent neural networks. First, the concepts of global dissipation and global exponential dissipation are defined and elaborated. Next, the sets of global dissipativity and global exponentially dissipativity are characterized using the parameters of recurrent neural network models. In particular, it is shown that the Hopfield network and cellular neural networks with or without time delays are dissipative systems
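
    For reference, a common formulation consistent with this description is sketched below in LaTeX; both the delayed recurrent network model and the dissipativity notion are the standard ones and are an assumption here, since the abstract does not reproduce the paper's notation.

    % Standard delayed recurrent neural network model (assumed formulation):
    \[
      \dot{x}(t) = -D\,x(t) + A\,f\bigl(x(t)\bigr) + B\,f\bigl(x(t-\tau)\bigr) + u,
    \]
    % where D is a positive diagonal matrix, A and B are connection weight matrices,
    % f is the activation function, u is a constant input and \tau is the delay.
    % Global dissipativity then means there is a compact set S that eventually
    % absorbs every trajectory:
    \[
      \exists\, S \subset \mathbb{R}^n \ \text{compact s.t. } \forall x_0 \ \exists\, T(x_0) > 0:\;
      x(t; x_0) \in S \quad \text{for all } t \ge T(x_0).
    \]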

  12. An artifical neural network for detection of simulated dental caries

    Energy Technology Data Exchange (ETDEWEB)

    Kositbowornchai, S. [Khon Kaen Univ. (Thailand). Dept. of Oral Diagnosis; Siriteptawee, S.; Plermkamon, S.; Bureerat, S. [Khon Kaen Univ. (Thailand). Dept. of Mechanical Engineering; Chetchotsak, D. [Khon Kaen Univ. (Thailand). Dept. of Industrial Engineering

    2006-08-15

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charge-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was the Learning Vector Quantization (LVQ) used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphic user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' an artificial neural network. After the 'training' process, a separate test set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using diagnostic tests. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth-detection by the CCD camera and digital radiography was 58 and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make correct interpretations of dental caries. (orig.)

  13. An artificial neural network for detection of simulated dental caries

    International Nuclear Information System (INIS)

    Kositbowornchai, S.; Siriteptawee, S.; Plermkamon, S.; Bureerat, S.; Chetchotsak, D.

    2006-01-01

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charge-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was Learning Vector Quantization (LVQ), used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphical user interface (GUI) screen developed in Matlab. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and from digital radiography, were used to train the artificial neural network. After the training process, a separate test set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using diagnostic test statistics. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77 (0.68-0.85)/0.85 (0.75-0.92) and 0.81 (0.72-0.88)/0.93 (0.84-0.97), respectively. The accuracy of caries depth detection by the CCD camera and digital radiography was 58% and 40%, respectively. Conclusions: The neural network model used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make correct interpretations of dental caries. (orig.)

  14. Application of improved PSO-RBF neural network in the synthetic ammonia decarbonization

    Directory of Open Access Journals (Sweden)

    Yongwei LI

    2017-12-01

    Full Text Available The synthetic ammonia decarbonization is a typical complex industrial process, characterized by time variation, nonlinearity and uncertainty, for which an on-line control model is difficult to establish. An improved PSO-RBF neural network control algorithm is proposed to address the low precision and poor robustness encountered in the complex synthetic ammonia decarbonization process. The particle swarm optimization algorithm and the RBF neural network are combined: the improved particle swarm algorithm is used to optimize the RBF network's hidden-layer basis function centers and widths and the output-layer connection weights, yielding an RBF neural network model optimized by the improved PSO algorithm. The improved PSO-RBF neural network control model is applied to the key carbonization process and compared with a traditional fuzzy neural network. The simulation results show that the improved PSO-RBF neural network control method used in the synthetic ammonia decarbonization process has higher control accuracy and system robustness, providing an effective way to address the modeling and optimization control of a complex industrial process.
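
    As a rough illustration of the PSO-RBF idea, the sketch below uses a plain global-best PSO (not the paper's improved variant) to optimize the centers, widths and output weights of a small RBF network fitted to a toy one-dimensional process. All data, particle counts and coefficients are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy process data standing in for the decarbonization plant: y = f(u) + noise
    u = np.linspace(-1, 1, 80)
    y = np.sin(3 * u) + 0.05 * rng.normal(size=u.shape)

    n_centers = 6

    def rbf_predict(params, u):
        # params = [centers (6), widths (6), output weights (6)]
        c, s, w = params[:6], np.abs(params[6:12]) + 1e-3, params[12:18]
        phi = np.exp(-((u[:, None] - c[None, :]) ** 2) / (2 * s[None, :] ** 2))
        return phi @ w

    def fitness(params):
        return np.mean((rbf_predict(params, u) - y) ** 2)

    # Plain global-best PSO over the 18 RBF parameters (centers, widths, output weights)
    n_part, dim, iters = 30, 3 * n_centers, 200
    pos = rng.uniform(-1, 1, (n_part, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_part, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print("PSO-optimized RBF training MSE:", round(float(np.min(pbest_f)), 5))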

  15. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and its consistency is considered under a mild set of assumptions. A number of applications...

  16. Two-Stage Approach to Image Classification by Deep Neural Networks

    Science.gov (United States)

    Ososkov, Gennady; Goncharov, Pavel

    2018-02-01

    The paper demonstrates the advantages of deep learning networks over ordinary neural networks through comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data; the networks to be compared are then used as classifiers. The main effort when dealing with deep learning networks goes into the painstaking work of optimizing the structure of those networks and their components, such as activation functions and weights, as well as the procedures for minimizing the loss function, in order to improve performance and speed up learning. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional neural networks are also used to solve a practical problem of protein genetics on the example of durum wheat classification. The results of our comparative study demonstrate the clear advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.
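
    A minimal sketch of the two-stage idea, assuming a tiny dense autoencoder whose bottleneck code feeds a logistic classifier; the synthetic 16-dimensional "images", layer sizes and training settings are illustrative assumptions, not the paper's deep architectures.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy two-class "images" (flattened 16-dim vectors) standing in for the real data sets
    X0 = rng.normal(0.0, 1.0, (100, 16)); X1 = rng.normal(1.5, 1.0, (100, 16))
    X = np.vstack([X0, X1]); labels = np.array([0] * 100 + [1] * 100)

    def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

    # 1) Autoassociative network (autoencoder): 16 -> 4 -> 16, trained to reproduce its input
    W1 = rng.normal(0, 0.1, (16, 4)); W2 = rng.normal(0, 0.1, (4, 16)); lr = 0.01
    for _ in range(500):
        H = np.tanh(X @ W1)          # 4-dim code = "most informative features"
        Xr = H @ W2                  # reconstruction
        E = Xr - X
        dW2 = H.T @ E / len(X)
        dW1 = X.T @ ((E @ W2.T) * (1 - H ** 2)) / len(X)
        W1 -= lr * dW1; W2 -= lr * dW2

    # 2) Simple logistic classifier trained on the 4-dim codes
    Z = np.tanh(X @ W1)
    w = np.zeros(4); b = 0.0
    for _ in range(2000):
        p = sigmoid(Z @ w + b)
        g = p - labels
        w -= 0.1 * Z.T @ g / len(Z); b -= 0.1 * g.mean()

    acc = np.mean((sigmoid(Z @ w + b) > 0.5) == labels)
    print(f"classification accuracy on the autoencoder codes: {acc:.2f}")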

  17. Classification of Company Performance using Weighted Probabilistic Neural Network

    Science.gov (United States)

    Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi

    2018-05-01

    The performance of a company can be classified by looking at its financial status, whether it is in a good or bad state. Classification of company performance can be achieved by parametric or non-parametric approaches; the neural network is one of the non-parametric methods. One Artificial Neural Network (ANN) model is the Probabilistic Neural Network (PNN). A PNN consists of four layers: an input layer, a pattern layer, a summation layer, and an output layer. The distance function used is the Euclidean distance, and each class shares the same weight values. In this study a PNN is used that has been modified in the weighting between the pattern layer and the summation layer by involving the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling the company's performance with the WPNN model yields a very high accuracy, reaching 100%.
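
    A minimal sketch of the WPNN idea as described: a probabilistic neural network whose pattern-layer kernel uses the Mahalanobis distance (per-class inverse covariance) in place of the Euclidean distance. The synthetic "financial ratio" data and the smoothing parameter are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy "financial ratio" vectors for two performance classes (good / bad)
    X0 = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], 80)
    X1 = rng.multivariate_normal([2, 1], [[1.0, -0.4], [-0.4, 1.0]], 80)
    train = {0: X0, 1: X1}
    sigma = 1.0  # smoothing parameter of the pattern layer

    # Per-class inverse covariance for the Mahalanobis kernel (the "weighting" step)
    inv_cov = {c: np.linalg.inv(np.cov(Xc, rowvar=False)) for c, Xc in train.items()}

    def pnn_classify(x):
        scores = {}
        for c, Xc in train.items():
            d = Xc - x
            # Mahalanobis distance replaces the Euclidean distance of a standard PNN
            m2 = np.einsum("ij,jk,ik->i", d, inv_cov[c], d)
            scores[c] = np.mean(np.exp(-m2 / (2 * sigma ** 2)))   # summation layer
        return max(scores, key=scores.get)                        # output layer

    test = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 20),
                      rng.multivariate_normal([2, 1], np.eye(2), 20)])
    truth = np.array([0] * 20 + [1] * 20)
    acc = np.mean([pnn_classify(x) == t for x, t in zip(test, truth)])
    print(f"held-out accuracy of the Mahalanobis-weighted PNN sketch: {acc:.2f}")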

  18. Forex Market Prediction Using NARX Neural Network with Bagging

    Directory of Open Access Journals (Sweden)

    Shahbazi Nima

    2016-01-01

    Full Text Available We propose a new method for predicting movements in the Forex market based on a NARX neural network with a time-shifting bagging technique and financial indicators, such as the relative strength index and stochastic indicators. Neural networks have prominent learning ability but often exhibit poor and unpredictable performance on noisy data. When compared with static neural networks, our method significantly reduces the error rate of the response and improves the performance of the prediction. We tested three different types of architecture for predicting the response and determined the best network approach. We applied our method to predicting hourly foreign exchange rates and found remarkable predictability in comprehensive experiments with two different foreign exchange rates (GBPUSD and EURUSD).
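
    The sketch below illustrates the general setup, assuming lagged endogenous and exogenous inputs (a NARX-style regressor) and a plain bootstrap bagging ensemble of small feed-forward nets whose predictions are averaged; it does not reproduce the paper's time-shifting bagging scheme or its indicators, and the synthetic series, lag order and ensemble size are assumptions.

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic exchange-rate-like series plus one exogenous "indicator" series
    n = 600
    x = np.cumsum(rng.normal(0, 0.1, n))                 # exogenous input (e.g. an indicator)
    y = np.zeros(n)
    for k in range(2, n):
        y[k] = 0.6 * y[k - 1] - 0.2 * y[k - 2] + 0.3 * np.tanh(x[k - 1]) + rng.normal(0, 0.02)

    # NARX regressor: predict y[k] from lagged y and lagged x
    lags = 3
    rows = [np.r_[y[k - lags:k], x[k - lags:k]] for k in range(lags, n)]
    X, T = np.array(rows), y[lags:]

    def train_small_net(Xb, Tb, hidden=8, epochs=300, lr=0.01, seed=0):
        r = np.random.default_rng(seed)
        W1, b1 = r.normal(0, 0.3, (Xb.shape[1], hidden)), np.zeros(hidden)
        w2, b2 = r.normal(0, 0.3, hidden), 0.0
        for _ in range(epochs):
            H = np.tanh(Xb @ W1 + b1)
            P = H @ w2 + b2
            e = P - Tb
            w2 -= lr * H.T @ e / len(Xb); b2 -= lr * e.mean()
            dH = np.outer(e, w2) * (1 - H ** 2)
            W1 -= lr * Xb.T @ dH / len(Xb); b1 -= lr * dH.mean(axis=0)
        return W1, b1, w2, b2

    # Bagging: train each net on a bootstrap resample and average the predictions
    nets = []
    for m in range(10):
        idx = rng.integers(0, len(X), len(X))
        nets.append(train_small_net(X[idx], T[idx], seed=m))

    def predict(Xq):
        preds = [np.tanh(Xq @ W1 + b1) @ w2 + b2 for (W1, b1, w2, b2) in nets]
        return np.mean(preds, axis=0)

    rmse = np.sqrt(np.mean((predict(X) - T) ** 2))
    print(f"in-sample RMSE of the bagged NARX sketch: {rmse:.4f}")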

  19. Determining the confidence levels of sensor outputs using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Broten, G S; Wood, H C [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Electrical Engineering

    1996-12-31

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

  20. Neural Network Based Model of an Industrial Oil-Fired Boiler System ...

    African Journals Online (AJOL)

    A two-layer feed-forward neural network with Hyperbolic tangent sigmoid ... The neural network model when subjected to test, using the validation input data; ... Proportional Integral Derivative (PID) Controller is used to control the neural ...

  1. Theoretical Properties for Neural Networks with Weight Matrices of Low Displacement Rank

    OpenAIRE

    Zhao, Liang; Liao, Siyu; Wang, Yanzhi; Li, Zhe; Tang, Jian; Pan, Victor; Yuan, Bo

    2017-01-01

    Recently low displacement rank (LDR) matrices, or so-called structured matrices, have been proposed to compress large-scale neural networks. Empirical results have shown that neural networks with weight matrices of LDR type, referred to as LDR neural networks, can achieve significant reductions in space and computational complexity while retaining high accuracy. We formally study LDR matrices in deep learning. First, we prove the universal approximation property of LDR neural networks with a ...

  2. Neural networks for predicting breeding values and genetic gains

    Directory of Open Access Journals (Sweden)

    Gabi Nunes Silva

    2014-12-01

    Full Text Available Analysis using Artificial Neural Networks has been described as an approach to the decision-making process that, although incipient, has been reported as having high potential for use in animal and plant breeding. In this study, we introduce the procedure of using an expanded data set for training the network. We also proposed using statistical parameters to estimate the breeding value of genotypes in simulated scenarios, in addition to the mean phenotypic value, in a feed-forward back-propagation multilayer perceptron network. After evaluating artificial neural network configurations, our results showed their superiority to estimates based on linear models, as well as their applicability in the genetic value prediction process. The results further indicated the good generalization performance of the neural network model in several additional validation experiments.

  3. Sensitivity analysis of linear programming problem through a recurrent neural network

    Science.gov (United States)

    Das, Raja

    2017-11-01

    In this paper we study the recurrent neural network for solving linear programming problems. To achieve optimality in accuracy and also in computational effort, an algorithm is presented. We investigate the sensitivity analysis of linear programming problem through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.

  4. Efficient computation in adaptive artificial spiking neural networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); R.B.P. Nusselder (Roeland); H.S. Scholte; S.M. Bohte (Sander)

    2017-01-01

    Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of

  5. Diagnostic Classifiers: Revealing how Neural Networks Process Hierarchical Structure

    NARCIS (Netherlands)

    Veldhoen, S.; Hupkes, D.; Zuidema, W.

    2016-01-01

    We investigate how neural networks can be used for hierarchical, compositional semantics. To this end, we define the simple but nontrivial artificial task of processing nested arithmetic expressions and study whether different types of neural networks can learn to add and subtract. We find that

  6. vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design

    OpenAIRE

    Rhu, Minsoo; Gimelshein, Natalia; Clemons, Jason; Zulfiqar, Arslan; Keckler, Stephen W.

    2016-01-01

    The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU...

  7. Numeral eddy current sensor modelling based on genetic neural network

    International Nuclear Information System (INIS)

    Yu Along

    2008-01-01

    This paper presents a method for modelling a numeral eddy current sensor based on a genetic neural network, in order to address the sensor's nonlinearity. The principle and algorithms of the genetic neural network are introduced. In this method, the nonlinear model parameters of the numeral eddy current sensor are optimized by a genetic neural network (GNN) according to measurement data, so the method retains both the global searching ability of the genetic algorithm and the good local searching ability of the neural network. The nonlinear model has the advantages of strong robustness, on-line modelling and high precision. The maximum nonlinearity error can be reduced to 0.037% by using the GNN, whereas the maximum nonlinearity error is 0.075% using the least squares method.
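
    A minimal sketch of a "genetic neural network" in the sense used here: a genetic algorithm searching the weights of a small feed-forward model fitted to a synthetic nonlinear sensor characteristic. The population size, mutation rate and toy calibration curve are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic nonlinear sensor characteristic: displacement -> sensor reading
    d = np.linspace(0, 1, 60)
    reading = 1.0 - np.exp(-2.5 * d) + 0.01 * rng.normal(size=d.shape)

    def net(params, x):
        # Tiny 1-5-1 network whose weights are evolved by the GA
        W1, b1 = params[:5], params[5:10]
        w2, b2 = params[10:15], params[15]
        return np.tanh(np.outer(x, W1) + b1) @ w2 + b2

    def fitness(params):
        return -np.mean((net(params, d) - reading) ** 2)   # GA maximizes fitness

    pop = rng.normal(0, 1, (40, 16))
    for gen in range(150):
        f = np.array([fitness(p) for p in pop])
        order = np.argsort(f)[::-1]
        parents = pop[order[:10]]                           # selection: keep the best 10
        children = []
        for _ in range(30):
            a, b = parents[rng.integers(0, 10, 2)]
            mask = rng.random(16) < 0.5                     # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, 0.1, 16)  # mutation
            children.append(child)
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("best MSE found by the genetic search:", round(float(-fitness(best)), 6))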

  8. Neural networks. A new analytical tool, applicable also in nuclear technology

    Energy Technology Data Exchange (ETDEWEB)

    Stritar, A [Inst. Jozef Stefan, Ljubljana (Slovenia)

    1992-07-01

    The basic concept of neural networks and the back propagation learning algorithm are described. The behaviour of a typical neural network is demonstrated on a simple graphical case. A short literature survey about the application of neural networks in nuclear science and engineering is made. The application of a neural network to probability density calculation is shown. (author)

  9. Noise-enhanced categorization in a recurrently reconnected neural network

    International Nuclear Information System (INIS)

    Monterola, Christopher; Zapotocky, Martin

    2005-01-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails

  10. Noise-enhanced categorization in a recurrently reconnected neural network

    Science.gov (United States)

    Monterola, Christopher; Zapotocky, Martin

    2005-03-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.

  11. A multivariate extension of mutual information for growing neural networks.

    Science.gov (United States)

    Ball, Kenneth R; Grant, Christopher; Mundy, William R; Shafer, Timothy J

    2017-11-01

    Recordings of neural network activity in vitro are increasingly being used to assess the development of neural network activity and the effects of drugs, chemicals and disease states on neural network function. The high-content nature of the data derived from such recordings can be used to infer effects of compounds or disease states on a variety of important neural functions, including network synchrony. Historically, synchrony of networks in vitro has been assessed by determination of correlation coefficients (e.g. Pearson's correlation), by statistics estimated from cross-correlation histograms between pairs of active electrodes, and/or by pairwise mutual information and related measures. The present study examines the application of Normalized Multiinformation (NMI) as a scalar measure of shared information content in a multivariate network that is robust with respect to changes in network size. Theoretical simulations are designed to investigate NMI as a measure of complexity and synchrony in a developing network relative to several alternative approaches. The NMI approach is applied to these simulations and also to data collected during exposure of in vitro neural networks to neuroactive compounds during the first 12 days in vitro, and compared to other common measures, including correlation coefficients and mean firing rates of neurons. NMI is shown to be more sensitive to developmental effects than first order synchronous and nonsynchronous measures of network complexity. Finally, NMI is a scalar measure of global (rather than pairwise) mutual information in a multivariate network, and hence relies on fewer assumptions for cross-network comparisons than historical approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
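
    The paper's exact estimator is not reproduced in the abstract; the sketch below computes multiinformation (sum of marginal entropies minus the joint entropy) for binarized multi-channel activity and normalizes it by the sum of marginal entropies, one of several possible normalizations. The synthetic channel data and bin counts are assumptions.

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(7)

    def entropy(counts):
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log2(p))

    def normalized_multiinformation(B):
        # B: (n_bins, n_channels) binary activity matrix.
        # Multiinformation = sum of marginal entropies - joint entropy,
        # normalized here by the sum of marginal entropies (one of several possible choices).
        n_bins, n_ch = B.shape
        h_marg = sum(entropy(Counter(B[:, j])) for j in range(n_ch))
        h_joint = entropy(Counter(map(tuple, B)))
        return (h_marg - h_joint) / h_marg if h_marg > 0 else 0.0

    # Synthetic "electrode" data: 4 channels, either independent or driven by a shared event train
    n_bins, n_ch = 2000, 4
    independent = (rng.random((n_bins, n_ch)) < 0.2).astype(int)
    shared = (rng.random(n_bins) < 0.2).astype(int)
    synchronous = ((shared[:, None] + (rng.random((n_bins, n_ch)) < 0.05)) > 0).astype(int)

    print("NMI, independent channels :", round(normalized_multiinformation(independent), 3))
    print("NMI, synchronized channels:", round(normalized_multiinformation(synchronous), 3))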

  12. Modeling of quasistatic magnetic hysteresis with feed-forward neural networks

    International Nuclear Information System (INIS)

    Makaveev, Dimitre; Dupre, Luc; De Wulf, Marc; Melkebeek, Jan

    2001-01-01

    A modeling technique for rate-independent (quasistatic) scalar magnetic hysteresis is presented, using neural networks. Based on the theory of dynamic systems and the wiping-out and congruency properties of the classical scalar Preisach hysteresis model, the choice of a feed-forward neural network model is motivated. The neural network input parameters at each time step are the corresponding magnetic field strength and memory state, thereby assuring accurate prediction of the change of magnetic induction. For rate-independent hysteresis, the current memory state can be determined by the last extreme magnetic field strength and induction values, kept in memory. The choice of a network training set is motivated and the performance of the network is illustrated for a test set not used during training. Very accurate prediction of both major and minor hysteresis loops is observed, proving that the neural network technique is suitable for hysteresis modeling. © 2001 American Institute of Physics.
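
    The sketch below mirrors the input/output arrangement described in the abstract: the network sees the current field strength together with a memory state (the last extremum of H and B) and predicts the change of induction. The training data come from a toy play-operator hysteresis generator, not from measurements, and all sizes and learning rates are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(8)

    # Toy rate-independent hysteresis generated from a few "play" operators (a stand-in
    # for measured quasistatic B(H) data; not the model used in the paper).
    def play(H, r, w0=0.0):
        w = np.empty_like(H); prev = w0
        for k, h in enumerate(H):
            prev = max(h - r, min(h + r, prev))
            w[k] = prev
        return w

    t = np.linspace(0, 6 * np.pi, 1200)
    H = 1.2 * np.sin(t) * (0.4 + 0.6 * np.abs(np.sin(t / 6)))       # nested minor loops
    B = 0.5 * np.tanh(play(H, 0.1)) + 0.3 * np.tanh(play(H, 0.4)) + 0.2 * np.tanh(play(H, 0.8))

    # Memory state as described in the abstract: the last extremum of (H, B)
    H_ext, B_ext = np.zeros_like(H), np.zeros_like(B)
    he, be = H[0], B[0]
    for k in range(1, len(H) - 1):
        if (H[k] - H[k - 1]) * (H[k + 1] - H[k]) < 0:               # local extremum of H
            he, be = H[k], B[k]
        H_ext[k], B_ext[k] = he, be

    X = np.column_stack([H[1:-1], H_ext[1:-1], B_ext[1:-1]])
    T = B[2:] - B[1:-1]                                             # target: change of induction

    # Tiny feed-forward net trained by gradient descent on (H, memory state) -> dB
    W1, b1 = rng.normal(0, 0.3, (3, 12)), np.zeros(12)
    w2, b2, lr = rng.normal(0, 0.3, 12), 0.0, 0.05
    for _ in range(2000):
        Hh = np.tanh(X @ W1 + b1)
        P = Hh @ w2 + b2
        e = P - T
        w2 -= lr * Hh.T @ e / len(X); b2 -= lr * e.mean()
        dH = np.outer(e, w2) * (1 - Hh ** 2)
        W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

    print("RMS error on the predicted induction increments:",
          round(float(np.sqrt(np.mean((P - T) ** 2))), 5))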

  13. Hierarchical modular granular neural networks with fuzzy aggregation

    CERN Document Server

    Sanchez, Daniela

    2016-01-01

    In this book, a new method for hybrid intelligent systems is proposed. The proposed method is based on a granular computing approach applied on two levels. The techniques used and combined in the proposed method are modular neural networks (MNNs) with a granular computing (GrC) approach, resulting in a new concept of MNNs: modular granular neural networks (MGNNs). In addition, fuzzy logic (FL) and hierarchical genetic algorithms (HGAs) are used in this research work to improve results. These techniques are chosen because they have been demonstrated in other works to be a good option, and in the case of MNNs and HGAs they allow better results to be obtained than with their conventional versions, respectively artificial neural networks and genetic algorithms.

  14. Modeling polyvinyl chloride Plasma Modification by Neural Networks

    Science.gov (United States)

    Wang, Changquan

    2018-03-01

    Neural network models were constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using uniform design. Discharge voltage, discharge gas gap and treatment time were used as the neural network input layer parameters, and the measured values of the contact angle were used as the output layer parameters. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based upon the neural networks. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for discharge plasma surface modification analysis.

  15. Neural network versus classical time series forecasting models

    Science.gov (United States)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    An artificial neural network (ANN) has advantages in time series forecasting, as it has the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study the forecast performance of a neural network and a classical time series forecasting method, namely seasonal autoregressive integrated moving average models, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when a Box-Cox transformation was used as data preprocessing.
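
    The sketch below illustrates the experimental setup rather than the paper's results: a one-step-ahead feed-forward forecast fitted to a raw synthetic series and to its Box-Cox (log, lambda = 0) transform, with MAPE as one of the accuracy measures. The series generator, lag order and network size are assumptions.

    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic trending, noisy series standing in for gold prices
    n = 400
    price = 1200 * np.exp(np.cumsum(rng.normal(0.001, 0.01, n)))

    def boxcox(x, lam):            # Box-Cox preprocessing (lambda fixed here for illustration)
        return np.log(x) if lam == 0 else (x ** lam - 1) / lam

    def fit_forecast(series, lags=5, hidden=6, epochs=800, lr=0.01):
        X = np.array([series[k - lags:k] for k in range(lags, len(series))])
        T = series[lags:]
        mu, sd = X.mean(), X.std()
        Xn, Tn = (X - mu) / sd, (T - mu) / sd
        W1, b1 = rng.normal(0, 0.3, (lags, hidden)), np.zeros(hidden)
        w2, b2 = rng.normal(0, 0.3, hidden), 0.0
        for _ in range(epochs):
            H = np.tanh(Xn @ W1 + b1); P = H @ w2 + b2; e = P - Tn
            w2 -= lr * H.T @ e / len(Xn); b2 -= lr * e.mean()
            dH = np.outer(e, w2) * (1 - H ** 2)
            W1 -= lr * Xn.T @ dH / len(Xn); b1 -= lr * dH.mean(axis=0)
        return P * sd + mu, T

    def mape(pred, actual):
        return 100 * np.mean(np.abs((pred - actual) / actual))

    p_raw, t_raw = fit_forecast(price)
    p_log, _ = fit_forecast(boxcox(price, 0))             # Box-Cox with lambda = 0 (log)
    print(f"MAPE, no preprocessing     : {mape(p_raw, t_raw):.2f}%")
    print(f"MAPE, Box-Cox (log) inputs : {mape(np.exp(p_log), price[5:]):.2f}%")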

  16. Neural networks for sensor validation and plant monitoring

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Eryurek, E.; Mathai, G.

    1990-01-01

    Sensor and process monitoring in power plants require the estimation of one or more process variables. Neural network paradigms are suitable for establishing general nonlinear relationships among a set of plant variables. Multiple-input multiple-output autoassociative networks can follow changes in plant-wide behavior. The backpropagation algorithm has been applied for training feedforward networks. A new and enhanced algorithm for training neural networks (BPN) has been developed and implemented in a VAX workstation. Operational data from the Experimental Breeder Reactor-II (EBR-II) have been used to study the performance of BPN. Several results of application to the EBR-II are presented

  17. Neural Networks In Mining Sciences - General Overview And Some Representative Examples

    Science.gov (United States)

    Tadeusiewicz, Ryszard

    2015-12-01

    The many difficult problems that must now be addressed in mining sciences make us search for ever newer and more efficient computer tools that can be used to solve those problems. Among the numerous tools of this type are the neural networks presented in this article, which, although not yet widely used in mining sciences, are certainly worth consideration. Neural networks are a technique belonging to so-called artificial intelligence, and originate from attempts to model the structure and functioning of biological nervous systems. Initially constructed and tested exclusively out of scientific curiosity, as computer models of parts of the human brain, neural networks have become a surprisingly effective calculation tool in many areas: in technology, medicine, economics, and even social sciences. Unfortunately, they are relatively rarely used in mining sciences and mining technology. The article is intended to convince readers that neural networks can be very useful in mining sciences as well. It contains information on how modern neural networks are built, how they operate and how one can use them. The preliminary discussion presented in this paper can help the reader form an opinion on whether this is a handy tool that could be useful to them, and what it might be useful for. Of course, the brief introduction to neural networks contained in this paper will not be enough for readers who are convinced by the arguments presented here and want to use neural networks. They will still need a considerable amount of detailed knowledge before they can independently create and build such networks and use them in practice. However, an interested reader who decides to try out the capabilities of neural networks will also find here links to references that will allow them to start exploring neural networks quickly, and then work with this handy tool efficiently. This will be easy, because there are currently quite a few ready-made computer

  18. The Effects of GABAergic Polarity Changes on Episodic Neural Network Activity in Developing Neural Systems

    Directory of Open Access Journals (Sweden)

    Wilfredo Blanco

    2017-09-01

    Full Text Available Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these, and other, changes has on the network behavior is hard to know from experimental studies, since they all happen in parallel. One advantage of a computational approach is the ability to study developmental changes in isolation. Here, we examine the effects of the GABAergic synapse polarity change on the spontaneous activity of both a mean-field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the “intermediate neurons.” We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral change results from priming of these neurons. The study illustrates how even arguably the simplest of developmental changes that occur in neural systems can present non-intuitive behaviors. It also makes predictions about neural network behavioral changes

  19. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as the outlier probability and regularization parameters. We suggest adapting the outlier probability and regularization parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...

  20. Modelling the permeability of polymers: a neural network approach

    NARCIS (Netherlands)

    Wessling, Matthias; Mulder, M.H.V.; Bos, A.; Bos, A.; van der Linden, M.K.T.; Bos, M.; van der Linden, W.E.

    1994-01-01

    In this short communication, the prediction of the permeability of carbon dioxide through different polymers using a neural network is studied. A neural network is a numeric-mathematical construction that can model complex non-linear relationships. Here it is used to correlate the IR spectrum of a

  1. Classes of feedforward neural networks and their circuit complexity

    NARCIS (Netherlands)

    Shawe-Taylor, John S.; Anthony, Martin H.G.; Kern, Walter

    1992-01-01

    This paper aims to place neural networks in the context of boolean circuit complexity. We define appropriate classes of feedforward neural networks with specified fan-in, accuracy of computation and depth and using techniques of communication complexity proceed to show that the classes fit into a

  2. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  3. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  4. A neural network approach to job-shop scheduling.

    Science.gov (United States)

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling-salesman-problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  5. Comparing Models GRM, Refraction Tomography and Neural Network to Analyze Shallow Landslide

    Directory of Open Access Journals (Sweden)

    Armstrong F. Sompotan

    2011-11-01

    Full Text Available Detailed investigations of landslides are essential to understand fundamental landslide mechanisms. The seismic refraction method has proven to be a useful geophysical tool for investigating shallow landslides. The objective of this study is to introduce a new workflow using a neural network for analyzing seismic refraction data and to compare the result with other methods, namely the general reciprocal method (GRM) and refraction tomography. The GRM is effective when the velocity structure is relatively simple and refractors are gently dipping. Refraction tomography is capable of modeling the complex velocity structures of landslides. Neural networks are potentially attractive especially where numerical methods are time-consuming and complicated. Neural networks seem to have the ability to establish a relationship between an input and output space for mapping seismic velocity. Therefore, we made a preliminary attempt to evaluate the applicability of neural networks for determining the velocity and elevation of subsurface synthetic models corresponding to arrival times. The training and testing of the neural network were successfully accomplished using the synthetic data. Furthermore, we evaluated the neural network using observed data. The result of the evaluation indicates that the neural network can compute velocity and elevation corresponding to arrival times. The similarity of those models shows the success of the neural network as a new alternative in seismic refraction data interpretation.

  6. An Artificial Neural Network Controller for Intelligent Transportation Systems Applications

    Science.gov (United States)

    1996-01-01

    An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example for utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems appli...

  7. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using

  8. Neural network based method for conversion of solar radiation data

    International Nuclear Information System (INIS)

    Celik, Ali N.; Muneer, Tariq

    2013-01-01

    Highlights: ► A generalized regression neural network is used to predict the solar radiation on tilted surfaces. ► The above network, amongst many such as the multilayer perceptron, is the most successful one. ► The present neural network returns a relative mean absolute error value of 9.1%. ► The present model leads to a mean absolute error of estimate of 14.9 Wh/m². - Abstract: The receiving ends of solar energy conversion systems that generate heat or electricity from radiation are usually tilted at an optimum angle to increase the solar irradiation incident on the surface. Solar irradiation data measured on horizontal surfaces are readily available for many locations where such solar energy conversion systems are installed. Various equations have been developed to convert solar irradiation data measured on a horizontal surface to that on a tilted one; these equations constitute the conventional approach. In this article, an alternative approach, a generalized regression type of neural network, is used to predict the solar irradiation on tilted surfaces, using the minimum number of variables involved in the physical process, namely the global solar irradiation on the horizontal surface and the declination and hour angles. Artificial neural networks have been successfully used in recent years for optimization, prediction and modeling in energy systems as an alternative to conventional modeling approaches. To show the merit of the presently developed neural network, the solar irradiation data predicted from the novel model were compared to those from the conventional approach (isotropic and anisotropic models), with strict reference to the irradiation data measured in the same location. The present neural network model was found to provide solar irradiation values closer to the measured ones than the conventional approach, with a mean absolute error value of 14.9 Wh/m². The other statistical values of coefficient of determination and relative mean absolute error also indicate the
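
    A minimal sketch of a generalized regression neural network (a kernel-weighted average of training targets) mapping global horizontal irradiation, declination and hour angle to tilted-surface irradiation. The synthetic data generator, input scaling and bandwidth are illustrative assumptions, not the measured data of the study.

    import numpy as np

    rng = np.random.default_rng(10)

    # Synthetic training set standing in for measured data:
    # inputs = [global horizontal irradiation, declination (rad), hour angle (rad)]
    # target = irradiation on the tilted surface (here produced by a crude geometric rule)
    n = 500
    Gh = rng.uniform(50, 900, n)
    decl = rng.uniform(-0.41, 0.41, n)
    hour = rng.uniform(-1.2, 1.2, n)
    Gt = Gh * (1.0 + 0.25 * np.cos(hour) * np.cos(decl)) + rng.normal(0, 15, n)

    X = np.column_stack([Gh / 1000.0, decl, hour])      # simple scaling of the inputs
    T = Gt

    def grnn_predict(x, X, T, sigma=0.15):
        # Generalized regression NN: kernel-weighted average of the training targets
        d2 = np.sum((X - x) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        return np.sum(w * T) / np.sum(w)

    # Hold-out check on 50 points not used as pattern units
    idx = rng.choice(n, 50, replace=False)
    mask = np.ones(n, dtype=bool); mask[idx] = False
    pred = np.array([grnn_predict(x, X[mask], T[mask]) for x in X[idx]])
    mae = np.mean(np.abs(pred - T[idx]))
    print(f"mean absolute error of the GRNN sketch: {mae:.1f} Wh/m^2 (synthetic data)")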

  9. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    A large number of small Unmanned Aerial Vehicles (sUAVs) is projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrain, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in the low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs. How can a vehicle's trajectory be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once fully trained, given the current vehicle states, winds, and desired future trajectory, the neural

  10. Determining the confidence levels of sensor outputs using neural networks

    International Nuclear Information System (INIS)

    Broten, G.S.; Wood, H.C.

    1995-01-01

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

  11. Analysis of complex systems using neural networks

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1992-01-01

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems

  12. Reconstruction of neutron spectra through neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2003-01-01

    A neural network has been used to reconstruct neutron spectra starting from the counting rates of the detectors of a Bonner sphere spectrometric system. A group of 56 neutron spectra was selected to calculate the counting rates that they would produce in a Bonner sphere system; with these data and the spectra, the neural network was trained. To test the performance of the net, 12 spectra were used: 6 were taken from the group used for the training, 3 were obtained from mathematical functions, and the other 3 correspond to real spectra. When comparing the original spectra with those reconstructed by the net, we find that our net has a poor performance when reconstructing monoenergetic spectra, which is attributed to the characteristics of the spectra used for training the neural network; however, for the other groups of spectra the results of the net agree with the expected ones. (Author)

  13. Potential usefulness of an artificial neural network for assessing ventricular size

    International Nuclear Information System (INIS)

    Fukuda, Haruyuki; Nakajima, Hideyuki; Usuki, Noriaki; Saiwai, Shigeo; Miyamoto, Takeshi; Inoue, Yuichi; Onoyama, Yasuto.

    1995-01-01

    An artificial neural network approach was applied to assess ventricular size from computed tomograms. Three-layer, feed-forward neural networks with a back propagation algorithm were designed to distinguish between three degrees of enlargement of the ventricles on the basis of the patient's age and six items of computed tomographic information. Data for training and testing the neural network were created from computed tomograms of brains selected at random from daily examinations. Four radiologists decided by mutual consent, subjectively and based on their experience, whether the ventricles were within normal limits, slightly enlarged, or enlarged for the patient's age. The data for training were obtained from 38 patients; the data for testing were obtained from 47 other patients. The performance of the neural network trained using the training data was evaluated by the rate of correct answers on the test data. The ratio of valid responses to the test data obtained from the trained neural networks was more than 90% for all conditions in this study. The solutions were completely valid in the neural networks with two or three units in the hidden layer with 2,200 learning iterations, and with two units in the hidden layer with 11,000 learning iterations. The squared error decreased remarkably in the range from 0 to 500 learning iterations, and was nearly constant beyond two thousand learning iterations. The neural network with a hidden layer having two or three units showed high decision performance. The preliminary results strongly suggest that the neural network approach has potential utility in computer-aided estimation of enlargement of the ventricles. (author)

  14. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis

  15. Parameter diagnostics of phases and phase transition learning by neural networks

    Science.gov (United States)

    Suchsland, Philippe; Wessel, Stefan

    2018-05-01

    We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.

  16. Neural networks as a tool for unit commitment

    DEFF Research Database (Denmark)

    Rønne-Hansen, Peter; Rønne-Hansen, Jan

    1991-01-01

    Some of the fundamental problems in solving the power system unit commitment problem by means of neural networks have been attacked. It has been demonstrated for a small example that neural networks might be a viable alternative. Some of the major problems solved in this initial phase form a basis for the analysis of real-life-sized problems. These will be investigated in the near future...

  17. Neural networks - Potential appplication in the nuclear industry

    International Nuclear Information System (INIS)

    Yiftah, S.

    1989-01-01

    Neural networks are an emerging technology which is perceived to have potential for solving complex computational problems that cannot be solved by standard computational methods. One such example is the inverse kinematics problem, which is considered to be the most difficult problem in robotics. In 1986, only one neural network modelling tool was available; now there are about twenty offered commercially by various companies in North America

  18. Probing many-body localization with neural networks

    Science.gov (United States)

    Schindler, Frank; Regnault, Nicolas; Neupert, Titus

    2017-06-01

    We show that a simple artificial neural network trained on entanglement spectra of individual states of a many-body quantum system can be used to determine the transition between a many-body localized and a thermalizing regime. Specifically, we study the Heisenberg spin-1/2 chain in a random external field. We employ a multilayer perceptron with a single hidden layer, which is trained on labeled entanglement spectra pertaining to the fully localized and fully thermal regimes. We then apply this network to classify spectra belonging to states in the transition region. For training, we use a cost function that contains, in addition to the usual error and regularization parts, a term that favors a confident classification of the transition region states. The resulting phase diagram is in good agreement with the one obtained by more conventional methods and can be computed for small systems. In particular, the neural network outperforms conventional methods in classifying individual eigenstates pertaining to a single disorder realization. It allows us to map out the structure of these eigenstates across the transition with spatial resolution. Furthermore, we analyze the network operation using the dreaming technique to show that the neural network correctly learns by itself the power-law structure of the entanglement spectra in the many-body localized regime.

  19. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    International Nuclear Information System (INIS)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-01-01

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.

  20. ACO-Initialized Wavelet Neural Network for Vibration Fault Diagnosis of Hydroturbine Generating Unit

    Directory of Open Access Journals (Sweden)

    Zhihuai Xiao

    2015-01-01

    Full Text Available Considering the drawbacks of the traditional wavelet neural network, such as low convergence speed and high sensitivity to initial parameters, an ant colony optimization (ACO) initialized wavelet neural network is proposed in this paper for vibration fault diagnosis of a hydroturbine generating unit. In this method, the parameters of the wavelet neural network are initialized by the ACO algorithm, and then the wavelet neural network is trained by the gradient descent algorithm. Amplitudes of the frequency components of the hydroturbine generating unit vibration signals are used as feature vectors for wavelet neural network training to realize a mapping relationship from vibration features to fault types. A real vibration fault diagnosis case for a hydroturbine generating unit shows that the proposed method has faster convergence speed and stronger generalization ability than the traditional wavelet neural network and the ACO wavelet neural network. Thus it can provide an effective solution for online vibration fault diagnosis of a hydroturbine generating unit.

  1. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). While compressing large data sets vastly simplifies both frequentist and Bayesian inference, important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  2. Estimation of Collapse Moment for Wall Thinned Elbows Using Fuzzy Neural Networks

    International Nuclear Information System (INIS)

    Na, Man Gyun; Kim, Jin Weon; Shin, Sun Ho; Kim, Koung Suk; Kang, Ki Soo

    2004-01-01

    In this work, the collapse moment due to wall-thinning defects is estimated using fuzzy neural networks. The developed fuzzy neural networks have been applied to numerical data obtained from finite element analysis. Principal component analysis is used to preprocess the input signals to the fuzzy neural network, to reduce the sensitivity to input changes, and the fuzzy neural networks are trained using a data set prepared for training (training data) and verified using an independent data set not used for training. Two fuzzy neural networks are trained separately on the two classes of extrados and intrados defects, because these classes have different characteristics. The relative 2-sigma errors of the estimated collapse moment are 3.07% for the training data and 4.12% for the test data. These results show that the fuzzy neural networks are sufficiently accurate to be used in the wall-thinning monitoring of elbows.

  3. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    Science.gov (United States)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.

  4. Firing patterns transition and desynchronization induced by time delay in neural networks

    Science.gov (United States)

    Huang, Shoufang; Zhang, Jiqian; Wang, Maosheng; Hu, Chin-Kun

    2018-06-01

    We used the Hindmarsh-Rose (HR) model (Hindmarsh and Rose, 1984) to study the effect of time delay on the transition of firing behaviors and desynchronization in neural networks. As the time delay is increased, neural networks exhibit a diversity of firing behaviors, including regular spiking or bursting and firing pattern transitions (FPTs). Meanwhile, desynchronization of firing and unstable bursting with decreasing amplitude are also increasingly enhanced as the time delay grows. Furthermore, we also studied the effect of coupling strength and network randomness on these phenomena. Our results imply that time delays can induce transitions and desynchronization of firing behaviors in neural networks. These findings provide new insight into the role of time delay in the firing activities of neural networks, and can help to better understand firing phenomena in complex neural-network systems. A possible mechanism in the brain that can cause an increase of time delay is discussed.
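
    For readers unfamiliar with the model, the minimal Python sketch below integrates a single Hindmarsh-Rose neuron with forward Euler, using commonly quoted parameter values; the delay coupling between neurons studied in the paper is omitted.

```python
import numpy as np

def hindmarsh_rose(T=2000.0, dt=0.01, I=3.0,
                   a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x_r=-1.6):
    """Forward-Euler integration of a single Hindmarsh-Rose neuron."""
    n = int(T / dt)
    x, y, z = -1.6, 0.0, 0.0
    trace = np.empty(n)
    for k in range(n):
        dx = y - a * x**3 + b * x**2 - z + I   # membrane potential
        dy = c - d * x**2 - y                  # fast recovery variable
        dz = r * (s * (x - x_r) - z)           # slow adaptation variable
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace[k] = x
    return trace

membrane = hindmarsh_rose()
print(membrane[-5:])  # samples of the bursting/spiking membrane potential
```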

  5. Stability analysis of fractional-order Hopfield neural networks with time delays.

    Science.gov (United States)

    Wang, Hu; Yu, Yongguang; Wen, Guoguang

    2014-07-01

    This paper investigates the stability for fractional-order Hopfield neural networks with time delays. Firstly, the fractional-order Hopfield neural networks with hub structure and time delays are studied. Some sufficient conditions for stability of the systems are obtained. Next, two fractional-order Hopfield neural networks with different ring structures and time delays are developed. By studying the developed neural networks, the corresponding sufficient conditions for stability of the systems are also derived. It is shown that the stability conditions are independent of time delays. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical results obtained in this paper.

  6. Two-Stage Approach to Image Classification by Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Ososkov Gennady

    2018-01-01

    Full Text Available The paper demonstrates the advantages of deep learning networks over ordinary neural networks in their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are then compared as classifiers. Most of the effort in working with deep learning networks is spent on the painstaking optimization of the structures of those networks and their components, such as activation functions and weights, as well as on the procedures for minimizing the loss function, in order to improve performance and speed up training. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional Neural Networks are also used to solve a topical problem of protein genetics, using durum wheat classification as an example. The results of our comparative study demonstrate the undoubted advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.

  7. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    Science.gov (United States)

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  8. Wave transmission prediction of multilayer floating breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Hegde, A.V.

    In the present study, an artificial neural network method has been applied for wave transmission prediction of multilayer floating breakwater. Two neural network models are constructed based on the parameters which influence the wave transmission...

  9. Livermore Big Artificial Neural Network Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-01

    LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library that is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  10. Application of Integrated Neural Network Method to Fault Diagnosis of Nuclear Steam Generator

    International Nuclear Information System (INIS)

    Zhou Gang; Yang Li

    2009-01-01

    A new fault diagnosis method based on integrated neural networks was proposed for the nuclear steam generator (SG), in view of the shortcomings of conventional fault monitoring and diagnosis methods. In the method, two neural networks (ANNs) were employed for the fault diagnosis of the steam generator. One neural network, used for predicting the values of the steam generator operating parameters, was taken as the dynamics model of the steam generator. The principle of the fault monitoring method using the neural network model is to detect the deviations between process signals measured from an operating steam generator and the corresponding output signals from the neural network model of the steam generator. When a deviation exceeds a limit set in advance, an abnormal event is deemed to have occurred. The other neural network, acting as a fault classifier, performs the fault classification of the steam generator, so the fault type is given by the classifier. Clear information on steam generator faults was obtained by fusing the monitoring and diagnosis results of the two neural networks. The simulation results indicate that employing integrated neural networks can improve the capability of fault monitoring and diagnosis for the steam generator. (authors)
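
    A minimal sketch of the residual-based monitoring logic described above is given below; the two callables stand in for the trained prediction and classification networks, and all names, labels and thresholds are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def monitor_and_diagnose(measured, predict_model, classify_model, limit):
    """Residual-based monitoring: flag an anomaly when the deviation between
    measured SG signals and the model prediction exceeds a preset limit,
    then hand the residual pattern to a fault classifier.

    measured       : (n_signals,) current sensor readings
    predict_model  : callable returning the expected readings (dynamics model)
    classify_model : callable mapping a residual vector to a fault label
    limit          : scalar threshold on the residual norm
    """
    residual = measured - predict_model(measured)
    if np.linalg.norm(residual) <= limit:
        return "normal", residual
    return classify_model(residual), residual

# Toy stand-ins for the two trained networks (hypothetical labels)
predict = lambda x: np.zeros_like(x)                  # "expected" readings
classify = lambda r: "tube-leak" if r[0] > 0 else "feedwater-fault"
state, res = monitor_and_diagnose(np.array([0.8, -0.1]), predict, classify, limit=0.5)
print(state)
```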

  11. Generation of artificial accelerograms using neural networks for data of Iran

    International Nuclear Information System (INIS)

    Bargi, Kh.; Loux, C.; Rohani, H.

    2002-01-01

    A new method for generating artificial earthquake accelerograms from response spectra, using neural networks, was proposed by Ghaboussi and Lin in 1997. In this paper the methodology has been extended and enhanced for data from Iran. For this purpose, 40 Iranian acceleration records are first chosen; then an RBF neural network, called a generalized regression neural network, learns the inverse mapping directly from the response spectrum to the Discrete Cosine Transform of the accelerogram. The Discrete Cosine Transform is used as an assisting device to extract the frequency-domain content. Learning of the network is straightforward, and a generalized regression neural network learns it in a few seconds. Outputs are presented to demonstrate the performance of this method and show its capabilities.
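
    The core steps, a generalized regression (kernel-weighted) prediction of DCT coefficients followed by an inverse DCT back to the time domain, can be sketched as follows; the data shapes, smoothing width and the explicit inverse-transform step are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.fft import idct

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """Generalized regression NN: kernel-weighted average of training targets."""
    d2 = np.sum((train_x - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w @ train_y / w.sum()

# train_x: response spectra, train_y: DCT coefficients of recorded accelerograms
rng = np.random.default_rng(1)
train_x, train_y = rng.normal(size=(40, 50)), rng.normal(size=(40, 256))
spectrum = rng.normal(size=50)
dct_coeffs = grnn_predict(spectrum, train_x, train_y)
accelerogram = idct(dct_coeffs, norm="ortho")   # back to the time domain
print(accelerogram.shape)
```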

  12. Google matrix analysis of C.elegans neural network

    Energy Technology Data Exchange (ETDEWEB)

    Kandiah, V., E-mail: kandiah@irsamc.ups-tlse.fr; Shepelyansky, D.L., E-mail: dima@irsamc.ups-tlse.fr

    2014-05-01

    We study the structural properties of the neural network of the C.elegans (worm) from a directed graph point of view. The Google matrix analysis is used to characterize the neuron connectivity structure and node classifications are discussed and compared with physiological properties of the cells. Our results are obtained by a proper definition of neural directed network and subsequent eigenvector analysis which recovers some results of previous studies. Our analysis highlights particular sets of important neurons constituting the core of the neural system. The applications of PageRank, CheiRank and ImpactRank to characterization of interdependency of neurons are discussed.
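
    As a reference for the ranking machinery mentioned above, a generic PageRank computation on a small directed adjacency matrix might look like the sketch below (not the authors' code); CheiRank is the same computation on the transposed network.

```python
import numpy as np

def pagerank(adjacency, alpha=0.85, tol=1e-10):
    """PageRank via power iteration on the Google matrix G = alpha*S + (1-alpha)/N."""
    A = np.asarray(adjacency, dtype=float)
    N = A.shape[0]
    out_deg = A.sum(axis=1)
    # Row-stochastic matrix S; dangling nodes are spread uniformly
    S = np.where(out_deg[:, None] > 0, A / np.maximum(out_deg, 1)[:, None], 1.0 / N)
    p = np.full(N, 1.0 / N)
    while True:
        p_new = alpha * S.T @ p + (1.0 - alpha) / N
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# Toy directed "connectome": entry (i, j) = 1 if neuron i projects to neuron j
A = np.array([[0, 1, 1, 0], [0, 0, 1, 0], [1, 0, 0, 1], [0, 0, 1, 0]])
print(pagerank(A))
```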

  13. Google matrix analysis of C.elegans neural network

    International Nuclear Information System (INIS)

    Kandiah, V.; Shepelyansky, D.L.

    2014-01-01

    We study the structural properties of the neural network of the C.elegans (worm) from a directed graph point of view. The Google matrix analysis is used to characterize the neuron connectivity structure and node classifications are discussed and compared with physiological properties of the cells. Our results are obtained by a proper definition of neural directed network and subsequent eigenvector analysis which recovers some results of previous studies. Our analysis highlights particular sets of important neurons constituting the core of the neural system. The applications of PageRank, CheiRank and ImpactRank to characterization of interdependency of neurons are discussed.

  14. On-line plant-wide monitoring using neural networks

    International Nuclear Information System (INIS)

    Turkcan, E.; Ciftcioglu, O.; Eryurek, E.; Upadhyaya, B.R.

    1992-06-01

    The on-line signal analysis system designed for multi-level mode operation using neural networks is described. The system is capable of monitoring the plant state by tracking up to 32 signals simultaneously. The data used for this study were acquired from the Borssele Nuclear Power Plant (PWR type) using the on-line monitoring system. An on-line plant-wide monitoring study using a multilayer neural network model is discussed in this paper. The back-propagation neural network algorithm is used for training the network. The technique assumes that each physical state of the power plant can be represented by a unique pattern of instrument readings which can be related to the condition of the plant. When a disturbance occurs, the sensor readings undergo a transient and form a different set of patterns which represent the new operational status. Diagnosing these patterns can be helpful in identifying this new state of the power plant. To this end, plant-wide monitoring with neural networks is one of the new techniques in real-time applications. (author). 9 refs.; 5 figs

  15. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    Science.gov (United States)

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionately small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems, where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.

  16. A Neural Network Approach to Muon Triggering in ATLAS

    CERN Document Server

    Livneh, Ran; CERN. Geneva

    2007-01-01

    The extremely high rate of events that will be produced in the future Large Hadron Collider requires the triggering mechanism to make precise decisions within a few nanoseconds. This poses a complicated inverse problem, arising from the inhomogeneous nature of the magnetic fields in ATLAS. This thesis presents a study of an application of Artificial Neural Networks to the muon triggering problem in the ATLAS end-cap. A comparison with realistic results from the ATLAS first-level trigger simulation was in favour of the neural network, but this is mainly due to the superior resolution available off-line. Other options for applying a neural network to this problem are discussed.

  17. Behaviour in 0 of the Neural Networks Training Cost

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1998-01-01

    We study the behaviour in zero of the derivatives of the cost function used when training non-linear neural networks. It is shown that a fair number of first, second and higher order derivatives vanish in zero, validating the belief that 0 is a peculiar and potentially harmful location. These calculations are related to practical and theoretical aspects of neural network training.

  18. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  19. Representation of neural networks as Lotka-Volterra systems

    International Nuclear Information System (INIS)

    Moreau, Yves; Vandewalle, Joos; Louies, Stephane; Brenig, Leon

    1999-01-01

    We study changes of coordinates that allow the representation of the ordinary differential equations describing continuous-time recurrent neural networks as differential equations describing predator-prey models--also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form, where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network.

  20. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. At the same time, large errors still existed, which indicates that our approach still needs to be improved.

  1. A Neural Network Based Dutch Part of Speech Tagger

    NARCIS (Netherlands)

    Boschman, E.; op den Akker, Hendrikus J.A.; Nijholt, A.; Nijholt, Antinus; Pantic, Maja; Pantic, M.; Poel, M.; Poel, Mannes; Hondorp, G.H.W.

    2008-01-01

    In this paper a Neural Network is designed for Part-of-Speech Tagging of Dutch text. Our approach uses the Corpus Gesproken Nederlands (CGN) consisting of almost 9 million transcribed words of spoken Dutch, divided into 15 different categories. The outcome of the design is a Neural Network with an

  2. Application of genetic neural network in steam generator fault diagnosing

    International Nuclear Information System (INIS)

    Lin Xiaogong; Jiang Xingwei; Liu Tao; Shi Xiaocheng

    2005-01-01

    In this paper, a new hybrid algorithm combining a neural network with a genetic algorithm is adopted, aiming at the problems of slow convergence and the tendency to fall into local minima during training of the traditional BP neural network, and it is applied to fault diagnosis of the steam generator. The results show that this algorithm can effectively solve the convergence problem in network training. (author)
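
    The hybrid scheme described, a genetic search for good initial weights followed by gradient refinement, can be sketched as below for a tiny network; for brevity the gradient stage uses a finite-difference gradient instead of analytic back-propagation, and all data, sizes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))               # toy steam-generator symptom vectors
y = (X.sum(axis=1) > 0).astype(float)      # toy fault label

def unpack(w):                             # 3-4-1 network, 21 parameters
    return w[:12].reshape(4, 3), w[12:16], w[16:20], w[20]

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

# Genetic stage: evolve candidate initial weight vectors
pop = rng.normal(size=(30, 21))
for _ in range(50):
    fitness = np.array([-mse(p) for p in pop])
    parents = pop[np.argsort(fitness)[-10:]]                          # selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(21) < 0.5, a, b)                  # crossover
        child = child + 0.1 * rng.normal(size=21) * (rng.random(21) < 0.1)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])
w = pop[np.argmin([mse(p) for p in pop])].copy()

# Gradient stage: descent from the GA seed (finite differences stand in for BP)
for _ in range(200):
    grad = np.array([(mse(w + 1e-5 * e) - mse(w - 1e-5 * e)) / 2e-5
                     for e in np.eye(21)])
    w -= 0.2 * grad
print("final MSE:", mse(w))
```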

  3. Neural networks for perception human and machine perception

    CERN Document Server

    Wechsler, Harry

    1991-01-01

    Neural Networks for Perception, Volume 1: Human and Machine Perception focuses on models for understanding human perception in terms of distributed computation and examples of PDP models for machine perception. This book addresses both theoretical and practical issues related to the feasibility of both explaining human perception and implementing machine perception in terms of neural network models. The book is organized into two parts. The first part focuses on human perception. Topics on network model of object recognition in human vision, the self-organization of functional architecture in t

  4. Improved Local Weather Forecasts Using Artificial Neural Networks

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Jørgensen, Bo Nørregaard

    2015-01-01

    Solar irradiance and temperature forecasts are used in many different control systems, such as intelligent climate control systems in commercial greenhouses, where the solar irradiance affects the use of supplemental lighting. This paper proposes a novel method to predict the forthcoming weather using an artificial neural network. The neural network used is a NARX network, which is known to model non-linear systems well. The predictions are compared to both a design reference year as well as commercial weather forecasts based upon numerical modelling. The results presented in this paper show...

  5. Pattern classification and recognition of invertebrate functional groups using self-organizing neural networks.

    Science.gov (United States)

    Zhang, WenJun

    2007-07-01

    Self-organizing neural networks can be used to mimic non-linear systems. The main objective of this study is to make pattern classification and recognition on sampling information using two self-organizing neural network models. Invertebrate functional groups sampled in the irrigated rice field were classified and recognized using one-dimensional self-organizing map and self-organizing competitive learning neural networks. Comparisons between neural network models, distance (similarity) measures, and number of neurons were conducted. The results showed that self-organizing map and self-organizing competitive learning neural network models were effective in pattern classification and recognition of sampling information. Overall, the performance of the one-dimensional self-organizing map neural network was better than that of the self-organizing competitive learning neural network. The number of neurons could determine the number of classes in the classification. Different neural network models with various distance (similarity) measures yielded similar classifications. Some differences, dependent upon the specific network structure, were found. The pattern of an unrecognized functional group was recognized with the self-organizing neural network. A relatively consistent classification indicated that the following invertebrate functional groups, terrestrial blood sucker; terrestrial flyer; tourist (nonpredatory species with no known functional role other than as prey in ecosystem); gall former; collector (gather, deposit feeder); predator and parasitoid; leaf miner; idiobiont (acarine ectoparasitoid), were classified into the same group, and the following invertebrate functional groups, external plant feeder; terrestrial crawler, walker, jumper or hunter; neustonic (water surface) swimmer (semi-aquatic), were classified into another group. It was concluded that reliable conclusions could be drawn from comparisons of different neural network models that use different distance

  6. Temporal neural networks and transient analysis of complex engineering systems

    Science.gov (United States)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
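
    The gamma memory underlying the LOGF neuron can be illustrated with the short sketch below, which implements the standard discrete gamma filter cascade; the tap count and memory parameter are illustrative assumptions.

```python
import numpy as np

def gamma_memory(signal, order=3, mu=0.3):
    """Discrete gamma memory: a cascade of leaky integrators.

    Tap k follows x_k[t] = (1 - mu) * x_k[t-1] + mu * x_{k-1}[t-1],
    with x_0[t] = signal[t]; mu trades memory depth against resolution.
    Returns an array of shape (len(signal), order + 1) of tap outputs,
    which a LOGF-style neuron would combine through trainable weights.
    """
    taps = np.zeros(order + 1)
    out = np.zeros((len(signal), order + 1))
    for t, u in enumerate(signal):
        prev = taps.copy()
        taps[0] = u
        taps[1:] = (1.0 - mu) * prev[1:] + mu * prev[:-1]
        out[t] = taps
    return out

print(gamma_memory(np.sin(np.linspace(0.0, 6.28, 100))).shape)  # (100, 4)
```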

  7. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    secure and dependable protection for power transformers. Owing to its superior learning and generalization capabilities, the Artificial Neural Network (ANN) can considerably enhance the scope of the WI method. The ANN approach is faster, more robust and easier to implement than the conventional waveform approach. The use of neural ...

  8. Neural-Network Quantum States, String-Bond States, and Chiral Topological States

    Science.gov (United States)

    Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio

    2018-01-01

    Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
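
    For concreteness, the restricted-Boltzmann-machine ansatz referred to above assigns an (unnormalized) amplitude to each spin configuration as in the following sketch; real-valued parameters are used here only for simplicity, whereas general states require complex ones.

```python
import numpy as np

def rbm_amplitude(spins, a, b, W):
    """Unnormalized RBM wave-function amplitude for a spin configuration.

    spins : (N,) array of +/-1 visible spins
    a     : (N,) visible biases, b : (M,) hidden biases, W : (M, N) couplings
    psi(s) = exp(a . s) * prod_j 2*cosh(b_j + W_j . s)
    """
    theta = b + W @ spins
    return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(0)
N, M = 6, 12
s = rng.choice([-1.0, 1.0], size=N)
print(rbm_amplitude(s, 0.1 * rng.normal(size=N),
                    0.1 * rng.normal(size=M), 0.1 * rng.normal(size=(M, N))))
```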

  9. DCS-Neural-Network Program for Aircraft Control and Testing

    Science.gov (United States)

    Jorgensen, Charles C.

    2006-01-01

    A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm, implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.

  10. Predicting recurrent aphthous ulceration using genetic algorithms-optimized neural networks

    Directory of Open Access Journals (Sweden)

    Najla S Dar-Odeh

    2010-05-01

    Full Text Available Najla S Dar-Odeh1, Othman M Alsmadi2, Faris Bakri3, Zaer Abu-Hammour2, Asem A Shehabi3, Mahmoud K Al-Omiri1, Shatha M K Abu-Hammad4, Hamzeh Al-Mashni4, Mohammad B Saeed4, Wael Muqbil4, Osama A Abu-Hammad1 1Faculty of Dentistry, 2Faculty of Engineering and Technology, 3Faculty of Medicine, University of Jordan, Amman, Jordan; 4Dental Department, University of Jordan Hospital, Amman, Jordan. Objective: To construct and optimize a neural network that is capable of predicting the occurrence of recurrent aphthous ulceration (RAU) based on a set of appropriate input data. Participants and methods: Artificial neural network (ANN) software employing genetic algorithms to optimize the architecture of the neural networks was used. Input and output data of 86 participants (predisposing factors and status of the participants with regards to recurrent aphthous ulceration) were used to construct and train the neural networks. The optimized neural networks were then tested using untrained data of a further 10 participants. Results: The optimized neural network, which produced the most accurate predictions for the presence or absence of recurrent aphthous ulceration, was found to employ: gender, hematological (with or without ferritin) and mycological data of the participants, frequency of tooth brushing, and consumption of vegetables and fruits. Conclusions: Factors appearing to be related to recurrent aphthous ulceration and appropriate for use as input data to construct ANNs that predict recurrent aphthous ulceration were found to include the following: gender, hemoglobin, serum vitamin B12, serum ferritin, red cell folate, salivary candidal colony count, frequency of tooth brushing, and the number of fruits or vegetables consumed daily. Keywords: artificial neural networks, recurrent, aphthous ulceration, ulcer

  11. Upset Prediction in Friction Welding Using Radial Basis Function Neural Network

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2013-01-01

    Full Text Available This paper addresses the upset prediction problem of friction welded joints. Based on finite element simulations of inertia friction welding (IFW), a radial basis function (RBF) neural network was developed initially to predict the final upset for a number of welding parameters. The joint upset predicted by the RBF neural network was compared to validated finite element simulations, producing an error of less than 8.16%, which is reasonable. Furthermore, the effects of initial rotational speed and axial pressure on the upset were investigated in relation to energy conversion with the RBF neural network. The developed RBF neural network was also applied to linear friction welding (LFW) and continuous drive friction welding (CDFW). The correlation coefficients of the RBF prediction for LFW and CDFW were 0.963 and 0.998, respectively, which further suggests that an RBF neural network is an effective method for upset prediction of friction welded joints.
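
    A generic RBF network of the kind used here can be fitted in closed form once the centers are chosen, as in the sketch below; the toy data, center selection and kernel width are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def fit_rbf(X, y, centers, sigma):
    """Fit output weights of a Gaussian RBF network by linear least squares."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, sigma, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ w

# Toy data: two welding parameters (e.g. speed, pressure) -> upset
rng = np.random.default_rng(2)
X = rng.uniform(size=(80, 2))
y = 3.0 * X[:, 0] + np.sin(4.0 * X[:, 1])
centers = X[rng.choice(80, size=15, replace=False)]   # centers picked from the data
w = fit_rbf(X, y, centers, sigma=0.3)
print(np.corrcoef(rbf_predict(X, centers, sigma=0.3, w=w), y)[0, 1])
```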

  12. Prediction of Electricity Usage Using Convolutional Neural Networks

    OpenAIRE

    Hansen, Martin

    2017-01-01

    Master's thesis Information- and communication technology IKT590 - University of Agder 2017 Convolutional Neural Networks are overwhelmingly accurate when attempting to predict numbers using the famous MNIST-dataset. In this paper, we are attempting to transcend these results for time-series forecasting, and compare them with several regression models. The Convolutional Neural Network model predicted the same value through the entire time lapse in contrast with the other ...

  13. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
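
    As an example of the two-dimensional reduction discussed above, a plain PCA projection is sketched below; more elaborate nonlinear methods could be substituted, and the data here are random placeholders.

```python
import numpy as np

def pca_2d(X):
    """Project a high-dimensional dataset onto its first two principal components."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by explained variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 12))   # 200 samples with 12 input parameters
coords = pca_2d(X)               # (200, 2), ready for a scatter plot
print(coords.shape)
```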

  14. Nano-topography Enhances Communication in Neural Cells Networks

    KAUST Repository

    Onesto, V.; Cancedda, L.; Coluccio, M. L.; Nanni, M.; Pesce, M.; Malara, N.; Cesarelli, M.; Di Fabrizio, Enzo M.; Amato, F.; Gentile, F.

    2017-01-01

    Neural cells are the smallest building blocks of the central and peripheral nervous systems. Information in neural networks and cell-substrate interactions have been heretofore studied separately. Understanding whether surface nano-topography can

  15. Recurrent Neural Network Based Boolean Factor Analysis and its Application to Word Clustering

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2009-01-01

    Vol. 20, No. 7 (2009), pp. 1073-1086 ISSN 1045-9227 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords: recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.889, year: 2009

  16. Development of nuclear power plant diagnosis technique using neural networks

    International Nuclear Information System (INIS)

    Horiguchi, Masahiro; Fukawa, Naohiro; Nishimura, Kazuo

    1991-01-01

    A nuclear power plant diagnosis technique, called transient phenomena analysis, has been developed which employs neural networks. The neural networks identify malfunctioning equipment by recognizing the pattern of the main plant parameters, making it possible to locate the cause of an abnormality when a plant is in a transient state. When a piece of equipment shows abnormal behavior, many plant parameters either directly or indirectly related to that equipment change simultaneously. When an abrupt change in a plant parameter is detected, changes in the 49 main plant parameters are classified into three types and a characteristic change pattern consisting of 49 data points is defined. The neural networks then judge the cause of the abnormality from this pattern. This neural-network-based technique can recognize 100 patterns that are characterized by the causes of plant abnormality. (author)

  17. PERFORMANCE COMPARISON FOR INTRUSION DETECTION SYSTEM USING NEURAL NETWORK WITH KDD DATASET

    Directory of Open Access Journals (Sweden)

    S. Devaraju

    2014-04-01

    Full Text Available Intrusion detection systems face the challenging task of determining whether a user is a normal user or an attacker in any organizational information system or IT industry. The Intrusion Detection System is an effective method to deal with this kind of problem in networks. Different classifiers are used to detect the different kinds of attacks in networks. In this paper, the performance of intrusion detection is compared across various neural network classifiers. In the proposed research the four types of classifiers used are the Feed Forward Neural Network (FFNN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN) and Radial Basis Neural Network (RBNN). The performance on the full-featured KDD Cup 1999 dataset is compared with that on the reduced-featured KDD Cup 1999 dataset. The MATLAB software is used to train and test on the dataset, and the efficiency and False Alarm Rate are measured. It is shown that the reduced dataset performs better than the full-featured dataset.

  18. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Science.gov (United States)

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  19. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Directory of Open Access Journals (Sweden)

    Waddah Waheeb

    Full Text Available Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF), that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  20. Solving differential equations with unknown constitutive relations as recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.; Tartakovsky, Alexandre M.

    2017-12-08

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
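
    The idea of embedding a discretized ODE with an unknown reaction term inside a recurrent training loop can be sketched as follows; PyTorch is used here for automatic differentiation instead of TensorFlow, and the rate law, step size and network size are illustrative assumptions, not the authors' setup.

```python
import torch

# Small network standing in for the unknown sink/reaction-rate term f(c)
reaction = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

def rollout(c0, n_steps, dt):
    """Unroll dc/dt = -f(c) with forward Euler; the loop is the recurrent part."""
    c, states = c0, []
    for _ in range(n_steps):
        c = c + dt * (-reaction(c))
        states.append(c)
    return torch.stack(states)

# Synthetic "measurements" from a known rate law, used as training targets
dt, n = 0.05, 60
with torch.no_grad():
    true = [torch.tensor([[1.0]])]
    for _ in range(n - 1):
        true.append(true[-1] + dt * (-0.8 * true[-1] ** 2))
    target = torch.stack(true[1:])

opt = torch.optim.Adam(reaction.parameters(), lr=1e-2)
for epoch in range(300):
    opt.zero_grad()
    pred = rollout(torch.tensor([[1.0]]), n - 1, dt)
    loss = torch.mean((pred - target) ** 2)   # fit the rollout to the data
    loss.backward()
    opt.step()
print(float(loss))
```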

  1. Computational neural network regression model for Host based Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Gautam

    2016-09-01

    Full Text Available The current scenario of information gathering and storage in secure systems is a challenging one due to increasing cyber-attacks. There exist computational neural network techniques designed for intrusion detection systems, which provide security to a single machine and to the machines of an entire network. In this paper, we have used two types of computational neural network models, namely, the Generalized Regression Neural Network (GRNN) model and the Multilayer Perceptron Neural Network (MPNN) model, for a Host based Intrusion Detection System using log files that are generated by a single personal computer. The simulation results show the correctly classified percentage of the normal and abnormal (intrusion) classes using a confusion matrix. On the basis of the results and discussion, we found that the Host based Intrusion Systems Model (HISM) significantly improved the detection accuracy while retaining a minimum false alarm rate.

  2. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamic systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses the linear system knowledge to speed up the training process. The technique is tested on different flight/space dynamic models and showed promising results.
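
    For the linear case mentioned above, the weights of a single linear layer can indeed be obtained by solving a least-squares system instead of iterative training, as the following sketch illustrates with a toy discrete-time state-space model (the matrices are chosen arbitrarily and are not from the paper).

```python
import numpy as np

# For a linear dynamics model x_{k+1} = A x_k + B u_k, a single linear layer can
# reproduce the map exactly; its weights follow from least squares on data pairs.
rng = np.random.default_rng(4)
A = np.array([[0.98, 0.10], [-0.10, 0.95]])      # toy state matrix
B = np.array([[0.0], [0.05]])                    # toy input matrix
states = rng.normal(size=(500, 2))
inputs = rng.normal(size=(500, 1))
targets = states @ A.T + inputs @ B.T            # next states

Z = np.hstack([states, inputs, np.ones((500, 1))])   # [x, u, 1] -> weights + bias
W, *_ = np.linalg.lstsq(Z, targets, rcond=None)
print(np.allclose(W[:2].T, A), np.allclose(W[2:3].T, B))   # dynamics recovered
```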

  3. Robust neural network with applications to credit portfolio data analysis.

    Science.gov (United States)

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm was developed for optimization. A Monte Carlo simulation study is conducted to assess the performance of RNN. Comparison with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrates the advantage of the newly proposed procedure.

  4. PARTICLE SWARM OPTIMIZATION (PSO FOR TRAINING OPTIMIZATION ON CONVOLUTIONAL NEURAL NETWORK (CNN

    Directory of Open Access Journals (Sweden)

    Arie Rachmad Syulistyo

    2016-02-01

    Full Text Available Neural networks have attracted plenty of researchers lately. A substantial number of renowned universities have developed neural networks for various academic and industrial applications. Neural networks show considerable performance for various purposes. Nevertheless, for complex applications, neural network accuracy significantly deteriorates. To tackle this drawback, a lot of research has been undertaken on improving the standard neural network. One of the most promising modifications of the standard neural network for complex applications is the deep learning method. In this paper, we proposed the utilization of Particle Swarm Optimization (PSO) in Convolutional Neural Networks (CNNs), which are one of the basic methods in deep learning. The use of PSO in the training process aims to optimize the solution vectors of the CNN in order to improve recognition accuracy. The data used in this research are handwritten digits from MNIST. The experiments showed that the accuracy attained after 4 epochs is 95.08%. This result was better than that of the conventional CNN and DBN. The execution time was also almost similar to that of the conventional CNN. Therefore, the proposed method is a promising one.

  5. Probabilistic neural network algorithm for using radon emanations as an earthquake precursor

    International Nuclear Information System (INIS)

    Gupta, Dhawal; Shahani, D.T.

    2014-01-01

    Investigations throughout the world in the past two decades provide evidence indicating that significant variations of radon and other soil gases occur in association with major geophysical events such as earthquakes. The traditional statistical algorithm includes regression to remove the effect of meteorological parameters from the raw radon signal, and anomalies are calculated either by taking the periodicity in seasonal variations or by using the periodicity computed with the Fast Fourier Transform. In the case of neural networks, the regression step is avoided. A neural network model can be found which learns the behavior of radon with respect to the meteorological parameters, so that changing emission patterns may be adapted to by the model on its own. The output of this neural model is the estimated radon value. This estimated radon value is used to decide whether anomalous behavior of radon has occurred and a valid precursor may be identified. The neural network model developed using a Radial Basis Function network gave a prediction rate of 87.7%, but this was accompanied by many false alarms. The present paper deals with an improved neural network algorithm using Probabilistic Neural Networks that requires neither an explicit regression step nor the use of any specific period. This neural network model reduces the false alarms to zero and gives the same prediction rate as the RBF network. (author)
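
    A probabilistic neural network is essentially a Parzen-window classifier; the sketch below shows the class-conditional kernel scores it computes, with toy feature vectors and a smoothing width chosen only for illustration.

```python
import numpy as np

def pnn_classify(x, train_x, train_y, sigma=0.2):
    """Probabilistic neural network: Parzen-window density estimate per class."""
    scores = {}
    for label in np.unique(train_y):
        pts = train_x[train_y == label]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get), scores

# Toy radon/meteorology feature vectors labeled "normal" vs "anomalous"
rng = np.random.default_rng(5)
normal = rng.normal(0.0, 0.3, size=(50, 3))
anomalous = rng.normal(1.0, 0.3, size=(20, 3))
X = np.vstack([normal, anomalous])
y = np.array(["normal"] * 50 + ["anomalous"] * 20)
print(pnn_classify(np.array([0.9, 1.1, 0.8]), X, y)[0])
```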

  6. Alpha spectral analysis via artificial neural networks

    International Nuclear Information System (INIS)

    Kangas, L.J.; Hashem, S.; Keller, P.E.; Kouzes, R.T.; Troyer, G.L.

    1994-10-01

    An artificial neural network system that assigns quality factors to alpha particle energy spectra is discussed. The alpha energy spectra are used to detect plutonium contamination in the work environment. The quality factors represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with a quality factor by an expert and used in training the artificial neural network expert system. The investigation shows that the expert knowledge of alpha spectra quality factors can be transferred to an ANN system

  7. Neural network approach to radiologic lesion detection

    International Nuclear Information System (INIS)

    Newman, F.D.; Raff, U.; Stroud, D.

    1989-01-01

    An area of artificial intelligence that has gained recent attention is the neural network approach to pattern recognition. The authors explore the use of neural networks in radiologic lesion detection with what is known in the literature as the novelty filter. This filter uses a linear model; images of normal patterns become training vectors and are stored as columns of a matrix. An image of an abnormal pattern is introduced and the abnormality or novelty is extracted. A VAX 750 was used to encode the novelty filter, and two experiments have been examined
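
    The novelty filter described above can be written as a projection onto the orthogonal complement of the subspace spanned by the stored normal patterns; the following sketch uses a pseudoinverse for that projection, with random stand-in images.

```python
import numpy as np

def novelty_filter(normal_patterns, image):
    """Kohonen-style novelty filter: remove the component of an input image
    that lies in the subspace spanned by the stored normal patterns.

    normal_patterns : (n_pixels, n_examples) training images as columns
    image           : (n_pixels,) new image
    Returns the 'novelty' (residual) vector; large entries flag abnormality.
    """
    F = normal_patterns
    projection = F @ np.linalg.pinv(F) @ image    # component explained by normals
    return image - projection

rng = np.random.default_rng(6)
normals = rng.normal(size=(64, 10))               # 10 normal 8x8 "radiographs"
lesion = np.zeros(64); lesion[20] = 5.0           # localized abnormality
novelty = novelty_filter(normals, normals[:, 0] + lesion)
print(np.argmax(np.abs(novelty)))                 # typically highlights pixel 20
```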

  8. TRIGA control rod position and reactivity transient Monitoring by Neural Networks

    International Nuclear Information System (INIS)

    Rosa, R.; Palomba, M.; Sepielli, M.

    2008-01-01

    Plant sensor drift or malfunction and operator actions in nuclear reactor control can be supported by on-line sensor monitoring and data validation through a soft-computing process. On-line recalibration can often avoid manual calibration or replacement of drifting components. DSP requires a prompt response to the modified conditions. Artificial Neural Networks (ANN) and fuzzy logic ensure: prompt response, a link between field measurements and physical system behaviour, interpretation of incoming data, and detection of discrepancies due to mis-calibration or sensor faults. An ANN (Artificial Neural Network) is a system based on the operation of biological neural networks. Although computing is advancing day by day, there are certain tasks that a program written for a common microprocessor is unable to perform. A software implementation of an ANN has both pros and cons. Pros: a neural network can perform tasks that a linear program cannot; when an element of the neural network fails, it can continue without any problem thanks to its parallel nature; a neural network learns and does not need to be reprogrammed; it can be implemented in any application without any problem. Cons: the architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated; it requires high processing time for large neural networks; and the neural network needs training to operate. Three possibilities of training exist. Supervised learning: the network is trained by providing input and matching output patterns. Unsupervised learning: input patterns are not a priori classified and the system must develop its own representation of the input stimuli. Reinforcement learning: an intermediate form of the above two types of learning, in which the learning machine performs some action on the environment and gets a feedback response from the environment. Two TRIGA ANN applications are considered: control rod position and fuel temperature. The outcome obtained in this

  9. Analysis of surface ozone using a recurrent neural network.

    Science.gov (United States)

    Biancofiore, Fabio; Verdecchia, Marco; Di Carlo, Piero; Tomassetti, Barbara; Aruffo, Eleonora; Busilacchio, Marcella; Bianco, Sebastiano; Di Tommaso, Sinibaldo; Colangeli, Carlo

    2015-05-01

    Hourly concentrations of ozone (O₃) and nitrogen dioxide (NO₂) have been measured for 16 years, from 1998 to 2013, in a seaside town in central Italy. The seasonal trends of O₃ and NO₂ recorded in this period have been studied. Furthermore, we used the data collected during one year (2005) to define the characteristics of a multiple linear regression model and a neural network model. Both models are used to model the hourly O₃ concentration, using two scenarios: 1) in the first, only meteorological parameters as inputs, and 2) in the second, photochemical parameters added to those of the first scenario. In order to evaluate the performance of the models, four statistical criteria are used: correlation coefficient, fractional bias, normalized mean squared error and factor of two. All the criteria show that the neural network gives better results, compared to the regression model, in all the model scenarios. Predictions of O₃ have been carried out by many authors using a feed-forward neural architecture. In this paper we show that a recurrent architecture significantly improves the performance of neural predictors. Using only the meteorological parameters as input, the recurrent architecture performs better than the multiple linear regression model that uses meteorological and photochemical data as input, making the neural network model with recurrent architecture a more useful tool in areas where only weather measurements are available. Finally, we used the neural network model to forecast the O₃ hourly concentrations 1, 3, 6, 12, 24 and 48 h ahead. The performance of the model in predicting O₃ levels is discussed. Emphasis is given to the possibility of using the neural network model in an operational way in areas where only meteorological data are available, in order to predict O₃ also at sites where it has not been measured yet.

  10. Separation prediction in two dimensional boundary layer flows using artificial neural networks

    International Nuclear Information System (INIS)

    Sabetghadam, F.; Ghomi, H.A.

    2003-01-01

    In this article, the ability of artificial neural networks to predict separation in steady two dimensional boundary layer flows is studied. Data for network training are extracted from the numerical solution of an ODE obtained from the Von Karman integral equation with an approximate one-parameter Pohlhausen velocity profile. As an appropriate neural network, a two layer radial basis generalized regression artificial neural network is used. The results show good agreement between the overall behavior of the flow fields predicted by the artificial neural network and the actual flow fields for some cases. The method can easily be extended to unsteady separation and to turbulent as well as compressible boundary layer flows. (author)

  11. A network security situation prediction model based on wavelet neural network with optimized parameters

    Directory of Open Access Journals (Sweden)

    Haibo Zhang

    2016-08-01

    Full Text Available Security incidents on networks are sudden and uncertain, so it is very hard to precisely predict the network security situation by traditional methods. In order to improve the prediction accuracy of the network security situation, we build a network security situation prediction model based on a Wavelet Neural Network (WNN) with parameters optimized by the Improved Niche Genetic Algorithm (INGA). The proposed model adopts the WNN, which has strong nonlinear ability and fault-tolerance performance. Also, the parameters for the WNN are optimized through the adaptive genetic algorithm (GA) so that the WNN searches more effectively. Considering the problem that the adaptive GA converges slowly and easily falls into premature convergence, we introduce a novel niche technology with a dynamic fuzzy clustering and elimination mechanism to solve the premature convergence of the GA. Our final simulation results show that the proposed INGA-WNN prediction model is more reliable and effective, and it achieves faster convergence speed and higher prediction accuracy than the Genetic Algorithm-Wavelet Neural Network (GA-WNN), the Genetic Algorithm-Back Propagation Neural Network (GA-BPNN) and the WNN.

  12. NNSYSID and NNCTRL Tools for system identification and control with neural networks

    DEFF Research Database (Denmark)

    Nørgaard, Magnus; Ravn, Ole; Poulsen, Niels Kjølstad

    2001-01-01

    Two toolsets for use with MATLAB have been developed: the neural network based system identification toolbox (NNSYSID) and the neural network based control system design toolkit (NNCTRL). The NNSYSID toolbox has been designed to assist identification of nonlinear dynamic systems. It contains a number of nonlinear model structures based on neural networks, effective training algorithms and tools for model validation and model structure selection. The NNCTRL toolkit is an add-on to NNSYSID and provides tools for design and simulation of control systems based on neural networks. The user can choose among several designs such as direct inverse control, internal model control, nonlinear feedforward, feedback linearisation, optimal control, gain scheduling based on instantaneous linearisation of neural network models and nonlinear model predictive control. This article gives an overview ...

  14. Synchronization criteria for generalized reaction-diffusion neural networks via periodically intermittent control.

    Science.gov (United States)

    Gan, Qintao; Lv, Tianshi; Fu, Zhenhua

    2016-04-01

    In this paper, the synchronization problem for a class of generalized neural networks with time-varying delays and reaction-diffusion terms is investigated under Neumann boundary conditions in terms of the p-norm. The proposed generalized neural network model includes reaction-diffusion local field neural networks and reaction-diffusion static neural networks as its special cases. By establishing a new inequality, some simple and useful conditions are obtained analytically to guarantee the global exponential synchronization of the addressed neural networks under periodically intermittent control. According to the theoretical results, the influences of the diffusion coefficients, the diffusion space, and the control rate on synchronization are analyzed. Finally, the feasibility and effectiveness of the proposed methods are shown by simulation examples; by choosing different diffusion coefficients, diffusion spaces, and control rates, different controlled synchronization states can be obtained.

  15. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    Science.gov (United States)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There are many studies on gas turbine engine identification with dynamic neural network models, where the goal is to minimize the error between the model and the real object during the identification process. Questions about the preparation of the training data sets for such neural networks are usually not addressed. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signal were used to create the training and testing data sets of the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created from these types of training data sets, and each neural network was tested on all four types of test data set. As a result, 16 transient responses from the four neural networks and the four test data sets were compared with the corresponding responses of the thermodynamic model. The errors were compared across all neural networks for each test data set, yielding the error range of each test data set. It is shown that these error ranges are small; therefore the influence of the data set type on identification accuracy is low.
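
    The four excitation types named above (step, fast, slow and mixed) can be pictured with simple synthetic fuel-consumption signals. The sketch below only illustrates what such training inputs might look like; the amplitudes, durations and sampling are made up and are not taken from the study.

      import numpy as np

      def excitation(kind, n=500, seed=0):
          """Synthetic fuel-consumption excitation signals of four types
          (step, fast, slow, mixed) for building identification data sets.
          Amplitudes and rates are illustrative only."""
          rng = np.random.default_rng(seed)
          t = np.arange(n)
          if kind == "step":                       # single step change
              return np.where(t < n // 2, 0.2, 0.8)
          if kind == "fast":                       # rapid random steps
              levels = rng.uniform(0.2, 0.8, n // 25 + 1)
              return np.repeat(levels, 25)[:n]
          if kind == "slow":                       # slow ramps between levels
              levels = rng.uniform(0.2, 0.8, 5)
              return np.interp(t, np.linspace(0, n - 1, 5), levels)
          if kind == "mixed":                      # half slow, half fast
              return np.concatenate([excitation("slow", n // 2, seed),
                                     excitation("fast", n - n // 2, seed)])
          raise ValueError(kind)

      train_sets = {k: excitation(k) for k in ("step", "fast", "slow", "mixed")}
      print({k: (round(v.min(), 2), round(v.max(), 2)) for k, v in train_sets.items()})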

  16. Transport energy demand modeling of South Korea using artificial neural network

    International Nuclear Information System (INIS)

    Geem, Zong Woo

    2011-01-01

    Artificial neural network models were developed to forecast South Korea's transport energy demand. Various independent variables, such as GDP, population, oil price, number of vehicle registrations, and passenger transport amount, were considered, and several good models (Model 1 with GDP, population, and passenger transport amount; Model 2 with GDP, number of vehicle registrations, and passenger transport amount; and Model 3 with oil price, number of vehicle registrations, and passenger transport amount) were selected by comparison with multiple linear regression models. Although certain regression models obtained better R-squared values than the neural network models, this does not mean that the former are better than the latter, because the root mean squared errors of the regression models were much worse than those of the neural network models. In addition, certain regression models showed structural weaknesses according to their P-values, whereas the neural network models produced more robust results. Forecasts using the neural network models show that South Korea will consume around 37 MTOE of transport energy in 2025. - Highlights: → Transport energy demand of South Korea was forecasted using artificial neural networks. → Various variables (GDP, population, oil price, number of registrations, etc.) were considered. → Results of the artificial neural networks were compared with those of multiple linear regression.
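
    As a rough illustration of the kind of comparison described above, the sketch below fits a small multilayer perceptron and an ordinary least-squares regression to the same three-input problem and compares their root mean squared errors on held-out data. The data are synthetic and the network size, scaling and training settings are assumptions, not the authors' configuration.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)

      # Synthetic stand-ins for (GDP, vehicle registrations, passenger transport)
      n = 200
      X = rng.uniform(0.0, 1.0, size=(n, 3))
      # Energy demand with a mild nonlinearity plus noise (illustrative only)
      y = (10 + 20 * X[:, 0] + 5 * X[:, 1] ** 2 + 8 * X[:, 0] * X[:, 2]
           + rng.normal(0.0, 0.5, n))

      train, test = slice(0, 150), slice(150, None)
      scaler = StandardScaler().fit(X[train])
      Xtr, Xte = scaler.transform(X[train]), scaler.transform(X[test])

      mlr = LinearRegression().fit(Xtr, y[train])
      ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                         random_state=0).fit(Xtr, y[train])

      def rmse(model, X, y):
          return float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))

      print("MLR RMSE:", rmse(mlr, Xte, y[test]))
      print("ANN RMSE:", rmse(ann, Xte, y[test]))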

  17. Fastest learning in small-world neural networks

    International Nuclear Information System (INIS)

    Simard, D.; Nadeau, L.; Kroeger, H.

    2005-01-01

    We investigate supervised learning in neural networks. We consider a multi-layered feed-forward network trained with backpropagation. We find that a network with small-world connectivity reduces the learning error and learning time compared with networks of regular or random connectivity. Our study has potential applications in the domains of data mining, image processing, speech recognition, and pattern recognition.
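
    Small-world connectivity of the kind studied above is commonly generated by Watts-Strogatz rewiring: start from a regular ring lattice and rewire each edge with probability p. The sketch below builds such an adjacency matrix, which could then serve as a binary mask on the weights of a layered network; this generic construction is an assumption for illustration, not necessarily the authors' procedure.

      import numpy as np

      def watts_strogatz(n, k, p, seed=0):
          """Adjacency matrix of a Watts-Strogatz small-world graph: n nodes on
          a ring, each linked to its k nearest neighbours on either side, then
          each edge rewired with probability p."""
          rng = np.random.default_rng(seed)
          adj = np.zeros((n, n), dtype=bool)
          for i in range(n):
              for j in range(1, k + 1):
                  adj[i, (i + j) % n] = adj[(i + j) % n, i] = True
          for i in range(n):
              for j in range(1, k + 1):
                  old = (i + j) % n
                  if adj[i, old] and rng.random() < p:   # rewire edge (i, old)
                      choices = np.flatnonzero(~adj[i])
                      choices = choices[choices != i]
                      new = rng.choice(choices)
                      adj[i, old] = adj[old, i] = False
                      adj[i, new] = adj[new, i] = True
          return adj

      adj = watts_strogatz(n=30, k=2, p=0.1)
      print("edges:", int(adj.sum() // 2))
      # A layered network could use `adj` (or a directed variant) as a binary
      # mask on its weight matrix, zeroing connections absent from the graph.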

  18. Evolutionary neural networks: a new alternative for neutron spectrometry

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Galleo, E.

    2009-10-01

    A device used to perform neutron spectrometry is the Bonner sphere spectrometer system. This system has some disadvantages, one of which is the need for spectrum reconstruction using a code based on an iterative reconstruction algorithm, whose main inconvenience is the need for an initial spectrum as close as possible to the spectrum being sought. To avoid this inconvenience, several reconstruction procedures have been reported, combined with various types of experimental methods, based on artificial intelligence techniques such as genetic algorithms, artificial neural networks, and hybrid systems of artificial neural networks evolved with genetic algorithms. This paper analyzes the intersection of neural networks and evolutionary algorithms applied to neutron spectrometry and dosimetry. Because this is an emerging technology, there are no tools for analyzing the obtained results, so this paper presents a computing tool to analyze the neutron spectra and the equivalent doses obtained through the hybrid technology of neural networks and genetic algorithms. The tool offers a graphical user environment that is friendly and easy to operate. (author)
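
    The hybrid of neural networks and genetic algorithms referred to above usually means encoding the network weights as a chromosome and letting a genetic algorithm evolve them against a fitness function. The sketch below evolves the weights of a tiny feed-forward network on a toy regression task; the encoding, selection, crossover and mutation settings are generic assumptions, not the scheme used by these authors.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy task standing in for "detector readings -> spectrum value"
      X = rng.uniform(-1, 1, size=(64, 3))
      y = np.sin(X.sum(axis=1))

      N_IN, N_HID = 3, 6
      N_W = N_IN * N_HID + N_HID          # weights encoded as a flat chromosome

      def forward(w, X):
          W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
          w2 = w[N_IN * N_HID:]
          return np.tanh(X @ W1) @ w2

      def fitness(w):
          return -np.mean((forward(w, X) - y) ** 2)    # negative MSE

      pop = rng.normal(0, 1, size=(40, N_W))           # initial population
      for gen in range(200):
          scores = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(scores)[::-1][:10]] # truncation selection
          children = []
          for _ in range(len(pop) - len(parents)):
              a, b = parents[rng.integers(10, size=2)]
              cut = rng.integers(1, N_W)               # one-point crossover
              child = np.concatenate([a[:cut], b[cut:]])
              child += rng.normal(0, 0.1, N_W) * (rng.random(N_W) < 0.1)  # mutation
              children.append(child)
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(w) for w in pop])]
      print("best MSE:", -fitness(best))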

  19. Methodology for neural networks prototyping. Application to traffic control

    Energy Technology Data Exchange (ETDEWEB)

    Belegan, I.C.

    1998-07-01

    The work described in this report was carried out in the context of the European project ASTORIA (Advanced Simulation Toolbox for Real-World Industrial Application in Passenger Management and Adaptive Control), and concerns the development of an advanced toolbox for complex transportation systems. Our work focused on the methodology for prototyping a set of neural networks corresponding to specific strategies for traffic control and congestion management. The tool used for prototyping is SNNS (Stuttgart Neural Network Simulator), developed at the University of Stuttgart, Institute for Parallel and Distributed High Performance Systems, and the real data from the field were provided by ZELT. This report is structured into six parts. The introduction gives some insights into traffic control and its approaches. The second chapter discusses the various existing control strategies. The third chapter is an introduction to the field of neural networks. The data analysis and pre-processing are described in the fourth chapter. In the fifth chapter, the methodology for prototyping the neural networks is presented. Finally, conclusions and further work are presented. (author) 14 refs.

  20. ACO-Initialized Wavelet Neural Network for Vibration Fault Diagnosis of Hydroturbine Generating Unit

    OpenAIRE

    Xiao, Zhihuai; He, Xinying; Fu, Xiangqian; Malik, O. P.

    2015-01-01

    Considering the drawbacks of the traditional wavelet neural network, such as low convergence speed and high sensitivity to initial parameters, an ant colony optimization- (ACO-) initialized wavelet neural network is proposed in this paper for vibration fault diagnosis of a hydroturbine generating unit. In this method, the parameters of the wavelet neural network are initialized by the ACO algorithm, and then the wavelet neural network is trained by the gradient descent algorithm. Amplitudes of the fr...

  1. Feed forward neural networks modeling for K-P interactions

    International Nuclear Information System (INIS)

    El-Bakry, M.Y.

    2003-01-01

    Artificial intelligence techniques involving neural networks have become vital modeling tools where model dynamics are difficult to track with conventional techniques. This paper makes use of feed-forward neural networks (FFNN) to model the charged multiplicity distribution of K-P interactions at high energies. The FFNN was trained using experimental data for the multiplicity distributions at different lab momenta. Results of the FFNN model were compared to those generated using the parton two-fireball model and to the experimental data. The proposed FFNN model showed a good fit to the experimental data. The neural network model's performance was also tested outside the training space and was found to be in good agreement with the experimental data

  2. Decomposition of Rotor Hopfield Neural Networks Using Complex Numbers.

    Science.gov (United States)

    Kobayashi, Masaki

    2018-04-01

    A complex-valued Hopfield neural network (CHNN) is a multistate model of a Hopfield neural network. It has the disadvantage of low noise tolerance. Meanwhile, a symmetric CHNN (SCHNN) is a modification of a CHNN that improves noise tolerance. Furthermore, a rotor Hopfield neural network (RHNN) is an extension of a CHNN. It has twice the storage capacity of CHNNs and SCHNNs, and much better noise tolerance than CHNNs, although it requires twice as many connection parameters. In this brief, we investigate the relations between CHNNs, SCHNNs, and RHNNs; an RHNN is uniquely decomposed into a CHNN and an SCHNN. In addition, the Hebbian learning rule for RHNNs is decomposed into those for CHNNs and SCHNNs.
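
    For readers unfamiliar with the multistate complex-valued Hopfield model that underlies this work, the sketch below shows the standard complex Hebbian storage rule and a synchronous recall step with phase quantization. It illustrates the CHNN building block under common textbook conventions; it does not reproduce the RHNN decomposition derived in the brief, and the sizes and patterns are made up.

      import numpy as np

      K, N = 4, 16                      # states per neuron, number of neurons
      rng = np.random.default_rng(3)

      def encode(states):
          """Map integer states 0..K-1 to K-th roots of unity."""
          return np.exp(2j * np.pi * states / K)

      def quantize(z):
          """Project complex activations back to the nearest of the K phases."""
          angles = np.angle(z) % (2 * np.pi)
          return np.round(angles / (2 * np.pi / K)).astype(int) % K

      # Complex Hebbian rule: W = (1/N) * sum_p x_p x_p^H, zero self-connections
      patterns = rng.integers(0, K, size=(2, N))
      Xc = encode(patterns)
      W = (Xc.T @ Xc.conj()) / N
      np.fill_diagonal(W, 0)

      # Recall from a noisy version of the first pattern (synchronous updates)
      state = patterns[0].copy()
      state[:3] = rng.integers(0, K, 3)         # corrupt a few neurons
      for _ in range(10):
          state = quantize(W @ encode(state))

      print("recovered:", np.array_equal(state, patterns[0]))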

  3. DAILY RAINFALL-RUNOFF MODELLING BY NEURAL NETWORKS ...

    African Journals Online (AJOL)

    K. Benzineb, M. Remaoun

    2016-09-01

    Sep 1, 2016 ... The hydrologic behaviour modelling of wadi Ouahrane's basin from the rainfall-runoff relation, which is non-linear ... will allow checking the efficiency of formal neural networks for flow simulation in a semi-arid zone.

  4. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    Science.gov (United States)

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights and use them as the fundamental information processing units of a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover searching algorithm obtains the optimal parameter setting iteratively, thus making very efficient neural network learning possible. The quantum neurons and weights, together with the Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training and prospective future applications. Simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Architecture and biological applications of artificial neural networks: a tuberculosis perspective.

    Science.gov (United States)

    Darsey, Jerry A; Griffin, William O; Joginipelli, Sravanthi; Melapu, Venkata Kiran

    2015-01-01

    Advancement of science and technology has prompted researchers to develop new intelligent systems that can solve a variety of problems such as pattern recognition, prediction, and optimization. The ability of the human brain to learn in a fashion that tolerates noise and error has attracted many researchers and provided the starting point for the development of artificial neural networks: the intelligent systems. Intelligent systems can acclimatize to the environment or data and can maximize the chances of success or improve the efficiency of a search. Due to massive parallelism with large numbers of interconnected processors and their ability to learn from data, neural networks can solve a variety of challenging computational problems. Neural networks have the ability to derive meaning from complicated and imprecise data; they are used to detect patterns and trends that are too complex for humans or other computer systems to notice. Solutions to the toughest problems will not be found through one narrow specialization; therefore we need to combine interdisciplinary approaches to discover solutions to a variety of problems. Many researchers in different disciplines, such as medicine, bioinformatics, molecular biology, and pharmacology, have successfully applied artificial neural networks. This chapter helps the reader understand the basics of artificial neural networks, their applications, and methodology; it also outlines the network learning process and architecture. We present a brief outline of the application of neural networks to medical diagnosis, drug discovery, gene identification, and protein structure prediction. We conclude with a summary of the results from our study on tuberculosis data using neural networks, in diagnosing active tuberculosis and predicting chronic vs. infiltrative forms of tuberculosis.

  6. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    neural networks, such as learning, adapting and copying by means of parallel ... to provide robust recognition of hand-printed English text. Engine idle and misfiring ... and s represents the bounded activation function of a neuron. It is typically ...

  7. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software and hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, the researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. The researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  8. Energy efficiency optimisation for distillation column using artificial neural network models

    International Nuclear Information System (INIS)

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns, incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort in model development and the large computational effort associated with mechanistic model evaluation. This issue can be addressed by using neural network models, which can be quickly developed from process operation data. The computation time for neural network model evaluation is very short, making such models ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying product quality constraints. Applications to the binary systems of methanol-water and benzene-toluene separation result in reductions of utility consumption of 8.2% and 28.2%, respectively. Application to a multi-component separation column also demonstrates the effectiveness of the proposed method, with a 32.4% improvement in exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural networks offer improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption by 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
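
    Bootstrap aggregated neural networks, as used above, train several networks on bootstrap resamples of the data and average their predictions; the spread across the ensemble also gives a rough reliability measure. The sketch below shows the idea with scikit-learn MLP regressors on synthetic data; the ensemble size, network structure and data are illustrative assumptions, not the authors' configuration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(4)

      # Synthetic stand-in for (operating conditions) -> exergy efficiency
      X = rng.uniform(0, 1, size=(300, 4))
      y = np.sin(2 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=300)

      class BootstrapAggregatedNN:
          """Train n_models networks on bootstrap resamples and average them."""
          def __init__(self, n_models=10, **mlp_kwargs):
              self.n_models = n_models
              self.mlp_kwargs = mlp_kwargs

          def fit(self, X, y):
              n = len(X)
              self.models = []
              for i in range(self.n_models):
                  idx = rng.integers(0, n, size=n)          # bootstrap resample
                  m = MLPRegressor(random_state=i, **self.mlp_kwargs)
                  self.models.append(m.fit(X[idx], y[idx]))
              return self

          def predict(self, X):
              preds = np.stack([m.predict(X) for m in self.models])
              return preds.mean(axis=0), preds.std(axis=0)   # mean and spread

      bag = BootstrapAggregatedNN(n_models=10, hidden_layer_sizes=(10,),
                                  max_iter=3000).fit(X[:250], y[:250])
      mean, spread = bag.predict(X[250:])
      print("test RMSE:", float(np.sqrt(np.mean((mean - y[250:]) ** 2))))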

  9. Experiments in Neural-Network Control of a Free-Flying Space Robot

    Science.gov (United States)

    Wilson, Edward

    1995-01-01

    Four important generic issues are identified and addressed in some depth in this thesis as part of the development of an adaptive neural network based control system for an experimental free flying space robot prototype. The first issue concerns the importance of true system level design of the control system. A new hybrid strategy is developed here, in depth, for the beneficial integration of neural networks into the total control system. A second important issue in neural network control concerns incorporating a priori knowledge into the neural network. In many applications, it is possible to get a reasonably accurate controller using conventional means. If this prior information is used purposefully to provide a starting point for the optimizing capabilities of the neural network, it can provide much faster initial learning. In a step towards addressing this issue, a new generic Fully Connected Architecture (FCA) is developed for use with backpropagation. A third issue is that neural networks are commonly trained using a gradient based optimization method such as backpropagation; but many real world systems have Discrete Valued Functions (DVFs) that do not permit gradient based optimization. One example is the on-off thrusters that are common on spacecraft. A new technique is developed here that now extends backpropagation learning for use with DVFs. The fourth issue is that the speed of adaptation is often a limiting factor in the implementation of a neural network control system. This issue has been strongly resolved in the research by drawing on the above new contributions.

  10. Neural-Network Control Of Prosthetic And Robotic Hands

    Science.gov (United States)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  11. Optimized Neural Network for Fault Diagnosis and Classification

    International Nuclear Information System (INIS)

    Elaraby, S.M.

    2005-01-01

    This paper presents a developed and implemented toolbox for optimizing the neural network structure for fault diagnosis and classification. An evolutionary algorithm based on a hierarchical genetic algorithm structure is used for optimization, and the simplest feed-forward neural network architecture is selected. The developed toolbox has a friendly user interface and generates multiple solutions. The performance and applicability of the proposed toolbox are verified with benchmark data patterns and accident diagnosis of the Egyptian Second Research Reactor (ETRR-2)

  12. Neural network post-processing of grayscale optical correlator

    Science.gov (United States)

    Lu, Thomas T; Hughlett, Casey L.; Zhoua, Hanying; Chao, Tien-Hsin; Hanan, Jay C.

    2005-01-01

    In this paper we present the use of a radial basis function neural network (RBFNN) as a post-processor to assist the optical correlator in identifying objects and rejecting false alarms. Image plane features near the correlation peaks are extracted and fed to the neural network for analysis. The approach is capable of handling a large number of object variations and filter sets. Preliminary experimental results are presented and the performance is analyzed.

  13. Cellular neural networks for the stereo matching problem

    International Nuclear Information System (INIS)

    Taraglio, S.; Zanela, A.

    1997-03-01

    The applicability of the Cellular Neural Network (CNN) paradigm to the problem of recovering information on the three-dimensional structure of the environment is investigated. The proposed approach is the stereo matching of video images. The starting point of this work is the Zhou-Chellappa neural network implementation for the same problem. The CNN based system presented here yields the same results as the previous approach, but without many of its drawbacks

  14. Gear Fault Diagnosis Based on BP Neural Network

    Science.gov (United States)

    Huang, Yongsheng; Huang, Ruoshi

    2018-03-01

    Gear transmissions are complex and widely used in machinery, and their fault modes exhibit nonlinear characteristics. This paper uses a BP neural network trained on four typical gear failure modes and achieves satisfactory results. When evaluated with test data, the predictions agree with the actual results. The results show that the BP neural network can effectively handle the complex states of gear faults in gear fault diagnosis.

  15. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Ammar Daskin

    2018-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the pha...

  16. An introduction to neural networks surgery, a field of neuromodulation which is based on advances in neural networks science and digitised brain imaging.

    Science.gov (United States)

    Sakas, D E; Panourias, I G; Simpson, B A

    2007-01-01

    Operative Neuromodulation is the field of altering electrically or chemically the signal transmission in the nervous system by implanted devices in order to excite, inhibit or tune the activities of neurons or neural networks and produce therapeutic effects. The present article reviews relevant literature on procedures or devices applied either in contact with the cerebral cortex or cranial nerves or in deep sites inside the brain in order to treat various refractory neurological conditions such as: a) chronic pain (facial, somatic, deafferentation, phantom limb), b) movement disorders (Parkinson's disease, dystonia, Tourette syndrome), c) epilepsy, d) psychiatric disease, e) hearing deficits, and f) visual loss. These data indicate that in operative neuromodulation, a new field emerges that is based on neural networks research and on advances in digitised stereometric brain imaging which allow precise localisation of cerebral neural networks and their relay stations; this field can be described as Neural networks surgery because it aims to act extrinsically or intrinsically on neural networks and to alter therapeutically the neural signal transmission with the use of implantable electrical or electronic devices. The authors also review neurotechnology literature relevant to neuroengineering, nanotechnologies, brain computer interfaces, hybrid cultured probes, neuromimetics, neuroinformatics, neurocomputation, and computational neuromodulation; the latter field is dedicated to the study of the biophysical and mathematical characteristics of electrochemical neuromodulation. The article also brings forward particularly interesting lines of research such as the carbon nanofibers electrode arrays for simultaneous electrochemical recording and stimulation, closed-loop systems for responsive neuromodulation, and the intracortical electrodes for restoring hearing or vision. The present review of cerebral neuromodulatory procedures highlights the transition from the

  17. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    Directory of Open Access Journals (Sweden)

    Min-Joo Kang

    Full Text Available A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and thus the sensor can identify any malicious attack on the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies, such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to an attack with a significantly improved detection ratio on the controller area network (CAN) bus.

  18. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    Science.gov (United States)

    Kang, Min-Joo; Kang, Je-Won

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and thus the sensor can identify any malicious attack on the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies, such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to an attack with a significantly improved detection ratio on the controller area network (CAN) bus.
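
    The probability-based feature vectors described in these two records can be pictured as statistics of recent packet payloads fed to a feed-forward classifier. The sketch below is only a schematic reading of that idea, with made-up CAN-like payloads, a simplified bit-probability feature and a generic classifier in place of the DBN-pre-trained DNN of the paper; none of these details are taken from the original work.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(5)

      def bit_probability_features(payloads):
          """Per-bit empirical probability of '1' over a window of 8-byte
          payloads -- a simplified stand-in for probability-based features."""
          bits = np.unpackbits(np.asarray(payloads, dtype=np.uint8), axis=1)
          return bits.mean(axis=0)                      # 64 values in [0, 1]

      def make_window(attack):
          """Simulate a window of 32 payloads; attack windows inject a constant
          spoofed payload, which skews the bit probabilities."""
          normal = rng.integers(0, 256, size=(32, 8), dtype=np.uint8)
          if attack:
              normal[::2] = np.array([255, 0, 255, 0, 255, 0, 255, 0],
                                     dtype=np.uint8)
          return bit_probability_features(normal)

      X = np.array([make_window(attack=i % 2 == 1) for i in range(400)])
      y = np.array([i % 2 for i in range(400)])         # 0 = normal, 1 = attack

      clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                          random_state=0).fit(X[:300], y[:300])
      print("hold-out accuracy:", clf.score(X[300:], y[300:]))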

  19. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  20. Application of artificial neural network for medical image recognition and diagnostic decision making

    International Nuclear Information System (INIS)

    Asada, N.; Eiho, S.; Doi, K.; MacMahon, H.; Montner, S.M.; Giger, M.L.

    1989-01-01

    An artificial neural network has been applied for pattern recognition and used as a tool in an expert system. The purpose of this study is to examine the potential usefulness of the neural network approach in medical applications for image recognition and decision making. The authors designed multilayer feedforward neural networks with a back-propagation algorithm for this study. Using first-pass radionuclide ventriculograms, they attempted to identify the right and left ventricles of the heart and the lungs by training the neural network with patterns of time-activity curves. In a preliminary study, the neural network enabled identification of the lungs and heart chambers once the network was trained sufficiently by means of repeated entries of data from the same case