WorldWideScience

Sample records for neural network types

  1. Type-2 fuzzy neural networks and their applications

    CERN Document Server

    Aliev, Rafik Aziz

    2014-01-01

    This book deals with the theory, design principles, and application of hybrid intelligent systems using type-2 fuzzy sets in combination with other paradigms of Soft Computing technology such as Neuro-Computing and Evolutionary Computing. It provides a self-contained exposition of the foundations of type-2 fuzzy neural networks and presents a vast compendium of their applications to control, forecasting, decision making, system identification and other real problems. Type-2 Fuzzy Neural Networks and Their Applications is helpful for teachers and students of universities and colleges, for scientists...

  2. Discriminating lysosomal membrane protein types using dynamic neural network.

    Science.gov (United States)

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology that classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and various other membrane protein classes. In this paper, a neural-network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, comprising seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman regression neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e., their output depends not only on the current input but also on previous outputs. The LRN classifier thus achieves the highest accuracy among all the artificial neural networks tested. The overall jackknife cross-validation accuracy is 93.2% for the data set. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that this protein sequence representation reflects the core features of membrane proteins better than the classical AA composition.
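
    The feature-fusion, PCA and classification pipeline summarized above can be sketched in a few lines; the dimensions below and the nearest-centroid classifier are purely illustrative stand-ins for the PNN/GRNN/Elman/LRN classifiers compared in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical fused feature vectors for N protein sequences
    # (AA composition, sequence length, hydrophobicity measures, R-group,
    #  dipeptide composition, ...), here simply random data.
    N, D, K = 200, 420, 4                      # sequences, fused features, classes
    X = rng.normal(size=(N, D))
    y = rng.integers(0, K, size=N)

    # Dimensionality reduction with PCA (keep the top 30 components).
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:30].T

    # Nearest-centroid classifier as a stand-in for the neural classifiers;
    # with random data the accuracy is near chance, the point is the pipeline.
    centroids = np.stack([Z[y == k].mean(axis=0) for k in range(K)])
    pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    print("training accuracy:", (pred == y).mean())
    ```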

  3. New backpropagation algorithm with type-2 fuzzy weights for neural networks

    CERN Document Server

    Gaxiola, Fernando; Valdez, Fevrier

    2016-01-01

    In this book a neural network learning method with type-2 fuzzy weight adjustment is proposed. The mathematical analysis of the proposed learning method architecture and the adaptation of the type-2 fuzzy weights are presented. The proposed method is based on research of recent methods that handle weight adaptation and especially fuzzy weights. The internal operation of the neuron is changed to perform two internal calculations for the activation function, yielding two results as outputs of the proposed method. Simulation results and a comparative study among monolithic neural networks, neural networks with type-1 fuzzy weights and neural networks with type-2 fuzzy weights are presented to illustrate the advantages of the proposed method. The proposed approach is based on recent methods that handle adaptation of weights using fuzzy logic of type-1 and type-2. The proposed approach is applied to cases of prediction for the Mackey-Glass (for τ=17) and Dow-Jones time series, and recognition of persons with iris bi...
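
    A very rough sketch of the "two internal calculations per neuron" idea, modelling each type-2 fuzzy weight simply as a lower/upper interval; this is a simplification for illustration only, and the interval end-points, averaging step and all values below are assumptions rather than the book's exact formulation.

    ```python
    import numpy as np

    def interval_neuron(x, w_lower, w_upper, bias=0.0):
        """Neuron with interval (type-2-style) weights: two net sums,
        two sigmoid activations, and an averaged crisp output."""
        sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
        y_low = sigmoid(x @ w_lower + bias)
        y_up = sigmoid(x @ w_upper + bias)
        return y_low, y_up, 0.5 * (y_low + y_up)   # crisp output by averaging

    x = np.array([0.3, -1.2, 0.8])
    w_lo = np.array([0.4, 0.1, -0.3])
    w_up = w_lo + 0.2                              # interval width of the fuzzy weight
    print(interval_neuron(x, w_lo, w_up))
    ```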

  4. Global Robust Stability of Switched Interval Neural Networks with Discrete and Distributed Time-Varying Delays of Neural Type

    Directory of Open Access Journals (Sweden)

    Huaiqin Wu

    2012-01-01

    By combining the theories of switched systems and interval neural networks, a mathematical model of switched interval neural networks with discrete and distributed time-varying delays of neural type is presented. A set of interval neural networks with parameter uncertainties and with discrete and distributed time-varying delays of neural type is used as the individual subsystems, and an arbitrary switching rule is assumed to coordinate the switching between these networks. By applying the augmented Lyapunov-Krasovskii functional approach and linear matrix inequality (LMI) techniques, a delay-dependent criterion is obtained, in terms of LMIs, that ensures such switched interval neural networks are globally asymptotically robustly stable. The unknown gain matrix is determined by solving these delay-dependent LMIs. Finally, an illustrative example is given to demonstrate the validity of the theoretical results.
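
    For orientation, an interval neural network with discrete and distributed time-varying delays is commonly written in a form like the following; this generic textbook-style formulation is given for illustration only, and the authors' exact switched model and interval bounds may differ.

    ```latex
    \dot{x}(t) = -A\,x(t) + B\,f\big(x(t)\big) + C\,f\big(x(t-\tau(t))\big)
               + D\int_{t-d(t)}^{t} f\big(x(s)\big)\,ds + u,
    \qquad
    A \in [\underline{A},\,\overline{A}],\;
    B \in [\underline{B},\,\overline{B}],\;\dots
    ```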

  5. Comparing various artificial neural network types for water temperature prediction in rivers

    Science.gov (United States)

    Piotrowski, Adam P.; Napiorkowski, Maciej J.; Napiorkowski, Jaroslaw J.; Osuch, Marzena

    2015-10-01

    A number of methods have been proposed for the prediction of streamwater temperature based on various meteorological and hydrological variables. The present study compares several types of data-driven neural networks (multi-layer perceptron, product-unit, adaptive-network-based fuzzy inference systems and wavelet neural networks) and the nearest-neighbour approach for short-term streamwater temperature prediction in two natural catchments (mountainous and lowland) located in a temperate climate zone with snowy winters and hot summers. To allow wide applicability of such models, autoregressive inputs are not used and only easily available measurements are considered. Each neural network type is calibrated independently 100 times, and the mean, median and standard deviation of the results are used for the comparison. Finally, an ensemble aggregation approach is tested. The results show that the simple and popular multi-layer perceptron is in most cases not outperformed by more complex and advanced models. The choice of neural network depends on the way the models are compared; this is a caution for anyone wishing to promote their own models, whose claimed superiority should be verified in several different ways. The best results are obtained when the mean, maximum and minimum daily air temperatures from the preceding days are used as inputs, together with the current runoff and the declination of the Sun from the two most recent days. The ensemble aggregation approach reduces the mean square error by up to several percent, depending on the case, and noticeably diminishes the differences in performance obtained by the various neural network types.
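
    The ensemble-aggregation step, averaging the outputs of the independently calibrated networks, is easy to sketch; the synthetic "predictions" below stand in for the 100 trained models, and because their errors are artificially independent the gain is much larger than the few percent reported for real runs.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical daily water temperatures and predictions from 100
    # independently calibrated models (stand-ins for the trained networks).
    T = 365
    truth = 10.0 + 8.0 * np.sin(2.0 * np.pi * np.arange(T) / 365.0)
    members = truth + rng.normal(scale=1.0, size=(100, T))

    mse_single = ((members - truth) ** 2).mean(axis=1)   # per-member MSE
    ensemble = members.mean(axis=0)                      # aggregated prediction
    mse_ensemble = ((ensemble - truth) ** 2).mean()

    # With truly independent errors the gain is dramatic; real networks share
    # inputs and calibration data, so their errors correlate and the reported
    # improvement is only up to several percent.
    print("mean single-model MSE:", mse_single.mean())
    print("ensemble MSE:         ", mse_ensemble)
    ```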

  6. Global asymptotic stability to a generalized Cohen-Grossberg BAM neural networks of neutral type delays.

    Science.gov (United States)

    Zhang, Zhengqiu; Liu, Wenbin; Zhou, Dongming

    2012-01-01

    In this paper, we first discuss the existence of a unique equilibrium point of a generalized Cohen-Grossberg BAM neural network with neutral-type delays by means of homeomorphism theory and inequality techniques. Then, by applying this existence result and constructing a Lyapunov functional, we study the global asymptotic stability of the equilibrium solution of the above Cohen-Grossberg BAM neural network of neutral type. In our results, the boundedness hypotheses on the activation functions assumed in existing papers on Cohen-Grossberg neural networks of neutral type are removed. Finally, we give an example to demonstrate the validity of our global asymptotic stability result for the above neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Modular Neural Networks and Type-2 Fuzzy Systems for Pattern Recognition

    CERN Document Server

    Melin, Patricia

    2012-01-01

    This book describes hybrid intelligent systems using type-2 fuzzy logic and modular neural networks for pattern recognition applications. Hybrid intelligent systems combine several intelligent computing paradigms, including fuzzy logic, neural networks, and bio-inspired optimization algorithms, which can be used to produce powerful pattern recognition systems. Type-2 fuzzy logic is an extension of traditional type-1 fuzzy logic that enables managing higher levels of uncertainty in complex real-world problems, which are of particular importance in the area of pattern recognition. The book is organized in three main parts, each containing a group of chapters built around a similar subject. The first part consists of chapters whose main theme is theory and design algorithms; these chapters propose new models and concepts that are the basis for achieving intelligent pattern recognition. The second part contains chapters with the main theme of using type-2 fuzzy models and modular neural ne...

  8. Fine-grained vehicle type recognition based on deep convolution neural networks

    Directory of Open Access Journals (Sweden)

    Hongcai CHEN

    2017-12-01

    Public security and traffic departments place high requirements on the real-time performance and accuracy of vehicle type recognition in complex traffic scenes. Aiming at the problems of heavy demands on police manpower, low retrieval efficiency, and the lack of intelligent means for dealing with false-licence vehicles, fake-plate vehicles and vehicles without plates, this paper proposes a fine-grained vehicle type recognition method based on GoogLeNet deep convolutional neural networks. The filter sizes and numbers of the convolutional neural network are designed, the activation function and vehicle type classifier are optimally selected, and a new network framework is constructed for fine-grained vehicle type recognition. The experimental results show that the proposed method achieves 97% accuracy for fine-grained vehicle type recognition and improves markedly on the original GoogLeNet model. Moreover, the new model effectively reduces the number of training parameters and saves computer memory. Fine-grained vehicle type recognition can be used in intelligent traffic management, and has important theoretical research value and practical significance.

  9. Exponential p-stability of delayed Cohen-Grossberg-type BAM neural networks with impulses

    International Nuclear Information System (INIS)

    Xia Yonghui; Huang Zhenkun; Han Maoan

    2008-01-01

    An impulsive Cohen-Grossberg-type bidirectional associative memory (BAM) neural network with distributed delays is studied. Some new sufficient conditions are established for the existence and global exponential stability of a unique equilibrium without strict conditions imposed on the self-regulation functions. The approaches are based on a Lyapunov-Krasovskii functional and homeomorphism theory. When applied to BAM neural networks, our results generalize some previously known results. It is believed that these results are significant and useful for the design and applications of Cohen-Grossberg-type bidirectional associative memory networks.

  10. Stochastic stability analysis for delayed neural networks of neutral type with Markovian jump parameters

    International Nuclear Information System (INIS)

    Lou Xuyang; Cui Baotong

    2009-01-01

    In this paper, the problem of stochastic stability for a class of delayed neural networks of neutral type with Markovian jump parameters is investigated. The jumping parameters are modelled as a continuous-time, discrete-state Markov process. A sufficient condition guaranteeing the stochastic stability of the equilibrium point is derived for Markovian jumping delayed neural networks (MJDNNs) of neutral type. The stability criterion not only eliminates the differences between excitatory and inhibitory effects on the neural networks, but can also be conveniently checked. The sufficient condition obtained can essentially be solved in terms of a linear matrix inequality. A numerical example is given to show the effectiveness of the obtained results.

  11. A novel delay-dependent criterion for delayed neural networks of neutral type

    International Nuclear Information System (INIS)

    Lee, S.M.; Kwon, O.M.; Park, Ju H.

    2010-01-01

    This Letter considers a robust stability analysis method for delayed neural networks of neutral type. By constructing a new Lyapunov functional, a novel delay-dependent stability criterion is derived in terms of LMIs (linear matrix inequalities). A less conservative stability criterion is derived by using nonlinear properties of the activation function of the neural networks. Two numerical examples are given to show the effectiveness of the proposed method.

  12. Existence and exponential stability of almost periodic solution for Hopfield-type neural networks with impulse

    International Nuclear Information System (INIS)

    Zhang Huiying; Xia Yonghui

    2008-01-01

    In this paper, some sufficient conditions are obtained for checking the existence and exponential stability of an almost periodic solution for bidirectional associative memory Hopfield-type neural networks with impulses. The approaches are based on the contraction principle and the Gronwall-Bellman inequality. This paper considers the almost periodic solution for impulsive Hopfield-type neural networks.

  13. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. There are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit with Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  14. Artificial neural networks in NDT

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed

    2001-01-01

    Artificial neural networks, simply known as neural networks, have attracted considerable interest in recent years, largely because of a growing recognition of the potential of these computational paradigms as powerful alternative models to conventional pattern recognition or function approximation techniques. The neural network approach is having a profound effect on almost all fields, and has been utilised in fields where experimental inter-disciplinary work is being carried out. Being a multidisciplinary subject with a broad knowledge base, Nondestructive Testing (NDT) or Nondestructive Evaluation (NDE) is no exception. This paper describes typical applications of neural networks in NDT/NDE. Three promising types of neural networks are highlighted, namely back-propagation, binary Hopfield and Kohonen's self-organising maps. (Author)

  15. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied to classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of the test data.
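
    A minimal sketch of the consensual idea: transform the input several times, classify each transformed copy with a stage classifier, then combine the stage outputs with weights. Least-squares stage classifiers, random transforms and synthetic data are used here as stand-ins for the stage neural networks and the wavelet-packet transforms of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, D, K, STAGES = 300, 8, 3, 4                 # samples, features, classes, stages
    X = rng.normal(size=(N, D))
    y = rng.integers(0, K, size=N)
    Y = np.eye(K)[y]                               # one-hot targets

    stages, weights = [], []
    for _ in range(STAGES):
        R = rng.normal(size=(D, D))                # data transform for this stage
        Xs = np.tanh(X @ R)                        # transformed copy of the input
        W = np.linalg.lstsq(Xs, Y, rcond=None)[0]  # least-squares stage classifier
        stages.append((R, W))
        weights.append((np.argmax(Xs @ W, 1) == y).mean())  # stage reliability

    weights = np.array(weights) / np.sum(weights)

    # Consensual decision: weighted combination of the stage outputs.
    scores = sum(w * np.tanh(X @ R) @ W for w, (R, W) in zip(weights, stages))
    print("consensus accuracy:", (np.argmax(scores, 1) == y).mean())
    ```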

  16. Multistability of delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions

    International Nuclear Information System (INIS)

    Huang Yu-Jiao; Hu Hai-Gen

    2015-01-01

    In this paper, the multistability issue is discussed for delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions. Based on a fixed-point theorem and the definition of stability, sufficient criteria are established for the existence and stability of multiple equilibria of complex-valued recurrent neural networks. The number of stable equilibria is larger than that of real-valued recurrent neural networks, which can be used to achieve high-capacity associative memories. One numerical example is provided to show the effectiveness and superiority of the presented results. (paper)

  17. Choosing the best neural network type for gas turbine engine condition diagnostics

    Directory of Open Access Journals (Sweden)

    О.С. Якушенко

    2006-01-01

    The paper considers the problem of choosing the neuron type for a neural network. The neuron type has to be optimal in terms of operational stability, training speed and the quality of recognition of the gas turbine engine technical condition class from working-process parameters. Results of the research are given.

  18. Artificial Neural Network Analysis System

    Science.gov (United States)

    2001-02-27

    Contract No. DASG60-00-M-0201 (purchase request: Foot in the Door-01). Title: Artificial Neural Network Analysis System. Author: Powell, Bruce C. Company: Atlantic... Report dates covered: 28-10-2000 to 27-02-2001.

  19. Finite-time stability of neutral-type neural networks with random time-varying delays

    Science.gov (United States)

    Ali, M. Syed; Saravanan, S.; Zhu, Quanxin

    2017-11-01

    This paper is devoted to the finite-time stability analysis of neutral-type neural networks with random time-varying delays. The randomly time-varying delays are characterised by a Bernoulli stochastic variable. The result can be extended to analysis and design for neutral-type neural networks with random time-varying delays. We construct a suitable Lyapunov-Krasovskii functional and establish a set of sufficient conditions, in the form of linear matrix inequalities, that guarantee the finite-time stability of the system concerned. By employing Jensen's inequality, the free-weighting matrix method and Wirtinger's double integral inequality, the proposed conditions are derived, and two numerical examples are presented to show the effectiveness of the developed techniques.

  20. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks, based on gradient descent, has the significant drawback of slow convergence. A Gauss-Newton-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS-type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.

  1. Evaluation of neural networks to identify types of activity using accelerometers

    NARCIS (Netherlands)

    Vries, S.I. de; Garre, F.G.; Engbers, L.H.; Hildebrandt, V.H.; Buuren, S. van

    2011-01-01

    Purpose: To develop and evaluate two artificial neural network (ANN) models based on single-sensor accelerometer data and an ANN model based on the data of two accelerometers for the identification of types of physical activity in adults. Methods: Forty-nine subjects (21 men and 28 women; age range

  2. Global exponential convergence of neutral-type Hopfield neural networks with multi-proportional delays and leakage delays

    International Nuclear Information System (INIS)

    Xu, Changjin; Li, Peiluan

    2017-01-01

    This paper is concerned with a class of neutral-type Hopfield neural networks with multi-proportional delays and leakage delays. Using differential inequality theory, a set of sufficient conditions is derived which guarantees that all solutions of neutral-type Hopfield neural networks with multi-proportional delays and leakage delays converge exponentially to the zero vector. Computer simulations are carried out to verify our theoretical findings. The results of this paper are new and complement some previous studies.

  3. Robust Stability Analysis of Neutral-Type Hybrid Bidirectional Associative Memory Neural Networks with Time-Varying Delays

    OpenAIRE

    Wei Feng; Simon X. Yang; Haixia Wu

    2014-01-01

    The global asymptotic robust stability of equilibrium is considered for neutral-type hybrid bidirectional associative memory neural networks with time-varying delays and parameters uncertainties. The results we obtained in this paper are delay-derivative-dependent and establish various relationships between the network parameters only. Therefore, the results of this paper are applicable to a larger class of neural networks and can be easily verified when compared with the previously reported ...

  4. Convolutional neural network with transfer learning for rice type classification

    Science.gov (United States)

    Patel, Vaibhav Amit; Joshi, Manjunath V.

    2018-04-01

    Presently, rice type is identified manually by humans, which is time-consuming and error-prone; there is therefore a need to do this by machine, making it faster and more accurate. This paper proposes a deep-learning-based method for the classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) on the given segmented rice images. In the second method, we combine a pretrained VGG16 network with the proposed method, using transfer learning, in which the weights of a pretrained network are used to achieve better accuracy. Our approach can also be used for the classification of rice grains as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that, although rice images are quite distinct from ImageNet data, pretraining our architecture on ImageNet boosts classification accuracy significantly.
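
    A hedged sketch of the second method (transfer learning from an ImageNet-pretrained VGG16); the abstract does not specify the classification head, input size or training schedule, so those details below are assumptions.

    ```python
    import tensorflow as tf

    NUM_CLASSES = 5                                    # 5 rice types (2 for broken/normal)

    # Pretrained VGG16 convolutional base: ImageNet weights, no classifier head.
    base = tf.keras.applications.VGG16(weights="imagenet",
                                       include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False                             # transfer learning: freeze the base

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),     # assumed head
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=10)   # e.g. 4000 segmented rice images
    ```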

  5. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities is discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning of models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all the necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  6. Bayesian estimation inherent in a Mexican-hat-type neural network

    Science.gov (United States)

    Takiyama, Ken

    2016-05-01

    Brain functions, such as perception, motor control and learning, and decision making, have been explained based on a Bayesian framework, i.e., to decrease the effects of noise inherent in the human nervous system or external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which was used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.

  7. Identifying the cutting tool type used in excavations using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Jonak, J.; Gajewski, J. [Lublin University of Technology, Lublin (Poland). Faculty of Mechanical Engineering]

    2006-03-15

    The paper presents results of preliminary research on utilising neural networks to identify the type of excavating cutting tool used in the multi-tool excavating heads of mechanical coal miners. Such research is necessary to identify the rock excavation process for a given head and to construct adaptive systems for controlling the excavation process with such a head.

  8. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, is presented for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data of this kind. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks (a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of annualised return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and signal-to-noise ratio. This work demonstrates the applicability of the Polychronous Spiking Network to financial data forecasting and in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.

  9. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)]

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer of a neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
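
    The substitution the authors describe, sum-of-products replaced by maximum (or minimum) of sums, is easy to see side by side in a toy computation:

    ```python
    import numpy as np

    x = np.array([1.0, 4.0, 2.0])      # neural values (inputs)
    w = np.array([0.5, -1.0, 3.0])     # synaptic strengths

    # Classical neuron: linear sum of products (before thresholding).
    classical = np.dot(x, w)

    # Morphological neuron: maximum of sums, nonlinear before thresholding.
    morph_max = np.max(x + w)
    # Dual form: minimum of sums.
    morph_min = np.min(x + w)

    print(classical, morph_max, morph_min)
    ```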

  10. Robust Stability Analysis of Neutral-Type Hybrid Bidirectional Associative Memory Neural Networks with Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Wei Feng

    2014-01-01

    The global asymptotic robust stability of equilibrium is considered for neutral-type hybrid bidirectional associative memory neural networks with time-varying delays and parameter uncertainties. The results obtained in this paper are delay-derivative-dependent and establish various relationships between the network parameters only. Therefore, the results of this paper are applicable to a larger class of neural networks and can be easily verified when compared with previously reported literature results. Two numerical examples are given to verify our results.

  11. Almost periodic cellular neural networks with neutral-type proportional delays

    Science.gov (United States)

    Xiao, Songlin

    2018-03-01

    This paper presents a new result on the existence, uniqueness and generalised exponential stability of almost periodic solutions for cellular neural networks with neutral-type proportional delays and D operator. Based on some novel differential inequality techniques, a testable condition is derived to ensure that all the state trajectories of the system converge to an almost periodic solution with a positive exponential convergence rate. The effectiveness of the obtained result is illustrated by a numerical example.

  12. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  13. Template measurement for plutonium pit based on neural networks

    International Nuclear Information System (INIS)

    Zhang Changfan; Gong Jian; Liu Suping; Hu Guangchun; Xiang Yongchun

    2012-01-01

    Template measurement for a plutonium pit extracts characteristic data from the γ-ray spectrum and the neutron counts emitted by the plutonium. The characteristic data of the suspicious object are compared with the data of the declared plutonium pit to verify whether they are of the same type. In this paper, neural networks are employed as the comparison algorithm for template measurement of plutonium pits. Two kinds of neural networks are created, i.e. BP and LVQ neural networks. They are applied to different aspects of the template measurement and identification. The BP neural network is used for classification of different types of plutonium pits, which is often needed for the management of nuclear materials. The LVQ neural network is used for comparison of inspected objects with the declared one, which is usually applied in the field of nuclear disarmament and verification. (authors)

  14. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    One type of future, improved neural interface is the "cultured probe". It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and surrounded by a local, circularly confined network ("island") of cultured neurons. The main purpose of the local networks is to act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron-electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to "islands" surrounding an electrode site. Additional coating of the neurophobic, polyimide-coated substrate with a triblock-copolymer coating enhances the neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular "island" networks. For connected islands, the larger the island diameter (50, 100 or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV), which is two weeks later than in unpatterned networks.

  15. Neural network classifier of attacks in IP telephony

    Science.gov (United States)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and the article brings a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is a multilayer perceptron neural network. The article describes the inner structure of the neural network used and gives information about its implementation. The learning set for this neural network is based on real attack data collected from an IP telephony honeypot called Dionaea; we prepare the learning set from the real attack data after collecting, cleaning and aggregating this information. After proper training, the neural network is capable of classifying the 6 most commonly used types of VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of the network, which may be logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precautionary steps against attacks.

  16. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  17. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
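
    For context, a sigma-modification-type updating law generically takes the form below, where $e$ is the tracking (control) error, $\phi(x)$ the vector of radial-basis (or hidden-layer) outputs, $\Gamma>0$ an adaptation gain and $\sigma>0$ a small leakage constant; this is the standard robust-adaptive form given for illustration, not necessarily the exact law used in the paper.

    ```latex
    \dot{\hat{W}} = \Gamma\,\big[\phi(x)\,e - \sigma\,\hat{W}\big]
    ```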

  18. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and design its structure and learning algorithm. The multilayer feedforward neural network, the diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  19. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature, and they provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  20. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    Science.gov (United States)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There is much research on gas turbine engine identification via dynamic neural network models, where the goal is to minimize the error between the model and the real object during the identification process. Questions about how the training data set is prepared are usually overlooked. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine. The thermodynamic model's input signal is the fuel consumption and its output signal is the engine rotor rotation frequency. Four types of input signal were used to create the training and testing data sets of the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created based on these types of training data sets. Each neural network was tested with the four types of test data sets. As a result, 16 transition processes from the four neural networks and the four test data sets were compared with the corresponding results of the thermodynamic model. Errors were compared across all neural networks for each test data set, giving the error ranges for each test data set. These error ranges are small; therefore the influence of the data set type on identification accuracy is low.

  1. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  2. Storage capacity and retrieval time of small-world neural networks

    International Nuclear Information System (INIS)

    Oshima, Hiraku; Odagaki, Takashi

    2007-01-01

    To understand the influence of structure on the function of neural networks, we study the storage capacity and retrieval time of Hopfield-type neural networks for four network structures: regular, small-world, and random networks generated by the Watts-Strogatz (WS) model, and the same network as the neural network of the nematode Caenorhabditis elegans. Using computer simulations, we find that (1) as the randomness of the network is increased, its storage capacity is enhanced; (2) the retrieval time of WS networks does not depend on the network structure, but the retrieval time of the C. elegans neural network is longer than that of WS networks; (3) the storage capacity of the C. elegans network is smaller than that of networks generated by the WS model, though the neural network of C. elegans is considered to be a small-world network.
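
    The kind of experiment described, a Hebbian Hopfield network whose couplings are restricted to a Watts-Strogatz small-world graph, can be sketched compactly; the network size, rewiring probability and pattern load below are illustrative and are not the values used in the paper.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(3)
    N, K, P, M = 200, 10, 0.1, 10      # neurons, lattice degree, rewiring prob., patterns

    A = nx.to_numpy_array(nx.watts_strogatz_graph(N, K, P, seed=3))  # adjacency mask
    xi = rng.choice([-1, 1], size=(M, N))                            # stored patterns
    W = (xi.T @ xi) * A / N                                          # Hebbian weights on the graph
    np.fill_diagonal(W, 0)

    def recall(state, sweeps=20):
        """Asynchronous Hopfield updates; returns the relaxed state."""
        s = state.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    noisy = xi[0] * np.where(rng.random(N) < 0.1, -1, 1)   # flip roughly 10% of the bits
    overlap = recall(noisy) @ xi[0] / N                    # 1.0 means perfect retrieval
    print("overlap with the stored pattern:", overlap)
    ```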

  3. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  4. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  5. Iris double recognition based on modified evolutionary neural network

    Science.gov (United States)

    Liu, Shuai; Liu, Yuan-Ning; Zhu, Xiao-Dong; Huo, Guang; Liu, Wen-Tao; Feng, Jia-Kai

    2017-11-01

    Aiming at multicategory iris recognition under illumination and noise interference, this paper proposes a method of iris double recognition based on a modified evolutionary neural network. Histogram equalization and a Laplacian of Gaussian operator are used to preprocess the iris and suppress illumination and noise interference, and a Haar wavelet converts the iris features to a binary feature encoding. The Hamming distance between the test iris and the template iris is calculated and compared with a classification threshold to determine the iris type. If the iris cannot be identified as a different type, a secondary recognition is performed. The connection weights of the back-propagation (BP) neural network are trained adaptively by the modified evolutionary neural network, which is composed of particle swarm optimization with a mutation operator and the BP neural network. Experimental results on different iris libraries under different circumstances show that, under illumination and noise interference, the correct recognition rate of this algorithm is higher, the ROC curve is closer to the coordinate axis, the training and recognition time is shorter, and the stability and robustness are better.
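
    The first-stage decision rule (a normalized Hamming distance compared against a threshold, with fall-through to the secondary, evolutionary-BP recognition) might look like the sketch below; the code length, thresholds and noise level are hypothetical.

    ```python
    import numpy as np

    def first_stage(test_code, template_code, accept=0.32, reject=0.40):
        """Compare binary iris codes by normalized Hamming distance.
        Thresholds are illustrative, not the paper's values."""
        hd = np.mean(test_code != template_code)
        if hd <= accept:
            return "same class", hd
        if hd >= reject:
            return "different class", hd
        return "uncertain -> secondary recognition", hd

    rng = np.random.default_rng(4)
    template = rng.integers(0, 2, size=2048)        # Haar-wavelet binary iris code
    test = template ^ (rng.random(2048) < 0.05)     # same iris with 5% bit noise
    print(first_stage(test, template))
    ```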

  6. Eddy Current Flaw Characterization Using Neural Networks

    International Nuclear Information System (INIS)

    Song, S. J.; Park, H. J.; Shin, Y. K.

    1998-01-01

    Determination of the location, shape and size of a flaw from its eddy current testing signal is one of the fundamental issues in eddy current nondestructive evaluation of steam generator tubes. Here, we propose an approach to this problem: inversion of the eddy current flaw signal using neural networks trained with finite-element-model-based synthetic signatures. A total of 216 eddy current signals from four different types of axisymmetric flaws in tubes are generated by finite element models whose accuracy is experimentally validated. From each simulated signature, 24 eddy current features are extracted, and among them 13 features are finally selected for flaw characterization. Based on these features, probabilistic neural networks discriminate flaws into four different types according to location and shape, and back-propagation neural networks subsequently determine the size parameters of the discriminated flaw.

  7. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  8. A TLD dose algorithm using artificial neural networks

    International Nuclear Information System (INIS)

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-01-01

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of the functional link network (FLN). A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input) many times. The algorithm, trained in that way, eventually becomes capable of producing its own solution to similar (but not exactly the same) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.

  9. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
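
    Ring-network models with spike-frequency adaptation are commonly written in a form like the following, where $u(x,t)$ is the activity at ring position $x$, $J$ the recurrent kernel, $f$ the activation function, $I(x)$ the external stimulus and $a(x,t)$ the adaptation variable; this generic formulation is shown for illustration and may differ from the paper's exact equations.

    ```latex
    \tau \frac{\partial u(x,t)}{\partial t} = -u(x,t)
      + \int J(x-x')\, f\big(u(x',t)\big)\, dx' + I(x) - a(x,t),
    \qquad
    \tau_a \frac{\partial a(x,t)}{\partial t} = -a(x,t) + \gamma\, f\big(u(x,t)\big)
    ```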

  10. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  11. Diagnostic Classifiers: Revealing how Neural Networks Process Hierarchical Structure

    NARCIS (Netherlands)

    Veldhoen, S.; Hupkes, D.; Zuidema, W.

    2016-01-01

    We investigate how neural networks can be used for hierarchical, compositional semantics. To this end, we define the simple but nontrivial artificial task of processing nested arithmetic expressions and study whether different types of neural networks can learn to add and subtract. We find that
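
    The artificial task itself, generating nested arithmetic expressions together with their values as training data, is easy to reproduce; the grammar below (addition and subtraction over small integers) is an assumption consistent with the abstract, not necessarily the authors' exact data generator.

    ```python
    import random

    def expression(depth):
        """Return a nested arithmetic expression (as a string) and its value."""
        if depth == 0 or random.random() < 0.3:
            n = random.randint(-9, 9)
            return str(n), n
        left, lv = expression(depth - 1)
        right, rv = expression(depth - 1)
        if random.random() < 0.5:
            return f"( {left} + {right} )", lv + rv
        return f"( {left} - {right} )", lv - rv

    random.seed(0)
    for _ in range(3):
        expr, value = expression(depth=3)
        print(expr, "=", value)   # nested expressions paired with their integer values
    ```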

  12. Neural network tagging in a toy model

    International Nuclear Information System (INIS)

    Milek, Marko; Patel, Popat

    1999-01-01

    The purpose of this study is a comparison of the Artificial Neural Network approach to HEP analysis against traditional methods. A toy model used in this analysis consists of two types of particles defined by four generic properties. A number of 'events' were created according to the model using standard Monte Carlo techniques. Several fully connected, feed-forward, multi-layered Artificial Neural Networks were trained to tag the model events. The performance of each network was compared to the standard analysis mechanisms and significant improvement was observed.

  13. Topology influences performance in the associative memory neural networks

    International Nuclear Information System (INIS)

    Lu Jianquan; He Juan; Cao Jinde; Gao Zhiqiang

    2006-01-01

    To explore how topology affects performance within Hopfield-type associative memory neural networks (AMNNs), we studied the computational performance of neural networks with regular lattice, random, small-world, and scale-free structures. In this Letter, we found that the memory performance of neural networks obtained through asynchronous updating from 'larger' nodes to 'smaller' nodes is better than with asynchronous updating in random order, especially for the scale-free topology. The computational performance of associative memory neural networks linked by the above-mentioned network topologies with the same numbers of nodes (neurons) and edges (synapses) was studied. As the topologies become more random and less locally ordered, the performance of the associative memory neural network improves considerably. By comparison, we show that the regular lattice and the random network form two extremes in terms of pattern stability and retrievability. For a network, pattern stability and retrievability can be largely enhanced by adding a random component or some shortcuts to its structured component. According to the conclusions of this Letter, we can design associative memory neural networks with high performance and minimal interconnect requirements.

  14. Predicting The Type Of Pregnancy Using Flexible Discriminate Analysis And Artificial Neural Networks: A Comparison Study

    International Nuclear Information System (INIS)

    Hooman, A.; Mohammadzadeh, M.

    2008-01-01

    Some medical and epidemiological surveys have been designed to predict a nominal response variable with several levels. With regard to the type of pregnancy there are four possible states: wanted, unwanted by wife, unwanted by husband and unwanted by couple. In this paper, we have predicted the type of pregnancy, as well as the factors influencing it, using three different models and comparing them. For this multi-level nominal response, we developed a multinomial logistic regression, a neural network and a flexible discriminant analysis based on the data and compared their results using two statistical indices: the area under the ROC curve and the kappa coefficient. Based on these two indices, flexible discriminant analysis proved to be a better fit for prediction on these data in comparison to the other methods. When the relations among variables are complex, one can use flexible discriminant analysis instead of multinomial logistic regression and neural networks to predict nominal response variables with several levels in order to obtain more accurate predictions
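
    A hedged sketch of the comparison protocol only: two of the three models (multinomial logistic regression and a neural network) fitted to a synthetic four-class outcome and scored with the two indices named in the abstract, the multiclass area under the ROC curve and the kappa coefficient. The data, features and model settings are placeholders; flexible discriminant analysis is omitted because scikit-learn has no direct equivalent.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Synthetic stand-in for the survey data: four pregnancy-type classes.
X, y = make_classification(n_samples=1500, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "multinomial logistic": LogisticRegression(max_iter=2000),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)
    auc = roc_auc_score(y_te, proba, multi_class="ovr")   # multiclass area under the ROC curve
    kappa = cohen_kappa_score(y_te, model.predict(X_te))
    print(f"{name}: AUC = {auc:.3f}, kappa = {kappa:.3f}")
```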

  15. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods and neural networks in particular. Deep learning neural networks are the most promising modern technique to separate signal and background and nowadays can be widely and successfully implemented as a part of physical analysis. In this article we compare deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  16. State estimation for neural neutral-type networks with mixed time-varying delays and Markovian jumping parameters

    International Nuclear Information System (INIS)

    Lakshmanan, S.; Park, Ju H.; Jung, H. Y.; Balasubramaniam, P.

    2012-01-01

    This paper is concerned with a delay-dependent state estimator for neutral-type neural networks with mixed time-varying delays and Markovian jumping parameters. The addressed neural networks have a finite number of modes, and the modes may jump from one to another according to a Markov process. By construction of a suitable Lyapunov-Krasovskii functional, a delay-dependent condition is developed to estimate the neuron states through available output measurements such that the estimation error system is globally asymptotically stable in the mean square. The criterion is formulated in terms of a set of linear matrix inequalities (LMIs), which can be checked efficiently by use of some standard numerical packages

  17. Artificial Astrocytes Improve Neural Network Performance

    Science.gov (United States)

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of performances of NN with different number of neurons or different architectures indicate that the effects of NGN cannot be accounted for an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and established the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  18. Artificial astrocytes improve neural network performance.

    Directory of Open Access Journals (Sweden)

    Ana B Porto-Pazos

    Full Text Available Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of performances of NN with different number of neurons or different architectures indicate that the effects of NGN cannot be accounted for an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and established the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  19. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of performances of NN with different number of neurons or different architectures indicate that the effects of NGN cannot be accounted for an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and established the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  20. Exponential convergence rate estimation for uncertain delayed neural networks of neutral type

    International Nuclear Information System (INIS)

    Lien, C.-H.; Yu, K.-W.; Lin, Y.-F.; Chung, Y.-J.; Chung, L.-Y.

    2009-01-01

    The global exponential stability for a class of uncertain delayed neural networks (DNNs) of neutral type is investigated in this paper. Delay-dependent and delay-independent criteria are proposed to guarantee the robust stability of DNNs via LMI and Razumikhin-like approaches. For a given delay, the maximal allowable exponential convergence rate will be estimated. Some numerical examples are given to illustrate the effectiveness of our results. The simulation results reveal significant improvement over the recent results.

  1. Neural networks and their application to nuclear power plant diagnosis

    International Nuclear Information System (INIS)

    Reifman, J.

    1997-01-01

    The authors present a survey of artificial neural network-based computer systems that have been proposed over the last decade for the detection and identification of component faults in thermal-hydraulic systems of nuclear power plants. The capabilities and advantages of applying neural networks as decision support systems for nuclear power plant operators and their inherent characteristics are discussed along with their limitations and drawbacks. The types of neural network structures used and their applications are described and the issues of process diagnosis and neural network-based diagnostic systems are identified. A total of thirty-four publications are reviewed

  2. Artificial Neural Networks to Detect Risk of Type 2 Diabetes | Baha ...

    African Journals Online (AJOL)

    A multilayer feedforward architecture with a backpropagation algorithm was designed using the Neural Network Toolbox of Matlab. The network was trained using batch-mode backpropagation with gradient descent and momentum. The best-performing network identified during training had 2 hidden layers of 6 and 3 neurons, ...
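
    The record specifies the architecture fairly precisely (two hidden layers of 6 and 3 neurons, batch-mode backpropagation with gradient descent and momentum, built with the Matlab Neural Network Toolbox). The sketch below reproduces that architecture with scikit-learn on placeholder risk-factor data, since the study's actual input variables are not listed in the snippet.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder risk-factor matrix and diabetes-risk labels; the real study's
# input variables are not given in the snippet above.
X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)

# Two hidden layers of 6 and 3 neurons, full-batch gradient descent with momentum,
# mirroring the architecture reported in the record.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(6, 3), solver="sgd", batch_size=400,
                  momentum=0.9, learning_rate_init=0.05, max_iter=5000, random_state=0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```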

  3. Thermal performance parameters estimation of hot box type solar cooker by using artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Kurt, Hueseyin; Atik, Kemal; Oezkaymak, Mehmet; Recebli, Ziyaddin [Zonguldak Karaelmas University, Karabuk Technical Education Faculty, 78200 Karabuk (Turkey)

    2008-02-15

    Work to date has shown that an Artificial Neural Network (ANN) has not been used for predicting the thermal performance parameters of a solar cooker. The objective of this study is to predict thermal performance parameters such as absorber plate, enclosure air and pot water temperatures of the experimentally investigated box type solar cooker by using an ANN. The data set is obtained from the box type solar cooker which was tested under various experimental conditions. A feed-forward neural network based on the back propagation algorithm was developed to predict the thermal performance of the solar cooker with and without reflector. Mathematical formulations derived from the ANN model are presented for each predicted temperature. The experimental data set consists of 126 values. These were divided into two groups, of which 96 values were used for training/learning of the network and the rest of the data (30 values) for testing/validation of the network performance. The performance of the ANN predictions was evaluated by comparing the prediction results with the experimental results. The results showed a good regression analysis with correlation coefficients in the range of 0.9950-0.9987 and mean relative errors (MREs) in the range of 3.925-7.040% for the test data set. The regression coefficients indicated that the ANN model can successfully be used for the prediction of the thermal performance parameters of a box type solar cooker with a high degree of accuracy. (author)
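
    A minimal sketch of the reported workflow, assuming synthetic stand-ins for the 126 experimental records: a feed-forward regressor predicts the three temperatures, the data are split 96/30 as in the record, and the correlation coefficient and mean relative error are computed for each output. The input variables and their ranges are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for the 126 experimental records: the inputs might be, e.g.,
# solar radiation, ambient temperature and time of day (the exact variables are assumptions).
X = rng.uniform(size=(126, 3))
T = np.column_stack([                      # absorber plate, enclosure air, pot water [deg C]
    60 + 80 * X[:, 0] + 5 * X[:, 1],
    40 + 50 * X[:, 0] + 3 * X[:, 2],
    30 + 60 * X[:, 0] * X[:, 2],
]) + rng.normal(scale=2.0, size=(126, 3))

X_train, X_test = X[:96], X[96:]           # 96 training / 30 test values, as in the record
T_train, T_test = T[:96], T[96:]

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
ann.fit(X_train, T_train)
T_pred = ann.predict(X_test)

for j, name in enumerate(["absorber plate", "enclosure air", "pot water"]):
    r = np.corrcoef(T_test[:, j], T_pred[:, j])[0, 1]
    mre = np.mean(np.abs((T_pred[:, j] - T_test[:, j]) / T_test[:, j])) * 100
    print(f"{name}: R = {r:.4f}, MRE = {mre:.2f}%")
```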

  4. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: 1) Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. 2) Amongst numerous training algorithms, only the Recursive Prediction Error Method using a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System...
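
    The NARMAX details are truncated in the snippet above, so the following sketches only the simpler NARX special case of input/output identification: a multilayer perceptron fed with lagged inputs and outputs. The toy plant, lag orders and use of scikit-learn (rather than the recursive prediction error method named in the abstract) are assumptions made for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy non-linear plant to be identified (an assumption, for illustration only):
# y(t) = 0.6*y(t-1) - 0.2*y(t-2) + u(t-1)**2 + noise
T = 1000
u = rng.uniform(-1, 1, size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + u[t - 1] ** 2 + 0.01 * rng.normal()

# NARX-style regressors: past outputs and past inputs form the network input vector.
na, nb = 2, 2
rows = range(max(na, nb), T)
X = np.array([[y[t - 1], y[t - 2], u[t - 1], u[t - 2]] for t in rows])
target = np.array([y[t] for t in rows])

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:700], target[:700])
print("one-step-ahead R^2 on unseen data:", model.score(X[700:], target[700:]))
```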

  5. Artificial neural networks applied to forecasting time series.

    Science.gov (United States)

    Montaño Moreno, Juan J; Palmer Pol, Alfonso; Muñoz Gracia, Pilar

    2011-04-01

    This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved to be useful in time series forecasting, and also a standard procedure for the practical application of ANN in this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by the four neural network models analyzed is less than 10%. In accordance with the interpretation criteria of this performance, it can be concluded that the neural network models show a close fit regarding their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP. The GRNN model is the one with the worst performance. Finally, we analyze the advantages and limitations of ANN, the possible solutions to these limitations, and provide an orientation towards future research.

  6. Optimal neural networks for protein-structure prediction

    International Nuclear Information System (INIS)

    Head-Gordon, T.; Stillinger, F.H.

    1993-01-01

    The successful application of neural-network algorithms for prediction of protein structure is stymied by three problem areas: the sparsity of the database of known protein structures, poorly devised network architectures which make the input-output mapping opaque, and a global optimization problem in the multiple-minima space of the network variables. We present a simplified polypeptide model residing in two dimensions with only two amino-acid types, A and B, which allows the determination of the global energy structure for all possible sequences of pentamer, hexamer, and heptamer lengths. This model simplicity allows us to compile a complete structural database and to devise neural networks that reproduce the tertiary structure of all sequences with absolute accuracy and with the smallest number of network variables. These optimal networks reveal that the three problem areas are convoluted, but that thoughtful network designs can actually deconvolute these detrimental traits to provide network algorithms that genuinely impact on the ability of the network to generalize or learn the desired mappings. Furthermore, the two-dimensional polypeptide model shows sufficient chemical complexity so that transfer of neural-network technology to more realistic three-dimensional proteins is evident

  7. Neural networks and its application in biomedical engineering

    International Nuclear Information System (INIS)

    Husnain, S.K.; Bhatti, M.I.

    2002-01-01

    An artificial neural network (ANN) is an information processing system that has certain performance characteristics in common with biological neural networks. A neural network is characterized by the connections between the neurons, the method of determining the weights on the connections, and its activation functions, while a biological neuron has three types of components that are of particular interest in understanding an artificial neuron: its dendrites, soma, and axon. The action of the chemical transmitter modifies the incoming signal. The study of neural networks is an extremely interdisciplinary field. Computer-based diagnosis is an increasingly used method that tries to improve the quality of health care. Systems based on neural networks have been developed extensively in the last ten years with the hope that medical diagnosis and therefore medical care would improve dramatically. The addition of a symbolic processing layer enhances the ANNs in a number of ways. It is, for instance, possible to supplement a network that is purely diagnostic with a level that recommends or nodes in order to more closely simulate the nervous system. (author)

  8. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  9. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  10. Detecting and Predicting Muscle Fatigue during Typing By SEMG Signal Processing and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Elham Ghoochani

    2011-03-01

    Full Text Available Introduction: Repetitive strain injuries are one of the most prevalent problems in occupational diseases. Repetition, vibration and bad postures of the extremities are physical risk factors related to work that can cause chronic musculoskeletal disorders. Repetitive work on a computer with low level contraction requires the posture to be maintained for a long time, which can cause muscle fatigue. Muscle fatigue in the shoulders and neck is one of the most prevalent problems reported by computer users, especially during typing. Surface electromyography (SEMG) signals are used for detecting muscle fatigue as a non-invasive method. Material and Methods: Nine healthy females volunteered for signal recording during typing. EMG signals were recorded from the trapezius muscle, which is subjected to muscle fatigue during typing. After signal analysis and feature extraction, detecting and predicting muscle fatigue was performed by using the MLP artificial neural network. Results: Recorded signals were analyzed in the time and frequency domains for feature extraction. Results of classification showed that the MLP neural network can detect and predict muscle fatigue during typing with 80.79% ± 1.04% accuracy. Conclusion: Intelligent classification and prediction of muscle fatigue can have many applications in human factors engineering (ergonomics), rehabilitation engineering and biofeedback equipment for mitigating the injuries of repetitive work.
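
    A hedged sketch of the general pipeline only: each EMG window is reduced to a time-domain feature (RMS) and a frequency-domain feature (median frequency, a common fatigue indicator), and an MLP classifies windows as fatigued or not. The signal model, sampling rate and labels are synthetic placeholders, not the trapezius recordings of the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 1000                                   # sampling rate in Hz (assumed)

def features(window):
    """RMS (time domain) and median frequency (frequency domain) of one EMG window."""
    rms = np.sqrt(np.mean(window ** 2))
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1 / fs)
    cumulative = np.cumsum(spectrum)
    median_freq = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]
    return rms, median_freq

# Synthetic "fresh" vs "fatigued" windows: fatigue is mimicked by a higher-amplitude,
# lower-frequency signal (a crude stand-in for real trapezius recordings).
def make_window(fatigued):
    f = 60.0 if fatigued else 120.0
    amp = 1.5 if fatigued else 1.0
    t = np.arange(fs) / fs
    return amp * np.sin(2 * np.pi * f * t) + 0.3 * rng.normal(size=fs)

X = np.array([features(make_window(k % 2 == 1)) for k in range(400)])
y = np.array([k % 2 for k in range(400)])   # 0 = not fatigued, 1 = fatigued

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
clf.fit(X_tr, y_tr)
print("fatigue detection accuracy:", clf.score(X_te, y_te))
```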

  11. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural system depends in part on the emergence of the correct functional connectivity in its input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement for building a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to the fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method: fractional calculus to implement FHNN. First, we implement fractor in the form of an analog circuit. Second, we implement FHNN by utilizing fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  13. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network

    Directory of Open Access Journals (Sweden)

    M. Fatih Adak

    2016-02-01

    Full Text Available Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different aroma data (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using artificial bee colony algorithm (ABC), which is known to be successful in exploration. Test data were given to two different neural networks, each of which were trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data.

  14. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network.

    Science.gov (United States)

    Adak, M Fatih; Yumusak, Nejat

    2016-02-27

    Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different aroma data (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using artificial bee colony algorithm (ABC), which is known to be successful in exploration. Test data were given to two different neural networks, each of which were trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data.
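
    A compact, simplified sketch of the idea in these two records: the weights of a small feed-forward classifier are optimized by a textbook-style artificial bee colony (employed, onlooker and scout phases) instead of backpropagation. The synthetic four-class 'aroma' data, the single hidden layer (the study used two) and all ABC settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the four-class aroma data (strawberry, lemon, cherry, melon);
# 8 sensor channels are an assumption, not the MOSES II configuration.
n_per_class, n_sensors, n_classes, N_HID = 50, 8, 4, 10
centers = rng.normal(scale=2.0, size=(n_classes, n_sensors))
X = np.vstack([c + rng.normal(scale=0.7, size=(n_per_class, n_sensors)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

DIM = n_sensors * N_HID + N_HID + N_HID * n_classes + n_classes

def unpack(vec):
    """Map a flat parameter vector to the weights of a one-hidden-layer network."""
    i = 0
    W1 = vec[i:i + n_sensors * N_HID].reshape(n_sensors, N_HID); i += n_sensors * N_HID
    b1 = vec[i:i + N_HID]; i += N_HID
    W2 = vec[i:i + N_HID * n_classes].reshape(N_HID, n_classes); i += N_HID * n_classes
    b2 = vec[i:i + n_classes]
    return W1, b1, W2, b2

def loss(vec):
    """Cross-entropy of the network defined by vec on the whole training set."""
    W1, b1, W2, b2 = unpack(vec)
    logits = np.tanh(X @ W1 + b1) @ W2 + b2
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p = p / p.sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def fitness(vec):
    return 1.0 / (1.0 + loss(vec))

# Artificial bee colony over the flattened network weights (simplified textbook ABC).
n_food, limit, cycles = 20, 30, 300
foods = rng.uniform(-1, 1, size=(n_food, DIM))
fits = np.array([fitness(f) for f in foods])
trials = np.zeros(n_food)

def try_neighbor(i):
    """Perturb one dimension of food source i towards/away from a random partner."""
    k = rng.choice([j for j in range(n_food) if j != i])
    d = rng.integers(DIM)
    cand = foods[i].copy()
    cand[d] += rng.uniform(-1, 1) * (foods[i, d] - foods[k, d])
    f = fitness(cand)
    if f > fits[i]:                                           # greedy selection
        foods[i], fits[i], trials[i] = cand, f, 0
    else:
        trials[i] += 1

for _ in range(cycles):
    for i in range(n_food):                                   # employed-bee phase
        try_neighbor(i)
    probs = fits / fits.sum()
    for i in rng.choice(n_food, size=n_food, p=probs):        # onlooker-bee phase
        try_neighbor(i)
    worst = int(np.argmax(trials))                            # scout-bee phase
    if trials[worst] > limit:
        foods[worst] = rng.uniform(-1, 1, size=DIM)
        fits[worst], trials[worst] = fitness(foods[worst]), 0

W1, b1, W2, b2 = unpack(foods[np.argmax(fits)])
pred = np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1)
print("training accuracy of the ABC-trained network:", np.mean(pred == y))
```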

  15. Delay-dependent stability of neural networks of neutral type with time delay in the leakage term

    International Nuclear Information System (INIS)

    Li, Xiaodi; Cao, Jinde

    2010-01-01

    This paper studies the global asymptotic stability of neural networks of neutral type with mixed delays. The mixed delays include constant delay in the leakage term (i.e. 'leakage delay'), time-varying delays and continuously distributed delays. Based on the topological degree theory, Lyapunov method and linear matrix inequality (LMI) approach, some sufficient conditions are derived ensuring the existence, uniqueness and global asymptotic stability of the equilibrium point, which are dependent on both the discrete and distributed time delays. These conditions are expressed in terms of LMI and can be easily checked by the MATLAB LMI toolbox. Even if there is no leakage delay, the obtained results are less restrictive than some recent works. It can be applied to neural networks of neutral type with activation functions without assuming their boundedness, monotonicity or differentiability. Moreover, the differentiability of the time-varying delay in the non-neutral term is removed. Finally, two numerical examples are given to show the effectiveness of the proposed method

  16. Changes in neural network homeostasis trigger neuropsychiatric symptoms.

    Science.gov (United States)

    Winkelmann, Aline; Maggio, Nicola; Eller, Joanna; Caliskan, Gürsel; Semtner, Marcus; Häussler, Ute; Jüttner, René; Dugladze, Tamar; Smolinsky, Birthe; Kowalczyk, Sarah; Chronowska, Ewa; Schwarz, Günter; Rathjen, Fritz G; Rechavi, Gideon; Haas, Carola A; Kulik, Akos; Gloveli, Tengis; Heinemann, Uwe; Meier, Jochen C

    2014-02-01

    The mechanisms that regulate the strength of synaptic transmission and intrinsic neuronal excitability are well characterized; however, the mechanisms that promote disease-causing neural network dysfunction are poorly defined. We generated mice with targeted neuron type-specific expression of a gain-of-function variant of the neurotransmitter receptor for glycine (GlyR) that is found in hippocampectomies from patients with temporal lobe epilepsy. In this mouse model, targeted expression of gain-of-function GlyR in terminals of glutamatergic cells or in parvalbumin-positive interneurons persistently altered neural network excitability. The increased network excitability associated with gain-of-function GlyR expression in glutamatergic neurons resulted in recurrent epileptiform discharge, which provoked cognitive dysfunction and memory deficits without affecting bidirectional synaptic plasticity. In contrast, decreased network excitability due to gain-of-function GlyR expression in parvalbumin-positive interneurons resulted in an anxiety phenotype, but did not affect cognitive performance or discriminative associative memory. Our animal model unveils neuron type-specific effects on cognition, formation of discriminative associative memory, and emotional behavior in vivo. Furthermore, our data identify a presynaptic disease-causing molecular mechanism that impairs homeostatic regulation of neural network excitability and triggers neuropsychiatric symptoms.

  17. Application of artificial neural networks in particle physics

    International Nuclear Information System (INIS)

    Kolanoski, H.

    1995-04-01

    The application of Artificial Neural Networks in Particle Physics is reviewed. Most common is the use of feed-forward nets for event classification and function approximation. This network type is best suited for a hardware implementation and special VLSI chips are available which are used in fast trigger processors. Also discussed are fully connected networks of the Hopfield type for pattern recognition in tracking detectors. (orig.)

  18. Forex Market Prediction Using NARX Neural Network with Bagging

    Directory of Open Access Journals (Sweden)

    Shahbazi Nima

    2016-01-01

    Full Text Available We propose a new method for predicting movements in the Forex market based on a NARX neural network with a time-shifting bagging technique and financial indicators, such as the relative strength index and stochastic indicators. Neural networks have prominent learning ability but they often exhibit bad and unpredictable performance for noisy data. When compared with static neural networks, our method significantly reduces the error rate of the response and improves the performance of the prediction. We tested three different types of architecture for predicting the response and determined the best network approach. We applied our method to predicting the hourly foreign exchange rates and found remarkable predictability in comprehensive experiments with 2 different foreign exchange rates (GBPUSD and EURUSD).
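
    A hedged sketch combining the ingredients named in the abstract: lagged exchange-rate values as NARX-style regressors, a simple 14-period relative strength index as an extra input, and bagging over several networks trained on bootstrap resamples. The synthetic price series, the moving-average form of the RSI and all parameter choices are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic hourly exchange-rate series standing in for GBPUSD/EURUSD data.
T = 2000
price = 1.30 + np.cumsum(rng.normal(scale=1e-3, size=T))

def rsi(p, period=14):
    """Simple moving-average variant of the 14-period relative strength index."""
    delta = np.diff(p, prepend=p[0])
    gain = np.where(delta > 0, delta, 0.0)
    loss = np.where(delta < 0, -delta, 0.0)
    avg_gain = np.convolve(gain, np.ones(period) / period, mode="same")
    avg_loss = np.convolve(loss, np.ones(period) / period, mode="same") + 1e-12
    return 100 - 100 / (1 + avg_gain / avg_loss)

ind = rsi(price)
lags = 5
rows = range(lags, T - 1)
X = np.array([np.concatenate([price[t - lags:t], [ind[t]]]) for t in rows])
target = price[lags + 1:T]                        # next-hour price

split = 1500
X_tr, y_tr, X_te, y_te = X[:split], target[:split], X[split:], target[split:]

# Bagging: each network sees a bootstrap resample of the training rows.
n_models = 10
preds = []
for m in range(n_models):
    idx = rng.integers(0, split, size=split)
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=m)
    net.fit(X_tr[idx], y_tr[idx])
    preds.append(net.predict(X_te))

ensemble = np.mean(preds, axis=0)
print("mean absolute error of the bagged ensemble:", np.mean(np.abs(ensemble - y_te)))
```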

  19. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    A web-based software model (http://fuzzy.iau.dtu.dk/aphasia.nsf) was developed as an example for classification of aphasia using neural networks. Two multilayer perceptrons were used to classify the type of aphasia (Broca, Wernicke, anomic, global) according to the results in some subtests...

  20. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuro-science. It draws its methods in large degree from statistical physics and its potential applications lie mainly in computer science and engineering. Neural networks models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives an historical presentation of neural networks development and interest in performing complex tasks. Then, an exhaustive overview of data management and networks computation methods is given: the supervised learning and the associative memory problem, the capacity of networks, the Perceptron networks, the functional link networks, the Madaline (Multiple Adalines) networks, the back-propagation networks, the reduced coulomb energy (RCE) networks, the unsupervised learning and the competitive learning and vector quantization. An example of application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors from the LEAR at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  1. Landslide Susceptibility Index Determination Using Artificial Neural Network

    Science.gov (United States)

    Kawabata, D.; Bandibas, J.; Urai, M.

    2004-12-01

    The occurrence of landslides is the result of the interaction of complex and diverse environmental factors. Geomorphic features, rock types and geologic structure are especially important base factors of landslide occurrence. Generating a landslide susceptibility index by defining the relationship between landslide occurrence and these base factors using conventional mathematical and statistical methods is very difficult and inaccurate. This study focuses on generating a landslide susceptibility index using artificial neural networks in the Southern Japanese Alps. The training data are geomorphic (e.g. altitude, slope and aspect) and geologic parameters (e.g. rock type, distance from geologic boundary and geologic dip-strike angle) and landslides. An artificial neural network structure and training scheme are formulated to generate the index. Data from areas with and without landslide occurrences are used to train the network. The network is trained to output 1 when the input data are from areas with landslides and 0 when no landslide occurred. The trained network generates an output ranging from 0 to 1 reflecting the possibility of landslide occurrence based on the inputted data. Output values nearer to 1 mean a higher possibility of landslide occurrence. The artificial neural network model is incorporated into the GIS software to generate a landslide susceptibility map.
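
    A minimal sketch of how a trained network yields a 0-1 susceptibility index: the six input factors follow the list in the record, but the cell values, labels and network settings below are random placeholders rather than data from the Southern Japanese Alps.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder grid cells described by the factors named in the record; values are random.
n_cells = 2000
X = np.column_stack([
    rng.uniform(200, 3000, n_cells),      # altitude [m]
    rng.uniform(0, 60, n_cells),          # slope [deg]
    rng.uniform(0, 360, n_cells),         # aspect [deg]
    rng.integers(0, 5, n_cells),          # rock type code
    rng.uniform(0, 5000, n_cells),        # distance from geologic boundary [m]
    rng.uniform(0, 90, n_cells),          # dip-strike angle [deg]
])
# Placeholder labels: 1 = landslide occurred in the cell, 0 = no landslide.
y = (0.03 * X[:, 1] + 0.0002 * X[:, 0] + rng.normal(scale=1.0, size=n_cells) > 2.0).astype(int)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0))
model.fit(X, y)

# The susceptibility index for a cell is the network output in [0, 1];
# values nearer 1 indicate a higher possibility of landslide occurrence.
susceptibility = model.predict_proba(X[:5])[:, 1]
print(np.round(susceptibility, 3))
```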

  2. A link prediction method for heterogeneous networks based on BP neural network

    Science.gov (United States)

    Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu

    2018-04-01

    Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as various complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendations and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links for heterogeneous networks. More specifically, the concept of the meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, the solution algorithm of the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on example datasets of a gene-disease network and a combat network are conducted to verify the effectiveness and feasibility of the proposed MPBP. The results show that the MPBP performs very well and is superior to the baseline methods.
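
    The MPBP model is not fully specified in the snippet, so the following is only a rough sketch under stated assumptions: meta-path features for a tiny synthetic gene-disease style network are approximated by counts of paths following fixed type sequences (adjacency-matrix products), and a small BP-trained network predicts gene-disease links. In a real setting the held-out links would be masked when building the features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Tiny synthetic heterogeneous network: genes (G), diseases (D), pathways (P).
nG, nD, nP = 40, 30, 20
GD = (rng.random((nG, nD)) < 0.08).astype(float)   # observed gene-disease links
GP = (rng.random((nG, nP)) < 0.15).astype(float)   # gene-pathway links
PD = (rng.random((nP, nD)) < 0.15).astype(float)   # pathway-disease links

# Meta-path feature counts for each (gene, disease) pair, e.g.
#   G-P-D   : GP @ PD
#   G-D-G-D : GD @ GD.T @ GD
# (these particular meta-paths are illustrative choices).
f1 = GP @ PD
f2 = GD @ GD.T @ GD

pairs = [(g, d) for g in range(nG) for d in range(nD)]
X = np.array([[f1[g, d], f2[g, d]] for g, d in pairs])
y = np.array([GD[g, d] for g, d in pairs])

# Random train/test split over all (gene, disease) pairs.
idx = rng.permutation(len(pairs))
train, test = idx[:900], idx[900:]

# Three-layer BP neural network (input, one hidden layer, output) as link predictor.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
clf.fit(X[train], y[train])
print("held-out link prediction accuracy:", clf.score(X[test], y[test]))
```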

  3. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much is still unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity and intrinsic heterogeneity has yet to be considered systematically despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneities affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are further developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  4. Neural network segmentation of magnetic resonance images

    International Nuclear Information System (INIS)

    Frederick, B.

    1990-01-01

    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network; by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. This paper reports that a neural network classifier for image segmentation was implemented on a Sun 4/60, and was tested on the task of classifying tissues of canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone, and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier
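
    A sketch of the classification step only: each pixel is represented by its intensities in four co-registered acquisitions (three spin-echo plus one inversion-recovery image) and a backpropagation-trained network assigns one of the five tissue classes. The intensity distributions stand in for the 691 preclassified samples and are invented.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["gray matter", "white matter", "CSF", "bone", "muscle"]

# Placeholder class-mean intensities in the four acquisitions
# (three spin-echo images + one inversion-recovery image); values are invented.
means = rng.uniform(50, 200, size=(len(classes), 4))

# Roughly 691 preclassified samples, as in the record (138 per class here).
samples_per_class = 138
X = np.vstack([m + rng.normal(scale=10.0, size=(samples_per_class, 4)) for m in means])
y = np.repeat(np.arange(len(classes)), samples_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=3000, random_state=0)
clf.fit(X_tr, y_tr)
print("tissue classification accuracy:", clf.score(X_te, y_te))

# Segmenting an image amounts to classifying every pixel's 4-vector of intensities:
fake_image_pixels = rng.uniform(40, 220, size=(10, 4))
print([classes[k] for k in clf.predict(fake_image_pixels)])
```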

  5. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    are examined. The models are separated into three groups representing input/output descriptions as well as state space descriptions: - Models, where all in- and outputs are measurable (static networks). - Models, where some inputs are non-measurable (recurrent networks). - Models, where some in- and some outputs are non-measurable (recurrent networks with incomplete state information). The three groups are ordered in increasing complexity, and for each group it is shown how to solve the problems concerning training and application of the specific model type. Of particular interest are the model types ... Kalman filter) representing state space description. The potentials of neural networks for control of non-linear processes are also examined, focusing on three different groups of control concepts, all considered as generalizations of known linear control concepts to handle also non-linear processes...

  6. Controlled neural network application in track-match problem

    International Nuclear Information System (INIS)

    Baginyan, S.A.; Ososkov, G.A.

    1993-01-01

    Track-match problem of high energy physics (HEP) data handling is formulated in terms of incidence matrices. The corresponding Hopfield neural network is developed to solve this type of constraint satisfaction problems (CSP). A special concept of the controlled neural network is proposed as a basis of an algorithm for the effective CSP solution. Results of comparable calculations show the very high performance of this algorithm against conventional search procedures. 8 refs.; 1 fig.; 1 tab

  7. Artificial neural networks for processing fluorescence spectroscopy data in skin cancer diagnostics

    International Nuclear Information System (INIS)

    Lenhardt, L; Zeković, I; Dramićanin, T; Dramićanin, M D

    2013-01-01

    Over the years various optical spectroscopic techniques have been widely used as diagnostic tools in the discrimination of many types of malignant diseases. Recently, synchronous fluorescent spectroscopy (SFS) coupled with chemometrics has been applied in cancer diagnostics. The SFS method involves simultaneous scanning of both emission and excitation wavelengths while keeping the interval of wavelengths (constant-wavelength mode) or frequencies (constant-energy mode) between them constant. This method is fast, relatively inexpensive, sensitive and non-invasive. Total synchronous fluorescence spectra of normal skin, nevus and melanoma samples were used as input for training of artificial neural networks. Two different types of artificial neural networks were trained, the self-organizing map and the feed-forward neural network. Histopathology results of investigated skin samples were used as the gold standard for network output. Based on the obtained classification success rate of neural networks, we concluded that both networks provided high sensitivity with classification errors between 2 and 4%. (paper)

  8. Toward automatic time-series forecasting using neural networks.

    Science.gov (United States)

    Yan, Weizhong

    2012-07-01

    Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, which does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.
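
    Part of what makes the GRNN attractive for an automatic scheme is that it has a single design parameter (the kernel width) and 'trains' in one pass by storing the data. Below is a minimal NumPy implementation of a standard GRNN applied to a toy series; it is not the paper's multi-GRNN fusion scheme, and the series, lag order and sigma are assumptions.

```python
import numpy as np

class GRNN:
    """Generalized regression neural network: kernel-weighted average of training targets."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma          # the single design parameter

    def fit(self, X, y):
        # "Training" is instantaneous: the network simply memorizes the data.
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, Xq):
        Xq = np.asarray(Xq, float)
        # Squared distances between query points and stored patterns.
        d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return (w @ self.y) / (w.sum(axis=1) + 1e-12)

# Toy monthly-like series: the GRNN maps lagged values to the next value.
rng = np.random.default_rng(0)
t = np.arange(200)
series = np.sin(2 * np.pi * t / 12) + 0.05 * t + 0.1 * rng.normal(size=200)

lags = 12
X = np.array([series[i - lags:i] for i in range(lags, 200)])
y = series[lags:200]

model = GRNN(sigma=0.8).fit(X[:150], y[:150])
pred = model.predict(X[150:])
print("mean absolute error:", np.mean(np.abs(pred - y[150:])))
```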

  9. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
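
    A short sketch of the quantity the paper analyzes: the eigenvalue spectrum of the normalized Laplacian, computed here for a generic random graph standing in for the macaque, cat and C. elegans networks (which are not reproduced).

```python
import numpy as np
import networkx as nx

# Stand-in connectome: a small random graph (the paper uses macaque, cat and
# C. elegans anatomical networks, which are not reproduced here).
G = nx.erdos_renyi_graph(100, 0.08, seed=0)

A = nx.to_numpy_array(G)
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))

# Normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}; its eigenvalues lie in [0, 2].
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
eigenvalues = np.sort(np.linalg.eigvalsh(L))

print("smallest eigenvalues:", np.round(eigenvalues[:5], 4))
print("largest eigenvalue  :", round(float(eigenvalues[-1]), 4))
# The multiplicity of eigenvalue 0 equals the number of connected components.
print("near-zero eigenvalues:", int(np.sum(eigenvalues < 1e-8)))
```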

  10. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  11. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.

  12. Transient analysis for PWR reactor core using neural networks predictors

    International Nuclear Information System (INIS)

    Gueray, B.S.

    2001-01-01

    In this study, a transient analysis for a Pressurized Water Reactor core has been performed. A lumped parameter approximation is preferred for that purpose, to describe the reactor core together with the mechanisms which play an important role in dynamic analysis. The dynamic behavior of the reactor core during transients is analyzed considering the transient initiating events, which are an essential part of Safety Analysis Reports. Several transients are simulated based on the employed core model. Simulation results are in accord with the physical expectations. A neural network is developed to predict the future response of the reactor core in advance. The neural network is trained using the simulation results of a number of representative transients. The structure of the neural network is optimized by proper selection of transfer functions for the neurons. The trained neural network is used to predict the future responses following an early observation of the changes in system variables. The estimated behaviour using the neural network is in good agreement with the simulation results for various types of transients. Results of this study indicate that the designed neural network can be used as an estimator of the time dependent behavior of the reactor core under transient conditions

  13. A note on exponential convergence of neural networks with unbounded distributed delays

    Energy Technology Data Exchange (ETDEWEB)

    Chu Tianguang [Intelligent Control Laboratory, Center for Systems and Control, Department of Mechanics and Engineering Science, Peking University, Beijing 100871 (China)]. E-mail: chutg@pku.edu.cn; Yang Haifeng [Intelligent Control Laboratory, Center for Systems and Control, Department of Mechanics and Engineering Science, Peking University, Beijing 100871 (China)

    2007-12-15

    This note examines issues concerning global exponential convergence of neural networks with unbounded distributed delays. Sufficient conditions are derived by exploiting exponentially fading memory property of delay kernel functions. The method is based on comparison principle of delay differential equations and does not need the construction of any Lyapunov functionals. It is simple yet effective in deriving less conservative exponential convergence conditions and more detailed componentwise decay estimates. The results of this note and [Chu T. An exponential convergence estimate for analog neural networks with delay. Phys Lett A 2001;283:113-8] suggest a class of neural networks whose globally exponentially convergent dynamics is completely insensitive to a wide range of time delays from arbitrary bounded discrete type to certain unbounded distributed type. This is of practical interest in designing fast and reliable neural circuits. Finally, an open question is raised on the nature of delay kernels for attaining exponential convergence in an unbounded distributed delayed neural network.

  14. A note on exponential convergence of neural networks with unbounded distributed delays

    International Nuclear Information System (INIS)

    Chu Tianguang; Yang Haifeng

    2007-01-01

    This note examines issues concerning global exponential convergence of neural networks with unbounded distributed delays. Sufficient conditions are derived by exploiting exponentially fading memory property of delay kernel functions. The method is based on comparison principle of delay differential equations and does not need the construction of any Lyapunov functionals. It is simple yet effective in deriving less conservative exponential convergence conditions and more detailed componentwise decay estimates. The results of this note and [Chu T. An exponential convergence estimate for analog neural networks with delay. Phys Lett A 2001;283:113-8] suggest a class of neural networks whose globally exponentially convergent dynamics is completely insensitive to a wide range of time delays from arbitrary bounded discrete type to certain unbounded distributed type. This is of practical interest in designing fast and reliable neural circuits. Finally, an open question is raised on the nature of delay kernels for attaining exponential convergence in an unbounded distributed delayed neural network

  15. Neural networks for triggering

    International Nuclear Information System (INIS)

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab

  16. Online fouling detection in electrical circulation heaters using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lalot, S. [M.E.T.I.E.R., Longuenesse Cedex (France); Universite de Valenciennes (France). LME; Lecoeuche, S. [M.E.T.I.E.R., Longuenesse Cedex (France); Universite de Lille (France). Laboratoire 13D

    2003-06-01

    A method is presented that is able to detect fouling during the service of an electrical circulation heater. The neural-based technique is divided in two major steps: identification and classification. Each step uses a neural network, the connection weights of the first one being the inputs of the second network. Each step is detailed and the main characteristics and abilities of the two neural networks are given. It is shown that the method is able to discriminate fouling from viscosity modification that would lead to the same type of effect on the total heat transfer coefficient. (author)

  17. Statistical mechanics of attractor neural network models with synaptic depression

    International Nuclear Information System (INIS)

    Igarashi, Yasuhiko; Oizumi, Masafumi; Otsubo, Yosuke; Nagata, Kenji; Okada, Masato

    2009-01-01

    Synaptic depression is known to control gain for presynaptic inputs. Since cortical neurons receive thousands of presynaptic inputs, and their outputs are fed into thousands of other neurons, synaptic depression should influence macroscopic properties of neural networks. We employ simple neural network models to explore the macroscopic effects of synaptic depression. Systems with synaptic depression cannot be analyzed with the conventional equilibrium statistical-mechanical approach due to the asymmetry of connections. Thus, we first propose a microscopic dynamical mean field theory. Next, we derive macroscopic steady state equations and discuss the stabilities of steady states for various types of neural network models.

  18. Neural network to diagnose lining condition

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at the iron and steel works. The authors describe the neural network structure and software that are designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented. The authors note the low learning and classification errors of the proposed neural networks. To realize the proposed neural network, the specialized software has been developed.

  19. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)

  20. Forecasting the mortality rates of Indonesian population by using neural network

    Science.gov (United States)

    Safitri, Lutfiani; Mardiyati, Sri; Rahim, Hendrisman

    2018-03-01

    A model that can represent a problem is required in conducting a forecast. One of the models that has been acknowledged by the actuarial community for forecasting mortality rates is the Lee-Carter model. In this study, the Lee-Carter model supported by a neural network is used to forecast mortality in Indonesia. The type of neural network used is a feedforward neural network trained with the backpropagation algorithm, implemented in the Python programming language. The final result of this study is the forecast mortality rate of Indonesia for the next few years.
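
    A hedged sketch of the overall workflow described above, with synthetic data standing in for the Indonesian mortality table: the Lee-Carter decomposition ln m(x,t) = a_x + b_x k_t is fitted by SVD, and a small feedforward network trained by backpropagation forecasts the period index k_t from its lagged values. Layer sizes, lag length and learning rate are assumptions, not the paper's settings.

```python
import numpy as np

# Illustrative sketch (not the paper's code): fit Lee-Carter by SVD, then
# forecast the time index k_t with a tiny feedforward net trained by backprop.

rng = np.random.default_rng(2)
ages, years = 20, 40
true_k = np.linspace(2.0, -2.0, years) + 0.1 * rng.normal(size=years)
a = np.linspace(-6.0, -2.0, ages)
b = np.linspace(0.02, 0.08, ages)
log_m = a[:, None] + b[:, None] * true_k[None, :] + 0.01 * rng.normal(size=(ages, years))

# --- Lee-Carter fit via SVD: ln m(x,t) = a_x + b_x * k_t ---
a_hat = log_m.mean(axis=1)
U, S, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()
k_hat = S[0] * Vt[0] * U[:, 0].sum()

# --- tiny feedforward net: predict k_t from the previous `lags` values ---
lags, hidden, lr = 3, 8, 0.01
X = np.array([k_hat[i:i + lags] for i in range(len(k_hat) - lags)])
y = k_hat[lags:]
W1 = 0.1 * rng.normal(size=(lags, hidden)); b1 = np.zeros(hidden)
W2 = 0.1 * rng.normal(size=hidden);         b2 = 0.0

for epoch in range(5000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # backpropagation of the mean squared error
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2)
    gW1 = X.T @ dh / len(y);  gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# one-step-ahead forecast of k and the implied mortality rates
k_next = np.tanh(k_hat[-lags:] @ W1 + b1) @ W2 + b2
m_next = np.exp(a_hat + b_hat * k_next)
print("forecast k:", round(float(k_next), 3), " first-age mortality:", m_next[0])
```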

  1. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  2. Neural networks and their potential application to nuclear power plants

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1991-01-01

    A network of artificial neurons, usually called an artificial neural network, is a data processing system consisting of a number of highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks exhibit characteristics and capabilities not provided by any other technology. Neural networks may be designed so as to classify an input pattern as one of several predefined types or to create, as needed, categories or classes of system states which can be interpreted by a human operator. Neural networks have the ability to recognize patterns, even when the information comprising these patterns is noisy, sparse, or incomplete. Thus, systems of artificial neural networks show great promise for use in environments in which robust, fault-tolerant pattern recognition is necessary in a real-time mode, and in which the incoming data may be distorted or noisy. The application of neural networks, a rapidly evolving technology used extensively in defense applications, alone or in conjunction with other advanced technologies, to some of the problems of operating nuclear power plants has the potential to enhance the safety, reliability and operability of nuclear power plants. The potential applications of neural networking include, but are not limited to, diagnosing specific abnormal conditions, identification of nonlinear dynamics and transients, detection of the change of mode of operation, control of temperature and pressure during start-up, signal validation, plant-wide monitoring using autoassociative neural networks, monitoring of check valves, modeling of the plant thermodynamics, emulation of core reload calculations, analysis of temporal sequences in NRC's "licensee event reports," and monitoring of plant parameters.

  3. Nonlinear dynamic systems identification using recurrent interval type-2 TSK fuzzy neural network - A novel structure.

    Science.gov (United States)

    El-Nagar, Ahmad M

    2018-01-01

    In this study, a novel structure of a recurrent interval type-2 Takagi-Sugeno-Kang (TSK) fuzzy neural network (FNN) is introduced for nonlinear dynamic and time-varying systems identification. It combines type-2 fuzzy sets (T2FSs) and a recurrent FNN to handle data uncertainties. The fuzzy firing strengths in the proposed structure are returned to the network input as internal variables. Interval type-2 fuzzy sets (IT2FSs) are used to describe the antecedent part of each rule while the consequent part is a TSK-type, which is a linear function of the internal variables and the external inputs with interval weights. All the type-2 fuzzy rules for the proposed RIT2TSKFNN are learned on-line based on structure and parameter learning, which are performed using type-2 fuzzy clustering. The antecedent and consequent parameters of the proposed RIT2TSKFNN are updated based on the Lyapunov function to achieve network stability. The obtained results indicate that our proposed network has a small root mean square error (RMSE) and a small integral of square error (ISE) with a small number of rules and a small computation time compared with other type-2 FNNs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Practical neural network recipies in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural works is assum

  5. Computational neural network regression model for Host based Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Gautam

    2016-09-01

    Full Text Available The current scenario of information gathering and storing in a secure system is a challenging task due to increasing cyber-attacks. There exist computational neural network techniques designed for intrusion detection systems, which provide security to a single machine and to an entire network's machines. In this paper, we have used two types of computational neural network models, namely, the Generalized Regression Neural Network (GRNN) model and the Multilayer Perceptron Neural Network (MPNN) model, for a Host based Intrusion Detection System using log files that are generated by a single personal computer. The simulation results show the correctly classified percentage of normal and abnormal (intrusion) classes using a confusion matrix. On the basis of the results and discussion, we found that the Host based Intrusion Systems Model (HISM) significantly improved the detection accuracy while retaining a minimum false alarm rate.

  6. ACO-Initialized Wavelet Neural Network for Vibration Fault Diagnosis of Hydroturbine Generating Unit

    Directory of Open Access Journals (Sweden)

    Zhihuai Xiao

    2015-01-01

    Full Text Available Considering the drawbacks of the traditional wavelet neural network, such as low convergence speed and high sensitivity to initial parameters, an ant colony optimization- (ACO-) initialized wavelet neural network is proposed in this paper for vibration fault diagnosis of a hydroturbine generating unit. In this method, parameters of the wavelet neural network are initialized by the ACO algorithm, and then the wavelet neural network is trained by the gradient descent algorithm. Amplitudes of the frequency components of the hydroturbine generating unit vibration signals are used as feature vectors for wavelet neural network training to realize the mapping relationship from vibration features to fault types. A real vibration fault diagnosis case of a hydroturbine generating unit shows that the proposed method has faster convergence speed and stronger generalization ability than the traditional wavelet neural network and the ACO wavelet neural network. Thus it can provide an effective solution for online vibration fault diagnosis of a hydroturbine generating unit.
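
    The two-phase scheme, a global search to pick good initial wavelet-network parameters followed by gradient-descent refinement, can be sketched as below. Note the simplifications: a plain best-of-population random search stands in for the ACO step, a numerical gradient stands in for the analytic one, and the Mexican-hat wavelet, network size and target signal are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

# Hedged sketch of the two-phase idea: a population-based global search
# (standing in for ACO) picks initial wavelet-network parameters, then
# gradient descent refines them.

rng = np.random.default_rng(3)

def mexican_hat(z):
    return (1 - z**2) * np.exp(-0.5 * z**2)

def forward(params, x):
    w, t, s = params                       # weights, translations, dilations
    return mexican_hat((x[:, None] - t) / s) @ w

def mse(params, x, y):
    return np.mean((forward(params, x) - y) ** 2)

# target: a 1-D function to approximate (stand-in for vibration features)
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) * np.exp(-0.2 * x**2)

n_units, n_candidates = 6, 40

def random_params():
    return [rng.normal(size=n_units),          # w
            rng.uniform(-3, 3, n_units),       # t
            rng.uniform(0.3, 2.0, n_units)]    # s

# phase 1: keep the best member of a random population (ACO stand-in)
candidates = [random_params() for _ in range(n_candidates)]
params = min(candidates, key=lambda p: mse(p, x, y))

# phase 2: refinement by (numerical) gradient descent
lr, eps = 0.02, 1e-5
for it in range(400):
    grads = []
    for p in params:
        g = np.zeros_like(p)
        for j in range(len(p)):
            p[j] += eps; up = mse(params, x, y)
            p[j] -= 2 * eps; down = mse(params, x, y)
            p[j] += eps
            g[j] = (up - down) / (2 * eps)
        grads.append(g)
    for p, g in zip(params, grads):
        p -= lr * g

print("final training MSE:", round(mse(params, x, y), 4))
```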

  7. Prediction of the location and type of beta-turns in proteins using neural networks.

    OpenAIRE

    Shepherd, A. J.; Gorse, D.; Thornton, J. M.

    1999-01-01

    A neural network has been used to predict both the location and the type of beta-turns in a set of 300 nonhomologous protein domains. A substantial improvement in prediction accuracy compared with previous methods has been achieved by incorporating secondary structure information in the input data. The total percentage of residues correctly classified as beta-turn or not-beta-turn is around 75% with predicted secondary structure information. More significantly, the method gives a Matthews cor...

  8. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem (SPW), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  9. Characterisation of eddy current signals using different types of artificial neural networks

    International Nuclear Information System (INIS)

    Shyamsunder, M.T.; Rajagopalan, C.; Jayakumar, T.; Kalyanasundaram, P.; Baldev Raj; Ray, K.K.

    1996-01-01

    Eddy current testing is one of the important techniques in nondestructive testing. Automated characterisation of eddy current signals (ECS), either in the form of lissajous patterns (figure-of-eight) or individual voltage vs. time signals is an area of growing interest. This is particularly relevant in environments where the signal-to-noise ratio (SNR) of ECS are very poor. Intelligent, timely and precise interpretation of resulting data, is the key for improving the efficiency of NDT and E. A comprehensive study has been undertaken by the authors for the characterisation of ECS having poor SNR, using three types of artificial neural networks (ANNs). The types of ANNs used in this study are [a] the error-back propagation model, [b] the binary Hopfield model and [c] the Kohonen's self-organising maps model. Eddy current signals, acquired from different types of defects such as holes and notches on stainless steel type 316 sheets were used in this study. (author)

  10. Development of nuclear power plant diagnosis technique using neural networks

    International Nuclear Information System (INIS)

    Horiguchi, Masahiro; Fukawa, Naohiro; Nishimura, Kazuo

    1991-01-01

    A nuclear power plant diagnosis technique has been developed, called transient phenomena analysis, which employs neural networks. The neural networks identify malfunctioning equipment by recognizing the pattern of main plant parameters, making it possible to locate the cause of an abnormality when a plant is in a transient state. In a case where some piece of equipment shows abnormal behavior, many plant parameters either directly or indirectly related to that equipment change simultaneously. When an abrupt change in a plant parameter is detected, changes in the 49 main plant parameters are classified into three types and a characteristic change pattern consisting of 49 data points is defined. The neural networks then judge the cause of the abnormality from this pattern. This neural-network-based technique can recognize 100 patterns that are characterized by the causes of plant abnormality. (author)

  11. Improving the Robustness of Deep Neural Networks via Stability Training

    OpenAIRE

    Zheng, Stephan; Song, Yang; Leung, Thomas; Goodfellow, Ian

    2016-01-01

    In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such...
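
    The core of stability training can be written as an extra loss term that keeps the output on a perturbed copy of each input close to the output on the clean input. The sketch below applies the idea to a tiny softmax classifier with Gaussian input noise; for simplicity the perturbed output is treated as a fixed target in the gradient. The model, perturbation and weighting alpha are illustrative assumptions, not the paper's deep-network setup.

```python
import numpy as np

# Minimal sketch of stability training: loss = task_loss(x) + alpha * ||f(x) - f(x')||^2
# where x' is a perturbed copy of x. A tiny softmax classifier and Gaussian noise
# stand in for a deep network and realistic image distortions.

rng = np.random.default_rng(4)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

d, k, n = 20, 3, 300
W_true = rng.normal(size=(d, k))
X = rng.normal(size=(n, d))
labels = softmax(X @ W_true).argmax(axis=1)
Y = np.eye(k)[labels]

W = np.zeros((d, k))
alpha, sigma, lr = 0.5, 0.3, 0.1       # stability weight, noise level, step size

for step in range(500):
    Xp = X + sigma * rng.normal(size=X.shape)      # perturbed copies
    P, Pp = softmax(X @ W), softmax(Xp @ W)
    # task gradient: cross-entropy on the clean inputs
    grad_task = X.T @ (P - Y) / n
    # stability gradient: push f(x) toward f(x'), treating f(x') as a fixed
    # target (stop-gradient) for simplicity
    diff = P - Pp
    dz = diff * P - P * (diff * P).sum(axis=1, keepdims=True)
    grad_stab = 2 * X.T @ dz / n
    W -= lr * (grad_task + alpha * grad_stab)

print("train accuracy:", (softmax(X @ W).argmax(1) == labels).mean())
```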

  12. Modelling and Pareto optimization of heat transfer and flow coefficients in microchannels using GMDH type neural networks and genetic algorithms

    International Nuclear Information System (INIS)

    Amanifard, N.; Nariman-Zadeh, N.; Borji, M.; Khalkhali, A.; Habibdoust, A.

    2008-01-01

    Three-dimensional heat transfer characteristics and pressure drop of water flow in a set of rectangular microchannels are numerically investigated using Fluent and compared with those of experimental results. Two metamodels based on the evolved group method of data handling (GMDH) type neural networks are then obtained for modelling of both pressure drop (ΔP) and Nusselt number (Nu) with respect to design variables such as geometrical parameters of microchannels, the amount of heat flux and the Reynolds number. Using such obtained polynomial neural networks, multi-objective genetic algorithms (GAs) (non-dominated sorting genetic algorithm, NSGA-II) with a new diversity preserving mechanism is then used for Pareto based optimization of microchannels considering two conflicting objectives such as (ΔP) and (Nu). It is shown that some interesting and important relationships as useful optimal design principles involved in the performance of microchannels can be discovered by Pareto based multi-objective optimization of the obtained polynomial metamodels representing their heat transfer and flow characteristics. Such important optimal principles would not have been obtained without the use of both GMDH type neural network modelling and the Pareto optimization approach

  13. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network. Richard... ...Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  14. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  15. Optimal multiple-information integration inherent in a ring neural network

    International Nuclear Information System (INIS)

    Takiyama, Ken

    2017-01-01

    Although several behavioral experiments have suggested that our neural system integrates multiple sources of information based on the certainty of each type of information in the manner of maximum-likelihood estimation, it is unclear how the maximum-likelihood estimation is implemented in our neural system. Here, I investigate the relationship between maximum-likelihood estimation and a widely used ring-type neural network model that is used as a model of visual, motor, or prefrontal cortices. Without any approximation or ansatz, I analytically demonstrate that the equilibrium of an order parameter in the neural network model exactly corresponds to the maximum-likelihood estimation when the strength of the symmetrical recurrent synaptic connectivity within a neural population is appropriately stronger than that of asymmetrical connectivity, that of local and external inputs, and that of symmetrical or asymmetrical connectivity between different neural populations. In this case, strengths of local and external inputs or those of symmetrical connectivity between different neural populations exactly correspond to the input certainty in maximum-likelihood estimation. Thus, my analysis suggests appropriately strong symmetrical recurrent connectivity as a possible candidate for implementing the maximum-likelihood estimation within our neural system. (paper)

  16. On-line plant-wide monitoring using neural networks

    International Nuclear Information System (INIS)

    Turkcan, E.; Ciftcioglu, O.; Eryurek, E.; Upadhyaya, B.R.

    1992-06-01

    The on-line signal analysis system designed for multi-level mode operation using neural networks is described. The system is capable of monitoring the plant states by tracking up to 32 signals simultaneously. The data used for this study were acquired from the Borssele Nuclear Power Plant (PWR type) using the on-line monitoring system. An on-line plant-wide monitoring study using a multilayer neural network model is discussed in this paper. The back-propagation neural network algorithm is used for training the network. The technique assumes that each physical state of the power plant can be represented by a unique pattern of instrument readings which can be related to the condition of the plant. When a disturbance occurs, the sensor readings undergo a transient and form a different set of patterns which represent the new operational status. Diagnosing these patterns can be helpful in identifying this new state of the power plant. To this end, plant-wide monitoring with neural networks is one of the new techniques in real-time applications. (author). 9 refs.; 5 figs

  17. PERFORMANCE COMPARISON FOR INTRUSION DETECTION SYSTEM USING NEURAL NETWORK WITH KDD DATASET

    Directory of Open Access Journals (Sweden)

    S. Devaraju

    2014-04-01

    Full Text Available Intrusion Detection Systems face the challenging task of determining whether a user is a normal user or an attacker in organizational information systems or the IT industry. The Intrusion Detection System is an effective method to deal with these kinds of problems in networks. Different classifiers are used to detect the different kinds of attacks in networks. In this paper, the performance of intrusion detection is compared across various neural network classifiers. In the proposed research the four types of classifiers used are the Feed Forward Neural Network (FFNN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN) and Radial Basis Neural Network (RBNN). The performance of the full featured KDD Cup 1999 dataset is compared with that of the reduced featured KDD Cup 1999 dataset. The MATLAB software is used to train and test the dataset, and the efficiency and False Alarm Rate are measured. It is proved that the reduced dataset performs better than the full featured dataset.

  18. A neural network approach to job-shop scheduling.

    Science.gov (United States)

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  19. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalmann predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  20. Wavelet neural network load frequency controller

    International Nuclear Information System (INIS)

    Hemeida, Ashraf Mohamed

    2005-01-01

    This paper presents the feasibility of applying a wavelet neural network (WNN) approach for the load frequency controller (LFC) to damp the frequency oscillations of two area power systems due to load disturbances. The present intelligent control system trained the wavelet neural network (WNN) controller on line with adaptive learning rates, which are derived in the sense of a discrete type Lyapunov stability theorem. The present WNN controller is designed individually for each area. The proposed technique is applied successfully for a wide range of operating conditions. The time simulation results indicate its superiority and effectiveness over the conventional approach. The effects of consideration of the governor dead zone on the system performance are studied using the proposed controller and the conventional one

  1. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  2. Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.

    Science.gov (United States)

    Li, Shuai; Li, Yangming

    2013-10-28

    The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, the computation burden increases intensively with the decrease of the sampling period and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, while a recently proposed recurrent neural network by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. The advancements in complex-valued neural networks make it possible to extend the existing real-valued ZNN for solving the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
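
    For orientation, the sketch below implements the gradient-type recurrent network that the abstract uses as its baseline, applied to a time-invariant Sylvester equation AX + XB = C: the state X is integrated along the negative gradient of ||AX + XB − C||_F^2 / 2. The ZNN design and the sign-bi-power activation discussed in the paper are not reproduced here; matrices, gain and step size are assumptions.

```python
import numpy as np

# Hedged sketch: a gradient-type recurrent network (the baseline the abstract
# contrasts with the ZNN) for the time-invariant Sylvester equation A X + X B = C.
# With residual E(X) = A X + X B - C, the network integrates
# dX/dt = -gamma * (A^T E + E B^T), i.e. gradient descent on ||E||_F^2 / 2.
# Symmetric positive definite A and B keep the demo well posed.

rng = np.random.default_rng(5)
n = 4
R1, R2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
A = R1 @ R1.T / n + np.eye(n)
B = R2 @ R2.T / n + np.eye(n)
X_true = rng.normal(size=(n, n))
C = A @ X_true + X_true @ B

X = np.zeros((n, n))
gamma, dt = 1.0, 1e-3
for step in range(20000):                  # Euler integration of the network state
    E = A @ X + X @ B - C
    X -= dt * gamma * (A.T @ E + E @ B.T)

print("residual ||AX + XB - C||:", np.linalg.norm(A @ X + X @ B - C))
print("error vs known solution: ", np.linalg.norm(X - X_true))   # both near zero
```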

  3. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Fault diagnosis method for nuclear power plants based on neural networks and voting fusion

    International Nuclear Information System (INIS)

    Zhou Gang; Ge Shengqi; Yang Li

    2010-01-01

    A new fault diagnosis method based on multiple neural networks (ANNs) and voting fusion for nuclear power plants (NPPs) was proposed in view of the shortcomings of the single-neural-network fault diagnosis method. In this method, multiple neural networks of different types were trained for the fault diagnosis of an NPP. The operation parameters of the NPP, which have an important effect on the safety of the NPP, were selected as the input variables of the neural networks. The outputs of the neural networks are the fault patterns of the NPP. The final diagnosis results for the NPP were obtained by fusing the diagnosis results of the different neural networks through voting. The typical operation patterns of the NPP were diagnosed to demonstrate the effect of the proposed method. The results show that employing the proposed diagnosis method can improve the precision and reliability of fault diagnosis results of NPPs. (authors)
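
    The fusion step itself is simple; the hedged sketch below shows a plain majority vote over the labels produced by several independently trained classifiers, with invented fault labels standing in for the plant's fault patterns.

```python
from collections import Counter

# Minimal sketch of the voting-fusion step: several classifiers (standing in for
# neural networks of different types) each output a fault-pattern label, and the
# final diagnosis is the majority vote. Labels below are hypothetical.

def majority_vote(predictions):
    """predictions: list of labels, one per classifier."""
    label, _ = Counter(predictions).most_common(1)[0]
    return label

# e.g. three differently structured networks diagnose the same plant state
diagnoses = ["SGTR", "LOCA", "SGTR"]          # hypothetical fault labels
print(majority_vote(diagnoses))               # -> "SGTR"
```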

  5. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note addresses introduction to signal analysis and classification based on artificial feed-forward neural networks.

  6. Spiking neural network for recognizing spatiotemporal sequences of spikes

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.

    2004-01-01

    Sensory neurons in many brain areas spike with precise timing to stimuli with temporal structures, and encode temporally complex stimuli into spatiotemporal spikes. How the downstream neurons read out such neural code is an important unsolved problem. In this paper, we describe a decoding scheme using a spiking recurrent neural network. The network consists of excitatory neurons that form a synfire chain, and two globally inhibitory interneurons of different types that provide delayed feedforward and fast feedback inhibition, respectively. The network signals recognition of a specific spatiotemporal sequence when the last excitatory neuron down the synfire chain spikes, which happens if and only if that sequence was present in the input spike stream. The recognition scheme is invariant to variations in the intervals between input spikes within some range. The computation of the network can be mapped into that of a finite state machine. Our network provides a simple way to decode spatiotemporal spikes with diverse types of neurons

  7. Application of Integrated Neural Network Method to Fault Diagnosis of Nuclear Steam Generator

    International Nuclear Information System (INIS)

    Zhou Gang; Yang Li

    2009-01-01

    A new fault diagnosis method based on integrated neural networks for the nuclear steam generator (SG) was proposed in view of the shortcomings of the conventional fault monitoring and diagnosis method. In the method, two neural networks (ANNs) were employed for the fault diagnosis of the steam generator. One neural network, used for predicting the values of steam generator operation parameters, was taken as the dynamics model of the steam generator. The principle of the fault monitoring method using the neural network model is to detect the deviations between process signals measured from an operating steam generator and the corresponding output signals from the neural network model of the steam generator. When a deviation exceeds the limit set in advance, an abnormal event is thought to occur. The other neural network, acting as a fault classifier, conducts the fault classification of the steam generator, so the fault types of the steam generator are given by the fault classifier. Clear information on steam generator faults was obtained by fusing the monitoring and diagnosis results of the two neural networks. The simulation results indicate that employing integrated neural networks can improve the capacity of fault monitoring and diagnosis for the steam generator. (authors)
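
    The monitoring-plus-classification logic can be sketched as follows. A predictor fitted on normal-operation data stands in for the steam-generator model network, a nearest-centroid rule stands in for the classifier network, and the residual threshold, signals and fault names are invented for illustration.

```python
import numpy as np

# Hedged sketch of the integrated scheme: a "model" predicts the next steam-
# generator measurements; if the residual exceeds a threshold, a second
# classifier assigns a fault type. Both models are simple stand-ins.

rng = np.random.default_rng(6)

class PlantModel:
    """One-step predictor, here simply fitted by least squares."""
    def fit(self, X_prev, X_next):
        self.W, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)
        return self
    def predict(self, x):
        return x @ self.W

def monitor(model, x_prev, x_meas, threshold):
    residual = x_meas - model.predict(x_prev)
    return residual, np.linalg.norm(residual) > threshold

def classify(residual, centroids):
    return min(centroids, key=lambda c: np.linalg.norm(residual - centroids[c]))

# fit the predictor on synthetic normal-operation data
X = rng.normal(size=(500, 4))
X_next = X @ np.array([[0.9, 0, 0, 0], [0, 0.8, 0.1, 0], [0, 0, 0.7, 0], [0, 0, 0, 0.95]])
model = PlantModel().fit(X, X_next + 0.01 * rng.normal(size=X_next.shape))

centroids = {"tube leak": np.array([0.5, -0.4, 0.0, 0.0]),        # hypothetical
             "feedwater fault": np.array([0.0, 0.0, 0.6, 0.3])}   # fault signatures

x_prev = rng.normal(size=4)
x_meas = x_prev @ model.W + np.array([0.5, -0.4, 0.0, 0.0])       # injected anomaly
residual, abnormal = monitor(model, x_prev, x_meas, threshold=0.2)
if abnormal:
    print("fault detected:", classify(residual, centroids))       # -> "tube leak"
```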

  8. Convergent dynamics for multistable delayed neural networks

    International Nuclear Information System (INIS)

    Shih, Chih-Wen; Tseng, Jui-Pin

    2008-01-01

    This investigation aims at developing a methodology to establish convergence of dynamics for delayed neural network systems with multiple stable equilibria. The present approach is general and can be applied to several network models. We take the Hopfield-type neural networks with both instantaneous and delayed feedbacks to illustrate the idea. We shall construct the complete dynamical scenario which comprises exactly 2^n stable equilibria and exactly (3^n − 2^n) unstable equilibria for the n-neuron network. In addition, it is shown that every solution of the system converges to one of the equilibria as time tends to infinity. The approach is based on employing the geometrical structure of the network system. Positively invariant sets and componentwise dynamical properties are derived under the geometrical configuration. An iteration scheme is subsequently designed to confirm the convergence of dynamics for the system. Two examples with numerical simulations are arranged to illustrate the present theory

  9. Synchronization of cellular neural networks of neutral type via dynamic feedback controller

    International Nuclear Information System (INIS)

    Park, Ju H.

    2009-01-01

    In this paper, we aim to study global synchronization for neural networks with neutral delay. A dynamic feedback control scheme is proposed to achieve the synchronization between drive network and response network. By utilizing the Lyapunov function and linear matrix inequalities (LMIs), we derive simple and efficient criterion in terms of LMIs for synchronization. The feedback controllers can be easily obtained by solving the derived LMIs.

  10. Evolutionary neural networks: a new alternative for neutron spectrometry

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Galleo, E.

    2009-10-01

    A device used to perform neutron spectroscopy is the system known as the Bonner spheres spectrometer. This system has some disadvantages; one of these is the need for reconstruction using a code based on an iterative reconstruction algorithm, whose main inconvenience is the need for an initial spectrum as close as possible to the spectrum that is sought. To avoid this inconvenience, several reconstruction procedures have been reported, combined with various types of experimental methods, based on artificial intelligence technology such as genetic algorithms, artificial neural networks, and hybrid systems of artificial neural networks evolved using genetic algorithms. This paper analyzes the intersection of neural networks and evolutionary algorithms applied to neutron spectroscopy and dosimetry. Because this is an emerging technology, there are no tools for analyzing the obtained results, which is why this paper presents a computing tool to analyze the neutron spectra and the equivalent doses obtained through the hybrid technology of neural networks and genetic algorithms. The tool offers a graphical user environment that is friendly and easy to operate. (author)

  11. DCS-Neural-Network Program for Aircraft Control and Testing

    Science.gov (United States)

    Jorgensen, Charles C.

    2006-01-01

    A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.

  12. Neural Network-Based Passive Filtering for Delayed Neutral-Type Semi-Markovian Jump Systems.

    Science.gov (United States)

    Shi, Peng; Li, Fanbiao; Wu, Ligang; Lim, Cheng-Chew

    2017-09-01

    This paper investigates the problem of exponential passive filtering for a class of stochastic neutral-type neural networks with both semi-Markovian jump parameters and mixed time delays. Our aim is to estimate the states by designing a Luenberger-type observer, such that the filter error dynamics are mean-square exponentially stable with an expected decay rate and an attenuation level. Sufficient conditions for the existence of passive filters are obtained, and a convex optimization algorithm for the filter design is given. In addition, a cone complementarity linearization procedure is employed to cast the nonconvex feasibility problem into a sequential minimization problem, which can be readily solved by the existing optimization techniques. Numerical examples are given to demonstrate the effectiveness of the proposed techniques.

  13. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  14. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  15. Artificial neural networks in the nuclear engineering (Part 2)

    International Nuclear Information System (INIS)

    Baptista Filho, Benedito Dias

    2002-01-01

    The field of Artificial Neural Networks (ANN), one of the branches of Artificial Intelligence, has been attracting a great deal of interest in Nuclear Engineering (NE). ANNs can be used to solve problems that are difficult to model, when the data are faulty or incomplete, and in highly complex control problems. The first part of this work began a discussion with feed-forward neural networks trained by back-propagation. In this part of the work, multi-synaptic neural networks are applied to control problems. Also, self-organizing maps are presented in a typical pattern classification problem: transient classification. The main purpose of the work is to show that ANNs can be successfully used in NE if a careful choice of their type is made: the application determines this choice. (author)

  16. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  17. A decomposition approach to analysis of competitive-cooperative neural networks with delay

    International Nuclear Information System (INIS)

    Chu Tianguang; Zhang Zongda; Wang Zhaolin

    2003-01-01

    Competitive-cooperative or inhibitory-excitatory configurations abound in neural networks. It is demonstrated here how such a configuration may be exploited to give a detailed characterization of the fixed point dynamics in general neural networks with time delay. The idea is to divide the connection weights into inhibitory and excitatory types and thereby to embed a competitive-cooperative delay neural network into an augmented cooperative delay system through a symmetric transformation. This allows for the use of the powerful monotone properties of cooperative systems. By the method, we derive several simple necessary and sufficient conditions on guaranteed trapping regions and guaranteed componentwise (exponential) convergence of the neural networks. The results relate specific decay rate and trajectory bounds to system parameters and are therefore of practical significance in designing a network with desired performance

  18. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.

  19. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate the biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the result to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.

  20. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS, and pseudo-inverse. This is largely an exercise in understanding how our neural network code works. 1 ref.

  1. Identifying Broadband Rotational Spectra with Neural Networks

    Science.gov (United States)

    Zaleski, Daniel P.; Prozument, Kirill

    2017-06-01

    A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al. "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer." Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al. "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal." J. Mol. Spectrosc., 2015, 312, 13-21. Bishop. "Neural networks for pattern recognition." Oxford university press, 1995.
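
    The combinatorial burden that motivates the machine-learning approach is easy to see: every unordered set of three observed lines is a candidate "triple" for fitting the A, B and C rotational constants, so the number of fits grows roughly as N^3/6 with the number of lines N. The frequencies below are made up for illustration.

```python
import itertools

# Illustrative sketch of why automated "triples" fitting scales poorly: each
# combination of three observed lines is a candidate fit of A, B, C.

observed_lines = [8150.2, 8423.7, 9012.4, 9534.9, 10110.3, 11021.8]  # MHz, hypothetical

triples = list(itertools.combinations(observed_lines, 3))
print(len(triples), "candidate triples from", len(observed_lines), "lines")

# With thousands of lines the count grows as ~N^3/6:
for n in (100, 1000, 5000):
    print(n, "lines ->", n * (n - 1) * (n - 2) // 6, "triples")
```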

  2. Approximate solutions of dual fuzzy polynomials by feed-back neural networks

    Directory of Open Access Journals (Sweden)

    Ahmad Jafarian

    2012-11-01

    Full Text Available Recently, artificial neural networks (ANNs) have been extensively studied and used in different areas such as pattern recognition, associative memory, combinatorial optimization, etc. In this paper, we investigate the ability of fuzzy neural networks to approximate the solution of a dual fuzzy polynomial of the form $a_{1}x+\dots+a_{n}x^{n}=b_{1}x+\dots+b_{n}x^{n}+d$, where $a_{j},b_{j},d\in E^{1}$ (for $j=1,\dots,n$). The operation of fuzzy neural networks is based on Zadeh's extension principle. For this purpose we train a fuzzified neural network by a back-propagation-type learning algorithm which has five layers and in which connection weights are crisp numbers. This neural network can take a crisp input signal and then calculate its corresponding fuzzy output. The presented method can give a real approximate solution for a given polynomial by using a cost function which is defined for the level sets of the fuzzy output and the target output. The simulation results are presented to demonstrate the efficiency and effectiveness of the proposed approach.

  3. Optimal capacity of Ashkin-Teller neural networks

    International Nuclear Information System (INIS)

    Kozleowski, P.; Bolle, D.

    2001-01-01

    The maximal capacity per number of couplings is calculated for Ashkin-Teller type neural networks using a replica-symmetric Gardner approach. The results are compared with numerical simulations. The best value obtained at κ = 0 is α_c = 2.26 ± 0.01

  4. Practical Application of Neural Networks in State Space Control

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon

    the networks, although some modifications are needed for the method to apply to the multilayer perceptron network. In connection with the multilayer perceptron networks it is also pointed out how instantaneous, sample-by-sample linearized state space models can be extracted from a trained network, thus opening......In the present thesis we address some problems in discrete-time state space control of nonlinear dynamical systems and attempt to solve them using generic nonlinear models based on artificial neural networks. The main aim of the work is to examine how well such control algorithms perform when...... theoretic notions followed by a detailed description of the topology, neuron functions and learning rules of the two types of neural networks treated in the thesis, the multilayer perceptron and the neurofuzzy networks. In both cases, a Least Squares second-order gradient method is used to train...

  5. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  6. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed

  7. Neural Networks and Micromechanics

    Science.gov (United States)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  8. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for the parameterization of global solar radiation. The available data from twenty-one stations are used for training the neural network and the data from the other ten stations are used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data were not used in the training, in order to demonstrate the performance of the neural network in unknown stations to parameterize solar radiation. The results indicate a good agreement between the parameterized solar radiation values and actual measured values
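
    A hedged sketch of the parameterization idea: a single-hidden-layer network maps the five inputs named above to a radiation value. For brevity, the hidden layer here is random and only the output weights are fitted by least squares, a simplification of full backpropagation training; all data are synthetic placeholders rather than the station records.

```python
import numpy as np

# Minimal sketch: map (latitude, longitude, altitude, sunshine duration,
# period number) to a global solar radiation value with a one-hidden-layer
# network. Data and parameter ranges are synthetic placeholders.

rng = np.random.default_rng(7)
n = 400
X = np.column_stack([rng.uniform(4, 14, n),        # latitude (deg)
                     rng.uniform(3, 15, n),        # longitude (deg)
                     rng.uniform(0, 1500, n),      # altitude (m)
                     rng.uniform(2, 11, n),        # sunshine duration (h)
                     rng.integers(1, 13, n)])      # period number (month)
y = (5 + 1.2 * X[:, 3] + 0.001 * X[:, 2]
     + 0.5 * np.sin(2 * np.pi * X[:, 4] / 12)
     + 0.2 * rng.normal(size=n))                   # synthetic radiation values

Xs = (X - X.mean(0)) / X.std(0)                    # standardize the inputs
hidden = 30
W1 = rng.normal(size=(5, hidden))                  # random hidden layer
b1 = rng.normal(size=hidden)
H = np.tanh(Xs @ W1 + b1)
w2, *_ = np.linalg.lstsq(np.column_stack([H, np.ones(n)]), y, rcond=None)

pred = np.column_stack([H, np.ones(n)]) @ w2
print("training RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 3))
```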

  9. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  10. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  11. Comparison Of Power Quality Disturbances Classification Based On Neural Network

    Directory of Open Access Journals (Sweden)

    Nway Nway Kyaw Win

    2015-07-01

    Full Text Available Abstract Power quality disturbances (PQDs) result in serious problems in the reliability, safety and economy of a power system network. In order to improve electric power quality, PQD events must be detected and classified by the type of transient fault. A software analysis based on the wavelet transform with the multiresolution analysis (MRA) algorithm and a feed forward neural network (probabilistic and multilayer feed forward neural network) based methodology for automatic classification of eight types of PQ signals (flicker, harmonics, sag, swell, impulse, fluctuation, notch and oscillatory) will be presented. The wavelet family Db4 is chosen in this system to calculate the values of detailed energy distributions as input features for classification because it can perform well in detecting and localizing various types of PQ disturbances. This technique classifies the types of PQD problem events. The classifiers classify and identify the disturbance type according to the energy distribution. The results show that the PNN can analyze different power disturbance types efficiently. Therefore it can be seen that the PNN has better classification accuracy than the MLFF.
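
    As an illustration of the probabilistic-neural-network classifier favoured in the comparison, the sketch below implements the standard PNN/Parzen rule: every training feature vector contributes a Gaussian kernel, class scores are the averaged kernel responses, and the class with the highest score wins. The two-dimensional "energy" features, smoothing width and disturbance names are invented for illustration.

```python
import numpy as np

# Minimal probabilistic neural network (PNN) sketch: Gaussian kernel density
# per class on the training features; predict the class with the highest
# averaged kernel response. Features and class names are synthetic.

rng = np.random.default_rng(8)

def pnn_predict(x, train_X, train_y, sigma=0.3):
    scores = {}
    for cls in set(train_y):
        Xc = train_X[np.array(train_y) == cls]
        d2 = ((Xc - x) ** 2).sum(axis=1)
        scores[cls] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)

# synthetic wavelet-energy features for three disturbance types
train_X = np.vstack([rng.normal([1.0, 0.2], 0.1, size=(30, 2)),   # "sag"
                     rng.normal([0.2, 1.0], 0.1, size=(30, 2)),   # "swell"
                     rng.normal([0.8, 0.8], 0.1, size=(30, 2))])  # "harmonics"
train_y = ["sag"] * 30 + ["swell"] * 30 + ["harmonics"] * 30

print(pnn_predict(np.array([0.95, 0.25]), train_X, train_y))      # expected: "sag"
```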

  12. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  13. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks

    DEFF Research Database (Denmark)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl

    2018-01-01

    conditions with regard to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516...... in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species....

  14. Identification and adaptive neural network control of a DC motor system with dead-zone characteristics.

    Science.gov (United States)

    Peng, Jinzhu; Dubay, Rickey

    2011-10-01

    In this paper, an adaptive control approach based on neural networks is presented to control a DC motor system with dead-zone characteristics (DZC), where two neural networks are proposed to formulate the traditional identification and control approaches. First, a Wiener-type neural network (WNN) is proposed to identify the motor DZC, which formulates the Wiener model with a linear dynamic block in cascade with a nonlinear static gain. Second, a feedforward neural network is proposed to formulate the traditional PID controller, termed the PID-type neural network (PIDNN), which is then used to control and compensate for the DZC. In this way, the DC motor system with DZC is identified by the WNN identifier, which provides model information to the PIDNN controller in order to make it adaptive. Back-propagation algorithms are used to train both neural networks. Also, stability and convergence analyses are conducted using the Lyapunov theorem. Finally, experiments on the DC motor system demonstrated accurate identification and good compensation for the dead-zone, with improved control performance over conventional PID control. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
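
    A minimal sketch of the PIDNN idea, under simplifying assumptions (a toy first-order plant instead of the identified DC-motor model, and a plain gradient step instead of the paper's Lyapunov-based design):

```python
# Minimal PID-type neural network (PIDNN) sketch: three hidden units hold the
# proportional, integral and derivative of the tracking error, and their output
# weights (the PID gains) are adapted online by a gradient step.  Plant, gains
# and learning rate are illustrative assumptions.
import numpy as np

class PIDNN:
    def __init__(self, gains=(2.0, 2.0, 0.01), lr=1e-4):
        self.w = np.array(gains, dtype=float)   # output weights: P, I, D gains
        self.integral = 0.0
        self.prev_error = 0.0
        self.lr = lr
        self.h = np.zeros(3)

    def control(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        self.h = np.array([error, self.integral, derivative])  # hidden layer
        return float(self.w @ self.h)

    def adapt(self, error):
        # gradient step on 0.5*error**2, assuming a positive plant gain
        self.w += self.lr * error * self.h

# toy first-order plant dy/dt = -y + u tracking a unit step reference
ctrl, y, dt = PIDNN(), 0.0, 0.01
for _ in range(500):
    e = 1.0 - y
    u = ctrl.control(e, dt)
    ctrl.adapt(e)
    y += dt * (-y + u)
print(round(y, 3))   # approaches the reference value 1.0
```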

  15. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Zenke, Friedemann; Ganguli, Surya

    2018-04-13

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
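
    The surrogate gradient trick at the heart of SuperSpike can be sketched as follows: the forward pass keeps the hard spike threshold, while the backward pass substitutes the derivative of a fast sigmoid. The scale constant and the toy loss are illustrative choices, not the paper's exact rule.

```python
# Hedged sketch of a surrogate gradient for a spiking nonlinearity in PyTorch.
import torch

class SurrogateSpike(torch.autograd.Function):
    scale = 10.0  # steepness of the surrogate (illustrative)

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()        # hard threshold spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.scale * v.abs() + 1.0) ** 2
        return grad_output * surrogate                 # fast-sigmoid derivative

spike_fn = SurrogateSpike.apply

# one integrate-and-fire step: gradients flow through the surrogate
v = torch.randn(5, requires_grad=True)
spikes = spike_fn(v - 1.0)                             # threshold at 1.0
loss = (spikes - torch.ones(5)).pow(2).sum()
loss.backward()
print(v.grad)
```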

  16. Neural networks at the Tevatron

    International Nuclear Information System (INIS)

    Badgett, W.; Burkett, K.; Campbell, M.K.; Wu, D.Y.; Bianchin, S.; DeNardi, M.; Pauletta, G.; Santi, L.; Caner, A.; Denby, B.; Haggerty, H.; Lindsey, C.S.; Wainer, N.; Dall'Agata, M.; Johns, K.; Dickson, M.; Stanco, L.; Wyss, J.L.

    1992-10-01

    This paper summarizes neural network applications at the Fermilab Tevatron, including the first online hardware application in high energy physics (muon tracking); the CDF and D0 neural network triggers; offline quark/gluon discrimination at CDF; and a new tool for top-to-multijets recognition at CDF.

  17. Analysis of the experimental positron lifetime spectra by neural networks

    International Nuclear Information System (INIS)

    Avdic, S.; Chakarova, R.; Pazsit, I.

    2003-01-01

    This paper deals with the analysis of experimental positron lifetime spectra in polymer materials by using various algorithms of neural networks. A method based on the use of artificial neural networks for unfolding the mean lifetime and intensity of the spectral components of simulated positron lifetime spectra was previously suggested and tested on simulated data [Pazsit et al., Applied Surface Science, 149 (1998), 97]. In this work, the applicability of the method to the analysis of experimental positron spectra has been verified in the case of spectra from polymer materials with three components. It has been demonstrated that the backpropagation neural network can determine the spectral parameters with a high accuracy and perform the decomposition of lifetimes which differ by 10% or more. The backpropagation network has not been suitable for the identification of both the parameters and the number of spectral components. Therefore, a separate artificial neural network module has been designed to solve the classification problem. Module types based on self-organizing map and learning vector quantization algorithms have been tested. The learning vector quantization algorithm was found to have better performance and reliability. A complete artificial neural network analysis tool of positron lifetime spectra has been constructed to include a spectra classification module and parameter evaluation modules for spectra with a different number of components. In this way, both flexibility and high resolution can be achieved. (author)

  18. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  19. Application of neural network to CT

    International Nuclear Information System (INIS)

    Ma, Xiao-Feng; Takeda, Tatsuoki

    1999-01-01

    This paper presents a new method for two-dimensional image reconstruction by using a multilayer neural network. Multilayer neural networks are extensively investigated and practically applied to the solution of various problems such as inverse problems or time series prediction problems. By learning an input-output mapping from a set of examples, neural networks can be regarded as synthesizing an approximation of a multidimensional function (that is, solving the problem of hypersurface reconstruction, including smoothing and interpolation). From this viewpoint, neural networks are well suited to the solution of CT image reconstruction. Although the objective function of a neural network is conventionally composed of a sum of squared errors of the output data, we can define an objective function composed of a sum of residuals of an integral equation. By employing an appropriate line integral for this integral equation, we can construct a neural network that can be used for CT. We applied this method to some model problems and obtained satisfactory results. As it is not necessary to discretize the integral equation in this reconstruction method, application to problems with complicated geometrical shapes is also feasible. Moreover, in neural networks interpolation is performed quite smoothly; as a result, the inverse mapping can be achieved smoothly even in the presence of experimental and numerical errors. However, the use of the conventional back-propagation technique for optimization leads to an expensive computational cost. To overcome this drawback, 2nd order optimization methods or parallel computing will be applied in the future. (J.P.N.)

  20. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of the basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks are discussed in detail through illustrative examples, methods and generic applications. The extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank.

  1. Predicting physical time series using dynamic ridge polynomial neural networks.

    Directory of Open Access Journals (Sweden)

    Dhiya Al-Jumeily

    Full Text Available Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of higher order and feedforward neural networks used as benchmark techniques.
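
    The higher-order structure of a ridge polynomial network can be sketched as below; the weight shapes and the recurrent (dynamic) feedback of the DRPNN are omitted, so this only illustrates the sum of pi-sigma units of increasing order.

```python
# Hedged sketch of a ridge polynomial forward pass: a sigmoid of a sum of
# pi-sigma units, each unit being a product of linear ("ridge") functions of
# the input window.  Shapes and scales are illustrative assumptions.
import numpy as np

def ridge_polynomial(x, weights, biases):
    """weights[i] has shape (i+1, len(x)); biases[i] has shape (i+1,)."""
    total = 0.0
    for W, b in zip(weights, biases):
        total += np.prod(W @ x + b)      # i-th order pi-sigma unit
    return 1.0 / (1.0 + np.exp(-total))  # sigmoid output

rng = np.random.default_rng(1)
x = rng.normal(size=4)                   # e.g. a window of past time-series values
order = 3
weights = [rng.normal(scale=0.1, size=(i + 1, 4)) for i in range(order)]
biases = [rng.normal(scale=0.1, size=i + 1) for i in range(order)]
print(ridge_polynomial(x, weights, biases))
```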

  2. Comparison between sparsely distributed memory and Hopfield-type neural network models

    Science.gov (United States)

    Keeler, James D.

    1986-01-01

    The Sparsely Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of this dimension. However, the total number of stored bits per matrix element is the same in the two models, as well as for extended models with higher order interactions. The models are also compared in their ability to store sequences of patterns. The SDM is extended to include time delays so that contextual information can be used to cover sequences. Finally, it is shown how a generalization of the SDM allows storage of correlated input pattern vectors.

  3. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks.

    Science.gov (United States)

    Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun

    2016-10-13

    The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.
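
    A hedged sketch of the overall SCDNN structure (cluster the training data, train one sub-network per cluster, route test points to the nearest cluster's network); KMeans and a small MLP stand in for the paper's spectral clustering and deep network, and the data are synthetic.

```python
# Cluster-then-classify sketch, not a reproduction of the paper's pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

def fit_cluster_model(Xc, yc):
    """Train one sub-classifier per cluster; fall back to a constant label."""
    if len(np.unique(yc)) == 1:
        return ("const", int(yc[0]))
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    return ("mlp", clf.fit(Xc, yc))

def predict_one(model, x):
    kind, m = model
    return m if kind == "const" else int(m.predict(x.reshape(1, -1))[0])

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

k = 3
clusterer = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_train)
models = [fit_cluster_model(X_train[clusterer.labels_ == c],
                            y_train[clusterer.labels_ == c]) for c in range(k)]

# route each test sample to the classifier of its nearest cluster centre
test_clusters = clusterer.predict(X_test)
pred = np.array([predict_one(models[c], x) for c, x in zip(test_clusters, X_test)])
print("accuracy:", (pred == y_test).mean())
```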

  4. A Markovian event-based framework for stochastic spiking neural networks.

    Science.gov (United States)

    Touboul, Jonathan D; Faugeras, Olivier D

    2011-11-01

    In spiking neural networks, the information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input they receive and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of the spike times. We show that the firing times of the neurons in the networks constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived in such classical cases of neural networks as the linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.

  5. The effect of the neural activity on topological properties of growing neural networks.

    Science.gov (United States)

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns of the network's development; the process of creation and deletion of connections continues throughout the whole life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. By using a two-dimensional activity-dependent growth model we demonstrated the neural network growth process from disconnected neurons to fully connected networks. To investigate quantitatively the influence of the network's activity on its topological properties, we compared it with a randomly grown network that does not depend on the network's activity. Using methods from random graph theory for the analysis of the network's connection structure, it is shown that the growth in neural networks results in the formation of a well-known "small-world" network.
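
    For reference, the "small-world" signature mentioned above (high clustering combined with a short average path length) can be measured directly with networkx; the Watts-Strogatz graph below is only a stand-in for the grown networks of the paper.

```python
# Quick numeric illustration (not from the paper) of small-world metrics.
import networkx as nx

g = nx.connected_watts_strogatz_graph(n=500, k=10, p=0.1, seed=0)
print("average clustering:", nx.average_clustering(g))
print("average shortest path:", nx.average_shortest_path_length(g))
```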

  6. Neural-network classifiers for automatic real-world aerial image recognition

    Science.gov (United States)

    Greenberg, Shlomo; Guterman, Hugo

    1996-08-01

    We describe the application of the multilayer perceptron (MLP) network and a version of the adaptive resonance theory version 2-A (ART 2-A) network to the problem of automatic aerial image recognition (AAIR). The classification of aerial images, independent of their positions and orientations, is required for automatic tracking and target recognition. Invariance is achieved by the use of different invariant feature spaces in combination with supervised and unsupervised neural networks. The performance of neural-network-based classifiers in conjunction with several types of invariant AAIR global features, such as the Fourier-transform space, Zernike moments, central moments, and polar transforms, are examined. The advantages of this approach are discussed. The performance of the MLP network is compared with that of a classical correlator. The MLP neural-network correlator outperformed the binary phase-only filter (BPOF) correlator. It was found that the ART 2-A distinguished itself with its speed and its low number of required training vectors. However, only the MLP classifier was able to deal with a combination of shift and rotation geometric distortions.

  7. Enhancing neural-network performance via assortativity

    International Nuclear Information System (INIS)

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-01-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.
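
    Degree-degree correlations of the kind studied here can be quantified with the assortativity coefficient; the short check below (unrelated to the paper's attractor networks) shows how it is computed with networkx on two standard random graphs.

```python
# Illustrative check: the coefficient is positive when hubs preferentially
# connect to hubs, the regime in which the record reports enhanced robustness.
import networkx as nx

ba = nx.barabasi_albert_graph(1000, 3, seed=0)   # scale-free, typically disassortative
er = nx.gnp_random_graph(1000, 0.01, seed=0)     # Erdos-Renyi, roughly neutral
print("BA assortativity:", nx.degree_assortativity_coefficient(ba))
print("ER assortativity:", nx.degree_assortativity_coefficient(er))
```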

  8. Synchronization and Inter-Layer Interactions of Noise-Driven Neural Networks.

    Science.gov (United States)

    Yuniati, Anis; Mai, Te-Lun; Chen, Chi-Ming

    2017-01-01

    In this study, we used the Hodgkin-Huxley (HH) model of neurons to investigate the phase diagram of a developing single-layer neural network and that of a network consisting of two weakly coupled neural layers. These networks are noise driven and learn through the spike-timing-dependent plasticity (STDP) or the inverse STDP rules. We described how these networks transited from a non-synchronous background activity state (BAS) to a synchronous firing state (SFS) by varying the network connectivity and the learning efficacy. In particular, we studied the interaction between a SFS layer and a BAS layer, and investigated how synchronous firing dynamics was induced in the BAS layer. We further investigated the effect of the inter-layer interaction on a BAS to SFS repair mechanism by considering three types of neuron positioning (random, grid, and lognormal distributions) and two types of inter-layer connections (random and preferential connections). Among these scenarios, we concluded that the repair mechanism has the largest effect for a network with the lognormal neuron positioning and the preferential inter-layer connections.

  9. Lyapunov Functions to Caputo Fractional Neural Networks with Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Ravi Agarwal

    2018-05-01

    Full Text Available One of the main properties of solutions of nonlinear Caputo fractional neural networks is stability, and often the direct Lyapunov method is used to study stability properties (usually these Lyapunov functions do not depend on the time variable). In connection with the Lyapunov fractional method we present a brief overview of the most popular fractional order derivatives of Lyapunov functions among Caputo fractional delay differential equations. These derivatives are applied to various types of neural networks with variable coefficients and time-varying delays. We show that quadratic Lyapunov functions and their Caputo fractional derivatives are not applicable in some cases when one studies stability properties. Some sufficient conditions for stability of the equilibrium of nonlinear Caputo fractional neural networks with time-dependent transmission delays, time-varying self-regulating parameters of all units and time-varying functions of the connection between two neurons in the network are obtained. The cases of time-varying Lipschitz coefficients as well as non-Lipschitz activation functions are studied. We illustrate our theory on particular nonlinear Caputo fractional neural networks.
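
    For readers unfamiliar with the operator, the Caputo fractional derivative of order q that underlies all of the Lyapunov-type derivatives discussed above is the standard definition (not a condition specific to this paper):

```latex
\[
  {}^{C}_{t_0}\!D^{q}_{t}\, x(t)
    \;=\; \frac{1}{\Gamma(1-q)} \int_{t_0}^{t} \frac{x'(s)}{(t-s)^{q}}\, \mathrm{d}s,
  \qquad 0 < q < 1 .
\]
```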

  10. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.

  11. Neural Network Based Load Frequency Control for Restructuring ...

    African Journals Online (AJOL)

    Neural Network Based Load Frequency Control for Restructuring Power Industry. ... an artificial neural network (ANN) application of load frequency control (LFC) of a Multi-Area power system by using a neural network controller is presented.

  12. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision involved in predicting is very important, because a prediction can help to establish the forex value at a given time in the future and thereby reduce the risk of loss. The aim of this research is to predict the forex business using a neural network model with per-1-minute time series data, in order to determine the prediction accuracy and thus reduce the risk of running a forex business. The research method includes data collection, followed by training, learning and testing using a neural network. After evaluation, the results of this research show that the application of the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk in running a forex business. Keywords: prediction, forex, neural network.

  13. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  14. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    A multichannel optical correlator implements a shift-invariant, high-discrimination pattern-recognizing neural network based on the paradigm of the neocognitron. It was selected as the basic building block of this neural network because invariance under shifts is an inherent advantage of the Fourier optics included in optical correlators in general. The neocognitron is a conceptual electronic neural-network model for the recognition of visual patterns. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. The neural network is trained by use of characteristic features extracted from target images. The multichannel implementation enables parallel processing of a large number of selected features.

  15. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451

  16. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.
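
    The gradient-plus-flux picture described in the two records above is commonly written as a decomposition of the driving force of the stochastic dynamics in terms of the steady-state distribution; the sketch below gives the standard form for a constant diffusion matrix D (a summary of the published formulation, not a derivation):

```latex
\[
  \dot{x} = F(x) + \eta(t), \qquad
  U(x) = -\ln P_{ss}(x), \qquad
  F(x) = -\,D\,\nabla U(x) \;+\; \frac{J_{ss}(x)}{P_{ss}(x)} .
\]
% A purely gradient (equilibrium) system has J_ss = 0; a nonzero rotational
% flux J_ss is what drives the coherent oscillations on the ring attractor.
```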

  17. Artificial neural networks for dynamic monitoring of simulated-operating parameters of high temperature gas cooled engineering test reactor (HTTR)

    International Nuclear Information System (INIS)

    Seker, Serhat; Tuerkcan, Erdinc; Ayaz, Emine; Barutcu, Burak

    2003-01-01

    This paper addresses the problem of utilising artificial neural networks (ANNs) for detecting anomalies as well as monitoring physical parameters of a nuclear power plant during power operation in real time. Three different types of neural network algorithms were used, namely a feed-forward neural network (back-propagation, BP) and two types of recurrent neural networks (RNN). The data used in this paper were gathered from the simulation of the power operation of Japan's High Temperature Engineering Test Reactor (HTTR). For a wide range of power operation, 56 signals were generated by the reactor dynamic simulation code for several hours of normal power operation at different power ramps between 30 and 100% nominal power. The paper compares the outcomes of the different neural networks and presents the neural network system and the determination of physical parameters from the simulated operating data.

  18. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in the artificial neural network design, training and testing. The robust design of artificial neural networks methodology was used to design the neural network; this methodology ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time to calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  19. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    International Nuclear Information System (INIS)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R.

    2006-01-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in the artificial neural network design, training and testing. The robust design of artificial neural networks methodology was used to design the neural network; this methodology ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time to calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  20. Periodic oscillatory solution in delayed competitive-cooperative neural networks: A decomposition approach

    International Nuclear Information System (INIS)

    Yuan Kun; Cao Jinde

    2006-01-01

    In this paper, the problems of exponential convergence and the exponential stability of the periodic solution for a general class of non-autonomous competitive-cooperative neural networks are analyzed via the decomposition approach. The idea is to divide the connection weights into inhibitory or excitatory types and thereby to embed a competitive-cooperative delayed neural network into an augmented cooperative delay system through a symmetric transformation. Some simple necessary and sufficient conditions are derived to ensure the componentwise exponential convergence and the exponential stability of the periodic solution of the considered neural networks. These results generalize and improve the previous works, and they are easy to check and apply in practice

  1. Neural networks within multi-core optic fibers.

    Science.gov (United States)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  2. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, the artificial neural network (ANN) has made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using a genetic algorithm (GA) to optimize the topology of the ANN, but they suffer from some limitations: (1) the computation time required for training the ANN several times until the required average weights are reached, (2) the slowness of the GA optimization process, and (3) the fitness noise that appears in the optimization of the ANN. This research suggests new approaches to overcome these limitations and find optimal neural network architectures for learning particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case study of a real complex system. The proposed system has proved its significant performance compared to two common methods used in diagnostic applications.

  3. Exponential synchronization of delayed neutral-type neural networks with Lévy noise under non-Lipschitz condition

    Science.gov (United States)

    Ma, Shuo; Kang, Yanmei

    2018-04-01

    In this paper, the exponential synchronization of stochastic neutral-type neural networks with time-varying delay and Lévy noise under non-Lipschitz condition is investigated for the first time. Using the general Itô's formula and the nonnegative semi-martingale convergence theorem, we derive general sufficient conditions of two kinds of exponential synchronization for the drive system and the response system with adaptive control. Numerical examples are presented to verify the effectiveness of the proposed criteria.

  4. Hybrid information privacy system: integration of chaotic neural network and RSA coding

    Science.gov (United States)

    Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.

    2005-03-01

    Electronic mail is adopted worldwide; most of it is easily hacked by hackers. In this paper, we propose a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the private-security RSA algorithm with a specific chaos neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaos neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaos neural network series, encrypted by the RSA algorithm, can reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media, wrapped with convolutional error-correction codes for wireless 3rd generation cellular phones. The message media can be an arbitrary image. The pattern noise has to be considered during transmission, and it could affect/change the spatial-temporal keys. Since any change/modification of the chaotic typing or initial seed value of the chaos neural network series is not acceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robust and fault-tolerant properties of chaos neural networks (CNN) were proved by a field theory of Associative Memory by Szu in 1997. The 1-D chaos-generating nodes based on the logistic map with arbitrarily negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robust and fault-tolerance properties of CNN under additive noise and pattern noise. We also implement a private version of RSA coding and a chaos encryption process on messages.
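
    The role of the chaotic series as a reproducible key stream can be illustrated with a plain logistic map (an illustrative assumption; the paper's nodes use a modified map with negative slope a = p/q):

```python
# Illustrative sketch (not the paper's construction): a logistic-map chaotic
# sequence, seeded by a secret initial value, used as a reproducible key
# stream.  Both ends regenerate the same stream only if they share the exact
# seed and map parameter, which is what the RSA layer is meant to protect.
import numpy as np

def chaotic_keystream(seed, r=3.99, length=16, burn_in=100):
    x = seed
    for _ in range(burn_in):               # discard the transient
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(length):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)  # quantize to byte values
    return np.array(stream, dtype=np.uint8)

key = chaotic_keystream(seed=0.7319)
message = np.frombuffer(b"hello neural net", dtype=np.uint8)
cipher = message ^ key                     # XOR with the chaotic key stream
print(bytes(cipher ^ key))                 # decrypts back to the message
```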

  5. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers) and parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real-world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  6. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

    In this paper, a method for fault location in power distribution network is presented. The proposed method uses artificial neural network. In order to train the neural network, a series of specific characteristic are extracted from the recorded fault signals in relay. These characteristics...... components of the sequences as well as three-phase signals could be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the obtained inputs for the training of the neural network strongly depend on the fault angle, fault...... resistance, and fault location, the training data should be selected such that these differences are properly presented so that the neural network does not face any issues for identification. Therefore, selecting the signal processing function, data spectrum and subsequently, statistical parameters...

  7. Representation of linguistic form and function in recurrent neural networks

    NARCIS (Netherlands)

    Kadar, Akos; Chrupala, Grzegorz; Alishahi, Afra

    2017-01-01

    We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture

  8. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  9. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and its use as a data classifier. The concept is then presented in a second part using the mathematical approach in more detail, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.), corresponding to this lecture, is written in French and is provided in the proceedings of the book SOS 2008.

  10. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model for studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as the correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Those depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  11. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  12. Applications of self-organizing neural networks in virtual screening and diversity selection.

    Science.gov (United States)

    Selzer, Paul; Ertl, Peter

    2006-01-01

    Artificial neural networks provide a powerful technique for the analysis and modeling of nonlinear relationships between molecular structures and pharmacological activity. Many network types, including Kohonen and counterpropagation, also provide an intuitive method for the visual assessment of correspondence between the input and output data. This work shows how a combination of neural networks and radial distribution function molecular descriptors can be applied in various areas of industrial pharmaceutical research. These applications include the prediction of biological activity, the selection of screening candidates (cherry picking), and the extraction of representative subsets from large compound collections such as combinatorial libraries. The methods described have also been implemented as an easy-to-use Web tool, allowing chemists to perform interactive neural network experiments on the Novartis intranet.

  13. Neural network based method for conversion of solar radiation data

    International Nuclear Information System (INIS)

    Celik, Ali N.; Muneer, Tariq

    2013-01-01

    Highlights: ► Generalized regression neural network is used to predict the solar radiation on tilted surfaces. ► The above network, amongst many such as the multilayer perceptron, is the most successful one. ► The present neural network returns a relative mean absolute error value of 9.1%. ► The present model leads to a mean absolute error of estimate of 14.9 Wh/m². - Abstract: The receiving ends of solar energy conversion systems that generate heat or electricity from radiation are usually tilted at an optimum angle to increase the solar radiation incident on the surface. Solar irradiation data measured on horizontal surfaces are readily available for many locations where such solar energy conversion systems are installed. Various equations have been developed to convert solar irradiation data measured on a horizontal surface to that on a tilted one. These equations constitute the conventional approach. In this article, an alternative approach, a generalized regression type of neural network, is used to predict the solar irradiation on tilted surfaces, using the minimum number of variables involved in the physical process, namely the global solar irradiation on a horizontal surface and the declination and hour angles. Artificial neural networks have been successfully used in recent years for optimization, prediction and modeling in energy systems as an alternative to conventional modeling approaches. To show the merit of the presently developed neural network, the solar irradiation data predicted from the novel model were compared to those from the conventional approach (isotropic and anisotropic models), with strict reference to the irradiation data measured in the same location. The present neural network model was found to provide solar irradiation values closer to the measured ones than the conventional approach, with a mean absolute error value of 14.9 Wh/m². The other statistical values of coefficient of determination and relative mean absolute error also indicate the
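
    A generalized regression neural network reduces to a Gaussian-kernel-weighted average of the training targets, which can be sketched in a few lines; the inputs mirror the record (horizontal irradiation, declination and hour angles) but the data below are synthetic.

```python
# Hedged GRNN sketch: prediction is a kernel-weighted average of the training
# targets, with the spread sigma as the only tunable parameter.
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.3):
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.dot(w, y_train) / np.sum(w))

rng = np.random.default_rng(0)
# synthetic inputs: normalized horizontal irradiation, declination, hour angle
X_train = rng.uniform([0.0, -0.41, -1.5], [1.0, 0.41, 1.5], size=(500, 3))
# synthetic "tilted irradiation" target, purely for demonstration
y_train = X_train[:, 0] * (1.0 + 0.3 * np.cos(X_train[:, 2]))

x_new = np.array([0.6, 0.2, 0.5])
print(grnn_predict(X_train, y_train, x_new))
```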

  14. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated for W → qq̄, where W bosons are produced in pp̄ reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using 'intelligent' variables in instances when the amount of training instances is limited. (orig.)

  15. Inversion of a lateral log using neural networks

    International Nuclear Information System (INIS)

    Garcia, G.; Whitman, W.W.

    1992-01-01

    In this paper a technique using neural networks is demonstrated for the inversion of a lateral log. The lateral log is simulated by a finite difference method which in turn is used as an input to a backpropagation neural network. An initial guess earth model is generated from the neural network, which is then input to a Marquardt inversion. The neural network reacts to gross and subtle data features in actual logs and produces a response inferred from the knowledge stored in the network during a training process. The neural network inversion of lateral logs is tested on synthetic and field data. Tests using field data resulted in a final earth model whose simulated lateral is in good agreement with the actual log data

  16. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.

  17. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  18. Using neural networks for prediction of nuclear parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pereira Filho, Leonidas; Souto, Kelling Cabral, E-mail: leonidasmilenium@hotmail.com, E-mail: kcsouto@bol.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia do Rio de Janeiro (IFRJ), Rio de Janeiro, RJ (Brazil); Machado, Marcelo Dornellas, E-mail: dornemd@eletronuclear.gov.br [Eletrobras Termonuclear S.A. (GCN.T/ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil). Gerencia de Combustivel Nuclear

    2013-07-01

    The earliest work on artificial neural networks (ANNs) dates from 1943, when Warren McCulloch and Walter Pitts developed a study on the behavior of the biological neuron with the goal of creating a mathematical model. Some further work was done, and then the 1980s witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization, prediction and others, they have numerous applications in various areas, including the nuclear field. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform various core evaluations, for example core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type will be trained to predict these values. The main objective of this work is the development of a Multi-layer Perceptron (MLP) artificial neural network capable of predicting, in a very short time and with good accuracy, two important parameters used in the core reloading problem - Boron Concentration and Power Peaking Factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three models of networks are constructed using the same input data and providing the following outputs: 1 - Boron Concentration and Power Peaking Factor, 2 - Boron Concentration, and 3 - Power Peaking Factor. (author)

  19. Using neural networks for prediction of nuclear parameters

    International Nuclear Information System (INIS)

    Pereira Filho, Leonidas; Souto, Kelling Cabral; Machado, Marcelo Dornellas

    2013-01-01

    The earliest work on artificial neural networks (ANNs) dates from 1943, when Warren McCulloch and Walter Pitts developed a study on the behavior of the biological neuron with the goal of creating a mathematical model. Some further work was done, and then the 1980s witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization, prediction and others, they have numerous applications in various areas, including the nuclear field. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform various core evaluations, for example core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type will be trained to predict these values. The main objective of this work is the development of a Multi-layer Perceptron (MLP) artificial neural network capable of predicting, in a very short time and with good accuracy, two important parameters used in the core reloading problem - Boron Concentration and Power Peaking Factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three models of networks are constructed using the same input data and providing the following outputs: 1 - Boron Concentration and Power Peaking Factor, 2 - Boron Concentration, and 3 - Power Peaking Factor. (author)
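
    The first of the three model variants above (one network, two outputs) can be sketched as a multi-output MLP regressor; the loading-pattern encoding, value ranges and network size below are invented placeholders, not Angra 1 cycle-19 data.

```python
# Hedged sketch of a backpropagation MLP mapping a loading-pattern encoding to
# two outputs: boron concentration and power peaking factor.  Data are fake.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 40, size=(300, 20)).astype(float)    # fake loading-pattern codes
boron = 800 + 5.0 * X[:, :5].sum(axis=1) + rng.normal(0, 10, 300)
peak = 1.3 + 0.002 * X[:, 5:10].sum(axis=1) + rng.normal(0, 0.01, 300)
y = np.column_stack([boron, peak])

model = MLPRegressor(hidden_layer_sizes=(40, 20), max_iter=5000, random_state=0)
model.fit(X[:250], y[:250])
print(model.predict(X[250:253]))   # columns: boron concentration (ppm), peaking factor
```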

  20. Interpretable neural networks with BP-SOM

    NARCIS (Netherlands)

    Weijters, A.J.M.M.; Bosch, van den A.P.J.; Pobil, del A.P.; Mira, J.; Ali, M.

    1998-01-01

    Artificial Neural Networks (ANNs) are used successfully in industry and commerce. This is not surprising since neural networks are especially competitive for complex tasks for which insufficient domain-specific knowledge is available. However, interpretation of models induced by ANNs is often

  1. Adaptive nonlinear control using input normalized neural networks

    International Nuclear Information System (INIS)

    Leeghim, Henzeh; Seo, In Ho; Bang, Hyo Choong

    2008-01-01

    An adaptive feedback linearization technique combined with a neural network is addressed to control uncertain nonlinear systems. Neural network-based adaptive control theory has been widely studied. However, the stability analysis of the closed-loop system with the neural network is rather complicated and difficult to understand, and sometimes unnecessary assumptions are involved. In this work, such unnecessary assumptions for the stability analysis are avoided by using a neural network with an input normalization technique. The ultimate boundedness of the tracking error is simply proved by Lyapunov stability theory. A new simple update law for adaptive nonlinear control is derived by simplification of the input-normalized neural network, assuming the variation of the uncertain term is sufficiently small.

  2. TRIGA control rod position and reactivity transient Monitoring by Neural Networks

    International Nuclear Information System (INIS)

    Rosa, R.; Palomba, M.; Sepielli, M.

    2008-01-01

    Plant sensors drift or malfunction, and operator actions in nuclear reactor control can be supported by sensor on-line monitoring and data validation through a soft-computing process. On-line recalibration can often avoid manual calibration or replacement of drifting components. DSP requires prompt response to the modified conditions. Artificial Neural Networks (ANN) and fuzzy logic ensure: prompt response, a link between field measurements and physical system behaviour, interpretation of incoming data, and detection of discrepancies due to mis-calibration or sensor faults. An ANN is a system based on the operation of biological neural networks. Although computing is advancing day by day, there are certain tasks that a program written for a common microprocessor is unable to perform. A software implementation of an ANN has pros and cons. Pros: a neural network can perform tasks that a linear program cannot; when an element of the neural network fails, the network can continue without any problem because of its parallel nature; a neural network learns and does not need to be reprogrammed; it can be implemented in any application and without any problem. Cons: the architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated; large neural networks require high processing time; and the neural network needs training to operate. Three possibilities of training exist. Supervised learning: the network is trained by providing input and matching output patterns. Unsupervised learning: input patterns are not a priori classified and the system must develop its own representation of the input stimuli. Reinforcement learning: an intermediate form of the above two types of learning, in which the learning machine performs some action on the environment and gets a feedback response from the environment. Two TRIGA ANN applications are considered: control rod position and fuel temperature. The outcome obtained in this

  3. Neural Networks In Mining Sciences - General Overview And Some Representative Examples

    Science.gov (United States)

    Tadeusiewicz, Ryszard

    2015-12-01

    The many difficult problems that must now be addressed in mining sciences make us search for ever newer and more efficient computer tools that can be used to solve them. Among the numerous tools of this type are the neural networks presented in this article - which, although not yet widely used in mining sciences, are certainly worth consideration. Neural networks are a technique belonging to so-called artificial intelligence, and originate from attempts to model the structure and functioning of biological nervous systems. Initially constructed and tested exclusively out of scientific curiosity, as computer models of parts of the human brain, neural networks have become a surprisingly effective calculation tool in many areas: in technology, medicine, economics, and even social sciences. Unfortunately, they are relatively rarely used in mining sciences and mining technology. The article is intended to convince readers that neural networks can also be very useful in mining sciences. It contains information on how modern neural networks are built, how they operate and how one can use them. The preliminary discussion presented in this paper can help the reader form an opinion on whether this is a handy and useful tool, and what it might be useful for. Of course, the brief introduction to neural networks contained in this paper will not be enough for readers who are convinced by the arguments contained here and want to use neural networks. They will still need a considerable amount of detailed knowledge before they can independently create and build such networks and use them in practice. However, an interested reader who decides to try out the capabilities of neural networks will also find here links to references that will allow him to start exploring neural networks fast, and then work with this handy tool efficiently. This will be easy, because there are currently quite a few ready-made computer

  4. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high speed performance, but the neural network cannot be used to extract knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model.

  5. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  6. Artificial neural network modelling approach for a biomass gasification process in fixed bed gasifiers

    International Nuclear Information System (INIS)

    Mikulandrić, Robert; Lončar, Dražen; Böhning, Dorith; Böhme, Rene; Beckmann, Michael

    2014-01-01

    Highlights: • 2 different equilibrium models are developed and their performance is analysed. • Neural network prediction models for 2 different fixed bed gasifier types are developed. • The influence of different input parameters on neural network model performance is analysed. • A methodology for neural network model development for different gasifier types is described. • Neural network models are verified for various operating conditions based on measured data. - Abstract: The number of small and middle-scale biomass gasification combined heat and power plants as well as syngas production plants has increased significantly in the last decade, mostly due to extensive incentives. However, existing issues regarding syngas quality, process efficiency, emissions and environmental standards are preventing biomass gasification technology from becoming more economically viable. To address these issues, special attention is given to the development of mathematical models which can be used for process analysis or plant control purposes. The presented paper analyses the possibilities of neural networks to predict process parameters with high speed and accuracy. After a related literature review and measurement data analysis, different modelling approaches for process parameter prediction that can be used for on-line process control were developed and their performance was analysed. Neural network models showed good capability to predict biomass gasification process parameters with reasonable accuracy and speed. Measurement data for the model development, verification and performance analysis were derived from a biomass gasification plant operated by Technical University Dresden

  7. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data, together with the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error backpropagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
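
    A sketch of the described input layout (four technical indicators plus the current day of the week feeding a single-hidden-layer network that predicts next-day price direction) could look as follows. The specific indicators, window lengths, hidden-layer size and the use of scikit-learn's MLPClassifier (trained by error backpropagation) are assumptions rather than details from the record, and the price series is synthetic.

        # Hedged sketch: four assumed technical indicators plus day-of-week feed a
        # one-hidden-layer MLP that predicts next-day price direction.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        prices = np.cumsum(rng.normal(size=600)) + 100.0   # synthetic closing prices

        def momentum(p, k):
            return p[k:] - p[:-k]

        def sma_ratio(p, k):
            sma = np.convolve(p, np.ones(k) / k, mode="valid")
            return p[k - 1:] / sma

        n = len(prices) - 20                                # common aligned length
        feats = np.column_stack([
            momentum(prices, 1)[-n:],                       # 1-day momentum
            momentum(prices, 5)[-n:],                       # 5-day momentum
            sma_ratio(prices, 5)[-n:],                      # price / 5-day SMA
            sma_ratio(prices, 20)[-n:],                     # price / 20-day SMA
            np.arange(n) % 5,                               # day of week, encoded 0..4
        ])
        target = (np.diff(prices)[-n:] > 0).astype(int)     # direction of the last move

        # Pair each day's features with the following day's direction.
        X, y = feats[:-1], target[1:]
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        clf.fit(X, y)                                       # trained by backpropagation
        print("in-sample accuracy:", clf.score(X, y))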

  8. Recognition of power quality events by using multiwavelet-based neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Kaewarsa, Suriya; Attakitmongcol, Kitti; Kulworawanichpong, Thanatchai [School of Electrical Engineering, Suranaree University of Technology, 111 University Avenue, Muang District, Nakhon Ratchasima 30000 (Thailand)

    2008-05-15

    Recognition of power quality events by analyzing voltage and current waveform disturbances is a very important task for power system monitoring. This paper presents a novel approach for the recognition of power quality disturbances using the multiwavelet transform and neural networks. The proposed method employs the multiwavelet transform using multiresolution signal decomposition techniques working together with multiple neural networks using a learning vector quantization network as a powerful classifier. Test results for various transient events, such as voltage sag, swell, interruption, notching, impulsive transients, and harmonic distortion, show that the classifier can detect and classify the different power quality signal types efficiently. (author)

  9. A fuzzy neural network for sensor signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2000-01-01

    In this work, a fuzzy neural network is used to estimate the relevant sensor signal using other sensor signals. Noise components in the input signals to the fuzzy neural network are removed through the wavelet denoising technique. Principal component analysis (PCA) is used to reduce the dimension of the input space without losing a significant amount of information. A lower dimensional input space will also usually reduce the time necessary to train a fuzzy neural network. Principal component analysis also makes the selection of the input signals for the fuzzy neural network easier. The fuzzy neural network parameters are optimized by two learning methods. A genetic algorithm is used to optimize the antecedent parameters of the fuzzy neural network, and a least-squares algorithm is used to solve for the consequent parameters. The proposed algorithm was verified through application to the pressurizer water level and the hot-leg flowrate measurements in pressurized water reactors
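
    The following sketch illustrates two of the steps described above in isolation: PCA reduction of correlated sensor signals and a least-squares solution of a readout on the reduced inputs. The wavelet denoising, the fuzzy membership structure and the genetic-algorithm tuning of the antecedent parameters are omitted, and the plant data are synthetic placeholders.

        # Hedged sketch: PCA front end plus least-squares readout on synthetic
        # correlated "sensor" signals; dimensions and data are hypothetical.
        import numpy as np

        rng = np.random.default_rng(2)
        latent = rng.normal(size=(1000, 3))
        X = latent @ rng.normal(size=(3, 12)) + 0.05 * rng.normal(size=(1000, 12))
        y = latent @ np.array([1.0, -0.5, 0.3]) + 0.05 * rng.normal(size=1000)

        # PCA via SVD: keep the components explaining ~99% of the variance.
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        var_ratio = s**2 / np.sum(s**2)
        k = int(np.searchsorted(np.cumsum(var_ratio), 0.99) + 1)
        scores = Xc @ Vt[:k].T                               # reduced input space

        # Least-squares "consequent": a linear readout on the PCA scores.
        A = np.column_stack([scores, np.ones(len(scores))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        y_hat = A @ coef
        print("kept components:", k,
              " RMS error:", float(np.sqrt(np.mean((y_hat - y) ** 2))))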

  10. Analog design of a new neural network for optical character recognition.

    Science.gov (United States)

    Morns, I P; Dlay, S S

    1999-01-01

    An electronic circuit is presented for a new type of neural network, which gives a recognition rate of over 100 kHz. The network is used to classify handwritten numerals, presented as Fourier and wavelet descriptors, and has been shown to train far quicker than the popular backpropagation network while maintaining classification accuracy.

  11. Multistability in bidirectional associative memory neural networks

    International Nuclear Information System (INIS)

    Huang Gan; Cao Jinde

    2008-01-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of them are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons in the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results
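
    The 3^n / 2^n statement can be illustrated numerically for the smallest case n = 1 (one neuron per layer) with a delay-free continuous BAM and tanh activation, as in the hedged sketch below; the gain w = 2 and the initial conditions are illustrative choices, not taken from the Letter.

        # Hedged illustration for n = 1: the delay-free BAM
        #   dx/dt = -x + w*tanh(y),  dy/dt = -y + w*tanh(x)
        # with w = 2 has 3^1 = 3 equilibria, 2^1 = 2 of them stable.
        import numpy as np

        w, dt, steps = 2.0, 0.01, 5000

        def simulate(x0, y0):
            x, y = x0, y0
            for _ in range(steps):          # simple Euler integration
                x, y = (x + dt * (-x + w * np.tanh(y)),
                        y + dt * (-y + w * np.tanh(x)))
            return x, y

        # Different starts approach different equilibria; the third start lies on
        # the stable manifold of the unstable equilibrium at the origin.
        for x0, y0 in [(0.5, 0.5), (-0.5, -0.5), (1e-3, -1e-3)]:
            xf, yf = simulate(x0, y0)
            print(f"start ({x0:+.3f},{y0:+.3f}) -> equilibrium ({xf:+.3f},{yf:+.3f})")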

  12. Multistability in bidirectional associative memory neural networks

    Science.gov (United States)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of them are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons in the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  13. Evaluation of artificial neural network techniques for flow forecasting in the River Yangtze, China

    Directory of Open Access Journals (Sweden)

    C. W. Dawson

    2002-01-01

    Full Text Available While engineers have been quantifying rainfall-runoff processes since the mid-19th century, it is only in the last decade that artificial neural network models have been applied to the same task. This paper evaluates two neural networks in this context: the popular multilayer perceptron (MLP) and the radial basis function network (RBF). Using six-hourly rainfall-runoff data for the River Yangtze at Yichang (upstream of the Three Gorges Dam) for the period 1991 to 1993, it is shown that both neural network types can simulate river flows beyond the range of the training set. In addition, an evaluation of alternative RBF transfer functions demonstrates that the popular Gaussian function, often used in RBF networks, is not necessarily the ‘best’ function to use for river flow forecasting. Comparisons are also made between these neural networks and conventional statistical techniques: stepwise multiple linear regression, autoregressive moving average models and a zero order forecasting approach. Keywords: Artificial neural network, multilayer perceptron, radial basis function, flood forecasting

  14. A Neural Network Approach for Inverse Kinematic of a SCARA Manipulator

    Directory of Open Access Journals (Sweden)

    Panchanand Jha

    2014-07-01

    Full Text Available Inverse kinematics is one of the most interesting problems of industrial robotics. The inverse kinematics problem in robotics concerns the determination of joint angles for a desired Cartesian position of the end effector. It comprises the computation needed to find the joint angles for a given Cartesian position and orientation of the end effector in order to control a robot arm. There is no unique solution for the inverse kinematics, thus necessitating the application of appropriate predictive models from the soft computing domain. The artificial neural network is one such technique which can be gainfully used to yield acceptable results. This paper proposes a structured artificial neural network (ANN) model to find the inverse kinematics solution of a 4-dof SCARA manipulator. The ANN model used is a multi-layered perceptron neural network (MLPNN), wherein gradient-descent-type learning rules are applied. An attempt has been made to find the best ANN configuration for the problem. It is found that the multi-layered perceptron neural network gives the minimum mean square error.
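
    A scaled-down sketch of the idea: train an MLP on forward-kinematics samples so that it returns joint angles for a requested Cartesian position. A 2-DOF planar arm is used here so the geometry stays short (the record treats a 4-DOF SCARA), and the link lengths, joint-angle ranges, network size and the use of scikit-learn's MLPRegressor (a gradient-descent-trained multi-layer perceptron) are assumptions.

        # Hedged sketch: MLP inverse kinematics for an assumed 2-DOF planar arm.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        L1, L2 = 1.0, 0.7                        # assumed link lengths

        def forward_kinematics(theta):
            """End-effector (x, y) for joint angles theta = (t1, t2)."""
            t1, t2 = theta[:, 0], theta[:, 1]
            x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
            y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
            return np.column_stack([x, y])

        # Restrict t1 to (-pi/2, pi/2) and t2 to (0, pi) ("elbow-up") so the
        # inverse mapping is continuous and single-valued over the samples.
        rng = np.random.default_rng(3)
        theta = np.column_stack([rng.uniform(-np.pi / 2, np.pi / 2, 5000),
                                 rng.uniform(0.1, np.pi - 0.1, 5000)])
        xy = forward_kinematics(theta)

        # Train the MLP to map Cartesian positions back to joint angles.
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
        net.fit(xy, theta)

        # Check: run the predicted angles back through the forward model.
        test = np.array([[1.2, 0.4], [0.3, 1.1]])
        theta_hat = net.predict(test)
        print("reached:", forward_kinematics(theta_hat))
        print("target :", test)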

  15. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we supervisedly train neural networks to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show a remarkable success that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
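
    For reference, the discrete winding-number formula that the trained network is reported to learn can be evaluated directly for a standard two-band chiral model; the SSH-type Hamiltonian h(k) = (t1 + t2 cos k, t2 sin k) and the k-grid below are textbook choices, not taken from the Letter.

        # Hedged sketch: discrete winding number of an SSH-type chiral Hamiltonian.
        import numpy as np

        def winding_number(t1, t2, n_k=400):
            k = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
            hx = t1 + t2 * np.cos(k)
            hy = t2 * np.sin(k)
            phi = np.arctan2(hy, hx)                       # angle of h(k)
            dphi = np.diff(np.append(phi, phi[0]))         # close the loop
            dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi  # wrap increments to (-pi, pi]
            return int(round(dphi.sum() / (2.0 * np.pi))) # total angle swept / 2*pi

        print(winding_number(t1=1.0, t2=0.5))   # trivial phase: winding number 0
        print(winding_number(t1=0.5, t2=1.0))   # topological phase: winding number 1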

  16. Neural-network-directed alignment of optical systems using the laser-beam spatial filter as an example

    Science.gov (United States)

    Decker, Arthur J.; Krasowski, Michael J.; Weiland, Kenneth E.

    1993-01-01

    This report describes an effort at NASA Lewis Research Center to use artificial neural networks to automate the alignment and control of optical measurement systems. Specifically, it addresses the use of commercially available neural network software and hardware to direct alignments of the common laser-beam-smoothing spatial filter. The report presents a general approach for designing alignment records and combining these into training sets to teach optical alignment functions to neural networks and discusses the use of these training sets to train several types of neural networks. Neural network configurations used include the adaptive resonance network, the back-propagation-trained network, and the counter-propagation network. This work shows that neural networks can be used to produce robust sequencers. These sequencers can learn by example to execute the step-by-step procedures of optical alignment and also can learn adaptively to correct for environmentally induced misalignment. The long-range objective is to use neural networks to automate the alignment and operation of optical measurement systems in remote, harsh, or dangerous aerospace environments. This work also shows that when neural networks are trained by a human operator, training sets should be recorded, training should be executed, and testing should be done in a manner that does not depend on intellectual judgments of the human operator.

  17. A novel approach to error function minimization for feedforward neural networks

    International Nuclear Information System (INIS)

    Sinkus, R.

    1995-01-01

    Feedforward neural networks with error backpropagation are widely applied to pattern recognition. One general problem encountered with this type of neural network is the uncertainty of whether the minimization procedure has converged to a global minimum of the cost function. To overcome this problem, a novel approach to minimizing the error function is presented. It allows one to monitor the approach to the global minimum, and as an outcome several ambiguities related to the choice of free parameters of the minimization procedure are removed. (orig.)

  18. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    A hybrid of the two, called the Elman-Jordan (or multi-recurrent) neural network, is also being used. In this study, we evaluated the performance of these neural networks on three established benchmark time series prediction problems. Results from the experiments showed that the Jordan neural network performed significantly ...

  19. Quantum neural networks: Current status and prospects for development

    Science.gov (United States)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.

  20. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost nonlinear neural networks provide us with improved reconstruction and prediction capabilities for the near wall velocity fields. Based on these results advantages and drawbacks of both approaches are discussed with an outlook toward the development of near wall models for turbulence modeling and control

  1. Application of neural networks in CRM systems

    Directory of Open Access Journals (Sweden)

    Bojanowska Agnieszka

    2017-01-01

    Full Text Available The central aim of this study is to investigate how to apply artificial neural networks in Customer Relationship Management (CRM). The paper presents several business applications of neural networks in software systems designed to aid CRM, e.g. in deciding on the profitability of building a relationship with a given customer. Furthermore, a framework for a neural-network based CRM software tool is developed. Building beneficial relationships with customers is generating considerable interest among various businesses, and is often mentioned as one of the crucial objectives of enterprises, next to their key aim: to bring satisfactory profit. There is a growing tendency among businesses to invest in CRM systems, which together with an organisational culture of a company aid managing customer relationships. It is the sheer amount of gathered data as well as the need for constant updating and analysis of this breadth of information that may imply the suitability of neural networks for the application in question. Neural networks exhibit considerably higher computational capabilities than sequential calculations because the solution to a problem is obtained without the need for developing a special algorithm. In the majority of the presented CRM applications, neural networks are presented as a managerial decision-taking optimisation tool.

  2. Stability in Cohen Grossberg-type bidirectional associative memory neural networks with time-varying delays

    Science.gov (United States)

    Cao, Jinde; Song, Qiankun

    2006-07-01

    In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using the analysis method, inequality technique and the properties of an M-matrix, several novel sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point are derived. Moreover, the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature, and the boundedness and differentiability of the activation functions and differentiability of the time-varying delays are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.

  3. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.
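
    A minimal reservoir-computing (echo state network) setup of the kind analysed above is sketched below: a fixed random recurrent reservoir driven by an input, with only a linear readout fitted by ridge regression. The reservoir size, spectral radius and the toy target are illustrative assumptions and do not reproduce the Letter's fixed-point training task.

        # Hedged sketch: echo state network with a ridge-regression readout.
        import numpy as np

        rng = np.random.default_rng(4)
        N, T, washout = 200, 2000, 200

        # Random reservoir rescaled to spectral radius < 1 (echo state property).
        W = rng.normal(size=(N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
        W_in = rng.uniform(-0.5, 0.5, size=N)

        u = rng.uniform(-1, 1, size=T)                 # input sequence
        target = np.roll(np.sin(np.pi * u), 2)         # toy target: delayed nonlinearity

        # Drive the reservoir and collect its states.
        x = np.zeros(N)
        states = np.zeros((T, N))
        for t in range(T):
            x = np.tanh(W @ x + W_in * u[t])
            states[t] = x

        # Ridge-regression readout fitted on the post-washout states.
        S, y = states[washout:], target[washout:]
        ridge = 1e-4
        W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)
        pred = S @ W_out
        print("readout RMS error:", float(np.sqrt(np.mean((pred - y) ** 2))))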

  4. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  5. Evaluating neural networks and artificial intelligence systems

    Science.gov (United States)

    Alberts, David S.

    1994-02-01

    Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment documents. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.

  6. Compensating for Channel Fading in DS-CDMA Communication Systems Employing ICA Neural Network Detectors

    Directory of Open Access Journals (Sweden)

    David Overbye

    2005-06-01

    Full Text Available In this paper we examine the impact of channel fading on the bit error rate of a DS-CDMA communication system. The system employs detectors that incorporate neural networks implementing methods of independent component analysis (ICA), subspace estimation of channel noise, and Hopfield-type neural networks. The Rayleigh fading channel model is used. When employed in a Rayleigh fading environment, the ICA neural network detectors that give superior performance in a flat fading channel did not retain this superior performance. We then present a new method of compensating for channel fading based on the incorporation of priors in the ICA neural network learning algorithms. When the ICA neural network detectors were compensated using the incorporation of priors, they gave significantly better performance than the traditional detectors and the uncompensated ICA detectors. Keywords: CDMA, Multi-user Detection, Rayleigh Fading, Multipath Detection, Independent Component Analysis, Prior Probability Hebbian Learning, Natural Gradient
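
    The core ICA step inside such detectors, recovering independent user streams from linear mixtures of them, can be sketched as below. The bipolar sources, the random 2x2 mixing matrix and the use of scikit-learn's FastICA (rather than the Hebbian / natural-gradient learning rules named in the record) are illustrative assumptions.

        # Hedged sketch: blind separation of two bipolar symbol streams with FastICA.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(5)
        S = rng.choice([-1.0, 1.0], size=(2000, 2))        # two independent user streams
        A = np.array([[1.0, 0.6], [0.4, 1.0]])             # unknown mixing matrix
        X = S @ A.T + 0.05 * rng.normal(size=S.shape)      # observed mixtures + noise

        ica = FastICA(n_components=2, random_state=0)
        S_hat = ica.fit_transform(X)                       # estimates up to sign/scale

        # Match each recovered stream to the original user by correlation magnitude.
        for i in range(2):
            corrs = [abs(np.corrcoef(S_hat[:, i], S[:, k])[0, 1]) for k in range(2)]
            j = int(np.argmax(corrs))
            print(f"estimated source {i} matches user {j} with |corr| = {corrs[j]:.3f}")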

  7. Determining bank effects on economic growth: An artificial neural network analysis

    Directory of Open Access Journals (Sweden)

    Alex Senajon

    2016-01-01

    Full Text Available This study characterized the banking industry's influence on the growth of the economy. A neural network using the Multilayer Perceptron was used to define functions of Universal Bank, Cooperative Bank, and Thrift Bank as predictors of Gross Domestic Product growth. Using data series from 2003-2013, it was found that universal banks have been growing tremendously, taking huge shares of growth compared to the other two bank types. Meanwhile, the Gross Domestic Product was found to be growing steadily over the same period, with a significant spike in 2004. In addition, the neural network presents the contribution of the bank types to Gross Domestic Product, and it was found that the assets and capital of rural banks positively affect Gross Domestic Product growth. The sensitivity analysis of the Artificial Neural Network indicates rural bank assets as the most important predictor of all the chosen variables, followed by universal bank capital. However, the capital of thrift banks was found to show the least contribution to the growth of the Gross Domestic Product.

  8. Mode Choice Modeling Using Artificial Neural Networks

    OpenAIRE

    Edara, Praveen Kumar

    2003-01-01

    Artificial intelligence techniques have produced excellent results in many diverse fields of engineering. Techniques such as neural networks and fuzzy systems have found their way into transportation engineering. In recent years, neural networks are being used instead of regression techniques for travel demand forecasting purposes. The basic reason lies in the fact that neural networks are able to capture complex relationships and learn from examples and also able to adapt when new data becom...

  9. Neutron spectrometry with artificial neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A.; Iniguez de la Torre Bayo, M.P.; Barquero, R.; Arteaga A, T.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors; spectra from mathematical functions, as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of Artificial Neural Networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  10. Predicting methionine and lysine contents in soybean meal and fish meal using a group method of data handling-type neural network

    Energy Technology Data Exchange (ETDEWEB)

    Mottaghitalab, M.; Nikkhah, N.; Darmani-Kuhi, H.; López, S.; France, J.

    2015-07-01

    Artificial neural network models offer an alternative to linear regression analysis for predicting the amino acid content of feeds from their chemical composition. A group method of data handling-type neural network (GMDH-type NN), with an evolutionary method of genetic algorithm, was used to predict the methionine (Met) and lysine (Lys) contents of soybean meal (SBM) and fish meal (FM) from their proximate analyses (i.e. crude protein, crude fat, crude fibre, ash and moisture). A data set with 119 data lines for Met and 116 lines for Lys was used to develop GMDH-type NN models with two hidden layers. The data lines were divided into two groups to produce training and validation sets. The data sets were imported into the GEvoM software for training the networks. The predictive capability of the constructed models was evaluated by their ability to estimate the validation data sets accurately. A quantitative examination of goodness of fit for the predictive models was made using a number of precision, concordance and bias statistics. The statistical performance of the models developed revealed close agreement between observed and predicted Met and Lys contents for SBM and FM. The results of this study clearly illustrate the validity of GMDH-type NN models to estimate accurately the amino acid content of poultry feed ingredients from their chemical composition. (Author)

  11. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others]

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  12. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  13. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A.; Gallego, E.; Lorente, A.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  14. Automatic Seismic-Event Classification with Convolutional Neural Networks.

    Science.gov (United States)

    Bueno Rodriguez, A.; Titos Luzón, M.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.

    2017-12-01

    Active volcanoes exhibit a wide range of seismic signals, providing vast amounts of unlabelled volcano-seismic data that can be analyzed through the lens of artificial intelligence. However, obtaining high-quality labelled data is time-consuming and expensive. Deep neural networks can process data in their raw form, compute high-level features and provide a better representation of the input data distribution. These systems can be deployed to classify seismic data at scale, enhance current early-warning systems and build extensive seismic catalogs. In this research, we aim to classify spectrograms from seven different seismic events registered at "Volcán de Fuego" (Colima, Mexico) during four eruptive periods. Our approach is based on convolutional neural networks (CNNs), a sub-type of deep neural networks that can exploit the grid structure of the data. Volcano-seismic signals can be mapped into a grid-like structure using the spectrogram: a representation of the temporal evolution in terms of time and frequency. Spectrograms were computed from the data using Hamming windows of 4 seconds length, 2.5 seconds overlap and 128-point FFT resolution. Results are compared to deep neural networks, random forest and SVMs. Experiments show that CNNs can exploit temporal and frequency information, attaining a classification accuracy of 93%, similar to deep networks at 91%, but outperforming SVM and random forest. These results empirically show that CNNs are powerful models to classify a wide range of volcano-seismic signals and achieve good generalization. Furthermore, volcano-seismic spectrograms contain useful discriminative information for the CNN, as higher layers of the network combine high-level features computed for each frequency band, helping to detect simultaneous events in time. Being at the intersection of deep learning and geophysics, this research enables future studies of how CNNs can be used in volcano monitoring to accurately determine the detection and
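
    The spectrogram front end with the stated settings (4-second Hamming windows, 2.5 seconds of overlap, 128-point FFT) can be sketched as below; the 25 Hz sampling rate, chosen so that a 4 s window fits within 128 FFT points, and the synthetic chirp-plus-noise trace are assumptions, since the record does not give the sampling rate.

        # Hedged sketch: spectrogram input for a CNN, using the stated window settings.
        import numpy as np
        from scipy.signal import spectrogram

        fs = 25.0                                   # assumed sampling rate, Hz
        t = np.arange(0, 120, 1.0 / fs)             # two minutes of synthetic signal
        x = (np.sin(2 * np.pi * (1.0 + 0.04 * t) * t)
             + 0.3 * np.random.default_rng(6).normal(size=t.size))

        f, tt, Sxx = spectrogram(
            x,
            fs=fs,
            window="hamming",
            nperseg=int(4.0 * fs),                  # 4-second window
            noverlap=int(2.5 * fs),                 # 2.5-second overlap
            nfft=128,                               # 128-point FFT
        )
        # Sxx (frequency bins x time frames) is the image-like array a CNN would classify.
        print("frequency bins:", f.shape[0], " time frames:", tt.shape[0])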

  15. Artificial neural networks for plasma spectroscopy analysis

    International Nuclear Information System (INIS)

    Morgan, W.L.; Larsen, J.T.; Goldstein, W.H.

    1992-01-01

    Artificial neural networks have been applied to a variety of signal processing and image recognition problems. Of the several common neural models the feed-forward, back-propagation network is well suited for the analysis of scientific laboratory data, which can be viewed as a pattern recognition problem. The authors present a discussion of the basic neural network concepts and illustrate its potential for analysis of experiments by applying it to the spectra of laser produced plasmas in order to obtain estimates of electron temperatures and densities. Although these are high temperature and density plasmas, the neural network technique may be of interest in the analysis of the low temperature and density plasmas characteristic of experiments and devices in gaseous electronics

  16. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  17. Organization of Anti-Phase Synchronization Pattern in Neural Networks: What are the Key Factors?

    Science.gov (United States)

    Li, Dong; Zhou, Changsong

    2011-01-01

    Anti-phase oscillation has been widely observed in cortical neural networks. Elucidating the mechanism underlying the organization of anti-phase patterns is of significance for better understanding more complicated pattern formations in brain networks. In dynamical systems theory, the organization of anti-phase oscillation patterns has usually been considered to relate to time delay in coupling. This is consistent with conduction delays in real neural networks in the brain due to the finite propagation velocity of action potentials. However, other structural factors in cortical neural networks, such as modular organization (connection density) and the coupling types (excitatory or inhibitory), could also play an important role. In this work, we investigate the anti-phase oscillation pattern organized on a two-module network of either a neuronal cell model or a neural mass model, and analyze the impact of the conduction delay times, the connection densities, and the coupling types. Our results show that delay times and coupling types can play key roles in this organization. The connection densities may have an influence on the stability if an anti-phase pattern exists due to the other factors. Furthermore, we show that anti-phase synchronization of slow oscillations can be achieved with small delay times if there is interaction between slow and fast oscillations. These results are significant for further understanding more realistic spatiotemporal dynamics of cortico-cortical communications. PMID:22232576

  18. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 till the present. The neural network Fortran code used is available for download.

  19. Inverting radiometric measurements with a neural network

    Science.gov (United States)

    Measure, Edward M.; Yee, Young P.; Balding, Jeff M.; Watkins, Wendell R.

    1992-02-01

    A neural network scheme for retrieving remotely sensed vertical temperature profiles was applied to observed ground-based radiometer measurements. The neural network used microwave radiance measurements and surface measurements of temperature and pressure as inputs. Because the microwave radiometer is capable of measuring 4 oxygen channels at 5 different elevation angles (9, 15, 25, 40, and 90 degs), 20 microwave measurements are potentially available. Because these measurements have considerable redundancy, a neural network was experimented with, accepting as inputs microwave measurements taken at 53.88 GHz, 40 deg; 57.45 GHz, 40 deg; and 57.45 GHz, 90 deg. The primary test site was located at White Sands Missile Range (WSMR), NM. Results are compared with measurements made simultaneously with balloon-borne radiosonde instruments and with radiometric temperature retrievals made using more conventional retrieval algorithms. The neural network was trained using a Widrow-Hoff delta rule procedure. Functions of date, to include season dependence in the retrieval process, and functions of time, to include diurnal effects, were used as inputs to the neural network.
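
    The Widrow-Hoff delta rule mentioned above can be sketched on a single linear unit: after each sample, the weights move along the input scaled by the output error. The input dimension (three radiances plus surface temperature and pressure), learning rate and synthetic data below are illustrative assumptions.

        # Hedged sketch: Widrow-Hoff (LMS) delta rule on one linear unit.
        import numpy as np

        rng = np.random.default_rng(7)
        n_features = 5                                  # e.g. 3 radiances + T, p (assumed)
        true_w = rng.normal(size=n_features)

        X = rng.normal(size=(500, n_features))
        y = X @ true_w + 0.05 * rng.normal(size=500)    # target, e.g. temperature at one level

        w = np.zeros(n_features)
        lr = 0.01
        for epoch in range(50):
            for x_i, y_i in zip(X, y):
                err = y_i - w @ x_i                     # delta = target - output
                w += lr * err * x_i                     # Widrow-Hoff update

        print("max weight error:", float(np.max(np.abs(w - true_w))))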

  20. Modeling of methane emissions using artificial neural network approach

    Directory of Open Access Journals (Sweden)

    Stamenković Lidija J.

    2015-01-01

    Full Text Available The aim of this study was to develop a model for forecasting CH4 emissions at the national level, using Artificial Neural Networks (ANN) with broadly available sustainability, economic and industrial indicators as their inputs. ANN modeling was performed using two different types of architecture: a Backpropagation Neural Network (BPNN) and a General Regression Neural Network (GRNN). A conventional multiple linear regression (MLR) model was also developed in order to compare model performance and assess which model provides the best results. The ANN and MLR models were developed and tested using the same annual data for 20 European countries. The ANN model demonstrated very good performance, significantly better than the MLR model. It was shown that a forecast of CH4 emissions at the national level using the ANN model can be made successfully and accurately for a future period of up to two years, thereby opening the possibility of applying such a modeling technique to support the implementation of sustainable development strategies and environmental management policies. [Project of the Ministry of Science of the Republic of Serbia, no. 172007]

  1. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the jet data from miniaodsim (ak4 chs). The jet data was not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the jet data.

  2. Equivariant bifurcation in a coupled complex-valued neural network rings

    International Nuclear Information System (INIS)

    Zhang, Chunrui; Sui, Zhenzhang; Li, Hongpeng

    2017-01-01

    Highlights: • A complex-valued Hopfield-type network with Z4 × Z2 symmetry is discussed. • The spatio-temporal patterns of bifurcating periodic oscillations are obtained. • The oscillations can be in phase or anti-phase depending on the parameters and delay. - Abstract: Networks with interacting loops and time delays are common in physiological systems. In the past few years, the dynamic behaviors of coupled interacting-loop neural networks have been widely studied due to their extensive applications in pattern recognition and classification, signal processing, image processing, engineering optimization and animal locomotion, among other areas; see the references therein. In a large number of applications, complex signals often occur and complex-valued recurrent neural networks are preferable. In this paper, we study a complex-valued Hopfield-type network that consists of a pair of one-way rings, each with four neurons, and two-way coupling between the rings. We discuss the spatio-temporal patterns of bifurcating periodic oscillations by using the symmetric bifurcation theory of delay differential equations combined with the representation theory of Lie groups. The existence of multiple branches of bifurcating periodic solutions is obtained. We also found that the spatio-temporal patterns of bifurcating periodic oscillations alternate according to the change of the propagation time delay in the coupling, i.e., different ranges of delays correspond to different patterns of neural network oscillators. The oscillations of corresponding neurons in the two loops can be in phase or anti-phase depending on the parameters and delay. Some numerical simulations support our analysis results.

  3. An improved superconducting neural circuit and its application for a neural network solving a combinatorial optimization problem

    International Nuclear Information System (INIS)

    Onomi, T; Nakajima, K

    2014-01-01

    We have proposed a superconducting Hopfield-type neural network for solving the N-Queens problem, which is a combinatorial optimization problem. The sigmoid-shaped neuron output function is represented by the output of a coupled-SQUIDs gate consisting of a single-junction and a double-junction SQUID. One of the important factors for improving the network performance is an improvement of the threshold characteristic of the neuron circuit. In this paper, we report an improved design of coupled SQUID gates for a superconducting neural network. A step-like function with a steep threshold at the rising edge is desirable for a neuron circuit to solve a combinatorial optimization problem. A neuron circuit is composed of two coupled-SQUIDs gates with a cascade connection in order to obtain such characteristics. The designed neuron circuit is fabricated by a 2.5 kA/cm² Nb/AlOx/Nb process. The operation of a fabricated neuron circuit is experimentally demonstrated. Moreover, we discuss the performance of the neural network using the improved neuron circuits and delayed negative self-connections.

  4. A neural network approach to burst detection.

    Science.gov (United States)

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the issue of detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.

  5. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
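
    A compact sketch of the two-phase idea follows: simple competitive learning first pulls a set of centre vectors toward the descriptor data, and those centres then define Gaussian RBF features whose output weights are solved by least squares. The synthetic descriptors and activities, the number of centres, the kernel width and the learning-rate schedule are assumptions, not values from the study.

        # Hedged sketch: phase 1, competitive learning of RBF centres; phase 2,
        # least-squares fit of the RBF output weights on synthetic QSAR-like data.
        import numpy as np

        rng = np.random.default_rng(8)
        X = rng.normal(size=(300, 6))                   # hypothetical descriptors
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

        # Phase 1: simple competitive learning -- only the winning centre moves.
        n_centres = 20
        centres = X[rng.choice(len(X), n_centres, replace=False)].copy()
        for epoch in range(30):
            lr = 0.1 * (1.0 - epoch / 30.0)             # decaying learning rate
            for x in X[rng.permutation(len(X))]:
                winner = np.argmin(np.sum((centres - x) ** 2, axis=1))
                centres[winner] += lr * (x - centres[winner])

        # Phase 2: Gaussian RBF features on those centres, linear weights by lstsq.
        width = np.mean(np.linalg.norm(X[:, None, :] - centres[None], axis=2))
        Phi = np.exp(-np.sum((X[:, None, :] - centres[None]) ** 2, axis=2)
                     / (2 * width ** 2))
        Phi = np.column_stack([Phi, np.ones(len(X))])   # bias column
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        print("training RMS error:", float(np.sqrt(np.mean((Phi @ w - y) ** 2))))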

  6. Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks.

    Science.gov (United States)

    Babaei, Sepideh; Geranmayeh, Amir; Seyyedsalehi, Seyyed Ali

    2010-12-01

    The supervised learning of recurrent neural networks well-suited for prediction of protein secondary structures from the underlying amino acids sequence is studied. Modular reciprocal recurrent neural networks (MRR-NN) are proposed to model the strong correlations between adjacent secondary structure elements. Besides, a multilayer bidirectional recurrent neural network (MBR-NN) is introduced to capture the long-range intramolecular interactions between amino acids in formation of the secondary structure. The final modular prediction system is devised based on the interactive integration of the MRR-NN and the MBR-NN structures to arbitrarily engage the neighboring effects of the secondary structure types concurrent with memorizing the sequential dependencies of amino acids along the protein chain. The advanced combined network augments the percentage accuracy (Q₃) to 79.36% and boosts the segment overlap (SOV) up to 70.09% when tested on the PSIPRED dataset in three-fold cross-validation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  7. Novel global robust stability criterion for neural networks with delay

    International Nuclear Information System (INIS)

    Singh, Vimal

    2009-01-01

    A novel criterion for the global robust stability of Hopfield-type interval neural networks with delay is presented. An example illustrating the improvement of the present criterion over several recently reported criteria is given.

  8. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available One of the most important tasks in the machine learning area is classification, and neural networks are very important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in our paper. After summarizing the theoretical construction of the model, we conduct a simple and a practical weather forecasting experiment, which show that the recognizer accuracy reaches 100% and that this is promising.

  9. Artificial Neural Networks and the Mass Appraisal of Real Estate

    Directory of Open Access Journals (Sweden)

    Gang Zhou

    2018-03-01

    Full Text Available With the rapid development of computing, artificial intelligence and big data technology, artificial neural networks have become one of the most powerful machine learning algorithms. In practice, most applications of artificial neural networks use the back-propagation neural network and its variants. Beyond the back-propagation network, various neural networks have been developed to improve the performance of the standard models. Although neural networks are a well-known method in real estate research, there is considerable room for future work to enhance their performance. Some scholars combine genetic algorithms, geospatial information, support vector machine models or particle swarm optimization with artificial neural networks to appraise real estate, which complements the existing appraisal technology. The mass appraisal of real estate in this paper includes real estate valuation in transactions and tax base valuation for real estate holdings. In this study we focus on the theoretical development of artificial neural networks and the mass appraisal of real estate, the evolution of artificial neural network models and algorithmic improvements, and the practice and application of artificial neural networks, and we review the existing literature on artificial neural networks and the mass appraisal of real estate. Finally, we provide some suggestions for the mass appraisal of China's real estate.

  10. Numerical Solution of Fuzzy Differential Equations with Z-numbers Using Bernstein Neural Networks

    Directory of Open Access Journals (Sweden)

    Raheleh Jafari

    2017-01-01

    Full Text Available Uncertain nonlinear systems can be modeled with fuzzy equations or fuzzy differential equations (FDEs) by incorporating fuzzy set theory. Their solutions are applied to analyze many engineering problems. However, it is very difficult to obtain solutions of FDEs. In this paper, the solutions of FDEs are approximated by two types of Bernstein neural networks. Here, the uncertainties are in the sense of Z-numbers. Initially the FDE is transformed into four ordinary differential equations (ODEs) with Hukuhara differentiability. Then neural models are constructed with the structure of the ODEs. The neural networks are trained with a back-propagation method modified for Z-number variables. The theoretical analysis and simulation results show that these new models, Bernstein neural networks, are effective for estimating the solutions of FDEs based on Z-numbers.

  11. Introduction to neural networks with electric power applications

    International Nuclear Information System (INIS)

    Wildberger, A.M.; Hickok, K.A.

    1990-01-01

    This is an introduction to the general field of neural networks with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in neural networks and to recognize those which might impact on electric power engineering. Beginning with a brief discussion of natural and artificial neurons, the characteristics of neural networks in general and how they learn, neural networks are compared with other modeling tools such as simulation and expert systems in order to provide guidance in selecting appropriate applications. In the power industry, possible applications include plant control, dispatching, and maintenance scheduling. In particular, neural networks are currently being investigated for enhancements to the Thermal Performance Advisor (TPA) which General Physics Corporation (GP) has developed to improve the efficiency of electric power generation

  12. Wireless synapses in bio-inspired neural networks

    Science.gov (United States)

    Jannson, Tomasz; Forrester, Thomas; Degrood, Kevin

    2009-05-01

    Wireless (virtual) synapses represent a novel approach to bio-inspired neural networks that follow the infrastructure of the biological brain, except that biological (physical) synapses are replaced by virtual ones based on cellular telephony modeling. Such synapses are of two types: intracluster synapses are based on IR wireless ones, while intercluster synapses are based on RF wireless ones. Such synapses have three unique features, atypical of conventional artificial ones: very high parallelism (close to that of the human brain), very high reconfigurability (easy to kill and to create), and very high plasticity (easy to modify or upgrade). In this paper we analyze the general concept of wireless synapses with special emphasis on RF wireless synapses. Also, biological mammalian (vertebrate) neural models are discussed for comparison, and a novel neural lensing effect is discussed in detail.

  13. Controlling the dynamics of multi-state neural networks

    International Nuclear Information System (INIS)

    Jin, Tao; Zhao, Hong

    2008-01-01

    In this paper, we first analyze the distribution of local fields (DLF) which is induced by the memory patterns in the Q-Ising model. It is found that the structure of the DLF is closely correlated with the network dynamics and the system performance. However, the design rule adopted in the Q-Ising model, like the other rules adopted for multi-state neural networks with associative memories, cannot be applied to directly control the DLF for a given set of memory patterns, and thus cannot be applied to further study the relationships between the structure of the DLF and the dynamics of the network. We then extend a design rule, which was presented recently for designing binary-state neural networks, to make it suitable for designing general multi-state neural networks. This rule is able to control the structure of the DLF as expected. We show that controlling the DLF not only can affect the dynamic behaviors of the multi-state neural networks for a given set of memory patterns, but also can improve the storage capacity. With the change of the DLF, the network shows very rich dynamic behaviors, such as the 'chaos phase', the 'memory phase', and the 'mixture phase'. These dynamic behaviors are also observed in the binary-state neural networks; therefore, our results imply that they may be the universal behaviors of feedback neural networks

  14. Face recognition based on improved BP neural network

    Directory of Open Access Journals (Sweden)

    Yue Gaili

    2017-01-01

    Full Text Available In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract the features of the equalized image and obtain its principal components. These features are used to train the BP neural network. An improved weight-adjustment method is used to train the network, because the conventional BP algorithm suffers from slow convergence, a tendency to fall into local minima and a lengthy training process. Finally, the trained BP neural network classifies and identifies the test face images, and the recognition rate is obtained. Simulation experiments on face images from the ORL database show that the improved BP neural network face recognition method can effectively improve the recognition rate.
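
    A rough scikit-learn sketch of the pipeline outlined above (histogram equalization, PCA feature extraction, then a neural network classifier). A standard MLP stands in for the paper's improved BP weight-adjustment rule, and random arrays stand in for the ORL face images, so the sketch only illustrates the data flow.

      # Pipeline: histogram equalization -> PCA features -> neural network classifier.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier

      def hist_equalize(img):
          """Histogram-equalize a uint8 grayscale image."""
          hist = np.bincount(img.ravel(), minlength=256)
          cdf = np.cumsum(hist) / img.size
          return (cdf[img] * 255).astype(np.uint8)

      rng = np.random.default_rng(0)
      images = rng.integers(0, 256, size=(120, 32, 32), dtype=np.uint8)  # placeholder faces
      labels = rng.integers(0, 10, size=120)                             # placeholder person IDs

      X = np.stack([hist_equalize(im).ravel() for im in images]).astype(float)
      X_pca = PCA(n_components=40).fit_transform(X)        # principal components

      clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
      clf.fit(X_pca, labels)
      print("training accuracy:", clf.score(X_pca, labels))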

  15. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

    Full Text Available One of the essential factors that affect the performance of Artificial Neural Networks is the learning algorithm. The performance of Multilayer Feed Forward Artificial Neural Networks in image compression using different learning algorithms is examined in this paper. Based on Gradient Descent, Conjugate Gradient and Quasi-Newton techniques, three different error back-propagation algorithms have been developed for training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the Quasi-Newton based algorithm performs better than the other two algorithms.

  16. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  17. Neutron spectrometry using artificial neural networks

    International Nuclear Information System (INIS)

    Vega-Carrillo, Hector Rene; Martin Hernandez-Davila, Victor; Manzanares-Acuna, Eduardo; Mercado Sanchez, Gema A.; Pilar Iniguez de la Torre, Maria; Barquero, Raquel; Palacios, Francisco; Mendez Villafane, Roberto; Arteaga Arteaga, Tarcicio; Manuel Ortiz Rodriguez, Jose

    2006-01-01

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions as well as few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing was carried out using the Matlab ( R) program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem

  18. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, which can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that much less neuron resources are required in QPNN to obtain a good result than the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  19. Neural Network Algorithm for Particle Loading

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given

  20. Memory in Neural Networks and Glasses

    NARCIS (Netherlands)

    Heerema, M.

    2000-01-01

    The thesis models a neural network in a way which, at essential points, is biologically realistic. In a biological context, the changes of the synapses of the neural network are most often described by what is called `Hebb's learning rule'. On careful analysis it is, in fact, nothing but a

  1. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of the term in the set-valued map of the differential inclusion. In theory, the proposed network converges to the optimal solution set of the given problem. Furthermore, some numerical experiments show the effectiveness of the proposed network.
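
    A small numpy illustration of the underlying idea, assuming a Huber-like smoothing of the l1 term and a box constraint chosen only for illustration; the paper's continuous-time circuit network is replaced here by a discrete projected-gradient loop.

      # Replace the nonsmooth l1 term by a smooth (Huber-like) approximation and
      # run a projected gradient iteration on a synthetic sparse recovery problem.
      import numpy as np

      rng = np.random.default_rng(1)
      m, n, k = 40, 100, 5
      A = rng.normal(size=(m, n))
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      b = A @ x_true

      lam, mu = 0.05, 1e-3                       # sparsity weight, smoothing parameter
      step = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from the spectral norm of A

      def grad_smooth_l1(x):
          # gradient of the Huber smoothing of |x| with parameter mu
          return np.where(np.abs(x) <= mu, x / mu, np.sign(x))

      x = np.zeros(n)
      for _ in range(2000):
          g = A.T @ (A @ x - b) + lam * grad_smooth_l1(x)
          x = np.clip(x - step * g, -10.0, 10.0)   # projection onto the box [-10, 10]^n

      print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))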

  2. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  3. Self-organized critical neural networks

    International Nuclear Information System (INIS)

    Bornholdt, Stefan; Roehl, Torsten

    2003-01-01

    A mechanism for self-organization of the degree of connectivity in model neural networks is studied. Network connectivity is regulated locally on the basis of an order parameter of the global dynamics, which is estimated from an observable at the single synapse level. This principle is studied in a two-dimensional neural network with randomly wired asymmetric weights. In this class of networks, network connectivity is closely related to a phase transition between ordered and disordered dynamics. A slow topology change is imposed on the network through a local rewiring rule motivated by activity-dependent synaptic development: Neighbor neurons whose activity is correlated, on average develop a new connection while uncorrelated neighbors tend to disconnect. As a result, robust self-organization of the network towards the order disorder transition occurs. Convergence is independent of initial conditions, robust against thermal noise, and does not require fine tuning of parameters

  4. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  5. Synchronization of chaotic systems and identification of nonlinear systems by using recurrent hierarchical type-2 fuzzy neural networks.

    Science.gov (United States)

    Mohammadzadeh, Ardashir; Ghaemi, Sehraneh

    2015-09-01

    This paper proposes a novel approach for training of proposed recurrent hierarchical interval type-2 fuzzy neural networks (RHT2FNN) based on the square-root cubature Kalman filters (SCKF). The SCKF algorithm is used to adjust the premise part of the type-2 FNN and the weights of defuzzification and the feedback weights. The recurrence property in the proposed network is the output feeding of each membership function to itself. The proposed RHT2FNN is employed in the sliding mode control scheme for the synchronization of chaotic systems. Unknown functions in the sliding mode control approach are estimated by RHT2FNN. Another application of the proposed RHT2FNN is the identification of dynamic nonlinear systems. The effectiveness of the proposed network and its learning algorithm is verified by several simulation examples. Furthermore, the universal approximation of RHT2FNNs is also shown. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Tensor Basis Neural Network v. 1.0 (beta)

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-28

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.
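
    A hedged PyTorch sketch of a tensor-basis architecture of this kind: a small network maps scalar invariants to coefficients, and the prediction is formed as a linear combination of known basis tensors, so it lies on the invariant basis by construction. The layer sizes, number of basis tensors and training data below are made up for illustration and do not reflect the released package.

      import torch
      import torch.nn as nn

      class TensorBasisNet(nn.Module):
          def __init__(self, n_invariants=5, n_basis=10, hidden=32):
              super().__init__()
              # network that predicts one coefficient per basis tensor
              self.coeff_net = nn.Sequential(
                  nn.Linear(n_invariants, hidden), nn.Tanh(),
                  nn.Linear(hidden, hidden), nn.Tanh(),
                  nn.Linear(hidden, n_basis),
              )

          def forward(self, invariants, basis):
              # invariants: (batch, n_invariants); basis: (batch, n_basis, 3, 3)
              g = self.coeff_net(invariants)                   # (batch, n_basis)
              return torch.einsum("bi,bixy->bxy", g, basis)    # combination of basis tensors

      model = TensorBasisNet()
      invariants = torch.randn(8, 5)        # placeholder scalar invariants
      basis = torch.randn(8, 10, 3, 3)      # placeholder basis tensors
      target = torch.randn(8, 3, 3)         # placeholder target tensors

      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss = nn.functional.mse_loss(model(invariants, basis), target)
      loss.backward()
      opt.step()
      print("one training step done, loss =", float(loss))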

  7. Radiometric calibration of digital cameras using neural networks

    Science.gov (United States)

    Grunwald, Michael; Laube, Pascal; Schall, Martin; Umlauf, Georg; Franz, Matthias O.

    2017-08-01

    Digital cameras are used in a large variety of scientific and industrial applications. For most applications, the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to physical, electronic and optical effects that lead to errors and noise in the raw image. Temperature-dependent dark current, read noise, optical vignetting or different sensitivities of individual pixels are examples of such effects. The purpose of radiometric calibration is to improve the quality of the resulting images by reducing the influence of the various types of errors on the measured data and thus improving the quality of the overall application. In this context, we present a specialized neural network architecture for radiometric calibration of digital cameras. Neural networks are used to learn a temperature- and exposure-dependent mapping from observed gray-scale values to true light intensities for each pixel. In contrast to classical flat-fielding, neural networks have the potential to model nonlinear mappings which allows for accurately capturing the temperature dependence of the dark current and for modeling cameras with nonlinear sensitivities. Both scenarios are highly relevant in industrial applications. The experimental comparison of our network approach to classical flat-fielding shows a consistently higher reconstruction quality, also for linear cameras. In addition, the calibration is faster than previous machine learning approaches based on Gaussian processes.
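
    A simplified sketch of the calibration idea, assuming a per-pixel regression from (observed gray value, sensor temperature, exposure time) to true intensity; the synthetic "camera" model below (temperature-dependent dark current, mildly nonlinear response) is invented for illustration only.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n = 5000
      intensity = rng.uniform(0, 1, n)            # true light intensity (target)
      temperature = rng.uniform(20, 60, n)        # sensor temperature [deg C]
      exposure = rng.uniform(0.01, 0.1, n)        # exposure time [s]

      # toy camera: temperature-dependent dark current plus a mildly nonlinear response
      dark = 0.002 * exposure * np.exp(0.04 * temperature)
      gray = np.clip(intensity ** 0.9 * exposure * 10 + dark, 0, 2)

      X = np.column_stack([gray, temperature, exposure])
      model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
      model.fit(X, intensity)
      print("R^2 on training data:", model.score(X, intensity))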

  8. Neural Network on Photodegradation of Octylphenol using Natural and Artificial UV Radiation

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2011-09-01

    Full Text Available The present paper comes up with an experimental design meant to point out the factors interfering in octylphenol's degradation in surface waters under solar radiation, underlining each factor's influence on the process observable (concentration of p-octylphenol). Multiple linear regression analysis and an artificial neural network (Multi-Layer Perceptron type) were applied in order to obtain a mathematical model capable of explaining the action of UV light upon synthetic solutions of OP in ultra-pure water (MilliQ type). The neural network proves to be the most efficient method in predicting the evolution of OP concentration during the photodegradation process. Thus, the coefficient of determination in the neural network's case is almost double that of the regression analysis.

  9. Artificial neural network application for predicting soil distribution coefficient of nickel

    International Nuclear Information System (INIS)

    Falamaki, Amin

    2013-01-01

    The distribution (or partition) coefficient (Kd) is an applicable parameter for modeling contaminant and radionuclide transport as well as risk analysis. Selection of this parameter may cause significant error in predicting the impacts of contaminant migration or site-remediation options. In this regard, various models have been presented to predict Kd values for different contaminants, especially heavy metals and radionuclides. In this study, an artificial neural network (ANN) is used to present a simplified model for predicting the Kd of nickel. The main objective is to develop a more accurate model with a minimal number of parameters, which can be determined experimentally or selected by reviewing different studies. In addition, the effects of training as well as the type of network are considered. The Kd value of Ni is strongly dependent on soil pH, and mathematical relationships between pH and the Kd of nickel have been presented recently. In this study, the same database used for these models was used to verify that neural networks may be a more useful tool for predicting Kd. Two different types of ANN, multilayer perceptron and radial basis function, were used to investigate the effect of the network geometry on the results. Each network was trained with 80% and 90% of the data and tested on the remaining 20% and 10%. The results of the networks were then compared with the results of the mathematical models. Although the networks were trained with only 80% and 90% of the data, the results show that all the networks predict with higher accuracy than the mathematical models, which were derived from 100% of the data. More training of a network increases its accuracy. The multilayer perceptron network used in this study predicts better than the radial basis function network. - Highlights: ► Simplified models for predicting the Kd of nickel are presented using artificial neural networks. ► Multilayer perceptron and radial basis function networks used to predict the Kd of nickel in

  10. Complex-valued neural networks advances and applications

    CERN Document Server

    Hirose, Akira

    2013-01-01

    Presents the latest advances in complex-valued neural networks by demonstrating the theory in a wide range of applications Complex-valued neural networks is a rapidly developing neural network framework that utilizes complex arithmetic, exhibiting specific characteristics in its learning, self-organizing, and processing dynamics. They are highly suitable for processing complex amplitude, composed of amplitude and phase, which is one of the core concepts in physical systems to deal with electromagnetic, light, sonic/ultrasonic waves as well as quantum waves, namely, electron and

  11. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... an OCR using Neural Network classifier preceded by a set of preprocessing ... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantage and disadvantages of each technique. In [9], Khemiri ...

  12. New numerical approximation for solving fractional delay differential equations of variable order using artificial neural networks

    Science.gov (United States)

    Zúñiga-Aguilar, C. J.; Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Alvarado-Martínez, V. M.; Romero-Ugalde, H. M.

    2018-02-01

    In this paper, we approximate the solution of fractional differential equations with delay using a new approach based on artificial neural networks. We consider fractional differential equations of variable order with the Mittag-Leffler kernel in the Liouville-Caputo sense. With this new neural network approach, an approximate solution of the fractional delay differential equation is obtained. Synaptic weights are optimized using the Levenberg-Marquardt algorithm. The neural network effectiveness and applicability were validated by solving different types of fractional delay differential equations, linear systems with delay, nonlinear systems with delay and a system of differential equations, for instance, the Newton-Leipnik oscillator. The solution of the neural network was compared with the analytical solutions and the numerical simulations obtained through the Adams-Bashforth-Moulton method. To show the effectiveness of the proposed neural network, different performance indices were calculated.

  13. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  14. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is addressed to investigators in visual pattern recognition, Artificial Neural Networks and related disciplines. The document also describes the MemBrain application environment, a powerful and easy-to-use neural network editor and simulator supporting ANNs.

  15. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
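
    A toy illustration of the "decoding as classification" framing, shrunk to a 3-qubit repetition code so that it fits in a few lines; it is not a surface-code decoder and does not reproduce the authors' network, only the idea of training a feedforward classifier to map syndromes to the most likely correction.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      p = 0.1                                    # independent bit-flip probability
      n_samples = 20000
      errors = (rng.uniform(size=(n_samples, 3)) < p).astype(int)   # flip pattern per sample

      # Parity checks of the repetition code: s1 = e0 XOR e1, s2 = e1 XOR e2
      syndromes = np.column_stack([errors[:, 0] ^ errors[:, 1],
                                   errors[:, 1] ^ errors[:, 2]])
      # Label = index of a flipped qubit (3 means "no error"); the classifier learns
      # the most frequent label per syndrome, i.e. a maximum-likelihood-style decode.
      labels = np.where(errors.sum(axis=1) == 0, 3, errors.argmax(axis=1))

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
      clf.fit(syndromes, labels)
      print("syndrome (1,0) decoded as correction on qubit:", clf.predict([[1, 0]])[0])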

  16. Artificial Neural Networks For Hadron Hadron Cross-sections

    International Nuclear Information System (INIS)

    ELMashad, M.; ELBakry, M.Y.; Tantawy, M.; Habashy, D.M.

    2011-01-01

    In recent years artificial neural networks (ANN ) have emerged as a mature and viable framework with many applications in various areas. Artificial neural networks theory is sometimes used to refer to a branch of computational science that uses neural networks as models to either simulate or analyze complex phenomena and/or study the principles of operation of neural networks analytically. In this work a model of hadron- hadron collision using the ANN technique is present, the hadron- hadron based ANN model calculates the cross sections of hadron- hadron collision. The results amply demonstrate the feasibility of such new technique in extracting the collision features and prove its effectiveness

  17. Foreign currency rate forecasting using neural networks

    Science.gov (United States)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP and DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecast solely from past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
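
    A small sketch of forecasting from lagged rate differences, the input representation preferred above; the random-walk series is a synthetic placeholder for historical USD-GBP data and the network size is arbitrary.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      rate = 1.5 + np.cumsum(0.002 * rng.normal(size=1000))   # synthetic daily rates
      diff = np.diff(rate)                                    # rate differences as inputs

      lags = 5
      X = np.array([diff[i:i + lags] for i in range(len(diff) - lags)])
      y = diff[lags:]                                         # next-day difference

      split = 800
      model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      model.fit(X[:split], y[:split])
      pred = model.predict(X[split:])
      print("out-of-sample MSE:", np.mean((pred - y[split:]) ** 2))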

  18. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  19. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  20. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network has been introduced as an efficient classifier. Unless the initial smoothing parameter value is close to the optimal one, however, it suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic cost and its derivative take continuous values, which makes it possible to exploit the fast convergence of logarithmic learning. Due to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution to the time-requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can therefore be considered an efficient way of reducing the training-time requirement of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Science.gov (United States)

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  2. Thermoelastic steam turbine rotor control based on neural network

    Science.gov (United States)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. At the same time, neural networks are algorithms which can be implemented on PLC controllers. This allows for the application of neural networks to control steam turbine stress in industrial power plants.
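
    A minimal sketch of the NARX idea, predicting the current output from lagged outputs and lagged exogenous inputs; the first-order system below is a synthetic stand-in for the FE-model training data, not the actual rotor model.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n = 2000
      u = rng.uniform(0, 1, n)                       # exogenous input (e.g. a temperature signal)
      y = np.zeros(n)
      for t in range(1, n):                          # simple nonlinear first-order response
          y[t] = 0.9 * y[t - 1] + 0.1 * np.tanh(3 * u[t - 1]) + 0.01 * rng.normal()

      # NARX regressors: [y(t-1), y(t-2), u(t-1), u(t-2)] -> y(t)
      T = np.arange(2, n)
      X = np.column_stack([y[T - 1], y[T - 2], u[T - 1], u[T - 2]])
      target = y[T]

      model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
      model.fit(X[:1500], target[:1500])
      print("validation R^2:", model.score(X[1500:], target[1500:]))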

  3. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the neural network attractive. A neural network is an information processing system modeled on the structure of the dynamic process. It can solve the complex/nonlinear problems quickly once trained by operating on problems using an interconnected number...

  4. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: 2008-Feb.

  5. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray-level circles, in an attempt to make the neural network learn the inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian blurring operation and the addition of noise to the pixels at pre-established rates. The training process uses the degraded image as input and the non-degraded image as output for supervised learning. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference of the approach from existing ones is that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering windows of different sizes around a pixel. In the generalization phase the neural network is exposed to degraded indoor, outdoor and satellite images, following the same steps used for the artificial circle images.
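
    A compact sketch of the multiscale input idea, assuming two window sizes concatenated per pixel and a synthetic blurred, noisy disc image in place of the degraded circle set.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      clean = np.zeros((64, 64))
      yy, xx = np.mgrid[:64, :64]
      clean[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 1.0      # a gray-level disc
      degraded = gaussian_filter(clean, sigma=1.5) + 0.05 * rng.normal(size=clean.shape)

      def patches(img, r):
          """All (2r+1)x(2r+1) windows, flattened, for interior pixels."""
          return np.array([img[i - r:i + r + 1, j - r:j + r + 1].ravel()
                           for i in range(3, 61) for j in range(3, 61)])

      X = np.hstack([patches(degraded, 1), patches(degraded, 3)])   # two scales per pixel
      y = clean[3:61, 3:61].ravel()                                 # clean centre pixels

      model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
      model.fit(X, y)
      print("training R^2:", model.score(X, y))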

  6. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  7. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  8. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  9. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied. A suitably 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  10. Neural Network to Solve Concave Games

    OpenAIRE

    Liu, Zixin; Wang, Nengfa

    2014-01-01

    This paper is concerned with a neural network method for solving concave games. Combined with the variational inequality, the Ky Fan inequality and the projection equation, concave games are transformed into a neural network model. On the basis of Lyapunov stability theory, some stability results are also given. Finally, simulation results for two classic games are given to illustrate the theoretical results.

  11. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model was widely used for the bushing model in vehicle suspension systems, it could not express the nonlinear characteristics of bushing in terms of the amplitude and the frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under the unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining linear and neural network is suggested. A linear model was employed to represent linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, where sine excitation with different frequencies and amplitudes is applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to the linear model under several maneuvers
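
    A rough sketch of the hybrid idea described above: a linear stiffness/damping term fitted by least squares plus a small neural network trained on the residual. The sine-excitation data and the nonlinear term standing in for hysteresis are synthetic, so the sketch only mirrors the structure of the model, not the rubber-test results.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 4000)
      x = 0.01 * np.sin(2 * np.pi * 3 * t)                 # bushing displacement [m]
      v = np.gradient(x, t)                                # velocity [m/s]

      k_true, c_true = 2.0e5, 1.5e3
      force = k_true * x + c_true * v + 300 * np.tanh(800 * x)   # toy nonlinear extra term

      # 1) fit the linear stiffness/damping part by least squares
      A = np.column_stack([x, v])
      (k_fit, c_fit), *_ = np.linalg.lstsq(A, force, rcond=None)
      residual = force - A @ [k_fit, c_fit]

      # 2) train a small neural network on the residual force
      net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
      net.fit(A, residual)

      force_hat = A @ [k_fit, c_fit] + net.predict(A)
      print("hybrid model RMS error [N]:", np.sqrt(np.mean((force - force_hat) ** 2)))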

  12. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Full Text Available Because of the scale and the various shapes of down in the image, it is difficult for traditional image recognition methods to correctly recognize the type of down image and reach the required recognition accuracy, even for the Traditional Convolutional Neural Network (TCNN). To deal with these problems, a Deep Convolutional Neural Network (DCNN) for down image classification is constructed, and a new weight initialization method is proposed. First, the salient regions of a down image are cut from the image using a visual saliency model. Then, these salient regions are used to train a sparse autoencoder and obtain a collection of convolutional filters that accord with the statistical characteristics of the dataset. Finally, a DCNN with the Inception module and its variants is constructed. To improve the recognition accuracy, the depth of the network is increased. The experimental results indicate that the constructed DCNN increases the recognition accuracy by 2.7% compared to the TCNN when recognizing down in the images. The convergence rate of the proposed DCNN with the new weight initialization method is improved by 25.5% compared to the TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition

  13. Artificial neural networks for spatial distribution of fuel assemblies in reload of PWR reactors

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Edyene; Castro, Victor F.; Velásquez, Carlos E.; Pereira, Claubia, E-mail: claubia@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Programa de Pós-Graduação em Ciências e Técnicas Nucleares

    2017-07-01

    An artificial neural network methodology is being developed in order to find an optimum spatial distribution of the fuel assemblies in a nuclear reactor core during reload. The main bounding parameter of the modelling was the neutron multiplication factor, k_eff. The characteristics of the network are defined by the nuclear parameters: cycle, burnup, enrichment, fuel type, and average power peak of each element. These parameters were obtained with the ORNL nuclear code package SCALE6.0. As for the artificial neural network, feedforward Multilayer Perceptrons with various numbers of layers and neurons were constructed. Three algorithms were used and tested: LM (Levenberg-Marquardt), SCG (Scaled Conjugate Gradient) and BayR (Bayesian Regularization). The artificial neural networks were implemented using MATLAB 2015a. As preliminary results, the spatial distribution of the fuel assemblies in the core obtained using a neural network was slightly better than the standard core. (author)

  14. Artificial neural networks for spatial distribution of fuel assemblies in reload of PWR reactors

    International Nuclear Information System (INIS)

    Oliveira, Edyene; Castro, Victor F.; Velásquez, Carlos E.; Pereira, Claubia

    2017-01-01

    An artificial neural network methodology is being developed in order to find an optimum spatial distribution of the fuel assemblies in a nuclear reactor core during reload. The main bounding parameter of the modelling was the neutron multiplication factor, k_eff. The characteristics of the network are defined by the nuclear parameters: cycle, burnup, enrichment, fuel type, and average power peak of each element. These parameters were obtained with the ORNL nuclear code package SCALE6.0. As for the artificial neural network, feedforward Multilayer Perceptrons with various numbers of layers and neurons were constructed. Three algorithms were used and tested: LM (Levenberg-Marquardt), SCG (Scaled Conjugate Gradient) and BayR (Bayesian Regularization). The artificial neural networks were implemented using MATLAB 2015a. As preliminary results, the spatial distribution of the fuel assemblies in the core obtained using a neural network was slightly better than the standard core. (author)

  15. An introduction to neural network methods for differential equations

    CERN Document Server

    Yadav, Neha; Kumar, Manoj

    2015-01-01

    This book introduces a variety of neural network methods for solving differential equations arising in science and engineering. The emphasis is placed on a deep understanding of the neural network techniques, which has been presented in a mostly heuristic and intuitive manner. This approach will enable the reader to understand the working, efficiency and shortcomings of each neural network technique for solving differential equations. The objective of this book is to provide the reader with a sound understanding of the foundations of neural networks, and a comprehensive introduction to neural network methods for solving differential equations together with recent developments in the techniques and their applications. The book comprises four major sections. Section I consists of a brief overview of differential equations and the relevant physical problems arising in science and engineering. Section II illustrates the history of neural networks starting from their beginnings in the 1940s through to the renewed...

  16. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way human neurons process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are among the most important factors that influence neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights in order to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing their performance with that of classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we apply them to the classification of the four opposite binary XOR clusters and to the classification of continuous real data sets such as Iris and Ecoli.
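
    A minimal sketch of evolutionary weight optimization on the XOR task mentioned above, using a simple evolution strategy rather than the specific algorithms compared in the paper; all network sizes and mutation settings are arbitrary.

      # Evolve the weights of a tiny 2-2-1 network to solve XOR with a simple
      # (mu + lambda) evolution strategy.
      import numpy as np

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0, 1, 1, 0], dtype=float)

      def forward(w, X):
          W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
          h = np.tanh(X @ W1 + b1)
          return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # sigmoid output

      def fitness(w):
          return -np.mean((forward(w, X) - y) ** 2)          # higher is better

      rng = np.random.default_rng(0)
      pop = rng.normal(size=(30, 9))                         # 30 candidate weight vectors
      for gen in range(300):
          scores = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(scores)[-10:]]            # keep the 10 best
          children = parents[rng.integers(0, 10, 20)] + 0.3 * rng.normal(size=(20, 9))
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(w) for w in pop])]
      print("XOR outputs:", np.round(forward(best, X), 2))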

  17. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  18. Mobile Position Estimation using Artificial Neural Network in CDMA Cellular Systems

    Directory of Open Access Journals (Sweden)

    Omar Waleed Abdulwahhab

    2017-01-01

    Full Text Available The use of a neural network as a type of associative memory is introduced in this paper through the problem of mobile position estimation, in which a mobile station estimates its location from the signal strengths reaching it from several surrounding base stations; the neural network can be implemented inside the mobile. The traditional methods of time of arrival (TOA) and received signal strength (RSS) are used and compared with two analytical methods, the optimal positioning method and the average positioning method. The training data are ideal, since they can be obtained from the geometry of the CDMA cell topology. The TOA and RSS methods are tested on many cases along a nonlinear path that the MS can follow through the region. The results show that the neural network performs well compared with the two analytical methods, the average positioning method and the optimal positioning method.
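
    A small sketch of position estimation from received signal strength, with an MLP standing in for the associative-memory network described above; the base-station layout and the log-distance path-loss model generating the RSS values are assumptions for illustration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      base_stations = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]])  # metres

      def rss_from(pos):
          # log-distance path-loss model with a little shadowing noise [dBm]
          d = np.linalg.norm(base_stations - pos, axis=1)
          return -30 - 35 * np.log10(d + 1) + rng.normal(0, 2, len(base_stations))

      positions = rng.uniform(0, 1000, size=(3000, 2))        # random MS locations
      rss = np.array([rss_from(p) for p in positions])

      model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      model.fit(rss[:2500], positions[:2500])
      err = np.linalg.norm(model.predict(rss[2500:]) - positions[2500:], axis=1)
      print("mean positioning error [m]:", err.mean())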

  19. Classification of stroke disease using convolutional neural network

    Science.gov (United States)

    Marbun, J. T.; Seniman; Andayani, U.

    2018-03-01

    Stroke is a condition that occurs when the blood supply to the brain stops flowing because of a blockage or a broken blood vessel. Symptoms of stroke include reduced consciousness, disrupted vision and body paralysis. A general examination is carried out to obtain a picture of the affected part of the brain using a Computerized Tomography (CT) scan. The CT image has to be checked manually by a doctor, under proper lighting, to determine the type of stroke, which is why a method for classifying stroke from CT images automatically is needed. The method proposed in this research is the Convolutional Neural Network. A CT image of the brain is used as the input. The stages before classification are image processing steps (grayscaling, scaling and Contrast Limited Adaptive Histogram Equalization); the image is then classified with the Convolutional Neural Network. The results show that the method can be used as a tool to classify stroke disease and to distinguish the type of stroke from CT images.

  20. Chaos Synchronization Using Adaptive Dynamic Neural Network Controller with Variable Learning Rates

    Directory of Open Access Journals (Sweden)

    Chih-Hong Kao

    2011-01-01

    Full Text Available This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to approximate an ideal controller online. The DRBF network can create new hidden neurons online if the input data fall outside the coverage of the hidden layer, and prune insignificant hidden neurons online if they become inappropriate. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, the variable learning rates of the parameter adaptation laws are derived based on a discrete-type Lyapunov function to speed up the convergence rate of the tracking error. Finally, simulation results verify that two identical nonlinear chaotic gyros exhibiting chaotic behavior can be synchronized using the proposed ADNNC scheme.

  1. Hybrid digital signal processing and neural networks for automated diagnostics using NDE methods

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Yan, W.

    1993-11-01

    The primary purpose of the current research was to develop an integrated approach by combining information compression methods and artificial neural networks for the monitoring of plant components using nondestructive examination data. Specifically, data from eddy current inspection of heat exchanger tubing were utilized to evaluate this technology. The focus of the research was to develop and test various data compression methods (for eddy current data) and the performance of different neural network paradigms for defect classification and defect parameter estimation. Feedforward, fully-connected neural networks, that use the back-propagation algorithm for network training, were implemented for defect classification and defect parameter estimation using a modular network architecture. A large eddy current tube inspection database was acquired from the Metals and Ceramics Division of ORNL. These data were used to study the performance of artificial neural networks for defect type classification and for estimating defect parameters. A PC-based data preprocessing and display program was also developed as part of an expert system for data management and decision making. The results of the analysis showed that for effective (low-error) defect classification and estimation of parameters, it is necessary to identify proper feature vectors using different data representation methods. The integration of data compression and artificial neural networks for information processing was established as an effective technique for automation of diagnostics using nondestructive examination methods

  2. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data ...

  3. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    Science.gov (United States)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    In order to select effective samples from the large volume of PV power generation data accumulated over the years and to improve the accuracy of the PV power generation forecasting model, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on neural networks. Based on three different weather types (sunny, cloudy and rainy days), this research screens samples of historical data by the clustering analysis method. After screening, BP neural network prediction models are established using the screened data as training data. The six types of photovoltaic power generation prediction models before and after data screening are then compared. Results show that a prediction model combining clustering analysis with BP neural networks is an effective way to improve the precision of photovoltaic power generation forecasting.
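
    The combination of clustering and per-cluster BP networks can be sketched as follows in Python; the arrays `weather` and `pv_output` are hypothetical placeholders standing in for the historical data described above.

      # Illustrative sketch: cluster historical days into weather types, then
      # train one BP-style network per cluster and forecast with the matching model.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      weather = rng.random((365, 4))      # e.g. irradiance, temperature, cloud, humidity
      pv_output = rng.random((365, 24))   # hourly generation curves (placeholder data)

      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(weather)  # sunny/cloudy/rainy

      models = {}
      for k in range(3):
          idx = km.labels_ == k
          models[k] = MLPRegressor(hidden_layer_sizes=(30,), max_iter=3000,
                                   random_state=0).fit(weather[idx], pv_output[idx])

      # Forecast: assign tomorrow's forecast weather to a cluster and use its model.
      tomorrow = rng.random((1, 4))
      k = km.predict(tomorrow)[0]
      forecast = models[k].predict(tomorrow)[0]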

  4. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    Science.gov (United States)

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, requires accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than by scatter in the data. A method is proposed that improves the accuracy achieved during training and the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)
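
    The central point, that a known target function allows the network's approximation error to be measured directly rather than inferred from scattered data, can be illustrated with a short Python sketch; the target function and network size are arbitrary choices for the example.

      # Illustrative sketch: fit a known nonlinear function with a feed-forward
      # network and measure the error against the analytic target itself, so any
      # residual error is attributable to the network rather than to data scatter.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      f = lambda x: np.sin(3.0 * x) * np.exp(-0.3 * x)          # known target function
      x_train = np.linspace(0.0, 5.0, 200).reshape(-1, 1)
      net = MLPRegressor(hidden_layer_sizes=(25, 25), max_iter=5000,
                         random_state=0).fit(x_train, f(x_train).ravel())

      # Evaluate generalization on a denser grid never seen during training.
      x_test = np.linspace(0.0, 5.0, 1000).reshape(-1, 1)
      err = net.predict(x_test) - f(x_test).ravel()
      print("max |approximation error| =", np.abs(err).max())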

  6. Wind Resource Assessment and Forecast Planning with Neural Networks

    Directory of Open Access Journals (Sweden)

    Nicolus K. Rotich

    2014-06-01

    Full Text Available In this paper we built three types of artificial neural networks, namely feed-forward networks, Elman networks and cascade-forward networks, for forecasting wind speeds and directions. A similar network topology was used for all the forecast horizons, regardless of the model type. All the models were then trained with real data of wind speeds and directions collected over a period of two years in the municipality of Puumala, Finland. Up to the 70th percentile of the data was used for training, validation and testing, while the 71st-85th percentiles were presented to the trained models for validation. The model outputs were then compared to the last 15% of the original data by measuring the statistical errors between them. The feed-forward networks returned the lowest errors for wind speeds, cascade-forward networks gave the lowest errors for wind directions, and Elman networks returned the lowest errors when used for short-term forecasting.

  7. Representation of neutron noise data using neural networks

    International Nuclear Information System (INIS)

    Korsah, K.; Damiano, B.; Wood, R.T.

    1992-01-01

    This paper describes a neural network-based method of representing neutron noise spectra using a model developed at the Oak Ridge National Laboratory (ORNL). The backpropagation neural network learned to represent neutron noise data in terms of four descriptors, and the network response matched calculated values to within 3.5 percent. These preliminary results are encouraging, and further research is directed towards the application of neural networks in a diagnostics system for the identification of the causes of changes in structural spectral resonances. This work is part of our current investigation of advanced technologies such as expert systems and neural networks for neutron noise data reduction, analysis, and interpretation. The objective is to improve the state-of-the-art of noise analysis as a diagnostic tool for nuclear power plants and other mechanical systems

  8. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks.  Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concepts of meta-cognition equipped with a self-regulated learning have been known to be the best human learning strategy. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  9. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that takes this probabilistic nature into account. The advantage of the proposed implementation is that it is fully digital and can therefore be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.

  10. Direct adaptive control using feedforward neural networks

    OpenAIRE

    Cajueiro, Daniel Oliveira; Hemerly, Elder Moreira

    2003-01-01

    ABSTRACT: This paper proposes a new scheme for direct neural adaptive control that works efficiently employing only one neural network, used for simultaneously identifying and controlling the plant. The idea behind this structure of adaptive control is to compensate the control input obtained by a conventional feedback controller. The neural network training process is carried out by using two different techniques: backpropagation and extended Kalman filter algorithm. Additionally, the conver...

  11. Neural networks in signal processing

    International Nuclear Information System (INIS)

    Govil, R.

    2000-01-01

    Nuclear engineering has matured during the last decade. In research and design, control, supervision, maintenance and production, mathematical models and theories are used extensively. In all such applications signal processing is embedded in the process. Artificial Neural Networks (ANN), because of their nonlinear, adaptive nature, are well suited to such applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques, which can model an underlying process from example data. They can also adapt their model parameters to statistical changes over time. Algorithms in the framework of neural networks in signal processing have found new application potential in the field of nuclear engineering. This paper reviews the fundamentals of neural networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigenstructure-based processing and generalization structures. (orig.)

  12. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultra-violet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNN) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples for improving the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a directional reflectance spectrum based neural network to help us understand the problem from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.

  13. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  14. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    Science.gov (United States)

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization processes, and communication time delays is investigated. The information communication channel between the master chaotic neural network and the slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of the output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy-saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, allowable length of sampling intervals, and upper bound of network-induced delays are derived to ensure the quantized synchronization of the master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  15. Bio-inspired spiking neural network for nonlinear systems control.

    Science.gov (United States)

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNN) are the third generation of artificial neural networks. SNNs are the closest approximation to biological neural networks. They make use of temporal spike trains for inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is not satisfactory or is difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. The inherently binary and temporal way in which SNNs encode information facilitates their hardware implementation compared to analog neurons. Biological neural networks often require a lower number of neurons compared to other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to perform the control of nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform the controller training. The efficiency of the proposed network has been verified in two examples of dynamic system control. Simulations show that the proposed control based on SNN exhibits superior performance compared to other approaches based on neural networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    Full Text Available In this paper, the adaptive competitive learning (ACL) neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. This algorithm uses a new proposed criterion referred to as the ACL criterion. This criterion evaluates different clustering structures produced by the ACL neural network for an input data set. Then, it selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm in the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm, especially with data sets that are sparsely distributed.
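
    The basic winner-take-all update underlying competitive learning can be sketched in a few lines of Python; the data, learning rate and number of neurons are illustrative, and the ACL criterion for selecting the number of clusters is not reproduced here.

      # Illustrative winner-take-all competitive learning: each input moves the
      # weight vector of the single winning neuron toward itself.
      import numpy as np

      rng = np.random.default_rng(2)
      data = np.vstack([rng.normal(c, 0.1, size=(100, 2))
                        for c in ([0.0, 0.0], [1.0, 1.0], [0.0, 1.0])])
      weights = rng.random((3, 2))     # one weight vector per competitive neuron
      eta = 0.05

      for epoch in range(20):
          for x in rng.permutation(data):
              winner = np.argmin(np.linalg.norm(weights - x, axis=1))
              weights[winner] += eta * (x - weights[winner])

      print(weights.round(2))          # approximate cluster centres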

  17. A Novel Recurrent Neural Network for Manipulator Control With Improved Noise Tolerance.

    Science.gov (United States)

    Li, Shuai; Wang, Huanqing; Rafique, Muhammad Usman

    2017-04-12

    In this paper, we propose a novel recurrent neural network to resolve the redundancy of manipulators for efficient kinematic control in the presence of polynomial-type noise. Leveraging the high-order derivative properties of polynomial noise, a deliberately devised neural network is proposed to eliminate the impact of noise and recover accurate tracking of desired trajectories in the workspace. Rigorous analysis shows that the proposed neural law stabilizes the system dynamics and that the position tracking error converges to zero in the presence of noise. Extensive simulations verify the theoretical results. Numerical comparisons show that existing dual neural solutions lose stability when exposed to large constant noise or time-varying noise. In contrast, the proposed approach works well and has a low tracking error comparable to noise-free situations.

  18. Design of Jetty Piles Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Yongjei Lee

    2014-01-01

    Full Text Available To overcome the complexity of the jetty pile design process, artificial neural networks (ANN) are adopted. To generate the training samples for the ANN, finite element (FE) analysis was performed 50 times for 50 different design cases. The trained ANN was verified with another FE analysis case and then used as a structural analyzer. A multilayer back-propagation neural network (MBPNN) with two hidden layers was used. The MBPNN takes the lateral forces on the jetty structure and the type of piles as input, and the stress ratio of the piles as output. The results from the MBPNN agree well with those from FE analysis. Particularly for more complex models with hundreds of different design cases, the MBPNN could substitute for parametric studies with FE analysis, saving design time and cost.

  19. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  20. Using neural networks to infer the hydrodynamic yield of aspherical sources

    International Nuclear Information System (INIS)

    Moran, B.; Glenn, L.

    1993-01-01

    We distinguish two kinds of difficulties with yield determination from aspherical sources. The first kind, the spoofing difficulty, occurs when a fraction of the energy of the explosion is channeled in such a way that it is not detected by the CORRTEX cable. In this case, neither neural networks nor any expert system can be expected to accurately estimate the yield without detailed information about device emplacement within the canister. Numerical simulations, however, can provide an upper bound on the undetected fraction of the explosive energy. In the second instance, the interpretation difficulty, the data appear abnormal when analyzed using similar-explosion scaling and the assumption of a spherical front. The inferred yield varies with time and the confidence in the yield estimate decreases. It is this kind of problem we address in this paper and for which neural networks can make a contribution. We used a back-propagation neural network to infer the hydrodynamic yield of simulated aspherical sources. We trained the network using a subset of simulations from 3 different aspherical sources, with 3 different yields, and 3 satellite offset separations. The trained network was able to predict the yield to within 15% in all cases and to identify the correct type of aspherical source in most cases. The predictive capability of the network increased with a larger training set. The neural network approach can easily incorporate information from new calculations or experiments and is therefore flexible and easy to maintain. We describe the potential capabilities and limitations of using such networks for yield estimation

  1. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designed...... by using a priori knowledge of the mapping between protein building blocks and the secondary structure and by using weight sharing. Since none of the individual networks have more than 600 adjustable weights over-fitting is avoided. When ensembles of specialized experts are combined the performance...

  2. Pattern recognition of state variables by neural networks

    International Nuclear Information System (INIS)

    Faria, Eduardo Fernandes; Pereira, Claubia

    1996-01-01

    An artificial intelligence system based on artificial neural networks can be used to classify predefined events and emergency procedures. These systems are being used in different areas. In nuclear reactor safety, the goal is the classification of events whose data can be processed and recognized by neural networks. In this work we present a preliminary simple system that uses neural networks for pattern recognition: the recognition of the variables which define a situation. (author)

  3. Classification of behavior using unsupervised temporal neural networks

    International Nuclear Information System (INIS)

    Adair, K.L.

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering sequential pattern sequences as the sequences are generated. The applicability of the architecture is illustrated through a facility monitoring problem

  4. Pulsed neural networks consisting of single-flux-quantum spiking neurons

    International Nuclear Information System (INIS)

    Hirose, T.; Asai, T.; Amemiya, Y.

    2007-01-01

    An inhibitory pulsed neural network was developed for brain-like information processing, by using single-flux-quantum (SFQ) circuits. It consists of spiking neuron devices that are coupled to each other through all-to-all inhibitory connections. The network selects neural activity. The operation of the neural network was confirmed by computer simulation. SFQ neuron devices can imitate the operation of the inhibition phenomenon of neural networks

  5. The neural network approach to parton fitting

    International Nuclear Information System (INIS)

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-01-01

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits

  6. A study of reactor monitoring method with neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nabeshima, Kunihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The purpose of this study is to investigate a methodology for Nuclear Power Plant (NPP) monitoring with neural networks, which create plant models by learning past normal operation patterns. The concept of this method is to detect the symptoms of small anomalies by monitoring the deviations between the process signals measured from the actual plant and the corresponding output signals from the neural network model, which might not be equal if abnormal operational patterns are presented to the input of the neural network. An auto-associative network, whose outputs are the same as its inputs, can detect any kind of anomalous condition using normal operation data only. Monitoring tests of the feedforward neural network with adaptive learning were performed using a PWR plant simulator with which many kinds of anomaly conditions can be easily simulated. The adaptively trained feedforward network could follow the actual plant dynamics and the changes of plant condition, and then find most of the anomalies much earlier than the conventional alarm system during steady-state and transient operations. Off-line and on-line test results during one year of operation at an actual NPP (PWR) then showed that the neural network could detect several small anomalies which the operators and the conventional alarm system did not notice. Furthermore, sensitivity analysis suggests that the plant models built by the neural networks are appropriate. Finally, simulation results show that a recurrent neural network with feedback connections could successfully model the slow behavior of the reactor dynamics without adaptive learning. Therefore, the recurrent neural network with adaptive learning will be the best choice for an actual reactor monitoring system. (author)
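
    The auto-associative monitoring principle, training a network to reproduce its own inputs on normal data and flagging large reconstruction deviations, can be sketched as follows; the plant signals, bottleneck size and threshold are assumptions made for the example.

      # Illustrative sketch of auto-associative anomaly monitoring.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(3)
      normal = rng.normal(0.0, 1.0, size=(5000, 8))             # 8 correlated plant signals
      normal[:, 1] = 0.8 * normal[:, 0] + 0.2 * normal[:, 1]    # impose a correlation

      auto = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
      auto.fit(normal, normal)                                  # target equals input

      deviations = np.abs(auto.predict(normal) - normal).max(axis=1)
      threshold = np.quantile(deviations, 0.99)                 # tolerance from normal data

      def is_anomalous(measurement):
          dev = np.abs(auto.predict(measurement.reshape(1, -1)) - measurement).max()
          return dev > threshold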

  7. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
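
    A concrete way to see the connection between neural networks and conditional quantiles is the pinball (check) loss, whose minimiser is the tau-quantile of the conditional distribution. The Python sketch below trains a single-hidden-layer network by plain gradient descent on this loss; the data and architecture are illustrative and unrelated to the structures developed in the paper.

      # Illustrative conditional-quantile regression with the pinball loss.
      import numpy as np

      rng = np.random.default_rng(4)
      x = rng.uniform(-1.0, 1.0, size=(500, 1))
      y = np.sin(2.0 * x[:, 0]) + rng.normal(0.0, 0.2 + 0.2 * (x[:, 0] + 1.0))
      tau = 0.9                                  # quantile level to estimate

      W1, b1 = rng.normal(0.0, 0.5, (1, 10)), np.zeros(10)
      w2, b2 = rng.normal(0.0, 0.5, 10), 0.0
      lr = 0.01

      for step in range(5000):
          h = np.tanh(x @ W1 + b1)               # hidden layer
          q = h @ w2 + b2                        # predicted tau-quantile
          e = y - q
          grad_q = np.where(e > 0, -tau, 1.0 - tau) / len(y)   # d(pinball)/dq
          w2 -= lr * h.T @ grad_q
          b2 -= lr * grad_q.sum()
          back = np.outer(grad_q, w2) * (1.0 - h ** 2)
          W1 -= lr * x.T @ back
          b1 -= lr * back.sum(axis=0)

      q_final = np.tanh(x @ W1 + b1) @ w2 + b2
      print("fraction of targets below fitted quantile:", float((y <= q_final).mean()))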

  8. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, residuals of the integral equation or of the differential equations were employed as the objective functions for the training process of the neural network. This is different from conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems both methods gave satisfactory results, and the methods are considered promising for some kinds of problems. (author)
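
    The residual-based objective can be illustrated on a toy differential equation: instead of fitting known solution values, the parameters of a small trial network are adjusted so that the equation's residual vanishes at collocation points. The Python sketch below is a loose illustration of that idea, not the authors' method, for u'(x) = -u(x) with u(0) = 1.

      # Illustrative residual-based training for a simple ODE.
      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(0.0, 2.0, 40)

      def trial(params, x):
          # u(x) = 1 + x * N(x), so the initial condition u(0) = 1 holds exactly.
          W1, b1, w2 = params[:5], params[5:10], params[10:15]
          h = np.tanh(np.outer(x, W1) + b1)
          return 1.0 + x * (h @ w2)

      def residual_loss(params):
          u = trial(params, x)
          du = np.gradient(u, x)             # numerical derivative of the trial solution
          return np.mean((du + u) ** 2)      # squared residual of u' = -u

      start = np.random.default_rng(5).normal(0.0, 0.5, 15)
      best = minimize(residual_loss, start, method="BFGS").x
      print("max |u - exp(-x)| =", np.abs(trial(best, x) - np.exp(-x)).max())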

  9. Generating Seismograms with Deep Neural Networks

    Science.gov (United States)

    Krischer, L.; Fichtner, A.

    2017-12-01

    The recent surge of successful uses of deep neural networks in computer vision, speech recognition, and natural language processing, mainly enabled by the availability of fast GPUs and extremely large data sets, is starting to see many applications across all natural sciences. In seismology these are largely confined to classification and discrimination tasks. In this contribution we explore the use of deep neural networks for another class of problems: so called generative models. Generative modelling is a branch of statistics concerned with generating new observed data samples, usually by drawing from some underlying probability distribution. Samples with specific attributes can be generated by conditioning on input variables. In this work we condition on seismic source (mechanism and location) and receiver (location) parameters to generate multi-component seismograms. The deep neural networks are trained on synthetic data calculated with Instaseis (http://instaseis.net, van Driel et al. (2015)) and waveforms from the global ShakeMovie project (http://global.shakemovie.princeton.edu, Tromp et al. (2010)). The underlying radially symmetric or smoothly three dimensional Earth structures result in comparatively small waveform differences from similar events or at close receivers and the networks learn to interpolate between training data samples. Of particular importance is the chosen misfit functional. Generative adversarial networks (Goodfellow et al. (2014)) implement a system in which two networks compete: the generator network creates samples and the discriminator network distinguishes these from the true training examples. Both are trained in an adversarial fashion until the discriminator can no longer distinguish between generated and real samples. We show how this can be applied to seismograms and in particular how it compares to networks trained with more conventional misfit metrics. Last but not least we attempt to shed some light on the black-box nature of

  10. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  11. Implementation of neural networks on 'Connection Machine'

    International Nuclear Information System (INIS)

    Belmonte, Ghislain

    1990-12-01

    This report is a first approach to the notion of neural networks and their possible applications within the framework of the artificial intelligence activities of the Department of Applied Mathematics of the Limeil-Valenton Research Center. The first part is an introduction to the field of neural networks; the main neural network models are described in this section. The applications of neural networks in the field of classification have mainly been studied because they could more particularly help to solve some of the decision support problems dealt with by the C.E.A. As neural networks perform a large number of parallel operations, it was logical to use a parallel architecture computer: the Connection Machine (which uses 16384 processors and is located at E.T.C.A. Arcueil). The second part presents some generalities on parallelism and the Connection Machine, and two implementations of neural networks on the Connection Machine. The first of these implementations concerns one of the algorithms most widely used to train neural networks: the gradient back-propagation algorithm. The second, less common, one concerns a network of neurons intended mainly for the recognition of shapes: the Fukushima Neocognitron. The latter is studied by the C.E.A. of Bruyeres-le-Chatel in order to realize an embedded system (including hardened circuits) for the fast recognition of shapes [fr

  12. Experimental calibration of forward and inverse neural networks for rotary type magnetorheological damper

    DEFF Research Database (Denmark)

    Bhowmik, Subrata; Weber, Felix; Høgsberg, Jan Becker

    2013-01-01

    This paper presents a systematic design and training procedure for the feed-forward backpropagation neural network (NN) modeling of both forward and inverse behavior of a rotary magnetorheological (MR) damper based on experimental data. For the forward damper model, with damper force as output...

  13. A new criterion to global exponential periodicity for discrete-time BAM neural network with infinite delays

    International Nuclear Information System (INIS)

    Zhou Tiejun; Liu Yuehua; Li Xiaoping; Liu Yirong

    2009-01-01

    The discrete-time bidirectional associative memory (BAM) neural network with periodic coefficients and infinite delays is studied. Not by employing the continuation theorem of coincidence degree theory as in other works, but by constructing a suitable Liapunov function and using a fixed point theorem together with some analysis techniques, a sufficient criterion is obtained which ensures the existence and global exponential stability of a periodic solution for this type of discrete-time BAM neural network. The obtained result is less restrictive for BAM neural networks than previously known criteria. Furthermore, it can be applied to BAM neural networks whose signal transfer functions are neither bounded nor differentiable. In addition, an example and its numerical simulation are given to illustrate the effectiveness of the obtained result

  14. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Methods: Artificial neural networks (ANN) models, including general regression neural network (GRNN) and multi-layer ... N-hexane (HPLC grade) was purchased from Fisher Scientific. ..... Simultaneous Quantification of Seven Flavonoids in.

  15. Photon spectrometry utilizing neural networks

    International Nuclear Information System (INIS)

    Silveira, R.; Benevides, C.; Lima, F.; Vilela, E.

    2015-01-01

    Having in mind the time spent on the uneventful work of characterizing the radiation beams used in an ionizing radiation metrology laboratory, the Metrology Service of the Centro Regional de Ciencias Nucleares do Nordeste - CRCN-NE verified the applicability of artificial intelligence (artificial neural networks) to perform spectrometry in photon fields. For this, a multilayer neural network was developed as an application for the classification of energy patterns, associated with a thermoluminescent dosimetric system (TLD-700 and TLD-600). A set of dosimeters was initially exposed to various well-known mean energies, between 40 keV and 1.2 MeV, coinciding with the beams specified by the ISO 4037 standard, for a dose of 10 mSv in the quantity Hp(10), on a chest phantom (ISO slab phantom), with the purpose of generating a set of training data for the neural network. Subsequently, a new set of dosimeters irradiated at unknown energies was presented to the network with the purpose of testing the method. The methodology used in this work was suitable for application to the classification of energy beams, having correctly performed 100% of the classifications. (authors)

  16. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  17. Periodicity and stability for variable-time impulsive neural networks.

    Science.gov (United States)

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

    The paper considers a general neural networks model with variable-time impulses. It is shown that each solution of the system intersects with every discontinuous surface exactly once via several new well-proposed assumptions. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulse can be reduced to the corresponding neural network with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solution of variable-time impulsive neural networks, and to illustrate the same stability properties between variable-time impulsive neural networks and the fixed-time ones. The new criteria are established by applying Schaefer's fixed point theorem combined with the use of inequality technique. Finally, a numerical example is presented to show the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A neural network model for credit risk evaluation.

    Science.gov (United States)

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back-propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in the automatic processing of credit applications.
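
    The one- versus two-hidden-layer comparison can be sketched as below in Python; the feature matrix here is a random placeholder standing in for the encoded attributes of the Australian credit data, so the numbers it prints are not the paper's results.

      # Illustrative one- vs two-hidden-layer comparison for credit scoring.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(6)
      X = rng.random((690, 14))                     # 690 applications, 14 attributes
      y = (X[:, 0] + X[:, 3] > 1.0).astype(int)     # placeholder approve/reject labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      for layers in [(20,), (20, 10)]:              # one vs two hidden layers
          clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000,
                              random_state=0).fit(X_tr, y_tr)
          print(layers, "test accuracy:", round(clf.score(X_te, y_te), 3))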

  19. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

    In experimental particle physics, pattern recognition problems, specifically for neural network methods, occur frequently in track finding and feature extraction. Track finding is a combinatorial optimization problem: given a set of points in Euclidean space, one attempts to reconstruct particle trajectories subject to smoothness constraints. The basic ingredients of a neural network are the N binary neurons and the synaptic strengths connecting them. In our case the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates for each neuron the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments involved in the track reconstruction. An algorithm based on the Hopfield neural network has been developed and tested on track coordinates measured by a silicon microstrip tracking system
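
    The flavour of the Hopfield update can be conveyed with a small Python sketch: one binary neuron per candidate segment, asynchronous sigmoid updates driven by the local "upstream activity", and synaptic strengths that reward compatible segment pairs. The weight values below are a generic illustration, not the cost function used in the paper.

      # Illustrative Hopfield-style update for segment selection.
      import numpy as np

      def hopfield_run(W, bias, n_sweeps=20, temperature=0.5, seed=0):
          rng = np.random.default_rng(seed)
          s = rng.random(len(W))                           # neuron states in [0, 1]
          for _ in range(n_sweeps):
              for i in rng.permutation(len(W)):
                  activity = W[i] @ s + bias[i]            # "upstream activity"
                  s[i] = 1.0 / (1.0 + np.exp(-activity / temperature))  # sigmoid update
          return s > 0.5                                   # active segments form the track

      # Four candidate segments: pairs (0, 1) and (2, 3) reinforce each other,
      # all other pairings are penalised.
      W = np.array([[0, 2, -1, -1],
                    [2, 0, -1, -1],
                    [-1, -1, 0, 2],
                    [-1, -1, 2, 0]], dtype=float)
      print(hopfield_run(W, bias=np.full(4, 0.1)))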

  20. Convolutional Neural Networks for Human Activity Recognition Using Body-Worn Sensors

    Directory of Open Access Journals (Sweden)

    Fernando Moya Rueda

    2018-05-01

    Full Text Available Human activity recognition (HAR) is a classification task for recognizing human movements. Methods of HAR are of great interest as they have become tools for measuring occurrences and durations of human actions, which are the basis of smart assistive technologies and manual process analysis. Recently, deep neural networks have been deployed for HAR in the context of activities of daily living using multichannel time-series. These time-series are acquired from body-worn devices, which are composed of different types of sensors. The deep architectures process these measurements for finding basic and complex features in human corporal movements, and for classifying them into a set of human actions. As the devices are worn at different parts of the human body, we propose a novel deep neural network for HAR. This network handles sequence measurements from different body-worn devices separately. An evaluation of the architecture is performed on three datasets, the Opportunity, Pamap2, and an industrial dataset, outperforming the state of the art. In addition, different network configurations are also evaluated. We find that applying convolutions per sensor channel and per body-worn device improves the capabilities of convolutional neural networks (CNNs).

  1. Genetic optimization of neural network architecture

    International Nuclear Information System (INIS)

    Harp, S.A.; Samad, T.

    1994-03-01

    Neural networks are now a popular technology for a broad variety of application domains, including the electric utility industry. Yet, as the technology continues to gain increasing acceptance, it is also increasingly apparent that the power that neural networks provide is not an unconditional blessing. Considerable care must be exercised during application development if the full benefit of the technology is to be realized. At present, no fully general theory or methodology for neural network design is available, and application development is a trial-and-error process that is time-consuming and expertise-intensive. Each application demands appropriate selections of the network input space, the network structure, and values of learning algorithm parameters; these design choices are closely coupled in ways that largely remain a mystery. This EPRI-funded exploratory research project was initiated to take the key next step in this research program: the validation of the approach on a realistic problem. We focused on the problem of modeling the thermal performance of the TVA Sequoyah nuclear power plant (units 1 and 2)

  2. Nano-topography Enhances Communication in Neural Cells Networks

    KAUST Repository

    Onesto, V.

    2017-08-23

    Neural cells are the smallest building blocks of the central and peripheral nervous systems. Information in neural networks and cell-substrate interactions have heretofore been studied separately. Understanding whether surface nano-topography can direct nerve cell assembly into computationally efficient networks may provide new tools and criteria for tissue engineering and regenerative medicine. In this work, we used information theory approaches and functional multi calcium imaging (fMCI) techniques to examine how information flows in neural networks cultured on surfaces with controlled topography. We found that substrate roughness Sa affects network topology. In the low nanometer range, Sa = 0-30 nm, information increases with Sa. Moreover, we found that the energy density of a network of cells correlates with the topology of that network. This reinforces the view that information, energy and surface nano-topography are tightly inter-connected and should not be neglected when studying cell-cell interaction in neural tissue repair and regeneration.

  3. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  4. Polarity-specific high-level information propagation in neural networks.

    Science.gov (United States)

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  5. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  6. A Newton-type neural network learning algorithm

    International Nuclear Information System (INIS)

    Ivanov, V.V.; Puzynin, I.V.; Purehvdorzh, B.

    1993-01-01

    First- and second-order learning methods for feed-forward multilayer networks are considered. A Newton-type algorithm is proposed and compared with the common back-propagation algorithm. It is shown that the proposed algorithm provides better learning quality. Some recommendations for their usage are given. 11 refs.; 1 fig.; 1 tab

  7. Optical resonators and neural networks

    Science.gov (United States)

    Anderson, Dana Z.

    1986-08-01

    It may be possible to implement neural network models using continuous-field optical architectures. These devices offer the inherent parallelism of propagating waves and an information density in principle dictated by the wavelength of light and the quality of the bulk optical elements. Few components are needed to construct a relatively large equivalent network. Various associative memories based on optical resonators have been demonstrated in the literature; a ring resonator design is discussed in detail here. Information is stored in a holographic medium and recalled through a competitive process in the gain medium supplying energy to the ring resonator. The resonator memory is the first realized example of a neural network function implemented with this kind of architecture.

  8. NEURAL NETWORKS FOR STOCK MARKET OPTION PRICING

    Directory of Open Access Journals (Sweden)

    Sergey A. Sannikov

    2017-03-01

    Full Text Available Introduction: The use of neural networks for non-linear models helps to understand where the drawbacks of linear models, caused by their specification, reveal themselves. This paper attempts to find this out. The objective of the research is to determine the meaning of “option price calculation using neural networks”. Materials and Methods: We use two kinds of variables: endogenous variables (included in the neural network model) and variables affecting the model (permanent disturbances). Results: All data are divided into 3 sets: learning, affirming and testing. All selected variables are normalised from 0 to 1. Extreme values of income were cut off. Discussion and Conclusions: Using the 33-14-1 neural network with direct links we obtained two sets of forecasts. Optimal criteria for strategies in stock market option pricing were developed.

  9. Region stability analysis and tracking control of memristive recurrent neural network.

    Science.gov (United States)

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Firstly, it is shown by simulation that the memristive recurrent neural network has more compound dynamics than the traditional recurrent neural network. Then it is derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub neural network. At last, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adapting artificial neural networks to robust fault diagnosis schemes. It presents neural network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. A part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, as well as the importance of robustness. The book has tutorial value and can serve as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  11. Aeroelasticity of morphing wings using neural networks

    Science.gov (United States)

    Natarajan, Anand

    In this dissertation, neural networks are designed to effectively model static non-linear aeroelastic problems in adaptive structures and linear dynamic aeroelastic systems with time-varying stiffness. The use of adaptive materials in aircraft wings allows for the change of the contour or the configuration of a wing (morphing) in flight. The use of smart materials to accomplish these deformations can imply that the stiffness of a wing with a morphing contour changes as the contour changes. For a rapidly oscillating body in a fluid field, continuously adapting structural parameters may cause the wing to behave as a time-variant system. Even the internal spars/ribs of the aircraft wing, which define the wing stiffness, can be made adaptive, that is, their stiffness can be made to vary with time. The immediate effect on the structural dynamics of the wing is that the wing motion is governed by a differential equation with time-varying coefficients. The study of this concept of a time-varying torsional stiffness, made possible by the use of active materials and adaptive spars, in the dynamic aeroelastic behavior of an adaptable airfoil is performed here. Another type of aeroelastic problem of an adaptive structure investigated here is the shape control of an adaptive bump situated on the leading edge of an airfoil. Such a bump is useful in achieving flow separation control for lateral-directional maneuverability of the aircraft. Since actuators are used to create this bump on the wing surface, the energy required to do so needs to be minimized. The adverse pressure drag resulting from this bump needs to be controlled so that the loss in lift over the wing is minimal. The design of such a "spoiler bump" on the surface of the airfoil is an optimization problem of maximizing pressure drag due to flow separation while minimizing the loss in lift and the energy required to deform the bump. One neural network is trained using the CFD code FLUENT to

  12. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
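
    The idea of learning an integrator's one-step error and adding it back as a correction can be sketched as follows. The sketch assumes scikit-learn and uses explicit Euler on a harmonic oscillator rather than Runge-Kutta on a molecular dynamics model, so it only mirrors the approach described in the record above.

```python
# Sketch: learn an integrator's one-step error with a small network and use it
# as a correction. Explicit Euler on a harmonic oscillator stands in for
# Runge-Kutta on a molecular-dynamics model.
import numpy as np
from sklearn.neural_network import MLPRegressor

dt = 0.1

def euler_step(state):
    x, v = state
    return np.array([x + dt * v, v - dt * x])      # dx/dt = v, dv/dt = -x

def exact_step(state):
    x, v = state                                   # exact solution is a rotation by dt
    c, s = np.cos(dt), np.sin(dt)
    return np.array([c * x + s * v, -s * x + c * v])

# Training data: state -> error of the Euler step relative to the exact step.
rng = np.random.default_rng(3)
states = rng.uniform(-1.0, 1.0, size=(2000, 2))
errors = np.array([exact_step(s) - euler_step(s) for s in states])

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
net.fit(states, errors)

# Corrected integration: Euler step plus the learned error.
state = np.array([1.0, 0.0])
for _ in range(50):
    state = euler_step(state) + net.predict(state.reshape(1, -1))[0]
print("energy after 50 corrected steps:", state @ state)   # should stay near 1.0
```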

  13. Performance of Deep and Shallow Neural Networks, the Universal Approximation Theorem, Activity Cliffs, and QSAR.

    Science.gov (United States)

    Winkler, David A; Le, Tu C

    2017-01-01

    Neural networks have generated valuable Quantitative Structure-Activity/Property Relationships (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on Deep Learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition, and are now being applied to QSAR and QSAR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how DNN may ameliorate or remove troublesome "activity cliffs" in QSAR data sets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Patterning and predicting aquatic macroinvertebrate diversities using artificial neural network

    NARCIS (Netherlands)

    Park, Y.S.; Verdonschot, P.F.M.; Chon, T.S.; Lek, S.

    2003-01-01

    A counterpropagation neural network (CPN) was applied to predict species richness (SR) and Shannon diversity index (SH) of benthic macroinvertebrate communities using 34 environmental variables. The data were collected at 664 sites at 23 different water types such as springs, streams, rivers,

  15. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

    Full Text Available In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes go into saturation. The early creation of such hidden nodes may impair generalisation. Hence, an entropy approach is proposed to dampen the early creation of such nodes. The entropy learning also helps to increase the importance of relevant nodes while dampening the less important nodes. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
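
    One possible way to implement such an entropy term is to penalise low binary entropy of the sigmoid hidden activations, which keeps them away from 0 and 1 and so delays saturation. The PyTorch sketch below is illustrative; the weighting and architecture are assumptions, not the exact formulation of the record above.

```python
# Sketch: an entropy term added to the training loss to discourage hidden
# sigmoid units from saturating early. The penalty rewards activations with
# high binary entropy (values away from 0 and 1).
import torch

torch.manual_seed(0)
X = torch.randn(256, 8)
y = (X[:, :2].sum(dim=1, keepdim=True) > 0).float()

hidden = torch.nn.Linear(8, 16)
out = torch.nn.Linear(16, 1)
params = list(hidden.parameters()) + list(out.parameters())
opt = torch.optim.SGD(params, lr=0.1)
bce = torch.nn.BCEWithLogitsLoss()
lam = 0.01                                          # strength of the entropy regulariser

for epoch in range(200):
    a = torch.sigmoid(hidden(X))                    # hidden activations in (0, 1)
    logits = out(a)
    entropy = -(a * a.clamp_min(1e-7).log()
                + (1 - a) * (1 - a).clamp_min(1e-7).log()).mean()
    loss = bce(logits, y) - lam * entropy           # maximise entropy -> delay saturation
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final saturation (mean |a - 0.5|):",
      float((torch.sigmoid(hidden(X)) - 0.5).abs().mean()))
```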

  16. Coherence Resonance and Noise-Induced Synchronization in Hindmarsh-Rose Neural Network with Different Topologies

    International Nuclear Information System (INIS)

    Wei Duqu; Luo Xiaoshu

    2007-01-01

    In this paper, we investigate coherence resonance (CR) and noise-induced synchronization in the Hindmarsh-Rose (HR) neural network with three different types of topologies: regular, random, and small-world. It is found that additive noise can induce CR in the HR neural network with different topologies and that its coherence is optimized by a proper noise level. It is also found that as the coupling strength increases, the plateau in the measure-of-coherence curve broadens and the effect of network topology becomes more pronounced. Moreover, we find that increasing the probability p of the network topology leads to an enhancement of noise-induced synchronization in the HR neural network.

  17. Advanced Applications of Neural Networks and Artificial Intelligence: A Review

    OpenAIRE

    Koushal Kumar; Gour Sundar Mitra Thakur

    2012-01-01

    An Artificial Neural Network is a branch of Artificial Intelligence and has been accepted as a new computing technology in computer science fields. This paper reviews the field of Artificial Intelligence, focusing on recent applications that use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretation ability of data. Artificial Neural Networks is c...

  18. Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language

    Science.gov (United States)

    Tanadi, Theo

    2018-03-01

    Part-of-speech tagging (POS tagging) is an important part of natural language processing. Many methods have been used to do this task, including neural networks. This paper models a neural network that attempts to do POS tagging. A time series neural network is modelled to solve the problems that a basic neural network faces when attempting to do POS tagging. In order to enable the neural network to take text data as input, the text data are first clustered using Brown Clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tag, suffix, and affix of previous words are also fed to the neural network.

  19. Neutron spectrum unfolding using neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2004-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources, and reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input, and the corresponding spectrum was used as output, during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer, and 31 neurons in the output layer. After training, the network was tested with the Bonner sphere count rates produced by twelve neutron spectra. The network allows unfolding the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the test count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)
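
    A minimal sketch of the 7-56-31 unfolding network is given below, assuming scikit-learn; the response matrix and spectra are random placeholders standing in for the UTA4 matrix and the IAEA spectrum compilation.

```python
# Sketch of the 7-56-31 unfolding network: 7 Bonner-sphere count rates in,
# 31 energy-group values out. Response matrix and spectra are random
# placeholders, not the UTA4 matrix or the IAEA spectra.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
response = rng.uniform(0.0, 1.0, size=(7, 31))       # placeholder response matrix
spectra = rng.uniform(0.0, 1.0, size=(300, 31))      # placeholder re-binned spectra
spectra /= spectra.sum(axis=1, keepdims=True)        # normalise each spectrum
counts = spectra @ response.T                        # expected count rates (7 per spectrum)

unfolder = MLPRegressor(hidden_layer_sizes=(56,), max_iter=5000, random_state=0)
unfolder.fit(counts[:250], spectra[:250])            # count rates -> spectrum

pred = unfolder.predict(counts[250:])
print("mean absolute unfolding error:", float(np.abs(pred - spectra[250:]).mean()))
```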

  20. The principles of artificial neural network information processing

    International Nuclear Information System (INIS)

    Dai, Ru-Wei

    1993-01-01

    In this article, the basic structure of an artificial neuron is first introduced. In addition, principles of artificial neural network as well as several important artificial neural models such as Perceptron, Back propagation model, Hopfield net, and ART model are briefly discussed and analyzed. Finally, the application of artificial neural network for Chinese Character Recognition is also given. (author)

  1. The principles of artificial neural network information processing

    International Nuclear Information System (INIS)

    Dai, Ru-Wei

    1993-01-01

    In this article, the basic structure of an artificial neuron is first introduced. In addition, principles of artificial neural networks as well as several important artificial neural models such as the Perceptron, back propagation model, Hopfield net, and ART model are briefly discussed and analyzed. Finally, the application of artificial neural networks to Chinese character recognition is also given. (author)

  2. Neural network monitoring of resistive welding

    International Nuclear Information System (INIS)

    Quero, J.M.; Millan, R.L.; Franquelo, L.G.; Canas, J.

    1994-01-01

    Supervision of welding processes is one of the most important and complicated tasks in production lines. Artificial Neural Networks have been applied for modeling and control of physical processes. In this paper we propose the use of a neural network classifier for on-line non-destructive testing. This system has been developed and installed in a resistive welding station. Results confirm the validity of this novel approach. (Author) 6 refs

  3. Foreground removal from WMAP 5 yr temperature maps using an MLP neural network

    Science.gov (United States)

    Nørgaard-Nielsen, H. U.

    2010-09-01

    Aims: One of the main obstacles to extracting the cosmic microwave background (CMB) signal from observations in the mm/sub-mm range is the foreground contamination by emission from Galactic components: mainly synchrotron, free-free, and thermal dust emission. The statistical nature of the intrinsic CMB signal makes it essential to minimize the systematic errors in the CMB temperature determinations. Methods: The feasibility of using simple neural networks to extract the CMB signal from detailed simulated data has already been demonstrated. Here, simple neural networks are applied to the WMAP 5 yr temperature data without using any auxiliary data. Results: A simple multilayer perceptron neural network with two hidden layers provides temperature estimates over more than 75 per cent of the sky with random errors significantly below those previously extracted from these data. Also, the systematic errors, i.e. errors correlated with the Galactic foregrounds, are very small. Conclusions: With these results the neural network method is well prepared for dealing with the high-quality CMB data from the ESA Planck Surveyor satellite.

  4. Thermodynamic efficiency of learning a rule in neural networks

    Science.gov (United States)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

    Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.

  5. Detecting phase transitions in a neural network and its application to classification of syndromes in traditional Chinese medicine

    Energy Technology Data Exchange (ETDEWEB)

    Chen, J; Xi, G [Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, 100080, Beijing (China); Wang, W [Beijing University of Chinese Medicine, 100029, Beijing (China)], E-mail: guangcheng.xi@ia.ac.cn

    2008-02-15

    Detecting phase transitions in neural networks (determined or random) is a challenging subject, because phase transitions play a key role in human brain activity. In this paper, we numerically detect phase transitions in two types of random neural networks (RNN) under proper parameters.

  6. Improvement of the Hopfield Neural Network by MC-Adaptation Rule

    Science.gov (United States)

    Zhou, Zhen; Zhao, Hong

    2006-06-01

    We show that the performance of Hopfield neural networks, especially the quality of the recall and the capacity of effective storage, can be greatly improved by making use of a recently presented neural network design method, without altering the whole structure of the network. In the improved neural network, a memory pattern is recalled exactly from initial states having a given degree of similarity with the memory pattern, and thus one avoids applying the overlap criterion as carried out in the Hopfield neural networks.
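
    For reference, the baseline being improved upon is the standard Hopfield associative memory with Hebbian storage and asynchronous updates, sketched below; the MC-adaptation design method itself is not reproduced.

```python
# Baseline Hopfield associative memory: Hebbian storage, asynchronous recall.
import numpy as np

rng = np.random.default_rng(5)
n, n_patterns = 64, 4
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian weights, zero diagonal.
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)

def recall(state, sweeps=10):
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):                 # asynchronous updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Start from a corrupted version of pattern 0 (20% of the bits flipped).
probe = patterns[0].copy()
flip = rng.choice(n, size=n // 5, replace=False)
probe[flip] *= -1

print("overlap with stored pattern:",
      float(recall(probe) @ patterns[0]) / n)        # close to 1.0 on success
```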

  7. Integration of Neural Networks and Cellular Automata for Urban Planning

    Institute of Scientific and Technical Information of China (English)

    Anthony Gar-on Yeh; LI Xia

    2004-01-01

    This paper presents a new type of cellular automata (CA) model for the simulation of alternative land development using neural networks for urban planning. CA models can be regarded as a planning tool because they can generate alternative urban growth. Alternative development patterns can be formed by using different sets of parameter values in CA simulation. A critical issue is how to define parameter values for realistic and idealized simulation. This paper demonstrates that neural networks can simplify CA models but generate more plausible results. The simulation is based on a simple three-layer network with an output neuron to generate conversion probability. No transition rules are required for the simulation. Parameter values are automatically obtained from the training of network by using satellite remote sensing data. Original training data can be assessed and modified according to planning objectives. Alternative urban patterns can be easily formulated by using the modified training data sets rather than changing the model.
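
    The simulation loop can be sketched as a grid update in which a small network maps per-cell features (neighbourhood development and distance to the centre) to a conversion probability. In the sketch below the "network" is a single logistic unit with hypothetical, hand-set weights standing in for the trained three-layer model.

```python
# Minimal cellular-automata loop where a logistic unit (a stand-in for the
# trained three-layer network) yields a land-conversion probability per cell.
import numpy as np

rng = np.random.default_rng(6)
size = 50
developed = np.zeros((size, size), dtype=bool)
developed[size // 2, size // 2] = True               # seed development at the centre

yy, xx = np.mgrid[0:size, 0:size]
dist_to_centre = np.hypot(yy - size / 2, xx - size / 2) / size

def neighbour_fraction(grid):
    # Fraction of developed cells in the 3x3 neighbourhood (excluding the cell itself).
    padded = np.pad(grid, 1).astype(float)
    total = sum(padded[1 + dy:size + 1 + dy, 1 + dx:size + 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - grid
    return total / 8.0

w_neigh, w_dist, bias = 6.0, -4.0, -3.0              # hypothetical "network" weights
for step in range(20):
    features = w_neigh * neighbour_fraction(developed) + w_dist * dist_to_centre + bias
    p_convert = 1.0 / (1.0 + np.exp(-features))      # conversion probability per cell
    developed |= (rng.uniform(size=developed.shape) < p_convert) & ~developed

print("developed cells after 20 steps:", int(developed.sum()))
```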

  8. Neural networks in continuous optical media

    International Nuclear Information System (INIS)

    Anderson, D.Z.

    1987-01-01

    The authors' interest is to see to what extent neural models can be implemented using continuous optical elements. Thus these optical networks represent a continuous distribution of neuronlike processors rather than a discrete collection. Most neural models have three characteristic features: interconnections; adaptivity; and nonlinearity. In their optical representation the interconnections are implemented with linear one- and two-port optical elements such as lenses and holograms. Real-time holographic media allow these interconnections to become adaptive. The nonlinearity is achieved with gain, for example, from two-beam coupling in photorefractive media or a pumped dye medium. Using these basic optical elements one can in principle construct continuous representations of a number of neural network models. The authors demonstrated two devices based on continuous optical elements: an associative memory which recalls an entire object when addressed with a partial object and a tracking novelty filter which identifies time-dependent features in an optical scene. These devices demonstrate the potential of distributed optical elements to implement more formal models of neural networks

  9. Network traffic anomaly prediction using Artificial Neural Network

    Science.gov (United States)

    Ciptaningtyas, Hening Titi; Fatichah, Chastine; Sabila, Altea

    2017-03-01

    With the excessive increase in internet usage, malicious software (malware) has also increased significantly. Malware is software developed by hackers for illegal purposes, such as stealing data and identity, causing computer damage, or denying service to other users[1]. Malware that attacks computers or servers often triggers network traffic anomaly phenomena. Based on Sophos's report[2], Indonesia is the country at highest risk of malware attacks and it also has high network traffic anomaly. This research uses an Artificial Neural Network (ANN) to predict network traffic anomalies based on malware attacks in Indonesia recorded by Id-SIRTII/CC (Indonesia Security Incident Response Team on Internet Infrastructure/Coordination Center). The case study is the most prevalent malware attack (SQL injection), which occurred in three consecutive years: 2012, 2013, and 2014[4]. The data series is preprocessed first, and then the network traffic anomaly is predicted using an Artificial Neural Network with two weight update algorithms: Gradient Descent and Momentum. The prediction error is calculated using the Mean Squared Error (MSE)[7]. The experimental results show that the MSE for SQL Injection is 0.03856. Thus, this approach can be used to predict network traffic anomalies.
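
    The two weight-update rules mentioned above, plain gradient descent and gradient descent with momentum, are contrasted in the sketch below on a small synthetic regression with MSE as the criterion; the malware time series itself is not reproduced.

```python
# Plain gradient descent versus gradient descent with momentum on a small
# synthetic regression, evaluated with mean squared error (MSE).
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10)

def mse(w):
    r = X @ w - y
    return float(r @ r) / len(y)

grad = lambda w: (2.0 / len(y)) * X.T @ (X @ w - y)

# Plain gradient descent.
w_gd = np.zeros(10)
for _ in range(50):
    w_gd -= 0.05 * grad(w_gd)

# Gradient descent with momentum.
w_m, velocity, beta = np.zeros(10), np.zeros(10), 0.9
for _ in range(50):
    velocity = beta * velocity - 0.05 * grad(w_m)
    w_m += velocity

print("MSE, gradient descent:", mse(w_gd))
print("MSE, momentum        :", mse(w_m))
```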

  10. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1 -norm minimization problems.

  11. Adaptive training of feedforward neural networks by Kalman filtering

    International Nuclear Information System (INIS)

    Ciftcioglu, Oe.

    1995-02-01

    Adaptive training of feedforward neural networks by Kalman filtering is described. Adaptive training is particularly important for estimation by neural networks in real-time environments, where the trained network is used for system estimation while it is further trained by means of the information provided by the ongoing operation. As a result, the neural network adapts itself to a changing environment and performs its mission without recourse to re-training. The performance of the training method is demonstrated by means of actual process signals from a nuclear power plant. (orig.)
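
    A minimal sketch of Kalman-filter training is given below: the weights are treated as the filter state and each sample updates them through the usual gain/covariance recursion. The Jacobian is taken by finite differences to keep the example short; the network size and noise settings are assumptions, not those of the record above.

```python
# Sketch of training a small feed-forward network with an (extended) Kalman
# filter: weights are the filter state, each sample applies one update.
import numpy as np

rng = np.random.default_rng(8)

n_hidden = 8
n_params = 2 * n_hidden + n_hidden + 1            # w1, b1, w2, b2 for a 1-in 1-out net

def forward(params, x):
    w1 = params[:n_hidden]
    b1 = params[n_hidden:2 * n_hidden]
    w2 = params[2 * n_hidden:3 * n_hidden]
    b2 = params[-1]
    return float(np.tanh(w1 * x + b1) @ w2 + b2)

def jacobian(params, x, eps=1e-5):
    # Derivative of the network output w.r.t. each weight, by central differences.
    jac = np.zeros(n_params)
    for i in range(n_params):
        dp = np.zeros(n_params)
        dp[i] = eps
        jac[i] = (forward(params + dp, x) - forward(params - dp, x)) / (2 * eps)
    return jac

params = 0.1 * rng.normal(size=n_params)
P = np.eye(n_params) * 100.0                      # weight covariance
R, Q = 0.01, 1e-6 * np.eye(n_params)              # measurement and process noise

xs = rng.uniform(-3, 3, size=400)
ys = np.sin(xs)                                    # target function

for x, y in zip(xs, ys):                           # one Kalman update per sample
    h = jacobian(params, x)
    s = h @ P @ h + R
    k = (P @ h) / s
    params += k * (y - forward(params, x))
    P = P - np.outer(k, h @ P) + Q

test_x = np.linspace(-3, 3, 7)
print([round(forward(params, x) - np.sin(x), 3) for x in test_x])
```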

  12. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  13. Neural Networks for Modeling and Control of Particle Accelerators

    Science.gov (United States)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  14. Issues in the use of neural networks in information retrieval

    CERN Document Server

    Iatan, Iuliana F

    2017-01-01

    This book highlights the ability of neural networks (NNs) to be excellent pattern matchers and their importance in information retrieval (IR), which is based on index term matching. The book defines a new NN-based method for learning image similarity and describes how to use fuzzy Gaussian neural networks to predict personality. It introduces the fuzzy Clifford Gaussian network, and two concurrent neural models: (1) concurrent fuzzy nonlinear perceptron modules, and (2) concurrent fuzzy Gaussian neural network modules. Furthermore, it explains the design of a new model of fuzzy nonlinear perceptron based on alpha level sets and describes a recurrent fuzzy neural network model with a learning algorithm based on the improved particle swarm optimization method.

  15. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In the present study, an artificial neural network method has been applied to predict the stability of berm breakwaters. Four neural network models are constructed based on the parameters which influence the stability of breakwater. Training...

  16. A Neural Network Model for Prediction of Sound Quality

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An artificial neural network structure has been specified, implemented and optimized for the purpose of predicting the perceived sound quality for normal-hearing and hearing-impaired subjects. The network was implemented by means of commercially available software and optimized to predict results...... obtained in subjective sound quality rating experiments based on input data from an auditory model. Various types of input data and data representations from the auditory model were used as input data for the chosen network structure, which was a three-layer perceptron. This network was trained by means...... the physical signal parameters and the subjectively perceived sound quality. No simple objective-subjective relationship was evident from this analysis....

  17. Neural networks. A new analytical tool, applicable also in nuclear technology

    International Nuclear Information System (INIS)

    Stritar, A.

    1992-01-01

    The basic concept of neural networks and the back propagation learning algorithm are described. The behaviour of a typical neural network is demonstrated on a simple graphical case. A short literature survey about the application of neural networks in nuclear science and engineering is made. The application of the neural network to probability density calculation is shown. (author)

  18. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    Full Text Available A short-term load forecasting model based on quantum Elman neural networks was constructed in this paper. Quantum computation and the Elman feedback mechanism were integrated into quantum Elman neural networks. Quantum computation can effectively improve the approximation capability and the information processing ability of the neural networks. Quantum Elman neural networks have not only the feedforward connection but also the feedback connection. The feedback connection between the hidden nodes and the context nodes belongs to state feedback in the internal system, which forms a specific dynamic memory performance. Phase space reconstruction theory is the theoretical basis of constructing the forecasting model. The training samples are formed by means of the K-nearest neighbor approach. Through an example simulation, the testing results show that the model based on quantum Elman neural networks is better than the model based on the quantum feedforward neural network, the model based on the conventional Elman neural network, and the model based on the conventional feedforward neural network. Thus, the proposed model can effectively improve the prediction accuracy. The research in this paper lays a theoretical foundation for the practical engineering application of the short-term load forecasting model based on quantum Elman neural networks.

  19. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    Science.gov (United States)

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called avoidant neural network).

  20. Application of artificial neural network in radiographic diagnosis

    International Nuclear Information System (INIS)

    Piraino, D.; Amartur, S.; Richmond, B.; Schils, J.; Belhobek, G.

    1990-01-01

    This paper reports on an artificial neural network trained to rate the likelihood of different bone neoplasms when given a standard description of a radiograph. A three-layer back propagation algorithm was trained with descriptions of examples of bone neoplasms obtained from standard radiographic textbooks. Fifteen bone neoplasms obtained from clinical material were used as unknowns to test the trained artificial neural network. The artificial neural network correctly rated the pathologic diagnosis as the most likely diagnosis in 10 of the 15 unknown cases

  1. Collaborative Recurrent Neural Networks forDynamic Recommender Systems

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:366-381, 2016 (ACML 2016). Collaborative Recurrent Neural Networks for Dynamic Recommender Systems. Abstract (fragment): "... an unprecedented scale. Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating ..." Keywords: Recurrent Neural Network, Recommender System, Neural Language Model, Collaborative Filtering.

  2. Control of beam halo-chaos using neural network self-adaptation method

    International Nuclear Information System (INIS)

    Fang Jinqing; Huang Guoxian; Luo Xiaoshu

    2004-11-01

    Taking advantage of neural network control methods for nonlinear complex systems, control of beam halo-chaos in the periodic focusing channels (network) of high intensity accelerators is studied by a feed-forward back-propagating neural network self-adaptation method. The envelope radius of the high-intensity proton beam is brought to the matched beam radius by suitably selecting the control structure of the neural network and the linear feedback coefficient, and by adjusting the weight coefficients of the neural network. The beam halo-chaos is obviously suppressed and the shaking amplitude is greatly reduced after the neural network self-adaptation control is applied. (authors)

  3. Use of neural networks to monitor power plant components

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Tsoukalas, L.H.

    1992-01-01

    A new methodology is presented for nondestructive evaluation (NDE) of check valve performance and degradation. Artificial neural network (ANN) technology is utilized for processing frequency domain signatures of check valves operating in a nuclear power plant (NPP). Acoustic signatures obtained from different locations on a check valve are transformed from the time domain to the frequency domain and then used as input to a pretrained neural network. The neural network has been trained with data sets corresponding to normal operation, therefore establishing a basis for check valve satisfactory performance. Results obtained from the proposed methodology demonstrate the ability of neural networks to perform accurate and quick evaluations of check valve performance

  4. A learning algorithm for oscillatory cellular neural networks.

    Science.gov (United States)

    Ho, C Y.; Kurokawa, H

    1999-07-01

    We present a cellular type oscillatory neural network for temporal segregation of stationary input patterns. The model comprises an array of locally connected neural oscillators with connections limited to a 4-connected neighborhood. The architecture is reminiscent of the well-known cellular neural network that consists of local connection for feature extraction. By means of a novel learning rule and an initialization scheme, global synchronization can be accomplished without incurring any erroneous synchrony among uncorrelated objects. Each oscillator comprises two mutually coupled neurons, and neurons share a piecewise-linear activation function characteristic. The dynamics of traditional oscillatory models is simplified by using only one plastic synapse, and the overall complexity for hardware implementation is reduced. Based on the connectedness of image segments, it is shown that global synchronization and desynchronization can be achieved by means of locally connected synapses, and this opens up a tremendous application potential for the proposed architecture. Furthermore, by using special grouping synapses it is demonstrated that temporal segregation of overlapping gray-level and color segments can also be achieved. Finally, simulation results show that the learning rule proposed circumvents the problem of component mismatches, and hence facilitates a large-scale integration.

  5. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer

    OpenAIRE

    Zagoruyko, Sergey; Komodakis, Nikos

    2016-01-01

    Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcin...

  6. Classification of non-performing loans portfolio using Multilayer Perceptron artificial neural networks

    Directory of Open Access Journals (Sweden)

    Flávio Clésio Silva de Souza

    2014-06-01

    Full Text Available The purpose of the present research is to apply a Multilayer Perceptron (MLP) neural network technique to create classification models from a portfolio of Non-Performing Loans (NPLs) in order to classify this type of credit derivative. These credit derivatives are characterized as loans that were not paid and are already more than 90 days overdue. Since, for legislative reasons, these titles are written off as losses, Credit Rights Investment Funds (FDIC) purchase these debts and recover the credits. Using the Multilayer Perceptron (MLP) architecture of Artificial Neural Networks (ANN), classification models regarding the subsequent recovery of these debts were created. To evaluate the performance of the models, classification evaluation metrics for neural networks with different architectures were presented. The results of the classifications were satisfactory, given that the classification models were successful within the presented economic cost structure.
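
    A minimal sketch of such an MLP classifier with standard classification metrics is given below, assuming scikit-learn; the features and labels are synthetic placeholders, not the portfolio used in the record above.

```python
# Sketch of an MLP classifier for predicting whether a non-performing loan is
# recovered, evaluated with standard classification metrics on placeholder data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(9)
X = rng.normal(size=(1000, 6))                     # e.g. debt size, age of debt, ... (placeholder)
recovered = (X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=1000)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, recovered, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```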

  7. Sejarah, Penerapan, dan Analisis Resiko dari Neural Network: Sebuah Tinjauan Pustaka

    Directory of Open Access Journals (Sweden)

    Cristina Cristina

    2018-05-01

    Full Text Available A neural network is a form of artificial intelligence that has the ability to learn, grow, and adapt in a dynamic environment. Neural network research began around 1890, when the great American psychologist William James published the book "Principles of Psychology". James was the first to publish a number of facts related to the structure and function of the brain. The history of neural network development is divided into four epochs: the Camelot era, the Depression, the Renaissance, and the Neoconnectionism era. Neural networks used today are not 100 percent accurate; however, they are still used because of their better performance compared with alternative computing models. Applications of neural networks include pattern recognition, signal analysis, robotics, and expert systems. Risk analysis of a neural network is first performed using hazard and operability studies (HAZOPS). Determining the neural network requirements well helps in determining its contribution to system hazards and in validating the control or mitigation of any hazards. After the first stage (HAZOPS) and the second stage (determining the requirements), the next stage is design. Neural networks undergo repeated design-train-test development. At the design stage, the hazard analysis should consider the design aspects of the development, which include the neural network architecture, size, intended use, and so on. This is continued at the implementation stage, the test phase, the installation and inspection phase, the operation phase, and it ends at the maintenance stage.

  8. Using neural networks in software repositories

    Science.gov (United States)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  9. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed. And the difficulty in building and training the neural network is also reduced. Simulation results of Logistic map and Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network

  10. Neural Networks for Modeling and Control of Particle Accelerators

    CERN Document Server

    Edelen, A.L.; Chase, B.E.; Edstrom, D.; Milton, S.V.; Stabile, P.

    2016-01-01

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  11. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Deep Learning Neural Networks in Cybersecurity - Managing Malware with AI

    OpenAIRE

    Rayle, Keith

    2017-01-01

    There’s a lot of talk about the benefits of deep learning (neural networks) and how it’s the new electricity that will power us into the future. Medical diagnosis, computer vision and speech recognition are all examples of use-cases where neural networks are being applied in our everyday business environment. This begs the question…what are the uses of neural-network applications for cyber security? How does the AI process work when applying neural networks to detect malicious software bombar...

  13. Application of an artificial neural network model for diagnosing type 2 diabetes mellitus and determining the relative importance of risk factors.

    Science.gov (United States)

    Borzouei, Shiva; Soltanian, Ali Reza

    2018-01-01

    To identify the most important demographic risk factors for a diagnosis of type 2 diabetes mellitus (T2DM) using a neural network model. This study was conducted on a sample of 234 individuals, in whom T2DM was diagnosed using hemoglobin A1c levels. A multilayer perceptron artificial neural network was used to identify demographic risk factors for T2DM and their importance. The DeLong method was used to compare the models by fitting in sequential steps. Variables found to be significant at a level of p … In the neural network modeling, only waist circumference (100.0%), age (78.5%), BMI (78.2%), hypertension (69.4%), stress (54.2%), smoking (49.3%), and a family history of T2DM (37.2%) were identified as predictors of the diagnosis of T2DM. In this study, waist circumference and age were the most important predictors of T2DM. Due to the sensitivity, specificity, and accuracy of the final model, it is suggested that these variables should be used for T2DM risk assessment in screening tests.
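
    One way to rank risk-factor contributions with an MLP is permutation importance, sketched below with scikit-learn on synthetic placeholder data; this is an assumption for illustration and not necessarily the importance measure used in the record above.

```python
# Sketch: rank risk factors for a binary diagnosis with an MLP plus permutation
# importance. Data and the "true" rule are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(10)
names = ["waist", "age", "bmi", "hypertension", "stress", "smoking", "family_history"]
X = rng.normal(size=(600, len(names)))
y = (1.5 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * rng.normal(size=600)) > 0   # placeholder rule

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>15}: {score:.3f}")
```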

  14. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding to improve fiber quality. To demonstrate the applicability of this approach, this research was carried out using the evaluation data of 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 genotypes of cotton evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated by means of a weighted average of the scores (1 to 5) determined for each HVI characteristic evaluated, according to industry standards. The artificial neural networks presented a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index, so that when using fiber length together with the short fiber index, fiber maturity, and micronaire index, the artificial neural networks presented better results than when using only fiber length and the previous associations. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better classification results. From the results obtained in the present study, it was verified that artificial neural networks have great potential to be used in the different stages of a cotton breeding program, aiming at improving the fiber quality of future cultivars.

  15. Noise Analysis studies with neural networks

    International Nuclear Information System (INIS)

    Seker, S.; Ciftcioglu, O.

    1996-01-01

    Noise analysis studies with neural networks are presented. Stochastic signals at the input of the network are used to obtain an algorithmic multivariate stochastic signal model. To this end, lattice modeling of a stochastic signal is performed to obtain backward residual noise sources which are uncorrelated among themselves. These are applied, together with an additional input, to the network to obtain an algorithmic model which is used for signal detection for early failure in plant monitoring. The additional input provides the information the network needs to minimize the difference between the signal and the network's one-step-ahead prediction. A stochastic algorithm is used for training, where the errors reflecting the measurement error during the training are also modelled so that fast and consistent convergence of the network's weights is obtained. The lattice structure coupled to the neural network is investigated with measured signals from an actual power plant. (authors)

  16. Noise suppress or express exponential growth for hybrid Hopfield neural networks

    International Nuclear Information System (INIS)

    Zhu Song; Shen Yi; Chen Guici

    2010-01-01

    In this Letter, we will show that noise can make a given hybrid Hopfield neural network whose solution may grow exponentially become a new stochastic hybrid Hopfield neural network whose solution grows at most polynomially. On the other hand, we will also show that noise can make a given hybrid Hopfield neural network whose solution grows at most polynomially become a new stochastic hybrid Hopfield neural network whose solution grows exponentially. In other words, we reveal that noise can suppress or express exponential growth for hybrid Hopfield neural networks.

  17. Stability of Neutral Fractional Neural Networks with Delay

    Institute of Scientific and Technical Information of China (English)

    LI Yan; JIANG Wei; HU Bei-bei

    2016-01-01

    This paper studies stability of neutral fractional neural networks with delay. By introducing the definition of norm and using the uniform stability, the sufficient condition for uniform stability of neutral fractional neural networks with delay is obtained.

  18. A novel recurrent neural network with finite-time convergence for linear programming.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.

  19. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models-also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.
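
    The two generic system forms involved are summarised below; the actual change of coordinates and the dimension of the embedding follow the quasi-monomial construction cited in the record and are not reproduced here.

```latex
% Generic forms involved in the embedding (notation illustrative): a
% continuous-time recurrent neural network (first equation), with activation
% \sigma equal to the hyperbolic tangent or the logistic sigmoid, is rewritten
% in quasi-monomial form and embedded into a higher-dimensional Lotka-Volterra
% system (second equation) whose first variables reproduce the behaviour of
% the original network.
\begin{align}
  \dot{x}_i &= -x_i + \sum_{j} w_{ij}\,\sigma(x_j) + I_i, \\
  \dot{y}_k &= y_k \Bigl( \lambda_k + \sum_{l} A_{kl}\, y_l \Bigr).
\end{align}
```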

  20. Image Encryption and Chaotic Cellular Neural Network

    Science.gov (United States)

    Peng, Jun; Zhang, Du

    Machine learning has been playing an increasingly important role in information security and assurance. One of the areas of new applications is to design cryptographic systems by using chaotic neural network due to the fact that chaotic systems have several appealing features for information security applications. In this chapter, we describe a novel image encryption algorithm that is based on a chaotic cellular neural network. We start by giving an introduction to the concept of image encryption and its main technologies, and an overview of the chaotic cellular neural network. We then discuss the proposed image encryption algorithm in details, which is followed by a number of security analyses (key space analysis, sensitivity analysis, information entropy analysis and statistical analysis). The comparison with the most recently reported chaos-based image encryption algorithms indicates that the algorithm proposed in this chapter has a better security performance. Finally, we conclude the chapter with possible future work and application prospects of the chaotic cellular neural network in other information assurance and security areas.

  1. Neural networks to predict exosphere temperature corrections

    Science.gov (United States)

    Choury, Anna; Bruinsma, Sean; Schaeffer, Philippe

    2013-10-01

    Precise orbit prediction requires a forecast of the atmospheric drag force with a high degree of accuracy. Artificial neural networks are universal approximators derived from artificial intelligence and are widely used for prediction. This paper presents an artificial neural network method for predicting thermosphere density by forecasting exospheric temperature, which will be used by the semiempirical thermosphere Drag Temperature Model (DTM) currently under development. Artificial neural networks have been shown to be effective and robust forecasting models for temperature prediction. The proposed model can be used for any mission from which temperature can be deduced accurately, i.e., it does not require specific training. Although the primary goal of the study was to create a model for 1 day ahead forecasts, the proposed architecture has been generalized to 2 and 3 day predictions as well. The impact of artificial neural network predictions has been quantified for the low-orbiting satellite Gravity Field and Steady-State Ocean Circulation Explorer in 2011, and an order of magnitude smaller orbit errors were found when compared with orbits propagated using the thermosphere model DTM2009.

  2. Integrating neural network technology and noise analysis

    International Nuclear Information System (INIS)

    Uhrig, R.E.; Oak Ridge National Lab., TN

    1995-01-01

    The integrated use of neural network and noise analysis technologies offers advantages not available through the use of either technology alone. The application of neural network technology to noise analysis offers an opportunity to expand the scope of problems where noise analysis is useful, and there are unique ways in which the integration of these technologies can be used productively. The two-sensor technique, in which the responses of two sensors to an unknown driving source are related, is used to demonstrate such integration. The relationship between the power spectral densities (PSDs) of accelerometer signals is derived theoretically using noise analysis to demonstrate its uniqueness. This relationship is modeled from experimental data using a neural network when the system is working properly, and the actual PSD of one sensor is compared with the PSD of that sensor predicted by the neural network using the PSD of the other sensor as an input. A significant deviation between the actual and predicted PSDs indicates that the system is changing (i.e., failing). Experiments carried out on check valves and bearings illustrate the usefulness of the methodology developed. (Author)

  3. Open quantum generalisation of Hopfield neural networks

    Science.gov (United States)

    Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.

    2018-03-01

    We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.

  4. Stock market index prediction using neural networks

    Science.gov (United States)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data for the Wall Street Journal's Dow Jones Industrial Index were used as a benchmark in our experiments, in which Radial Basis Function based neural networks were designed to model the index over the period from January 1988 to December 1992. Notable success was achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for predicting stock market indices.
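
    For readers unfamiliar with the approach, the sketch below fits a generic Gaussian radial-basis-function model to a one-dimensional series by least squares; the synthetic data, number of centres and kernel width are illustrative assumptions, not the configuration used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 200)
        y = np.sin(6 * np.pi * t) + 0.1 * rng.standard_normal(t.size)   # synthetic "index" series

        centres = np.linspace(0, 1, 15)   # RBF centres spread over the input range
        width = 0.05                      # common Gaussian kernel width

        def design_matrix(x):
            # Each column is one Gaussian basis function evaluated at the inputs.
            return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

        Phi = design_matrix(t)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output-layer weights by least squares

        y_hat = design_matrix(t) @ w
        print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))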

  5. Ideomotor feedback control in a recurrent neural network.

    Science.gov (United States)

    Galtier, Mathieu

    2015-06-01

    The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should predict its incoming stimuli reliably; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show promising performance of the approach, which can be turned into a local and more "biologically plausible" algorithm.

  6. Discrete-time BAM neural networks with variable delays

    Science.gov (United States)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion places no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.
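
    The delay-dependent criterion itself is not reproduced in this record, but the LMI machinery it relies on can be illustrated with a minimal sketch: certifying stability of a delay-free discrete-time linear system x(k+1) = A x(k) by searching for a Lyapunov matrix P with cvxpy. The system matrix below is an arbitrary example, not one taken from the paper.

        import cvxpy as cp
        import numpy as np

        A = np.array([[0.5, 0.1],
                      [0.0, 0.8]])        # hypothetical stable system matrix
        n = A.shape[0]

        P = cp.Variable((n, n), symmetric=True)
        constraints = [P >> np.eye(n),                          # P positive definite
                       A.T @ P @ A - P << -1e-6 * np.eye(n)]    # Lyapunov decrease condition
        problem = cp.Problem(cp.Minimize(0), constraints)
        problem.solve()

        print("LMI feasibility:", problem.status)   # 'optimal' means a Lyapunov matrix exists
        print("P =", P.value)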

  7. Discrete-time BAM neural networks with variable delays

    International Nuclear Information System (INIS)

    Liu Xinge; Tang Meilan; Martin, Ralph; Liu Xinbi

    2007-01-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion places no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  8. Designing neural networks that process mean values of random variables

    International Nuclear Information System (INIS)

    Barber, Michael J.; Clark, John W.

    2014-01-01

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence

  9. Designing neural networks that process mean values of random variables

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Michael J. [AIT Austrian Institute of Technology, Innovation Systems Department, 1220 Vienna (Austria); Clark, John W. [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Centro de Ciências Matemáticas, Universidade de Madeira, 9000-390 Funchal (Portugal)

    2014-06-13

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence.

  10. Moving image compression and generalization capability of constructive neural networks

    Science.gov (United States)

    Ma, Liying; Khorasani, Khashayar

    2001-03-01

    To date, numerous techniques have been proposed to compress digital images to ease their storage and transmission over communication channels. Recently, a number of image compression algorithms using neural networks (NNs) have been developed. In particular, several constructive feed-forward neural networks (FNNs) have been proposed for image compression, and promising results have been reported. At the SPIE AeroSense 2000 conference, we proposed using a constructive One-Hidden-Layer Feedforward Neural Network (OHL-FNN) for compressing digital images. In this paper, we first investigate the generalization capability of the proposed OHL-FNN in the presence of additive noise during network training and/or generalization. Extensive experimental results for different scenarios are presented. It is revealed that the constructive OHL-FNN is not as robust to additive noise in the input image as expected. Next, the constructive OHL-FNN is applied to moving images (video sequences). The first, or another specified, frame in a moving image sequence is used to train the network. The remaining moving images that follow are then generalized/compressed by this trained network. Three types of correlation-like criteria measuring the similarity of any two images are introduced. The relationship between the generalization capability of the constructed net and the similarity of images is investigated in some detail. It is shown that the constructive OHL-FNN is promising even for changing images such as those extracted from a football game.

  11. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable and converges to the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm gives good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
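
    As context for the Kalman-filtering step (not the paper's neural-network parameter estimator), the sketch below runs a standard scalar Kalman filter for an AR(1) signal observed in white noise; the AR coefficient and noise variances are made-up values.

        import numpy as np

        rng = np.random.default_rng(1)
        a, q, r = 0.95, 0.1, 0.5       # AR(1) coefficient, process and measurement noise variances
        n = 300

        # Simulate the clean AR(1) "speech" signal and its noisy observation.
        x = np.zeros(n)
        for k in range(1, n):
            x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
        y = x + np.sqrt(r) * rng.standard_normal(n)

        # Scalar Kalman filter: predict with the AR model, correct with the noisy sample.
        x_hat, p = 0.0, 1.0
        est = np.zeros(n)
        for k in range(n):
            x_pred, p_pred = a * x_hat, a * a * p + q
            gain = p_pred / (p_pred + r)
            x_hat = x_pred + gain * (y[k] - x_pred)
            p = (1.0 - gain) * p_pred
            est[k] = x_hat

        print("noisy MSE:", np.mean((y - x) ** 2), " filtered MSE:", np.mean((est - x) ** 2))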

  12. Multiple simultaneous fault diagnosis via hierarchical and single artificial neural networks

    International Nuclear Information System (INIS)

    Eslamloueyan, R.; Shahrokhi, M.; Bozorgmehri, R.

    2003-01-01

    Process fault diagnosis involves interpreting the current status of the plant given sensor readings and process knowledge. There has been considerable work done in this area, with a variety of approaches proposed for process fault diagnosis. Neural networks have been used to solve process fault diagnosis problems in chemical processes, as they are well suited to recognizing multi-dimensional nonlinear patterns. In this work, the use of Hierarchical Artificial Neural Networks in diagnosing multiple faults of a chemical process is discussed and compared with that of Single Artificial Neural Networks. The lower efficiency of Hierarchical Artificial Neural Networks, in comparison to Single Artificial Neural Networks, in process fault diagnosis is elaborated and analyzed. Also, the concept of a multi-level selection switch is presented and developed to improve the performance of hierarchical artificial neural networks. Simulation results indicate that application of the multi-level selection switch increases the performance of the hierarchical artificial neural networks considerably

  13. A fuzzy Hopfield neural network for medical image segmentation

    International Nuclear Information System (INIS)

    Lin, J.S.; Cheng, K.S.; Mao, C.W.

    1996-01-01

    In this paper, an unsupervised parallel segmentation approach using a fuzzy Hopfield neural network (FHNN) is proposed. The main purpose is to embed fuzzy clustering into neural networks so that on-line learning and parallel implementation for medical image segmentation are feasible. The idea is to cast the clustering problem as a minimization problem in which the criterion for optimum segmentation is chosen as the minimization of the Euclidean distance between samples and class centers. In order to generate feasible results, a fuzzy c-means clustering strategy is included in the Hopfield neural network to eliminate the need for finding weighting factors in the energy function, which is formulated on a basic concept commonly used in pattern classification, the within-class scatter matrix principle. The suggested fuzzy c-means clustering strategy has also been proven to be convergent and to allow the network to learn more effectively than the conventional Hopfield neural network. The fuzzy Hopfield neural network based on the within-class scatter matrix shows promising results in comparison with the hard c-means method
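
    The fuzzy c-means update at the heart of this kind of approach can be sketched independently of the Hopfield formulation; below is a generic alternating update of memberships and class centers on synthetic one-dimensional intensities (the number of classes and the fuzzifier are illustrative choices, not the paper's settings).

        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic grey-level samples drawn from two intensity populations.
        x = np.concatenate([rng.normal(60, 5, 200), rng.normal(140, 8, 200)])

        c, m = 2, 2.0                       # number of classes, fuzzifier
        centers = np.array([50.0, 150.0])   # initial class centers

        for _ in range(50):
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # distances to each center
            u = 1.0 / (d ** (2 / (m - 1)))                      # unnormalized memberships
            u /= u.sum(axis=1, keepdims=True)
            centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)

        print("estimated class centers:", centers)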

  14. Identification of generalized state transfer matrix using neural networks

    International Nuclear Information System (INIS)

    Zhu Changchun

    2001-01-01

    Research is presented on the identification of the generalized state transfer matrix of a linear time-invariant (LTI) system using neural networks based on the Levenberg-Marquardt (LM) algorithm. First, the generalized state transfer matrix is defined. The relationship between the identification of the state transfer matrix of structural dynamics and the identification of the weight matrix of a neural network is established in theory. A single-layer neural network, a powerful tool with parallel distributed processing ability and the capacity for adaptation and learning, is adopted to obtain the structural parameters. The constraint condition on the weight matrix of the neural network is deduced so that the learning and training of the designed network can be more effective. The identified neural network can be used to simulate the structural response excited by any other signal. To address its further application to practical problems, some noise (5% and 10%) is assumed to be present in the response measurements. Results from computer simulation studies show that this method is valid and feasible

  15. A one-layer recurrent neural network for constrained nonsmooth optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-10-01

    This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.

  16. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others:MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  17. Artificial neural network based approach to transmission lines protection

    International Nuclear Information System (INIS)

    Joorabian, M.

    1999-05-01

    The aim of this paper is to present an accurate fault detection technique for high speed distance protection using artificial neural networks. The feed-forward multi-layer neural network with supervised learning and the common training rule of error back-propagation is chosen for this study. Information available locally at the relay point is passed to a neural network so that an assessment of the fault location can be made. In practice, however, there is a large amount of information available, and a feature extraction process is required to reduce the dimensionality of the pattern vectors while retaining the important information that distinguishes the fault point. The choice of features is critical to the performance of the neural network's learning and operation. A significant feature of this paper is that an artificial neural network has been designed and tested to enhance the precision of the adaptive capabilities for distance protection

  18. Operational experiences with automated acoustic burst classification by neural networks

    International Nuclear Information System (INIS)

    Olma, B.; Ding, Y.; Enders, R.

    1996-01-01

    Monitoring of Loose Parts Monitoring System sensors for signal bursts associated with metallic impacts of loose parts has proved to be a useful methodology for on-line assessment of the mechanical integrity of components in the primary circuit of nuclear power plants. With the availability of neural networks, new powerful possibilities for the classification and diagnosis of burst signals can be realized for acoustic monitoring with the on-line system RAMSES. In order to look for relevant burst signals, an automated classification is needed; that is, acoustic signature analysis and assessment have to be performed automatically on-line. A back-propagation neural network based on five pre-calculated signal parameter values has been set up for the identification of different signal types. During a three-month monitoring program of medium-operated check valves, burst signals were measured and classified separately according to their cause. The successful results of the three measurement campaigns with automated burst type classification are presented. (author)

  19. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  20. Quantum Entanglement in Neural Network States

    Directory of Open Access Journals (Sweden)

    Dong-Ling Deng

    2017-05-01

    Machine learning, one of today’s most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling, while strongly deviating from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears a Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our
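
    To make the object under study concrete, the sketch below evaluates the unnormalized amplitude that a small RBM ansatz assigns to a spin configuration, psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ji s_i); the complex weights here are random placeholders, not trained network states from the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        n_visible, n_hidden = 6, 4

        # Random complex RBM parameters (placeholders, not trained values).
        a = 0.1 * (rng.standard_normal(n_visible) + 1j * rng.standard_normal(n_visible))
        b = 0.1 * (rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden))
        W = 0.1 * (rng.standard_normal((n_hidden, n_visible))
                   + 1j * rng.standard_normal((n_hidden, n_visible)))

        def rbm_amplitude(s):
            # s is a vector of spins in {-1, +1}; hidden units are traced out analytically.
            theta = b + W @ s
            return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

        s = rng.choice([-1, 1], size=n_visible)
        print("spin configuration:", s, " amplitude:", rbm_amplitude(s))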

  1. Stability of a neural network model with small-world connections

    International Nuclear Information System (INIS)

    Li Chunguang; Chen Guanrong

    2003-01-01

    Small-world networks are highly clustered networks with small distances among the nodes. Many biological neural networks exhibit this kind of connectivity. Most existing small-world network models place no special weightings on the connections. However, such simply connected models cannot characterize biological neural networks, in which synaptic connections carry different weights. In this paper, we present a neural network model with weighted small-world connections and further investigate the stability of this model
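
    A weighted small-world coupling matrix of the kind discussed here can be generated for experimentation from a Watts-Strogatz graph; the sketch below uses networkx and draws random synaptic weights (the network size, rewiring probability and weight distribution are arbitrary choices).

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(4)
        n, k, p = 100, 4, 0.1       # nodes, nearest neighbours per node, rewiring probability

        g = nx.connected_watts_strogatz_graph(n, k, p, seed=4)

        # Assign a random synaptic weight to every connection to get a weighted coupling matrix.
        W = np.zeros((n, n))
        for i, j in g.edges():
            w = rng.normal(0.0, 1.0)
            W[i, j] = W[j, i] = w

        print("average clustering:", nx.average_clustering(g))
        print("average shortest path length:", nx.average_shortest_path_length(g))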

  2. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Gallego D, E.; Lorente F, A.; Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E.

    2011-01-01

    With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence technology have been used. Approaches based on genetic algorithms and artificial neural networks (ANNs) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite their advantages, ANNs still have some drawbacks, mainly in the design process of the network, e.g., the optimum selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized to address these issues. In this work, several ANN topologies were trained and tested, using both conventional ANNs and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures has been carried out. (Author)

  3. Anomaly detection in an automated safeguards system using neural networks

    International Nuclear Information System (INIS)

    Whiteson, R.; Howell, J.A.

    1992-01-01

    An automated safeguards system must be able to detect an anomalous event, identify the nature of the event, and recommend a corrective action. Neural networks represent a new way of thinking about basic computational mechanisms for intelligent information processing. In this paper, we discuss the issues involved in applying a neural network model to the first step of this process: anomaly detection in materials accounting systems. We extend our previous model to a 3-tank problem and compare different neural network architectures and algorithms. We evaluate the computational difficulties in training neural networks and explore how certain design principles affect the problems. The issues involved in building a neural network architecture include how the information flows, how the network is trained, how the neurons in a network are connected, how the neurons process information, and how the connections between neurons are modified. Our approach is based on the demonstrated ability of neural networks to model complex, nonlinear, real-time processes. By modeling the normal behavior of the processes, we can predict how a system should be behaving and, therefore, detect when an abnormality occurs

  4. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.

  5. A study on neural network representation of reactor power control procedures

    International Nuclear Information System (INIS)

    Moon, Byung Soo; Park, J. C.; Kim, Y. T.; Yang, S. U.; Lee, H. C.; Hwang, I. A.; Hwang, H. S.

    1997-12-01

    A neural algorithm to carry out the curve readings and arithmetic computations necessary for reactor power control is described in this report. The curve readings are for functions of the form z=f(x,y) and require fairly good interpolations. One of the functions is the total power defect as a function of reactor power and boron concentration. The second is the new position of the control rod as a function of the current rod position and the increment of total power defect needed for the required power change. The curves involving the xenon effect are also considered separately. We represented these curves first by cubic spline interpolations and then converted them to fuzzy systems so that they perform the identical interpolations as the splines. The resulting fuzzy systems are then converted to artificial neural networks similar to the RBF-type neural network. These networks still carry the same O(h^4) accuracy as the cubic spline interpolating functions. Also included is a description of an important result on how to find the spline interpolation coefficients without solving the matrix equation when the function is a polynomial of the form f(t)=t^m. This result provides a systematic way of representing continuous functions by fuzzy systems, and hence by artificial neural networks, without any training. (author). 10 refs., 2 tabs., 10 figs
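
    The cubic-spline interpolation step that the fuzzy/neural representation must reproduce can be prototyped directly with scipy; the curve and table points below are placeholders standing in for the power-defect curves mentioned in the report.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Placeholder curve standing in for, e.g., total power defect vs. reactor power.
        power = np.linspace(0.0, 100.0, 11)         # percent power, coarse table points
        defect = 2000.0 * np.tanh(power / 60.0)     # made-up smooth dependence (pcm)

        spline = CubicSpline(power, defect)         # cubic spline through the table points

        fine = np.linspace(0.0, 100.0, 1001)
        err = np.max(np.abs(spline(fine) - 2000.0 * np.tanh(fine / 60.0)))
        print("interpolated value at 37% power:", spline(37.0))
        print("max interpolation error on a fine grid:", err)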

  6. Introduction to spiking neural networks: Information processing, learning and applications.

    Science.gov (United States)

    Ponulak, Filip; Kasinski, Andrzej

    2011-01-01

    The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
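
    As a minimal concrete example of the spiking models surveyed here, the sketch below simulates a leaky integrate-and-fire neuron driven by a constant current; the membrane parameters are generic textbook-style values rather than ones taken from the paper.

        import numpy as np

        # Leaky integrate-and-fire parameters (generic illustrative values).
        tau_m, v_rest, v_reset, v_thresh = 20.0, -65.0, -70.0, -50.0   # ms, mV, mV, mV
        r_m, i_ext = 10.0, 2.0                                         # Mohm, nA
        dt, t_max = 0.1, 200.0                                         # ms

        v, spike_times = v_rest, []
        for step in range(int(t_max / dt)):
            # Euler step of tau_m * dv/dt = -(v - v_rest) + R*I.
            v += dt / tau_m * (-(v - v_rest) + r_m * i_ext)
            if v >= v_thresh:
                spike_times.append(step * dt)
                v = v_reset

        rate = 1000.0 * len(spike_times) / t_max
        print(f"{len(spike_times)} spikes in {t_max:.0f} ms (about {rate:.1f} Hz)")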

  7. Neural-Network Object-Recognition Program

    Science.gov (United States)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  8. Character recognition from trajectory by recurrent spiking neural networks.

    Science.gov (United States)

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks is still a problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed, in which varying time ranges of input streams are used in different recurrent layers. This is able to improve the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from University of Edinburgh. The results show that our method can achieve a higher average recognition accuracy than existing methods.

  9. Efficient Cancer Detection Using Multiple Neural Networks.

    Science.gov (United States)

    Shell, John; Gregory, William D

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and in histopathology generally. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNNs). A data set consisting of 180 malignant and 180 benign breast tissue data files, collected in an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, was utilized as neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of the six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results, showing a consistent sensitivity of 100% and a specificity of 100%. This implementation relied solely on the statistical variation between the benign and malignant impedance data and an intricate neural network configuration. This device and BNN implementation provide a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requiring tissue specimen expertise. It has the potential to provide clinical management personnel with a fast, non-invasive, accurate assessment of biopsied or sectioned excised tissue in various clinical settings.

  10. Potential applications of neural networks to nuclear power plants

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1991-01-01

    Application of neural networks to the operation of nuclear power plants is being investigated under a US Department of Energy sponsored program at the University of Tennessee. Projects include the feasibility of using neural networks for the following tasks: diagnosing specific abnormal conditions, detection of the change of mode of operation, signal validation, monitoring of check valves, plant-wide monitoring using autoassociative neural networks, modeling of the plant thermodynamics, emulation of core reload calculations, monitoring of plant parameters, and analysis of plant vibrations. Each of these projects and its status are described briefly in this article. The objective of each of these projects is to enhance the safety and performance of nuclear plants through the use of neural networks

  11. Neural network models: from biology to many - body phenomenology

    International Nuclear Information System (INIS)

    Clark, J.W.

    1993-01-01

    This article reviews the current surge of research on the practical side of neural networks and their utility in memory storage/recall, pattern recognition, and classification. The initial attraction of neural networks as dynamical and statistical systems is also discussed. From the viewpoint of the many-body theorist, the neurons may be thought of as particles, and the weighted connections between the units as the interactions between these particles. Finally, it is argued that the impressive capabilities of artificial neural networks in pattern recognition and classification may be exploited to solve data management problems in experimental physics, and that neural networks may lead to radically new theoretical descriptions of physical problems. (A.B.)

  12. Neural network for solving convex quadratic bilevel programming problems.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Chromatic characterization of a three-channel colorimeter using back-propagation neural networks

    Science.gov (United States)

    Pardo, P. J.; Pérez, A. L.; Suero, M. I.

    2004-09-01

    This work describes a method for the chromatic characterization of a three-channel colorimeter of recent design and construction dedicated to color vision research. The colorimeter consists of two fixed monochromators and a third monochromator interchangeable with a cathode ray tube or any other external light source. Back-propagation neural networks were used for the chromatic characterization to establish the relationship between each monochromator's input parameters and the tristimulus values of each chromatic stimulus generated. The results showed the effectiveness of this type of neural-network-based system for the chromatic characterization of the stimuli produced by any monochromator.

  14. Neural network application to diesel generator diagnostics

    International Nuclear Information System (INIS)

    Logan, K.P.

    1990-01-01

    Diagnostic problems typically begin with the observation of some system behavior which is recognized as a deviation from the expected. The fundamental underlying process is one of pattern matching of observed symptoms to a set of compiled symptoms belonging to a fault-symptom mapping. Pattern recognition is often relied upon for initial fault detection and diagnosis. Parallel distributed processing (PDP) models employing neural network paradigms are known to be good pattern recognition devices. This paper describes the application of neural network processing techniques to the malfunction diagnosis of subsystems within a typical diesel generator configuration. Neural network models employing backpropagation learning were developed to correctly recognize fault conditions from the input diagnostic symptom patterns pertaining to various engine subsystems. The resulting network models proved to be excellent pattern recognizers for malfunction examples within the training set. The motivation for employing network models in lieu of a rule-based expert system, however, is related to the network's potential for generalizing to malfunctions outside of the training set, as in the case of noisy or partial symptom patterns

  15. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  16. Topology and computational performance of attractor neural networks

    International Nuclear Information System (INIS)

    McGraw, Patrick N.; Menzinger, Michael

    2003-01-01

    To explore the relation between network structure and function, we studied the computational performance of Hopfield-type attractor neural nets with regular lattice, random, small-world, and scale-free topologies. The random configuration is the most efficient for storage and retrieval of patterns by the network as a whole. However, in the scale-free case retrieval errors are not distributed uniformly among the nodes. The portion of a pattern encoded by the subset of highly connected nodes is more robust and efficiently recognized than the rest of the pattern. The scale-free network thus achieves a very strong partial recognition. The implications of these findings for brain function and social dynamics are suggestive
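
    The storage and retrieval task benchmarked in this study follows the standard Hopfield recipe; the sketch below uses Hebbian weights and asynchronous updates on a fully connected (non-sparse) network, with arbitrary pattern counts and sizes, as a baseline for the topology comparisons described above.

        import numpy as np

        rng = np.random.default_rng(5)
        n, n_patterns = 100, 5

        patterns = rng.choice([-1, 1], size=(n_patterns, n))

        # Hebbian weight matrix (fully connected, zero diagonal).
        W = (patterns.T @ patterns) / n
        np.fill_diagonal(W, 0.0)

        def recall(probe, steps=5):
            s = probe.copy()
            for _ in range(steps):
                for i in rng.permutation(n):      # asynchronous updates
                    s[i] = 1 if W[i] @ s >= 0 else -1
            return s

        # Corrupt 10% of the bits of one stored pattern and try to retrieve it.
        probe = patterns[0].copy()
        flip = rng.choice(n, size=10, replace=False)
        probe[flip] *= -1
        print("overlap after recall:", (recall(probe) @ patterns[0]) / n)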

  17. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  18. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  19. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  20. The use of skewness, kurtosis and neural networks for determining corrosion mechanism from electrochemical noise data

    International Nuclear Information System (INIS)

    Reid, S.; Bell, G.E.C.; Edgemon, G.L.

    1998-01-01

    This paper describes the work undertaken to de-skill the complex procedure of determining corrosion mechanisms from electrochemical noise data. The use of neural networks is discussed and applied to electrochemical noise data files generated in real time, with the purpose of determining characteristics particular to individual types of corrosion mechanism. The electrochemical noise signals can have a wide dynamic range, and various methods of raw data pre-processing prior to neural network analysis were investigated. Normalized data were ultimately used as input to the final network analysis. Various network schemes were designed, trained and tested. Factors such as the network learning schedule and network design were considered before a final network was implemented to achieve a solution. Neural networks trained using general and localized corrosion data from various material-environment systems were used to analyze data from simulated nuclear waste tank environments, with favorable results
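
    The statistical features named in the title are easy to prototype; the sketch below computes skewness and kurtosis of a synthetic current-noise record with scipy and normalizes the trace before any neural network analysis (the signal itself is invented for illustration).

        import numpy as np
        from scipy.stats import skew, kurtosis

        rng = np.random.default_rng(6)

        # Synthetic electrochemical current noise: baseline noise plus occasional transients,
        # loosely mimicking localized-corrosion events.
        t = np.arange(5000)
        signal = 1e-9 * rng.standard_normal(t.size)
        events = rng.choice(t.size - 50, size=8, replace=False)
        for e in events:
            signal[e:e + 50] += 5e-9 * np.exp(-np.arange(50) / 10.0)

        features = {
            "skewness": skew(signal),
            "kurtosis": kurtosis(signal),   # excess kurtosis (0 for a Gaussian)
            "std": signal.std(),
        }
        print(features)

        # Normalize the raw trace (zero mean, unit variance) before feeding it to a network,
        # in the spirit of the pre-processing discussed in the abstract.
        normalized = (signal - signal.mean()) / signal.std()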

  1. Neural network for recognizing signal-shape of nuclear detector output

    International Nuclear Information System (INIS)

    Mardiyanto M Panitra

    2006-01-01

    The use of artificial intelligence techniques in engineering has become familiar, especially in the field of pattern recognition. With these techniques, routine tasks, whether simple or complicated, can be carried out with the help of a digital camera and a personal computer. One complicated task that cannot be solved easily is how to separate two kinds of nuclear radiation that are mixed in the same field. The separation of the two kinds of radiation is very important for radiation dosimetry purposes. To this end, we have carried out preliminary research in applying a neural network technique to recognizing C and T letters in right, left, up, and down positions. We arranged a three-layer neural network, i.e., an input layer (9 neurons, with or without a bias neuron), a hidden layer (11 neurons), and an output layer (1 neuron). In this preliminary study, the use of a bias neuron gave a faster learning process compared with the network without the bias neuron. The neural network could work successfully in determining the letters without any mistake. (author)

  2. Neural attractor network for application in visual field data classification

    International Nuclear Information System (INIS)

    Fink, Wolfgang

    2004-01-01

    The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination that may act as a 'counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success of over 80%. Some of the main advantages of the Hopfield-attractor-network-based approach over feed-forward type neural networks are: (1) the network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) assignment of an auto-diagnosis confidence level is possible by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data presented here furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, for example and in particular with respect to non-ophthalmic clinics or communities where perimetric expertise is not readily available.

  3. The DSFPN, a new neural network for optical character recognition.

    Science.gov (United States)

    Morns, L P; Dlay, S S

    1999-01-01

    A new type of neural network for recognition tasks is presented in this paper. The network, called the dynamic supervised forward-propagation network (DSFPN), is based on the forward-only version of the counterpropagation network (CPN). The DSFPN trains using a supervised algorithm and can grow dynamically during training, allowing subclasses in the training data to be learnt in an unsupervised manner. It is shown to train in times comparable to the CPN while giving better classification accuracies than the popular backpropagation network. Both Fourier descriptors and wavelet descriptors are used for image preprocessing, and the wavelets are shown to give far better performance.

  4. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    An original application of the radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The massive parallelism of neural networks is exploited for this purpose. The applicability of the RBF neural network for state estimation is investigated by testing it on an IEEE 14-bus ...

  5. Wind power prediction based on genetic neural network

    Science.gov (United States)

    Zhang, Suhan

    2017-04-01

    The scale of grid-connected wind farms keeps increasing. To ensure the stability of power system operation, make reasonable scheduling schemes and improve the competitiveness of wind farms in the electricity generation market, it is important to forecast short-term wind power accurately. To reduce the influence of the nonlinear relationship between the disturbance factors and the wind power, an improved prediction model based on a genetic algorithm and a neural network is established. To overcome the shortcomings of the BP neural network, namely long training times and a tendency to fall into local minima, and to improve the accuracy of the neural network, a genetic algorithm is adopted to optimize the parameters and topology of the neural network. Historical data are used as input to predict short-term wind power. The effectiveness and feasibility of the method are verified using actual data from a wind farm as an example.
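
    A stripped-down version of the genetic step described above is sketched below: the weights of a toy single-hidden-layer network are evolved to fit a synthetic wind-power curve. The population size, mutation scale and data are illustrative assumptions, and real GA-BP hybrids usually also evolve the topology and finish with backpropagation fine-tuning.

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic training data: "wind speed" -> "power output" with noise.
        x = np.linspace(0, 1, 64)[:, None]
        y = np.tanh(6 * (x - 0.5)) * 0.5 + 0.5 + 0.02 * rng.standard_normal((64, 1))

        n_hidden = 8
        n_params = n_hidden + n_hidden + n_hidden + 1   # W1 (1xh), b1 (h), W2 (hx1), b2 (1)

        def predict(theta, x):
            w1 = theta[:n_hidden].reshape(1, n_hidden)
            b1 = theta[n_hidden:2 * n_hidden]
            w2 = theta[2 * n_hidden:3 * n_hidden].reshape(n_hidden, 1)
            b2 = theta[-1]
            return np.tanh(x @ w1 + b1) @ w2 + b2

        def fitness(theta):
            return -np.mean((predict(theta, x) - y) ** 2)   # higher is better

        # Simple genetic loop: truncation selection plus Gaussian mutation of the weights.
        pop = rng.standard_normal((40, n_params))
        for gen in range(200):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-10:]]               # keep the best 10
            children = (parents[rng.integers(0, 10, size=30)]
                        + 0.1 * rng.standard_normal((30, n_params)))
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("best MSE:", -fitness(best))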

  6. Introduction to neural networks in high energy physics

    International Nuclear Information System (INIS)

    Therhaag, J.

    2013-01-01

    Artificial neural networks are a well established tool in high energy physics, playing an important role in both online and offline data analysis. Nevertheless, they are often perceived as black boxes which perform obscure operations beyond the control of the user, resulting in skepticism toward any results that may be obtained using them. The situation is not helped by common explanations which try to draw analogies between artificial neural networks and the human brain, for the brain is an even more complex black box itself. In this introductory text, I will take a problem-oriented approach to neural network techniques, showing how the fundamental concepts arise naturally from the demand to solve classification tasks which are frequently encountered in high energy physics. Particular attention is devoted to the question of how probability theory can be used to control the complexity of neural networks. (authors)

  7. Neural networks for event filtering at D0

    International Nuclear Information System (INIS)

    Cutts, D.; Hoftun, J.S.; Sornborger, A.; Johnson, C.R.; Zeller, R.T.

    1989-01-01

    Neural networks may provide important tools for pattern recognition in high energy physics. We discuss an initial exploration of these techniques, presenting the result of network simulations of several filter algorithms. The D0 data acquisition system, a MicroVAX farm, will perform critical event selection; we describe a possible implementation of neural network algorithms in this system. 7 refs., 4 figs

  8. Artificial neural networks as a tool in urban storm drainage

    DEFF Research Database (Denmark)

    Loke, E.; Warnaars, E.A.; Jacobsen, P.

    1997-01-01

    The introduction of Artificial Neural Networks (ANNs) as a tool in the field of urban storm drainage is discussed. Besides some basic theory on the mechanics of ANNs and a general classification of the different types of ANNs, two ANN application examples are presented: The prediction of runoff...

  9. The application of artificial neural networks to TLD dose algorithm

    International Nuclear Information System (INIS)

    Moscovitch, M.

    1997-01-01

    We review the application of feed-forward neural networks to the development of multi-element thermoluminescence dosimetry (TLD) dose algorithms. A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on a neural network is a fundamentally different approach from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with a given response of a multi-element dosimeter (input) many times. The algorithm, trained in this way, is eventually able to produce its own unique solution to similar (but not exactly the same) dose calculation problems. For personnel dosimetry, the output consists of the desired dose components: deep dose, shallow dose, and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. For this application, a neural network architecture was developed based on the concept of the functional link network (FLN). The FLN concept allowed an increase in the dimensionality of the input space and the construction of a neural network without any hidden layers. This simplifies the problem and results in a relatively simple and reliable dose calculation algorithm. Overall, the neural network dose algorithm approach has been shown to significantly improve the precision and accuracy of dose calculations. (authors)

  10. Vibration monitoring with artificial neural networks

    International Nuclear Information System (INIS)

    Alguindigue, I.

    1991-01-01

    Vibration monitoring of components in nuclear power plants has been used for a number of years. This technique involves the analysis of vibration data coming from vital components of the plant to detect features which reflect the operational state of the machinery. The analysis leads to the identification of potential failures and their causes, and makes it possible to perform efficient preventive maintenance. Early detection is important because it can decrease the probability of catastrophic failures, reduce forced outages, maximize utilization of available assets, increase the life of the plant, and reduce maintenance costs. This paper documents our work on the design of a vibration monitoring methodology based on neural network technology. This technology provides an attractive complement to traditional vibration analysis because of the potential of neural networks to operate in real-time mode and to handle data which may be distorted or noisy. Our efforts have been concentrated on the analysis and classification of vibration signatures collected from operating machinery. Two neural network algorithms were used in our project: the Recirculation algorithm for data compression and the Backpropagation algorithm to perform the actual classification of the patterns. Although this project is in the early stages of development, it indicates that neural networks may provide a viable methodology for the monitoring and diagnostics of vibrating components. Our results to date are very encouraging.

  11. Neural networks and orbit control in accelerators

    International Nuclear Information System (INIS)

    Bozoki, E.; Friedman, A.

    1994-01-01

    An overview of the architecture, workings and training of Neural Networks is given. We stress the aspects which are important for the use of Neural Networks for orbit control in accelerators and storage rings, especially its ability to cope with the nonlinear behavior of the orbit response to 'kicks' and the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given

  12. Convolutional neural networks for event-related potential detection: impact of the architecture.

    Science.gov (United States)

    Cecotti, H

    2017-07-01

    The detection of brain responses at the single-trial level in the electroencephalogram (EEG), such as event-related potentials (ERPs), is a difficult problem that requires different processing steps to extract relevant discriminant features. While most of the signal processing and classification techniques for the detection of brain responses are based on linear algebra, pattern recognition techniques such as the convolutional neural network (CNN), a type of deep learning technique, have attracted interest as they are able to process the signal after only limited pre-processing. In this study, we investigate the performance of CNNs in relation to their architecture and in relation to how they are evaluated: a single system for each subject, or a system for all the subjects. More particularly, we want to address the change in performance that can be observed between tailoring a neural network to a single subject and considering a neural network for a group of subjects, taking advantage of the larger number of trials available from different subjects. The results support the conclusion that a convolutional neural network trained on different subjects can lead to an AUC above 0.9 by using an appropriate architecture with spatial filtering and shift-invariant layers.

  13. Molecular evolution of a peptide GPCR ligand driven by artificial neural networks.

    Directory of Open Access Journals (Sweden)

    Sebastian Bandholtz

    Full Text Available Peptide ligands of G protein-coupled receptors constitute valuable natural lead structures for the development of highly selective drugs and high-affinity tools to probe ligand-receptor interaction. Currently, pharmacological and metabolic modification of natural peptides involves either an iterative trial-and-error process based on structure-activity relationships or screening of peptide libraries that contain many structural variants of the native molecule. Here, we present a novel neural network architecture for the improvement of metabolic stability without loss of bioactivity. In this approach the peptide sequence determines the topology of the neural network and each cell corresponds one-to-one to a single amino acid of the peptide chain. Using a training set, the learning algorithm calculated weights for each cell. The resulting network calculated the fitness function in a genetic algorithm to explore the virtual space of all possible peptides. The network training was based on gradient descent techniques, which rely on the efficient calculation of the gradient by back-propagation. After three consecutive cycles of sequence design by the neural network, peptide synthesis and bioassay, this new approach yielded a ligand with 70-fold higher metabolic stability compared to the wild-type peptide without loss of the subnanomolar activity in the biological assay. Combining specialized neural networks with an exploration of the combinatorial amino acid sequence space by genetic algorithms represents a novel rational strategy for peptide design and optimization.
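
    The following sketch illustrates only the outer loop of such a design cycle: a genetic algorithm searches sequence space while a surrogate scoring function plays the role of the trained peptide network. The alphabet size, peptide length, `surrogate_fitness` table and GA settings are all assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)
ALPHABET = 20          # 20 proteinogenic amino acids, coded 0..19
LENGTH = 12            # assumed peptide length
POP, GENS = 60, 30

score_table = rng.normal(size=(LENGTH, ALPHABET))    # stand-in for the trained network

def surrogate_fitness(seq):
    # placeholder for the "stability + activity" score predicted by the network
    return score_table[np.arange(LENGTH), seq].sum()

def mutate(seq, rate=0.1):
    seq = seq.copy()
    mask = rng.random(LENGTH) < rate
    seq[mask] = rng.integers(0, ALPHABET, mask.sum())
    return seq

pop = rng.integers(0, ALPHABET, (POP, LENGTH))
for _ in range(GENS):
    fit = np.array([surrogate_fitness(s) for s in pop])
    parents = pop[np.argsort(fit)[-POP // 2:]]        # keep the better half
    children = np.array([mutate(parents[rng.integers(len(parents))])
                         for _ in range(POP - len(parents))])
    pop = np.vstack([parents, children])

best = pop[np.argmax([surrogate_fitness(s) for s in pop])]
print("best candidate sequence (coded):", best)
```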

  14. A comparative study of two neural networks for document retrieval

    International Nuclear Information System (INIS)

    Hui, S.C.; Goh, A.

    1997-01-01

    In recent years there has been specific interest in adopting advanced computer techniques in the field of document retrieval. This interest is generated by the fact that classical methods such as the Boolean search, the vector space model or even probabilistic retrieval cannot handle the increasing demands of end-users in satisfying their needs. The most recent attempt is the application of the neural network paradigm as a means of providing end-users with a more powerful retrieval mechanism. Neural networks are not only good pattern matchers but also highly versatile and adaptable. In this paper, we demonstrate how to apply two neural networks, namely the Adaptive Resonance Theory network and the Fuzzy Kohonen Neural Network, to document retrieval. In addition, a performance comparison of these two neural networks is also given.

  15. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses the connection weights formed in the preliminary learning process of a neural network classifier. In the experiments using the MNIST database of handwritten digits, the feature selection procedure reduces the number of features (from 60,000 to 7000) while preserving comparable recognition capability and accelerating computations. An experimental comparison between the LiRA perceptron and the modular assembly neural network shows that the recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. 23rd Workshop of the Italian Neural Networks Society (SIREN)

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2014-01-01

    This volume collects a selection of contributions which have been presented at the 23rd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare, Salerno, Italy during May 23-24, 2013. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book, as well as the workshop, is organized in two main components: a special session and a group of regular sessions featuring different aspects and points of view on artificial neural networks, artificial and natural intelligence, as well as psychological and cognitive theories for modeling human behavior and human-machine interaction, including Information Communication applications of compelling interest.

  17. A Recurrent Neural Network for Nonlinear Fractional Programming

    Directory of Open Access Journals (Sweden)

    Quan-Ju Zhang

    2012-01-01

    Full Text Available This paper presents a novel recurrent, continuous-time neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to demonstrate further the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.
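
    A minimal sketch of projection-type neural dynamics of this general kind is given below, using forward Euler integration and a simple quadratic stand-in objective rather than the paper's fractional objective; the interval bounds, step size and number of steps are assumptions.

```python
import numpy as np

lo, hi = np.array([-1.0, -1.0]), np.array([2.0, 2.0])   # interval (box) constraints

def grad(x):
    # gradient of the stand-in objective f(x) = ||x - c||^2, c = (1.5, 0.5)
    return 2.0 * (x - np.array([1.5, 0.5]))

def project(x):
    # projection onto the feasible box [lo, hi]
    return np.clip(x, lo, hi)

x, dt = np.array([-0.9, 1.8]), 0.05
for _ in range(400):
    # neural dynamics: dx/dt = P(x - grad f(x)) - x
    x = x + dt * (project(x - grad(x)) - x)

print("equilibrium (approximate constrained minimizer):", x)
```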

  18. Learning characteristics of a space-time neural network as a tether skiprope observer

    Science.gov (United States)

    Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles

    1993-01-01

    The Software Technology Laboratory at the Johnson Space Center is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.

  19. Sonar discrimination of cylinders from different angles using neural networks

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whiwlow; Larsen, Jan

    1999-01-01

    This paper describes an underwater object discrimination system applied to recognize cylinders of various compositions from different angles. The system is based on a new combination of simulated dolphin clicks, simulated auditory filters and artificial neural networks. The model demonstrates its...

  20. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
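
    The sketch below shows a much-simplified relative of such networks: a matrix-valued gradient flow whose equilibrium is the Moore-Penrose inverse when the input matrix has full column rank (one special case of an outer inverse). It does not handle prescribed range and null space; the matrix size, gain and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))                  # full column rank with probability 1
V = np.zeros((3, 5))                         # zero initial state, as in the paper
gamma, dt = 1.0, 0.01

for _ in range(20000):
    # matrix-valued dynamics: dV/dt = -gamma * A^T (A V - I)
    V += dt * (-gamma * A.T @ (A @ V - np.eye(5)))

print("max deviation from pinv(A):", np.abs(V - np.linalg.pinv(A)).max())
```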

  1. Identification and prediction of dynamic systems using an interactively recurrent self-evolving fuzzy neural network.

    Science.gov (United States)

    Lin, Yang-Yin; Chang, Jyh-Yeong; Lin, Chin-Teng

    2013-02-01

    This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed by external loops and internal feedback, obtained by feeding the rule firing strength of each rule back to other rules and to itself. The consequent part of the IRSFNN is of a Takagi-Sugeno-Kang (TSK) or functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of the fuzzy rules to improve the mapping ability. Unlike in a TSK-type fuzzy neural network, the FLNN in the consequent part is a nonlinear function of the input variables. Learning in an IRSFNN starts with an empty rule base, and all of the rules are generated and learned online through simultaneous structure and parameter learning. An online clustering algorithm is effective in generating fuzzy rules. The consequent parameters are updated by a variable-dimensional Kalman filter algorithm. The premise and recurrent parameters are learned through a gradient descent algorithm. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.

  2. Development and function of human cerebral cortex neural networks from pluripotent stem cells in vitro.

    Science.gov (United States)

    Kirwan, Peter; Turner-Bridger, Benita; Peter, Manuel; Momoh, Ayiba; Arambepola, Devika; Robinson, Hugh P C; Livesey, Frederick J

    2015-09-15

    A key aspect of nervous system development, including that of the cerebral cortex, is the formation of higher-order neural networks. Developing neural networks undergo several phases with distinct activity patterns in vivo, which are thought to prune and fine-tune network connectivity. We report here that human pluripotent stem cell (hPSC)-derived cerebral cortex neurons form large-scale networks that reflect those found in the developing cerebral cortex in vivo. Synchronised oscillatory networks develop in a highly stereotyped pattern over several weeks in culture. An initial phase of increasing frequency of oscillations is followed by a phase of decreasing frequency, before giving rise to non-synchronous, ordered activity patterns. hPSC-derived cortical neural networks are excitatory, driven by activation of AMPA- and NMDA-type glutamate receptors, and can undergo NMDA-receptor-mediated plasticity. Investigating single neuron connectivity within PSC-derived cultures, using rabies-based trans-synaptic tracing, we found two broad classes of neuronal connectivity: most neurons have small numbers (40). These data demonstrate that the formation of hPSC-derived cortical networks mimics in vivo cortical network development and function, demonstrating the utility of in vitro systems for mechanistic studies of human forebrain neural network biology. © 2015. Published by The Company of Biologists Ltd.

  3. Neutron spectra unfolding in Bonner spheres spectrometry using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Setayeshi, S.; Koohi-Fayegh, R.; Ghiassi-Nejad, M.

    2003-01-01

    The neural network method has been used for the unfolding of neutron spectra in Bonner sphere spectrometry. A back-propagation algorithm was used for training the neural networks. A 4 mm x 4 mm LiI(Eu) detector, bare and in a polyethylene sphere set of 2, 3, 4, 5, 6, 7, 8, 10, 12 and 18 inch diameter, was used for unfolding the neutron spectra. Neural networks were trained with 199 sets of neutron spectra, which were subdivided into 6, 8, 10, 12, 15 and 20 energy bins, and for each binning an appropriate neural network was designed and trained. Validation was performed with 21 sets of neutron spectra. A neural network with 10 energy bins, which had a mean error of 6% for dose equivalent estimation of the spectra in the validation set, showed the best results. The obtained results show that neural networks can be applied as an effective method for unfolding neutron spectra, especially when the main target is neutron dosimetry. (author)
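
    Purely as an illustration of the unfolding setup, the sketch below trains a one-hidden-layer network by backpropagation to map 11 sphere readings to a 10-bin spectrum, using a made-up response matrix and random training spectra in place of the real data; hidden-layer size and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spheres, n_bins, n_train = 11, 10, 199
R = np.abs(rng.normal(size=(n_bins, n_spheres)))        # stand-in response matrix
spectra = np.abs(rng.normal(size=(n_train, n_bins)))    # stand-in training spectra
readings = spectra @ R                                   # simulated sphere count rates

# one-hidden-layer MLP with tanh units, trained by plain backpropagation
W1 = rng.normal(0, 0.1, (n_spheres, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.1, (20, n_bins));    b2 = np.zeros(n_bins)
X = (readings - readings.mean(0)) / readings.std(0)      # normalize inputs
lr = 0.01
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    Y = H @ W2 + b2
    dY = (Y - spectra) / n_train                          # squared-error gradient
    dH = (dY @ W2.T) * (1 - H ** 2)                       # backpropagated to hidden layer
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

print("training RMSE:", np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - spectra) ** 2)))
```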

  4. SOLAR PHOTOVOLTAIC OUTPUT POWER FORECASTING USING BACK PROPAGATION NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    B. Jency Paulin

    2016-01-01

    Full Text Available Solar energy is an important renewable and unlimited source of energy. Solar photovoltaic power forecasting is an estimation of the expected power production that helps grid operators to better manage the electric balance between power demand and supply. A neural network is a computational model that can predict new outcomes from past trends. The artificial neural network is used for photovoltaic plant energy forecasting. The output power of the solar photovoltaic cells is predicted on an hourly basis. In the historical dataset collection process, two datasets were collected and used for analysis. The dataset was provided with three independent attributes and one dependent attribute. The artificial neural network structure is implemented as a Multilayer Perceptron (MLP) and the training procedure for the neural network is error Back Propagation (BP). In order to train and test the neural network, the datasets are divided in the ratio 70:30. The accuracy of prediction is assessed using various error measurement criteria, and the performance of the neural network is evaluated.

  5. Modeling of steam generator in nuclear power plant using neural network ensemble

    International Nuclear Information System (INIS)

    Lee, S. K.; Lee, E. C.; Jang, J. W.

    2003-01-01

    Neural networks are now being used in modeling the steam generator, which is known to be difficult to model due to its reverse dynamics. However, a single neural network is prone to the problem of overfitting. This paper investigates the use of neural network combining methods to model the steam generator water level and compares them with a single neural network. The results show that a neural network ensemble is an effective tool which can offer improved generalization, lower dependence on the training set and reduced training time.
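
    The combining idea can be sketched as follows, with plain linear least-squares models standing in for the individual neural networks and bootstrap resampling providing the diversity; the data are synthetic stand-ins for plant measurements and water level.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                                     # stand-in plant measurements
y = X @ np.array([0.5, -1.0, 0.2, 0.8]) + 0.1 * rng.normal(size=300)  # water level proxy

def fit_member(Xb, yb):
    # least-squares member model (stand-in for one trained network)
    return np.linalg.lstsq(Xb, yb, rcond=None)[0]

members = []
for _ in range(10):                                               # bagging: resample, fit, collect
    idx = rng.integers(0, len(X), len(X))
    members.append(fit_member(X[idx], y[idx]))

ensemble_pred = np.mean([X @ w for w in members], axis=0)         # average the member outputs
print("ensemble RMSE:", np.sqrt(np.mean((ensemble_pred - y) ** 2)))
```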

  6. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  7. Neural Networks through Shared Maps in Mobile Devices

    Directory of Open Access Journals (Sweden)

    William Raveane

    2014-12-01

    Full Text Available We introduce a hybrid system composed of a convolutional neural network and a discrete graphical model for image recognition. This system improves upon traditional sliding-window techniques for the analysis of an image larger than the training data by effectively processing the full input scene through the neural network in less time. The final result is then inferred from the neural network output through energy minimization, reaching a more precise localization than traditional maximum-value class comparisons yield. This makes the process suitable for real-time image recognition on a mobile device.

  8. Active Engine Mounting Control Algorithm Using Neural Network

    Directory of Open Access Journals (Sweden)

    Fadly Jashi Darsivan

    2009-01-01

    Full Text Available This paper proposes the application of a neural network as a controller to isolate engine vibration in an active engine mounting system. It has been shown that the NARMA-L2 neurocontroller has the ability to reject disturbances from a plant. The disturbance is assumed to consist of both impulse and sinusoidal disturbances induced by the engine. The performance of the neural network controller is compared with conventional PD and PID controllers tuned using Ziegler-Nichols. The simulation results show that the neural network controller isolates the engine vibration better than the conventional controllers.

  9. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.

  10. DEVELOPMENT OF A COMPUTER SYSTEM FOR IDENTITY AUTHENTICATION USING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Timur Kartbayev

    2017-03-01

    Full Text Available The aim of the study is to increase the effectiveness of automated face recognition for identity authentication, considering how the parameters of a face change over time. The improvement of recognition accuracy, as well as consideration of temporal changes in a human face, can be based on the methodology of artificial neural networks. Hybrid neural networks, combining the advantages of classical neural networks and fuzzy logic systems, allow using the network's learnability along with the explanation of the findings. The structural scheme of an intelligent identification system based on artificial neural networks is proposed in this work. It realizes the principles of digital information processing and identity recognition taking into account the forecast of key characteristics' changes over time (e.g., due to aging). The structural scheme has a three-tier architecture and implements preliminary processing, recognition and identification of images obtained as a result of monitoring. On the basis of expert knowledge, the fuzzy production rule base is designed. It allows assessing possible changes in the key characteristics used to authenticate identity based on the image. To take this possibility into consideration, a neuro-fuzzy network of the ANFIS type was used, which implements the Takagi-Sugeno algorithm. The conducted experiments showed high efficiency of the developed neural network and a low value of learning errors, which allows recommending this approach for practical implementation. Application of the developed system of fuzzy production rules, which allows predicting changes in individuals over time, will improve the recognition accuracy, reduce the number of authentication failures and improve the efficiency of information processing and decision-making in applications such as authentication of bank customers, users of mobile applications, or video monitoring systems of sensitive sites.

  11. Fast neutron spectra determination by threshold activation detectors using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Koohi-Fayegh, R.; Setayeshi, S.; Ghiassi-Nejad, M.

    2004-01-01

    The neural network method was used for fast neutron spectra unfolding in spectrometry by threshold activation detectors. The input layer of the neural networks consisted of 11 neurons for the specific activities of neutron-induced nuclear reaction products, while the output layers were fast neutron spectra subdivided into 6, 8, 10, 12, 15 and 20 energy bins. Neural network training was performed with 437 fast neutron spectra and the corresponding threshold activation detector readings. The trained neural networks were applied to unfolding 50 spectra which were not in the training sets, and the results were compared with the real spectra and with the spectra unfolded by SANDII. The best results belong to the 10 energy bin spectra. The neural network was also trained with detector readings carrying 5% uncertainty, and the response of the trained neural network to detector readings with 5%, 10%, 15%, 20%, 25% and 50% uncertainty was compared with the real spectra. The neural network algorithm, in comparison with other unfolding methods, is very fast, needs neither the detector response matrix nor any prior information about the spectra, and its outputs have low sensitivity to uncertainty in the activity measurements. The results show that the neural network algorithm is useful when a fast response with reasonable accuracy is required

  12. Biological Networks Entropies: Examples in Neural Memory Networks, Genetic Regulation Networks and Social Epidemic Networks

    Directory of Open Access Journals (Sweden)

    Jacques Demongeot

    2018-01-01

    Full Text Available Networks used in biological applications at different scales (molecule, cell and population are of different types: neuronal, genetic, and social, but they share the same dynamical concepts, in their continuous differential versions (e.g., the non-linear Wilson-Cowan system as well as in their discrete Boolean versions (e.g., the non-linear Hopfield system; in both cases, the notion of the interaction graph G(J associated with the Jacobian matrix J, and also the concepts of frustrated nodes, positive or negative circuits of G(J, kinetic energy, entropy, attractors, structural stability, etc., are relevant and useful for studying the dynamics and the robustness of these systems. We will give some general results available for both continuous and discrete biological networks, and then study some specific applications of three new notions of entropy: (i attractor entropy, (ii isochronal entropy and (iii entropy centrality; in three domains: a neural network involved in memory evocation, a genetic network responsible for iron control and a social network accounting for the spread of obesity in a high school environment.

  13. Prediksi Pergerakan Harga Valas Menggunakan Algoritma Neural Network

    Directory of Open Access Journals (Sweden)

    Castaka Agus Sugianto

    2018-01-01

    Full Text Available World currency market trading has become one of the many types of work done by the public, due to the convenience offered, the large profits and the flexibility of time and place in trading. This study aims to predict the movement of EUR/USD currency trends using data mining techniques with a neural network algorithm, compared against a linear regression algorithm, as one possible reference for traders when opening a trading position. The attributes used in this study are Open (opening price, Close (closing price, Highest (highest price and Lowest (lowest price; the time frame of the prices is 1 day, and the time period runs from 3 January 2011 to 15 November 2016. The results of this research are the Root Mean Squared Error (RMSE values, as well as the predicted labels obtained after validation using sliding-window validation. The best results in the testing phase using the neural network algorithm are an RMSE of 0.006 with windowing and 0.003 without windowing, while the linear regression algorithm gives 0.007 with windowing and 0.004 without windowing. A t-test showed that the neural network result is not significantly different from the linear regression result, with t-test values of 1.00 for the test with windowing and 0.077 for the test without windowing.
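
    The sliding-window evaluation can be sketched as below, with a linear model standing in for both competitors and randomly generated OHLC-like data in place of the EUR/USD series; window lengths and the step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
days = 1500
ohlc = rng.normal(size=(days, 4)).cumsum(axis=0)       # stand-in Open/High/Low/Close series
target = np.roll(ohlc[:, 3], -1)[:-1]                  # next day's closing price
features = ohlc[:-1]

train_len, test_len, step = 200, 50, 50
rmses = []
for start in range(0, len(features) - train_len - test_len, step):
    tr = slice(start, start + train_len)
    te = slice(start + train_len, start + train_len + test_len)
    Xtr = np.c_[features[tr], np.ones(train_len)]      # linear model with intercept
    w = np.linalg.lstsq(Xtr, target[tr], rcond=None)[0]
    pred = np.c_[features[te], np.ones(test_len)] @ w
    rmses.append(np.sqrt(np.mean((pred - target[te]) ** 2)))

print("mean sliding-window RMSE:", np.mean(rmses))
```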

  14. A modular neural network scheme applied to fault diagnosis in electric power systems.

    Science.gov (United States)

    Flores, Agustín; Quiles, Eduardo; García, Emilio; Morant, Francisco; Correcher, Antonio

    2014-01-01

    This work proposes a new method for fault diagnosis in electric power systems based on neural modules. With this method the diagnosis is performed by assigning a neural module to each type of component comprising the electric power system, whether it is a transmission line, bus or transformer. The neural modules for buses and transformers comprise two diagnostic levels which take into consideration the logic states of switches and relays, both internal and back-up; the neural module for transmission lines additionally has a third diagnostic level which takes into account the oscillograms of fault voltages and currents as well as the frequency spectra of these oscillograms, in order to verify whether the transmission line had in fact been subjected to a fault. One important advantage of the proposed diagnostic system is that its implementation does not require the use of a network configurator; it does not depend on the size of the power network, nor does it require retraining of the neural modules if the power network increases in size, making it applicable to a single component, a specific area, or the whole power system.

  15. Modeling and Speed Control of Induction Motor Drives Using Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Jamuna

    2010-08-01

    Full Text Available Speed control of induction motor drives using neural networks is presented. The mathematical model of a single-phase induction motor is developed. A new Simulink model for a neural-network-controlled bidirectional chopper fed single-phase induction motor is proposed. Under normal operation, the true drive parameters are identified in real time and converted into the controller parameters through multilayer forward computation by neural networks. A comparative study has been made between the conventional and neural network controllers. It is observed that the neural network controlled drive system has better dynamic performance, reduced overshoot and faster transient response than the conventionally controlled system.

  16. Neural networks in front-end processing and control

    International Nuclear Information System (INIS)

    Lister, J.B.; Schnurrenberger, H.; Staeheli, N.; Stockhammer, N.; Duperrex, P.A.; Moret, J.M.

    1992-01-01

    Research into neural networks has gained a large following in recent years. In spite of the long term timescale of this Artificial Intelligence research, the tools which the community is developing can already find useful applications to real practical problems in experimental research. One of the main advantages of the parallel algorithms being developed in AI is the structural simplicity of the required hardware implementation, and the simple nature of the calculations involved. This makes these techniques ideal for problems in which both speed and data volume reduction are important, the case for most front-end processing tasks. In this paper the authors illustrate the use of a particular neural network known as the Multi-Layer Perceptron as a method for solving several different tasks, all drawn from the field of Tokamak research. The authors also briefly discuss the use of the Multi-Layer Perceptron as a non-linear controller in a feedback loop. The authors outline the type of problem which can be usefully addressed by these techniques, even before the large-scale parallel processing hardware currently under development becomes cheaply available. The authors also present some of the difficulties encountered in applying these networks

  17. Neural networks in front-end processing and control

    International Nuclear Information System (INIS)

    Lister, J.B.; Schnurrenberger, H.; Staeheli, N.; Stockhammer, N.; Duperrex, P.A.; Moret, J.M.

    1991-07-01

    Research into neural networks has gained a large following in recent years. In spite of the long term timescale of this Artificial Intelligence research, the tools which the community is developing can already find useful applications to real practical problems in experimental research. One of the main advantages of the parallel algorithms being developed in AI is the structural simplicity of the required hardware implementation, and the simple nature of the calculations involved. This makes these techniques ideal for problems in which both speed and data volume reduction are important, the case for most front-end processing tasks. In this paper we illustrate the use of a particular neural network known as the Multi-Layer Perceptron as a method for solving several different tasks, all drawn from the field of Tokamak research. We also briefly discuss the use of the Multi-Layer Perceptron as a non-linear controller in a feedback loop. We outline the type of problem which can be usefully addressed by these techniques, even before the large-scale parallel processing hardware currently under development becomes cheaply available. We also present some of the difficulties encountered in applying these networks. (author) 13 figs., 9 refs

  18. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology.
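
    The shift from features to prototypes can be illustrated with a toy one-step recall rule that weights each stored pattern by a rectified polynomial of its overlap with the probe; this is a simplification of the model's dynamics, and the pattern sizes and exponents below are chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 20
memories = rng.choice([-1, 1], size=(K, N))            # stored +/-1 patterns

def recall(probe, n):
    overlaps = memories @ probe                        # similarity to each stored memory
    weights = np.maximum(overlaps, 0.0) ** n           # rectified polynomial F(x) = max(x,0)^n
    return np.sign(weights @ memories)                 # one-step weighted recall

probe = memories[0].copy()
probe[:20] *= -1                                       # corrupt 20% of the bits
for n in (1, 3, 5):
    rec = recall(probe, n)
    # larger n concentrates the recall on the single closest memory (prototype regime)
    print(f"n={n}: bits matching memory 0 =", int(np.sum(rec == memories[0])))
```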

  19. Neural networks for sensor validation and plant-wide monitoring

    International Nuclear Information System (INIS)

    Eryurek, E.

    1991-08-01

    The feasibility of using neural networks to characterize one or more variables as a function of other related variables has been studied. Neural networks, or parallel distributed processing, are found to be highly suitable for the development of relationships among various parameters. Sensor failure detection is studied, and it is shown that neural network models can be used to estimate sensor readings during the absence of a sensor. (author). 4 refs.; 3 figs

  20. Optimum Neural Network Architecture for Precipitation Prediction of Myanmar

    OpenAIRE

    Khaing Win Mar; Thinn Thu Naing

    2008-01-01

    Nowadays, precipitation prediction is required for proper planning and management of water resources. Prediction with neural network models has received increasing interest in various research and application domains. However, it is difficult to determine the best neural network architecture for prediction since it is not immediately obvious how many input or hidden nodes are used in the model. In this paper, neural network model is used as a forecasting tool. The major aim is to evaluate a s...

  1. Multiple Time Series Forecasting Using Quasi-Randomized Functional Link Neural Networks

    Directory of Open Access Journals (Sweden)

    Thierry Moudiki

    2018-03-01

    Full Text Available We are interested in obtaining forecasts for multiple time series, by taking into account the potential nonlinear relationships between their observations. For this purpose, we use a specific type of regression model on an augmented dataset of lagged time series. Our model is inspired by dynamic regression models (Pankratz 2012, with the response variable's lags included as predictors, and is known as a Random Vector Functional Link (RVFL neural network. RVFL neural networks have been successfully applied in the past to solving regression and classification problems. The novelty of our approach is to apply an RVFL model to multivariate time series, under two separate regularization constraints on the regression parameters.
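
    A minimal RVFL-style fit might look like the sketch below: fixed random hidden weights produce nonlinear features that are appended to the lagged inputs, and only the output weights are estimated, here by ridge regression on a synthetic univariate series. The paper's multivariate setting and its exact regularization scheme are not reproduced; the lag count, hidden size and ridge parameter are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.arange(400) * 0.1) + 0.1 * rng.normal(size=400)   # stand-in series
lags = 5
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])  # lagged inputs
y = series[lags:]                                                    # one-step-ahead target

W_rand = rng.normal(size=(lags, 50))          # fixed, untrained random hidden weights
b_rand = rng.normal(size=50)
H = np.tanh(X @ W_rand + b_rand)              # random functional-link features
D = np.hstack([X, H])                         # direct link: original inputs + hidden features

lam = 1e-2                                    # ridge regularization parameter
beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
pred = D @ beta
print("in-sample RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```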

  2. Optimal Brain Surgeon on Artificial Neural Networks in

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Job, Jonas Hultmann; Klyver, Katrine

    2012-01-01

    It is shown how the procedure known as optimal brain surgeon can be used to trim and optimize artificial neural networks in nonlinear structural dynamics. Besides optimizing the neural network, and thereby minimizing the computational cost in simulation, the surgery procedure can also serve as a quick...
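
    The core surgery step can be sketched on a model with an exactly quadratic error surface (linear least squares, so the Hessian is X^T X): compute each weight's saliency from the inverse Hessian, prune the least salient weight and adjust the remaining weights. The data and model below are stand-ins, not the paper's structural-dynamics networks.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
w_true = np.array([1.5, 0.0, -2.0, 0.01, 0.8, 0.0])      # some weights are nearly useless
y = X @ w_true + 0.05 * rng.normal(size=200)

w = np.linalg.lstsq(X, y, rcond=None)[0]                  # "trained" weights
H = X.T @ X                                               # exact Hessian of the squared error
H_inv = np.linalg.inv(H)

saliency = w ** 2 / (2.0 * np.diag(H_inv))                # OBS saliency w_q^2 / (2 [H^-1]_qq)
q = int(np.argmin(saliency))                              # least important weight
w = w - w[q] * H_inv[:, q] / H_inv[q, q]                  # OBS update of the remaining weights
w[q] = 0.0                                                # ...and prune the selected weight

print("pruned index:", q, "updated weights:", np.round(w, 3))
```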

  3. Automatic classification of ovarian cancer types from cytological images using deep convolutional neural networks.

    Science.gov (United States)

    Wu, Miao; Yan, Chuanbo; Liu, Huiqiang; Liu, Qian

    2018-06-29

    Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucous carcinoma, endometrioid carcinoma, transparent cell carcinoma) is an essential part of the differential diagnosis. Computer-aided diagnosis (CADx) can provide useful advice for pathologists to determine the diagnosis correctly. In our study, we employed a Deep Convolutional Neural Network (DCNN) based on AlexNet to automatically classify the different types of ovarian cancers from cytological images. The DCNN consists of five convolutional layers, three max pooling layers, and two fully connected layers. We then trained the model with two groups of input data separately: one was the original image data and the other was augmented image data, including image enhancement and image rotation. The testing results were obtained by 10-fold cross-validation, showing that the accuracy of the classification models was improved from 72.76 to 78.20% by using augmented images as training data. The developed scheme is useful for classifying ovarian cancers from cytological images. © 2018 The Author(s).

  4. Deep Neural Network-Based Chinese Semantic Role Labeling

    Institute of Scientific and Technical Information of China (English)

    ZHENG Xiaoqing; CHEN Jun; SHANG Guoqiang

    2017-01-01

    A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with its application on message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smart phone applications. Experiment results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance with minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications or applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.

  5. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    A deep learning approach has been widely applied in sequence modeling problems. In terms of automatic speech recognition (ASR), its performance has significantly been improved by increasing large speech corpus and deeper neural network. Especially, recurrent neural network and deep convolutional neural network have been applied in ASR successfully. Given the arising problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  6. Classification of Urinary Calculi using Feed-Forward Neural Networks

    African Journals Online (AJOL)

    NJD

    Genetic algorithms were used for optimization of neural networks and for selection of the ... Urinary calculi, infrared spectroscopy, classification, neural networks, variable ..... note that the best accuracy is obtained for whewellite, weddellite.

  7. ECO INVESTMENT PROJECT MANAGEMENT THROUGH TIME APPLYING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Tamara Gvozdenović

    2007-06-01

    Full Text Available The concept of project management expresses an indispensable approach to investment projects. Time is often the most important factor in these projects. The artificial neural network is a data processing paradigm inspired by the biological brain, and it is used in numerous different fields, among which is project management. This research is oriented toward the application of artificial neural networks in managing the time of an investment project. The artificial neural networks are used to define the optimistic, the most probable and the pessimistic time in the PERT method. The program package Matlab: Neural Network Toolbox is used in the data simulation. The feed-forward back propagation network is chosen.
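
    For reference, the three times produced by the networks enter PERT through the usual beta-approximation formulas, sketched below with placeholder values in place of the network outputs.

```python
# PERT expected duration and variance from the three time estimates
def pert_estimate(optimistic, most_probable, pessimistic):
    expected = (optimistic + 4 * most_probable + pessimistic) / 6.0
    variance = ((pessimistic - optimistic) / 6.0) ** 2
    return expected, variance

# placeholder values standing in for the network-estimated times
t_e, var = pert_estimate(optimistic=10.0, most_probable=14.0, pessimistic=22.0)
print(f"expected duration {t_e:.2f}, variance {var:.2f}")
```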

  8. Forecasting PM10 in metropolitan areas: Efficacy of neural networks

    International Nuclear Information System (INIS)

    Fernando, H.J.S.; Mammarella, M.C.; Grandoni, G.; Fedele, P.; Di Marco, R.; Dimitrova, R.; Hyde, P.

    2012-01-01

    Deterministic photochemical air quality models are commonly used for regulatory management and planning of urban airsheds. These models are complex, computer intensive, and hence prohibitively expensive for routine air quality predictions. Stochastic methods are becoming increasingly popular as an alternative; they relegate decision making to artificial intelligence based on neural networks, which are made of artificial neurons or 'nodes' capable of 'learning through training' via historic data. A neural network was used to predict particulate matter concentration at a regulatory monitoring site in Phoenix, Arizona; its development, efficacy as a predictive tool and performance vis-à-vis a commonly used regulatory photochemical model are described in this paper. It is concluded that neural networks are much easier, quicker and more economical to implement without compromising the accuracy of predictions. Neural networks can be used to develop rapid air quality warning systems based on a network of automated monitoring stations. Highlights: ► The neural network is an alternative technique to photochemical modelling. ► Neural networks can be as effective as traditional photochemical air quality modelling. ► Neural networks are much easier and quicker to implement in a health warning system. - Neural networks are as effective as photochemical modelling for air quality predictions, but are much easier, quicker and more economical to implement in air pollution (or health) warning systems.

  9. Genetic prediction of type 2 diabetes using deep neural network.

    Science.gov (United States)

    Kim, J; Kim, J; Kwak, M J; Bajaj, M

    2018-04-01

    Type 2 diabetes (T2DM) has strong heritability, but genetic models to explain the heritability have been challenging. We tested a deep neural network (DNN) to predict T2DM using the nested case-control study of the Nurses' Health Study (3326 females, 45.6% T2DM) and the Health Professionals Follow-up Study (2502 males, 46.5% T2DM). We selected 96, 214, 399, and 678 single-nucleotide polymorphisms (SNPs) through Fisher's exact test and L1-penalized logistic regression. We split each dataset randomly 4:1 to train prediction models and test their performance. DNN and logistic regressions showed a better area under the curve (AUC) of the ROC curve than the clinical model when 399 or more SNPs were included. DNN was superior to logistic regression in AUC with 399 or more SNPs in males and 678 SNPs in females. Addition of clinical factors consistently increased the AUC of the DNN but failed to improve the logistic regressions with 214 or more SNPs. In conclusion, we show that DNN can be a versatile tool to predict T2DM incorporating large numbers of SNPs and clinical information. Limitations include a relatively small number of subjects, mostly of European ethnicity. Further studies are warranted to confirm and improve the performance of genetic prediction models using DNN in different ethnic groups. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
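
    The evaluation protocol (a 4:1 split and AUC) can be sketched as below with synthetic genotype data and a ridge-regularized linear score standing in for the deep network; the AUC is computed from ranks (the Mann-Whitney statistic). Sample sizes, SNP count and the regularization strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 96
X = rng.integers(0, 3, size=(n, p)).astype(float)         # 0/1/2 genotype dosages (synthetic)
w_true = rng.normal(0, 0.2, p)
y = (rng.random(n) < 1 / (1 + np.exp(-(X - 1) @ w_true))).astype(int)

idx = rng.permutation(n)
train, test = idx[: 4 * n // 5], idx[4 * n // 5:]          # 4:1 train/test split

# ridge-regularized linear score as a simple stand-in classifier
A = X[train]; b = y[train] - y[train].mean()
w = np.linalg.solve(A.T @ A + 10.0 * np.eye(p), A.T @ b)
scores = X[test] @ w

def auc(y_true, y_score):
    # AUC via the Mann-Whitney rank statistic (assumes no tied scores)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score)); ranks[order] = np.arange(1, len(y_score) + 1)
    pos = y_true == 1
    return (ranks[pos].sum() - pos.sum() * (pos.sum() + 1) / 2) / (pos.sum() * (~pos).sum())

print("test AUC:", round(auc(y[test], scores), 3))
```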

  10. q-state Potts-glass neural network based on pseudoinverse rule

    International Nuclear Information System (INIS)

    Xiong Daxing; Zhao Hong

    2010-01-01

    We study the q-state Potts-glass neural network with the pseudoinverse (PI) rule. Its performance is investigated and compared with that of the counterpart network using the Hebbian rule instead. We find that there exists a critical value of q, namely q_cr = 14, below which the storage capacity and the retrieval quality can be greatly improved by introducing the PI rule. We show that the dynamics of the neural networks constructed with the two learning rules are quite different; however, regardless of the learning rule, in q-state Potts-glass neural networks with q ≥ 3 there is a common novel dynamical phase in which the spurious memories are completely suppressed. This property has never been noticed in symmetric feedback neural networks. Freedom from spurious memories implies that multistate Potts-glass neural networks will not be trapped in metastable states, which is a favorable property for their applications.
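
    In the simpler binary (q = 2) Hopfield setting, the pseudoinverse (projection) rule reduces to the sketch below, which makes every stored pattern an exact fixed point of one synchronous update even when the patterns are correlated; the q-state Potts version itself is not reproduced here, and the pattern sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 80, 15
X = rng.choice([-1.0, 1.0], size=(N, K))       # patterns stored as columns

W = X @ np.linalg.pinv(X)                      # pseudoinverse (projection) learning rule

recalled = np.sign(W @ X)                      # one synchronous update of every stored pattern
print("all stored patterns are fixed points:", bool(np.all(recalled == X)))
```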

  11. A review and analysis of neural networks for classification of remotely sensed multispectral imagery

    Science.gov (United States)

    Paola, Justin D.; Schowengerdt, Robert A.

    1993-01-01

    A literature survey and analysis of the use of neural networks for the classification of remotely sensed multispectral imagery is presented. As part of a brief mathematical review, the backpropagation algorithm, which is the most common method of training multi-layer networks, is discussed with an emphasis on its application to pattern recognition. The analysis is divided into five aspects of neural network classification: (1) input data preprocessing, structure, and encoding; (2) output encoding and extraction of classes; (3) network architecture, (4) training algorithms; and (5) comparisons to conventional classifiers. The advantages of the neural network method over traditional classifiers are its non-parametric nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, fuzzy output values that can enhance classification, and good generalization for use with multiple images. The disadvantages of the method are slow training time, inconsistent results due to random initial weights, and the requirement of obscure initialization values (e.g., learning rate and hidden layer size). Possible techniques for ameliorating these problems are discussed. It is concluded that, although the neural network method has several unique capabilities, it will become a useful tool in remote sensing only if it is made faster, more predictable, and easier to use.

  12. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  13. Autonomous dynamics in neural networks: the dHAN concept and associative thought processes

    Science.gov (United States)

    Gros, Claudius

    2007-02-01

    The neural activity of the human brain is dominated by self-sustained activities. External sensory stimuli influence this autonomous activity but they do not drive the brain directly. Most standard artificial neural network models are however input driven and do not show spontaneous activities. It constitutes a challenge to develop organizational principles for controlled, self-sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. An associative thought-process is characterized, within this approach, by a time-series of transient attractors. Each transient state corresponds to a stored information, a memory. The subsequent transient states are characterized by large associative overlaps, which are identical to acquired patterns. Memory states, the acquired patterns, have such a dual functionality. In this approach the self-sustained neural activity has a central functional role. The network acquires a discrimination capability, as external stimuli need to compete with the autonomous activity. Noise in the input is readily filtered-out. Hebbian learning of external patterns occurs coinstantaneous with the ongoing associative thought process. The autonomous dynamics needs a long-term working-point optimization which acquires within the dHAN concept a dual functionality: It stabilizes the time development of the associative thought process and limits runaway synaptic growth, which generically occurs otherwise in neural networks with self-induced activities and Hebbian-type learning rules.

  14. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
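
    The central trick can be sketched for a single layer and a single time step: the forward pass applies a hard threshold to the membrane potential, while the backward pass substitutes a smooth surrogate derivative so the error can be backpropagated. This is not the paper's full framework; layer sizes, threshold, surrogate slope and learning rate below are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_out, theta = 30, 5, 1.0
W = rng.normal(0, 0.3, (n_in, n_out))
x = rng.random(n_in)                        # stand-in input rates / spike counts
target = np.array([1, 0, 0, 1, 0], float)   # desired output spikes

for _ in range(200):
    v = x @ W                               # membrane potentials
    s = (v > theta).astype(float)           # forward: hard, non-differentiable spike
    err = s - target                        # squared-error gradient with respect to s
    sg = sigmoid(5 * (v - theta))
    surrogate = 5 * sg * (1 - sg)           # backward: smooth surrogate for ds/dv
    W -= 0.5 * np.outer(x, err * surrogate) # gradient step through the surrogate

print("final spikes:", (x @ W > theta).astype(int), "target:", target.astype(int))
```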

  15. Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); S.M. Bohte (Sander)

    2016-01-01

    Biological neurons communicate with a sparing exchange of pulses - spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using only so very few spikes to communicate. Building on

  16. A recurrent neural network based on projection operator for extended general variational inequalities.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde

    2010-06-01

    Based on the projection operator, a recurrent neural network is proposed for solving extended general variational inequalities (EGVIs). Sufficient conditions are provided to ensure the global convergence of the proposed neural network based on Lyapunov methods. Compared with the existing neural networks for variational inequalities, the proposed neural network is a modified version of the general projection neural network existing in the literature and capable of solving the EGVI problems. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed neural network.

  17. Neutron spectrometry and dosimetry by means of evolutive neural networks

    International Nuclear Information System (INIS)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R.

    2008-01-01

    Artificial neural networks and genetic algorithms are two relatively new areas of research which have been the subject of growing interest during the last years. Both models are inspired by nature; however, neural networks are concerned with the learning of a single individual, defined as phenotypic learning, while evolutionary algorithms are concerned with the adaptation of a population to a changing environment, defined as genotypic learning. Recently, neural network technology has been applied with success in the area of nuclear sciences, mainly in neutron spectrometry and dosimetry. The structure (network topology) as well as the learning parameters of a neural network are factors that contribute significantly to its performance; however, it has been observed that investigators in this area select the network parameters by trial and error, which produces neural networks with poor performance and low generalization capacity. From the sources reviewed, it has been observed that the use of evolutionary algorithms, seen as search techniques, has made it possible to evolve and optimize different properties of neural networks, such as the initialization of the synaptic weights, the network architecture or the training algorithms, without human intervention. The objective of the present work is to analyze the intersection of neural networks and evolutionary algorithms, examining how the latter can assist in the design and training of a neural network, that is, in the proper selection of the structural and learning parameters of the network, improving its generalization capacity, so that the network is able to efficiently reconstruct neutron spectra and to calculate equivalent doses from the count rates of a Bonner sphere spectrometer

  18. Neural Network for Optimization of Existing Control Systems

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1995-01-01

    The purpose of this paper is to develop methods to use Neural Network based Controllers (NNC) as an optimization tool for existing control systems.

  19. assessment of neural networks performance in modeling rainfall ...

    African Journals Online (AJOL)

    Sholagberu

    neural network architecture for precipitation prediction of Myanmar, World Academy of Science, Engineering and Technology, 48, pp. 130-134. Kumarasiri, A.D. and Sonnadara, D.U.J. (2006). Rainfall forecasting: an artificial neural network approach, Proceedings of the Technical Sessions, 22, pp. 1-13, Institute of Physics ...

  20. Elements of an algorithm for optimizing a parameter-structural neural network

    Science.gov (United States)

    Mrówczyńska, Maria

    2016-06-01

    The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling such data. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid the necessity of arbitrarily defining the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.
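
    One GMDH-style selection step might look like the sketch below: candidate neurons are quadratic (Ivakhnenko) polynomials of input pairs, fitted on a training split and ranked on a validation split, with the best survivors feeding the next layer. The data are synthetic stand-ins for the chimney deformation measurements, and the split sizes are assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, d = 120, 4
X = rng.normal(size=(n, d))
y = 0.8 * X[:, 0] * X[:, 1] - 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=n)

train, valid = slice(0, 80), slice(80, n)          # training / selection (validation) split

def poly_features(a, b):
    # Ivakhnenko polynomial terms: 1, a, b, a*b, a^2, b^2
    return np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])

candidates = []
for i, j in combinations(range(d), 2):
    P_tr = poly_features(X[train, i], X[train, j])
    coef = np.linalg.lstsq(P_tr, y[train], rcond=None)[0]   # fit candidate neuron
    P_va = poly_features(X[valid, i], X[valid, j])
    val_err = np.mean((P_va @ coef - y[valid]) ** 2)         # rank on the validation split
    candidates.append((val_err, (i, j)))

candidates.sort(key=lambda c: c[0])
best_err, best_pair = candidates[0]
print("best input pair:", best_pair, "validation MSE:", round(best_err, 4))
```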