WorldWideScience

Sample records for feedforward neural network

  1. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  2. Quantum generalisation of feedforward neural networks

    Science.gov (United States)

    Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.

    2017-09-01

    We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.

  3. Feedforward Nonlinear Control Using Neural Gas Network

    Directory of Open Access Journals (Sweden)

    Iván Machón-González

    2017-01-01

    Full Text Available Nonlinear systems control is a main issue in control theory. Many developed applications lack a mathematical foundation as general as the theory of linear systems. This paper proposes a control strategy for nonlinear systems with unknown dynamics by means of a set of local linear models obtained by a supervised neural gas network. The proposed approach takes advantage of the neural gas feature by which the algorithm yields a very robust clustering procedure. The direct model of the plant constitutes a piece-wise linear approximation of the nonlinear system, and each neuron represents a local linear model for which a linear controller is designed. The neural gas model works as an observer and a controller at the same time. State feedback control is implemented by estimating the state variables from the local transfer function provided by the local linear model. The gradient vectors obtained by the supervised neural gas algorithm provide a robust procedure for feedforward nonlinear control, that is, assuming the absence of disturbances.

  4. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
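
    A minimal sketch of the decoding-as-classification idea (not the authors' decoder): a feedforward network maps syndrome bits to an error class. A 3-bit repetition code stands in for a surface code here, and the sklearn MLPClassifier, toy data size, and network width are illustrative assumptions.

      # Sketch only: decoder-as-classifier on a 3-bit repetition code (a stand-in
      # for a surface code).  Syndrome bits are parities of neighbouring qubits;
      # the class label is the index of the flipped qubit (3 = no error).
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      labels = rng.integers(0, 4, size=2000)            # which qubit flipped (3 = none)
      codewords = np.zeros((2000, 3), dtype=int)
      for i, e in enumerate(labels):
          if e < 3:
              codewords[i, e] ^= 1                      # apply a single bit-flip error
      syndromes = np.stack([codewords[:, 0] ^ codewords[:, 1],
                            codewords[:, 1] ^ codewords[:, 2]], axis=1)

      decoder = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      decoder.fit(syndromes, labels)
      print("training accuracy:", decoder.score(syndromes, labels))   # ~1.0 on this toy code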

  5. Classes of feedforward neural networks and their circuit complexity

    NARCIS (Netherlands)

    Shawe-Taylor, John S.; Anthony, Martin H.G.; Kern, Walter

    1992-01-01

    This paper aims to place neural networks in the context of boolean circuit complexity. We define appropriate classes of feedforward neural networks with specified fan-in, accuracy of computation and depth and using techniques of communication complexity proceed to show that the classes fit into a

  6. Direct adaptive control using feedforward neural networks

    OpenAIRE

    Cajueiro, Daniel Oliveira; Hemerly, Elder Moreira

    2003-01-01

    ABSTRACT: This paper proposes a new scheme for direct neural adaptive control that works efficiently employing only one neural network, used for simultaneously identifying and controlling the plant. The idea behind this structure of adaptive control is to compensate the control input obtained by a conventional feedback controller. The neural network training process is carried out by using two different techniques: backpropagation and extended Kalman filter algorithm. Additionally, the conver...

  7. Adaptive training of feedforward neural networks by Kalman filtering

    International Nuclear Information System (INIS)

    Ciftcioglu, Oe.

    1995-02-01

    Adaptive training of feedforward neural networks by Kalman filtering is described. Adaptive training is particularly important for estimation by neural network in a real-time environment, where the trained network is used for system estimation while it continues to be trained with the information provided by ongoing operation. As a result, the neural network adapts itself to a changing environment and performs its mission without recourse to re-training. The performance of the training method is demonstrated by means of actual process signals from a nuclear power plant. (orig.)
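
    A minimal sketch of Kalman-filter training for a feedforward network (not the paper's implementation): an extended Kalman filter treats the network weights as the state to be estimated and updates them sample by sample, which is what makes the training adaptive. The one-hidden-layer architecture, the synthetic target signal, and the noise covariances R and Q are illustrative assumptions.

      # Sketch only: per-sample EKF update of the weights of a small
      # one-hidden-layer network (scalar input and output).
      import numpy as np

      def unpack(theta, n_in, n_h):
          i = 0
          W1 = theta[i:i + n_h * n_in].reshape(n_h, n_in); i += n_h * n_in
          b1 = theta[i:i + n_h]; i += n_h
          w2 = theta[i:i + n_h]; i += n_h
          return W1, b1, w2, theta[i]

      def forward_and_jac(theta, x, n_in, n_h):
          W1, b1, w2, b2 = unpack(theta, n_in, n_h)
          h = np.tanh(W1 @ x + b1)
          y = w2 @ h + b2
          dh = 1.0 - h ** 2
          # gradient of the scalar output w.r.t. every parameter, in pack order
          jac = np.concatenate([np.outer(w2 * dh, x).ravel(), w2 * dh, h, [1.0]])
          return y, jac

      n_in, n_h = 1, 8
      n_par = n_h * n_in + 2 * n_h + 1
      rng = np.random.default_rng(1)
      theta = 0.1 * rng.standard_normal(n_par)
      P = 100.0 * np.eye(n_par)        # parameter covariance
      Q = 1e-6 * np.eye(n_par)         # process noise keeps the filter adaptive
      R = 1e-2                         # assumed measurement noise variance

      for step in range(2000):
          x = rng.uniform(-1, 1, size=n_in)
          target = np.sin(3 * x[0])                    # stand-in for a plant signal
          y, H = forward_and_jac(theta, x, n_in, n_h)
          S = H @ P @ H + R                            # innovation variance (scalar)
          K = P @ H / S                                # Kalman gain
          theta = theta + K * (target - y)             # adaptive weight update
          P = P - np.outer(K, H @ P) + Q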

  8. Feedforward Nonlinear Control Using Neural Gas Network

    OpenAIRE

    Machón-González, Iván; López-García, Hilario

    2017-01-01

    Nonlinear systems control is a main issue in control theory. Many developed applications lack a mathematical foundation as general as the theory of linear systems. This paper proposes a control strategy for nonlinear systems with unknown dynamics by means of a set of local linear models obtained by a supervised neural gas network. The proposed approach takes advantage of the neural gas feature by which the algorithm yields a very robust clustering procedure. The direct model of the ...

  9. Time series prediction by feedforward neural networks - is it difficult?

    International Nuclear Information System (INIS)

    Rosen-Zvi, Michal; Kanter, Ido; Kinzel, Wolfgang

    2003-01-01

    The difficulties that a neural network faces when trying to learn from a quasi-periodic time series are studied analytically using a teacher-student scenario where the random input is divided into two macroscopic regions with different variances, 1 and 1/γ² (γ ≫ 1). The generalization error is found to decrease as ε_g ∝ exp(−α/γ²), where α is the number of examples per input dimension. In contrast to this very slowly vanishing generalization error, the next output prediction is found to be almost free of mistakes. This picture is consistent with learning quasi-periodic time series produced by feedforward neural networks, which is dominated by enhanced components of the Fourier spectrum of the input. Simulation results are in good agreement with the analytical results

  10. Neural networks for feedback feedforward nonlinear control systems.

    Science.gov (United States)

    Parisini, T; Zoppoli, R

    1994-01-01

    This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.

  11. Conjugate descent formulation of backpropagation error in feedforward neural networks

    Directory of Open Access Journals (Sweden)

    NK Sharma

    2009-06-01

    Full Text Available The feedforward neural network architecture uses backpropagation learning to determine optimal weights between different interconnected layers. This learning procedure uses a gradient descent technique applied to a sum-of-squares error function for the given input-output pattern. It employs an iterative procedure to minimise the error function for a given set of patterns, by adjusting the weights of the network. The first derivatives of the error with respect to the weights identify the local error surface in the descent direction. Hence the network exhibits a different local error surface for every different pattern presented to it, and weights are iteratively modified in order to minimise the current local error. The determination of an optimal weight vector is possible only when the total minimum error (the mean of the minimum local errors for all patterns in the training set) is minimised. In this paper, we present a general mathematical formulation for the second derivative of the error function with respect to the weights (which represents a conjugate descent) for arbitrary feedforward neural network topologies, and we use this derivative information to obtain the optimal weight vector. The local error is backpropagated among the units of hidden layers via the second order derivative of the error with respect to the weights of the hidden and output layers independently and also in combination. The new total minimum error point may be evaluated with the help of the current total minimum error and the current minimised local error. The weight modification process is performed twice: once with respect to the present local error and once more with respect to the current total or mean error. We present some numerical evidence that our proposed method yields better network weights than those determined via a conventional gradient descent approach.
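
    For reference, the first-order procedure that the second-derivative (conjugate-descent) formulation above builds on can be written in a few lines. This is a generic sketch, not the authors' algorithm; the one-hidden-layer sigmoid network, the toy pattern set, and the learning rate are assumptions.

      # Sketch only: gradient-descent backpropagation on a sum-of-squares error.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(200, 2))
      T = (X[:, :1] * X[:, 1:] > 0).astype(float)       # toy target pattern

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
      W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)
      lr = 0.5

      for epoch in range(2000):
          H = sigmoid(X @ W1 + b1)                      # hidden activations
          Y = sigmoid(H @ W2 + b2)                      # network output
          E = Y - T                                     # error for loss 0.5*sum(E**2)
          dY = E * Y * (1 - Y)                          # backpropagated output delta
          dH = (dY @ W2.T) * H * (1 - H)                # backpropagated hidden delta
          W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(0)
          W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)

      print("final sum-of-squares error:", 0.5 * float((E ** 2).sum()))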

  12. Prediction of metal corrosion using feed-forward neural networks

    International Nuclear Information System (INIS)

    Mahjani, M.G.; Jalili, S.; Jafarian, M.; Jaberi, A.

    2004-01-01

    The reliable prediction of corrosion behavior for the effective control of corrosion is a fundamental requirement. Since real-world corrosion never seems to involve quite the same conditions that have previously been tested, the corrosion literature does not provide the necessary answers. In order to provide a methodology for predicting corrosion in real and complex situations, artificial neural networks can be utilized. The feed-forward artificial neural network (FFANN) is an information-processing paradigm inspired by the way the densely interconnected, parallel structure of the human brain processes information. The aim of the present work is to predict corrosion behavior in critical conditions, such as industrial applications, based on laboratory experimental data. The electrochemical behavior of stainless steel under different conditions was studied using polarization techniques and Tafel curves. Back-propagation neural network models were developed to predict the corrosion behavior. The trained networks yield predictions in good agreement with the experimental data and are generally successful in modeling the corrosion behavior. The results are presented in two tables. Table 1 gives the corrosion behavior of stainless steel as a function of pH and CuSO4 concentration, and Table 2 gives the corrosion behavior of stainless steel as a function of electrode surface area and CuSO4 concentration. (authors)

  13. Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm

    Directory of Open Access Journals (Sweden)

    Haizhou Wu

    2016-01-01

    Full Text Available Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of the aforementioned method, eight different datasets selected from the UCI machine learning repository are employed for experiments and the results are compared among seven metaheuristic algorithms. The results show that SOS performs better than other algorithms for training FNNs in terms of converging speed. It is also proven that an FNN trained by the method of SOS has better accuracy than most of the compared algorithms.

  14. Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.

    Science.gov (United States)

    Pan, Yongping; Yu, Haoyong

    2017-06-01

    This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.

  15. Training feed-forward neural networks with gain constraints

    Science.gov (United States)

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
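
    A minimal sketch of the penalty idea (not the authors' adaptive balancing scheme): inequality bounds on the input-output gains dy/dx are added to the training objective as a penalty term, and the penalized objective is minimized by gradient descent, here with a crude finite-difference gradient. The one-hidden-layer tanh network, the gain bound, and the penalty weight are assumptions.

      # Sketch only: sum-of-squares loss plus a penalty on gains that exceed a bound.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(100, 2))
      t = 0.4 * X[:, 0] - 0.2 * X[:, 1]                 # toy target with known gains
      GAIN_BOUND = 0.5                                  # assumed bound |dy/dx_j| <= 0.5
      LAMBDA = 10.0                                     # assumed penalty weight

      def objective(theta, n_h=6):
          W1 = theta[:2 * n_h].reshape(2, n_h)
          b1 = theta[2 * n_h:3 * n_h]
          w2 = theta[3 * n_h:4 * n_h]
          b2 = theta[-1]
          H = np.tanh(X @ W1 + b1)
          y = H @ w2 + b2
          mse = np.mean((y - t) ** 2)
          # analytic gains dy/dx for every sample of a one-hidden-layer tanh net
          gains = (1 - H ** 2) * w2 @ W1.T              # shape (n_samples, 2)
          penalty = np.mean(np.maximum(np.abs(gains) - GAIN_BOUND, 0.0) ** 2)
          return mse + LAMBDA * penalty

      theta = 0.1 * rng.standard_normal(2 * 6 + 6 + 6 + 1)
      lr, eps = 0.1, 1e-5
      for step in range(300):                           # finite-difference gradient descent
          grad = np.array([(objective(theta + eps * e) - objective(theta - eps * e)) / (2 * eps)
                           for e in np.eye(theta.size)])
          theta -= lr * grad
      print("penalized objective:", objective(theta))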

  16. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    Directory of Open Access Journals (Sweden)

    Mehrshad Salmasi

    2012-07-01

    Full Text Available Active noise control is based on the destructive interference between the primary noise and the noise generated from the secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks is evaluated in active cancellation of sound noise. To this end, feedforward and recurrent neural networks are designed and trained. After training, the performance of the feedforward and recurrent networks in noise attenuation is compared. We use an Elman network as the recurrent neural network. For the simulations, noise signals from the SPIB database are used. In order to compare the networks appropriately, an equal number of layers and neurons is considered for the networks. Moreover, the training and test samples are identical. Simulation results show that feedforward and recurrent neural networks present good performance in noise cancellation, with the recurrent neural network attenuating noise better than the feedforward network.
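
    A minimal sketch of the feedforward case (not the paper's networks or the SPIB data): a small network is trained to reproduce the primary noise from a delay line of past reference samples, and its negated output serves as the antinoise of equal amplitude and opposite phase. The synthetic noise signal, the number of taps, and the sklearn MLPRegressor are assumptions.

      # Sketch only: feedforward network as a noise predictor for active cancellation.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n = 5000
      noise = np.sin(0.05 * np.arange(n)) + 0.3 * rng.standard_normal(n)   # primary noise

      TAPS = 8                                              # past samples fed to the net
      X = np.stack([noise[i:i + TAPS] for i in range(n - TAPS)])
      y = noise[TAPS:]                                      # sample to be cancelled

      net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
      net.fit(X[:4000], y[:4000])

      antinoise = -net.predict(X[4000:])                    # equal amplitude, opposite phase
      residual = y[4000:] + antinoise
      print("attenuation (dB):", 10 * np.log10(np.var(y[4000:]) / np.var(residual)))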

  17. On the approximation by single hidden layer feedforward neural networks with fixed weights

    OpenAIRE

    Guliyev, Namig J.; Ismailov, Vugar E.

    2017-01-01

    International audience; Feedforward neural networks have wide applicability in various disciplines of science due to their universal approximation property. Some authors have shown that single hidden layer feedforward neural networks (SLFNs) with fixed weights still possess the universal approximation property provided that approximated functions are univariate. But this phenomenon does not lay any restrictions on the number of neurons in the hidden layer. The more this number, the more the p...

  18. ISC feedforward control of gasoline engine. Adaptive system using neural network; Jidoshayo gasoline engine no ISC feedforward seigyo. Neural network wo mochiita tekioka

    Energy Technology Data Exchange (ETDEWEB)

    Kinugawa, N; Morita, S; Takiyama, T [Osaka City University, Osaka (Japan)

    1997-10-01

    For fuel economy and a good driving feel, the idle speed must be kept at a constant low value. However, keeping the speed low carries a risk of engine stall when the engine torque is disturbed, for example by the alternator. In this paper, an adaptive feedforward idle-speed control system against electrical loads was investigated. The system was based on the reversed transfer functions of the controlled system, and a neural network was used to adapt the system for aging. This neural network was also used to create the feedforward table map. Good experimental results were obtained. 2 refs., 11 figs.

  19. Feed-Forward Neural Networks and Minimal Search Space Learning

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman

    2005-01-01

    Roč. 4, č. 12 (2005), s. 1867-1872 ISSN 1109-2750 R&D Projects: GA ČR GA201/05/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords: search space * feed-forward networks * genetic algorithms Subject RIV: BA - General Mathematics

  20. Classification of Urinary Calculi using Feed-Forward Neural Networks

    African Journals Online (AJOL)

    NJD

    Genetic algorithms were used for optimization of neural networks and for selection of the ... Urinary calculi, infrared spectroscopy, classification, neural networks, variable ..... note that the best accuracy is obtained for whewellite, weddellite.

  1. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    OpenAIRE

    Mehrshad Salmasi; Homayoun Mahdavi-Nasab

    2012-01-01

    Active noise control is based on the destructive interference between the primary noise and generated noise from the secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, performance of the neural networks is evaluated in active cancellation of sound noise. For this reason, feedforward and recurrent neural networks are designed and trained. After training, performance of the feedforward and recurrent networks in n...

  2. A novel approach to error function minimization for feedforward neural networks

    International Nuclear Information System (INIS)

    Sinkus, R.

    1995-01-01

    Feedforward neural networks with error backpropagation are widely applied to pattern recognition. One general problem encountered with this type of neural network is the uncertainty of whether the minimization procedure has converged to a global minimum of the cost function. To overcome this problem, a novel approach to minimizing the error function is presented. It makes it possible to monitor the approach to the global minimum and, as an outcome, several ambiguities related to the choice of free parameters of the minimization procedure are removed. (orig.)

  3. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    Science.gov (United States)

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles from quantum theory. By combining the quantum mechanism with the feed-forward neural network, we defined quantum hidden neurons and connected quantum weights, and used them as the fundamental information processing unit in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover search algorithm iteratively singles out the optimal parameter setting, which makes very efficient neural network learning possible. The quantum neurons and weights, together with the Grover-search-based learning, result in a novel and efficient neural network characterized by reduced network size, highly efficient training, and prospective future applications. Simulations were carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Precision requirements for single-layer feed-forward neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the effect of limited precision analog hardware for weight adaptation to be used in on-chip learning feedforward neural networks. Easy-to-read equations and simple worst-case estimations for the maximum tolerable imprecision are presented. As an

  5. Variable synaptic strengths controls the firing rate distribution in feedforward neural networks.

    Science.gov (United States)

    Ly, Cheng; Marsat, Gary

    2018-02-01

    Heterogeneity of firing rate statistics is known to have severe consequences on neural coding. Recent experimental recordings in weakly electric fish indicate that the distribution-width of superficial pyramidal cell firing rates (trial- and time-averaged) in the electrosensory lateral line lobe (ELL) depends on the stimulus, and also that network inputs can mediate changes in the firing rate distribution across the population. We previously developed theoretical methods to understand how two attributes (synaptic and intrinsic heterogeneity) interact and alter the firing rate distribution in a population of integrate-and-fire neurons with random recurrent coupling. Inspired by our experimental data, we extend these theoretical results to a delayed feedforward spiking network that qualitatively captures the changes of firing rate heterogeneity observed in in vivo recordings. We demonstrate how heterogeneous neural attributes alter firing rate heterogeneity, accounting for the effect with various sensory stimuli. The model predicts how the strength of the effective network connectivity is related to intrinsic heterogeneity in such delayed feedforward networks: the strength of the feedforward input is positively correlated with excitability (threshold value for spiking) when firing rate heterogeneity is low and is negatively correlated with excitability when firing rate heterogeneity is high. We also show how our theory can be used to predict effective neural architecture. We demonstrate that neural attributes do not interact in a simple manner but rather in a complex stimulus-dependent fashion to control neural heterogeneity and discuss how it can ultimately shape population codes.

  6. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.

  7. Modeling of quasistatic magnetic hysteresis with feed-forward neural networks

    International Nuclear Information System (INIS)

    Makaveev, Dimitre; Dupre, Luc; De Wulf, Marc; Melkebeek, Jan

    2001-01-01

    A modeling technique for rate-independent (quasistatic) scalar magnetic hysteresis is presented, using neural networks. Based on the theory of dynamic systems and the wiping-out and congruency properties of the classical scalar Preisach hysteresis model, the choice of a feed-forward neural network model is motivated. The neural network input parameters at each time step are the corresponding magnetic field strength and memory state, thereby assuring accurate prediction of the change of magnetic induction. For rate-independent hysteresis, the current memory state can be determined by the last extreme magnetic field strength and induction values, kept in memory. The choice of a network training set is motivated and the performance of the network is illustrated for a test set not used during training. Very accurate prediction of both major and minor hysteresis loops is observed, proving that the neural network technique is suitable for hysteresis modeling. © 2001 American Institute of Physics

  8. Neural network based approach for tuning of SNS feedback and feedforward controllers

    International Nuclear Information System (INIS)

    Kwon, Sung-Il; Prokop, Mark S.; Regan, Amy H.

    2002-01-01

    The primary controllers in the SNS low level RF system are proportional-integral (PI) feedback controllers. To obtain the best performance of the linac control systems, approximately 91 individual PI controller gains should be optimally tuned. Tuning is time consuming and requires automation. In this paper, a neural network is used for the controller gain tuning. A neural network can approximate any continuous mapping through learning. In a sense, the cavity loop PI controller is a continuous mapping of the tracking error and its one-sample-delay inputs to the controller output. Also, the monotonic cavity output with respect to its input makes knowing the detailed parameters of the cavity unnecessary. Hence the PI controller is a prime candidate for approximation through a neural network. Using mean square error minimization to train the neural network along with a continuous mapping of appropriate weights, optimally tuned PI controller gains can be determined. The same neural network approximation property is also applied to enhance the adaptive feedforward controller performance. This is done by adjusting the feedforward controller gains, forgetting factor, and learning ratio. Lastly, the automation of the tuning procedure (data measurement, neural network training, tuning, and loading the controller gains to the DSP) is addressed.

  9. Breast cancer detection via Hu moment invariant and feedforward neural network

    Science.gov (United States)

    Zhang, Xiaowei; Yang, Jiquan; Nguyen, Elijah

    2018-04-01

    One in eight women develops breast cancer during her lifetime. This study used Hu moment invariants and a feedforward neural network to diagnose breast cancer. With the help of K-fold cross validation, we can test the out-of-sample accuracy of our method. Finally, we found that our method can improve the accuracy of detecting breast cancer and reduce the difficulty of diagnosis.
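
    A minimal sketch of the pipeline (not the study's data or network): Hu moment invariants extracted with OpenCV serve as inputs to a small feedforward classifier. The synthetic binary shapes standing in for breast-image regions of interest, the single train/test split in place of K-fold cross-validation, and the network size are assumptions.

      # Sketch only: Hu moment invariants as features for a feedforward classifier.
      import numpy as np
      import cv2
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)

      def hu_features(img):
          m = cv2.moments(img.astype(np.float32))
          hu = cv2.HuMoments(m).ravel()
          return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # usual log scaling

      def synthetic_sample(label):
          img = np.zeros((64, 64), dtype=np.uint8)
          if label:                                             # irregular blob
              pts = rng.integers(12, 52, size=(8, 2)).astype(np.int32)
              cv2.fillPoly(img, [pts], 255)
          else:                                                 # smooth disc
              cv2.circle(img, (32, 32), int(rng.integers(8, 20)), 255, -1)
          return hu_features(img)

      X = np.array([synthetic_sample(i % 2) for i in range(400)])
      y = np.arange(400) % 2

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X[:300], y[:300])
      print("held-out accuracy:", clf.score(X[300:], y[300:]))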

  10. Boundedness and convergence of online gradient method with penalty for feedforward neural networks.

    Science.gov (United States)

    Zhang, Huisheng; Wu, Wei; Liu, Fei; Yao, Mingchen

    2009-06-01

    In this brief, we consider an online gradient method with penalty for training feedforward neural networks. Specifically, the penalty is a term proportional to the norm of the weights. Its roles in the method are to control the magnitude of the weights and to improve the generalization performance of the network. By proving that the weights are automatically bounded in network training with penalty, we simplify the conditions required in the literature for convergence of the online gradient method. A numerical example is given to support the theoretical analysis.
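
    A minimal sketch of the online update analysed above (not the brief's convergence proof): each training sample triggers one gradient step, and a penalty term proportional to the weights is added to the gradient, which is what keeps the weights bounded. The tiny tanh/sigmoid network, the synthetic sample stream, and the penalty coefficient are assumptions.

      # Sketch only: online (per-sample) gradient descent with a weight-norm penalty.
      import numpy as np

      rng = np.random.default_rng(0)
      W1 = 0.3 * rng.standard_normal((2, 6)); w2 = 0.3 * rng.standard_normal(6)
      lr, lam = 0.1, 1e-3                               # learning rate and penalty coefficient

      for step in range(5000):                          # samples arrive one at a time
          x = rng.uniform(-1, 1, size=2)
          t = float(x[0] * x[1] > 0)                    # target for this sample
          h = np.tanh(W1.T @ x)
          y = 1.0 / (1.0 + np.exp(-w2 @ h))
          delta = (y - t) * y * (1 - y)
          grad_w2 = delta * h
          grad_W1 = np.outer(x, delta * w2 * (1 - h ** 2))
          w2 -= lr * (grad_w2 + lam * w2)               # penalty keeps the weights bounded
          W1 -= lr * (grad_W1 + lam * W1)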

  11. Modeling of an industrial process of pleuromutilin fermentation using feed-forward neural networks

    Directory of Open Access Journals (Sweden)

    L. Khaouane

    2013-03-01

    Full Text Available This work investigates the use of artificial neural networks in modeling an industrial fermentation process of Pleuromutilin produced by Pleurotus mutilus in a fed-batch mode. Three feed-forward neural network models characterized by a similar structure (five neurons in the input layer, one hidden layer and one neuron in the output layer) are constructed and optimized with the aim of predicting the evolution of three main bioprocess variables: biomass, substrate and product. Results show a good fit between the predicted and experimental values for each model (the root mean squared errors were 0.4624%, 0.1234 g/L and 0.0016 mg/g, respectively). Furthermore, the comparison between the optimized models and the unstructured kinetic models in terms of simulation results shows that the neural network models gave more significant results. These results encourage further studies to integrate the mathematical formulae extracted from these models into an industrial control loop of the process.

  12. Synaptic convergence regulates synchronization-dependent spike transfer in feedforward neural networks.

    Science.gov (United States)

    Sailamul, Pachaya; Jang, Jaeson; Paik, Se-Bum

    2017-12-01

    Correlated neural activities such as synchronizations can significantly alter the characteristics of spike transfer between neural layers. However, it is not clear how this synchronization-dependent spike transfer can be affected by the structure of convergent feedforward wiring. To address this question, we implemented computer simulations of model neural networks: a source and a target layer connected with different types of convergent wiring rules. In the Gaussian-Gaussian (GG) model, both the connection probability and the strength are given as Gaussian distributions as a function of spatial distance. In the Uniform-Constant (UC) and Uniform-Exponential (UE) models, the connection probability density is a uniform constant within a certain range, but the connection strength is set as a constant value or an exponentially decaying function, respectively. Then we examined how the spike transfer function is modulated under these conditions, while static or synchronized input patterns were introduced to simulate different levels of feedforward spike synchronization. We observed that the synchronization-dependent modulation of the transfer function appeared noticeably different for each convergence condition. The modulation of the spike transfer function was largest in the UC model, and smallest in the UE model. Our analysis showed that this difference was induced by the different spike weight distributions that were generated from convergent synapses in each model. Our results suggest that the structure of the feedforward convergence is a crucial factor in correlation-dependent spike control and must therefore be considered when studying the mechanisms of information transfer in the brain.

  13. A Constrained Multi-Objective Learning Algorithm for Feed-Forward Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    M. Njah

    2017-06-01

    Full Text Available This paper proposes a new approach to address the optimal design of a Feed-forward Neural Network (FNN) based classifier. The originality of the proposed methodology, called CMOA, lies in the use of a new constraint handling technique based on a self-adaptive penalty procedure in order to direct the entire search effort towards finding only Pareto optimal solutions that are acceptable. Neurons and connections of the FNN classifier are dynamically built during the learning process. The approach includes differential evolution to create new individuals and then keeps only the non-dominated ones as the basis for the next generation. The designed FNN classifier is applied to six binary classification benchmark problems, obtained from the UCI repository, and the results indicate the advantages of the proposed approach over other existing multi-objective evolutionary neural network classifiers reported recently in the literature.

  14. Force control of a magnetorheological damper using an elementary hysteresis model-based feedforward neural network

    International Nuclear Information System (INIS)

    Ekkachai, Kittipong; Nilkhamhang, Itthisek; Tungpimolrut, Kanokvate

    2013-01-01

    An inverse controller is proposed for a magnetorheological (MR) damper that consists of a hysteresis model and a voltage controller. The force characteristics of the MR damper caused by excitation signals are represented by a feedforward neural network (FNN) with an elementary hysteresis model (EHM). The voltage controller is constructed using another FNN to calculate a suitable input signal that will allow the MR damper to produce the desired damping force. The performance of the proposed EHM-based FNN controller is experimentally compared to existing control methodologies, such as clipped-optimal control, signum function control, conventional FNN, and recurrent neural network with displacement or velocity inputs. The results show that the proposed controller, which does not require force feedback to implement, provides excellent accuracy, fast response time, and lower energy consumption. (paper)

  15. Single-Iteration Learning Algorithm for Feed-Forward Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Cogswell, R.; Protopopescu, V.

    1999-07-31

    A new methodology for neural learning is presented, whereby only a single iteration is required to train a feed-forward network with near-optimal results. To this aim, a virtual input layer is added to the multi-layer architecture. The virtual input layer is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This algorithm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network.
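
    A minimal sketch of the underlying separability idea (not the authors' alternating-direction SVD procedure or virtual input layer): with the nonlinear hidden mapping fixed, the remaining linear output weights can be obtained in a single SVD-based least-squares solve. The fixed random hidden layer, the toy regression target, and the network width are assumptions.

      # Sketch only: solve the linear output weights in one shot via least squares.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(500, 3))
      t = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]            # toy regression target

      W1 = rng.standard_normal((3, 40))                  # fixed nonlinear (hidden) layer
      b1 = rng.standard_normal(40)
      H = np.tanh(X @ W1 + b1)                           # hidden activations
      H = np.hstack([H, np.ones((len(X), 1))])           # bias column for the output layer

      w_out, *_ = np.linalg.lstsq(H, t, rcond=None)      # single SVD-based solve
      print("training RMSE:", np.sqrt(np.mean((H @ w_out - t) ** 2)))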

  16. Neural network feedforward control of a closed-circuit wind tunnel

    Science.gov (United States)

    Sutcliffe, Peter

    Accurate control of wind-tunnel test conditions can be dramatically enhanced using feedforward control architectures which allow operating conditions to be maintained at a desired setpoint through the use of mathematical models as the primary source of prediction. However, as the desired accuracy of the feedforward prediction increases, the model complexity also increases, so that an ever increasing computational load is incurred. This drawback can be avoided by employing a neural network that is trained offline using the output of a high fidelity wind-tunnel mathematical model, so that the neural network can rapidly reproduce the predictions of the model with a greatly reduced computational overhead. A novel neural network database generation method, developed through the use of fractional factorial arrays, was employed such that a neural network can accurately predict wind-tunnel parameters across a wide range of operating conditions whilst trained upon a highly efficient database. The subsequent network was incorporated into a Neural Network Model Predictive Control (NNMPC) framework to allow an optimised output schedule capable of providing accurate control of the wind-tunnel operating parameters. Facilitation of an optimised path through the solution space is achieved through the use of a chaos optimisation algorithm such that a more globally optimum solution is likely to be found with less computational expense than the gradient descent method. The parameters associated with the NNMPC such as the control horizon are determined through the use of a Taguchi methodology enabling the minimum number of experiments to be carried out to determine the optimal combination. The resultant NNMPC scheme was employed upon the Hessert Low Speed Wind Tunnel at the University of Notre Dame to control the test-section temperature such that it follows a pre-determined reference trajectory during changes in the test-section velocity. Experimental testing revealed that the

  17. Robust sequential learning of feedforward neural networks in the presence of heavy-tailed noise.

    Science.gov (United States)

    Vuković, Najdan; Miljković, Zoran

    2015-03-01

    Feedforward neural networks (FFNN) are among the most used neural networks for modeling of various nonlinear problems in engineering. In sequential and especially real-time processing, all neural network models fail when faced with outliers. Outliers are found across a wide range of engineering problems. Recent research results in the field have shown that to avoid overfitting or divergence of the model, a new approach is needed, especially if the FFNN is to run sequentially or in real time. To accommodate the limitations of FFNNs when the training data contain a certain number of outliers, this paper presents a new learning algorithm based on an improvement of the conventional extended Kalman filter (EKF). The extended Kalman filter robust to outliers (EKF-OR) is a probabilistic generative model in which the measurement noise covariance is not constant; the sequence of measurement noise covariances is modeled as a stochastic process over the set of symmetric positive-definite matrices, with an inverse Wishart prior. In each iteration, EKF-OR simultaneously estimates the noise statistics and the current best estimate of the FFNN parameters. The Bayesian framework enables one to mathematically derive expressions, while the analytical intractability of the Bayes' update step is solved by using a structured variational approximation. All mathematical expressions in the paper are derived using first principles. An extensive experimental study shows that an FFNN trained with the developed learning algorithm achieves low prediction error and good generalization quality regardless of the presence of outliers in the training data. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. A Novel Memristive Multilayer Feedforward Small-World Neural Network with Its Applications in PID Control

    Directory of Open Access Journals (Sweden)

    Zhekang Dong

    2014-01-01

    Full Text Available In this paper, we present an implementation scheme for a memristor-based multilayer feedforward small-world neural network (MFSNN), motivated by the lack of hardware realizations of the MFSNN owing to the need for a large number of electronic neurons and synapses. More specifically, a mathematical closed-form charge-governed memristor model is presented with derivation procedures, and the corresponding Simulink model is presented, which is an essential block for realizing the memristive synapse and the activation function in electronic neurons. Furthermore, we investigate a more intelligent memristive PID controller by incorporating the proposed MFSNN into intelligent PID control, based on the advantages of the memristive MFSNN in computation speed and accuracy. Finally, numerical simulations have demonstrated the effectiveness of the proposed scheme.

  19. A novel memristive multilayer feedforward small-world neural network with its applications in PID control.

    Science.gov (United States)

    Dong, Zhekang; Duan, Shukai; Hu, Xiaofang; Wang, Lidan; Li, Hai

    2014-01-01

    In this paper, we present an implementation scheme for a memristor-based multilayer feedforward small-world neural network (MFSNN), motivated by the lack of hardware realizations of the MFSNN owing to the need for a large number of electronic neurons and synapses. More specifically, a mathematical closed-form charge-governed memristor model is presented with derivation procedures, and the corresponding Simulink model is presented, which is an essential block for realizing the memristive synapse and the activation function in electronic neurons. Furthermore, we investigate a more intelligent memristive PID controller by incorporating the proposed MFSNN into intelligent PID control, based on the advantages of the memristive MFSNN in computation speed and accuracy. Finally, numerical simulations have demonstrated the effectiveness of the proposed scheme.

  20. Development of Sorting System for Fishes by Feed-forward Neural Networks Using Rotation Invariant Features

    Science.gov (United States)

    Shiraishi, Yuhki; Takeda, Fumiaki

    In this research, we have developed a sorting system for fish, which comprises a conveyance part, an image-capturing part, and a sorting part. In the conveyance part, we have developed an independent conveyance system in order to separate one fish from an intertwined group of fish. After the image of the separated fish is captured in the capturing part, a rotation-invariant feature is extracted using the two-dimensional fast Fourier transform, namely the mean value of the power spectrum over points at the same distance from the origin in the spectral domain. The fish are then classified by three-layered feed-forward neural networks. The experimental results show that the developed system classifies three kinds of fish captured at various angles with a classification ratio of 98.95% for 1044 captured images of five fish. Further experimental results show a classification ratio of 90.7% for 300 fish using the 10-fold cross-validation method.
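
    A minimal sketch of the rotation-invariant feature described above (not the developed sorting system): the 2-D FFT power spectrum is averaged over rings of equal distance from the origin, so rotating the input image leaves the feature vector essentially unchanged. The synthetic test image and the number of radial bins are assumptions.

      # Sketch only: radial mean of the 2-D FFT power spectrum as a rotation-invariant feature.
      import numpy as np

      def radial_power_spectrum(image, n_bins=32):
          spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
          h, w = spectrum.shape
          yy, xx = np.indices((h, w))
          r = np.hypot(yy - h / 2, xx - w / 2)               # distance from the origin
          bins = np.linspace(0, r.max(), n_bins + 1)
          idx = np.digitize(r.ravel(), bins) - 1
          return np.array([spectrum.ravel()[idx == k].mean() for k in range(n_bins)])

      image = np.random.default_rng(0).random((128, 128))    # stand-in for a captured image
      feature = radial_power_spectrum(image)                 # fed to the feed-forward classifier
      print(feature.shape)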

  1. PREDICTIVE CONTROL OF A BATCH POLYMERIZATION SYSTEM USING A FEEDFORWARD NEURAL NETWORK WITH ONLINE ADAPTATION BY GENETIC ALGORITHM

    OpenAIRE

    Cancelier, A.; Claumann, C. A.; Bolzan, A.; Machado, R. A. F.

    2016-01-01

    Abstract This study used a predictive controller based on an empirical nonlinear model comprising a three-layer feedforward neural network for temperature control of the suspension polymerization process. In addition to the offline training technique, an algorithm was also analyzed for online adaptation of its parameters. For the offline training, the network was statically trained and the genetic algorithm technique was used in combination with the least squares method. For online training, ...

  2. Hybrid feedback feedforward: An efficient design of adaptive neural network control.

    Science.gov (United States)

    Pan, Yongping; Liu, Yiqi; Xu, Bin; Yu, Haoyong

    2016-04-01

    This paper presents an efficient hybrid feedback feedforward (HFF) adaptive approximation-based control (AAC) strategy for a class of uncertain Euler-Lagrange systems. The control structure includes a proportional-derivative (PD) control term in the feedback loop and a radial-basis-function (RBF) neural network (NN) in the feedforward loop, which mimics the human motor learning control mechanism. In the presence of discontinuous friction, a sigmoid-jump-function NN is incorporated to improve control performance. The major difference of the proposed HFF-AAC design from the traditional feedback AAC (FB-AAC) design is that only desired outputs, rather than both tracking errors and desired outputs, are applied as RBF-NN inputs. Yet, such a slight modification leads to several attractive properties of HFF-AAC, including the convenient choice of an approximation domain, the decrease of the number of RBF-NN inputs, and semiglobal practical asymptotic stability dominated by control gains. Compared with previous HFF-AAC approaches, the proposed approach possesses the following two distinctive features: (i) all above attractive properties are achieved by a much simpler control scheme; (ii) the bounds of plant uncertainties are not required to be known. Consequently, the proposed approach guarantees a minimum configuration of the control structure and a minimum requirement of plant knowledge for the AAC design, which leads to a sharp decrease of implementation cost in terms of hardware selection, algorithm realization and system debugging. Simulation results have demonstrated that the proposed HFF-AAC can perform as well as or even better than the traditional FB-AAC under much simpler control synthesis and much lower computational cost. Copyright © 2015 Elsevier Ltd. All rights reserved.
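
    A minimal sketch of the hybrid structure (not the paper's Euler-Lagrange plant or its Lyapunov-based adaptation law): a PD term acts on the tracking error in the feedback loop, while an RBF network whose input is the desired trajectory, not the plant state, supplies the feedforward term. The toy mass-spring-damper plant, the gains, and the simple gradient-style weight adaptation are assumptions.

      # Sketch only: PD feedback plus an adaptive RBF feedforward term driven by the
      # desired trajectory.
      import numpy as np

      dt, steps = 0.01, 4000
      Kp, Kd, gamma = 20.0, 5.0, 2.0                    # PD gains and adaptation rate (assumed)
      centers = np.linspace(-1.2, 1.2, 11)              # RBF centres over the reference range
      W = np.zeros(len(centers))                        # feedforward RBF output weights

      def rbf(qd):                                      # NN input is the desired output q_d
          return np.exp(-((qd - centers) ** 2) / (2 * 0.3 ** 2))

      q, dq = 0.0, 0.0                                  # toy plant: m*ddq + c*dq + k*q = u
      m, c, k = 1.0, 2.0, 4.0
      for i in range(steps):
          t = i * dt
          qd, dqd = np.sin(t), np.cos(t)                # desired trajectory
          e, de = qd - q, dqd - dq
          phi = rbf(qd)
          u = Kp * e + Kd * de + W @ phi                # feedback PD + feedforward RBF-NN
          W += gamma * e * phi * dt                     # simple adaptation of the feedforward term
          ddq = (u - c * dq - k * q) / m
          dq += ddq * dt
          q += dq * dt

      print("final tracking error:", abs(np.sin(steps * dt) - q))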

  3. Control of a local neural network by feedforward and feedback inhibition

    NARCIS (Netherlands)

    Remme, M.W.H.; Wadman, W.J.

    2004-01-01

    The signal transfer of a neuronal network is shaped by the local interactions between the excitatory principal cells and the inhibitory interneurons. We investigated with a simple lumped model how feedforward and feedback inhibition influence the steady-state network signal transfer. We analyze how

  4. Controlling the chaotic discrete-Hénon system using a feedforward neural network with an adaptive learning rate

    OpenAIRE

    GÖKCE, Kürşad; UYAROĞLU, Yılmaz

    2013-01-01

    This paper proposes a feedforward neural network-based control scheme to control the chaotic trajectories of a discrete-Hénon map in order to stay within an acceptable distance from the stable fixed point. An adaptive learning back propagation algorithm with online training is employed to improve the effectiveness of the proposed method. The simulation study carried out on the discrete-Hénon system verifies the validity of the proposed control system.

  5. An H(∞) control approach to robust learning of feedforward neural networks.

    Science.gov (United States)

    Jing, Xingjian

    2011-09-01

    A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially for the case where the output data change rapidly with respect to the input or are corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Selection of W-pair-production in DELPHI with feed-forward neural networks

    International Nuclear Information System (INIS)

    Becks, K.-H.; Buschmann, P.; Drees, J.; Mueller, U.; Wahlen, H.

    2001-01-01

    Since 1998 feed-forward networks have been applied for the separation of hadronic WW-decays from background processes measured by the DELPHI collaboration at different center-of-mass energies of the Large Electron Positron collider at CERN. Prior to the publication of the 189 GeV results intensive studies of systematic effects and uncertainties were performed. The methods and results will be discussed and compared to standard selection procedures

  7. A feed-forward Hopfield neural network algorithm (FHNNA) with a colour satellite image for water quality mapping

    Science.gov (United States)

    Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar

    2016-06-01

    There are many techniques that have been proposed for the water quality problem, but remote sensing techniques have proven their success, especially when artificial neural networks are used as mathematical models with these techniques. The Hopfield neural network is one type of artificial neural network that is common, fast, simple, and efficient, but it runs into difficulty when it deals with images that have more than two colours, such as remote sensing images. This work has attempted to solve this problem by modifying the network so that it deals with colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was modified and used with a satellite colour image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based essentially on three modifications: using the HNN as a feed-forward network, considering the weights of bitplanes, and a non-self-architecture or zero diagonal of the weight matrix; in addition, it depends on validation data. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm is shown by the high correlation coefficient (R=0.979) and the low root mean square error (RMSE=4.301) between the two groups into which the validation data were divided: one used for the algorithm and the other used for validating the results. The comparison was with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with the remote sensing technique (THEOS). It is a new and useful application of the HNN, and a new model for water quality mapping with remote sensing techniques, addressing an important environmental problem.

  8. Accurate estimation of CO2 adsorption on activated carbon with multi-layer feed-forward neural network (MLFNN algorithm

    Directory of Open Access Journals (Sweden)

    Alireza Rostami

    2018-03-01

    Full Text Available Global warming due to the greenhouse effect has been considered a serious problem around the world for many years. Among the gases that cause the greenhouse effect, carbon dioxide is of particular concern because of the amounts entering the surrounding atmosphere. CO2 capture and separation, especially by adsorption, is therefore one of the most attractive approaches because of the low equipment cost, ease of operation, simplicity of design, and low energy consumption. In this study, experimental results are presented for the adsorption equilibria of carbon dioxide on activated carbon. The adsorption equilibrium data for carbon dioxide were predicted with two commonly used isotherm models and compared with a multi-layer feed-forward neural network (MLFNN) algorithm over a wide range of partial pressure. As a result, the ANN-based algorithm shows much better efficiency and accuracy than the Sips and Langmuir isotherms. In addition, the applicability of the Sips and Langmuir models is limited to isothermal conditions, whereas the ANN-based algorithm is not restricted to constant-temperature conditions. Consequently, the MLFNN algorithm proves to be a promising model for calculating the CO2 adsorption density on activated carbon. Keywords: Global warming, CO2 adsorption, Activated carbon, Multi-layer feed-forward neural network algorithm, Statistical quality measures
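
    A minimal sketch of the comparison (not the measured CO2/activated-carbon data): a Langmuir isotherm is fitted with nonlinear least squares and a small multi-layer feed-forward network is fitted to the same points, after which both are scored by RMSE. The synthetic isotherm data, the pressure range and units, and the network size are assumptions.

      # Sketch only: Langmuir isotherm fit versus a feed-forward network regression.
      import numpy as np
      from scipy.optimize import curve_fit
      from sklearn.neural_network import MLPRegressor

      def langmuir(P, qm, b):
          return qm * b * P / (1.0 + b * P)

      rng = np.random.default_rng(0)
      P = np.linspace(0.05, 10.0, 60)                                    # partial pressure (assumed units)
      q = langmuir(P, 4.0, 0.8) + 0.05 * rng.standard_normal(P.size)     # synthetic adsorption data

      (qm, b), _ = curve_fit(langmuir, P, q, p0=[1.0, 1.0])
      net = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
      net.fit(P.reshape(-1, 1), q)

      rmse = lambda pred: np.sqrt(np.mean((pred - q) ** 2))
      print("Langmuir RMSE:", rmse(langmuir(P, qm, b)))
      print("MLFNN RMSE:   ", rmse(net.predict(P.reshape(-1, 1))))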

  9. Spike-timing computation properties of a feed-forward neural network model

    Directory of Open Access Journals (Sweden)

    Drew Benjamin Sinha

    2014-01-01

    Full Text Available Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g. serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.

  10. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note provides an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  11. Implementation of a feed-forward artificial neural network in VHDL on FPGA

    NARCIS (Netherlands)

    Dondon, P.; Carvalho, J.; Gardere, R.; Lahalle, P.; Tsenov, G.; Mladenov, V.M.; Reljin, B.; Stankovic, S.

    2014-01-01

    Describing an Artificial Neural Network (ANN) using VHDL allows a further implementation of such a system on FPGA. Indeed, the principal point of using an FPGA for ANNs is flexibility, which gives it an advantage over other systems like ASICs, which are entirely dedicated to one unique architecture and

  12. PREDICTIVE CONTROL OF A BATCH POLYMERIZATION SYSTEM USING A FEEDFORWARD NEURAL NETWORK WITH ONLINE ADAPTATION BY GENETIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    A. Cancelier

    Full Text Available Abstract This study used a predictive controller based on an empirical nonlinear model comprising a three-layer feedforward neural network for temperature control of the suspension polymerization process. In addition to the offline training technique, an algorithm was also analyzed for online adaptation of its parameters. For the offline training, the network was statically trained and the genetic algorithm technique was used in combination with the least squares method. For online training, the network was trained on a recurring basis and only the technique of genetic algorithms was used. In this case, only the weights and bias of the output layer neuron were modified, starting from the parameters obtained from the offline training. From the experimental results obtained in a pilot plant, a good performance was observed for the proposed control system, with superior performance for the control algorithm with online adaptation of the model, particularly with respect to the presence of off-set for the case of the fixed parameters model.

  13. Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms.

    Science.gov (United States)

    Ferentinos, Konstantinos P

    2005-09-01

    Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on the evolutionary process of genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands in network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm that was used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.

  14. Application of a feedforward neural network in the search for kuroko deposits in the hokuroku district, Japan

    Science.gov (United States)

    Singer, D.A.; Kouda, R.

    1996-01-01

    A feedforward neural network with one hidden layer and five neurons was trained to recognize the distance to kuroko mineral deposits. Average amounts per hole of pyrite, sericite, and gypsum plus anhydrite as measured by X-rays in 69 drillholes were used to train the net. Drillholes near and between the Fukazawa, Furutobe, and Shakanai mines were used. The training data were selected carefully to represent well-explored areas where some confidence of the distance to ore was assured. A logarithmic transform was applied to remove the skewness of distance and each variable was scaled and centered by subtracting the median and dividing by the interquartile range. The learning algorithm of annealing plus conjugate gradients was used to minimise the mean squared error of the scaled distance to ore. The trained network then was applied to all of the 152 drillholes that had measured gypsum, sericite, and pyrite. A contour plot of the neural net predicted distance to ore shows fairly wide areas of 1 km or less to ore; each of the known deposit groups is within the 1 km contour. The high and low distances on the margins of the contoured distance plot are in part the result of boundary effects of the contouring algorithm. For example, the short distances to ore predicted west of the Shakanai (Hanaoka) deposits are in basement. However, the short distances to ore predicted northeast of Furutobe, just off the figure, coincide with the location of the Nurukawa kuroko deposit, and the Omaki deposit, south of the Shakanai-Hanaoka deposits, seems to be on an extension of the short-distance-to-ore contour, but is beyond the 3 km limit from drillholes. Also of interest are some areas only a few kilometers from the Fukazawa and Shakanai groups of deposits that are estimated to be many kilometers from ore, apparently reflecting the network's recognition of the extreme local variability of the geology near some deposits. 1996 International Association for Mathematical Geology.
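
    The preprocessing described above (log transform of the distance, then centering by the median and scaling by the interquartile range) can be sketched as follows; the arrays are random placeholders rather than the Hokuroku drillhole data.

```python
import numpy as np

def robust_scale(x):
    """Center by the median and scale by the interquartile range, as in the study."""
    med = np.median(x)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    return (x - med) / iqr

# Hypothetical per-drillhole averages (not the paper's data).
pyrite      = np.random.rand(69) * 10
sericite    = np.random.rand(69) * 5
gypsum      = np.random.rand(69) * 3
distance_km = np.random.rand(69) * 5 + 0.1

X = np.column_stack([robust_scale(v) for v in (pyrite, sericite, gypsum)])
y = robust_scale(np.log(distance_km))   # log transform removes skewness before scaling
```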

  15. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and used in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets, respectively; p < 0.05 in both cases). The accuracy of the classifications that used a stratified sample in validation was smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
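
    The effect of the validation-set design on the estimated accuracy can be illustrated with a small back-of-the-envelope sketch; the per-class accuracies and class abundances below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical per-class accuracies and true class abundances.
class_accuracy  = np.array([0.95, 0.70, 0.60])   # abundant, moderate, rare class
class_abundance = np.array([0.80, 0.15, 0.05])   # proportions in the dataset

# A validation set drawn by simple random sampling reflects class abundance:
acc_random = np.sum(class_accuracy * class_abundance)

# A stratified (equal-size) validation set weights every class equally:
acc_stratified = class_accuracy.mean()

print(f"random sampling estimate:     {acc_random:.3f}")      # ~0.90
print(f"stratified sampling estimate: {acc_stratified:.3f}")  # 0.75
```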

  16. Real-Time Monitoring and Fault Diagnosis of a Low Power Hub Motor Using Feedforward Neural Network

    Directory of Open Access Journals (Sweden)

    Mehmet Şimşir

    2016-01-01

    Full Text Available Low power hub motors are widely used in electromechanical systems such as electrical bicycles and solar vehicles due to their robustness and compact structure. Systems driven by hub motors (in-wheel motors) encounter both previously defined and undefined faults under operation, which may inevitably lead to interruption of the electromechanical system operation and hence to economic losses. Therefore, in order to maintain system operation sustainability, the motor should be precisely monitored and its faults diagnosed by considering various significant motor parameters. In this study, an artificial feedforward backpropagation neural network approach is proposed for real-time monitoring and fault diagnosis of the hub motor by measuring seven main system parameters. To construct the necessary model, we trained it in the MATLAB environment on a data set consisting of 4160 samples, each with 7 parameters, until the best model was obtained. The results are encouraging and meaningful for the specific motor, and the developed model may be applicable to other types of hub motors. The resulting model of the whole system was embedded into an Arduino Due microcontroller card, and a mobile real-time monitoring and fault diagnosis system prototype for the hub motor was designed and manufactured.

  17. Feedforward neural network model estimating pollutant removal process within mesophilic upflow anaerobic sludge blanket bioreactor treating industrial starch processing wastewater.

    Science.gov (United States)

    Antwi, Philip; Li, Jianzheng; Meng, Jia; Deng, Kaiwen; Koblah Quashie, Frank; Li, Jiuling; Opoku Boadi, Portia

    2018-06-01

    In this study, a three-layered feedforward-backpropagation artificial neural network (BPANN) model was developed and employed to evaluate COD removal in an upflow anaerobic sludge blanket (UASB) reactor treating industrial starch processing wastewater. At the end of UASB operation, microbial community characterization revealed a satisfactory composition of microbes, whereas morphology depicted rod-shaped archaea. pH, COD, NH4+, VFA, OLR and biogas yield were selected by principal component analysis and used as input variables. Whilst the tangent sigmoid function (tansig) and the linear function (purelin) were assigned as activation functions at the hidden layer and output layer, respectively, the optimum BPANN architecture was achieved with the Levenberg-Marquardt algorithm (trainlm) after eleven training algorithms had been tested. Based on performance indicators such as the mean squared error, fractional variance, index of agreement and coefficient of determination (R2), the BPANN model demonstrated significant performance, with R2 reaching 87%. The study revealed that control and optimization of an anaerobic digestion process with a BPANN model is feasible. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. A dynamic feedforward neural network based on gaussian particle swarm optimization and its application for predictive control.

    Science.gov (United States)

    Han, Min; Fan, Jianchao; Wang, Jun

    2011-09-01

    A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays.

  19. Diagnosis of Alzheimer’s Disease Using Dual-Tree Complex Wavelet Transform, PCA, and Feed-Forward Neural Network

    Directory of Open Access Journals (Sweden)

    Debesh Jha

    2017-01-01

    Full Text Available Background. Error-free diagnosis of Alzheimer’s disease (AD) from healthy control (HC) patients at an early stage of the disease is a major concern, because information about the condition’s severity and developmental risks allows AD sufferers to take precautionary measures before irreversible brain damage occurs. Recently, there has been great interest in computer-aided diagnosis in magnetic resonance image (MRI) classification. However, distinguishing between Alzheimer’s brain data and healthy brain data in older adults (age > 60) is challenging because of their highly similar brain patterns and image intensities. Recently, cutting-edge feature extraction technologies have found extensive application in numerous fields, including medical image analysis. Here, we propose a dual-tree complex wavelet transform (DTCWT) for extracting features from an image. The dimensionality of the feature vector is reduced by using principal component analysis (PCA). The reduced feature vector is sent to a feed-forward neural network (FNN) to distinguish AD and HC from the input MR images. These proposed and implemented pipelines, which demonstrate improvements in classification output when compared to that of recent studies, resulted in high and reproducible accuracy rates of 90.06 ± 0.01% with a sensitivity of 92.00 ± 0.04%, a specificity of 87.78 ± 0.04%, and a precision of 89.6 ± 0.03% with 10-fold cross-validation.

  20. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

    Directory of Open Access Journals (Sweden)

    Trong-Ngoc Le

    2016-01-01

    Full Text Available Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the “ground truth.” Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.

  1. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approach the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  2. Natural Language Processing with Small Feed-Forward Networks

    OpenAIRE

    Botha, Jan A.; Pitler, Emily; Ma, Ji; Bakalov, Anton; Salcianu, Alex; Weiss, David; McDonald, Ryan; Petrov, Slav

    2017-01-01

    We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory...

  3. Cascading a systolic array and a feedforward neural network for navigation and obstacle avoidance using potential fields

    Science.gov (United States)

    Plumer, Edward S.

    1991-01-01

    A technique is developed for vehicle navigation and control in the presence of obstacles. A potential function was devised that peaks at the surface of obstacles and has its minimum at the proper vehicle destination. This function is computed using a systolic array and is guaranteed not to have local minima. A feedforward neural network is then used to control the steering of the vehicle using local potential field information. In this case, the vehicle is a trailer truck backing up. Previous work has demonstrated the capability of a neural network to control steering of such a trailer truck backing to a loading platform, but without obstacles. Now, the neural network was able to learn to navigate a trailer truck around obstacles while backing toward its destination. The network is trained in an obstacle-free space to follow the negative gradient of the field, after which the network is able to control and navigate the truck to its target destination in a space of obstacles which may be stationary or movable.

  4. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-01-01

    Full Text Available For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining the particle swarm optimization (PSO) algorithm and the gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it has slow convergence velocity and easily falls into local optima. So in this paper, the velocity vector and position vector of GSA are adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate to meet the online soft-sensor requirements of the real-time control in the flotation process.

  5. ACCPndn : adaptive congestion control protocol in named data networking by learning capacities using optimized time-lagged feedforward neural network

    OpenAIRE

    Karami, Amin

    2015-01-01

    Named Data Networking (NDN) is a promising network architecture being considered as a possible replacement for the current IP-based Internet infrastructure. However, NDN is subject to congestion when the number of data packets that reach one or various routers in a certain period of time is so high that their queues overflow. To address this problem many congestion control protocols have been proposed in the literature which, however, are highly sensitive to their control parameters ...

  6. Stochastic resonance in feedforward acupuncture networks

    Science.gov (United States)

    Qin, Ying-Mei; Wang, Jiang; Men, Cong; Deng, Bin; Wei, Xi-Le; Yu, Hai-Tao; Chan, Wai-Lok

    2014-10-01

    Effects of noises and some other network properties on the weak signal propagation are studied systematically in feedforward acupuncture networks (FFN) based on FitzHugh-Nagumo neuron model. It is found that noises with medium intensity can enhance signal propagation and this effect can be further increased by the feedforward network structure. Resonant properties in the noisy network can also be altered by several network parameters, such as heterogeneity, synapse features, and feedback connections. These results may also provide a novel potential explanation for the propagation of acupuncture signal.

  7. Simultaneous fitting of a potential-energy surface and its corresponding force fields using feedforward neural networks

    Science.gov (United States)

    Pukrittayakamee, A.; Malshe, M.; Hagan, M.; Raff, L. M.; Narulkar, R.; Bukkapatnum, S.; Komanduri, R.

    2009-04-01

    An improved neural network (NN) approach is presented for the simultaneous development of accurate potential-energy hypersurfaces and corresponding force fields that can be utilized to conduct ab initio molecular dynamics and Monte Carlo studies on gas-phase chemical reactions. The method is termed as combined function derivative approximation (CFDA). The novelty of the CFDA method lies in the fact that although the NN has only a single output neuron that represents potential energy, the network is trained in such a way that the derivatives of the NN output match the gradient of the potential-energy hypersurface. Accurate force fields can therefore be computed simply by differentiating the network. Both the computed energies and the gradients are then accurately interpolated using the NN. This approach is superior to having the gradients appear in the output layer of the NN because it greatly simplifies the required architecture of the network. The CFDA permits weighting of function fitting relative to gradient fitting. In every test that we have run on six different systems, CFDA training (without a validation set) has produced smaller out-of-sample testing error than early stopping (with a validation set) or Bayesian regularization (without a validation set). This indicates that CFDA training does a better job of preventing overfitting than the standard methods currently in use. The training data can be obtained using an empirical potential surface or any ab initio method. The accuracy and interpolation power of the method have been tested for the reaction dynamics of H+HBr using an analytical potential. The results show that the present NN training technique produces more accurate fits to both the potential-energy surface as well as the corresponding force fields than the previous methods. The fitting and interpolation accuracy is so high (rms error=1.2 cm-1) that trajectories computed on the NN potential exhibit point-by-point agreement with corresponding
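
    A minimal sketch of the combined function/derivative objective is given below for a single-hidden-layer network with one energy output; the weighting factor rho and the parameter shapes are illustrative assumptions, and the code only evaluates the loss rather than reproducing the paper's training procedure.

```python
import numpy as np

def nn_energy_and_gradient(x, W1, b1, w2, b2):
    """Single-output feedforward net E(x) and its analytic gradient dE/dx."""
    h = np.tanh(W1 @ x + b1)            # hidden layer, shape (H,)
    E = w2 @ h + b2                     # scalar potential energy
    dE_dx = W1.T @ (w2 * (1.0 - h**2))  # chain rule through tanh
    return E, dE_dx

def cfda_loss(batch_x, batch_E, batch_F, params, rho=0.5):
    """Weighted sum of energy-fit and gradient-fit errors, rho in [0, 1]."""
    W1, b1, w2, b2 = params
    err_E, err_F = 0.0, 0.0
    for x, E_ref, F_ref in zip(batch_x, batch_E, batch_F):
        E, g = nn_energy_and_gradient(x, W1, b1, w2, b2)
        err_E += (E - E_ref) ** 2
        err_F += np.sum((g - F_ref) ** 2)   # reference gradients of the surface
    n = len(batch_x)
    return rho * err_E / n + (1.0 - rho) * err_F / n
```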

  8. Fruit Classification by Wavelet-Entropy and Feedforward Neural Network Trained by Fitness-Scaled Chaotic ABC and Biogeography-Based Optimization

    Directory of Open Access Journals (Sweden)

    Shuihua Wang

    2015-08-01

    Full Text Available Fruit classification is quite difficult because of the various categories and similar shapes and features of fruit. In this work, we proposed two novel machine-learning based classification methods. The developed system consists of wavelet entropy (WE), principal component analysis (PCA), and a feedforward neural network (FNN) trained by fitness-scaled chaotic artificial bee colony (FSCABC) and biogeography-based optimization (BBO), respectively. The K-fold stratified cross validation (SCV) was utilized for statistical analysis. The classification performance for 1653 fruit images from 18 categories showed that the proposed “WE + PCA + FSCABC-FNN” and “WE + PCA + BBO-FNN” methods achieve the same accuracy of 89.5%, higher than state-of-the-art approaches: “(CH + MP + US) + PCA + GA-FNN” of 84.8%, “(CH + MP + US) + PCA + PSO-FNN” of 87.9%, “(CH + MP + US) + PCA + ABC-FNN” of 85.4%, “(CH + MP + US) + PCA + kSVM” of 88.2%, and “(CH + MP + US) + PCA + FSCABC-FNN” of 89.1%. Besides, our methods used only 12 features, fewer than the number of features used by other methods. Therefore, the proposed methods are effective for fruit classification.

  9. A novel method to produce nonlinear empirical physical formulas for experimental nonlinear electro-optical responses of doped nematic liquid crystals: Feedforward neural network approach

    Energy Technology Data Exchange (ETDEWEB)

    Yildiz, Nihat, E-mail: nyildiz@cumhuriyet.edu.t [Cumhuriyet University, Faculty of Science and Literature, Department of Physics, 58140 Sivas (Turkey); San, Sait Eren; Okutan, Mustafa [Department of Physics, Gebze Institute of Technology, P.O. Box 141, Gebze 41400, Kocaeli (Turkey); Kaya, Hueseyin [Cumhuriyet University, Faculty of Science and Literature, Department of Physics, 58140 Sivas (Turkey)

    2010-04-15

    Among other significant obstacles, inherent nonlinearity in experimental physical response data poses severe difficulty in empirical physical formula (EPF) construction. In this paper, we applied a novel method (namely layered feedforward neural network (LFNN) approach) to produce explicit nonlinear EPFs for experimental nonlinear electro-optical responses of doped nematic liquid crystals (NLCs). Our motivation was that, as we showed in a previous theoretical work, an appropriate LFNN, due to its exceptional nonlinear function approximation capabilities, is highly relevant to EPF construction. Therefore, in this paper, we obtained excellently produced LFNN approximation functions as our desired EPFs for above-mentioned highly nonlinear response data of NLCs. In other words, by using suitable LFNNs, we successfully fitted the experimentally measured response and predicted the new (yet-to-be measured) response data. The experimental data (response versus input) were diffraction and dielectric properties versus bias voltage; and they were all taken from our previous experimental work. We conclude that in general, LFNN can be applied to construct various types of EPFs for the corresponding various nonlinear physical perturbation (thermal, electronic, molecular, electric, optical, etc.) data of doped NLCs.

  10. A novel method to produce nonlinear empirical physical formulas for experimental nonlinear electro-optical responses of doped nematic liquid crystals: Feedforward neural network approach

    International Nuclear Information System (INIS)

    Yildiz, Nihat; San, Sait Eren; Okutan, Mustafa; Kaya, Hueseyin

    2010-01-01

    Among other significant obstacles, inherent nonlinearity in experimental physical response data poses severe difficulty in empirical physical formula (EPF) construction. In this paper, we applied a novel method (namely layered feedforward neural network (LFNN) approach) to produce explicit nonlinear EPFs for experimental nonlinear electro-optical responses of doped nematic liquid crystals (NLCs). Our motivation was that, as we showed in a previous theoretical work, an appropriate LFNN, due to its exceptional nonlinear function approximation capabilities, is highly relevant to EPF construction. Therefore, in this paper, we obtained excellently produced LFNN approximation functions as our desired EPFs for above-mentioned highly nonlinear response data of NLCs. In other words, by using suitable LFNNs, we successfully fitted the experimentally measured response and predicted the new (yet-to-be measured) response data. The experimental data (response versus input) were diffraction and dielectric properties versus bias voltage; and they were all taken from our previous experimental work. We conclude that in general, LFNN can be applied to construct various types of EPFs for the corresponding various nonlinear physical perturbation (thermal, electronic, molecular, electric, optical, etc.) data of doped NLCs.

  11. Real-Time Analysis of Online Product Reviews by Means of Multi-Layer Feed-Forward Neural Networks

    Directory of Open Access Journals (Sweden)

    Reinhold Decker

    2014-11-01

    Full Text Available In the recent past, the quantitative analysis of online product reviews (OPRs) has become a popular manifestation of marketing intelligence activities focusing on products that are frequently subject to electronic word-of-mouth (eWOM). Typical elements of OPRs are overall star ratings, product attribute scores, recommendations, pros and cons, and free texts. The first three elements are of particular interest because they provide an aggregate view of reviewers’ opinions about the products of interest. However, the significance of individual product attributes in the overall evaluation process can vary in the course of time. Accordingly, ad hoc analyses of OPRs that have been downloaded at a certain point in time are of limited value for dynamic eWOM monitoring because of their snapshot character. On the other hand, opinion platforms can increase the meaningfulness of the OPRs posted there and, therewith, the usefulness of the platform as a whole, by directing eWOM activities to those product attributes that really matter at present. This paper therefore introduces a neural network-based approach that allows the dynamic tracking of the influence the posted scores of product attributes have on the overall star ratings of the concerning products. By using an elasticity measure, this approach supports the identification of those attributes that tend to lose or gain significance in the product evaluation process over time. The usability of this approach is demonstrated using real OPR data on digital cameras and hotels.
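
    One plausible reading of the elasticity measure is sketched below: the elasticity of the predicted overall rating with respect to a single attribute score, estimated by finite differences around a trained model. The toy predict function stands in for the actual multi-layer network, and the exact measure used in the paper may differ.

```python
import numpy as np

def attribute_elasticity(predict, x, k, eps=1e-3):
    """Elasticity of the predicted overall rating with respect to attribute k,
    e_k = (dy/dx_k) * (x_k / y), estimated with a central finite difference."""
    x_plus, x_minus = x.copy(), x.copy()
    x_plus[k] += eps
    x_minus[k] -= eps
    dy_dxk = (predict(x_plus) - predict(x_minus)) / (2 * eps)
    y = predict(x)
    return dy_dxk * x[k] / y

# Toy stand-in for a trained network mapping attribute scores to a star rating.
predict = lambda x: 1.0 + 0.5 * x[0] + 0.3 * np.tanh(x[1]) + 0.1 * x[2]
scores = np.array([4.0, 3.0, 5.0])
print([attribute_elasticity(predict, scores, k) for k in range(3)])
```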

  12. Input vector optimization of feed-forward neural networks for fitting ab initio potential-energy databases

    Science.gov (United States)

    Malshe, M.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Komanduri, R.

    2010-05-01

    The variation in the fitting accuracy of neural networks (NNs) when used to fit databases comprising potential energies obtained from ab initio electronic structure calculations is investigated as a function of the number and nature of the elements employed in the input vector to the NN. Ab initio databases for H2O2, HONO, Si5, and H2CCHBr were employed in the investigations. These systems were chosen so as to include four-, five-, and six-body systems containing first, second, third, and fourth row elements with a wide variety of chemical bonding and whose conformations cover a wide range of structures that occur under high-energy machining conditions and in chemical reactions involving cis-trans isomerizations, six different types of two-center bond ruptures, and two different three-center dissociation reactions. The ab initio databases for these systems were obtained using density functional theory/B3LYP, MP2, and MP4 methods with extended basis sets. A total of 31 input vectors were investigated. In each case, the elements of the input vector were chosen from interatomic distances, inverse powers of the interatomic distance, three-body angles, and dihedral angles. Both redundant and nonredundant input vectors were investigated. The results show that among all the input vectors investigated, the set employed in the Z-matrix specification of the molecular configurations in the electronic structure calculations gave the lowest NN fitting accuracy for both Si5 and vinyl bromide. The underlying reason for this result appears to be the discontinuity present in the dihedral angle for planar geometries. The use of trigonometric functions of the angles as input elements produced significantly improved fitting accuracy as this choice eliminates the discontinuity. The most accurate fitting was obtained when the elements of the input vector were taken to have the form Rij^(-n), where the Rij are the interatomic distances. When the Levenberg-Marquardt procedure was modified
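
    Building input elements of the form Rij^(-n) from Cartesian coordinates can be sketched as follows; the coordinates and the exponent n = 2 are placeholders, not the configurations or the best-performing choice from the study.

```python
import numpy as np
from itertools import combinations

def inverse_power_features(coords, n=2):
    """Input vector of the form R_ij^(-n) for all atom pairs,
    built from Cartesian coordinates (one row per atom)."""
    feats = []
    for i, j in combinations(range(len(coords)), 2):
        r_ij = np.linalg.norm(coords[i] - coords[j])
        feats.append(r_ij ** (-n))
    return np.array(feats)

# Hypothetical 5-atom configuration, coordinates in angstroms.
coords = np.random.rand(5, 3) * 3.0
print(inverse_power_features(coords, n=2))   # 10 pairwise features for 5 atoms
```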

  13. An Artificial Neural Network Controller for Intelligent Transportation Systems Applications

    Science.gov (United States)

    1996-01-01

    An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example for utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems appli...

  14. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  15. The mechanism of synchronization in feed-forward neuronal networks

    International Nuclear Information System (INIS)

    Goedeke, S; Diesmann, M

    2008-01-01

    Synchronization in feed-forward subnetworks of the brain has been proposed to explain the precisely timed spike patterns observed in experiments. While the attractor dynamics of these networks is now well understood, the underlying single neuron mechanisms remain unexplained. Previous attempts have captured the effects of the highly fluctuating membrane potential by relating spike intensity f(U) to the instantaneous voltage U generated by the input. This article shows that f is high during the rise and low during the decay of U(t), demonstrating that the dependence of f on U̇ (the time derivative of U), not refractoriness, is essential for synchronization. Moreover, the bifurcation scenario is quantitatively described by a simple f(U, U̇) relationship. These findings suggest f(U, U̇) as the relevant model class for the investigation of neural synchronization phenomena in a noisy environment

  16. Salient Feature Selection Using Feed-Forward Neural Networks and Signal-to-Noise Ratios with a Focus Toward Network Threat Detection and Classification

    Science.gov (United States)

    2014-03-27

    0.8.0. The virtual machine’s network adapter was set to internal network only to keep any outside traffic from interfering. A MySQL-based query...primary output of Fullstats is the ARFF file format, intended for use with the WEKA Java-based data mining software developed at the University of Waikato

  17. Neural Network Based Model of an Industrial Oil-Fired Boiler System ...

    African Journals Online (AJOL)

    A two-layer feed-forward neural network with Hyperbolic tangent sigmoid ... The neural network model when subjected to test, using the validation input data; ... Proportional Integral Derivative (PID) Controller is used to control the neural ...

  18. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Ideally, there are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle produced a set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  19. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  20. Application of adaptive boosting to EP-derived multilayer feed-forward neural networks (MLFN) to improve benign/malignant breast cancer classification

    Science.gov (United States)

    Land, Walker H., Jr.; Masters, Timothy D.; Lo, Joseph Y.; McKee, Dan

    2001-07-01

    A new neural network technology was developed for improving the benign/malignant diagnosis of breast cancer using mammogram findings. A new paradigm, Adaptive Boosting (AB), uses a markedly different theory for solving Computational Intelligence (CI) problems. AB, a new machine learning paradigm, focuses on finding weak learning algorithm(s) that initially need to provide slightly better than random performance (i.e., approximately 55%) when processing a mammogram training set. Then, by successive development of additional architectures (using the mammogram training set), the adaptive boosting process improves the performance of the basic Evolutionary Programming derived neural network architectures. The results of these several EP-derived hybrid architectures are then intelligently combined and tested using a similar validation mammogram data set. Optimization focused on improving specificity and positive predictive value at very high sensitivities, where an analysis of the performance of the hybrid would be most meaningful. Using the DUKE mammogram database of 500 biopsy-proven samples, on average this hybrid was able to achieve (under statistical 5-fold cross-validation) a specificity of 48.3% and a positive predictive value (PPV) of 51.8% while maintaining 100% sensitivity. At 97% sensitivity, a specificity of 56.6% and a PPV of 55.8% were obtained.
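
    For orientation, the sketch below shows the classic AdaBoost reweighting loop, with decision stumps standing in for the EP-derived network architectures used in the paper; labels are assumed to be coded as -1/+1 and the stopping rule is the textbook one, not necessarily the authors'.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, n_rounds=10):
    """Classic AdaBoost with weighted weak learners; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err >= 0.5:                       # weak learner must beat chance
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * pred)       # up-weight misclassified cases
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    # Final classifier: sign of the alpha-weighted vote of all weak learners.
    return lambda Xq: np.sign(sum(a * l.predict(Xq) for a, l in zip(alphas, learners)))
```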

  1. A single hidden layer feedforward network with only one neuron in the hidden layer can approximate any univariate function

    OpenAIRE

    Guliyev , Namig; Ismailov , Vugar

    2016-01-01

    The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of $\mathbb{R}$ by neural networks with only one neuron in the hid...

  2. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits

    Science.gov (United States)

    2018-01-01

    Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. PMID:29408930

  3. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2018-02-01

    Full Text Available Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures-recurrent connections, shared feed-forward projections, and shared gain fluctuations-on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.

  4. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    Science.gov (United States)

    Chen, C W; Chen, D Z

    2001-11-01

    Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation experimental results show that the network models trained by the novel methods are good at approximating the actual process. Finally, all these methods are discussed and compared with each other.
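
    The abstract does not spell out the exponential weight method, but the underlying idea can be sketched: if the weights of a feedforward network are parameterized as exponentials (hence strictly positive) and the activations are increasing, the network output is non-decreasing in every input. All parameter names and values below are illustrative.

```python
import numpy as np

def monotone_net(x, U, c, S, d):
    """Three-layer feedforward net whose output is non-decreasing in every input:
    weights are parameterized as exp(U) and exp(S), hence strictly positive,
    and the sigmoid activation is itself increasing."""
    W = np.exp(U)            # input-to-hidden weights, always > 0
    v = np.exp(S)            # hidden-to-output weights, always > 0
    h = 1.0 / (1.0 + np.exp(-(W @ x + c)))
    return v @ h + d

# Illustrative check of monotonicity along one input dimension.
rng = np.random.default_rng(0)
U, c, S, d = rng.normal(size=(4, 2)), rng.normal(size=4), rng.normal(size=4), 0.0
xs = np.linspace(0, 1, 50)
ys = [monotone_net(np.array([t, 0.3]), U, c, S, d) for t in xs]
assert all(b >= a for a, b in zip(ys, ys[1:]))
```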

  5. Extracting functionally feedforward networks from a population of spiking neurons.

    Science.gov (United States)

    Vincent, Kathleen; Tauskela, Joseph S; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV
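
    One plausible way to score the strength of functionally feedforward connections with a Schur decomposition, in the spirit of the approach described above, is sketched below; the exact statistic used in the paper may differ.

```python
import numpy as np
from scipy.linalg import schur

def ffc_strength(W):
    """Rough indicator of how feed-forward a directed functional network is:
    the share of the Schur form's energy lying strictly above the diagonal."""
    T, Z = schur(W)                    # W = Z T Z^T, T (quasi) upper triangular
    upper = np.triu(T, k=1)
    return np.linalg.norm(upper) / np.linalg.norm(T)

# A strictly feed-forward chain scores high; a symmetric (recurrent) graph scores low.
chain = np.triu(np.random.rand(6, 6), k=1)
recurrent = np.random.rand(6, 6)
recurrent = (recurrent + recurrent.T) / 2
print(ffc_strength(chain), ffc_strength(recurrent))
```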

  6. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction on the concept of neural networks, the concept is explained in the frame of neuro-biology, introducing the concept of the multi-layer perceptron, learning, and its use as a data classifier. The concept is then presented in a second part using in more detail the mathematical approach, focussing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  7. Regularization and Complexity Control in Feed-forward Networks

    OpenAIRE

    Bishop, C. M.

    1995-01-01

    In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.

  8. Neural network based model of an industrial oil-fired boiler system

    African Journals Online (AJOL)

    eobe

    technique. Neural Network Model, Regression, Mean Square Error, PID controller. ... during the training processes. An additio ... used to carry out simulation studies of the mode ... A two-layer feed-forward neural network with Matlab.

  9. Neural Network Predictions of the 4-Quadrant Wageningen Propeller Series

    National Research Council Canada - National Science Library

    Roddy, Robert F; Hess, David E; Faller, Will

    2006-01-01

    .... This report describes the development of feedforward neural network (FFNN) predictions of four-quadrant thrust and torque behavior for the Wageningen B-Screw Series of propellers and for two Wageningen ducted propeller series...

  10. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied. A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller
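
    The control structure described above can be sketched as follows; the PID gains, the inverse-model surrogate and the shaped reference value are all placeholders, and in the real scheme the feed-forward term would come from the trained inverse-model network.

```python
import numpy as np

def pid_step(e, state, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
    """Minimal incremental PID standing in for the existing feedback controller."""
    state["i"] += e * dt
    d = (e - state["e_prev"]) / dt
    state["e_prev"] = e
    return kp * e + ki * state["i"] + kd * d

def additive_ff_control(r_shaped, y, state, nn_inverse):
    """Additive feed-forward control: the inverse-model network contributes a pure
    feed-forward term on top of the unchanged feedback controller."""
    u_fb = pid_step(r_shaped - y, state)
    u_ff = nn_inverse(r_shaped)       # network trained as the inverse process model
    return u_fb + u_ff

# Toy stand-ins: a low-pass filtered ("shaped") reference and an inverse-model surrogate.
nn_inverse = lambda r: 0.8 * r        # hypothetical trained network
state = {"i": 0.0, "e_prev": 0.0}
print(additive_ff_control(r_shaped=1.0, y=0.2, state=state, nn_inverse=nn_inverse))
```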

  11. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W→qq̄, where W bosons are produced in pp̄ reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using 'intelligent' variables in instances when the amount of training instances is limited. (orig.)

  12. Feedforward neural control of toe walking in humans.

    Science.gov (United States)

    Lorentzen, Jakob; Willerslev-Olsen, Maria; Hüche Larsen, Helle; Svane, Christian; Forman, Christian; Frisk, Rasmus; Farmer, Simon Francis; Kersting, Uwe; Nielsen, Jens Bo

    2018-03-23

    Activation of ankle muscles at ground contact during toe walking is unaltered when sensory feedback is blocked or the ground is suddenly dropped. Responses in the soleus muscle to transcranial magnetic stimulation, but not peripheral nerve stimulation, are facilitated at ground contact during toe walking. We argue that toe walking is supported by feedforward control at ground contact. Toe walking requires careful control of the ankle muscles in order to absorb the impact of ground contact and maintain a stable position of the joint. The present study aimed to clarify the peripheral and central neural mechanisms involved. Fifteen healthy adults walked on a treadmill (3.0 km h-1). Tibialis anterior (TA) and soleus (Sol) EMG, knee and ankle joint angles, and gastrocnemius-soleus muscle fascicle lengths were recorded. Peripheral and central contributions to the EMG activity were assessed by afferent blockade, H-reflex testing, transcranial magnetic brain stimulation (TMS) and sudden unloading of the plantar flexor muscle-tendon complex. Sol EMG activity started prior to ground contact and remained high throughout stance. TA EMG activity, which is normally seen around ground contact during heel strike walking, was absent. Although stretch of the Achilles tendon-muscle complex was observed after ground contact, this was not associated with lengthening of the ankle plantar flexor muscle fascicles. Sol EMG around ground contact was not affected by ischaemic blockade of large-diameter sensory afferents, or the sudden removal of ground support shortly after toe contact. Soleus motor-evoked potentials elicited by TMS were facilitated immediately after ground contact, whereas Sol H-reflexes were not. These findings indicate that at the crucial time of ankle stabilization following ground contact, toe walking is governed by centrally mediated motor drive rather than sensory driven reflex mechanisms. These findings have implications for our understanding of the control of

  13. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  14. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron ...

  15. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different than those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural network.
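
    A minimal sketch of the max-plus computation that replaces the usual multiply-and-sum is given below; the input vector and weight matrix are illustrative.

```python
import numpy as np

def morphological_layer(x, W):
    """Morphological neuron layer: the usual multiply-and-sum is replaced by
    add-and-maximum, i.e. y_j = max_i (x_i + W[j, i])."""
    return np.max(x[None, :] + W, axis=1)

x = np.array([0.2, -1.0, 0.7])
W = np.array([[0.0, 0.5, -0.3],
              [1.0, 0.0,  0.2]])
print(morphological_layer(x, W))   # nonlinear before any thresholding
```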

  16. PEAK TRACKING WITH A NEURAL NETWORK FOR SPECTRAL RECOGNITION

    NARCIS (Netherlands)

    COENEGRACHT, PMJ; METTING, HJ; VANLOO, EM; SNOEIJER, GJ; DOORNBOS, DA

    1993-01-01

    A peak tracking method based on a simulated feed-forward neural network with back-propagation is presented. The network uses the normalized UV spectra and peak areas measured in one chromatogram for peak recognition. It suffices to train the network with only one set of spectra recorded in one

  17. Artificial neural networks for plasma spectroscopy analysis

    International Nuclear Information System (INIS)

    Morgan, W.L.; Larsen, J.T.; Goldstein, W.H.

    1992-01-01

    Artificial neural networks have been applied to a variety of signal processing and image recognition problems. Of the several common neural models the feed-forward, back-propagation network is well suited for the analysis of scientific laboratory data, which can be viewed as a pattern recognition problem. The authors present a discussion of the basic neural network concepts and illustrate its potential for analysis of experiments by applying it to the spectra of laser produced plasmas in order to obtain estimates of electron temperatures and densities. Although these are high temperature and density plasmas, the neural network technique may be of interest in the analysis of the low temperature and density plasmas characteristic of experiments and devices in gaseous electronics

  18. Tracking and vertex finding with drift chambers and neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.

    1991-09-01

    Finding tracks, track vertices and event vertices with neural networks from drift chamber signals is discussed. Simulated feed-forward neural networks have been trained with back-propagation to give track parameters using Monte Carlo simulated tracks in one case and actual experimental data in another. Effects on network performance of limited weight resolution, noise and drift chamber resolution are given. Possible implementations in hardware are discussed. 7 refs., 10 figs

  19. Recognition of decays of charged tracks with neural network techniques

    International Nuclear Information System (INIS)

    Stimpfl-Abele, G.

    1991-01-01

    We developed neural-network learning techniques for the recognition of decays of charged tracks using a feed-forward network with error back-propagation. Two completely different methods are described in detail and their efficiencies for several NN architectures are compared with conventional methods. Excellent results are obtained. (orig.)

  20. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
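
    The abstract gives no implementation details, so the sketch below only illustrates the general idea of a probabilistic spiking unit in software: the synaptic drive sets a per-step firing probability and the output is the empirical spike rate over a window. The logistic mapping, gain, and all values are assumptions for illustration, not the authors' hardware design.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_spiking_rate(inputs, weights, gain=1.0, steps=1000):
    """Software analogue of a stochastic spiking neuron: the weighted
    input drive sets a firing probability per time step, and the unit's
    output is the empirical firing rate of the Bernoulli spike train."""
    drive = float(np.dot(weights, inputs))
    p_fire = 1.0 / (1.0 + np.exp(-gain * drive))   # assumed logistic mapping
    spikes = rng.random(steps) < p_fire            # stochastic spike train
    return spikes.mean()

print(stochastic_spiking_rate(np.array([0.4, 0.9]), np.array([1.2, -0.3])))
```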

  1. Noise-enhanced categorization in a recurrently reconnected neural network

    International Nuclear Information System (INIS)

    Monterola, Christopher; Zapotocky, Martin

    2005-01-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails

  2. Noise-enhanced categorization in a recurrently reconnected neural network

    Science.gov (United States)

    Monterola, Christopher; Zapotocky, Martin

    2005-03-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.

  3. Flood routing modelling with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    R. Peters

    2006-01-01

    Full Text Available For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of streamcourse, an adequate resolution in space requires very small calculation time steps, which are some two orders of magnitude smaller than the input data resolution. This leads to quite high computation requirements, seriously restricting the application, especially when dealing with real-time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANN). First studies show the ability of adequately trained multilayer feedforward networks (MLFN) to reproduce the model performance.

  4. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that QPNN requires far fewer neuron resources than the classical feedforward neural network to obtain a good result. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  5. Neural Network Predictions of the 4-Quadrant Wageningen Propeller Series (CD-ROM)

    National Research Council Canada - National Science Library

    Roddy, Robert F; Hess, David E; Faller, Will

    2006-01-01

    .... This report describes the development of feedforward neural network (FFNN) predictions of four-quadrant thrust and torque behavior for the Wageningen B-Screw Series of propellers and for two Wageningen ducted propeller series...

  6. Effects of topologies on signal propagation in feedforward networks

    Science.gov (United States)

    Zhao, Jia; Qin, Ying-Mei; Che, Yan-Qiu

    2018-01-01

    We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of in-degrees and out-degrees in each layer and are given the same input signal. The propagation of firing patterns and firing rates is found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers of the three topologies obviously differs from one another. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.
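
    For readers unfamiliar with the neuron model used in this study, the following is a minimal Euler integration of a single FitzHugh-Nagumo unit with textbook parameter values; the layered network construction, degree distributions, and input signal of the paper are not reproduced here.

```python
import numpy as np

def fitzhugh_nagumo(I_ext=0.5, T=200.0, dt=0.01, a=0.7, b=0.8, tau=12.5):
    """Forward-Euler integration of one FitzHugh-Nagumo neuron driven by a
    constant current; returns the trace of the fast (voltage-like) variable."""
    n = int(T / dt)
    v, w = -1.0, 1.0
    trace = np.empty(n)
    for i in range(n):
        dv = v - v ** 3 / 3.0 - w + I_ext
        dw = (v + a - b * w) / tau
        v, w = v + dt * dv, w + dt * dw
        trace[i] = v
    return trace

trace = fitzhugh_nagumo()
print(round(trace.min(), 2), round(trace.max(), 2))   # range of the oscillation
```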

  7. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning, are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation; features and prototypes are the two extreme regimes of pattern recognition known in cognitive psychology.
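
    A minimal sketch of the activation family discussed above: rectified polynomials of degree n, which reduce to the familiar rectified linear unit for n = 1. The sample points are arbitrary and only illustrate how larger n sharpens the response.

```python
import numpy as np

def rectified_polynomial(x, n):
    """Rectified polynomial of degree n: max(0, x)**n.
    n = 1 is the rectified linear unit; larger n is the regime in which
    the learned representation shifts from feature-like toward prototype-like."""
    return np.maximum(0.0, x) ** n

x = np.linspace(-1.0, 1.0, 5)
for n in (1, 2, 3):
    print(n, rectified_polynomial(x, n))
```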

  8. Optimized Neural Network for Fault Diagnosis and Classification

    International Nuclear Information System (INIS)

    Elaraby, S.M.

    2005-01-01

    This paper presents a developed and implemented toolbox for optimizing the neural network structure for fault diagnosis and classification. An evolutionary algorithm based on a hierarchical genetic algorithm structure is used for optimization. The simplest feed-forward neural network architecture is selected. The developed toolbox has a friendly user interface. Multiple solutions are generated. The performance and applicability of the proposed toolbox are verified with benchmark data patterns and accident diagnosis of the Egyptian Second Research Reactor (ETRR-2)

  9. Inverse kinematics problem in robotics using neural networks

    Science.gov (United States)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.

  10. Fastest learning in small-world neural networks

    International Nuclear Information System (INIS)

    Simard, D.; Nadeau, L.; Kroeger, H.

    2005-01-01

    We investigate supervised learning in neural networks. We consider a multi-layered feed-forward network with back propagation. We find that the network of small-world connectivity reduces the learning error and learning time when compared to the networks of regular or random connectivity. Our study has potential applications in the domain of data-mining, image processing, speech recognition, and pattern recognition

  11. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  12. Adaptive Regularization of Neural Networks Using Conjugate Gradient

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique........ Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost...

  13. Bringing Interpretability and Visualization with Artificial Neural Networks

    Science.gov (United States)

    Gritsenko, Andrey

    2017-01-01

    Extreme Learning Machine (ELM) is a training algorithm for Single-Layer Feed-forward Neural Networks (SLFN). In theory, ELM differs from other training algorithms in the existence of an explicitly given solution, due to the immutability of the initialized weights. In practice, ELMs achieve performance similar to that of other state-of-the-art…
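
    A minimal ELM-style sketch, assuming the usual formulation in which random input weights stay fixed and the output weights follow in closed form from the pseudo-inverse of the hidden-layer output matrix. The sizes, tanh activation, and toy regression data are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50):
    """Extreme Learning Machine sketch: hidden-layer weights are random and
    immutable, so the output weights have an explicit least-squares solution."""
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T         # closed-form solution, no iteration
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    return np.tanh(X @ W_in + b) @ beta

X = rng.normal(size=(200, 3))
T = np.sin(X.sum(axis=1, keepdims=True))
W_in, b, beta = elm_train(X, T)
print("train MSE:", float(np.mean((elm_predict(X, W_in, b, beta) - T) ** 2)))
```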

  14. Neural networks and particle physics

    CERN Document Server

    Peterson, Carsten

    1993-01-01

    1. Introduction: Structure of the Central Nervous System, Generics; 2. Feed-forward networks, Perceptrons, Function approximators; 3. Self-organisation, Feature Maps; 4. Feed-back Networks, The Hopfield model, Optimization problems, Deformable templates, Graph bisection

  15. Neural dynamics of feedforward and feedback processing in figure-ground segregation.

    Science.gov (United States)

    Layton, Oliver W; Mingolla, Ennio; Yazdanbakhsh, Arash

    2014-01-01

    Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes is exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. The activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation.

  16. Neural Dynamics of Feedforward and Feedback Processing in Figure-Ground Segregation

    Directory of Open Access Journals (Sweden)

    Oliver W. Layton

    2014-09-01

    Full Text Available Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes is exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. The activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation.

  17. Neural dynamics of feedforward and feedback processing in figure-ground segregation

    Science.gov (United States)

    Layton, Oliver W.; Mingolla, Ennio; Yazdanbakhsh, Arash

    2014-01-01

    Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes is exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. The activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation. PMID:25346703

  18. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  19. Nonlinear adaptive inverse control via the unified model neural network

    Science.gov (United States)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control via a unified model neural network. In order to overcome nonsystematic design and long training time in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method requires less training time to obtain an inverse model. Finally, we apply this proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.

  20. Beneficial role of noise in artificial neural networks

    International Nuclear Information System (INIS)

    Monterola, Christopher; Saloma, Caesar; Zapotocky, Martin

    2008-01-01

    We demonstrate enhancement of neural network efficacy to recognize frequency encoded signals and/or to categorize spatial patterns of neural activity as a result of noise addition. For temporal information recovery, noise directly added to the receiving neurons allows instantaneous improvement of the signal-to-noise ratio [Monterola and Saloma, Phys. Rev. Lett. 2002]. For spatial patterns however, recurrence is necessary to extend and homogenize the operating range of a feed-forward neural network [Monterola and Zapotocky, Phys. Rev. E 2005]. Finally, using the size of the basin of attraction of the network's learned patterns (dynamical fixed points), a procedure for estimating the optimal noise is demonstrated

  1. Application of artificial neural networks in particle physics

    International Nuclear Information System (INIS)

    Kolanoski, H.

    1995-04-01

    The application of Artificial Neural Networks in Particle Physics is reviewed. Most common is the use of feed-forward nets for event classification and function approximation. This network type is best suited for a hardware implementation and special VLSI chips are available which are used in fast trigger processors. Also discussed are fully connected networks of the Hopfield type for pattern recognition in tracking detectors. (orig.)

  2. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  3. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  4. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  5. Mechanism for propagation of rate signals through a 10-layer feedforward neuronal network

    International Nuclear Information System (INIS)

    Jie, Li; Wan-Qing, Yu; Ding, Xu; Feng, Liu; Wei, Wang

    2009-01-01

    Using numerical simulations, we explore the mechanism for propagation of rate signals through a 10-layer feedforward network composed of Hodgkin–Huxley (HH) neurons with sparse connectivity. When white noise is afferent to the input layer, neuronal firing becomes progressively more synchronous in successive layers and synchrony is well developed in deeper layers owing to the feedforward connections between neighboring layers. The synchrony ensures the successful propagation of rate signals through the network when the synaptic conductance is weak. As the synaptic time constant τ_syn varies, coherence resonance is observed in the network activity due to the intrinsic property of HH neurons. This makes the output firing rate single-peaked as a function of τ_syn, suggesting that the signal propagation can be modulated by the synaptic time constant. These results are consistent with experimental results and advance our understanding of how information is processed in feedforward networks. (cross-disciplinary physics and related areas of science and technology)

  6. Pattern Recognition and Classification of Fatal Traffic Accidents in Israel A Neural Network Approach

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo; Gitelman, Victoria; Bekhor, Shlomo

    2011-01-01

    on 1,793 fatal traffic accidents occurred during the period between 2003 and 2006 and applies Kohonen and feed-forward back-propagation neural networks with the objective of extracting from the data typical patterns and relevant factors. Kohonen neural networks reveal five compelling accident patterns....... Feed-forward back-propagation neural networks indicate that sociodemographic characteristics of drivers and victims, accident location, and period of the day are extremely relevant factors. Accident patterns suggest that countermeasures are necessary for identified problems concerning mainly vulnerable...

  7. Control of beam halo-chaos using neural network self-adaptation method

    International Nuclear Information System (INIS)

    Fang Jinqing; Huang Guoxian; Luo Xiaoshu

    2004-11-01

    Taking advantage of neural network control methods for nonlinear complex systems, control of beam halo-chaos in the periodic focusing channels (networks) of high intensity accelerators is studied by a feed-forward back-propagation neural network self-adaptation method. The envelope radius of the high-intensity proton beam is brought to the matched beam radius by suitably selecting the control structure of the neural network and the linear feedback coefficient, and by adjusting the weight coefficients of the neural network. The beam halo-chaos is obviously suppressed and the shaking amplitude is greatly reduced after the neural network self-adaptation control is applied. (authors)

  8. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  9. Neural networks for sensor validation and plant monitoring

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Eryurek, E.; Mathai, G.

    1990-01-01

    Sensor and process monitoring in power plants require the estimation of one or more process variables. Neural network paradigms are suitable for establishing general nonlinear relationships among a set of plant variables. Multiple-input multiple-output autoassociative networks can follow changes in plant-wide behavior. The backpropagation algorithm has been applied for training feedforward networks. A new and enhanced algorithm for training neural networks (BPN) has been developed and implemented in a VAX workstation. Operational data from the Experimental Breeder Reactor-II (EBR-II) have been used to study the performance of BPN. Several results of application to the EBR-II are presented

  10. Character Recognition Using Genetically Trained Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the
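
    A compact sketch of the general scheme described above: a genetic algorithm searching the weights of a small three-layer feed-forward net with 64 bitmap inputs and 5 outputs. The population size, truncation selection, mutation scale, hidden-layer width, and toy data are assumptions for illustration and do not reproduce the Sandia toolset.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 64, 10, 5          # 8x8 bitmap inputs, 5 character classes

def unpack(genome):
    """Split a flat genome into the two weight matrices of the 3-layer FFNN."""
    w1 = genome[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = genome[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return w1, w2

def forward(genome, X):
    w1, w2 = unpack(genome)
    return np.tanh(np.tanh(X @ w1) @ w2)

def fitness(genome, X, Y):
    """Negative mean squared error: higher is better for the GA."""
    return -np.mean((forward(genome, X) - Y) ** 2)

def genetic_train(X, Y, pop_size=40, generations=200, sigma=0.1):
    genome_len = N_IN * N_HID + N_HID * N_OUT
    pop = rng.normal(scale=0.5, size=(pop_size, genome_len))
    for _ in range(generations):
        scores = np.array([fitness(g, X, Y) for g in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]                # keep the best half
        children = parents + rng.normal(scale=sigma, size=parents.shape)  # mutate
        pop = np.vstack([parents, children])
    return max(pop, key=lambda g: fitness(g, X, Y))

# Toy data: 20 random 8x8 "bitmaps" with one-hot (+1/-1) targets over 5 classes.
X = rng.integers(0, 2, size=(20, N_IN)).astype(float)
Y = np.eye(N_OUT)[rng.integers(0, N_OUT, size=20)] * 2 - 1
best = genetic_train(X, Y)
print("final fitness:", fitness(best, X, Y))
```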

  11. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuroscience. It draws its methods in large degree from statistical physics, and its potential applications lie mainly in computer science and engineering. Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives a historical presentation of the development of neural networks and of the interest in having them perform complex tasks. Then, an exhaustive overview of data management and network computation methods is given: supervised learning and the associative memory problem, the capacity of networks, Perceptron networks, functional link networks, Madaline (Multiple Adalines) networks, back-propagation networks, reduced coulomb energy (RCE) networks, unsupervised learning, and competitive learning and vector quantization. An example of application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors at the LEAR facility at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  12. Artificial neural network based approach to transmission lines protection

    International Nuclear Information System (INIS)

    Joorabian, M.

    1999-05-01

    The aim of this paper is to present an accurate fault detection technique for high speed distance protection using artificial neural networks. The feed-forward multi-layer neural network with the use of supervised learning and the common training rule of error back-propagation is chosen for this study. Information available locally at the relay point is passed to a neural network in order for an assessment of the fault location to be made. However, in practice there is a large amount of information available, and a feature extraction process is required to reduce the dimensionality of the pattern vectors, whilst retaining important information that distinguishes the fault point. The choice of features is critical to the performance of the neural network's learning and operation. A significant feature of this paper is that an artificial neural network has been designed and tested to enhance the precision of the adaptive capabilities for distance protection

  13. Modeling Broadband Microwave Structures by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Otevrel

    2004-06-01

    Full Text Available The paper describes the exploitation of feed-forward neural networks and recurrent neural networks for replacing full-wave numerical models of microwave structures in complex microwave design tools. Building a neural model, attention is turned to the modeling accuracy and to the efficiency of building a model. Dealing with the accuracy, we describe a method of increasing it by successively completing a training set. Neural models are mutually compared in order to highlight their advantages and disadvantages. As a reference model for comparisons, approximations based on standard cubic splines are used. Neural models are used to replace both the time-domain numeric models and the frequency-domain ones.

  14. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: 1) Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. 2) Amongst numerous training algorithms, only the Recursive Prediction Error Method using...... a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System......

  15. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    Full Text Available A short-term load forecasting model based on quantum Elman neural networks was constructed in this paper. Quantum computation and the Elman feedback mechanism were integrated into the quantum Elman neural networks. Quantum computation can effectively improve the approximation capability and the information processing ability of the neural networks. Quantum Elman neural networks have not only the feedforward connection but also the feedback connection. The feedback connection between the hidden nodes and the context nodes belongs to the state feedback in the internal system, which provides a specific dynamic memory capability. Phase space reconstruction theory is the theoretical basis of constructing the forecasting model. The training samples are formed by means of the K-nearest neighbor approach. Through the example simulation, the testing results show that the model based on quantum Elman neural networks is better than the model based on the quantum feedforward neural network, the model based on the conventional Elman neural network, and the model based on the conventional feedforward neural network. Thus the proposed model can effectively improve the prediction accuracy. The research in this paper lays a theoretical foundation for the practical engineering application of short-term load forecasting models based on quantum Elman neural networks.
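
    For reference, the conventional Elman feedback mechanism mentioned above can be sketched as follows: context units hold a copy of the previous hidden state and feed it back into the hidden layer at the next time step. The class name, layer sizes, random weights, and example series are illustrative assumptions; the quantum extension is not modelled here.

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanSketch:
    """Forward pass of a conventional Elman network: the hidden state is
    copied into context nodes, which feed back into the hidden layer."""
    def __init__(self, n_in, n_hid, n_out):
        self.W_in = rng.normal(scale=0.3, size=(n_in, n_hid))
        self.W_ctx = rng.normal(scale=0.3, size=(n_hid, n_hid))  # context -> hidden feedback
        self.W_out = rng.normal(scale=0.3, size=(n_hid, n_out))
        self.context = np.zeros(n_hid)

    def step(self, x):
        h = np.tanh(x @ self.W_in + self.context @ self.W_ctx)
        self.context = h                   # context := current hidden state
        return h @ self.W_out

net = ElmanSketch(n_in=3, n_hid=8, n_out=1)
series = rng.normal(size=(24, 3))          # e.g. lagged load values
print([round(net.step(x).item(), 3) for x in series[:4]])
```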

  16. Neural Networks For Electrohydrodynamic Effect Modelling

    Directory of Open Access Journals (Sweden)

    Wiesław Wajs

    2004-01-01

    Full Text Available This paper presents currently achieved results concerning methods of the electrohydrodynamic effect used in geophysics, simulated with feedforward networks trained with the backpropagation algorithm, radial basis function networks, and generalized regression networks.

  17. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  18. Feed-Forward Propagation of Temporal and Rate Information between Cortical Populations during Coherent Activation in Engineered In Vitro Networks.

    Science.gov (United States)

    DeMarse, Thomas B; Pan, Liangbin; Alagapan, Sankaraleengam; Brewer, Gregory J; Wheeler, Bruce C

    2016-01-01

    Transient propagation of information across neuronal assemblies is thought to underlie many cognitive processes. However, the nature of the neural code that is embedded within these transmissions remains uncertain. Much of our understanding of how information is transmitted among these assemblies has been derived from computational models. While these models have been instrumental in understanding these processes they often make simplifying assumptions about the biophysical properties of neurons that may influence the nature and properties expressed. To address this issue we created an in vitro analog of a feed-forward network composed of two small populations (also referred to as assemblies or layers) of living dissociated rat cortical neurons. The populations were separated by, and communicated through, a microelectromechanical systems (MEMS) device containing a strip of microscale tunnels. Delayed culturing of one population in the first layer followed by the second a few days later induced the unidirectional growth of axons through the microtunnels resulting in a primarily feed-forward communication between these two small neural populations. In this study we systematically manipulated the number of tunnels that connected each layer and hence the number of axons providing communication between those populations. We then assess the effect that reducing the number of tunnels has upon the properties of between-layer communication capacity and the fidelity of neural transmission among spike trains transmitted across and within layers. We show evidence based on Victor-Purpura's and van Rossum's spike train similarity metrics supporting the presence of both rate and temporal information embedded within these transmissions, whose fidelity increased during communication both between and within layers when the number of tunnels was increased. We also provide evidence reinforcing the role of synchronized activity upon transmission fidelity during the spontaneous synchronized

  19. Artificial Neural Networks to Detect Risk of Type 2 Diabetes | Baha ...

    African Journals Online (AJOL)

    A multilayer feedforward architecture with the backpropagation algorithm was designed using the Neural Network Toolbox of Matlab. The network was trained using batch mode backpropagation with gradient descent and momentum. The best-performing network identified during training had 2 hidden layers of 6 and 3 neurons, ...
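
    A rough sketch of the reported setup, assuming standard batch backpropagation with gradient descent and momentum, sigmoid units, and the two hidden layers of 6 and 3 neurons mentioned above. The number of input features, toy data, learning rate, and epoch count are placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, sizes=(6, 3), epochs=2000, lr=0.1, momentum=0.9):
    """Batch backpropagation with gradient descent and momentum for a
    feedforward net with two sigmoid hidden layers."""
    dims = [X.shape[1], *sizes, 1]
    Ws = [rng.normal(scale=0.5, size=(dims[i], dims[i + 1])) for i in range(3)]
    Vs = [np.zeros_like(W) for W in Ws]                    # momentum terms
    for _ in range(epochs):
        a = [X]
        for W in Ws:                                       # forward pass
            a.append(sig(a[-1] @ W))
        delta = (a[-1] - y) * a[-1] * (1 - a[-1])          # output-layer error
        for i in reversed(range(3)):                       # backward pass
            grad = a[i].T @ delta / len(X)
            if i > 0:
                delta = (delta @ Ws[i].T) * a[i] * (1 - a[i])
            Vs[i] = momentum * Vs[i] - lr * grad
            Ws[i] += Vs[i]
    return Ws

X = rng.normal(size=(100, 4))                              # placeholder risk-factor features
y = (X[:, :2].sum(axis=1, keepdims=True) > 0).astype(float)
Ws = train_mlp(X, y)
out = X
for W in Ws:
    out = sig(out @ W)
print("training MSE:", round(float(np.mean((out - y) ** 2)), 4))
```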

  20. ECO INVESTMENT PROJECT MANAGEMENT THROUGH TIME APPLYING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Tamara Gvozdenović

    2007-06-01

    Full Text Available The concept of project management expresses an indispensable approach to investment projects. Time is often the most important factor in these projects. The artificial neural network is a data processing paradigm inspired by the biological brain, and it is used in numerous different fields, among them project management. This research is oriented to the application of artificial neural networks in managing the time of an investment project. The artificial neural networks are used to define the optimistic, the most probable and the pessimistic time in the PERT method. The program package Matlab: Neural Network Toolbox is used in data simulation. The feed-forward back propagation network is chosen.
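
    Once the three activity times are available (predicted by the trained network in the approach above), the classical PERT estimate follows directly. The helper below and the example durations are illustrative only.

```python
def pert_estimate(optimistic, most_probable, pessimistic):
    """Classical PERT expected duration and variance for one activity."""
    expected = (optimistic + 4 * most_probable + pessimistic) / 6.0
    variance = ((pessimistic - optimistic) / 6.0) ** 2
    return expected, variance

# Hypothetical activity durations in days, e.g. as produced by the ANN.
print(pert_estimate(4.0, 6.0, 11.0))   # -> (6.5, ~1.36)
```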

  1. Neural network training by Kalman filtering in process system monitoring

    International Nuclear Information System (INIS)

    Ciftcioglu, Oe.

    1996-03-01

    A Kalman filtering approach for neural network training is described. Its extended form is used as an adaptive filter in a nonlinear environment in the form of a feedforward neural network. The Kalman filtering approach generally provides fast training and avoids excessive learning, which results in enhanced generalization capability. The network is used in a process monitoring application where the inputs are measurement signals. Since the measurement errors are also modelled in the Kalman filter, the approach yields accurate training, with the implication of an accurate neural network model representing the input and output relationships in the application. As the process of concern is a dynamic system, the input source of information to the neural network is time dependent, so the training algorithm presents an adaptive form for real-time operation for the monitoring task. (orig.)

  2. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.

  3. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
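
    A small sketch of the matrix-inversion view of one-dimensional deconvolution: build the convolution (Toeplitz) matrix and recover the input with the pseudo-inverse. The kernel and signal are arbitrary noiseless examples; the backpropagation and LMS variants compared in the report are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def convolution_matrix(kernel, n):
    """Toeplitz matrix H such that H @ x equals np.convolve(x, kernel)."""
    H = np.zeros((n + len(kernel) - 1, n))
    for i in range(n):
        H[i:i + len(kernel), i] = kernel
    return H

kernel = np.array([1.0, 0.6, 0.2])
x_true = rng.normal(size=8)
H = convolution_matrix(kernel, len(x_true))
y = H @ x_true                           # "blurred" observation

x_hat = np.linalg.pinv(H) @ y            # pseudo-inverse deconvolution
print(np.allclose(x_hat, x_true))        # exact recovery in the noiseless case
```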

  4. A study of reactor monitoring method with neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nabeshima, Kunihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The purpose of this study is to investigate the methodology of Nuclear Power Plant (NPP) monitoring with neural networks, which create the plant models by learning the past normal operation patterns. The concept of this method is to detect the symptoms of small anomalies by monitoring the deviations between the process signals measured from an actual plant and the corresponding output signals from the neural network model, which might not be equal if abnormal operational patterns are presented to the input of the neural network. An auto-associative network, which has the same outputs as inputs, can detect any kind of anomaly condition by using normal operation data only. The monitoring tests of the feedforward neural network with adaptive learning were performed using the PWR plant simulator, by which many kinds of anomaly conditions can be easily simulated. The adaptively trained feedforward network could follow the actual plant dynamics and the changes of plant condition, and then find most of the anomalies much earlier than the conventional alarm system during steady state and transient operations. Then the off-line and on-line test results during one year of operation at an actual NPP (PWR) showed that the neural network could detect several small anomalies which the operators or the conventional alarm system did not notice. Furthermore, the sensitivity analysis suggests that the plant models by neural networks are appropriate. Finally, the simulation results show that the recurrent neural network with feedback connections could successfully model the slow behavior of the reactor dynamics without adaptive learning. Therefore, the recurrent neural network with adaptive learning will be the best choice for the actual reactor monitoring system. (author)
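
    A minimal sketch of the monitoring idea: reconstruct the measured signals with an auto-associative model and flag samples whose reconstruction residual exceeds a threshold learned from normal operation. Purely for illustration, a principal-component projection stands in for the trained auto-associative neural network; the data, threshold percentile, and injected fault are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(model, signals):
    """Largest per-channel deviation between measured signals and their
    auto-associative reconstruction; large values flag possible anomalies."""
    return np.abs(signals - model(signals)).max(axis=1)

# Correlated "normal operation" data: 2 latent factors observed on 6 sensors.
latent = rng.normal(size=(500, 2))
normal = latent @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(500, 6))

# Stand-in for the trained auto-associative network: projection onto the
# two leading principal components of the normal data.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
P = Vt[:2].T @ Vt[:2]
model = lambda s: (s - mean) @ P + mean

threshold = np.percentile(residual(model, normal), 99)
test = normal[:5].copy()
test[0, 3] += 5.0                         # inject a small sensor anomaly
print(residual(model, test) > threshold)  # the first sample should be flagged
```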

  5. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling. .

  6. Rotation Invariance Neural Network

    OpenAIRE

    Li, Shiyuan

    2017-01-01

    Rotation invariance and translation invariance have great value in image recognition tasks. In this paper, we bring a new architecture in convolutional neural networks (CNN) named cyclic convolutional layer to achieve rotation invariance in 2-D symbol recognition. We can also get the position and orientation of the 2-D symbol by the network to achieve detection of multiple non-overlapping targets. Last but not least, this architecture can achieve one-shot learning in some cases using thos...

  7. Temporal neural networks and transient analysis of complex engineering systems

    Science.gov (United States)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
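
    A small sketch of the gamma memory structure referred to above, assuming the usual cascade of leaky taps in which the parameter mu trades memory depth against temporal resolution. The order, mu, and test signal are illustrative; the full LOGF neuron and its BTT training are not reproduced.

```python
import numpy as np

def gamma_memory(signal, order=3, mu=0.5):
    """Gamma memory taps: tap 0 is the raw input, and each deeper tap is a
    leaky, cascaded low-pass filter of the previous one."""
    taps = np.zeros((len(signal), order + 1))
    taps[0, 0] = signal[0]
    for t in range(1, len(signal)):
        taps[t, 0] = signal[t]
        for k in range(1, order + 1):
            taps[t, k] = (1 - mu) * taps[t - 1, k] + mu * taps[t - 1, k - 1]
    return taps

x = np.sin(np.linspace(0.0, 2 * np.pi, 50))
print(gamma_memory(x).shape)             # (50, 4): the input plus three memory taps
```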

  8. Neural Networks and Micromechanics

    Science.gov (United States)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  9. Prediction of pelvic organ prolapse using an artificial neural network.

    Science.gov (United States)

    Robinson, Christopher J; Swift, Steven; Johnson, Donna D; Almeida, Jonas S

    2008-08-01

    The objective of this investigation was to test the ability of a feedforward artificial neural network (ANN) to differentiate patients who have pelvic organ prolapse (POP) from those who retain good pelvic organ support. Following institutional review board approval, patients with POP (n = 87) and controls with good pelvic organ support (n = 368) were identified from the urogynecology research database. Historical and clinical information was extracted from the database. Data analysis included the training of a feedforward ANN, variable selection, and external validation of the model with an independent data set. Twenty variables were used. The median-performing ANN model used a median of 3 (quartile 1:3 to quartile 3:5) variables and achieved an area under the receiver operating characteristic curve of 0.90 (external, independent validation set). Ninety percent sensitivity and 83% specificity were obtained in the external validation by ANN classification. Feedforward ANN modeling is applicable to the identification and prediction of POP.

  10. Applying neural networks as software sensors for enzyme engineering.

    Science.gov (United States)

    Linko, S; Zhu, Y H; Linko, P

    1999-04-01

    The on-line control of enzyme-production processes is difficult, owing to the uncertainties typical of biological systems and to the lack of suitable on-line sensors for key process variables. For example, intelligent methods to predict the end point of fermentation could be of great economic value. Computer-assisted control based on artificial-neural-network models offers a novel solution in such situations. Well-trained feedforward-backpropagation neural networks can be used as software sensors in enzyme-process control; their performance can be affected by a number of factors.

  11. Control of 12-Cylinder Camless Engine with Neural Networks

    OpenAIRE

    Ashhab Moh’d Sami

    2017-01-01

    The 12-cylinder camless engine breathing process is modeled with artificial neural networks (ANNs). The inputs to the net are the intake valve lift (IVL) and intake valve closing timing (IVC), whereas the output of the net is the cylinder air charge (CAC). The ANN is trained with data collected from an engine simulation model which is based on thermodynamics principles and calibrated against real engine data. A method for adapting single-output feed-forward neural networks is proposed and appl...

  12. Dynamic Pricing in Electronic Commerce Using Neural Network

    Science.gov (United States)

    Ghose, Tapu Kumar; Tran, Thomas T.

    In this paper, we propose an approach where a feed-forward neural network is used for dynamically calculating a competitive price of a product in order to maximize sellers' revenue. In the approach we considered that, along with product price, other attributes such as product quality, delivery time, after-sales service and seller's reputation contribute to consumers' purchase decisions. We showed that once the sellers, using their limited prior knowledge, set an initial price of a product, our model adjusts the price automatically with the help of the neural network so that sellers' revenue is maximized.

  13. Modelling of word usage frequency dynamics using artificial neural network

    International Nuclear Information System (INIS)

    Maslennikova, Yu S; Bochkarev, V V; Voloskov, D S

    2014-01-01

    In this paper the method for modelling of word usage frequency time series is proposed. An artificial feedforward neural network was used to predict word usage frequencies. The neural network was trained using the maximum likelihood criterion. The Google Books Ngram corpus was used for the analysis. This database provides a large amount of data on frequency of specific word forms for 7 languages. Statistical modelling of word usage frequency time series allows finding optimal fitting and filtering algorithm for subsequent lexicographic analysis and verification of frequency trend models

  14. Learning and Generalisation in Neural Networks with Local Preprocessing

    OpenAIRE

    Kutsia, Merab

    2007-01-01

    We study learning and generalisation ability of a specific two-layer feed-forward neural network and compare its properties to that of a simple perceptron. The input patterns are mapped nonlinearly onto a hidden layer, much larger than the input layer, and this mapping is either fixed or may result from an unsupervised learning process. Such preprocessing of initially uncorrelated random patterns results in the correlated patterns in the hidden layer. The hidden-to-output mapping of the net...

  15. Neural networks for predicting breeding values and genetic gains

    Directory of Open Access Journals (Sweden)

    Gabi Nunes Silva

    2014-12-01

    Full Text Available Analysis using Artificial Neural Networks has been described as an approach in the decision-making process that, although incipient, has been reported as presenting high potential for use in animal and plant breeding. In this study, we introduce the procedure of using the expanded data set for training the network. We also proposed using statistical parameters to estimate the breeding value of genotypes in simulated scenarios, in addition to the mean phenotypic value, in a feed-forward back propagation multilayer perceptron network. After evaluating artificial neural network configurations, our results showed its superiority to estimates based on linear models, as well as its applicability in the genetic value prediction process. The results further indicated the good generalization performance of the neural network model in several additional validation experiments.

  16. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios
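    A minimal sketch (in Python with NumPy) of the hybrid feedforward/feedback structure described above, i.e. a recurrent multilayer perceptron whose hidden state is fed back at every time step; the dimensions, weights and input signal are illustrative assumptions rather than the network used in the study.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative dimensions: 2 plant inputs, 6 hidden units, 1 output.
        n_in, n_hid, n_out = 2, 6, 1
        W_in  = rng.normal(scale=0.3, size=(n_in, n_hid))    # feedforward path
        W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))   # recurrent feedback path
        W_out = rng.normal(scale=0.3, size=(n_hid, n_out))

        def simulate(u_sequence):
            # The hidden state h carries the local information feedback that lets
            # the model represent time-dependent behaviour; with W_rec = 0 it
            # collapses to an ordinary static feedforward network.
            h = np.zeros(n_hid)
            outputs = []
            for u in u_sequence:
                h = np.tanh(u @ W_in + h @ W_rec)    # feedforward + recurrent term
                outputs.append(h @ W_out)
            return np.array(outputs)

        u_seq = rng.uniform(-1, 1, size=(50, n_in))  # e.g. actuator signals over time
        y_pred = simulate(u_seq)                     # predicted plant response
        print(y_pred.shape)                          # (50, 1)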

  17. Neural networks for triggering

    International Nuclear Information System (INIS)

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab

  18. A robust neural network-based approach for microseismic event detection

    KAUST Repository

    Akram, Jubran; Ovcharenko, Oleg; Peter, Daniel

    2017-01-01

    We present an artificial neural network based approach for robust event detection from low S/N waveforms. We use a feed-forward network with a single hidden layer that is tuned on a training dataset and later applied on the entire example dataset

  19. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement for building a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  1. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation, FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network. Richard... ...Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  2. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network-based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
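    The pruning step can be pictured with the simplified sketch below, which zeroes input-to-hidden connections of negligible magnitude; the threshold criterion and the weight values are assumptions for illustration and may differ from the authors' actual procedure.

        import numpy as np

        # A trained minimal network: one hidden unit fed by six inputs (weight
        # values are made up for illustration).
        w_input_to_hidden = np.array([1.8, -0.03, 0.9, 0.002, -1.1, 0.05])

        def prune(weights, threshold=0.1):
            # Remove (zero out) input connections whose magnitude is negligible.
            mask = np.abs(weights) >= threshold
            return weights * mask, mask

        pruned, kept = prune(w_input_to_hidden)
        print("kept inputs:", np.where(kept)[0])     # indices of surviving connections
        print("pruned weights:", pruned)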

  3. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2016-01-01

    Full Text Available The presented paper compares forecast of drought indices based on two different models of artificial neural networks. The first model is based on feedforward multilayer perceptron, sANN, and the second one is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI and the standardized precipitation evaporation index (SPEI and were derived for the period of 1948–2002 on two US catchments. The meteorological and hydrological data were obtained from MOPEX experiment. The training of both neural network models was made by the adaptive version of differential evolution, JADE. The comparison of models was based on six model performance measures. The results of drought indices forecast, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons.
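    For illustration, the sketch below trains a tiny 1-5-1 network with a basic differential evolution loop (DE/rand/1/bin) instead of backpropagation. It is not JADE itself, which additionally adapts the control parameters and keeps an external archive; the data and hyperparameters are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy regression task; the 16 network weights are optimised by DE.
        X = np.linspace(-1, 1, 64).reshape(-1, 1)
        y = np.sin(np.pi * X)

        def loss(theta):
            W1 = theta[:5].reshape(1, 5);    b1 = theta[5:10]
            W2 = theta[10:15].reshape(5, 1); b2 = theta[15:16]
            pred = np.tanh(X @ W1 + b1) @ W2 + b2
            return float(np.mean((pred - y) ** 2))

        dim, pop_size, F, CR = 16, 30, 0.6, 0.9
        pop = rng.normal(size=(pop_size, dim))
        fitness = np.array([loss(p) for p in pop])

        for gen in range(300):                        # basic DE/rand/1/bin
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = a + F * (b - c)
                cross = rng.random(dim) < CR
                trial = np.where(cross, mutant, pop[i])
                f_trial = loss(trial)
                if f_trial < fitness[i]:              # greedy selection
                    pop[i], fitness[i] = trial, f_trial

        print("best training MSE:", fitness.min())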

  4. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and it is used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  5. Nonlinear identification of process dynamics using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.F.; Chong, K.T.

    1992-01-01

    In this paper the nonlinear identification of process dynamics encountered in nuclear power plant components is addressed, in an input-output sense, using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the model structure to be identified. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard backpropagation learning algorithm is modified, and it is used for the supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The response of a representative steam generator is predicted using a neural network, and it is compared to the response obtained from a sophisticated computer model based on first principles. The transient responses compare well, although further research is warranted to determine the predictive capabilities of these networks during more severe operational transients and accident scenarios.

  6. Moving image compression and generalization capability of constructive neural networks

    Science.gov (United States)

    Ma, Liying; Khorasani, Khashayar

    2001-03-01

    To date numerous techniques have been proposed to compress digital images to ease their storage and transmission over communication channels. Recently, a number of image compression algorithms using neural networks (NNs) have been developed. Particularly, several constructive feed-forward neural networks (FNNs) have been proposed by researchers for image compression, and promising results have been reported. At the previous SPIE AeroSense conference 2000, we proposed to use a constructive One-Hidden-Layer Feedforward Neural Network (OHL-FNN) for compressing digital images. In this paper, we first investigate the generalization capability of the proposed OHL-FNN in the presence of additive noise for network training and/or generalization. Extensive experimental results for different scenarios are presented. It is revealed that the constructive OHL-FNN is not as robust to additive noise in the input image as expected. Next, the constructive OHL-FNN is applied to moving images (video sequences). The first, or another specified, frame in a moving image sequence is used to train the network. The remaining moving images that follow are then generalized/compressed by this trained network. Three types of correlation-like criteria measuring the similarity of any two images are introduced. The relationship between the generalization capability of the constructed net and the similarity of images is investigated in some detail. It is shown that the constructive OHL-FNN is promising even for changing images such as those extracted from a football game.

  7. Analysis and optimization of gas-centrifugal separation of uranium isotopes by neural networks

    Directory of Open Access Journals (Sweden)

    Migliavacca S.C.P.

    2002-01-01

    Full Text Available Neural networks are an attractive alternative for modeling complex problems with too many difficulties to be solved by a phenomenological model. A feed-forward neural network was used to model a gas-centrifugal separation of uranium isotopes. The prediction showed good agreement with the experimental data. An optimization study was carried out. The optimal operational condition was tested by a new experiment and a difference of less than 1% was found.

  8. Separating true V0's from combinatoric background with a neural network

    International Nuclear Information System (INIS)

    Justice, M.

    1997-01-01

    A feedforward multilayered neural network has been trained to "recognize" true V0's in the presence of a large combinatoric background using simulated data for 2 GeV/nucleon Ni + Cu interactions. The resulting neural network filter has been applied to actual data from the EOS TPC experiment. An enhancement of signal to background over more traditional selection mechanisms has been observed. (orig.)

  9. Interacting neural networks

    Science.gov (United States)

    Metzler, R.; Kinzel, W.; Kanter, I.

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as the decision-making algorithm in a model of a closed market (the El Farol Bar problem, or the Minority Game, in which a set of agents has to make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random.
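    A toy Python version of the minority-game setting mentioned above: each agent is a perceptron reading the recent history of minority decisions, and the agents on the less popular side win. The training rule and all parameters are simplified assumptions, not the analytical model of the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        n_agents, memory, steps, lr = 31, 5, 2000, 0.01
        W = rng.normal(size=(n_agents, memory))           # one perceptron per agent
        history = rng.choice([-1.0, 1.0], size=memory)    # recent minority decisions
        wins = 0

        for t in range(steps):
            decisions = np.sign(W @ history)              # each agent's +/-1 choice
            decisions[decisions == 0] = 1.0
            minority = -np.sign(decisions.sum())          # side chosen by fewer agents
            wins += np.sum(decisions == minority)
            # Losing agents are nudged towards the minority decision on the
            # current history window (a simplified perceptron-style update).
            W += lr * np.outer(decisions != minority, minority * history)
            history = np.roll(history, -1)
            history[-1] = minority

        print("fraction of winning decisions:", wins / (steps * n_agents))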

  10. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to adaptively compensate for the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
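    The sigma-modification idea can be written in a few lines. The discrete-time sketch below is only an illustration (gains, dimensions and sign conventions depend on the particular formulation and are assumptions here), not the controller analyzed in the paper; the leakage term -sigma*W is what keeps the adapted weights bounded.

        import numpy as np

        def sigma_mod_update(W, basis, error, gamma=0.5, sigma=0.05, dt=0.01):
            # One Euler step of a sigma-modification-type adaptive law:
            #   W_dot = -gamma * (phi * e + sigma * W)
            # W     : network weights, shape (n_basis, n_out)
            # basis : regressor / radial-basis activations phi(x), shape (n_basis,)
            # error : tracking error e, scalar or shape (n_out,)
            W_dot = -gamma * (np.outer(basis, np.atleast_1d(error)) + sigma * W)
            return W + dt * W_dot

        # Single illustrative step (all numbers are made up):
        W = np.zeros((4, 1))
        phi = np.array([0.1, 0.5, 0.3, 0.9])
        W = sigma_mod_update(W, phi, error=0.2)
        print(W.ravel())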

  11. Artificial neural networks in the nuclear engineering (Part 2)

    International Nuclear Information System (INIS)

    Baptista Filho, Benedito Dias

    2002-01-01

    The field of Artificial Neural Networks (ANN), one of the branches of Artificial Intelligence, has been attracting a lot of interest in Nuclear Engineering (NE). ANNs can be used to solve problems that are difficult to model, when data are faulty or incomplete, and in highly complex control problems. The first part of this work opened the discussion with feed-forward neural networks trained by back-propagation. In this part of the work, multi-synaptic neural networks are applied to control problems. Self-organizing maps are also presented in a typical pattern classification problem: transient classification. The main purpose of the work is to show that ANNs can be successfully used in NE if their type is chosen carefully: the application determines this choice. (author)

  12. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  13. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
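    The core of the recipe, reduced to a toy example: fit a feedforward network to sampled pairs (x, dx/dt) of a known vector field, then close the loop by integrating the fitted map so that the static network becomes a recurrent, continuous-time simulator. The chosen system (a harmonic oscillator), network size and training details are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)

        # 1) Sample the vector field f(x) = (x2, -x1) of a harmonic oscillator.
        X = rng.uniform(-2, 2, size=(500, 2))
        F = np.column_stack([X[:, 1], -X[:, 0]])

        # 2) Fit a one-hidden-layer feedforward net to the static map x -> f(x).
        W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
        W2 = rng.normal(scale=0.5, size=(16, 2)); b2 = np.zeros(2)
        lr = 0.05
        for _ in range(4000):
            h = np.tanh(X @ W1 + b1)
            err = h @ W2 + b2 - F
            dW2 = h.T @ err / len(X); db2 = err.mean(0)
            dh = (err @ W2.T) * (1.0 - h ** 2)
            dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
            W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

        # 3) Recast as a recurrent simulator: feed the network output back
        #    through an Euler integration step.
        def simulate(x0, dt=0.01, steps=1000):
            x, traj = np.array(x0, dtype=float), []
            for _ in range(steps):
                f_hat = np.tanh(x @ W1 + b1) @ W2 + b2
                x = x + dt * f_hat              # closing the loop -> recurrence
                traj.append(x.copy())
            return np.array(traj)

        traj = simulate([1.0, 0.0])
        print(traj[-1])   # stays roughly on the unit circle if the fit is good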

  14. Spiking neural network for recognizing spatiotemporal sequences of spikes

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.

    2004-01-01

    Sensory neurons in many brain areas spike with precise timing to stimuli with temporal structures, and encode temporally complex stimuli into spatiotemporal spikes. How the downstream neurons read out such neural code is an important unsolved problem. In this paper, we describe a decoding scheme using a spiking recurrent neural network. The network consists of excitatory neurons that form a synfire chain, and two globally inhibitory interneurons of different types that provide delayed feedforward and fast feedback inhibition, respectively. The network signals recognition of a specific spatiotemporal sequence when the last excitatory neuron down the synfire chain spikes, which happens if and only if that sequence was present in the input spike stream. The recognition scheme is invariant to variations in the intervals between input spikes within some range. The computation of the network can be mapped into that of a finite state machine. Our network provides a simple way to decode spatiotemporal spikes with diverse types of neurons

  15. Organization of feed-forward loop motifs reveals architectural principles in natural and engineered networks.

    Science.gov (United States)

    Gorochowski, Thomas E; Grierson, Claire S; di Bernardo, Mario

    2018-03-01

    Network motifs are significantly overrepresented subgraphs that have been proposed as building blocks for natural and engineered networks. Detailed functional analysis has been performed for many types of motif in isolation, but less is known about how motifs work together to perform complex tasks. To address this issue, we measure the aggregation of network motifs via methods that extract precisely how these structures are connected. Applying this approach to a broad spectrum of networked systems and focusing on the widespread feed-forward loop motif, we uncover striking differences in motif organization. The types of connection are often highly constrained, differ between domains, and clearly capture architectural principles. We show how this information can be used to effectively predict functionally important nodes in the metabolic network of Escherichia coli. Our findings have implications for understanding how networked systems are constructed from motif parts and elucidate constraints that guide their evolution.

  16. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, and fault correction process is assumed to be a delayed process. On the other hand, the artificial neural networks model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic networks configuration approach is developed with genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are developed with respect to a real data set

  17. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  18. Neural networks at the Tevatron

    International Nuclear Information System (INIS)

    Badgett, W.; Burkett, K.; Campbell, M.K.; Wu, D.Y.; Bianchin, S.; DeNardi, M.; Pauletta, G.; Santi, L.; Caner, A.; Denby, B.; Haggerty, H.; Lindsey, C.S.; Wainer, N.; Dall'Agata, M.; Johns, K.; Dickson, M.; Stanco, L.; Wyss, J.L.

    1992-10-01

    This paper summarizes neural network applications at the Fermilab Tevatron, including the first online hardware application in high energy physics (muon tracking): the CDF and D0 neural network triggers; offline quark/gluon discrimination at CDF; and a new tool for top to multijets recognition at CDF.

  19. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  20. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network. The Levenberg-Marquardt algorithm for non-linear least...

  1. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network. The Levenberg-Marquardt algorithm for non-linear least...

  2. Artificial neural networks in NDT

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed

    2001-01-01

    Artificial neural networks, simply known as neural networks, have attracted considerable interest in recent years largely because of a growing recognition of the potential of these computational paradigms as powerful alternative models to conventional pattern recognition or function approximation techniques. The neural networks approach is having a profound effect on almost all fields, and has been utilised in fields where experimental inter-disciplinary work is being carried out. Being a multidisciplinary subject with a broad knowledge base, Nondestructive Testing (NDT) or Nondestructive Evaluation (NDE) is no exception. This paper explains typical applications of neural networks in NDT/NDE. Three promising types of neural networks are highlighted, namely, back-propagation, binary Hopfield and Kohonen's self-organising maps. (Author)

  3. Experimental calibration of forward and inverse neural networks for rotary type magnetorheological damper

    DEFF Research Database (Denmark)

    Bhowmik, Subrata; Weber, Felix; Høgsberg, Jan Becker

    2013-01-01

    This paper presents a systematic design and training procedure for the feed-forward backpropagation neural network (NN) modeling of both forward and inverse behavior of a rotary magnetorheological (MR) damper based on experimental data. For the forward damper model, with damper force as output...

  4. Classification of data patterns using an autoassociative neural network topology

    Science.gov (United States)

    Dietz, W. E.; Kiech, E. L.; Ali, M.

    1989-01-01

    A diagnostic expert system based on neural networks is developed and applied to the real-time diagnosis of jet and rocket engines. The expert system methodologies are based on the analysis of patterns of behavior of physical mechanisms. In this approach, fault diagnosis is conceptualized as the mapping or association of patterns of sensor data to patterns representing fault conditions. The approach addresses deficiencies inherent in many feedforward neural network models and greatly reduces the number of networks necessary to identify the existence of a fault condition and estimate the duration and severity of the identified fault. The network topology used in the present implementation of the diagnostic system is described, as well as the training regimen used and the response of the system to inputs representing both previously observed and unknown fault scenarios. Noise effects on the integrity of the diagnosis are also evaluated.

  5. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than scatter in the data. A method is proposed that provides improvements in the accuracy achieved during training and the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)
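    Assuming scikit-learn is available, the sketch below illustrates the accuracy-assessment idea: because the target function is known exactly and contains no scatter, any residual left after training is attributable to the network approximation itself. The target function, network size and solver are arbitrary choices for the example.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Exact (noise-free) nonlinear target function.
        X = np.linspace(-1, 1, 200).reshape(-1, 1)
        y = np.sin(2 * np.pi * X).ravel()

        net = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                           solver='lbfgs', max_iter=5000, random_state=0)
        net.fit(X, y)

        # Evaluate on a denser grid; the residual quantifies the network's
        # degree of approximation, since the data contain no scatter.
        X_test = np.linspace(-1, 1, 1000).reshape(-1, 1)
        residual = net.predict(X_test) - np.sin(2 * np.pi * X_test).ravel()
        print("max abs error:", np.abs(residual).max())
        print("rms error:", np.sqrt(np.mean(residual ** 2)))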

  6. Analyzing Divisia Rules Extracted from a Feedforward Neural Network

    National Research Council Canada - National Science Library

    Schmidt, Vincent A; Binner, Jane M

    2006-01-01

    This paper introduces a mechanism for generating a series of rules that characterize the money-price relationship, defined as the relationship between the rate of growth of the money supply and inflation...

  7. Visual language recognition with a feed-forward network of spiking neurons

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Craig E [Los Alamos National Laboratory]; Garrett, Kenyan [Los Alamos National Laboratory]; Sottile, Matthew [GALOIS]; Shreyas, Ns [INDIANA UNIV.]

    2010-01-01

    An analogy is made and exploited between the recognition of visual objects and language parsing. A subset of regular languages is used to define a one-dimensional 'visual' language, in which the words are translational and scale invariant. This allows an exploration of the viewpoint invariant languages that can be solved by a network of concurrent, hierarchically connected processors. A language family is defined that is hierarchically tiling system recognizable (HREC). As inspired by nature, an algorithm is presented that constructs a cellular automaton that recognizes strings from a language in the HREC family. It is demonstrated how a language recognizer can be implemented from the cellular automaton using a feed-forward network of spiking neurons. This parser recognizes fixed-length strings from the language in parallel and as the computation is pipelined, a new string can be parsed in each new interval of time. The analogy with formal language theory allows inferences to be drawn regarding what class of objects can be recognized by visual cortex operating in purely feed-forward fashion and what class of objects requires a more complicated network architecture.

  8. Local excitation-inhibition ratio for synfire chain propagation in feed-forward neuronal networks

    Science.gov (United States)

    Guo, Xinmeng; Yu, Haitao; Wang, Jiang; Liu, Jing; Cao, Yibin; Deng, Bin

    2017-09-01

    A leading hypothesis holds that spiking activity propagates along neuronal sub-populations which are connected in a feed-forward manner, and that the propagation efficiency is affected by the dynamics of the sub-populations. In this paper, how the interaction between local excitation and inhibition affects synfire chain propagation in a feed-forward network (FFN) is investigated. The simulation results show that there is an appropriate excitation-inhibition (EI) ratio maximizing the performance of synfire chain propagation. The optimal EI ratio can significantly enhance the selectivity of the FFN to synchronous signals, which thereby increases the stability to background noise. Moreover, the effect of network topology on synfire chain propagation is also investigated. It is found that synfire chain propagation can be maximized by an optimal interlayer linking probability. We also find that external noise is detrimental to synchrony propagation by inducing spiking jitter. The results presented in this paper may provide insights into the effects of network dynamics on neuronal computations.

  9. Neural networks within multi-core optic fibers.

    Science.gov (United States)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  10. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    Full Text Available The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Besides, the results showed that this technique can be successfully applied to control process algorithms due to its long processing time and its flexibility in the incorporation of new data

  11. Forecasting the mortality rates of Indonesian population by using neural network

    Science.gov (United States)

    Safitri, Lutfiani; Mardiyati, Sri; Rahim, Hendrisman

    2018-03-01

    A model that can represent the problem is required in conducting a forecast. One of the models that has been acknowledged by the actuarial community for forecasting mortality rates is the Lee-Carter model. The Lee-Carter model supported by a neural network is used to calculate the mortality forecast for Indonesia. The type of neural network used is a feedforward neural network with the backpropagation algorithm, implemented in the Python programming language. The final result of this study is the forecast mortality rate of Indonesia for the next few years.

  12. Artificial Neural Network Analysis System

    Science.gov (United States)

    2001-02-27

    Contract No. DASG60-00-M-0201; purchase request no.: Foot in the Door-01. Title: Artificial Neural Network Analysis System. Company: Atlantic... Author: Powell, Bruce C. Report date: 27-02-2001; dates covered: 28-10-2000 to 27-02-2001.

  13. Propagation of spiking regularity and double coherence resonance in feedforward networks.

    Science.gov (United States)

    Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok

    2012-03-01

    We investigate systematically the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. It is interesting that double coherence resonance (DCR) with the combination of synaptic input correlation and noise intensity is finally attained after layer-by-layer processing in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network such as noise intensity, correlation of synaptic inputs, and inhibitory connections can serve as control parameters in modulating both rate coding and the order of temporal coding.

  14. An Artificial Neural Network for Data Forecasting Purposes

    Directory of Open Access Journals (Sweden)

    Catalina Lucia COCIANU

    2015-01-01

    Full Text Available Considering the fact that markets are generally influenced by different external factors, the stock market prediction is one of the most difficult tasks of time series analysis. The research reported in this paper aims to investigate the potential of artificial neural networks (ANN) in solving the forecast task in the most general case, when the time series are non-stationary. We used a feed-forward neural architecture: the nonlinear autoregressive network with exogenous inputs. The network training function used to update the weight and bias parameters corresponds to the gradient descent with adaptive learning rate variant of the backpropagation algorithm. The results obtained using this technique are compared with the ones resulted from some ARIMA models. We used the mean square error (MSE) measure to evaluate the performances of these two models. The comparative analysis leads to the conclusion that the proposed model can be successfully applied to forecast the financial data.
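    To make the NARX idea concrete, the sketch below builds the lagged regressors (past outputs plus past exogenous inputs) and fits them by gradient descent with a simple adaptive learning rate that grows after an improvement and backs off otherwise. For brevity the model here is linear in its parameters, whereas the paper uses a neural architecture; all data and settings are assumptions.

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic series: output y driven by an exogenous input u.
        T = 300
        u = rng.normal(size=T)
        y = np.zeros(T)
        for t in range(2, T):
            y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.5 * u[t-1] + 0.05 * rng.normal()

        # NARX-style regressors: lagged outputs and lagged exogenous inputs.
        lags = 2
        rows = [np.r_[y[t-lags:t][::-1], u[t-lags:t][::-1]] for t in range(lags, T)]
        Phi, target = np.array(rows), y[lags:]

        # Gradient descent with an adaptive learning rate.
        w = np.zeros(Phi.shape[1]); w_best = w.copy()
        lr, prev = 0.01, np.inf
        for _ in range(500):
            err = Phi @ w - target
            mse = np.mean(err ** 2)
            if mse > prev:                   # step made things worse: back off
                lr *= 0.5
                w = w_best.copy()
                continue
            lr *= 1.05                       # step helped: accept and grow the rate
            prev, w_best = mse, w.copy()
            w = w - lr * (2.0 * Phi.T @ err / len(target))

        print("identified parameters:", np.round(w_best, 3))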

  15. Application of artificial neural network for medical image recognition and diagnostic decision making

    International Nuclear Information System (INIS)

    Asada, N.; Eiho, S.; Doi, K.; MacMahon, H.; Montner, S.M.; Giger, M.L.

    1989-01-01

    An artificial neural network has been applied for pattern recognition and used as a tool in an expert system. The purpose of this study is to examine the potential usefulness of the neural network approach in medical applications for image recognition and decision making. The authors designed multilayer feedforward neural networks with a back-propagation algorithm for our study. Using first-pass radionuclide ventriculograms, we attempted to identify the right and left ventricles of the heart and the lungs by training the neural network from patterns of time-activity curves. In a preliminary study, the neural network enabled identification of the lungs and heart chambers once the network was trained sufficiently by means of repeated entries of data from the same case

  16. Modelling and Prediction of Photovoltaic Power Output Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Aminmohammad Saberian

    2014-01-01

    Full Text Available This paper presents a solar power modelling method using artificial neural networks (ANNs). Two neural network structures, namely, general regression neural network (GRNN) and feedforward back propagation (FFBP), have been used to model photovoltaic panel output power and approximate the generated power. Both neural networks have four inputs and one output. The inputs are maximum temperature, minimum temperature, mean temperature, and irradiance; the output is the power. The data used in this paper started from January 1, 2006, until December 31, 2010. The five years of data were split into two parts: 2006–2008 and 2009–2010; the first part was used for training and the second part was used for testing the neural networks. A mathematical equation is used to estimate the generated power. At the end, both of these networks have shown good modelling performance; however, FFBP has shown a better performance compared with GRNN.
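    The GRNN half of the comparison reduces to kernel-weighted averaging of the training targets (one pattern unit per training sample, with a single smoothing parameter sigma). The sketch below is a minimal illustration on synthetic data; the input names follow the abstract, but the values and sigma are assumptions.

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic training set: 4 inputs (max, min, mean temperature, irradiance)
        # and 1 output (panel power); the data are made up for illustration.
        X_train = rng.uniform(0, 1, size=(100, 4))
        y_train = 2.0 * X_train[:, 3] + 0.3 * X_train[:, 2] + 0.05 * rng.normal(size=100)

        def grnn_predict(x, X_train, y_train, sigma=0.15):
            # Each training sample is a pattern unit with a Gaussian kernel; the
            # prediction is the kernel-weighted average of the training targets.
            d2 = np.sum((X_train - x) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return float(np.sum(w * y_train) / np.sum(w))

        x_query = np.array([0.7, 0.3, 0.5, 0.8])
        print("predicted power:", grnn_predict(x_query, X_train, y_train))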

  17. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  18. Optical Neural Network Classifier Architectures

    National Research Council Canada - National Science Library

    Getbehead, Mark

    1998-01-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and classification of high-dimensional data for Air...

  19. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)

  20. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: 2008-Feb.

  1. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  2. New approach to ECG's features recognition involving neural network

    International Nuclear Information System (INIS)

    Babloyantz, A.; Ivanov, V.V.; Zrelov, P.V.

    2001-01-01

    A new approach for the detection of slight changes in the form of the ECG signal is proposed. It is based on the approximation of raw ECG data inside each RR-interval by an expansion in polynomials of a special type, and on the classification of samples represented by sets of expansion coefficients using a layered feed-forward neural network. The transformation applied provides a significantly simpler data structure and stability to noise and to other accidental factors. A by-product of the method is the compression of ECG data by a factor of 5.

  3. Supervised learning of probability distributions by neural networks

    Science.gov (United States)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
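    The key modification can be checked in a few lines: with a sigmoid output read as a probability, the gradient of the negative log-likelihood with respect to the pre-activation is simply (p - t), whereas the squared-error gradient carries an extra p(1 - p) factor that vanishes when the unit saturates. The numbers below are arbitrary illustrations.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # One output unit with pre-activation z and target t in {0, 1}:
        #   squared error       E = 0.5*(p - t)^2             -> dE/dz = (p - t)*p*(1 - p)
        #   neg. log-likelihood E = -[t*ln p + (1-t)*ln(1-p)]  -> dE/dz = p - t
        z, t = 4.0, 0.0                  # a confidently wrong unit
        p = sigmoid(z)
        grad_se = (p - t) * p * (1.0 - p)
        grad_ll = p - t
        print("squared-error gradient:", round(grad_se, 4))
        print("log-likelihood gradient:", round(grad_ll, 4))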

  4. Toward heterogeneity in feedforward network with synaptic delays based on FitzHugh-Nagumo model

    Science.gov (United States)

    Qin, Ying-Mei; Men, Cong; Zhao, Jia; Han, Chun-Xiao; Che, Yan-Qiu

    2018-01-01

    We focus on the role of heterogeneity in the propagation of firing patterns in a feedforward network (FFN). Effects of heterogeneities in both neuronal excitability parameters and synaptic delays are investigated systematically. Neuronal heterogeneity is found to modulate firing rates and spiking regularity by changing the excitability of the network. Synaptic delays are strongly related with desynchronized and synchronized firing patterns of the FFN, which indicates that synaptic delays may play a significant role in bridging rate coding and temporal coding. Furthermore, the quasi-coherence resonance (quasi-CR) phenomenon is observed in the parameter domain of connection probability and delay-heterogeneity. All these phenomena enable a detailed characterization of neuronal heterogeneity in FFNs, which may play an indispensable role in reproducing the important properties of in vivo experiments.

  5. Classification of remotely sensed data using OCR-inspired neural network techniques. [Optical Character Recognition

    Science.gov (United States)

    Kiang, Richard K.

    1992-01-01

    Neural networks have been applied to classifications of remotely sensed data with some success. To improve the performance of this approach, an examination was made of how neural networks are applied to the optical character recognition (OCR) of handwritten digits and letters. A three-layer, feedforward network, along with techniques adopted from OCR, was used to classify Landsat-4 Thematic Mapper data. Good results were obtained. To overcome the difficulties that are characteristic of remote sensing applications and to attain significant improvements in classification accuracy, a special network architecture may be required.

  6. The application of artificial neural network in radon disaster model of uranium mining

    International Nuclear Information System (INIS)

    Zhu Yufeng; Zhu Guogen; Zhou Shijian

    2012-01-01

    The structural features, data analysis and learning process of the feed-forward neural network (BP ANN) were analyzed first. Radon samples from the Fuzhou Jinan Uranium Industry Limited Company were then used to train the network and make forecasts, and a forecasting model was established for the radon disaster in uranium mines. The method and effectiveness of the BP neural network in predicting radon disasters are discussed. The test on the training samples showed that the BP network gave fairly satisfactory results in predicting mine radon disasters. (authors)

  7. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  8. NNSYSID and NNCTRL Tools for system identification and control with neural networks

    DEFF Research Database (Denmark)

    Nørgaard, Magnus; Ravn, Ole; Poulsen, Niels Kjølstad

    2001-01-01

    Two toolsets for use with MATLAB have been developed: the neural network based system identification toolbox (NNSYSID) and the neural network based control system design toolkit (NNCTRL). The NNSYSID toolbox has been designed to assist identification of nonlinear dynamic systems. It contains a number of nonlinear model structures based on neural networks, effective training algorithms and tools for model validation and model structure selection. The NNCTRL toolkit is an add-on to NNSYSID and provides tools for design and simulation of control systems based on neural networks. The user can choose among several designs such as direct inverse control, internal model control, nonlinear feedforward, feedback linearisation, optimal control, gain scheduling based on instantaneous linearisation of neural network models and nonlinear model predictive control. This article gives an overview...

  9. NNSYSID and NNCTRL Tools for system identification and control with neural networks

    DEFF Research Database (Denmark)

    Nørgaard, Magnus; Ravn, Ole; Poulsen, Niels Kjølstad

    2001-01-01

    Two toolsets for use with MATLAB have been developed: the neural network based system identification toolbox (NNSYSID) and the neural network based control system design toolkit (NNCTRL). The NNSYSID toolbox has been designed to assist identification of nonlinear dynamic systems. It contains a number of nonlinear model structures based on neural networks, effective training algorithms and tools for model validation and model structure selection. The NNCTRL toolkit is an add-on to NNSYSID and provides tools for design and simulation of control systems based on neural networks. The user can choose among several designs such as direct inverse control, internal model control, nonlinear feedforward, feedback linearisation, optimal control, gain scheduling based on instantaneous linearisation of neural network models and nonlinear model predictive control. This article gives an overview...

  10. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to the fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  11. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern
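
    The mapping this record describes, samples of a desired pattern in and array element excitations out, can be sketched with a small two-layer network. The sketch below uses numpy only; the 41-input/40-output sizes follow the abstract, but the training data are random placeholders rather than Woodward-Lawson synthesis results, and the hidden-layer size and learning rate are assumptions.

    ```python
    # Minimal sketch of the pattern-samples -> element-excitations mapping described above.
    # The training data here are random placeholders, NOT Woodward-Lawson synthesis results.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(27, 41))   # 27 desired-pattern examples, 41 samples each (assumed data)
    Y = rng.normal(size=(27, 40))   # 40 excitations: 20 real parts + 20 imaginary parts

    n_hidden, lr = 30, 0.01
    W1 = rng.normal(scale=0.1, size=(41, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, 40)); b2 = np.zeros(40)

    for epoch in range(2000):
        H = np.tanh(X @ W1 + b1)          # hidden layer
        Yhat = H @ W2 + b2                # linear output layer
        err = Yhat - Y                    # MSE error terms
        dW2 = H.T @ err / len(X); db2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H**2)    # backpropagate through tanh
        dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

    excitations = np.tanh(X[:1] @ W1 + b1) @ W2 + b2   # simulated synthesis for one desired pattern
    ```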

  12. Neural Dynamics of Feedforward and Feedback Processing in Figure-Ground Segregation

    OpenAIRE

    Oliver W. Layton; Ennio eMingolla; Arash eYazdanbakhsh

    2014-01-01

    Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedba...

  13. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, a highly channeled circuit and a distributed modifying circuit, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  14. Distinct Feedforward and Feedback Effects of Microstimulation in Visual Cortex Reveal Neural Mechanisms of Texture Segregation.

    Science.gov (United States)

    Klink, P Christiaan; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Roelfsema, Pieter R

    2017-07-05

    The visual cortex is hierarchically organized, with low-level areas coding for simple features and higher areas for complex ones. Feedforward and feedback connections propagate information between areas in opposite directions, but their functional roles are only partially understood. We used electrical microstimulation to perturb the propagation of neuronal activity between areas V1 and V4 in monkeys performing a texture-segregation task. In both areas, microstimulation locally caused a brief phase of excitation, followed by inhibition. Both these effects propagated faithfully in the feedforward direction from V1 to V4. Stimulation of V4, however, caused little V1 excitation, but it did yield a delayed suppression during the late phase of visually driven activity. This suppression was pronounced for the V1 figure representation and weaker for background representations. Our results reveal functional differences between feedforward and feedback processing in texture segregation and suggest a specific modulating role for feedback connections in perceptual organization. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Limits to the development of feed-forward structures in large recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2011-02-01

    Full Text Available Spike-timing dependent plasticity (STDP has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify candidate biologically motivated adaptations to the balanced random network model that might enable it.
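
    For readers unfamiliar with the plasticity rule under discussion, the following is a minimal sketch of a weight-dependent pair-based STDP update of the kind the abstract refers to. The time constants, learning rate and weight-dependence exponent are illustrative assumptions, not values from the paper.

    ```python
    # Sketch of a weight-dependent pair-based STDP update (illustrative constants).
    import numpy as np

    def stdp_dw(w, dt, w_max=1.0, lr=0.005, tau_plus=20.0, tau_minus=20.0, mu=0.4):
        """Weight change for one pre/post spike pair separated by dt = t_post - t_pre (ms)."""
        if dt > 0:   # pre before post -> potentiation, damped as w approaches w_max
            return lr * (1.0 - w / w_max) ** mu * np.exp(-dt / tau_plus)
        else:        # post before pre -> depression, scaled by the current weight
            return -lr * (w / w_max) ** mu * np.exp(dt / tau_minus)

    w = 0.5
    for dt in [5.0, 12.0, -8.0, 3.0]:            # a few spike-pair timings (ms)
        w = np.clip(w + stdp_dw(w, dt), 0.0, 1.0)
    ```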

  16. Reliability analysis of C-130 turboprop engine components using artificial neural network

    Science.gov (United States)

    Qattan, Nizar A.

    In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and artificial neural network models (feed-forward back-propagation, radial basis neural network, and multilayer perceptron) are utilized in this study. For this purpose, the thesis is divided into five major parts. The first part deals with the Weibull regression model, used to predict the turbine's general failure rate and the rate of failures that require overhaul maintenance. The second part covers the artificial neural network (ANN) model utilizing the feed-forward back-propagation algorithm as the learning rule; the MATLAB package is used to build a code to simulate the given data, the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures which required overhaul maintenance. In the third part we predict the general failure rate of the turbine and the failures which require overhaul maintenance using a radial basis neural network model in the MATLAB toolbox. In the fourth part we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rates predicted by the feed-forward back-propagation model and the radial basis neural network model are in closer agreement with the actual field data than the failure rate predicted by the Weibull model. By the end of the study, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures which required overhaul maintenance and six categorical failures using a multilayer perceptron (MLP) model in the DTREG commercial software. The results also give an insight into the reliability of the engine
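
    The Weibull baseline used in the comparison reduces to the standard hazard (failure rate) function h(t) = (beta/eta)(t/eta)^(beta-1). A small sketch follows, with illustrative shape and scale parameters rather than values fitted to the C-130 field data.

    ```python
    # Sketch of the Weibull baseline: failure (hazard) rate h(t) = (beta/eta) * (t/eta)**(beta-1).
    # Shape/scale values are illustrative, not fitted to the C-130 field data.
    import numpy as np

    def weibull_failure_rate(t, beta=1.8, eta=5000.0):
        """Instantaneous failure rate at operating time t (same time units as eta)."""
        return (beta / eta) * (t / eta) ** (beta - 1.0)

    hours = np.array([500.0, 1000.0, 2500.0, 5000.0])
    print(weibull_failure_rate(hours))   # increasing rate since beta > 1 (wear-out behaviour)
    ```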

  17. Artificial neural networks for processing fluorescence spectroscopy data in skin cancer diagnostics

    International Nuclear Information System (INIS)

    Lenhardt, L; Zeković, I; Dramićanin, T; Dramićanin, M D

    2013-01-01

    Over the years various optical spectroscopic techniques have been widely used as diagnostic tools in the discrimination of many types of malignant diseases. Recently, synchronous fluorescent spectroscopy (SFS) coupled with chemometrics has been applied in cancer diagnostics. The SFS method involves simultaneous scanning of both emission and excitation wavelengths while keeping the interval of wavelengths (constant-wavelength mode) or frequencies (constant-energy mode) between them constant. This method is fast, relatively inexpensive, sensitive and non-invasive. Total synchronous fluorescence spectra of normal skin, nevus and melanoma samples were used as input for training of artificial neural networks. Two different types of artificial neural networks were trained, the self-organizing map and the feed-forward neural network. Histopathology results of investigated skin samples were used as the gold standard for network output. Based on the obtained classification success rate of neural networks, we concluded that both networks provided high sensitivity with classification errors between 2 and 4%. (paper)

  18. Learning and coding in biological neural networks

    Science.gov (United States)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and

  19. Artificial neural network simulation of battery performance

    Energy Technology Data Exchange (ETDEWEB)

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, their myriad chemical and physical processes, including interactions, are much more difficult to accurately represent. Within this category are the diffusive and solubility characteristics of individual species, reaction kinetics and mechanisms of primary chemical species as well as intermediates, and growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has only been partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  20. Neural networks in signal processing

    International Nuclear Information System (INIS)

    Govil, R.

    2000-01-01

    Nuclear Engineering has matured during the last decade. In research and design, control, supervision, maintenance and production, mathematical models and theories are used extensively. In all such applications signal processing is embedded in the process. Artificial Neural Networks (ANNs), because of their nonlinear, adaptive nature, are well suited to such applications where the classical assumptions of linearity and second order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques, which can model an underlying process from example data. They can also adapt their model parameters to statistical changes over time. Algorithms in the framework of Neural Networks in Signal Processing have found new application potential in the field of Nuclear Engineering. This paper reviews the fundamentals of Neural Networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model based ANNs, statistical learning, eigen structure based processing and generalization structures. (orig.)

  1. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    Science.gov (United States)

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

  2. Predicting physical time series using dynamic ridge polynomial neural networks.

    Directory of Open Access Journals (Sweden)

    Dhiya Al-Jumeily

    Full Text Available Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks.

  3. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

    Full Text Available In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes get into saturation. The early creation of such hidden nodes may impair generalisation. Hence an entropy approach is proposed to dampen the early creation of such nodes. The entropy learning also helps to increase the importance of relevant nodes while dampening the less important nodes. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
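
    One way to realise the idea in this record, penalising early saturation of hidden nodes through an entropy term, is sketched below. The Bernoulli-entropy form and the weighting factor are assumptions chosen for illustration, not necessarily the paper's exact formulation.

    ```python
    # Hedged sketch: add an entropy term so sigmoid hidden nodes are discouraged from saturating early.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def entropy_regularised_loss(y_true, y_pred, hidden_act, lam=0.01, eps=1e-12):
        mse = np.mean((y_true - y_pred) ** 2)
        # Bernoulli entropy of each hidden activation: maximal at 0.5, near zero when saturated at 0 or 1
        h = -(hidden_act * np.log(hidden_act + eps)
              + (1.0 - hidden_act) * np.log(1.0 - hidden_act + eps))
        return mse - lam * np.mean(h)   # minimising the loss keeps the entropy (unsaturation) high
    ```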

  4. Artificial neural networks for spatial distribution of fuel assemblies in reload of PWR reactors

    International Nuclear Information System (INIS)

    Oliveira, Edyene; Castro, Victor F.; Velásquez, Carlos E.; Pereira, Claubia

    2017-01-01

    An artificial neural network methodology is being developed in order to find an optimum spatial distribution of the fuel assemblies in a nuclear reactor core during reload. The main bounding parameter of the modelling was the neutron multiplication factor, keff. The characteristics of the network are defined by the nuclear parameters: cycle, burnup, enrichment, fuel type, and average power peak of each element. These parameters were obtained by the ORNL nuclear code package SCALE6.0. As for the artificial neural network, feedforward multilayer perceptron (MLP) networks with various numbers of layers and neurons were constructed. Three algorithms were used and tested: LM (Levenberg-Marquardt), SCG (Scaled Conjugate Gradient) and BayR (Bayesian Regularization). The artificial neural networks were implemented using MATLAB version 2015a. As preliminary results, the spatial distribution of the fuel assemblies in the core obtained with the neural network was slightly better than that of the standard core. (author)

  5. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a kind of sparse reconstruction problems. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and the item in the set-valued map of the differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, some numerical experiments show the effectiveness of the proposed network in this paper.
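
    The two ingredients named here, a smoothing approximation of the nonsmooth sparsity term and a projected gradient step, can be sketched in discrete form as follows. The box constraint, smoothing parameter, step size and synthetic data are illustrative assumptions; the record's circuit-level network dynamics are not reproduced.

    ```python
    # Sketch: smoothed l1 term sqrt(x^2 + mu^2) plus a projected gradient step (synthetic problem).
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 100))
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
    b = A @ x_true

    lam, mu = 0.1, 1e-2
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # conservative step for the least-squares part
    x = np.zeros(100)
    for _ in range(500):
        grad = A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + mu**2)  # gradient of smoothed objective
        x = np.clip(x - step * grad, -10.0, 10.0)                   # projection onto the box [-10, 10]
    ```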

  6. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... an OCR using Neural Network classifier preceded by a set of preprocessing .... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantages and disadvantages of each technique. In [9], Khemiri ...

  7. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the neural network attractive. A neural network is an information processing system modeled on the structure of the dynamic process. It can solve the complex/nonlinear problems quickly once trained by operating on problems using an interconnected number...

  8. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely recurrent neural network with rprop update algorithm and is applied for wave forecasting. Measured ocean waves off...

  9. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits, permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  10. Adaptive Graph Convolutional Neural Networks

    OpenAIRE

    Li, Ruoyu; Wang, Sheng; Zhu, Feiyun; Huang, Junzhou

    2018-01-01

    Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs to handle graph data such as molecular data, point clouds and social networks. Current filters in graph CNNs are built for fixed and shared graph structure. However, for most real data, the graph structures vary in both size and connectivity. The paper proposes a generalized and flexible graph CNN taking data of arbitrary graph structure as input. In that way a task-driven adaptive graph is learned for eac...

  11. Artificial neural networks for dynamic monitoring of simulated-operating parameters of high temperature gas cooled engineering test reactor (HTTR)

    International Nuclear Information System (INIS)

    Seker, Serhat; Tuerkcan, Erdinc; Ayaz, Emine; Barutcu, Burak

    2003-01-01

    This paper addresses the problem of utilisation of artificial neural networks (ANNs) for detecting anomalies as well as physical parameters of a nuclear power plant during power operation in real time. Three different types of neural network algorithms were used, namely a feed-forward neural network (back-propagation, BP) and two types of recurrent neural networks (RNN). The data used in this paper were gathered from the simulation of the power operation of Japan's High Temperature Engineering Test Reactor (HTTR). For the wide range of power operation, 56 signals were generated by the reactor dynamic simulation code for several hours of normal power operation at different power ramps between 30 and 100% nominal power. The paper compares the outcomes of the different neural networks and presents the neural network system and the determination of physical parameters from the simulated operating data

  12. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    Science.gov (United States)

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  13. Neural network to diagnose lining condition

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at the iron and steel works. The authors describe the neural network structure and software that are designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented. The authors note the low learning and classification errors of the proposed neural networks. To realize the proposed neural network, the specialized software has been developed.

  14. Medical Imaging with Neural Networks

    International Nuclear Information System (INIS)

    Pattichis, C.; Constantinides, A.

    1994-01-01

    The objective of this paper is to provide an overview of the recent developments in the use of artificial neural networks in medical imaging. The areas of medical imaging that are covered include : ultrasound, magnetic resonance, nuclear medicine and radiological (including computerized tomography). (authors)

  15. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    neural networks, such as learning, adapting and copying by means of parallel ... to provide robust recognition of hand-printed English text. Engine idle and misfiring .... and s represents the bounded activation function of a neuron. It is typically ...

  16. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    A web-based software model (http://fuzzy.iau.dtu.dk/aphasia.nsf) was developed as an example for classification of aphasia using neural networks. Two multilayer perceptrons were used to classify the type of aphasia (Broca, Wernicke, anomic, global) according to the results in some subtests...

  17. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, artificial neural networks (ANNs) have made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using genetic algorithms (GA) to optimize the topology of the ANN, but they suffer from some limitations: (1) the computation time required to train the ANN several times in order to reach the required average weights, (2) the slowness of the GA optimization process, and (3) the fitness noise that appears in the optimization of the ANN. This research proposes new approaches to overcome these limitations and to find optimal neural network architectures for learning particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case of real complex systems. The proposed system has proved its significant performance compared to two common methods used in diagnostic applications.

  18. Medical Imaging with Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Pattichis, C [Department of Computer Science, University of Cyprus, Kallipoleos 75, P.O.Box 537, Nicosia (Cyprus); Constantinides, A [Department of Electrical Engineering, Imperial College of Science, Technology and Medicine, London SW7 2BT (United Kingdom)

    1994-12-31

    The objective of this paper is to provide an overview of the recent developments in the use of artificial neural networks in medical imaging. The areas of medical imaging that are covered include : ultrasound, magnetic resonance, nuclear medicine and radiological (including computerized tomography). (authors). 61 refs, 4 tabs.

  19. Numerical experiments with neural networks

    International Nuclear Information System (INIS)

    Miranda, Enrique.

    1990-01-01

    Neural networks are highly idealized models which, in spite of their simplicity, reproduce some key features of the real brain. In this paper, they are introduced at a level adequate for an undergraduate computational physics course. Some relevant magnitudes are defined and evaluated numerically for the Hopfield model and a short term memory model. (Author)

  20. Spin glasses and neural networks

    International Nuclear Information System (INIS)

    Parga, N.; Universidad Nacional de Cuyo, San Carlos de Bariloche

    1989-01-01

    The mean-field theory of spin glass models has been used as a prototype of systems with frustration and disorder. One of the most interesting related systems are models of associative memories. In these lectures we review the main concepts developed to solve the Sherrington-Kirkpatrick model and its application to neural networks. (orig.)

  1. Application of neural networks to seismic active control

    International Nuclear Information System (INIS)

    Tang, Yu.

    1995-01-01

    An exploratory study on seismic active control using an artificial neural network (ANN) is presented in which a single-degree-of-freedom (SDF) structural system is controlled by a trained neural network. A feed-forward neural network and the backpropagation training method are used in the study. In backpropagation training, the learning rate is determined by ensuring the decrease of the error function at each training cycle. The training patterns for the neural net are generated randomly. Then, the trained ANN is used to compute the control force according to the control algorithm. The control strategy proposed herein is to apply the control force at every time step to destroy the build-up of the system response. The ground motions considered in the simulations are the N21E and N69W components of the Lake Hughes No. 12 record that occurred in the San Fernando Valley in California on February 9, 1971. Significant reduction of the structural response by one order of magnitude is observed. Also, it is shown that the proposed control strategy has the ability to reduce the peak that occurs during the first few cycles of the time history. These promising results assert the potential of applying ANNs to active structural control under seismic loads.
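
    The learning-rate rule mentioned in this record, choosing the rate so that the error function decreases at each training cycle, can be sketched as an accept/shrink loop. The shrink and growth factors and the toy quadratic error below are assumptions for illustration, not the paper's settings.

    ```python
    # Sketch: accept a gradient step only if the error decreases, otherwise shrink the rate and retry.
    import numpy as np

    def train_with_monotone_error(loss_and_grad, w0, lr=0.1, cycles=100, shrink=0.5, grow=1.1):
        w, lr_k = w0.copy(), lr
        err, grad = loss_and_grad(w)
        for _ in range(cycles):
            w_trial = w - lr_k * grad
            err_trial, grad_trial = loss_and_grad(w_trial)
            if err_trial < err:          # error decreased: accept and cautiously enlarge the rate
                w, err, grad, lr_k = w_trial, err_trial, grad_trial, lr_k * grow
            else:                        # error increased: reject the step and reduce the rate
                lr_k *= shrink
        return w

    # toy quadratic "error function" standing in for the network training error
    quadratic = lambda w: (0.5 * np.sum(w**2), w)
    w_final = train_with_monotone_error(quadratic, np.array([3.0, -2.0]))
    ```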

  2. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalmann predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  3. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  4. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  5. Artificial Neural Network Model for Monitoring Oil Film Regime in Spur Gear Based on Acoustic Emission Data

    Directory of Open Access Journals (Sweden)

    Yasir Hassan Ali

    2015-01-01

    Full Text Available The thickness of an oil film lubricant can contribute to less gear tooth wear and surface failure. The purpose of this research is to use artificial neural network (ANN) computational modelling to correlate spur gear data from acoustic emissions, lubricant temperature, and specific film thickness (λ). The approach uses an algorithm to monitor the oil film thickness and to detect which lubrication regime the gearbox is running in: hydrodynamic, elastohydrodynamic, or boundary. This monitoring can aid identification of fault development. Feed-forward and recurrent Elman neural network algorithms were used to develop ANN models, which are subjected to training, testing, and validation processes. The Levenberg-Marquardt back-propagation algorithm was applied to reduce errors. Log-sigmoid and Purelin were identified as suitable transfer functions for hidden and output nodes. The methods used in this paper show accurate predictions from the ANN, and the feed-forward network's performance is superior to that of the Elman neural network.

  6. Understanding the role of speech production in reading: Evidence for a print-to-speech neural network using graphical analysis.

    Science.gov (United States)

    Cummine, Jacqueline; Cribben, Ivor; Luu, Connie; Kim, Esther; Bahktiari, Reyhaneh; Georgiou, George; Boliek, Carol A

    2016-05-01

    The neural circuitry associated with language processing is complex and dynamic. Graphical models are useful for studying complex neural networks as this method provides information about unique connectivity between regions within the context of the entire network of interest. Here, the authors explored the neural networks during covert reading to determine the role of feedforward and feedback loops in covert speech production. Brain activity of skilled adult readers was assessed in real word and pseudoword reading tasks with functional MRI (fMRI). The authors provide evidence for activity coherence in the feedforward system (inferior frontal gyrus-supplementary motor area) during real word reading and in the feedback system (supramarginal gyrus-precentral gyrus) during pseudoword reading. Graphical models provided evidence of an extensive, highly connected, neural network when individuals read real words that relied on coordination of the feedforward system. In contrast, when individuals read pseudowords the authors found a limited/restricted network that relied on coordination of the feedback system. Together, these results underscore the importance of considering multiple pathways and articulatory loops during language tasks and provide evidence for a print-to-speech neural network. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Identification of Complex Dynamical Systems with Neural Networks (2/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with 3 layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical system in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parall...

  8. Identification of Complex Dynamical Systems with Neural Networks (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with 3 layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical system in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parall...

  9. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    Full Text Available In this paper, the adaptive competitive learning (ACL) neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. This algorithm uses a new proposed criterion referred to as the ACL criterion. This criterion evaluates different clustering structures produced by the ACL neural network for an input data set. Then, it selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm in the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm, especially with data sets that are sparsely distributed.
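
    A minimal winner-take-all competitive-learning loop of the kind the ACL algorithm builds on is sketched below. The ACL criterion for selecting the number of clusters is not reproduced; the number of neurons, learning rate and synthetic data are assumptions.

    ```python
    # Minimal competitive-learning sketch (winner-take-all update); k is fixed here for illustration.
    import numpy as np

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in ([0, 0], [3, 3], [0, 4])])

    k, lr = 3, 0.05
    W = X[rng.choice(len(X), k, replace=False)].copy()   # one weight vector per competing neuron

    for epoch in range(20):
        for x in rng.permutation(X):
            winner = np.argmin(np.linalg.norm(W - x, axis=1))   # competition: closest prototype wins
            W[winner] += lr * (x - W[winner])                   # only the winner moves toward the input
    ```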

  10. Optical resonators and neural networks

    Science.gov (United States)

    Anderson, Dana Z.

    1986-08-01

    It may be possible to implement neural network models using continuous field optical architectures. These devices offer the inherent parallelism of propagating waves and an information density in principle dictated by the wavelength of light and the quality of the bulk optical elements. Few components are needed to construct a relatively large equivalent network. Various associative memories based on optical resonators have been demonstrated in the literature; a ring resonator design is discussed in detail here. Information is stored in a holographic medium and recalled through a competitive process in the gain medium supplying energy to the ring resonator. The resonator memory is the first realized example of a neural network function implemented with this kind of architecture.

  11. Investigation of tt in the full hadronic final state at CDF with a neural network approach

    CERN Document Server

    Sidoti, A; Busetto, G; Castro, A; Dusini, S; Lazzizzera, I; Wyss, J

    2001-01-01

    In this work we present the results of a neural network (NN) approach to the measurement of the tt production cross-section and top mass in the all-hadronic channel, analyzing data collected at the Collider Detector at Fermilab (CDF) experiment. We have used a hardware implementation of a feedforward neural network, TOTEM, the product of a collaboration of INFN (Istituto Nazionale Fisica Nucleare)-IRST (Istituto per la Ricerca Scientifica e Tecnologica)-University of Trento, Italy. Particular attention has been paid to the evaluation of the systematics specifically related to the NN approach. The results are consistent with those obtained at CDF by conventional data selection techniques. (38 refs).

  12. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    Science.gov (United States)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
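
    The EWMA baseline that the neural controller is compared against is a one-line recursion: the next-lot estimate blends the latest measurement with the previous estimate. A sketch with an assumed weighting factor and a hypothetical critical-dimension history follows.

    ```python
    # Sketch of the EWMA run-to-run predictor used as the comparison baseline above.
    def ewma_predict(measurements, lam=0.3):
        estimate = measurements[0]
        for y in measurements[1:]:
            estimate = lam * y + (1.0 - lam) * estimate   # run-to-run update after each lot
        return estimate

    cd_history = [180.2, 181.0, 179.5, 182.3, 181.1]      # hypothetical critical dimensions (nm)
    print(ewma_predict(cd_history))                       # prediction for the next lot
    ```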

  13. Particle identification with neural networks using a rotational invariant moment representation

    International Nuclear Information System (INIS)

    Sinkus, R.

    1997-01-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions which are invariant under rotation. The multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies. (orig.)
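
    The preprocessing chain described here, principal component analysis followed by rescaling with the component variances so that inputs are of order one, can be sketched in a few lines of numpy. The moment data below are synthetic stand-ins for the Zernike moments.

    ```python
    # Sketch: project moment vectors onto principal components and rescale by their variances.
    import numpy as np

    rng = np.random.default_rng(3)
    moments = rng.normal(size=(1000, 12)) @ rng.normal(size=(12, 12))   # stand-in for Zernike moments

    centered = moments - moments.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    scores = centered @ eigvecs                      # principal component analysis
    nn_inputs = scores / np.sqrt(eigvals + 1e-12)    # rescale by variances -> inputs of order one
    ```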

  14. A simple mechanical system for studying adaptive oscillatory neural networks

    DEFF Research Database (Denmark)

    Jouffroy, Guillaume; Jouffroy, Jerome

    Central Pattern Generators (CPG) are oscillatory systems that are responsible for generating rhythmic patterns at the origin of many biological activities such as for example locomotion or digestion. These systems are generally modelled as recurrent neural networks whose parameters are tuned so that the network oscillates in a suitable way, this tuning being a non-trivial task. It also appears that the link with the physical body that these oscillatory entities control has a fundamental importance, and it seems that most bodies used for experimental validation in the literature (walking robots, lamprey ... After a brief description of the Roller-Racer, we present as a preliminary study an RNN-based feed-forward controller whose parameters are obtained through the well-known teacher forcing learning algorithm, extended to learn signals with a continuous component ...

  15. Photon spectrometry utilizing neural networks

    International Nuclear Information System (INIS)

    Silveira, R.; Benevides, C.; Lima, F.; Vilela, E.

    2015-01-01

    Considering the time spent on the routine work of characterizing the radiation beams used in an ionizing radiation metrology laboratory, the Metrology Service of the Centro Regional de Ciencias Nucleares do Nordeste - CRCN-NE verified the applicability of artificial intelligence (artificial neural networks) to perform spectrometry in photon fields. For this, a multilayer neural network was developed as an application for the classification of patterns in energy, associated with a thermoluminescent dosimetric system (TLD-700 and TLD-600). A set of dosimeters was initially exposed to various well-known mean energies, between 40 keV and 1.2 MeV, coinciding with the beams specified by the ISO 4037 standard, for a dose of 10 mSv in the quantity Hp(10) on a chest phantom (ISO slab phantom), with the purpose of generating a set of training data for the neural network. Subsequently, a new set of dosimeters irradiated at unknown energies was presented to the network in order to test the method. The methodology used in this work was suitable for the classification of energy beams, with 100% of the classifications performed correctly. (authors)

  16. Artificial neural network modeling and optimization of ultrahigh pressure extraction of green tea polyphenols.

    Science.gov (United States)

    Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong

    2013-11-01

    In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The obtained optimal architecture of the artificial neural network model involved a feed-forward neural network with three input neurons, one hidden layer with eight neurons and an output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R² of 0.9571, which implied a good agreement between the predicted value and the actual value, and confirmed a good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The total phenolic content of the actual measurement under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which was well matched with the predicted value (597.2 mg/g DW). This suggests that the artificial neural network model described in this work is an efficient quantitative tool to predict the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
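
    The coupling of a trained network with genetic algorithms can be sketched as a simple GA searching the three input variables over a surrogate yield function. The quadratic surrogate below is only a placeholder for the trained 3-8-1 network, and the bounds, population size and GA operators are assumptions.

    ```python
    # Hedged sketch: genetic algorithm searching extraction conditions over a surrogate yield model.
    import numpy as np

    rng = np.random.default_rng(4)
    lo = np.array([100.0, 5.0, 0.0])      # pressure (MPa), liquid/solid ratio (mL/g), ethanol (%)
    hi = np.array([600.0, 30.0, 100.0])

    def surrogate_yield(x):               # placeholder for the trained network's prediction
        target = np.array([500.0, 21.0, 54.0])
        return -np.sum(((x - target) / (hi - lo)) ** 2)

    pop = rng.uniform(lo, hi, size=(40, 3))
    for gen in range(100):
        fitness = np.array([surrogate_yield(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[-20:]]                    # selection: keep the best half
        mates = parents[rng.integers(0, 20, size=(20, 2))]
        children = mates.mean(axis=1)                               # crossover: blend two parents
        children += rng.normal(scale=0.02 * (hi - lo), size=children.shape)   # mutation
        pop = np.vstack([parents, np.clip(children, lo, hi)])

    best = pop[np.argmax([surrogate_yield(ind) for ind in pop])]    # predicted optimum conditions
    ```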

  17. Neural networks based identification and compensation of rate-dependent hysteresis in piezoelectric actuators

    International Nuclear Information System (INIS)

    Zhang, Xinliang; Tan, Yonghong; Su, Miyong; Xie, Yangqiu

    2010-01-01

    This paper presents a method for the identification of the rate-dependent hysteresis in the piezoelectric actuator (PEA) by use of neural networks. In this method, a special hysteretic operator is constructed from the Prandtl-Ishlinskii (PI) model to extract the changing tendency of the static hysteresis. Then, an expanded input space is constructed by introducing the proposed hysteretic operator to transform the multi-valued mapping of the hysteresis into a one-to-one mapping. Thus, a feedforward neural network is applied to the approximation of the rate-independent hysteresis on the constructed expanded input space. Moreover, in order to describe the rate-dependent performance of the hysteresis, a special hybrid model is proposed, which is constructed from a linear auto-regressive exogenous input (ARX) sub-model preceded by the previously obtained neural-network-based rate-independent hysteresis sub-model. For the compensation of the effect of the hysteresis in the PEA, a PID feedback controller with a feedforward hysteresis compensator is developed for the tracking control of the PEA, and a corresponding inverse model based on the proposed modeling method is developed for the feedforward hysteresis compensator. Finally, both simulations and experimental results on a piezoelectric actuator are presented to verify the effectiveness of the proposed approach for the rate-dependent hysteresis.
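
    The expanded-input-space idea can be sketched with the classical Prandtl-Ishlinskii play operator: each input sample u(t) is paired with a hysteretic operator output, so a feedforward regressor sees a one-to-one mapping. The threshold and the excitation signal below are illustrative, not the identified model from the paper.

    ```python
    # Sketch of the expanded input space [u(t), h(u)(t)] built from a PI play operator.
    import numpy as np

    def play_operator(u, r=0.2):
        y = np.zeros_like(u)
        for t in range(1, len(u)):
            y[t] = max(u[t] - r, min(u[t] + r, y[t - 1]))   # rate-independent hysteretic memory
        return y

    t = np.linspace(0, 4 * np.pi, 400)
    u = np.sin(t) * np.exp(-0.05 * t)                         # decaying excitation (arbitrary signal)
    expanded_inputs = np.column_stack([u, play_operator(u)])  # pairs fed to the feedforward network
    ```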

  18. Transform a Simple Sketch to a Chinese Painting by a Multiscale Deep Neural Network

    Directory of Open Access Journals (Sweden)

    Daoyu Lin

    2018-01-01

    Full Text Available Recently, inspired by the power of deep learning, convolutional neural networks can produce fantastic images at the pixel level. However, a significant limiting factor for previous approaches is that they focus on some simple datasets such as faces and bedrooms. In this paper, we propose a multiscale deep neural network to transform sketches into Chinese paintings. To synthesize more realistic imagery, we train the generative network by using both L1 loss and adversarial loss. Additionally, users can control the process of the synthesis since the generative network is feed-forward. This network can also be treated as neural style transfer by adding an edge detector. Furthermore, additional experiments on image colorization and image super-resolution demonstrate the universality of our proposed approach.

  19. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.

    Science.gov (United States)

    Mostafa, Hesham

    2017-08-01

    Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.

  20. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). As compressing large data sets vastly simplifies both frequentist and Bayesian inference, important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  1. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  2. Spatial Multiplexing of Atom-Photon Entanglement Sources using Feedforward Control and Switching Networks.

    Science.gov (United States)

    Tian, Long; Xu, Zhongxiao; Chen, Lirong; Ge, Wei; Yuan, Haoxiang; Wen, Yafei; Wang, Shengzhi; Li, Shujing; Wang, Hai

    2017-09-29

    The light-matter quantum interface that can create quantum correlations or entanglement between a photon and one atomic collective excitation is a fundamental building block for a quantum repeater. The intrinsic limit is that the probability of preparing such nonclassical atom-photon correlations has to be kept low in order to suppress multiexcitation. To enhance this probability without introducing multiexcitation errors, a promising scheme is to apply multimode memories to the interface. Significant progress has been made in temporal, spectral, and spatial multiplexing memories, but the enhanced probability for generating the entangled atom-photon pair has not been experimentally realized. Here, by using six spin-wave-photon entanglement sources, a switching network, and feedforward control, we build a multiplexed light-matter interface and then demonstrate a ∼sixfold (∼fourfold) probability increase in generating entangled atom-photon (photon-photon) pairs. The measured compositive Bell parameter for the multiplexed interface is 2.49±0.03 combined with a memory lifetime of up to ∼51  μs.

  3. A fast and accurate online sequential learning algorithm for feedforward networks.

    Science.gov (United States)

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
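
    A minimal numpy sketch of the scheme described above, with additive sigmoid hidden nodes: the hidden-node parameters are drawn at random, the output weights are solved analytically on an initial batch, and then updated recursively chunk by chunk. The sizes, regularization term and toy data below are illustrative assumptions, not the paper's settings.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      class OSELM:
          """Minimal online sequential ELM with additive sigmoid hidden nodes."""

          def __init__(self, n_inputs, n_hidden, rng=None):
              rng = np.random.default_rng(rng)
              # Hidden-node parameters are drawn at random and never retrained.
              self.W = rng.uniform(-1, 1, size=(n_inputs, n_hidden))
              self.b = rng.uniform(-1, 1, size=n_hidden)
              self.beta = None   # output weights, determined analytically
              self.P = None      # inverse of H^T H, maintained recursively

          def _hidden(self, X):
              return sigmoid(X @ self.W + self.b)

          def init_batch(self, X0, y0):
              """Initialisation phase: a small batch solved by least squares."""
              H0 = self._hidden(X0)
              self.P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(H0.shape[1]))
              self.beta = self.P @ H0.T @ y0

          def partial_fit(self, X, y):
              """Sequential phase: recursive least-squares update per chunk."""
              H = self._hidden(X)
              K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
              self.P = self.P - self.P @ H.T @ K @ H @ self.P
              self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

          def predict(self, X):
              return self._hidden(X) @ self.beta

      # Toy usage: learn y = sin(x) one chunk at a time.
      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(500, 1))
      y = np.sin(X).ravel()
      model = OSELM(n_inputs=1, n_hidden=40, rng=0)
      model.init_batch(X[:100], y[:100])
      for start in range(100, 500, 50):
          model.partial_fit(X[start:start + 50], y[start:start + 50])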

  4. Scheduling with artificial neural networks

    OpenAIRE

    Gürgün, Burçkaan

    1993-01-01

    Ankara : Department of Industrial Engineering and The Institute of Engineering and Sciences of Bilkent Univ., 1993. Thesis (Master's) -- Bilkent University, 1993. Includes bibliographical references leaves 59-65. Artificial Neural Networks (ANNs) attempt to emulate the massively parallel and distributed processing of the human brain. They are being examined for a variety of problems that have been very difficult to solve. The objective of this thesis is to review the curren...

  5. Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control

    Science.gov (United States)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1997-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of previous network. The proposed method should work for applications wherein an arbitrary large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  6. Neural network application to aircraft control system design

    Science.gov (United States)

    Troudet, Terry; Garg, Sanjay; Merrill, Walter C.

    1991-01-01

    The feasibility of using artificial neural network as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research identified to enhance the practical applicability of neural networks to flight control design.

  7. Neural network application to aircraft control system design

    Science.gov (United States)

    Troudet, Terry; Garg, Sanjay; Merrill, Walter C.

    1991-01-01

    The feasibility of using artificial neural networks as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research are identified to enhance the practical applicability of neural networks to flight control design.

  8. Control of 12-Cylinder Camless Engine with Neural Networks

    Directory of Open Access Journals (Sweden)

    Ashhab Moh’d Sami

    2017-01-01

    Full Text Available The 12-cylinder camless engine breathing process is modeled with artificial neural networks (ANNs). The inputs to the net are the intake valve lift (IVL) and intake valve closing timing (IVC), whereas the output of the net is the cylinder air charge (CAC). The ANN is trained with data collected from an engine simulation model which is based on thermodynamics principles and calibrated against real engine data. A method for adapting single-output feed-forward neural networks is proposed and applied to the camless engine ANN model. As a consequence, the overall 12-cylinder camless engine feedback controller is upgraded and the necessary changes are implemented in order to contain the adaptive neural network, with the objective of tracking the cylinder air charge (driver’s torque demand) while minimizing the pumping losses (increasing engine efficiency). All the needed measurements are extracted only from the two conventional and inexpensive sensors, namely, the mass air flow through the throttle body (MAF) and the intake manifold absolute pressure (MAP) sensors. The feedback controller’s capability is demonstrated through computer simulation.

  9. The LILARTI neural network system

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system, in sufficient detail to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  10. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for the parameterization of global solar radiation. The available data from twenty-one stations are used for training the neural network, and the data from the other ten stations are used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data were not used in the training, in order to demonstrate the performance of the neural network at unknown stations. The results indicate a good agreement between the parameterized solar radiation values and the actual measured values.
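
    A minimal sketch of this kind of mapping, using a small multilayer feed-forward regressor on station features; the synthetic placeholder data, hidden-layer size and other settings below are illustrative assumptions, not the values used in the study.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Placeholder data standing in for the training and validation stations;
      # columns: latitude, longitude, altitude, sunshine duration, period number.
      rng = np.random.default_rng(0)
      X_train, X_valid = rng.random((210, 5)), rng.random((100, 5))
      y_train = X_train @ np.array([0.2, 0.1, 0.3, 0.9, 0.05])  # synthetic target

      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
      )
      model.fit(X_train, y_train)
      parameterized_radiation = model.predict(X_valid)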

  11. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    are examined. The models are separated into three groups representing input/output descriptions as well as state space descriptions: - Models, where all in- and outputs are measurable (static networks). - Models, where some inputs are non-measurable (recurrent networks). - Models, where some in- and some...... outputs are non-measurable (recurrent networks with incomplete state information). The three groups are ordered in increasing complexity, and for each group it is shown how to solve the problems concerning training and application of the specific model type. Of particular interest are the model types...... Kalmann filter) representing state space description. The potentials of neural networks for control of non-linear processes are also examined, focusing on three different groups of control concepts, all considered as generalizations of known linear control concepts to handle also non-linear processes...

  12. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    Science.gov (United States)

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between

  13. Practical neural network recipes in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed.

  14. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  15. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is dedicated to investigators in visual pattern recognition, Artificial Neural Networking and related disciplines. The document also describes the MemBrain application environment as a powerful and easy-to-use neural network editor and simulator supporting ANN.

  16. Artificial neural networks application for solid fuel slagging intensity predictions

    Directory of Open Access Journals (Sweden)

    Kakietek Sławomir

    2017-01-01

    Full Text Available Slagging in pulverized steam boilers very often leads to heat transfer problems, corrosion and unplanned boiler outages, which increase the cost of energy production and decrease its efficiency. Slagging especially occurs in regions with reducing atmospheres, which nowadays are very common due to strict limits on NOx emissions. Moreover, alternative fuels such as biomass, which have been used in combustion systems for two decades in order to decrease CO2 emissions, also usually increase the risk of slagging. Thus predicting the slagging properties of fuels is not a minor issue that can be neglected before purchasing or blending fuels. Such prediction is, however, rather difficult, and even commonly used standard laboratory methods, such as fusion temperature determination or special indices calculated on the basis of proximate and ultimate analyses, very often show no reasonable correlation with real boiler fuel behaviour. In this paper a method for determining the slagging properties of solid fuels, based on laboratory investigation and artificial neural networks, is presented. A fuel database with over 40 fuels was created. Neural network simulations were carried out in order to predict the onset temperature and intensity of slagging. Reasonable results were obtained for some of the tested neural networks, especially for hybrid feedforward networks with the PCA technique. Consequently, the neural network model will be used in the Common Intelligent Boiler Operation Platform (CIBOP) being elaborated within the CERUBIS research project for two BP-1150 and BB-1150 steam boilers. The model, among other things, enables proper fuel selection in order to minimize slagging risk.

  17. Cryptography based on neural networks - analytical results

    International Nuclear Information System (INIS)

    Rosen-Zvi, Michal; Kanter, Ido; Kinzel, Wolfgang

    2002-01-01

    The mutual learning process between two parity feed-forward networks with discrete and continuous weights is studied analytically, and we find that the number of steps required to achieve full synchronization between the two networks in the case of discrete weights is finite. The synchronization process is shown to be non-self-averaging and the analytical solution is based on random auxiliary variables. The learning time of an attacker that is trying to imitate one of the networks is examined analytically and is found to be much longer than the synchronization time. Analytical results are found to be in agreement with simulations. (letter to the editor)
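
    A toy sketch of mutual learning between two parity machines with discrete, bounded weights, in the spirit of the setup analysed above; the network sizes, the Hebbian update rule and the agreement-only update schedule are assumptions chosen for illustration rather than the paper's exact model.

      import numpy as np

      rng = np.random.default_rng(1)
      K, N, L = 3, 10, 3   # hidden units, inputs per unit, weight bound (illustrative)

      def output(w, x):
          """Parity machine output: product of the signs of the hidden fields."""
          sigma = np.sign(np.sum(w * x, axis=1))
          sigma[sigma == 0] = 1
          return sigma, int(np.prod(sigma))

      def hebbian_update(w, x, sigma, tau):
          """Update only the hidden units that agree with the (common) output."""
          for k in range(K):
              if sigma[k] == tau:
                  w[k] = np.clip(w[k] + tau * x[k], -L, L)

      wA = rng.integers(-L, L + 1, size=(K, N))
      wB = rng.integers(-L, L + 1, size=(K, N))

      steps = 0
      while not np.array_equal(wA, wB):
          x = rng.choice([-1, 1], size=(K, N))
          sA, tA = output(wA, x)
          sB, tB = output(wB, x)
          if tA == tB:                      # both parties update only on agreement
              hebbian_update(wA, x, sA, tA)
              hebbian_update(wB, x, sB, tB)
          steps += 1
      print("synchronized after", steps, "common inputs")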

  18. Using neural networks with jet shapes to identify b jets in e+e- interactions

    International Nuclear Information System (INIS)

    Bellantoni, L.; Conway, J.S.; Jacobsen, J.E.; Pan, Y.B.; Wu Saulan

    1991-01-01

    A feed-forward neural network trained using backpropagation was used to discriminate between b and light quark jets in e+e- → Z0 → qq̄ events. The information presented to the network consisted of 25 jet shape variables. The network successfully identified b jets in two- and three-jet events modeled using a detector simulation. The jet identification efficiency for two-jet events was 61%, with a probability of 20% for calling a light quark jet a b jet. (orig.)

  19. Selection of hidden layer nodes in neural networks by statistical tests

    International Nuclear Information System (INIS)

    Ciftcioglu, Ozer

    1992-05-01

    A statistical methodology for selecting the number of hidden layer nodes in feedforward neural networks is described. The method considers the network as an empirical model for the experimental data set subject to pattern classification, so that the selection process becomes model estimation through parameter identification. The identification is treated as an overdetermined estimation problem and solved with a nonlinear least squares minimization technique. The number of hidden layer nodes is determined as the result of hypothesis testing. Accordingly, a network structure that is redundant with respect to the number of parameters is avoided while the classification error is kept to a minimum. (author). 11 refs.; 4 figs.; 1 tab

  20. Mode Choice Modeling Using Artificial Neural Networks

    OpenAIRE

    Edara, Praveen Kumar

    2003-01-01

    Artificial intelligence techniques have produced excellent results in many diverse fields of engineering. Techniques such as neural networks and fuzzy systems have found their way into transportation engineering. In recent years, neural networks are being used instead of regression techniques for travel demand forecasting purposes. The basic reason lies in the fact that neural networks are able to capture complex relationships and learn from examples and also able to adapt when new data becom...

  1. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks based on the gradient descent has a significant drawback of slow convergence. A Gauss-Newton method based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed-up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the applications of the RLS type algorithm to identification of nonlinear processes using a local recurrent neural network are also included in this paper

  2. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  3. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  4. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  5. Finite connectivity attractor neural networks

    International Nuclear Information System (INIS)

    Wemmenhove, B; Coolen, A C C

    2003-01-01

    We study a family of diluted attractor neural networks with a finite average number of (symmetric) connections per neuron. As in finite connectivity spin glasses, their equilibrium properties are described by order parameter functions, for which we derive an integral equation in replica symmetric approximation. A bifurcation analysis of this equation reveals the locations of the paramagnetic to recall and paramagnetic to spin-glass transition lines in the phase diagram. The line separating the retrieval phase from the spin-glass phase is calculated at zero temperature. All phase transitions are found to be continuous

  6. Prediction of friction factor of pure water flowing inside vertical smooth and microfin tubes by using artificial neural networks

    Science.gov (United States)

    Çebi, A.; Akdoğan, E.; Celen, A.; Dalkilic, A. S.

    2017-02-01

    An artificial neural network (ANN) model of the friction factor in smooth and microfin tubes under heating, cooling and isothermal conditions was developed in this study. The data used in the ANN were taken from a vertically positioned heat-exchanger experimental setup. A multi-layered feed-forward neural network with the backpropagation algorithm, radial basis function networks and a hybrid PSO-neural network algorithm were applied to the database. The inputs were the ratio of cross-sectional flow area to hydraulic diameter, an experimental condition number (isothermal, heating, or cooling) and the mass flow rate, while the friction factor was the output of the constructed system. It was observed that such a neural network based system could effectively predict the friction factor values of the flows regardless of tube type. A dependency analysis to determine the strongest parameter affecting the network and database was also performed, and tube geometry was found to be the strongest parameter of all.

  7. A training rule which guarantees finite-region stability for a class of closed-loop neural-network control systems.

    Science.gov (United States)

    Kuntanapreeda, S; Fullmer, R R

    1996-01-01

    A training method for a class of neural network controllers is presented which guarantees closed-loop system stability. The controllers are assumed to be nonlinear, feedforward, sampled-data, full-state regulators implemented as single hidden-layer neural networks. The controlled systems must be locally hermitian and observable. Stability of the closed-loop system is demonstrated by determining a Lyapunov function, which can be used to identify a finite stability region about the regulator point.

  8. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286

  9. AIR POLLUITON INDEX PREDICTION USING MULTIPLE NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Zainal Ahmad

    2017-05-01

    Full Text Available Air quality monitoring and forecasting tools are necessary for taking precautionary measures against air pollution, such as reducing the effect of a predicted air pollution peak on the surrounding population and ecosystem. In this study a single Feed-forward Artificial Neural Network (FANN) is shown to be able to predict the Air Pollution Index (API) with a Mean Squared Error (MSE) and coefficient of determination (R2) of 0.1856 and 0.7950, respectively. However, due to the non-robust nature of a single FANN, a selective combination of Multiple Neural Networks (MNN) is introduced using backward elimination and a forward selection method. The results show that both selective combination methods can improve the robustness and performance of the API prediction, with an MSE and R2 of 0.1614 and 0.8210, respectively. This clearly shows that it is possible to reduce the number of networks combined in the MNN for API prediction without loss of any information in terms of the performance of the final API prediction model.
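
    A small sketch of the forward-selection idea for combining multiple networks: given each candidate network's validation predictions, networks are added greedily as long as the simple-average combination keeps lowering the mean squared error. Function and variable names are illustrative; the study's exact combination scheme may differ.

      import numpy as np

      def forward_select(preds, y_true):
          """Greedy forward selection of networks to combine by simple averaging.

          preds: array of shape (n_networks, n_samples), each row a network's
          validation predictions; y_true: the corresponding target values.
          """
          remaining = list(range(len(preds)))
          selected, best_mse = [], np.inf
          while remaining:
              scores = []
              for i in remaining:
                  combo = np.mean(preds[selected + [i]], axis=0)
                  scores.append((np.mean((combo - y_true) ** 2), i))
              mse, i = min(scores)
              if mse >= best_mse:          # stop when adding a network no longer helps
                  break
              best_mse = mse
              selected.append(i)
              remaining.remove(i)
          return selected, best_mse

      # Toy usage: five "networks" predicting a noisy sine.
      rng = np.random.default_rng(0)
      y = np.sin(np.linspace(0, 6, 200))
      preds = np.stack([y + 0.3 * rng.standard_normal(y.size) for _ in range(5)])
      chosen, mse = forward_select(preds, y)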

  10. Gap Filling of Daily Sea Levels by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Lyubka Pashova

    2013-06-01

    Full Text Available In recent years, intelligent methods such as artificial neural networks have been successfully applied to data analysis in different fields of the geosciences. One practical problem often encountered is the presence of gaps in the time series, which prevents their comprehensive use for scientific and practical purposes. The article briefly describes two types of artificial neural network (ANN) architectures: the Feed-Forward Backpropagation (FFBP) network and the recurrent Echo state network (ESN). In some cases, an ANN can be used as an alternative to traditional methods to fill in missing values in time series. We have conducted several experiments to fill the missing values of daily sea levels spanning a 5-year period using both ANN architectures. A multiple linear regression was also applied for the same purpose. The sea level data are derived from the records of the tide gauge Burgas, which is located on the western Black Sea coast. The achieved results show that the performance of the ANN models is better than that of the classical method, and they are very promising for real-time interpolation of missing data in time series.
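
    A compact sketch of the Echo state network idea mentioned above, driven here by a toy daily series and read out with ridge regression to predict the next value (a one-step stand-in for filling a gap); the reservoir size, spectral radius and ridge penalty are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n_res, rho, ridge = 100, 0.9, 1e-6    # reservoir size, spectral radius, penalty

      # Random input and reservoir weights; reservoir rescaled to the target spectral radius.
      W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
      W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
      W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

      def run_reservoir(u):
          """Drive the reservoir with the 1-D series u and collect its states."""
          x = np.zeros(n_res)
          states = []
          for u_t in u:
              x = np.tanh(W_in[:, 0] * u_t + W @ x)
              states.append(x.copy())
          return np.array(states)

      # Toy daily sea-level-like series; train the readout to predict the next value.
      t = np.arange(1000)
      series = np.sin(2 * np.pi * t / 29.5) + 0.1 * rng.standard_normal(len(t))
      X = run_reservoir(series[:-1])
      y = series[1:]
      W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # ridge readout

      # One-step "gap filling": estimate the value following an observed prefix.
      prefix_states = run_reservoir(series[:500])
      estimate = prefix_states[-1] @ W_out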

  11. Neural network stochastic simulation applied for quantifying uncertainties

    Directory of Open Access Journals (Sweden)

    N Foudil-Bey

    2016-09-01

    Full Text Available Generally, geostatistical simulation methods are used to generate several realizations of physical properties in the subsurface; these methods are based on variogram analysis and are limited to measuring the correlation between variables at two locations only. In this paper, we propose a simulation of properties based on supervised neural network training on the existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, geological information and diverse geophysical data can be combined easily. To do this, we used a neural network with a multi-layer perceptron (feed-forward) architecture, and the back-propagation algorithm with the conjugate gradient technique to minimize the error of the network output. The learning process can create links between different variables; this relationship can be used for interpolation of the properties on the one hand, or to generate several possible distributions of the physical properties on the other, by changing each time the random value of the input neurons that was kept constant during the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the mining camp of Val d'Or, Québec (Canada).

  12. Estimation of airway smooth muscle stiffness changes due to length oscillation using artificial neural network.

    Science.gov (United States)

    Al-Jumaily, Ahmed; Chen, Leizhi

    2012-10-07

    This paper presents a novel approach to estimate stiffness changes in airway smooth muscles due to external oscillation. Artificial neural networks are used to model the stiffness changes due to cyclic stretches of the smooth muscles. The nonlinear relationship between stiffness ratios and oscillation frequencies is modeled by a feed-forward neural network (FNN) model. The structure of the FNN is selected through the training and validation using literature data from 11 experiments with different muscle lengths, muscle masses, oscillation frequencies and amplitudes. Data pre-processing methods are used to improve the robustness of the neural network model to match the non-linearity. The validation results show that the FNN model can predict the stiffness ratio changes with a mean square error of 0.0042. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Artificial neural networks for modeling time series of beach litter in the southern North Sea.

    Science.gov (United States)

    Schulz, Marcus; Matthies, Michael

    2014-07-01

    In European marine waters, existing monitoring programs of beach litter need to be improved concerning litter items used as indicators of pollution levels, efficiency, and effectiveness. In order to ease and focus future monitoring of beach litter on few important litter items, feed-forward neural networks consisting of three layers were developed to relate single litter items to general categories of marine litter. The neural networks developed were applied to seven beaches in the southern North Sea and modeled time series of five general categories of marine litter, such as litter from fishing, shipping, and tourism. Results of regression analyses show that general categories were predicted significantly moderately to well. Measured and modeled data were in the same order of magnitude, and minima and maxima overlapped well. Neural networks were found to be eligible tools to deliver reliable predictions of marine litter with low computational effort and little input of information. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Using a neural network in the search for the Higgs boson

    International Nuclear Information System (INIS)

    Hultqvist, K.; Jacobsson, R.; Johansson, K.E.

    1995-01-01

    The search for the Standard Model Higgs boson in high energy e+e- collisions requires analysis techniques which efficiently discriminate against the very large background. A classifier based on a feed-forward neural network has been extensively used in a search in the channel where the Higgs boson is produced in association with neutrinos. The method has significantly improved the sensitivity of the search. With a simple preselection based on event topology followed by a neural network we have obtained a combined background rejection factor of more than 29 000 and a selection efficiency for Higgs particle events of 54%, assuming a mass of 55 GeV/c2 for the Higgs boson. We describe here the details of the analysis with emphasis on the neural network. (orig.)

  15. Statistical and optimization methods to expedite neural network training for transient identification

    International Nuclear Information System (INIS)

    Reifman, J.; Vitela, E.J.; Lee, J.C.

    1993-01-01

    Two complementary methods, statistical feature selection and nonlinear optimization through conjugate gradients, are used to expedite feedforward neural network training. Statistical feature selection techniques in the form of linear correlation coefficients and information-theoretic entropy are used to eliminate redundant and non-informative plant parameters to reduce the size of the network. The method of conjugate gradients is used to accelerate the network training convergence and to systematically calculate the learning and momentum constants at each iteration. The proposed techniques are compared with the backpropagation algorithm using the entire set of plant parameters in the training of neural networks to identify transients simulated with the Midland Nuclear Power Plant Unit 2 simulator. By using 25% of the plant parameters and the conjugate gradients, a 30-fold reduction in CPU time was obtained without degrading the diagnostic ability of the network.
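
    A short sketch of the linear-correlation part of such a feature selection step: inputs that are almost perfectly correlated with an already retained input are treated as redundant and dropped before network training. The threshold is an illustrative assumption.

      import numpy as np

      def drop_redundant(X, threshold=0.95):
          """Keep the first of any group of highly correlated (near-duplicate) inputs.

          X: (n_samples, n_params) array of plant parameters; threshold is illustrative.
          """
          corr = np.abs(np.corrcoef(X, rowvar=False))
          keep = []
          for j in range(X.shape[1]):
              if all(corr[j, k] < threshold for k in keep):
                  keep.append(j)
          return keep

      # Toy usage: the third column duplicates the first and is dropped.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((300, 3))
      X[:, 2] = X[:, 0] + 0.01 * rng.standard_normal(300)
      print(drop_redundant(X))   # -> [0, 1]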

  16. A P2P Botnet detection scheme based on decision tree and adaptive multilayer neural networks.

    Science.gov (United States)

    Alauthaman, Mohammad; Aslam, Nauman; Zhang, Li; Alasem, Rafe; Hossain, M A

    2018-01-01

    In recent years, Botnets have been adopted as a popular method to carry and spread many malicious codes on the Internet. These malicious codes pave the way to execute many fraudulent activities including spam mail, distributed denial-of-service attacks and click fraud. While many Botnets are set up using centralized communication architecture, the peer-to-peer (P2P) Botnets can adopt a decentralized architecture using an overlay network for exchanging command and control data making their detection even more difficult. This work presents a method of P2P Bot detection based on an adaptive multilayer feed-forward neural network in cooperation with decision trees. A classification and regression tree is applied as a feature selection technique to select relevant features. With these features, a multilayer feed-forward neural network training model is created using a resilient back-propagation learning algorithm. A comparison of feature set selection based on the decision tree, principal component analysis and the ReliefF algorithm indicated that the neural network model with features selection based on decision tree has a better identification accuracy along with lower rates of false positives. The usefulness of the proposed approach is demonstrated by conducting experiments on real network traffic datasets. In these experiments, an average detection rate of 99.08 % with false positive rate of 0.75 % was observed.

  17. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning for models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  18. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape the computational performance. In this paper, we propose a novel approach to developing models of the LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degree of neuronal excitability is regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that the LSM with STDP+IP performs better than an LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
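
    For concreteness, a minimal sketch of the two plasticity rules named above: a pair-based STDP weight change and an intrinsic-plasticity threshold adjustment toward a target firing rate. The constants and exact functional forms are illustrative assumptions, not the paper's parameters.

      import numpy as np

      A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # STDP amplitudes and time constant (ms)
      ETA_IP, TARGET_RATE = 0.001, 5.0           # IP learning rate and target rate (Hz)

      def stdp_dw(t_pre, t_post):
          """Pair-based STDP: potentiate if the presynaptic spike precedes the
          postsynaptic one, depress otherwise."""
          dt = t_post - t_pre
          if dt >= 0:
              return A_PLUS * np.exp(-dt / TAU)
          return -A_MINUS * np.exp(dt / TAU)

      def ip_update(threshold, observed_rate):
          """Intrinsic plasticity: raise the firing threshold when the neuron fires
          above the target rate, lower it when it fires below."""
          return threshold + ETA_IP * (observed_rate - TARGET_RATE)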

  19. Neural attractor network for application in visual field data classification

    International Nuclear Information System (INIS)

    Fink, Wolfgang

    2004-01-01

    The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination, that may act as a ' counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success of over 80%. Some of the main advantages of the Hopfield-attractor-network-based approach over feed-forward type neural networks are: (1) network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) assignment of an auto-diagnosis confidence level is possible by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data, presented here, furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, for example and in particular, with respect to non-ophthalmic clinics or in communities where perimetric expertise is not readily available

  20. Multistability in bidirectional associative memory neural networks

    International Nuclear Information System (INIS)

    Huang Gan; Cao Jinde

    2008-01-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of these equilibria are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers, respectively. Finally, two numerical examples are presented to illustrate the validity of our results.
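
    For context, a generic delayed BAM system of the kind analysed here can be written in LaTeX as follows (the precise model and conditions in the Letter may differ; this is only an illustration of the counting statement):

      \begin{align}
        \dot{x}_i(t) &= -x_i(t) + \sum_{j=1}^{n} a_{ij}\, f\bigl(y_j(t-\tau)\bigr) + I_i, & i &= 1,\dots,n,\\
        \dot{y}_j(t) &= -y_j(t) + \sum_{i=1}^{n} b_{ji}\, g\bigl(x_i(t-\sigma)\bigr) + J_j, & j &= 1,\dots,n,
      \end{align}

      % The state is 2n-dimensional; the result above states that such a system can
      % possess $3^{n}$ equilibria, $2^{n}$ of which are locally exponentially stable.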

  1. Multistability in bidirectional associative memory neural networks

    Science.gov (United States)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of these equilibria are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers, respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  2. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  3. Neural Network Based Load Frequency Control for Restructuring ...

    African Journals Online (AJOL)

    Neural Network Based Load Frequency Control for Restructuring Power Industry. ... an artificial neural network (ANN) application of load frequency control (LFC) of a Multi-Area power system by using a neural network controller is presented.

  4. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  5. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others: MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  6. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  7. Interpretable neural networks with BP-SOM

    NARCIS (Netherlands)

    Weijters, A.J.M.M.; Bosch, van den A.P.J.; Pobil, del A.P.; Mira, J.; Ali, M.

    1998-01-01

    Artificial Neural Networks (ANNs) are used successfully in industry and commerce. This is not surprising since neural networks are especially competitive for complex tasks for which insufficient domain-specific knowledge is available. However, interpretation of models induced by ANNs is often

  8. The neural network approach to parton fitting

    International Nuclear Information System (INIS)

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-01-01

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits

  9. Neural Network to Solve Concave Games

    OpenAIRE

    Liu, Zixin; Wang, Nengfa

    2014-01-01

    The issue of using a neural network method to solve concave games is considered. Combined with variational inequalities, the Ky Fan inequality, and projection equations, concave games are transformed into a neural network model. On the basis of Lyapunov stability theory, some stability results are also given. Finally, simulation results for two classic games are given to illustrate the theoretical results.

  10. Neural Network Algorithm for Particle Loading

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given

  11. Memory in Neural Networks and Glasses

    NARCIS (Netherlands)

    Heerema, M.

    2000-01-01

    The thesis attempts to model a neural network in a way which, at essential points, is biologically realistic. In a biological context, the changes of the synapses of the neural network are most often described by what is called `Hebb's learning rule'. On careful analysis it is, in fact, nothing but a

  12. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  13. Adaptive Control of Nonlinear Discrete-Time Systems by Using OS-ELM Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao-Li Li

    2014-01-01

    Full Text Available As a kind of novel feedforward neural network with a single hidden layer, ELM (extreme learning machine) neural networks are studied for the identification and control of nonlinear dynamic systems. The property of simple structure and fast convergence of ELM can be shown clearly. In this paper, we are interested in adaptive control of nonlinear dynamic plants by using OS-ELM (online sequential extreme learning machine) neural networks. Based on data scope division, the problem that the training process of an ELM neural network is sensitive to the initial training data is also solved. According to the output range of the controlled plant, the data corresponding to this range will be used to initialize the ELM. Furthermore, due to the drawback of conventional adaptive control, when the OS-ELM neural network is used for adaptive control of a system with jumping parameters, the topological structure of the neural network can be adjusted dynamically by using a multiple model switching strategy, and an MMAC (multiple model adaptive control) scheme will be used to improve the control performance. Simulation results are included to complement the theoretical results.

  14. Identification and adaptive neural network control of a DC motor system with dead-zone characteristics.

    Science.gov (United States)

    Peng, Jinzhu; Dubay, Rickey

    2011-10-01

    In this paper, an adaptive control approach based on the neural networks is presented to control a DC motor system with dead-zone characteristics (DZC), where two neural networks are proposed to formulate the traditional identification and control approaches. First, a Wiener-type neural network (WNN) is proposed to identify the motor DZC, which formulates the Wiener model with a linear dynamic block in cascade with a nonlinear static gain. Second, a feedforward neural network is proposed to formulate the traditional PID controller, termed as PID-type neural network (PIDNN), which is then used to control and compensate for the DZC. In this way, the DC motor system with DZC is identified by the WNN identifier, which provides model information to the PIDNN controller in order to make it adaptive. Back-propagation algorithms are used to train both neural networks. Also, stability and convergence analysis are conducted using the Lyapunov theorem. Finally, experiments on the DC motor system demonstrated accurate identification and good compensation for dead-zone with improved control performance over the conventional PID control. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
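
    A rough sketch of a PID-type neural network of the kind described above: three hidden units carry the proportional, integral and derivative of the tracking error, and the output weights play the role of adaptable PID gains. The structure, adaptation rule and toy plant below are assumptions for illustration, not the paper's exact formulation.

      import numpy as np

      class PIDNN:
          """PID-type neural network sketch: hidden layer = (P, I, D) of the error,
          output weights act as adaptable gains (Kp, Ki, Kd)."""

          def __init__(self, dt, lr=1e-3, w=(1.0, 0.1, 0.01)):
              self.dt, self.lr = dt, lr
              self.w = np.array(w, dtype=float)
              self.integral, self.prev_error = 0.0, 0.0
              self.h = np.zeros(3)

          def control(self, error):
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              self.h = np.array([error, self.integral, derivative])  # hidden activations
              return float(self.w @ self.h)

          def adapt(self, error):
              # Gradient-style update of the output weights to reduce the squared
              # tracking error; the plant gain is assumed positive and its magnitude
              # is absorbed into the learning rate.
              self.w += self.lr * error * self.h

      # Toy usage: regulate a first-order plant toward a setpoint of 1.0.
      plant, setpoint, dt = 0.0, 1.0, 0.01
      ctrl = PIDNN(dt)
      for _ in range(2000):
          error = setpoint - plant
          u = ctrl.control(error)
          ctrl.adapt(error)
          plant += dt * (-plant + u)          # simple first-order plant dynamics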

  15. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    Science.gov (United States)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
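
    The stated equivalence can be made concrete for a one-hidden-layer network fed by lagged inputs and outputs; the LaTeX notation below is illustrative rather than the paper's:

      \hat{y}(t) = \sum_{j=1}^{H} w_j \,\phi\!\left( \sum_{i=1}^{p} a_{ji}\, y(t-i) + \sum_{i=1}^{q} b_{ji}\, u(t-i) \right),
      \qquad \phi(z) = \sum_{k=0}^{K} c_k z^{k}.

    Expanding the polynomial activation produces a constant term, terms linear in y(t-i) and u(t-i) (a linear ARMA model), and higher-order cross-products of the lagged signals (the nonlinear ARMA terms), so the network parameters map onto ARMA coefficients.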

  16. Statistical inference, the bootstrap, and neural-network modeling with application to foreign exchange rates.

    Science.gov (United States)

    White, H; Racine, J

    2001-01-01

    We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.
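
    A rough sketch of resampling-based input-relevance testing in this spirit: across bootstrap resamples, compare the fit of the full model with a model in which the candidate input is replaced by its mean. The helper names are hypothetical and the statistic is a simplification of the tests proposed in the paper.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      def bootstrap_input_test(fit_fn, X, y, j, n_boot=200, seed=0):
          """Resampling sketch for the relevance of input column j.

          fit_fn(X, y) must return a fitted object with a .predict(X) method. For
          each bootstrap resample, the in-sample error of the full model is compared
          with that of a model where column j is replaced by its mean.
          """
          rng = np.random.default_rng(seed)
          n, diffs = len(y), []
          for _ in range(n_boot):
              idx = rng.integers(0, n, n)
              Xb, yb = X[idx], y[idx]
              full = fit_fn(Xb, yb)
              Xr = Xb.copy()
              Xr[:, j] = Xb[:, j].mean()
              restricted = fit_fn(Xr, yb)
              diffs.append(np.mean((restricted.predict(Xr) - yb) ** 2)
                           - np.mean((full.predict(Xb) - yb) ** 2))
          diffs = np.asarray(diffs)
          return diffs.mean(), float((diffs <= 0).mean())   # effect size, rough p-value

      # Toy usage with a linear model in which column 2 carries no information.
      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 3))
      y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(200)
      print(bootstrap_input_test(lambda A, b: LinearRegression().fit(A, b), X, y, j=2))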

  17. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ruchi D. Chande

    2017-01-01

    Full Text Available Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  18. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.

    Science.gov (United States)

    Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  19. Gas metal arc welding of butt joint with varying gap width based on neural networks

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2005-01-01

    This paper describes the application of neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling, to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters, has been based on a multi-layer feed-forward network trained with the Levenberg-Marquardt algorithm for non-linear least squares.

  20. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.

  1. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem™ (SPW™), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  2. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  3. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions.

  4. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutar; Hawkins, Jeff

    2016-09-14

    The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods (autoregressive integrated moving average), feedforward neural networks (time delay neural network and online sequential extreme learning machine), and recurrent neural networks (long short-term memory and echo-state networks), on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.

  5. Learning representations for the early detection of sepsis with deep neural networks.

    Science.gov (United States)

    Kam, Hye Jin; Kim, Ha Young

    2017-10-01

    Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the areas under the ROC curve (AUC) of the models were 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Enhancing neural-network performance via assortativity

    International Nuclear Information System (INIS)

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-01-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  7. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed.

  8. A neural network approach to burst detection.

    Science.gov (United States)

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the issue of detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.

  9. Collision avoidance using neural networks

    Science.gov (United States)

    Sugathan, Shilpa; Sowmya Shree, B. V.; Warrier, Mithila R.; Vidhyapathi, C. M.

    2017-11-01

    Nowadays, accidents on roads are caused by the negligence of drivers and pedestrians or by unexpected obstacles that come into the vehicle’s path. In this paper, a model (robot) is developed to assist drivers in traveling smoothly without accidents. It reacts to real-time obstacles on the four critical sides of the vehicle and takes the necessary action. The sensor used for detecting the obstacle was an IR proximity sensor. A single layer perceptron neural network is used to train and test all possible combinations of sensor results using Matlab (offline). A microcontroller (ARM Cortex-M3 LPC1768) is used to control the vehicle through the output data which is received from Matlab via serial communication. Hence, the vehicle becomes capable of reacting to any combination of real-time obstacles.
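
    A minimal sketch of the offline step described above: a single-layer perceptron trained on all combinations of four binary proximity sensors. The truth table used as the target (stop when the assumed "front" sensor reports an obstacle) is a stand-in, not the paper's actual training set.

```python
import numpy as np

# All 16 combinations of the four IR proximity sensors (1 = obstacle detected).
X = np.array([[int(b) for b in f"{i:04b}"] for i in range(16)])
# Assumed target: issue a stop command when the front sensor (bit 0) sees an obstacle.
y = X[:, 0]

# Single-layer perceptron trained with the classic perceptron learning rule.
w = np.zeros(4)
bias = 0.0
for epoch in range(20):
    for xi, ti in zip(X, y):
        pred = int(w @ xi + bias > 0)
        w += 0.1 * (ti - pred) * xi
        bias += 0.1 * (ti - pred)

print("weights:", w, "bias:", bias)
print("all combinations classified correctly:",
      np.all((X @ w + bias > 0).astype(int) == y))
```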

  10. Neural networks: a biased overview

    International Nuclear Information System (INIS)

    Domany, E.

    1988-01-01

    An overview of recent activity in the field of neural networks is presented. The long-range aim of this research is to understand how the brain works. First some of the problems are stated and terminology defined; then an attempt is made to explain why physicists are drawn to the field, and their main potential contribution. In particular, in recent years some interesting models have been introduced by physicists. A small subset of these models is described, with particular emphasis on those that are analytically soluble. Finally a brief review of the history and recent developments of single- and multilayer perceptrons is given, bringing the situation up to date regarding the central immediate problem of the field: search for a learning algorithm that has an associated convergence theorem

  11. Hybrid digital signal processing and neural networks for automated diagnostics using NDE methods

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Yan, W.

    1993-11-01

    The primary purpose of the current research was to develop an integrated approach by combining information compression methods and artificial neural networks for the monitoring of plant components using nondestructive examination data. Specifically, data from eddy current inspection of heat exchanger tubing were utilized to evaluate this technology. The focus of the research was to develop and test various data compression methods (for eddy current data) and the performance of different neural network paradigms for defect classification and defect parameter estimation. Feedforward, fully-connected neural networks, that use the back-propagation algorithm for network training, were implemented for defect classification and defect parameter estimation using a modular network architecture. A large eddy current tube inspection database was acquired from the Metals and Ceramics Division of ORNL. These data were used to study the performance of artificial neural networks for defect type classification and for estimating defect parameters. A PC-based data preprocessing and display program was also developed as part of an expert system for data management and decision making. The results of the analysis showed that for effective (low-error) defect classification and estimation of parameters, it is necessary to identify proper feature vectors using different data representation methods. The integration of data compression and artificial neural networks for information processing was established as an effective technique for automation of diagnostics using nondestructive examination methods

  12. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  13. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  14. Neural networks and orbit control in accelerators

    International Nuclear Information System (INIS)

    Bozoki, E.; Friedman, A.

    1994-01-01

    An overview of the architecture, workings and training of Neural Networks is given. We stress the aspects which are important for the use of Neural Networks for orbit control in accelerators and storage rings, especially its ability to cope with the nonlinear behavior of the orbit response to 'kicks' and the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given

  15. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Application of neural network to CT

    International Nuclear Information System (INIS)

    Ma, Xiao-Feng; Takeda, Tatsuoki

    1999-01-01

    This paper presents a new method for two-dimensional image reconstruction by using a multilayer neural network. Multilayer neural networks are extensively investigated and practically applied to the solution of various problems such as inverse problems or time series prediction problems. By learning an input-output mapping from a set of examples, neural networks can be regarded as synthesizing an approximation of a multidimensional function (that is, solving the problem of hypersurface reconstruction, including smoothing and interpolation). From this viewpoint, neural networks are well suited to the solution of CT image reconstruction. Though the conventionally used objective function of a neural network is composed of a sum of squared errors of the output data, we can define an objective function composed of a sum of residues of an integral equation. By employing an appropriate line integral for this integral equation, we can construct a neural network that can be used for CT. We applied this method to some model problems and obtained satisfactory results. As it is not necessary to discretize the integral equation in this reconstruction method, its application to problems with complicated geometrical shapes is also feasible. Moreover, in neural networks, interpolation is performed quite smoothly; as a result, inverse mapping can be achieved smoothly even in the presence of experimental and numerical errors. However, use of the conventional back-propagation technique for optimization leads to an expensive computation cost. To overcome this drawback, second-order optimization methods or parallel computing will be applied in the future. (J.P.N.)

  17. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces the training efficiency significantly. For training the hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as is also demonstrated by the comprehensive experimental results in this paper.

  18. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  19. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure; weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
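
    A minimal sketch of the idea above: a feedforward network trained to map sampled kinetic curves directly to compartmental parameters, so that estimation at test time is a single forward pass rather than an iterative regression. The monoexponential model, sampling grid, noise level, and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 20)                      # assumed sampling times

# Generate noisy monoexponential curves A*exp(-k*t) with random parameters.
n = 2000
A = rng.uniform(0.5, 2.0, n)
k = rng.uniform(0.1, 1.0, n)
curves = A[:, None] * np.exp(-k[:, None] * t) + 0.01 * rng.normal(size=(n, len(t)))
params = np.column_stack([A, k])

# Feedforward net: kinetic samples in, (A, k) out.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(curves, params)

# Estimate parameters of a new noisy curve (true A = 1.2, k = 0.4).
test = 1.2 * np.exp(-0.4 * t) + 0.01 * rng.normal(size=len(t))
print("estimated (A, k):", net.predict(test[None, :])[0])
```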

  20. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we supervisedly train neural networks to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show a remarkable success that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.

  1. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters kept fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
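
    A minimal sketch of using a genetic algorithm to optimize the weights of a small feed-forward network for one-step-ahead prediction, assuming a fixed topology as described above. The synthetic series, lag count, population size, and mutation scale are assumptions; the selection scheme is simple truncation selection, not necessarily the scheme used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy exchange-rate-like series; the task is one-step-ahead prediction from 3 lags.
series = np.cumsum(0.01 * rng.normal(size=300)) + 1.0
X = np.column_stack([series[i:-3 + i] for i in range(3)])
y = series[3:]

n_in, n_hid = 3, 5
n_weights = n_in * n_hid + n_hid + n_hid + 1     # W1, b1, W2, b2 flattened

def predict(theta, X):
    W1 = theta[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = theta[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = theta[-n_hid - 1:-1]
    b2 = theta[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(theta):
    return -np.mean((predict(theta, X) - y) ** 2)

# Simple genetic algorithm: truncation selection plus Gaussian mutation.
pop = rng.normal(scale=0.5, size=(50, n_weights))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-25:]]               # keep the fitter half
    children = parents[rng.integers(0, 25, 25)] + rng.normal(scale=0.05,
                                                             size=(25, n_weights))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best MSE:", -fitness(best))
```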

  2. Stock market index prediction using neural networks

    Science.gov (United States)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data of the Wall Street Journal's Dow Jones Industrial Index has been used as a benchmark in our experiments, where Radial Basis Function-based neural networks have been designed to model these indices over the period from January 1988 to December 1992. A notable success has been achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrated that the Radial Basis Function neural network represents an excellent candidate for predicting the stock market index.
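
    A minimal sketch of a Radial Basis Function network for index prediction: Gaussian units centred by k-means and a linear output layer solved by least squares. The synthetic series below is a stand-in for the Dow Jones data, and the lag count, number of centres, and width rule are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Synthetic stand-in for a monthly index series (the Dow Jones data is not reproduced here).
series = 2000 + np.cumsum(rng.normal(10, 30, size=120))
lags = 3
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]

# RBF network: Gaussian units centred by k-means, linear output layer by least squares.
n_centers = 10
centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
width = np.mean(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2))

def rbf_features(X):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

w, *_ = np.linalg.lstsq(rbf_features(X), y, rcond=None)
pred = rbf_features(X) @ w
print("in-sample relative error:", np.mean(np.abs(pred - y) / y))
```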

  3. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
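
    A minimal illustration of how a network can be made to estimate a conditional quantile: train it on the pinball (check) loss for quantile level tau. The tiny one-hidden-layer architecture, plain gradient descent, and the heteroscedastic toy data are assumptions of the sketch, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy heteroscedastic data: the spread of y grows with x.
x = rng.uniform(0, 2, size=(500, 1))
y = x[:, 0] + (0.2 + 0.4 * x[:, 0]) * rng.normal(size=500)

tau = 0.9                 # quantile level to estimate
W1 = rng.normal(scale=0.5, size=(1, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=10);      b2 = 0.0

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

lr = 0.05
for step in range(3000):
    q, h = forward(x)
    e = y - q
    # Gradient of the pinball loss rho_tau(e) = e * (tau - 1{e < 0}) w.r.t. the prediction q.
    g = -(tau - (e < 0).astype(float)) / len(y)
    W2 -= lr * h.T @ g
    b2 -= lr * g.sum()
    dh = np.outer(g, W2) * (1 - h ** 2)
    W1 -= lr * x.T @ dh
    b1 -= lr * dh.sum(axis=0)

q, _ = forward(x)
print("fraction of targets below the estimated 0.9-quantile:", np.mean(y <= q))
```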

  4. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for Quark-Gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data was not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on non-physics data such as the MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  5. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases we employed the residuals of the integral equation or the differential equations as the objective functions for the training process of the neural network. This is different from the conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems both methods gave satisfactory results, and they are considered promising for certain kinds of problems. (author)

  6. Potential usefulness of an artificial neural network for assessing ventricular size

    International Nuclear Information System (INIS)

    Fukuda, Haruyuki; Nakajima, Hideyuki; Usuki, Noriaki; Saiwai, Shigeo; Miyamoto, Takeshi; Inoue, Yuichi; Onoyama, Yasuto.

    1995-01-01

    An artificial neural network approach was applied to assess ventricular size from computed tomograms. Three-layer, feed-forward neural networks with a back propagation algorithm were designed to distinguish between three degrees of enlargement of the ventricles on the basis of the patient's age and six items of computed tomographic information. Data for training and testing the neural network were created with computed tomograms of the brains selected at random from daily examinations. Four radiologists decided subjectively, by mutual consent and based on their experience, whether the ventricles were within normal limits, slightly enlarged, or enlarged for the patient's age. The data for training was obtained from 38 patients. The data for testing was obtained from 47 other patients. The performance of the neural network trained using the data for training was evaluated by the rate of correct answers to the data for testing. The ratio of valid solutions produced by the trained neural networks in response to the test data was more than 90% for all conditions in this study. The solutions were completely valid in the neural networks with two or three units at the hidden layer with 2,200 learning iterations, and with two units at the hidden layer with 11,000 learning iterations. The squared error decreased remarkably in the range from 0 to 500 learning iterations, and remained nearly constant beyond two thousand learning iterations. The neural network with a hidden layer having two or three units showed high decision performance. The preliminary results strongly suggest that the neural network approach has potential utility in computer-aided estimation of enlargement of the ventricles. (author)

  7. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.

  8. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451

  9. Diagnosis method utilizing neural networks

    International Nuclear Information System (INIS)

    Watanabe, K.; Tamayama, K.

    1990-01-01

    Studies have been made on the technique of neural networks, which will be used to identify a cause of a small anomalous state in the reactor coolant system of the ATR (Advanced Thermal Reactor). Three phases of analyses were carried out in this study. First, simulation for 100 seconds was made to determine how the plant parameters respond after the occurrence of a transient decrease in reactivity, flow rate and temperature of feed water and increase in the steam flow rate and steam pressure, which would produce a decrease of water level in a steam drum of the ATR. Next, the simulation data was analysed utilizing an autoregressive model. From this analysis, a total of 36 coherency functions up to 0.5 Hz in each transient were computed among nine important and detectable plant parameters: neutron flux, flow rate of coolant, steam or feed water, water level in the steam drum, pressure and opening area of control valve in a steam pipe, feed water temperature and electrical power. Last, learning of neural networks composed of 96 input, 4-9 hidden and 5 output layer units was done by use of the generalized delta rule, namely a back-propagation algorithm. These convergent computations were continued until the difference between the desired outputs (1 for the direct cause, 0 for the four other ones) and the actual outputs reached less than 10%. (1) Coherency functions were not governed by the decreasing rate of reactivity in the range of 0.41x10^-2 dollar/s to 1.62x10^-2 dollar/s, by the decreasing depth of the feed water temperature in the range of 3 deg C to 10 deg C, or by a change of 10% or less in the three other causes. Change in coherency functions only depended on the type of cause. (2) The direct cause could be discriminated from the other four with an output level of 0.94±0.01. A maximum output height of 0.06 was found among the other four causes. (3) The calculation load, represented as the product of the number of learning iterations and the number of hidden units, did not depend on the

  10. Parameter extraction with neural networks

    Science.gov (United States)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This is particularly severe, because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs

  11. The quest for a Quantum Neural Network

    OpenAIRE

    Schuld, M.; Sinayskiy, I.; Petruccione, F.

    2014-01-01

    With the overwhelming success in the field of quantum information in the last decades, the "quest" for a Quantum Neural Network (QNN) model began in order to combine quantum computing with the striking properties of neural computing. This article presents a systematic approach to QNN research, which so far consists of a conglomeration of ideas and proposals. It outlines the challenge of combining the nonlinear, dissipative dynamics of neural computing and the linear, unitary dynamics of quant...

  12. Detection of different states of sleep in the rodents by the means of artificial neural networks

    Science.gov (United States)

    Musatov, Viacheslav; Dykin, Viacheslav; Pitsik, Elena; Pisarchik, Alexander

    2018-04-01

    This paper considers the possibility of classification of electroencephalogram (EEG) and electromyogram (EMG) signals corresponding to different phases of sleep and wakefulness of mice by means of artificial neural networks. A feed-forward artificial neural network based on a multilayer perceptron was created and trained on the data of one of the rodents. The trained network was used to read and classify the EEG and EMG data corresponding to different phases of sleep and wakefulness of the same mouse and another mouse. The results show good recognition quality for all phases for the rodent on which the training was conducted (80-99%) and acceptable recognition quality for the data collected from the same mouse after a stroke.

  13. Particle identification with neural networks using a rotational invariant moment representation

    Science.gov (United States)

    Sinkus, Ralph; Voss, Thomas

    1997-02-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. A preprocessing procedure is applied to the spatial energy distribution of the particle shower in order to account for the varying geometry of the calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions which are invariant under rotation. The distributions of moments exhibit very different scales, thus the multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This increases the sensitivity of the network and thus results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.

  14. Particle identification with neural networks using a rotational invariant moment representation

    International Nuclear Information System (INIS)

    Sinkus, R.

    1997-01-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. A preprocessing procedure is applied to the spatial energy distribution of the particle shower in order to account for the varying geometry of the calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions which are invariant under rotation. The distributions of moments exhibit very different scales, thus the multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This increases the sensitivity of the network and thus results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies. (orig.)
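
    The preprocessing described in the two records above (principal component analysis of the moment features followed by rescaling with the component variances so that network inputs are of order one) can be sketched as follows; the synthetic "moments" matrix below is a random stand-in for actual Zernike moments.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for Zernike-moment features with very different scales across components.
moments = rng.normal(size=(1000, 12)) * np.logspace(0, -4, 12)

# PCA via the eigen-decomposition of the covariance of the centred features.
centered = moments - moments.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)

# Project onto the principal axes and rescale each component by its standard
# deviation so the network inputs are of order one.
projected = centered @ eigvec
scaled = projected / np.sqrt(eigval + 1e-12)

print("per-component std after rescaling:", scaled.std(axis=0).round(2))
```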

  15. Artificial neural networks in prediction of mechanical behavior of concrete at high temperature

    International Nuclear Information System (INIS)

    Mukherjee, A.; Nag Biswas, S.

    1997-01-01

    The behavior of concrete structures that are exposed to extreme thermo-mechanical loading is an issue of great importance in nuclear engineering. The mechanical behavior of concrete at high temperature is non-linear. The properties that regulate its response are highly temperature dependent and extremely complex. In addition, the constituent materials, e.g. aggregates, influence the response significantly. Attempts have been made to trace the stress-strain curve through mathematical models and rheological models. However, it has been difficult to include all the contributing factors in the mathematical model. This paper examines a new programming paradigm, artificial neural networks, for the problem. By implementing a feedforward network and the backpropagation algorithm, the stress-strain relationship of the material is captured. Neural networks for the prediction of the uniaxial behavior of concrete at high temperature are presented here. The results of the present investigation are very encouraging. (orig.)

  16. Universal approximation in p-mean by neural networks

    NARCIS (Netherlands)

    Burton, R.M; Dehling, H.G

    A feedforward neural net with d input neurons and with a single hidden layer of n neurons is given by $f(x) = \sum_{j=1}^{n} a_j \, \sigma\!\left( \sum_{i=1}^{d} w_{ji} x_i + \theta_j \right)$, where $a_j, \theta_j, w_{ji} \in \mathbb{R}$. In this paper we study the approximation of arbitrary functions $f: \mathbb{R}^d \to \mathbb{R}$ by a neural net in an $L^p(\mu)$ norm for some finite measure $\mu$.
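
    A direct implementation of the single-hidden-layer form quoted above, fitted to a target function in the least-squares (p = 2) sense on sampled points. The tanh activation, the target function, and the choice to fit only the outer coefficients with random inner weights are illustrative assumptions, not the construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# f(x) = sum_j a_j * sigma(w_j . x + theta_j): single hidden layer, d inputs, n hidden units.
d, n = 2, 50
W = rng.normal(size=(n, d))          # w_ji
theta = rng.normal(size=n)           # theta_j
sigma = np.tanh                      # a sigmoidal activation (assumed)

def net(X, a):
    return sigma(X @ W.T + theta) @ a

# Approximate a target function on sampled points: the inner weights stay random
# and only the outer coefficients a_j are fitted by linear least squares.
X = rng.uniform(-1, 1, size=(2000, d))
target = np.sin(np.pi * X[:, 0]) * X[:, 1]
H = sigma(X @ W.T + theta)
a, *_ = np.linalg.lstsq(H, target, rcond=None)

print("mean squared approximation error:", np.mean((net(X, a) - target) ** 2))
```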

  17. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most of the modern analyses in high energy physics use signal-versus-background classification techniques from machine learning, and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background and nowadays can be widely and successfully implemented as a part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  18. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    secure and dependable protection for power transformers. Owing to its superior learning and generalization capabilities, an Artificial Neural Network (ANN) can considerably enhance the scope of the WI method. The ANN approach is faster, more robust, and easier to implement than the conventional waveform approach. The use of neural ...

  19. Prediction of Austenite Formation Temperatures Using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Schulze, P; Schmidl, E; Grund, T; Lampke, T

    2016-01-01

    For the modeling and design of heat treatments, in consideration of the development/ transformation of the microstructure, different material data depending on the chemical composition, the respective microstructure/phases and the temperature are necessary. Material data are, e.g. the thermal conductivity, heat capacity, thermal expansion and transformation data etc. The quality of thermal simulations strongly depends on the accuracy of the material data. For many materials, the required data - in particular for different microstructures and temperatures - are rare in the literature. In addition, a different chemical composition within the permitted limits of the considered steel alloy cannot be predicted. A solution for this problem is provided by the calculation of material data using Artificial Neural Networks (ANN). In the present study, the start and finish temperatures of the transformation from the bcc lattice to the fcc lattice structure of hypoeutectoid steels are calculated using an Artificial Neural Network. An appropriate database containing different transformation temperatures (austenite formation temperatures) to train the ANN is selected from the literature. In order to find a suitable feedforward network, the network topologies as well as the activation functions of the hidden layers are varied and subsequently evaluated in terms of the prediction accuracy. The transformation temperatures calculated by the ANN exhibit very good agreement with the experimental data. The results show that the prediction performance is even higher than that of classical empirical equations such as those of Andrews or Brandis. Therefore, it can be assumed that the presented ANN is a convenient tool to distinguish between bcc and fcc phases in hypoeutectoid steels. (paper)

  20. Prediction of Austenite Formation Temperatures Using Artificial Neural Networks

    Science.gov (United States)

    Schulze, P.; Schmidl, E.; Grund, T.; Lampke, T.

    2016-03-01

    For the modeling and design of heat treatments, in consideration of the development/ transformation of the microstructure, different material data depending on the chemical composition, the respective microstructure/phases and the temperature are necessary. Material data are, e.g. the thermal conductivity, heat capacity, thermal expansion and transformation data etc. The quality of thermal simulations strongly depends on the accuracy of the material data. For many materials, the required data - in particular for different microstructures and temperatures - are rare in the literature. In addition, a different chemical composition within the permitted limits of the considered steel alloy cannot be predicted. A solution for this problem is provided by the calculation of material data using Artificial Neural Networks (ANN). In the present study, the start and finish temperatures of the transformation from the bcc lattice to the fcc lattice structure of hypoeutectoid steels are calculated using an Artificial Neural Network. An appropriate database containing different transformation temperatures (austenite formation temperatures) to train the ANN is selected from the literature. In order to find a suitable feedforward network, the network topologies as well as the activation functions of the hidden layers are varied and subsequently evaluated in terms of the prediction accuracy. The transformation temperatures calculated by the ANN exhibit very good agreement with the experimental data. The results show that the prediction performance is even higher than that of classical empirical equations such as those of Andrews or Brandis. Therefore, it can be assumed that the presented ANN is a convenient tool to distinguish between bcc and fcc phases in hypoeutectoid steels.

  1. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Science.gov (United States)

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  2. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with the path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural network-based technique. Our method for constructing a collision-free path for a robot moving among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next section of the robot's path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed techniques are presented.

  3. water demand prediction using artificial neural network

    African Journals Online (AJOL)

    2017-01-01

  4. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

    In experimental particle physics, pattern recognition problems, specifically for neural network methods, occur frequently in track finding or feature extraction. Track finding is a combinatorial optimization problem. Given a set of points in Euclidean space, one attempts to reconstruct particle trajectories, subject to smoothness constraints. The basic ingredients in a neural network are the N binary neurons and the synaptic strengths connecting them. In our case the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates for each neuron the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments involved in the track reconstruction. An algorithm based on a Hopfield neural network has been developed and tested on track coordinates measured by a silicon microstrip tracking system.
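
    A rough sketch of the segment-neuron idea summarized above: each candidate two-point segment is a neuron, the synaptic term rewards segments that chain head-to-tail and are nearly collinear, and each neuron is updated through a sigmoid of its local field. The point set, cost coefficients, and collinearity power below are illustrative assumptions and not the weights used in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Points roughly along two straight tracks, plus a few noise hits.
ts = np.linspace(0, 1, 6)
points = np.vstack([
    np.column_stack([ts, 0.8 * ts]),
    np.column_stack([ts, -0.5 * ts + 1.0]),
    rng.uniform(0, 1, size=(4, 2)),
])

# Neurons = all ordered point pairs (candidate segments).
n_pts = len(points)
segments = [(i, j) for i in range(n_pts) for j in range(n_pts) if i != j]
act = rng.uniform(0.4, 0.6, len(segments))   # initial activations in (0, 1)

def weight(s1, s2):
    """Reward two segments that chain head-to-tail and are nearly collinear."""
    (a, b), (c, d) = s1, s2
    if b != c or a == d:
        return 0.0
    v1 = points[b] - points[a]
    v2 = points[d] - points[c]
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return cos ** 5 if cos > 0 else 0.0

W = np.array([[weight(s1, s2) + weight(s2, s1) for s2 in segments] for s1 in segments])

# Asynchronous sigmoid updates of each segment neuron from its local field.
alpha, T = 5.0, 0.5                           # alpha penalises switching on too many segments
for sweep in range(20):
    for k in rng.permutation(len(segments)):
        field = W[k] @ act - alpha
        act[k] = 1.0 / (1.0 + np.exp(-field / T))

kept = [segments[k] for k in np.argsort(act)[-10:]]
print("strongest segments:", kept)
```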

  5. FOREX PREDICTION USING A NEURAL NETWORK MODEL

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision to predict is very important, because prediction can help estimate the forex value at a given future time and thereby reduce the risk of loss. The aim of this research is to predict the forex market using a neural network model with per-1-minute time series data, in order to determine the prediction accuracy and thus reduce the risk of running a forex business. The research method in this study includes data collection, followed by training, learning, and testing using a neural network. After evaluation, the results of this research show that the application of the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk in running a forex business. Keywords: prediction, forex, neural network.

  6. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  7. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and the generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify the autonomous robot behavior models, a set of experiments and evaluation criteria were created. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot found itself.

  8. PREDICTION OF DEMAND FOR PRIMARY BOND OFFERINGS USING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Michal Tkac

    2014-12-01

    Full Text Available Purpose: Primary bond markets represent an interesting investment opportunity not only for banks, insurance companies, and other institutional investors, but also for individuals looking for capital gains. Since offered securities vary in terms of their rating, industrial classification, coupon, or maturity, buyers' demand for particular offerings often exceeds the issued volume, and the price of the given bond on the secondary market consequently rises. Investors might be regarded as consumers purchasing a required service according to their specific preferences at a desired price. This paper analyses demand for bonds on the primary market using artificial neural networks. Design/methodology: We design a multilayered feedforward neural network trained by the Levenberg-Marquardt algorithm in order to estimate demand for individual bonds based on the parameters of particular offerings. Outcomes obtained by the artificial neural network are compared with conventional econometric methods. Findings: Our results indicate that the artificial neural network significantly outperformed standard econometric techniques and, on the examined sample of primary bond offerings, achieved considerably better performance in terms of prediction accuracy and mean squared error. Originality: We show that the proposed neural network is able to successfully predict demand for primary bond offerings based on their specifications. Moreover, we identify relevant parameters of issues which considerably affect total demand for a given security. Our findings might not only help investors to detect marketable securities, but also enable issuing entities to increase demand for their bonds in order to decrease their offering price.
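
    A minimal sketch (Python) of the kind of feedforward demand-prediction model described above. The offering features (coupon, maturity, rating_score, issued_volume), the file name and the DataFrame name `offerings` are illustrative assumptions, and scikit-learn's L-BFGS solver stands in for the Levenberg-Marquardt training used in the paper, which scikit-learn does not provide.

        # Hypothetical feature names; L-BFGS used as a stand-in for Levenberg-Marquardt.
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_squared_error

        offerings = pd.read_csv("bond_offerings.csv")     # assumed data file
        X = offerings[["coupon", "maturity", "rating_score", "issued_volume"]].values
        y = offerings["demand"].values

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        scaler = StandardScaler().fit(X_train)

        net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000, random_state=0)
        net.fit(scaler.transform(X_train), y_train)

        pred = net.predict(scaler.transform(X_test))
        print("test MSE:", mean_squared_error(y_test, pred))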

  9. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm

    International Nuclear Information System (INIS)

    Yu, Lean; Wang, Shouyang; Lai, Kin Keung

    2008-01-01

    In this study, an empirical mode decomposition (EMD) based neural network ensemble learning paradigm is proposed for world crude oil spot price forecasting. For this purpose, the original crude oil spot price series were first decomposed into a finite, and often small, number of intrinsic mode functions (IMFs). Then a three-layer feed-forward neural network (FNN) model was used to model each of the extracted IMFs, so that the tendencies of these IMFs could be accurately predicted. Finally, the prediction results of all IMFs were combined with an adaptive linear neural network (ALNN) to formulate an ensemble output for the original crude oil price series. For verification and testing, two main crude oil price series, West Texas Intermediate (WTI) crude oil spot price and Brent crude oil spot price, were used to test the effectiveness of the proposed EMD-based neural network ensemble learning methodology. The empirical results demonstrate the attractiveness of the proposed EMD-based neural network ensemble learning paradigm. (author)
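
    A minimal sketch (Python) of the ensemble step described above: one small feedforward network per IMF, with the component forecasts combined by a linear model standing in for the adaptive linear neural network. It assumes the IMFs have already been extracted (e.g. with an EMD package) into an array `imfs` of shape (n_imfs, n_samples); the lag window length is an illustrative choice.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.linear_model import LinearRegression

        def lag_matrix(series, lags=4):
            # build (t-lags .. t-1) -> t training pairs for one component
            X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
            return X, series[lags:]

        component_preds, target = [], None
        for imf in imfs:
            X, y = lag_matrix(imf)
            fnn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)
            component_preds.append(fnn.predict(X))
            target = y if target is None else target + y   # the price series is the sum of its IMFs

        alnn_stand_in = LinearRegression().fit(np.column_stack(component_preds), target)
        ensemble_forecast = alnn_stand_in.predict(np.column_stack(component_preds))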

  10. Neural networks, D0, and the SSC

    International Nuclear Information System (INIS)

    Barter, C.; Cutts, D.; Hoftun, J.S.; Partridge, R.A.; Sornborger, A.T.; Johnson, C.T.; Zeller, R.T.

    1989-01-01

    We outline several exploratory studies involving neural network simulations applied to pattern recognition in high energy physics. We describe the D0 data acquisition system and a natural means by which algorithms derived from neural network techniques may be incorporated into recently developed hardware associated with the D0 MicroVAX farm nodes. Such applications to the event filtering needed by SSC detectors look interesting. 10 refs., 11 figs

  11. Neural network monitoring of resistive welding

    International Nuclear Information System (INIS)

    Quero, J.M.; Millan, R.L.; Franquelo, L.G.; Canas, J.

    1994-01-01

    Supervision of welding processes is one of the most important and complicated tasks in production lines. Artificial Neural Networks have been applied for modeling and control of physical processes. In our paper we propose the use of a neural network classifier for on-line non-destructive testing. This system has been developed and installed in a resistive welding station. Results confirm the validity of this novel approach. (Author) 6 refs

  12. Neural Network Models for Time Series Forecasts

    OpenAIRE

    Tim Hill; Marcus O'Connor; William Remus

    1996-01-01

    Neural networks have been advocated as an alternative to traditional statistical forecasting methods. In the present experiment, time series forecasts produced by neural networks are compared with forecasts from six statistical time series methods generated in a major forecasting competition (Makridakis et al. [Makridakis, S., A. Anderson, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, R. Winkler. 1982. The accuracy of extrapolation (time series) methods: Results of a ...

  13. Using neural networks in software repositories

    Science.gov (United States)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  14. Application of neural networks in CRM systems

    Directory of Open Access Journals (Sweden)

    Bojanowska Agnieszka

    2017-01-01

    Full Text Available The central aim of this study is to investigate how to apply artificial neural networks in Customer Relationship Management (CRM). The paper presents several business applications of neural networks in software systems designed to aid CRM, e.g. in deciding on the profitability of building a relationship with a given customer. Furthermore, a framework for a neural-network based CRM software tool is developed. Building beneficial relationships with customers is generating considerable interest among various businesses, and is often mentioned as one of the crucial objectives of enterprises, next to their key aim: to bring satisfactory profit. There is a growing tendency among businesses to invest in CRM systems, which, together with a company's organisational culture, aid in managing customer relationships. It is the sheer amount of gathered data as well as the need for constant updating and analysis of this breadth of information that may imply the suitability of neural networks for the application in question. Neural networks exhibit considerably higher computational capabilities than sequential calculations because the solution to a problem is obtained without the need for developing a special algorithm. In the majority of the presented CRM applications, neural networks are presented as a managerial decision-making optimisation tool.

  15. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network has been introduced as an efficient classifier. Unless the initial smoothing parameter value is close to the optimal one, however, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic cost function and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost function in the proposed learning method. Due to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution to the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement problem of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
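
    An illustrative sketch (Python) of why a logarithmic cost can converge in fewer iterations than the squared error: its gradient stays large while the error is large. The specific logarithmic form used here, C(e) = -log(1 - e^2), is an assumption for illustration and is not necessarily the exact formulation proposed in the paper.

        import numpy as np

        target = 0.9            # desired output of a single smoothed unit

        def run(cost_grad, w=0.0, lr=0.1, tol=1e-6, max_iter=10000):
            # plain gradient descent on one output value; returns iterations to converge
            for it in range(max_iter):
                e = target - w
                if abs(e) < tol:
                    return it
                w += lr * cost_grad(e)
            return max_iter

        squared_grad = lambda e: 2 * e                  # gradient of e^2 with respect to the output
        log_grad = lambda e: 2 * e / (1 - e ** 2)       # gradient of -log(1 - e^2), assumed form

        print("iterations, squared cost:", run(squared_grad))
        print("iterations, logarithmic cost:", run(log_grad))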

  16. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
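
    A minimal sketch (Python) of how the reported metrics (area under the ROC curve, sensitivity, specificity) can be computed for a referable-retinopathy classifier. The label and score arrays are placeholders, and the 0.5 decision threshold is an illustrative choice.

        import numpy as np
        from sklearn.metrics import roc_auc_score, confusion_matrix

        y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # placeholder ground truth
        y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])   # placeholder network outputs

        auc = roc_auc_score(y_true, y_score)
        tn, fp, fn, tp = confusion_matrix(y_true, y_score >= 0.5).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")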

  17. Neural Network Compensation for Frequency Cross-Talk in Laser Interferometry

    Science.gov (United States)

    Lee, Wooram; Heo, Gunhaeng; You, Kwanho

    The heterodyne laser interferometer acts as an ultra-precise measurement apparatus in semiconductor manufacture. However, the periodic nonlinearity caused by frequency cross-talk is an obstacle to achieving high measurement accuracy at the nanometer scale. In order to minimize the nonlinearity error of the heterodyne interferometer, we propose a frequency cross-talk compensation algorithm using an artificial intelligence method. A feedforward neural network trained by back-propagation compensates for the nonlinearity error and is regulated to minimize the difference from the reference signal. Experimental results demonstrate the improved accuracy through comparison with the position values from a capacitive displacement sensor.

  18. Neural-Network Object-Recognition Program

    Science.gov (United States)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  19. A neural network device for on-line particle identification in cosmic ray experiments

    International Nuclear Information System (INIS)

    Scrimaglio, R.; Finetti, N.; D'Altorio, L.; Rantucci, E.; Raso, M.; Segreto, E.; Tassoni, A.; Cardarilli, G.C.

    2004-01-01

    On-line particle identification is one of the main goals of many experiments in space both for rare event studies and for optimizing measurements along the orbital trajectory. Neural networks can be a useful tool for signal processing and real time data analysis in such experiments. In this document we report on the performances of a programmable neural device which was developed in VLSI analog/digital technology. Neurons and synapses were accomplished by making use of Operational Transconductance Amplifier (OTA) structures. In this paper we report on the results of measurements performed in order to verify the agreement of the characteristic curves of each elementary cell with simulations and on the device performances obtained by implementing simple neural structures on the VLSI chip. A feed-forward neural network (Multi-Layer Perceptron, MLP) was implemented on the VLSI chip and trained to identify particles by processing the signals of two-dimensional position-sensitive Si detectors. The radiation monitoring device consisted of three double-sided silicon strip detectors. From the analysis of a set of simulated data it was found that the MLP implemented on the neural device gave results comparable with those obtained with the standard method of analysis confirming that the implemented neural network could be employed for real time particle identification

  20. Forecast of TEXT plasma disruptions using soft X rays as input signal in a neural network

    International Nuclear Information System (INIS)

    Vannucci, A.; Oliveira, K.A.; Tajima, T.

    1999-01-01

    A feedforward neural network with two hidden layers is used to forecast major and minor disruptive instabilities in TEXT tokamak discharges. Using experimental soft X ray signals as input data, the neural network is trained with one disruptive plasma discharge, and a different disruptive discharge is used for validation. After being properly trained, the networks, with the same set of weights, are used to forecast disruptions in two other plasma discharges. It is observed that the neural network is able to predict the occurrence of a disruption more than 3 ms in advance. This time interval is almost three times longer than that obtained previously when a magnetic signal from a Mirnov coil was used as input to the neural networks. Visually, no indication of an upcoming disruption is seen in the experimental data this far back from the time of disruption. Finally, by observing the predictive behaviour of the network for the disruptive discharges analysed and comparing the soft X ray data with the corresponding experimental magnetic signal, a conjecture is made about where inside the plasma column the disruption first started. (author)
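
    A minimal sketch (Python) of a two-hidden-layer feedforward classifier that flags an upcoming disruption from a sliding window of soft X ray samples. The window length, layer sizes, and the arrays `sxr_signal` (a soft X ray trace) and `disruption_label` (1 inside the pre-disruption interval, 0 elsewhere) are illustrative assumptions, not the configuration used in the paper.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def windows(signal, labels, width=20):
            # turn the trace into overlapping windows labelled by the sample that follows them
            X = np.array([signal[i:i + width] for i in range(len(signal) - width)])
            return X, labels[width:]

        X, y = windows(sxr_signal, disruption_label)
        clf = MLPClassifier(hidden_layer_sizes=(30, 15), max_iter=2000, random_state=0)
        clf.fit(X, y)                       # train on one disruptive discharge
        # clf.predict() applied to windows from a different discharge then forecasts disruptions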

  1. Artificial Astrocytes Improve Neural Network Performance

    Science.gov (United States)

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  2. Artificial astrocytes improve neural network performance.

    Directory of Open Access Journals (Sweden)

    Ana B Porto-Pazos

    Full Text Available Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  3. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  4. NEURAL NETWORKS FOR STOCK MARKET OPTION PRICING

    Directory of Open Access Journals (Sweden)

    Sergey A. Sannikov

    2017-03-01

    Full Text Available Introduction: The use of neural networks for non-linear models helps to understand where linear model drawbacks, caused by their specification, reveal themselves. This paper attempts to find this out. The objective of the research is to determine the meaning of “option price calculation using neural networks”. Materials and Methods: We use two kinds of variables: endogenous (variables included in the neural network model) and variables affecting the model (permanent disturbance). Results: All data are divided into 3 sets: learning, affirming and testing. All selected variables are normalised from 0 to 1. Extreme values of income were truncated. Discussion and Conclusions: Using a 33-14-1 neural network with direct links, we obtained two sets of forecasts. Optimal criteria for stock-market option-pricing strategies were developed.

  5. Noise Analysis studies with neural networks

    International Nuclear Information System (INIS)

    Seker, S.; Ciftcioglu, O.

    1996-01-01

    Noise analysis studies with neural networks are the aim of this work. Stochastic signals at the input of the network are used to obtain an algorithmic multivariate stochastic signal model. To this end, lattice modeling of a stochastic signal is performed to obtain backward residual noise sources which are uncorrelated among themselves. These are applied, together with an additional input, to the network to obtain an algorithmic model which is used for signal detection for early failure in plant monitoring. The additional input provides the information to the network to minimize the difference between the signal and the network's one-step-ahead prediction. A stochastic algorithm is used for training, where the errors reflecting the measurement error during the training are also modelled so that fast and consistent convergence of the network's weights is obtained. The lattice structure coupled to the neural network is investigated with measured signals from an actual power plant. (authors)

  6. Self-organized critical neural networks

    International Nuclear Information System (INIS)

    Bornholdt, Stefan; Roehl, Torsten

    2003-01-01

    A mechanism for self-organization of the degree of connectivity in model neural networks is studied. Network connectivity is regulated locally on the basis of an order parameter of the global dynamics, which is estimated from an observable at the single-synapse level. This principle is studied in a two-dimensional neural network with randomly wired asymmetric weights. In this class of networks, network connectivity is closely related to a phase transition between ordered and disordered dynamics. A slow topology change is imposed on the network through a local rewiring rule motivated by activity-dependent synaptic development: neighbor neurons whose activity is correlated, on average, develop a new connection, while uncorrelated neighbors tend to disconnect. As a result, robust self-organization of the network towards the order-disorder transition occurs. Convergence is independent of initial conditions, robust against thermal noise, and does not require fine tuning of parameters.
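
    A minimal sketch (Python) of the activity-dependent rewiring rule described above: neighboring neurons whose activities are correlated on average gain a connection, while uncorrelated neighbors lose theirs. The network dynamics, neighborhood, and correlation threshold are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T, threshold = 20, 200, 0.5
        weights = rng.choice([-1.0, 0.0, 1.0], size=(N, N))     # randomly wired asymmetric weights
        activity = rng.choice([-1.0, 1.0], size=(T, N))         # placeholder record of binary activity

        for _ in range(100):                                     # slow topology change
            i = rng.integers(N)
            j = (i + 1) % N                                      # a neighbor (here: on a ring)
            corr = np.mean(activity[:, i] * activity[:, j])      # average activity correlation
            if abs(corr) > threshold:
                weights[i, j] = rng.choice([-1.0, 1.0])          # correlated: develop a connection
            else:
                weights[i, j] = 0.0                              # uncorrelated: disconnect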

  7. On-line validation of feedwater flow rate in nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Khadem, M.; Ipakchi, A.; Alexandro, F.J.; Colley, R.W.

    1994-01-01

    On-line calibration of feedwater flow rate measurement in nuclear power plants provides a continuous, realistic value of the feedwater flow rate. It also reduces the manpower required for the periodic calibration needed due to fouling and defouling of the venturi meter surface. This paper presents a method for on-line validation of feedwater flow rate in nuclear power plants. The method is an improvement of a previously developed method and is based on the use of a set of process variables dynamically related to the feedwater flow rate. The on-line measurements of this set of variables are used as inputs to a neural network to obtain an estimate of the feedwater flow rate reading. The difference between the on-line feedwater flow rate reading and the neural network estimate establishes whether there is a need to apply a correction factor to the feedwater flow rate measurement for calculation of the actual reactor power. The method was applied to the feedwater flow meters in the two feedwater flow loops of the TMI-1 nuclear power plant. The venturi meters used for flow measurements are susceptible to frequent fouling that degrades their measurement accuracy. The fouling effects can cause an inaccuracy of up to 3% relative error in the feedwater flow rate reading. A neural network, whose inputs were the readings of a set of reference instruments, was designed to predict both feedwater flow rates simultaneously. A multi-layer feedforward neural network employing the backpropagation algorithm was used. A number of neural network training tests were performed to obtain an optimum filtering technique for the input/output data of the neural networks. The selection of the filtering technique was confirmed by numerous Fast Fourier Transform (FFT) tests. Training and testing were done on data from the TMI-1 nuclear power plant. The results show that the neural network can predict the correct flow rates with an absolute relative error of less than 2%
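
    A minimal sketch (Python) of the estimation step: a multilayer feedforward network, trained with backpropagation, that predicts both loop flow rates from reference-instrument readings and flags readings that drift beyond a 3% relative error. The arrays `reference_readings` (samples x instruments) and `measured_flows` (samples x 2) are assumed to exist; the layer size and threshold are illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        model = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0)
        model.fit(reference_readings, measured_flows)          # train on data with unfouled venturi meters

        estimates = model.predict(reference_readings)           # network estimate of both flow rates
        relative_error = np.abs(estimates - measured_flows) / np.abs(estimates)
        needs_correction = relative_error > 0.03                # loops whose meters appear fouled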

  8. Artificial neural network for modeling the extraction of aromatic hydrocarbons from lube oil cuts

    Energy Technology Data Exchange (ETDEWEB)

    Mehrkesh, A.H.; Hajimirzaee, S. [Islamic Azad University, Majlesi Branch, Isfahan (Iran, Islamic Republic of); Hatamipour, M.S.; Tavakoli, T. [Department of Chemical Engineering, University of Isfahan, Isfahan (Iran, Islamic Republic of)

    2011-03-15

    An artificial neural network (ANN) approach was used to obtain a simulation model to predict the rotating disc contactor (RDC) performance during the extraction of aromatic hydrocarbons from lube oil cuts, to produce a lubricating base oil using furfural as solvent. The field data used for training the ANN model were obtained from a lubricating oil production company. The input parameters of the ANN model were the volumetric flow rates of feed and solvent, the temperatures of feed and solvent, and the disc rotation rate. The output parameters were the volumetric flow rate of the raffinate phase and the extraction yield. In this study, a feed-forward multi-layer perceptron neural network was successfully used to demonstrate the complex relationship between the mentioned input and output parameters. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  9. Adaptive online inverse control of a shape memory alloy wire actuator using a dynamic neural network

    International Nuclear Information System (INIS)

    Mai, Huanhuan; Liao, Xiaofeng; Song, Gangbing

    2013-01-01

    Shape memory alloy (SMA) actuators exhibit severe hysteresis, a nonlinear behavior, which complicates control strategies and limits their applications. This paper presents a new approach to controlling an SMA actuator through an adaptive inverse model based controller that consists of a dynamic neural network (DNN) identifier, a copy dynamic neural network (CDNN) feedforward term and a proportional (P) feedback action. Unlike fixed hysteresis models used in most inverse controllers, the proposed one uses a DNN to identify online the relationship between the applied voltage to the actuator and the displacement (the inverse model). Even without a priori knowledge of the SMA hysteresis and without pre-training, the proposed controller can precisely control the SMA wire actuator in various tracking tasks by identifying online the inverse model of the SMA actuator. Experiments were conducted, and experimental results demonstrated real-time modeling capabilities of DNN and the performance of the adaptive inverse controller. (paper)

  10. Adaptive online inverse control of a shape memory alloy wire actuator using a dynamic neural network

    Science.gov (United States)

    Mai, Huanhuan; Song, Gangbing; Liao, Xiaofeng

    2013-01-01

    Shape memory alloy (SMA) actuators exhibit severe hysteresis, a nonlinear behavior, which complicates control strategies and limits their applications. This paper presents a new approach to controlling an SMA actuator through an adaptive inverse model based controller that consists of a dynamic neural network (DNN) identifier, a copy dynamic neural network (CDNN) feedforward term and a proportional (P) feedback action. Unlike fixed hysteresis models used in most inverse controllers, the proposed one uses a DNN to identify online the relationship between the applied voltage to the actuator and the displacement (the inverse model). Even without a priori knowledge of the SMA hysteresis and without pre-training, the proposed controller can precisely control the SMA wire actuator in various tracking tasks by identifying online the inverse model of the SMA actuator. Experiments were conducted, and experimental results demonstrated real-time modeling capabilities of DNN and the performance of the adaptive inverse controller.

  11. Design of Optimal Hybrid Position/Force Controller for a Robot Manipulator Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Vikas Panwar

    2007-01-01

    Full Text Available The application of quadratic optimization and a sliding-mode approach is considered for hybrid position and force control of a robot manipulator. The dynamic model of the manipulator is transformed into a state-space model containing two sets of state variables, where one describes the constrained motion and the other describes the unconstrained motion. The optimal feedback control law is derived by solving the matrix differential Riccati equation, which is obtained using Hamilton-Jacobi-Bellman optimization. The optimal feedback control law is shown to be globally exponentially stable using a Lyapunov function approach. The dynamic model uncertainties are compensated with a feedforward neural network. The neural network requires no preliminary offline training and is trained with online weight tuning algorithms that guarantee small errors and bounded control signals. The application of the derived control law is demonstrated through simulation with a 4-DOF robot manipulator tracking an elliptical planar constrained surface while applying the desired force on the surface.

  12. Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.

    Science.gov (United States)

    Aftab, Muhammad Saleheen; Shafiq, Muhammad

    2015-11-01

    This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. The controller and estimator error convergence and closed-loop system stability analysis is performed by Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  13. An automatic system for Turkish word recognition using Discrete Wavelet Neural Network based on adaptive entropy

    International Nuclear Information System (INIS)

    Avci, E.

    2007-01-01

    In this paper, an automatic system is presented for word recognition using real Turkish word signals. The paper deals in particular with the combination of feature extraction and classification of real Turkish word signals. A Discrete Wavelet Neural Network (DWNN) model is used, which consists of two layers: a discrete wavelet layer and a multi-layer perceptron. The discrete wavelet layer is used for adaptive feature extraction in the time-frequency domain and is composed of the Discrete Wavelet Transform (DWT) and wavelet entropy. The multi-layer perceptron used for classification is a feed-forward neural network. The performance of the system is evaluated using noisy Turkish word signals. Test results showing the effectiveness of the proposed automatic system are presented in this paper. The rate of correct recognition is about 92.5% for the sample speech signals. (author)
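
    A minimal sketch (Python) of the two-stage structure described above: a discrete wavelet decomposition with wavelet (Shannon) entropy as adaptive features, feeding a feed-forward classifier. The wavelet family, decomposition level, and the arrays `word_signals` / `word_labels` are illustrative assumptions.

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def wavelet_entropy_features(signal, wavelet="db4", level=4):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            p = energies / np.sum(energies)                    # relative energy per sub-band
            entropy = -np.sum(p * np.log(p + 1e-12))           # wavelet entropy
            return np.append(p, entropy)                       # sub-band energies plus entropy

        X = np.array([wavelet_entropy_features(s) for s in word_signals])
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        clf.fit(X, word_labels)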

  14. Detection of directional eye movements based on the electrooculogram signals through an artificial neural network

    International Nuclear Information System (INIS)

    Erkaymaz, Hande; Ozer, Mahmut; Orak, İlhami Muharrem

    2015-01-01

    The electrooculogram signals are very important for extracting information about directional eye movements. Therefore, in this study, we propose a new intelligent detection model involving an artificial neural network for eye movements based on the electrooculogram signals. In addition to conventional eye movements, our model also involves the detection of tics and blinking of an eye. We extract only two features from the electrooculogram signals and use them as inputs for a feed-forward artificial neural network. We develop a new approach to compute these two features, which we call the movement range. The results suggest that the proposed model has the potential to become a new tool to determine directional eye movements accurately

  15. Design of Artificial Neural Network-Based pH Estimator

    Directory of Open Access Journals (Sweden)

    Shebel A. Alsabbah

    2010-10-01

    Full Text Available Considering the cost, size and drawbacks of a real hardware instrument for measuring pH values, such as the complications of wiring, installing, calibrating and troubleshooting the system, one would look for a cheaper, accurate alternative to perform the measurement; hence, a feedforward artificial neural network-based pH estimator is proposed. The proposed estimator is designed with multi-layer perceptrons. One input, the measured base stream, and two outputs represent the pH values for a strong base and strong/weak acids in a titration process. The database was created taking temperature variation into consideration. The final numerical results confirm the effectiveness and robustness of the designed neural network-based pH estimator.

  16. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbors.

  17. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators based on the raw data, plus the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error back-propagation algorithm. The main advantage of the developed system is the self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
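
    A minimal sketch (Python) of the input construction described above: four simple technical indicators computed from raw daily prices, plus the day of the week, feeding a network with one hidden layer that predicts next-day price movement. The particular indicators and the `prices` / `weekdays` arrays are illustrative assumptions, not the paper's exact choices.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def build_features(prices, weekdays, window=5):
            rows, targets = [], []
            for t in range(window, len(prices) - 1):
                w = prices[t - window:t + 1]
                momentum = prices[t] - prices[t - window]
                sma_dist = prices[t] - np.mean(w)                     # distance from moving average
                volatility = np.std(w)
                roc = (prices[t] - prices[t - 1]) / prices[t - 1]     # one-day rate of change
                rows.append([momentum, sma_dist, volatility, roc, weekdays[t]])
                targets.append(int(prices[t + 1] > prices[t]))        # 1 = price moves up next day
            return np.array(rows), np.array(targets)

        X, y = build_features(prices, weekdays)
        net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)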

  18. Deformable image registration using convolutional neural networks

    NARCIS (Netherlands)

    Eppenhof, Koen A.J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P.W.

    2018-01-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between

  19. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered from a mild set of assumptions. A number of applications...

  20. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  1. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using

  2. Direct and inverse neural networks modelling applied to study the influence of the gas diffusion layer properties on PBI-based PEM fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, Justo; Canizares, Pablo; Rodrigo, Manuel A.; Linares, Jose J. [Chemical Engineering Department, University of Castilla-La Mancha, Campus Universitario s/n, 13004 Ciudad Real (Spain); Piuleac, Ciprian-George; Curteanu, Silvia [Faculty of Chemical Engineering and Environmental Protection, Department of Chemical Engineering, ' ' Gh. Asachi' ' Technical University Iasi Bd. D. Mangeron, No. 71A, 700050 IASI (Romania)

    2010-08-15

    This article shows the application of a very useful mathematical tool, artificial neural networks, to predict fuel cell results (the value of the tortuosity and the cell voltage at a given current density, and therefore the power) on the basis of several properties that define a Gas Diffusion Layer: Teflon content, air permeability, porosity, mean pore size, and hydrophobicity level. Four types of neural networks (multilayer perceptron, generalized feedforward network, modular neural network, and Jordan-Elman neural network) have been applied, with a good fit between the predicted and the experimental values in the polarization curves. A simple feedforward neural network with one hidden layer proved to be an accurate model with good generalization capability (error about 1% in the validation phase). A procedure based on inverse neural network modelling was able to determine, with small errors, the initial conditions leading to imposed values for characteristics of the fuel cell. In addition, the use of this tool has been proved to be very attractive in order to predict the cell performance and, more interestingly, the influence of the properties of the gas diffusion layer on the cell performance, allowing possible enhancements of this material by changing some of its properties. (author)

  3. Neural network tagging in a toy model

    International Nuclear Information System (INIS)

    Milek, Marko; Patel, Popat

    1999-01-01

    The purpose of this study is a comparison of the Artificial Neural Network approach to HEP analysis against traditional methods. A toy model used in this analysis consists of two types of particles defined by four generic properties. A number of 'events' were created according to the model using standard Monte Carlo techniques. Several fully connected, feed-forward, multi-layered Artificial Neural Networks were trained to tag the model events. The performance of each network was compared to the standard analysis mechanisms and a significant improvement was observed

  4. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    Notation: NN, neural network; net_i, weighted sum of the inputs of neuron i; o_k, network output at the kth output node; P, total number of training patterns; s_i, output of neuron i; t_k, target output at the kth output node. 1. Introduction: Severe storms occur in the Bay of Bengal ...), forecasting of runoff (Crespo and Mora, 1993), concrete strength (Kasperkiewicz et al., 1995). The uses of neural networks in the coastal ... the wave conditions will change from year to year, thus a proper statistical and climatological treatment requires several ...

  5. Template-based procedures for neural network interpretation.

    Science.gov (United States)

    Alexander, J A.; Mozer, M C.

    1999-04-01

    Although neural networks often achieve impressive learning and generalization performance, their internal workings are typically all but impossible to decipher. This characteristic of the networks, their opacity, is one of the disadvantages of connectionism compared to more traditional, rule-oriented approaches to artificial intelligence. Without a thorough understanding of the network behavior, confidence in a system's results is lowered, and the transfer of learned knowledge to other processing systems - including humans - is precluded. Methods that address the opacity problem by casting network weights in symbolic terms are commonly referred to as rule extraction techniques. This work describes a principled approach to symbolic rule extraction from standard multilayer feedforward networks based on the notion of weight templates, parameterized regions of weight space corresponding to specific symbolic expressions. With an appropriate choice of representation, we show how template parameters may be efficiently identified and instantiated to yield the optimal match to the actual weights of a unit. Depending on the requirements of the application domain, the approach can accommodate n-ary disjunctions and conjunctions with O(k) complexity, simple n-of-m expressions with O(k^2) complexity, or more general classes of recursive n-of-m expressions with O(k^(L+2)) complexity, where k is the number of inputs to a unit and L the recursion level of the expression class. Compared to other approaches in the literature, our method of rule extraction offers benefits in simplicity, computational performance, and overall flexibility. Simulation results on a variety of problems demonstrate the application of our procedures as well as the strengths and weaknesses of our general approach.

  6. Petri neural network model for the effect of controlled thermomechanical process parameters on the mechanical properties of HSLA steels

    Energy Technology Data Exchange (ETDEWEB)

    Datta, S.

    1999-10-01

    The effect of composition and controlled thermomechanical process parameters on the mechanical properties of HSLA steels is modelled using the Widrow-Hoff concept of training a neural net with feed-forward topology by applying Rumelhart's back-propagation type algorithm for supervised learning, using a Petri-like net structure. The data used are from laboratory experiments as well as from the published literature. The results from the neural network are found to be consistent and in good agreement with the experimental results. (author)

  7. Neural network evaluation of tokamak current profiles for real time control (abstract)

    Science.gov (United States)

    Wróblewski, Dariusz

    1997-01-01

    Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated.

  8. Neural network evaluation of tokamak current profiles for real time control

    Science.gov (United States)

    Wróblewski, Dariusz

    1997-02-01

    Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated.

  9. Artificial neural networks for spatial distribution of fuel assemblies in reload of PWR reactors

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Edyene; Castro, Victor F.; Velásquez, Carlos E.; Pereira, Claubia, E-mail: claubia@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Programa de Pós-Graduação em Ciências e Técnicas Nucleares

    2017-07-01

    An artificial neural network methodology is being developed in order to find an optimum spatial distribution of the fuel assemblies in a nuclear reactor core during reload. The main bounding parameter of the modelling was the neutron multiplication factor, k_eff. The characteristics of the network are defined by the nuclear parameters: cycle, burnup, enrichment, fuel type, and average power peak of each element. These parameters were obtained with the ORNL nuclear code package SCALE6.0. As for the artificial neural network, feedforward multi-layer perceptron ANNs with various layers and neurons were constructed. Three training algorithms were used and tested: LM (Levenberg-Marquardt), SCG (Scaled Conjugate Gradient) and BayR (Bayesian Regularization). The artificial neural networks were implemented using MATLAB version 2015a. As a preliminary result, the spatial distribution of the fuel assemblies in the core obtained using the neural network was slightly better than that of the standard core. (author)

  10. Neural Network with Local Memory for Nuclear Reactor Power Level Control

    International Nuclear Information System (INIS)

    Uluyol, Oender; Ragheb, Magdi; Tsoukalas, Lefteri

    2001-01-01

    A methodology is introduced for a neural network with local memory called a multilayered local output gamma feedback (LOGF) neural network within the paradigm of locally-recurrent globally-feedforward neural networks. It appears to be well-suited for the identification, prediction, and control tasks in highly dynamic systems; it allows for the presentation of different timescales through incorporation of a gamma memory. A learning algorithm based on the backpropagation-through-time approach is derived. The spatial and temporal weights of the network are iteratively optimized for a given problem using the derived learning algorithm. As a demonstration of the methodology, it is applied to the task of power level control of a nuclear reactor at different fuel cycle conditions. The results demonstrate that the LOGF neural network controller outperforms the classical as well as the state feedback-assisted classical controllers for reactor power level control by showing a better tracking of the demand power, improving the fuel and exit temperature responses, and by performing robustly in different fuel cycle and power level conditions
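
    A minimal sketch (Python) of a gamma memory stage, the local-memory element of the LOGF network described above: a cascade of leaky integrators whose taps expose the input history at different timescales. The order K and memory parameter mu are illustrative choices.

        import numpy as np

        def gamma_memory(x, K=3, mu=0.3):
            # returns an array of shape (len(x), K+1); tap 0 is the raw input
            taps = np.zeros((len(x), K + 1))
            taps[:, 0] = x
            for t in range(1, len(x)):
                for k in range(1, K + 1):
                    # g_k[t] = (1 - mu) * g_k[t-1] + mu * g_{k-1}[t-1]
                    taps[t, k] = (1 - mu) * taps[t - 1, k] + mu * taps[t - 1, k - 1]
            return taps

        history = gamma_memory(np.sin(np.linspace(0, 10, 200)))   # taps then feed the feedforward part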

  11. Neural network evaluation of tokamak current profiles for real time control

    International Nuclear Information System (INIS)

    Wroblewski, D.

    1997-01-01

    Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated. copyright 1997 American Institute of Physics

  12. Neural network evaluation of tokamak current profiles for real time control (abstract)

    International Nuclear Information System (INIS)

    Wroblewski, D.

    1997-01-01

    Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated. copyright 1997 American Institute of Physics

  13. Connectivity in the yeast cell cycle transcription network: inferences from neural networks.

    Directory of Open Access Journals (Sweden)

    Christopher E Hart

    2006-12-01

    Full Text Available A current challenge is to develop computational approaches to infer gene network regulatory relationships based on multiple types of large-scale functional genomic data. We find that single-layer feed-forward artificial neural network (ANN) models can effectively discover gene network structure by integrating global in vivo protein:DNA interaction data (ChIP/Array) with genome-wide microarray RNA data. We test this on the yeast cell cycle transcription network, which is composed of several hundred genes with phase-specific RNA outputs. These ANNs were robust to noise in the data and to a variety of perturbations. They reliably identified and ranked 10 of 12 known major cell cycle factors at the top of a set of 204, based on a sum-of-squared weights metric. Comparative analysis of motif occurrences among multiple yeast species independently confirmed relationships inferred from the ANN weights analysis. ANN models can capitalize on properties of biological gene networks that other kinds of models do not. ANNs naturally take advantage of patterns of absence, as well as presence, of factor binding associated with specific expression output; they are easily subjected to in silico "mutation" to uncover biological redundancies; and they can use the full range of factor binding values. A prominent feature of the cell cycle ANNs suggested that an analogous property might exist in the biological network. This postulated that "network-local discrimination" occurs when regulatory connections (here between MBF and target genes) are explicitly disfavored in one network module (G2), relative to others and to the class of genes outside the mitotic network. If correct, this predicts that MBF motifs will be significantly depleted from the discriminated class and that the discrimination will persist through evolution. Analysis of the distantly related Schizosaccharomyces pombe confirmed this, suggesting that network-local discrimination is real and complements well-known enrichment of
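
    As a rough illustration of the ranking step described above, the sketch below trains a single-layer softmax network on synthetic stand-in data (random binding scores and phase labels, not the published ChIP/expression data) and ranks input factors by the sum of squared weights across output units, the same metric the abstract quotes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: binding scores for 12 factors across 300 genes,
# and a phase label (0..4) marking each gene's peak expression time.
n_genes, n_factors, n_phases = 300, 12, 5
X = rng.random((n_genes, n_factors))
y = rng.integers(0, n_phases, n_genes)

# Single-layer network: one weight per (factor, phase) pair, softmax output,
# trained with plain gradient descent on the cross-entropy loss.
W = np.zeros((n_factors, n_phases))
b = np.zeros(n_phases)
Y = np.eye(n_phases)[y]
for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * (X.T @ (p - Y) / n_genes)
    b -= 0.5 * (p - Y).mean(axis=0)

# Rank factors by the sum of squared weights across all output units.
importance = (W ** 2).sum(axis=1)
print(np.argsort(importance)[::-1])
```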

  14. Neutron spectrometry with artificial neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A.; Iniguez de la Torre Bayo, M.P.; Barquero, R.; Arteaga A, T.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources, reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions, as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ2-test and the total fluence ratios. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  15. Neutron spectrometry using artificial neural networks

    International Nuclear Information System (INIS)

    Vega-Carrillo, Hector Rene; Martin Hernandez-Davila, Victor; Manzanares-Acuna, Eduardo; Mercado Sanchez, Gema A.; Pilar Iniguez de la Torre, Maria; Barquero, Raquel; Palacios, Francisco; Mendez Villafane, Roberto; Arteaga Arteaga, Tarcicio; Manuel Ortiz Rodriguez, Jose

    2006-01-01

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources, reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions, as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing were carried out using the Matlab (R) program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem

  16. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study, a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.
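
    A hedged sketch of the architecture quoted above: four inputs (latitude, pressure, day of year, CH4 v.m.r.) feeding one hidden layer of eight nodes. The data below are synthetic placeholders, and standard backpropagation via scikit-learn stands in for the Quickprop training used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical training set: latitude, pressure, day of year and CH4 v.m.r.
# as inputs, with N2O v.m.r. as the target (synthetic stand-in values).
X = np.column_stack([
    rng.uniform(-90, 90, 2000),        # latitude (deg)
    rng.uniform(1, 300, 2000),         # pressure (hPa)
    rng.uniform(0, 365, 2000),         # day of year
    rng.uniform(0.2, 1.8, 2000),       # CH4 v.m.r. (ppmv)
])
y = 310.0 * (X[:, 3] / 1.8) ** 0.5     # toy CH4-N2O relation

# One hidden layer of eight nodes, mirroring the abstract; inputs are
# standardised and trained with ordinary backpropagation, not Quickprop.
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
net.fit(X, y)
print(np.corrcoef(net.predict(X), y)[0, 1])
```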

  17. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn the inherent space relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in a least-squares sense. The main difference from existing approaches is that the space relations are taken from different scales, thus providing relational space data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase, the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.
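
    The multiscale input construction can be illustrated with a short helper that gathers the neighbourhoods of one pixel at several window sizes and concatenates them into the feature vector fed to the restoration MLP. The window sizes below are illustrative assumptions, not the ones used in the paper.

```python
import numpy as np

def multiscale_input(image, row, col, sizes=(3, 5, 9)):
    """Collect the neighbourhoods of one pixel at several window sizes and
    concatenate them into a single input vector for the restoration MLP."""
    pad = max(sizes) // 2
    padded = np.pad(image, pad, mode="reflect")
    r, c = row + pad, col + pad
    pieces = []
    for s in sizes:
        half = s // 2
        pieces.append(padded[r - half:r + half + 1, c - half:c + half + 1].ravel())
    return np.concatenate(pieces)

# Toy degraded image; each pixel of the restored image would be predicted
# by an MLP fed with this multiscale vector and trained against the
# corresponding non-degraded target value.
img = np.random.default_rng(0).random((32, 32))
print(multiscale_input(img, 10, 10).shape)   # 9 + 25 + 81 = 115 features
```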

  18. Inverting radiometric measurements with a neural network

    Science.gov (United States)

    Measure, Edward M.; Yee, Young P.; Balding, Jeff M.; Watkins, Wendell R.

    1992-02-01

    A neural network scheme for retrieving remotely sensed vertical temperature profiles was applied to observed ground-based radiometer measurements. The neural network used microwave radiance measurements and surface measurements of temperature and pressure as inputs. Because the microwave radiometer is capable of measuring 4 oxygen channels at 5 different elevation angles (9, 15, 25, 40, and 90 degs), 20 microwave measurements are potentially available. Because these measurements have considerable redundancy, a neural network was tested that accepts as inputs only the microwave measurements taken at 53.88 GHz, 40 deg; 57.45 GHz, 40 deg; and 57.45 GHz, 90 deg. The primary test site was located at White Sands Missile Range (WSMR), NM. Results are compared with measurements made simultaneously with balloon-borne radiosonde instruments and with radiometric temperature retrievals made using more conventional retrieval algorithms. The neural network was trained using a Widrow-Hoff delta rule procedure. Functions of date, to include season dependence in the retrieval process, and functions of time, to include diurnal effects, were used as inputs to the neural network.
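
    The Widrow-Hoff delta rule mentioned above is the classic least-mean-squares update for a linear unit. The sketch below is a generic implementation on synthetic data; the feature roles suggested in the comments (radiances plus surface temperature and pressure) are only an assumption about how the radiometer inputs would map onto it.

```python
import numpy as np

def widrow_hoff(X, d, lr=0.01, epochs=100):
    """Train a single linear unit with the Widrow-Hoff (LMS) delta rule.

    For each presented pattern x with desired output d, the weights are
    nudged along the error:  w <- w + lr * (d - w.x) * x.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            error = target - w @ x
            w += lr * error * x
    return w

# Toy usage: the five inputs could stand in for three microwave radiances
# plus surface temperature and pressure; the target is the temperature at
# one retrieval level.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -0.5, 0.3, 0.0, 2.0])
d = X @ true_w
print(widrow_hoff(X, d))
```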

  19. Efficient Cancer Detection Using Multiple Neural Networks.

    Science.gov (United States)

    Shell, John; Gregory, William D

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNNs). A data set consisting of 180 malignant and 180 benign breast tissue data files, from an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, was utilized as neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results, showing a consistent sensitivity of 100% and a specificity of 100%. This implementation successfully relied solely on statistical variation between the benign and malignant impedance data and an intricate neural network configuration. This device and BNN implementation provide a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast, non-invasive, and accurate assessment of biopsied or sectioned excised tissue in various clinical settings.

  20. Decoding of Human Movements Based on Deep Brain Local Field Potentials Using Ensemble Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohammad S. Islam

    2017-01-01

    Full Text Available Decoding neural activities related to voluntary and involuntary movements is fundamental to understanding human brain motor circuits and neuromotor disorders and can lead to the development of neuromotor prosthetic devices for neurorehabilitation. This study explores using recorded deep brain local field potentials (LFPs) for robust movement decoding of Parkinson's disease (PD) and Dystonia patients. The LFP data from voluntary movement activities such as left and right hand index finger clicking were recorded from patients who underwent surgeries for implantation of deep brain stimulation electrodes. Movement-related LFP signal features were extracted by computing instantaneous power related to motor response in different neural frequency bands. An innovative neural network ensemble classifier has been proposed and developed for accurate prediction of finger movement and its forthcoming laterality. The ensemble classifier contains three base neural network classifiers, namely, feedforward, radial basis, and probabilistic neural networks. The majority voting rule is used to fuse the decisions of the three base classifiers to generate the final decision of the ensemble classifier. The overall decoding performance reaches a level of agreement (kappa value) of about 0.729 ± 0.16 for decoding movement from the resting state and about 0.671 ± 0.14 for decoding left and right visually cued movements.
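
    The decision fusion described above reduces to a per-sample majority vote over the labels produced by the three base networks. A minimal sketch, with made-up labels standing in for the classifier outputs:

```python
import numpy as np

def majority_vote(predictions):
    """Fuse the decisions of several base classifiers by majority vote.

    `predictions` is a list of 1-D integer label arrays, one per base
    classifier (e.g. feedforward, radial basis and probabilistic nets);
    the returned array holds the most frequent label per sample.
    """
    stacked = np.stack(predictions)                 # (n_classifiers, n_samples)
    fused = []
    for column in stacked.T:
        labels, counts = np.unique(column, return_counts=True)
        fused.append(labels[np.argmax(counts)])
    return np.array(fused)

# Hypothetical decisions of three base networks on five trials
# (0 = rest, 1 = left movement, 2 = right movement).
ff  = np.array([1, 2, 0, 1, 2])
rbf = np.array([1, 2, 2, 1, 0])
pnn = np.array([0, 2, 0, 1, 2])
print(majority_vote([ff, rbf, pnn]))   # -> [1 2 0 1 2]
```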

  1. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm, i.e., neural signatures to identify each unit in the network, local information contextualization during processing, and multicoding strategies for information propagation regarding the origin and the content of the data, to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  2. Foreign currency rate forecasting using neural networks

    Science.gov (United States)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates among the USD, GBP, and DEM. We approach the problem within a time-series analysis framework, where future exchange rates are forecast solely from past exchange rates. This relies on the belief that past and future prices are closely related and interdependent. We present the results of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and we present the results of the prediction over several periods of time.
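
    A hedged sketch of the preferred input scheme described above: the network is trained on windows of exchange-rate differences rather than raw prices, and a predicted difference is added back to the last known rate. The synthetic series, window length and hidden-layer size are illustrative assumptions, and scikit-learn's generic MLP stands in for whatever architecture the authors used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, lag):
    """Turn a 1-D series of rate differences into (lagged inputs, next value)."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

# Hypothetical daily USD-GBP rates; the network sees first differences
# rather than raw prices, as the abstract recommends.
rng = np.random.default_rng(3)
rates = 0.78 + np.cumsum(rng.normal(0, 0.002, 1000))
diffs = np.diff(rates)

X, y = make_windows(diffs, lag=5)
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
net.fit(X[:800], y[:800])

# Forecast the held-out differences and map them back to price space by
# adding each prediction to the last observed rate of its window.
predicted_diff = net.predict(X[800:])
predicted_rate = rates[805:-1] + predicted_diff
print(predicted_rate[:3])
```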

  3. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to that of conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  4. Deep Neural Network Detects Quantum Phase Transition

    Science.gov (United States)

    Arai, Shunta; Ohzeki, Masayuki; Tanaka, Kazuyuki

    2018-03-01

    We detect the quantum phase transition of a quantum many-body system by mapping the observed results of the quantum state onto a neural network. In the present study, we utilized the simplest case of a quantum many-body system, namely a one-dimensional chain of Ising spins with the transverse Ising model. We prepared several spin configurations, which were obtained using repeated observations of the model for a particular strength of the transverse field, as input data for the neural network. Although the proposed method can be employed using experimental observations of quantum many-body systems, we tested our technique with spin configurations generated by a quantum Monte Carlo simulation without initial relaxation. The neural network successfully identified the strength of the transverse field from the spin configurations alone, leading to consistent estimations of the critical point of our model, Γc = J.

  5. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.

  6. Open quantum generalisation of Hopfield neural networks

    Science.gov (United States)

    Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.

    2018-03-01

    We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.

  7. Reconstruction of neutron spectra through neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2003-01-01

    A neural network has been used to reconstruct neutron spectra starting from the count rates of the detectors of the Bonner sphere spectrometric system. A group of 56 neutron spectra was selected to calculate the count rates that they would produce in a Bonner sphere system; with these data and the spectra, the neural network was trained. To test the performance of the net, 12 spectra were used: 6 were taken from the group used for the training, 3 were obtained from mathematical functions, and the other 3 correspond to real spectra. When comparing the original spectra with those reconstructed by the net, we find that the net performs poorly when reconstructing monoenergetic spectra; we attribute this to the characteristics of the spectra used for training the neural network. For the other groups of spectra, however, the results of the net agree with the expected ones. (Author)

  8. Eddy Current Flaw Characterization Using Neural Networks

    International Nuclear Information System (INIS)

    Song, S. J.; Park, H. J.; Shin, Y. K.

    1998-01-01

    Determination of the location, shape and size of a flaw from its eddy current testing signal is one of the fundamental issues in eddy current nondestructive evaluation of steam generator tubes. Here, we propose an approach to this problem: the inversion of eddy current flaw signals using neural networks trained on finite element model-based synthetic signatures. A total of 216 eddy current signals from four different types of axisymmetric flaws in tubes are generated by finite element models whose accuracy is experimentally validated. From each simulated signature, a total of 24 eddy current features are extracted, and among them 13 features are finally selected for flaw characterization. Based on these features, probabilistic neural networks discriminate flaws into four different types according to location and shape, and backpropagation neural networks subsequently determine the size parameters of the discriminated flaw.

  9. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

    This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved time of day, day of year, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality compared with persistence when judged by a performance measure based on hit and false-alarm rates.

  10. Cooperative and supportive neural networks

    International Nuclear Information System (INIS)

    Sree Hari Rao, V.; Raja Sekhara Rao, P.

    2007-01-01

    This Letter deals with the concepts of co-operation and support among neurons existing in a network which contribute to their collective capabilities and distributed operations. Activational dynamical properties of these networks are discussed

  11. Convergent dynamics for multistable delayed neural networks

    International Nuclear Information System (INIS)

    Shih, Chih-Wen; Tseng, Jui-Pin

    2008-01-01

    This investigation aims at developing a methodology to establish convergence of dynamics for delayed neural network systems with multiple stable equilibria. The present approach is general and can be applied to several network models. We take the Hopfield-type neural networks with both instantaneous and delayed feedbacks to illustrate the idea. We shall construct the complete dynamical scenario which comprises exactly 2^n stable equilibria and exactly (3^n − 2^n) unstable equilibria for the n-neuron network. In addition, it is shown that every solution of the system converges to one of the equilibria as time tends to infinity. The approach is based on employing the geometrical structure of the network system. Positively invariant sets and componentwise dynamical properties are derived under the geometrical configuration. An iteration scheme is subsequently designed to confirm the convergence of dynamics for the system. Two examples with numerical simulations are arranged to illustrate the present theory

  12. Accident scenario diagnostics with neural networks

    International Nuclear Information System (INIS)

    Guo, Z.

    1992-01-01

    Nuclear power plants are very complex systems. The diagnosis of transients or accident conditions is very difficult because a large amount of information, which is often noisy, intermittent, or even incomplete, needs to be processed in real time. To demonstrate their potential application to nuclear power plants, neural networks are used to monitor the accident scenarios simulated by the training simulator of TVA's Watts Bar Nuclear Power Plant. A self-organization network is used to compress the original data to reduce the total number of training patterns. Different accident scenarios are closely related to different key parameters which distinguish one accident scenario from another. Therefore, the accident scenarios can be monitored by a set of small neural networks, called modular networks, each of which monitors only one assigned accident scenario, to obtain fast training and recall. Sensitivity analysis is applied to select proper input variables for the modular networks

  13. In vivo temporal property of GABAergic neural transmission in collateral feed-forward inhibition system of hippocampal-prefrontal pathway.

    Science.gov (United States)

    Takita, Masatoshi; Kuramochi, Masahito; Izaki, Yoshinori; Ohtomi, Michiko

    2007-05-30

    Anatomical evidence suggests that rat CA1 hippocampal afferents collaterally innervate excitatory projecting pyramidal neurons and inhibitory interneurons, creating a disynaptic, feed-forward inhibition microcircuit in the medial prefrontal cortex (mPFC). We investigated the temporal relationship between the frequency of paired synaptic transmission and gamma-aminobutyric acid (GABA)ergic receptor-mediated modulation of the microcircuit in vivo under urethane anesthesia. Local perfusions of a GABAa antagonist (-)-bicuculline into the mPFC via microdialysis resulted in a statistically significant disinhibitory effect on intrinsic GABA action, increasing the first and second mPFC responses following hippocampal paired stimulation at interstimulus intervals of 100-200 ms, but not those at 25-50 ms. This (-)-bicuculline-induced disinhibition was compensated by the GABAa agonist muscimol, which itself did not attenuate the intrinsic oscillation of the local field potentials. The perfusion of a sub-minimal concentration of GABAb agonist (R)-baclofen slightly enhanced the synaptic transmission, regardless of the interstimulus interval. In addition to the tonic control by spontaneous fast-spiking GABAergic neurons, it is clear that the sequential transmission of the hippocampal-mPFC pathway can phasically drive the collateral feed-forward inhibition system through activation of a GABAa receptor, bringing an active signal filter to the various types of impulse trains that enter the mPFC from the hippocampus in vivo.

  14. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding to improve fiber quality. To demonstrate the applicability of this approach, this research used the evaluation data of 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated by means of a weighted average of the score (1 to 5) determined for each HVI characteristic evaluated, according to industry standards. The artificial neural networks presented a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index, such that when fiber length was used together with the short fiber index, fiber maturity, and micronaire index, the artificial neural networks gave better results than when only fiber length and the previous associations were used. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better classification of the genotypes. From the results obtained in the present study, it was verified that artificial neural networks have great potential to be used in the different stages of a cotton genetic improvement program, aiming at improving the fiber quality of future cultivars.

  15. Recurrent fuzzy neural network by using feedback error learning approaches for LFC in interconnected power system

    International Nuclear Information System (INIS)

    Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari

    2009-01-01

    In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for the load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in the feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. a proportional, integral and derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system, and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, when the INFC has learned the inverse of the controlled system, the reference signal is tracked properly. Generally, the CFC is designed at nominal operating conditions of the system and, therefore, fails to provide the best control performance as well as global stability over a wide range of changes in the operating conditions of the system. So, in this study a supervised controller (SC), a lookup table based controller, is introduced for tuning the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to these operating conditions. Moreover, for improving the performance of the overall system, a recurrent fuzzy neural network (RFNN) is adopted in the INFC instead of the conventional neural network used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through some performance indices
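
    The core FEL idea, that the feedback controller's output serves as the training error for the feedforward stage so that the feedforward stage gradually learns the plant inverse, can be shown on a toy loop. Everything below (the first-order plant, the PI gains, and the linear feedforward stage standing in for the RFNN) is an illustrative assumption, not the controller of the paper.

```python
import numpy as np

# Minimal feedback-error-learning loop, assuming a toy first-order plant
# y(t+1) = 0.9*y(t) + 0.5*u(t); a linear feedforward stage stands in for
# the intelligent (RFNN) controller of the abstract.
a_p, b_p = 0.9, 0.5
kp, ki = 2.0, 0.3                   # conventional feedback (PI) controller
w = np.zeros(2)                     # feedforward weights on [r(t+1), r(t)]
lr = 0.02

y, integ = 0.0, 0.0
t_axis = np.arange(500)
r = np.sin(0.05 * t_axis)

for t in t_axis[:-1]:
    x = np.array([r[t + 1], r[t]])  # reference features seen by the feedforward stage
    u_ff = w @ x
    e = r[t] - y
    integ += e
    u_fb = kp * e + ki * integ
    y = a_p * y + b_p * (u_ff + u_fb)
    # FEL update: the feedback controller's output is the error signal that
    # teaches the feedforward stage the plant inverse; as learning proceeds,
    # the feedback contribution shrinks.
    w += lr * u_fb * x

print("learned feedforward weights:", w)
```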

  16. Gas Turbine Engine Control Design Using Fuzzy Logic and Neural Networks

    Directory of Open Access Journals (Sweden)

    M. Bazazzadeh

    2011-01-01

    Full Text Available This paper presents a successful approach to designing a Fuzzy Logic Controller (FLC) for a specific jet engine. First, a suitable mathematical model for the jet engine is developed with the aid of SIMULINK. Then, by applying different reasonable fuel flow functions to the engine model, some important engine-transient operation parameters (such as thrust, compressor surge margin, turbine inlet temperature, etc.) are obtained. These parameters provide a valuable database, which is used to train a neural network. In the second step, a feedforward multilayer perceptron neural network is designed and trained on this database to determine a number of reasonable fuel flow functions for various engine acceleration operations. These functions are used to define the desired fuzzy fuel functions. Indeed, the neural networks are used as an effective method to define the optimum fuzzy fuel functions. In the next step, we propose an FLC using the engine simulation model and the neural network results. The proposed control scheme is verified by computer simulation using the designed engine model. The simulation results of the engine model with the FLC illustrate that the proposed controller achieves the desired performance and stability.

  17. Integration of Online Parameter Identification and Neural Network for In-Flight Adaptive Control

    Science.gov (United States)

    Hageman, Jacob J.; Smith, Mark S.; Stachowiak, Susan

    2003-01-01

    An indirect adaptive system has been constructed for robust control of an aircraft with uncertain aerodynamic characteristics. This system consists of a multilayer perceptron pre-trained neural network, online stability and control derivative identification, a dynamic cell structure online learning neural network, and a model following control system based on the stochastic optimal feedforward and feedback technique. The pre-trained neural network and model following control system have been flight-tested, but the online parameter identification and online learning neural network are new additions used for in-flight adaptation of the control system model. A description of the modification and integration of these two stand-alone software packages into the complete system in preparation for initial flight tests is presented. Open-loop results using both simulation and flight data, as well as closed-loop performance of the complete system in a nonlinear, six-degree-of-freedom, flight validated simulation, are analyzed. Results show that this online learning system, in contrast to the nonlearning system, has the ability to adapt to changes in aerodynamic characteristics in a real-time, closed-loop, piloted simulation, resulting in improved flying qualities.

  18. On the complexity of neural network classifiers: a comparison between shallow and deep architectures.

    Science.gov (United States)

    Bianchini, Monica; Scarselli, Franco

    2014-08-01

    Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts their ability to implement high-complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the activation function used. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.

  19. Generating Seismograms with Deep Neural Networks

    Science.gov (United States)

    Krischer, L.; Fichtner, A.

    2017-12-01

    The recent surge of successful uses of deep neural networks in computer vision, speech recognition, and natural language processing, mainly enabled by the availability of fast GPUs and extremely large data sets, is starting to see many applications across all natural sciences. In seismology these are largely confined to classification and discrimination tasks. In this contribution we explore the use of deep neural networks for another class of problems: so called generative models. Generative modelling is a branch of statistics concerned with generating new observed data samples, usually by drawing from some underlying probability distribution. Samples with specific attributes can be generated by conditioning on input variables. In this work we condition on seismic source (mechanism and location) and receiver (location) parameters to generate multi-component seismograms. The deep neural networks are trained on synthetic data calculated with Instaseis (http://instaseis.net, van Driel et al. (2015)) and waveforms from the global ShakeMovie project (http://global.shakemovie.princeton.edu, Tromp et al. (2010)). The underlying radially symmetric or smoothly three dimensional Earth structures result in comparatively small waveform differences from similar events or at close receivers and the networks learn to interpolate between training data samples. Of particular importance is the chosen misfit functional. Generative adversarial networks (Goodfellow et al. (2014)) implement a system in which two networks compete: the generator network creates samples and the discriminator network distinguishes these from the true training examples. Both are trained in an adversarial fashion until the discriminator can no longer distinguish between generated and real samples. We show how this can be applied to seismograms and in particular how it compares to networks trained with more conventional misfit metrics. Last but not least we attempt to shed some light on the black-box nature of

  20. Neural-network hybrid control for antilock braking systems.

    Science.gov (United States)

    Lin, Chih-Min; Hsu, C F

    2003-01-01

    Antilock braking systems are designed to maximize wheel traction by preventing the wheels from locking during braking, while also maintaining adequate vehicle steerability; however, the performance is often degraded under harsh road conditions. In this paper, a hybrid control system with a recurrent neural network (RNN) observer is developed for antilock braking systems. This hybrid control system comprises an ideal controller and a compensation controller. The ideal controller, containing an RNN uncertainty observer, is the principal controller; and the compensation controller is a compensator for the difference between the system uncertainty and the estimated uncertainty. Since the RNN has capabilities superior to those of the feedforward NN for dynamic response, it is utilized for the uncertainty observer. The Taylor linearization technique is employed to increase the learning ability of the RNN. In addition, the on-line parameter adaptation laws are derived based on a Lyapunov function, so the stability of the system can be guaranteed. Simulations are performed to demonstrate the effectiveness of the proposed NN hybrid control system for antilock braking control under various road conditions.

  1. A Telescopic Binary Learning Machine for Training Neural Networks.

    Science.gov (United States)

    Brunato, Mauro; Battiti, Roberto

    2017-03-01

    This paper proposes a new algorithm based on multiscale stochastic local search with binary representation for training neural networks [binary learning machine (BLM)]. We study the effects of neighborhood evaluation strategies, the effect of the number of bits per weight and that of the maximum weight range used for mapping binary strings to real values. Following this preliminary investigation, we propose a telescopic multiscale version of local search, where the number of bits is increased in an adaptive manner, leading to a faster search and to local minima of better quality. An analysis related to adapting the number of bits in a dynamic way is presented. The control on the number of bits, which happens in a natural manner in the proposed method, is effective to increase the generalization performance. The learning dynamics are discussed and validated on a highly nonlinear artificial problem and on real-world tasks in many application domains; BLM is finally applied to a problem requiring either feedforward or recurrent architectures for feedback control.

  2. Unsupervised neural networks for solving Troesch's problem

    International Nuclear Information System (INIS)

    Raja Muhammad Asif Zahoor

    2014-01-01

    In this study, stochastic computational intelligence techniques are presented for the solution of Troesch's boundary value problem. The proposed stochastic solvers use the competency of a feed-forward artificial neural network for mathematical modeling of the problem in an unsupervised manner, whereas the learning of the unknown parameters is carried out with local and global optimization methods as well as their combinations. Genetic algorithm (GA) and pattern search (PS) techniques are used as the global search methods, and the interior point method (IPM) is used for an efficient local search. Combinations of techniques, such as GA hybridized with IPM (GA-IPM) and PS hybridized with IPM (PS-IPM), are also applied to solve different forms of the equation. A comparison of the proposed results obtained from GA, PS, IPM, PS-IPM and GA-IPM has been made with the standard solutions, including the well-known analytic techniques of the Adomian decomposition method, the variational iteration method and the homotopy perturbation method. The reliability and effectiveness of the proposed schemes, in terms of accuracy and convergence, are evaluated from the results of statistical analysis based on a sufficiently large number of independent runs. (interdisciplinary physics and related areas of science and technology)
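
    The unsupervised formulation can be sketched with a small feed-forward trial solution that satisfies the boundary conditions of Troesch's problem (y'' = n sinh(n y), y(0) = 0, y(1) = 1) by construction and minimises the mean squared ODE residual at collocation points. A local BFGS optimiser stands in for the GA/PS/IPM combinations of the study, and the network size and parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

n_troesch = 1.0                       # problem parameter in y'' = n*sinh(n*y)
xs = np.linspace(0.0, 1.0, 41)        # collocation points
h = 1e-4                              # step for the numerical second derivative

def net(x, p):
    """Tiny feed-forward net: 1 input, 5 tanh hidden units, 1 linear output."""
    w1, b1, w2 = p[:5], p[5:10], p[10:15]
    return np.tanh(np.outer(x, w1) + b1) @ w2

def trial(x, p):
    # x + x*(1-x)*net(x) satisfies y(0)=0 and y(1)=1 by construction.
    return x + x * (1.0 - x) * net(x, p)

def residual(p):
    y = trial(xs, p)
    y2 = (trial(xs + h, p) - 2.0 * y + trial(xs - h, p)) / h**2
    return np.mean((y2 - n_troesch * np.sinh(n_troesch * y)) ** 2)

# A local optimiser stands in for the GA/PS/IPM combinations of the paper.
p0 = 0.1 * np.random.default_rng(0).standard_normal(15)
result = minimize(residual, p0, method="BFGS")
print("residual after training:", result.fun)
```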

  3. Neural networks prove effective at NOx reduction

    Energy Technology Data Exchange (ETDEWEB)

    Radl, B.J. [Pegasus Technologies, Mentor, OH (USA)

    2000-05-01

    The availability of low cost computer hardware and software is opening up possibilities for the use of artificial intelligence concepts, notably neural networks, in power plant control applications, delivering lower costs, greater efficiencies and reduced emissions. One example of a neural network system is the NeuSIGHT combustion optimisation system, developed by Pegasus Technologies, a subsidiary of KFx Inc. It can help reduce NOx emissions, improve heat rate and enable either deferral or elimination of capital expenditures on other NOx control technologies, such as low NOx burners, SNCR and SCR. This paper illustrates these benefits using three recent case studies. 4 figs.

  4. Top tagging with deep neural networks [Vidyo

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image-based techniques or multivariate approaches using high level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  5. Avoiding object by robot using neural network

    International Nuclear Information System (INIS)

    Prasetijo, D.W.

    1997-01-01

    A self-controlling robot is necessary in robot applications where operator control is difficult. Serial methods, such as processing on a von Neumann computer, are difficult to apply to a self-controlling robot. In this research, a neural network system for robotic control was developed by expanding the performance of a SCARA robot. It was shown that the SCARA with the neural network system applied can avoid blocking objects without being influenced by the number and density of the blocking objects, or by the departure and destination points. The robot developed in this study can also control its motion by itself

  6. Alpha spectral analysis via artificial neural networks

    International Nuclear Information System (INIS)

    Kangas, L.J.; Hashem, S.; Keller, P.E.; Kouzes, R.T.; Troyer, G.L.

    1994-10-01

    An artificial neural network system that assigns quality factors to alpha particle energy spectra is discussed. The alpha energy spectra are used to detect plutonium contamination in the work environment. The quality factors represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with a quality factor by an expert and used in training the artificial neural network expert system. The investigation shows that the expert knowledge of alpha spectra quality factors can be transferred to an ANN system

  7. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University, which contains quite a high degree of variability in expression, pose, and facial details.

  8. Target recognition based on convolutional neural network

    Science.gov (United States)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    One of the important parts of object target recognition is feature extraction, which can be divided into manual (hand-crafted) and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity makes over-fitting highly likely. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained with a layer-by-layer convolutional neural network (CNN), which can extract features from lower layers up to higher layers. The resulting features are more discriminative, which benefits object target recognition.

  9. Livermore Big Artificial Neural Network Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  10. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. At the same time, large errors still existed, which indicates that our approach still needs to be improved.

  11. Neural network approach to radiologic lesion detection

    International Nuclear Information System (INIS)

    Newman, F.D.; Raff, U.; Stroud, D.

    1989-01-01

    An area of artificial intelligence that has gained recent attention is the neural network approach to pattern recognition. The authors explore the use of neural networks in radiologic lesion detection with what is known in the literature as the novelty filter. This filter uses a linear model; images of normal patterns become training vectors and are stored as columns of a matrix. An image of an abnormal pattern is introduced and the abnormality or novelty is extracted. A VAX 750 was used to encode the novelty filter, and two experiments have been examined
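
    The novelty filter described above is linear: normal images are stored as columns of a matrix, and the novelty of a new image is its component orthogonal to the subspace spanned by those columns. A minimal numpy sketch on synthetic images (sizes and data are placeholders, not the radiologic data of the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal patterns become training vectors stored as columns of F.
normal_images = rng.random((64 * 64, 10))        # 10 flattened "normal" images
F_pinv = np.linalg.pinv(normal_images)

def novelty(image):
    """Return the part of `image` that the normal subspace cannot explain."""
    projection = normal_images @ (F_pinv @ image)
    return image - projection

# Stand-in for an abnormal image; large residuals flag candidate lesions.
test = rng.random(64 * 64)
lesion_map = np.abs(novelty(test)).reshape(64, 64)
print(lesion_map.max())
```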

  12. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  13. A Newton-type neural network learning algorithm

    International Nuclear Information System (INIS)

    Ivanov, V.V.; Puzynin, I.V.; Purehvdorzh, B.

    1993-01-01

    First- and second-order learning methods for feed-forward multilayer networks are considered. A Newton-type algorithm is proposed and compared with the common back-propagation algorithm. It is shown that the proposed algorithm provides better learning quality. Some recommendations for their usage are given. 11 refs.; 1 fig.; 1 tab

  14. Neural network segmentation of magnetic resonance images

    International Nuclear Information System (INIS)

    Frederick, B.

    1990-01-01

    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network; by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. This paper reports that a neural network classifier for image segmentation was implemented on a Sun 4/60 and was tested on the task of classifying tissues in canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone, and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier

  15. Neutron spectrum unfolding using neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2004-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources, and reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the corresponding spectrum was used as output during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer, and 31 neurons in the output layer. After training, the network was tested with the Bonner spheres count rates produced by twelve neutron spectra. The network allows the neutron spectrum to be unfolded from count rates measured with Bonner spheres. Good results are obtained when the test count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)
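
    A hedged sketch of the quoted 7-56-31 architecture: count rates from seven spheres in, a 31-group spectrum out. The response matrix and spectra below are random placeholders rather than the UTA4 matrix and the IAEA compilation, and scikit-learn's MLP stands in for the original training setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in data: 7 Bonner-sphere count rates per measurement and a
# 31-group spectrum as the target (synthetic values only).
R = rng.random((31, 7))                  # hypothetical response matrix
spectra = rng.random((129, 31))          # hypothetical 31-group spectra
counts = spectra @ R                     # expected count rates (network inputs)

# 7 input nodes, one hidden layer of 56 neurons, 31 output neurons,
# matching the architecture quoted in the abstract.
net = MLPRegressor(hidden_layer_sizes=(56,), max_iter=10000, random_state=0)
net.fit(counts, spectra)

unfolded = net.predict(counts[:1])       # unfold one measured set of count rates
print(unfolded.shape)                    # (1, 31)
```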

  16. Analysis of Recurrent Analog Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    1998-06-01

    Full Text Available In this paper, an original rigorous analysis of recurrent analog neural networks, which are built from op-amp neurons, is presented. The analysis, which is based on an approximate model of the operational amplifier, reveals causes of possible non-stable states and enables the convergence properties of the network to be determined. The results of the analysis are discussed with a view to enabling the development of robust and fast analog networks. In the analysis, special attention is paid to examining the influence of real circuit elements and of the statistical parameters of the processed signals on the parameters of the network.

  17. Statistical physics of interacting neural networks

    Science.gov (United States)

    Kinzel, Wolfgang; Metzler, Richard; Kanter, Ido

    2001-12-01

    Recent results on the statistical physics of time series generation and prediction are presented. A neural network is trained on quasi-periodic and chaotic sequences and overlaps to the sequence generator as well as the prediction errors are calculated numerically. For each network there exists a sequence for which it completely fails to make predictions. Two interacting networks show a transition to perfect synchronization. A pool of interacting networks shows good coordination in the minority game-a model of competition in a closed market. Finally, as a demonstration, a perceptron predicts bit sequences produced by human beings.

  18. A robust neural network-based approach for microseismic event detection

    KAUST Repository

    Akram, Jubran

    2017-08-17

    We present an artificial neural network based approach for robust event detection from low S/N waveforms. We use a feed-forward network with a single hidden layer that is tuned on a training dataset and later applied to the entire example dataset for event detection. The input features used include the average of absolute amplitudes, variance, energy ratio and polarization rectilinearity. These features are calculated in a moving window of the same length over the entire waveform. The output is set as a user-specified relative probability curve, which provides a robust way of distinguishing between weak and strong events. An optimal network is selected by studying the weight-based saliency and the effect of the number of neurons on the predicted results. Using synthetic data examples, we demonstrate that this approach is effective in detecting weaker events and reduces the number of false positives.
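
    The sketch below illustrates moving-window feature extraction of the kind described above for a single trace; the window length and the exact energy-ratio definition are assumptions, and polarization rectilinearity, which requires three-component data, is omitted.

        # Illustrative moving-window features for one trace (window length and the
        # energy-ratio definition are assumed; rectilinearity is omitted here).
        import numpy as np

        def window_features(trace, win=200):
            feats = []
            for i in range(0, len(trace) - 2 * win):
                w = trace[i + win:i + 2 * win]      # current window
                prev = trace[i:i + win]             # preceding window of the same length
                mean_abs = np.mean(np.abs(w))
                var = np.var(w)
                energy_ratio = np.sum(w**2) / (np.sum(prev**2) + 1e-12)
                feats.append((mean_abs, var, energy_ratio))
            return np.array(feats)

        trace = np.random.randn(5000)               # placeholder microseismic waveform
        features = window_features(trace)           # rows feed the feed-forward detector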

  19. Modeling the Effect of Crude Oil Impacted Sand on the Properties of Concrete Using Artificial Neural Networks

    OpenAIRE

    W. O. Ajagbe; A. A. Ganiyu; M. O. Owoyele; J. O. Labiran

    2013-01-01

    A feedforward artificial neural network (ANN) was used to predict the compressive strength of concrete made from crude oil contaminated soil samples at 3, 7, 14, 28, 56, 84, and 168 days at different degrees of contamination of 2.5%, 5%, 10%, 15%, 20% and 25%. A total of 49 samples were used in the training, testing, and prediction phases of the modeling in the ratio 32 : 11 : 7. The TANH activation function was used and the maximum number of iterations was limited to 20,...

  20. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software and hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, the researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. They present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  1. Wave transmission prediction of multilayer floating breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Hegde, A.V.

    In the present study, an artificial neural network method has been applied for wave transmission prediction of multilayer floating breakwater. Two neural network models are constructed based on the parameters which influence the wave transmission...

  2. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In the present study, an artificial neural network method has been applied to predict the stability of berm breakwaters. Four neural network models are constructed based on the parameters which influence the stability of breakwater. Training...

  3. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  4. Stability of Neutral Fractional Neural Networks with Delay

    Institute of Scientific and Technical Information of China (English)

    LI Yan; JIANG Wei; HU Bei-bei

    2016-01-01

    This paper studies the stability of neutral fractional neural networks with delay. By introducing a suitable norm and using uniform stability arguments, a sufficient condition for the uniform stability of neutral fractional neural networks with delay is obtained.

  5. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  6. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Methods: Artificial neural network (ANN) models, including general regression neural network (GRNN) and multi-layer ... N-hexane (HPLC grade) was purchased from Fisher Scientific. ... Simultaneous Quantification of Seven Flavonoids in ...

  7. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    ... Deep Simple Gated Unit (DSGU) and Simple Gated Unit (SGU), which are structures for learning long-term dependencies. Compared to traditional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), both structures require fewer parameters and less computation time in sequence classification tasks. Unlike GRU and LSTM ...

  8. Neural networks of human nature and nurture

    Directory of Open Access Journals (Sweden)

    Daniel S. Levine

    2009-11-01

    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  9. A short-term neural network memory

    Energy Technology Data Exchange (ETDEWEB)

    Morris, R.J.T.; Wong, W.S.

    1988-12-01

    Neural network memories with storage prescriptions based on Hebb's rule are known to collapse as more words are stored. By requiring that the most recently stored word be remembered precisely, a new simple short-term neural network memory is obtained and its steady state capacity analyzed and simulated. Comparisons are drawn with Hopfield's method, the delta method of Widrow and Hoff, and the revised marginalist model of Mezard, Nadal, and Toulouse.
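
    For reference, the standard Hebbian outer-product storage prescription that this abstract takes as its starting point is sketched below; the authors' short-term modification is not reproduced, and the network size, pattern count, and single-step recall are illustrative assumptions.

        # Standard Hebb-rule outer-product storage and one-step recall (illustrative).
        import numpy as np

        rng = np.random.default_rng(1)
        N, P = 100, 10
        patterns = rng.choice([-1, 1], size=(P, N))   # P random bipolar words

        W = np.zeros((N, N))
        for xi in patterns:                 # Hebb's rule: accumulate outer products
            W += np.outer(xi, xi) / N
        np.fill_diagonal(W, 0)

        probe = patterns[-1].copy()         # most recently stored word
        recalled = np.sign(W @ probe)       # one synchronous update step
        overlap = np.mean(recalled == patterns[-1])
        print(f"overlap with most recent pattern: {overlap:.2f}")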

  10. Learning-parameter adjustment in neural networks

    Science.gov (United States)

    Heskes, Tom M.; Kappen, Bert

    1992-06-01

    We present a learning-parameter adjustment algorithm, valid for a large class of learning rules in neural-network literature. The algorithm follows directly from a consideration of the statistics of the weights in the network. The characteristic behavior of the algorithm is calculated, both in a fixed and a changing environment. A simple example, Widrow-Hoff learning for statistical classification, serves as an illustration.

  11. Modelling of solar energy potential in Nigeria using an artificial neural network model

    International Nuclear Information System (INIS)

    Fadare, D.A.

    2009-01-01

    In this study, an artificial neural network (ANN) based model for prediction of solar energy potential in Nigeria (lat. 4-14°N, long. 2-15°E) was developed. Standard multilayered, feed-forward, back-propagation neural networks with different architectures were designed using the neural network toolbox for MATLAB. Geographical and meteorological data of 195 cities in Nigeria for a period of 10 years (1983-1993) from the NASA geo-satellite database were used for training and testing the network. Meteorological and geographical data (latitude, longitude, altitude, month, mean sunshine duration, mean temperature, and relative humidity) were used as inputs to the network, while the solar radiation intensity was used as the output of the network. The results show that the correlation coefficients between the ANN predictions and actual mean monthly global solar radiation intensities for training and testing datasets were higher than 90%, thus suggesting a high reliability of the model for evaluation of solar radiation in locations where solar radiation data are not available. The predicted solar radiation values from the model were given in the form of monthly maps. The monthly mean solar radiation potential in the northern and southern regions ranged from 7.01-5.62 to 5.43-3.54 kWh/m²/day, respectively. A graphical user interface (GUI) was developed for the application of the model. The model can be used easily for estimation of solar radiation for preliminary design of solar applications.

  12. Advanced Applications of Neural Networks and Artificial Intelligence: A Review

    OpenAIRE

    Koushal Kumar; Gour Sundar Mitra Thakur

    2012-01-01

    Artificial Neural Networks are a branch of Artificial Intelligence and have been accepted as a new computing technology in computer science. This paper reviews the field of Artificial Intelligence, focusing on recent applications that use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretation ability of data. Artificial Neural Networks is c...

  13. Neural network application to diesel generator diagnostics

    International Nuclear Information System (INIS)

    Logan, K.P.

    1990-01-01

    Diagnostic problems typically begin with the observation of some system behavior which is recognized as a deviation from the expected. The fundamental underlying process is one involving pattern matching of observed symptoms to a set of compiled symptoms belonging to a fault-symptom mapping. Pattern recognition is often relied upon for initial fault detection and diagnosis. Parallel distributed processing (PDP) models employing neural network paradigms are known to be good pattern recognition devices. This paper describes the application of neural network processing techniques to the malfunction diagnosis of subsystems within a typical diesel generator configuration. Neural network models employing backpropagation learning were developed to correctly recognize fault conditions from the input diagnostic symptom patterns pertaining to various engine subsystems. The resulting network models proved to be excellent pattern recognizers for malfunction examples within the training set. The motivation for employing network models in lieu of a rule-based expert system, however, is related to the network's potential for generalizing to malfunctions outside of the training set, as in the case of noisy or partial symptom patterns.

  14. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care increasingly about solving practical problems with information technologies. Along with this, a new subject called Artificial Intelligence (AI) has emerged. One popular research interest within AI is recognition algorithms. This paper introduces one of the most common algorithms for image recognition, the Convolutional Neural Network (CNN). Understanding its theory and structure is of great significance for every scholar interested in this field. A Convolutional Neural Network is an artificial neural network that combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN provides reliable computational speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Combined with the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-train and to learn deep representations. Basically, BP provides backward feedback for enhancing reliability, while GD drives the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of a CNN, details of each layer, the principles and features of BP and GD, and some practical examples, with a summary at the end.
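
    A minimal convolution pass is sketched below to make the weight-sharing and feature-extraction points concrete; the image size, the Sobel-like kernel, and the unit stride are illustrative assumptions, not details from the paper.

        # A minimal 2-D convolution illustrating weight sharing and feature extraction.
        import numpy as np

        image = np.random.rand(8, 8)                 # placeholder grey-scale image
        kernel = np.array([[1.,  0., -1.],           # one shared 3x3 filter
                           [2.,  0., -2.],
                           [1.,  0., -1.]])          # (a Sobel-like edge detector)

        H = image.shape[0] - kernel.shape[0] + 1
        W = image.shape[1] - kernel.shape[1] + 1
        feature_map = np.zeros((H, W))
        for i in range(H):                           # the same weights are reused at
            for j in range(W):                       # every position: weight sharing
                patch = image[i:i + 3, j:j + 3]
                feature_map[i, j] = np.sum(patch * kernel)

        # In a CNN, gradient descent updates `kernel` using gradients obtained by
        # backpropagating the loss through every position of the feature map.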

  15. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  16. Neural networks to predict exosphere temperature corrections

    Science.gov (United States)

    Choury, Anna; Bruinsma, Sean; Schaeffer, Philippe

    2013-10-01

    Precise orbit prediction requires a forecast of the atmospheric drag force with a high degree of accuracy. Artificial neural networks are universal approximators derived from artificial intelligence and are widely used for prediction. This paper presents an artificial neural network method for prediction of thermosphere density by forecasting exospheric temperature, which will be used by the semiempirical thermosphere Drag Temperature Model (DTM) currently under development. Artificial neural networks have been shown to be effective and robust forecasting models for temperature prediction. The proposed model can be used for any mission from which temperature can be deduced accurately, i.e., it does not require mission-specific training. Although the primary goal of the study was to create a model for 1-day-ahead forecasts, the proposed architecture has been generalized to 2- and 3-day predictions as well. The impact of artificial neural network predictions has been quantified for the low-orbiting satellite Gravity Field and Steady-State Ocean Circulation Explorer in 2011, and an order of magnitude smaller orbit errors were found when compared with orbits propagated using the thermosphere model DTM2009.

  17. Energy Complexity of Recurrent Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří

    2014-01-01

    Roč. 26, č. 5 (2014), s. 953-973 ISSN 0899-7667 R&D Projects: GA ČR GAP202/10/1333 Institutional support: RVO:67985807 Keywords : neural network * finite automaton * energy complexity * optimal size Subject RIV: IN - Informatics, Computer Science Impact factor: 2.207, year: 2014

  18. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  19. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  20. Convolutional Neural Networks - Generalizability and Interpretations

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David

    from data despite it being limited in amount or context representation. Within Machine Learning this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret...

  1. Neural Networks for protein Structure Prediction

    DEFF Research Database (Denmark)

    Bohr, Henrik

    1998-01-01

    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  2. Fast Fingerprint Classification with Deep Neural Network

    DEFF Research Database (Denmark)

    Michelsanti, Daniel; Guichi, Yanis; Ene, Andreea-Daniela

    2018-01-01

    In this work we evaluate the performance of two pre-trained convolutional neural networks fine-tuned on the NIST SD4 benchmark database. The obtained results show that this approach is comparable with other results in the literature, with the advantage of a fast feature extraction stage.

  3. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    This parameter is taken as the threshold of the neuron for learning of the neural network. This algorithm is tested with three benchmark datasets and ... Author affiliations: Om Prakash Patel, Aruna Tiwari, Department of Computer Science and Engineering, Indian Institute of Technology Indore, Indore 453552, India ...

  4. Nonlinear Time Series Analysis via Neural Networks

    Science.gov (United States)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with time series analysis based on neural networks in order to perform effective forex market pattern recognition [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns which repeatedly appear in the market history in order to adapt our trading system behaviour based on them.

  5. Application of neural networks in experimental physics

    International Nuclear Information System (INIS)

    Kisel', I.V.; Neskromnyj, V.N.; Ososkov, G.A.

    1993-01-01

    The theoretical foundations of numerous models of artificial neural networks (ANNs) and their applications to actual problems of associative memory, optimization and pattern recognition are given. This review also covers numerous uses of ANNs in experimental physics, both as hardware realizations of fast triggering systems for event selection and in subsequent software implementations for trajectory data recognition.

  6. Integrating neural network technology and noise analysis

    International Nuclear Information System (INIS)

    Uhrig, R.E.; Oak Ridge National Lab., TN

    1995-01-01

    The integrated use of neural network and noise analysis technologies offers advantages not available through the use of either technology alone. The application of neural network technology to noise analysis offers an opportunity to expand the scope of problems where noise analysis is useful, and there are unique ways in which the integration of these technologies can be used productively. The two-sensor technique, in which the responses of two sensors to an unknown driving source are related, is used to demonstrate such integration. The relationship between the power spectral densities (PSDs) of accelerometer signals is derived theoretically using noise analysis to demonstrate its uniqueness. This relationship is modeled from experimental data using a neural network when the system is working properly, and the actual PSD of one sensor is compared with the PSD of that sensor predicted by the neural network using the PSD of the other sensor as an input. A significant deviation between the actual and predicted PSDs indicates that the system is changing (i.e., failing). Experiments carried out on check valves and bearings illustrate the usefulness of the methodology developed. (Author)
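
    The two-sensor idea is sketched below with synthetic data, assuming a simple FFT-based PSD estimate and a learned spectral ratio in place of the neural network model; the filters, signal lengths, and deviation measure are illustrative assumptions.

        # Sketch of the two-sensor technique: relate the PSDs of two sensors driven by
        # one unknown source, then flag deviations between measured and predicted PSDs.
        import numpy as np

        def psd(x):
            return np.abs(np.fft.rfft(x))**2 / len(x)

        rng = np.random.default_rng(0)
        def measure(n):                               # two sensors, one common source
            src = rng.normal(size=n)
            s1 = np.convolve(src, [1.0, 0.5], mode="same")
            s2 = np.convolve(src, [0.8, -0.3, 0.1], mode="same")
            return s1, s2

        s1, s2 = measure(4096)                        # "healthy" baseline data
        ratio = psd(s2) / (psd(s1) + 1e-12)           # learned spectral relation

        t1, t2 = measure(4096)                        # later monitoring data
        deviation = np.mean(np.abs(psd(t2) - psd(t1) * ratio))
        print(f"mean PSD deviation: {deviation:.3e}") # a large value flags a changing system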

  7. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically, walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of the ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  8. Image Encryption and Chaotic Cellular Neural Network

    Science.gov (United States)

    Peng, Jun; Zhang, Du

    Machine learning has been playing an increasingly important role in information security and assurance. One of the areas of new applications is to design cryptographic systems by using chaotic neural network due to the fact that chaotic systems have several appealing features for information security applications. In this chapter, we describe a novel image encryption algorithm that is based on a chaotic cellular neural network. We start by giving an introduction to the concept of image encryption and its main technologies, and an overview of the chaotic cellular neural network. We then discuss the proposed image encryption algorithm in details, which is followed by a number of security analyses (key space analysis, sensitivity analysis, information entropy analysis and statistical analysis). The comparison with the most recently reported chaos-based image encryption algorithms indicates that the algorithm proposed in this chapter has a better security performance. Finally, we conclude the chapter with possible future work and application prospects of the chaotic cellular neural network in other information assurance and security areas.
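
    A generic chaos-based stream-cipher sketch is given below to illustrate the common principle of turning a chaotic orbit into a keystream; it uses a logistic map rather than the chapter's cellular neural network, and the map parameters, byte quantization, and XOR scheme are illustrative assumptions, not the authors' algorithm.

        # Generic chaos-based image encryption sketch (logistic-map keystream, XOR).
        import numpy as np

        def logistic_keystream(x0, r, n):
            x, ks = x0, np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = r * x * (1 - x)                 # chaotic orbit
                ks[i] = int(x * 256) % 256          # quantise each state to a byte
            return ks

        image = np.random.randint(0, 256, size=(16, 16), dtype=np.uint8)  # placeholder image
        ks = logistic_keystream(x0=0.3141, r=3.99, n=image.size).reshape(image.shape)

        cipher = image ^ ks                          # encrypt: XOR with the keystream
        recovered = cipher ^ ks                      # decrypt with the same key (x0, r)
        assert np.array_equal(recovered, image)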

  9. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market has high-profit and high-risk features, so research on stock market analysis and prediction has received considerable attention. The stock price trend is a complex nonlinear function, so the price has a certain predictability. This article mainly uses an improved BP neural network (BPNN) to set up a stock market prediction model, and…

  10. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A.; Gallego, E.; Lorente, A.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  11. Separable explanations of neural network decisions

    DEFF Research Database (Denmark)

    Rieger, Laura

    2017-01-01

    Deep Taylor Decomposition is a method used to explain neural network decisions. When applying this method to non-dominant classifications, the resulting explanation does not reflect important features for the chosen classification. We propose that this is caused by the dense layers and propose...

  12. Vibration monitoring with artificial neural networks

    International Nuclear Information System (INIS)

    Alguindigue, I.

    1991-01-01

    Vibration monitoring of components in nuclear power plants has been used for a number of years. This technique involves the analysis of vibration data coming from vital components of the plant to detect features which reflect the operational state of machinery. The analysis leads to the identification of potential failures and their causes, and makes it possible to perform efficient preventive maintenance. Early detection is important because it can decrease the probability of catastrophic failures, reduce forced outages, maximize utilization of available assets, increase the life of the plant, and reduce maintenance costs. This paper documents our work on the design of a vibration monitoring methodology based on neural network technology. This technology provides an attractive complement to traditional vibration analysis because of the potential of neural networks to operate in real-time mode and to handle data which may be distorted or noisy. Our efforts have been concentrated on the analysis and classification of vibration signatures collected from operating machinery. Two neural network algorithms were used in our project: the Recirculation algorithm for data compression and the Backpropagation algorithm to perform the actual classification of the patterns. Although this project is in the early stages of development, it indicates that neural networks may provide a viable methodology for monitoring and diagnostics of vibrating components. Our results to date are very encouraging.

  13. Towards semen quality assessment using neural networks

    DEFF Research Database (Denmark)

    Linneberg, Christian; Salamon, P.; Svarer, C.

    1994-01-01

    The paper presents the methodology and results from a neural net based classification of human sperm head morphology. The methodology uses a preprocessing scheme in which invariant Fourier descriptors are lumped into “energy” bands. The resulting networks are pruned using optimal brain damage. Pe...

  14. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    This article presents a novel technique to distinguish between magnetizing inrush current and internal fault current of power transformer. An algorithm has been developed around the theme of the conventional differential protection method in which parallel combination of Probabilistic Neural Network (PNN) and Power ...

  15. A locality aware convolutional neural networks accelerator

    NARCIS (Netherlands)

    Shi, R.; Xu, Z.; Sun, Z.; Peemen, M.C.J.; Li, A.; Corporaal, H.; Wu, D.

    2015-01-01

    The advantages of Convolutional Neural Networks (CNNs) with respect to traditional methods for visual pattern recognition have changed the field of machine vision. The main issue that hinders broad adoption of this technique is the massive computing workload in CNN that prevents real-time

  16. Prediction of geomagnetic storm using neural networks: Comparison of the efficiency of the Satellite and ground-based input parameters

    International Nuclear Information System (INIS)

    Stepanova, Marina; Antonova, Elizavieta; Munos-Uribe, F A; Gordo, S L Gomez; Torres-Sanchez, M V

    2008-01-01

    Different kinds of neural networks have established themselves as an effective tool in the prediction of different geomagnetic indices, including the Dst index, the most important constituent for determining the impact of space weather on human life. Feed-forward networks with one hidden layer are used to forecast the Dst variation, using the solar wind parameters, the polar cap index, and the auroral electrojet index separately as input parameters. It was found that in all three cases the storm-time intervals were predicted much more precisely than quiet-time intervals. The majority of cross-correlation coefficients between predicted and observed Dst of strong geomagnetic storms are situated between 0.8 and 0.9. Changes in the neural network architecture, including the number of nodes in the input and hidden layers and the transfer functions between them, lead to an improvement in network performance of up to 10%.
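
    The forecast skill quoted above can be quantified with the linear cross-correlation coefficient between predicted and observed Dst, as sketched below; the series here are synthetic placeholders, not the study's data.

        # Cross-correlation between observed and predicted Dst-like series (illustrative).
        import numpy as np

        observed = np.random.randn(240).cumsum()           # placeholder hourly Dst-like series
        predicted = observed + np.random.randn(240) * 0.5  # imperfect forecast

        r = np.corrcoef(observed, predicted)[0, 1]
        print(f"cross-correlation coefficient: {r:.2f}")   # 0.8-0.9 indicates good storm prediction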

  17. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    An original application of radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The property of massive parallelism of neural networks is employed for this. The application of RBF neural network for state estimation is investigated by testing its applicability on a IEEE 14 bus ...

  18. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, and the difficulty of building and training the neural network is also reduced. Simulation results for the Logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.

  19. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  20. Tensor Basis Neural Network v. 1.0 (beta)

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-28

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.
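
    The tensor-basis constraint can be sketched as follows: the network outputs scalar coefficients that multiply a fixed set of invariant basis tensors, so the prediction is a linear combination of that basis. The two scalar invariants, three-tensor basis, and tiny untrained coefficient network below are illustrative assumptions, not the package's actual architecture.

        # Sketch of a prediction constrained to an invariant tensor basis.
        import numpy as np

        rng = np.random.default_rng(0)
        S = rng.normal(size=(3, 3)); S = 0.5 * (S + S.T)       # symmetric input tensor
        basis = [np.eye(3), S,
                 S @ S - np.trace(S @ S) / 3 * np.eye(3)]      # invariant basis tensors
        invariants = np.array([np.trace(S @ S), np.trace(S @ S @ S)])  # scalar invariants

        W1 = rng.normal(scale=0.1, size=(2, 8))                # small coefficient network
        W2 = rng.normal(scale=0.1, size=(8, len(basis)))       # (untrained, for illustration)
        g = np.tanh(invariants @ W1) @ W2                      # basis coefficients

        prediction = sum(gi * Ti for gi, Ti in zip(g, basis))  # sits on the invariant basis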