WorldWideScience

Sample records for neural network function

  1. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, requires accurate and consistent control; since neural networks are only approximators of various functions, their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than by scatter in the data. A method is proposed that improves the accuracy achieved during training and the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)
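As a concrete illustration of the setting above, here is a minimal sketch (not the authors' code) of a feed-forward network trained by back-propagation to approximate a non-linear function. The target (sin), the eight tanh hidden units, and the learning rate are all hypothetical choices for the example.

```python
import math
import random

random.seed(0)

# Target: a simple non-linear function, y = sin(x) on [-3, 3]
xs = [i / 10.0 for i in range(-30, 31)]
ys = [math.sin(x) for x in xs]

H = 8  # hidden tanh units (hypothetical size)
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input -> hidden weights
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
b2 = 0.0
lr = 0.01

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse()
for _ in range(2000):
    for x, y in zip(xs, ys):
        h, out = forward(x)
        err = out - y  # gradient of the squared error w.r.t. the output (up to a factor 2)
        for j in range(H):
            dh = err * w2[j] * (1 - h[j] ** 2)  # back-propagated hidden-unit gradient
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * err
loss_after = mse()
```

After training, `loss_after` is far below `loss_before`; the residual that remains is the network's own approximation error, which is the quantity the paper uses to assess accuracy.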

  2. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons; a first type of neuron for generating spike trains; a second type of neuron for generating graded signals to modulate neurons of the first type; supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval; an arrangement of dendritic trees for maximizing generalization; hardwiring for rotation-translation-scaling invariance; and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations exhibit many functions of biological neural networks that have not been achieved by other models in the open literature, and they provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  3. Radial basis function neural network for power system load-flow

    International Nuclear Information System (INIS)

    Karami, A.; Mohammadi, M.S.

    2008-01-01

    This paper presents a method for solving the load-flow problem of the electric power systems using radial basis function (RBF) neural network with a fast hybrid training method. The main idea is that some operating conditions (values) are needed to solve the set of non-linear algebraic equations of load-flow by employing an iterative numerical technique. Therefore, we may view the outputs of a load-flow program as functions of the operating conditions. Indeed, we are faced with a function approximation problem and this can be done by an RBF neural network. The proposed approach has been successfully applied to the 10-machine and 39-bus New England test system. In addition, this method has been compared with that of a multi-layer perceptron (MLP) neural network model. The simulation results show that the RBF neural network is a simpler method to implement and requires less training time to converge than the MLP neural network. (author)

  4. Function approximation of tasks by neural networks

    International Nuclear Information System (INIS)

    Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.

    2008-01-01

For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have recently been seen as attractive tools for developing efficient solutions for many real-world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real-world processes. In a previous contribution, we used a well-known simplified architecture to show that it provides a reasonably efficient, practical and robust multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian, and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem.
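To make the transfer function concrete, here is a small sketch (an illustration, not the paper's experiments) of a wavelet network: a linear combination of Mexican hat units with fixed centers and dilation, whose output weights are fitted by gradient descent. The target function, centers, and dilation are hypothetical.

```python
import math

def mexican_hat(t):
    # psi(t) = (1 - t^2) * exp(-t^2 / 2): the (negated, normalized)
    # second derivative of a Gaussian
    return (1 - t * t) * math.exp(-t * t / 2)

# Fixed wavelet "neurons": centers on a grid, common dilation s (assumed values)
centers = [-2.0, -1.0, 0.0, 1.0, 2.0]
s = 1.0
w = [0.0] * len(centers)

xs = [i / 10.0 for i in range(-25, 26)]
target = [math.sin(x) for x in xs]

def predict(x):
    return sum(wi * mexican_hat((x - c) / s) for wi, c in zip(w, centers))

def mse():
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, target)) / len(xs)

before = mse()
lr = 0.05
for _ in range(500):
    for x, y in zip(xs, target):
        err = predict(x) - y
        # the model is linear in w, so this is plain LMS on the wavelet features
        for j, c in enumerate(centers):
            w[j] -= lr * err * mexican_hat((x - c) / s)
after = mse()
```

Because the model is linear in its output weights, training reduces to least squares over the wavelet features; the localization of the Mexican hat is what makes such bases attractive for multi-frequency analysis.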

  5. Functional neural networks underlying response inhibition in adolescents and adults.

    Science.gov (United States)

    Stevens, Michael C; Kiehl, Kent A; Pearlson, Godfrey D; Calhoun, Vince D

    2007-07-19

    This study provides the first description of neural network dynamics associated with response inhibition in healthy adolescents and adults. Functional and effective connectivity analyses of whole brain hemodynamic activity elicited during performance of a Go/No-Go task were used to identify functionally integrated neural networks and characterize their causal interactions. Three response inhibition circuits formed a hierarchical, inter-dependent system wherein thalamic modulation of input to premotor cortex by fronto-striatal regions led to response suppression. Adolescents differed from adults in the degree of network engagement, regional fronto-striatal-thalamic connectivity, and network dynamics. We identify and characterize several age-related differences in the function of neural circuits that are associated with behavioral performance changes across adolescent development.

  6. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for generating the low-dimensional projection. We illustrate these results with the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
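The projection step can be sketched with linear PCA on synthetic data that genuinely lies near a one-dimensional manifold (the paper's projections and data are more general; the (t, 2t, 3t) manifold, noise level, and power-iteration PCA here are assumptions for illustration):

```python
import math
import random

random.seed(1)

# Synthetic "high-dimensional" data near a 1-D manifold:
# points (t, 2t, 3t) plus small noise, for t in [-1, 1]
ts = [i / 50.0 for i in range(-50, 51)]
data = [(t + random.gauss(0, 0.01),
         2 * t + random.gauss(0, 0.01),
         3 * t + random.gauss(0, 0.01)) for t in ts]

# Covariance matrix (the data is zero-mean by construction, up to noise)
n = len(data)
cov = [[sum(p[i] * p[j] for p in data) / n for j in range(3)] for i in range(3)]

# Power iteration finds the leading principal direction
v = [1.0, 1.0, 1.0]
for _ in range(100):
    v = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = math.sqrt(sum(c * c for c in v))
    v = [c / norm for c in v]

# Project each 3-D point onto the 1-D principal subspace; a network trained
# on this single coordinate faces a 1-D, not 3-D, approximation problem
proj = [sum(p[i] * v[i] for i in range(3)) for p in data]
```

The recovered direction is (up to sign) proportional to (1, 2, 3), so the projected coordinate parametrizes the manifold, which is exactly what makes the subsequent low-dimensional function approximation easier.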

  7. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Typically, many techniques are used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  8. Satisfiability of logic programming based on radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged; Choon, Ong Hong

    2014-01-01

In this paper, we propose a new technique to test the satisfiability of propositional logic programs and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic that has exactly three variables in each clause. We used the Prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean of the sum of squared errors is used to measure the performance of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to many problems, such as electronic circuits and NP-complete problems.
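The generic RBF pipeline the authors build on (K-means for the hidden centers, then fitting only the linear output weights) can be sketched as follows. This is a plain 1-D regression illustration: the target function, width, and gradient-descent output training are assumptions, standing in for the paper's Prey-predator algorithm and logic-programming encoding.

```python
import math
import random

random.seed(2)

# 1-D regression data standing in for the network's training set
xs = [i / 20.0 for i in range(-40, 41)]
ys = [math.exp(-x * x) * math.cos(3 * x) for x in xs]

# Step 1: K-means chooses the RBF centers
K = 6
centers = random.sample(xs, K)
for _ in range(20):
    groups = [[] for _ in range(K)]
    for x in xs:
        j = min(range(K), key=lambda k: abs(x - centers[k]))
        groups[j].append(x)
    centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]

# Step 2: Gaussian basis functions with a fixed (assumed) width
sigma = 0.5
def phi(x, c):
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# Step 3: train only the linear output weights
# (gradient descent here, where the paper uses the Prey-predator algorithm)
w = [0.0] * K
b = 0.0
lr = 0.05

def predict(x):
    return sum(wk * phi(x, c) for wk, c in zip(w, centers)) + b

before = sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
for _ in range(300):
    for x, y in zip(xs, ys):
        err = predict(x) - y
        for k in range(K):
            w[k] -= lr * err * phi(x, centers[k])
        b -= lr * err
after = sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

The mean of the sum of squared errors (`after`) is exactly the quantity the paper uses to compare the two training algorithms.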

  9. Satisfiability of logic programming based on radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

In this paper, we propose a new technique to test the satisfiability of propositional logic programs and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic that has exactly three variables in each clause. We used the Prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean of the sum of squared errors is used to measure the performance of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to many problems, such as electronic circuits and NP-complete problems.

  10. Pattern classification and recognition of invertebrate functional groups using self-organizing neural networks.

    Science.gov (United States)

    Zhang, WenJun

    2007-07-01

Self-organizing neural networks can be used to mimic non-linear systems. The main objective of this study is to perform pattern classification and recognition on sampling information using two self-organizing neural network models. Invertebrate functional groups sampled in the irrigated rice field were classified and recognized using one-dimensional self-organizing map and self-organizing competitive learning neural networks. Comparisons between neural network models, distance (similarity) measures, and numbers of neurons were conducted. The results showed that both the self-organizing map and self-organizing competitive learning neural network models were effective in pattern classification and recognition of sampling information. Overall, the performance of the one-dimensional self-organizing map neural network was better than that of the self-organizing competitive learning neural network. The number of neurons could determine the number of classes in the classification. Different neural network models with various distance (similarity) measures yielded similar classifications, with some differences depending on the specific network structure. The pattern of an unrecognized functional group was recognized with the self-organizing neural network. A relatively consistent classification indicated that the following invertebrate functional groups were classified into the same group: terrestrial blood sucker; terrestrial flyer; tourist (nonpredatory species with no known functional role other than as prey in ecosystem); gall former; collector (gatherer, deposit feeder); predator and parasitoid; leaf miner; idiobiont (acarine ectoparasitoid); and that the following invertebrate functional groups were classified into another group: external plant feeder; terrestrial crawler, walker, jumper or hunter; neustonic (water surface) swimmer (semi-aquatic). It was concluded that reliable conclusions could be drawn from comparisons of different neural network models that use different distance
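The competitive-learning variant used in the study can be sketched in a few lines: prototype neurons compete for each sample and the winner moves toward it, so the number of neurons fixes the number of classes. The 2-D synthetic data, two-neuron network, and learning rate here are hypothetical; a full SOM would additionally update the winner's topological neighbors.

```python
import random

random.seed(3)

# Two well-separated groups of 2-D "sampling information" vectors
data = ([(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(30)] +
        [(random.gauss(3, 0.1), random.gauss(3, 0.1)) for _ in range(30)])

# Two competing neurons; each weight vector acts as a class prototype
neurons = [list(p) for p in random.sample(data, 2)]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

lr = 0.2
for _ in range(50):
    for x in data:
        # winner-take-all: move only the closest neuron toward the sample
        win = min(range(2), key=lambda k: dist2(neurons[k], x))
        neurons[win][0] += lr * (x[0] - neurons[win][0])
        neurons[win][1] += lr * (x[1] - neurons[win][1])

# each sample's class is the index of its nearest neuron
labels = [min(range(2), key=lambda k: dist2(neurons[k], x)) for x in data]
```

Swapping `dist2` for another distance (similarity) measure changes the assignments only marginally on well-separated data, which mirrors the study's finding that different measures yielded similar classifications.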

  11. Lyapunov Functions to Caputo Fractional Neural Networks with Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Ravi Agarwal

    2018-05-01

One of the main properties of solutions of nonlinear Caputo fractional neural networks is stability, and often the direct Lyapunov method is used to study stability properties (usually these Lyapunov functions do not depend on the time variable). In connection with the Lyapunov fractional method, we present a brief overview of the most popular fractional-order derivatives of Lyapunov functions for Caputo fractional delay differential equations. These derivatives are applied to various types of neural networks with variable coefficients and time-varying delays. We show that quadratic Lyapunov functions and their Caputo fractional derivatives are not applicable in some cases when one studies stability properties. Some sufficient conditions are obtained for stability of the equilibrium of nonlinear Caputo fractional neural networks with time-dependent transmission delays, time-varying self-regulating parameters of all units, and time-varying functions of the connection between two neurons in the network. The cases of time-varying Lipschitz coefficients as well as non-Lipschitz activation functions are studied. We illustrate our theory on particular nonlinear Caputo fractional neural networks.
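For orientation, the Caputo fractional derivative of order q underlying this analysis, and the standard estimate used with quadratic Lyapunov functions, are textbook material (not results of this paper):

```latex
{}^{C}\!D^{q} x(t) \;=\; \frac{1}{\Gamma(1-q)} \int_{0}^{t} \frac{x'(s)}{(t-s)^{q}}\, ds ,
\qquad 0 < q < 1 ,
\qquad\text{and}\qquad
{}^{C}\!D^{q}\, x^{2}(t) \;\le\; 2\, x(t)\, {}^{C}\!D^{q} x(t) .
```

The second inequality is the usual tool that makes quadratic Lyapunov functions tractable for Caputo systems; the paper's point is that this route fails in some delay settings, motivating the alternative derivatives it surveys.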

  12. Radial basis function neural network in fault detection of automotive ...

    African Journals Online (AJOL)

    Radial basis function neural network in fault detection of automotive engines. ... Five faults have been simulated on the MVEM, including three sensor faults, one component fault and one actuator fault. The three sensor faults ... Keywords: Automotive engine, independent RBFNN model, RBF neural network, fault detection

  13. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

Problem statement. Despite their success story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  14. Individual Identification Using Functional Brain Fingerprint Detected by Recurrent Neural Network.

    Science.gov (United States)

    Chen, Shiyang; Hu, Xiaoping P

    2018-03-20

Individual identification based on brain function has gained traction in the literature. Investigating individual differences in brain function can provide additional insights into the brain. In this work, we introduce a recurrent neural network based model for identifying individuals based on only a short segment of resting state functional MRI data. In addition, we demonstrate how the global signal and differences in atlases affect individual identifiability. Furthermore, we investigate neural network features that exhibit the uniqueness of each individual. The results indicate that our model is able to identify individuals based on neural features and provides additional information regarding brain dynamics.

  15. Polarized DIS Structure Functions from Neural Networks

    International Nuclear Information System (INIS)

    Del Debbio, L.; Guffanti, A.; Piccione, A.

    2007-01-01

We present a parametrization of polarized Deep-Inelastic-Scattering (DIS) structure functions based on neural networks. The parametrization provides a bias-free determination of the probability measure in the space of structure functions, which retains information on experimental errors and correlations. As an example we discuss the application of this method to the study of the structure function g_1^p(x, Q^2).

  16. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output and, possibly, gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
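The "sets of linear systems" idea can be sketched in its simplest form: with the hidden layer's input weights fixed, exact matching of an input-output training set reduces to one linear system for the output weights. This is a simplified illustration of the algebraic-training flavor, not the authors' four algorithms; the sigmoid units, weight ranges, and training pairs are assumptions.

```python
import math
import random

random.seed(4)

# Training pairs to be matched exactly
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
N = len(xs)

# Fixed hidden layer: N sigmoid units with preset input weights and biases
a = [random.uniform(-4, 4) for _ in range(N)]
b = [random.uniform(-4, 4) for _ in range(N)]

def sig(t):
    return 1 / (1 + math.exp(-t))

# Hidden activation matrix S[i][j] = sigmoid(a_j * x_i + b_j)
S = [[sig(a[j] * x + b[j]) for j in range(N)] for x in xs]

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for the square system A w = rhs."""
    A = [row[:] + [r] for row, r in zip(A, rhs)]
    n = len(A)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (A[r][n] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

# The output weights solve the linear weight equations S w = y exactly
w = solve(S, ys)

def predict(x):
    return sum(w[j] * sig(a[j] * x + b[j]) for j in range(N))
```

With as many hidden units as training pairs, `predict` reproduces every pair to numerical precision; no iterative optimization is involved, which is the source of the speed advantage the abstract mentions.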

  17. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  18. Upset Prediction in Friction Welding Using Radial Basis Function Neural Network

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2013-01-01

This paper addresses the upset prediction problem of friction welded joints. Based on finite element simulations of inertia friction welding (IFW), a radial basis function (RBF) neural network was developed initially to predict the final upset for a number of welding parameters. The joint upset predicted by the RBF neural network was compared to validated finite element simulations, producing an error of less than 8.16%, which is reasonable. Furthermore, the effects of initial rotational speed and axial pressure on the upset were investigated in relation to energy conversion with the RBF neural network. The developed RBF neural network was also applied to linear friction welding (LFW) and continuous drive friction welding (CDFW). The correlation coefficients of the RBF predictions for LFW and CDFW were 0.963 and 0.998, respectively, which further suggests that an RBF neural network is an effective method for upset prediction of friction welded joints.

  19. Invertebrate diversity classification using self-organizing map neural network: with some special topological functions

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2014-06-01

In the present study we used a self-organizing map (SOM) neural network to conduct unsupervised clustering of invertebrate orders in a rice field. Four topological functions, i.e., cossintopf, sincostopf, acossintopf, and expsintopf, built on the template in the toolbox of Matlab, were used in SOM neural network learning. Results showed that the clusters differed when different topological functions were used, because different topological functions generate different spatial structures of neurons in the neural network. We may choose among these functions and results based on comparison with the practical situation.

  20. New results for global exponential synchronization in neural networks via functional differential inclusions.

    Science.gov (United States)

    Wang, Dongshu; Huang, Lihong; Tang, Longkun

    2015-08-01

This paper is concerned with the synchronization dynamical behaviors of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state feedback controllers are designed such that the neural network model can realize exponential complete synchronization, in view of functional differential inclusion theory, the Lyapunov functional method and inequality techniques. The proposed results are very easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results.

  1. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
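The single step operator itself is easy to state directly: applied to an interpretation I, it returns the heads of all clauses whose bodies are true under I, and iterating it can reach a fixed point, which is what the recurrent network approximates numerically. A minimal sketch with a hypothetical three-clause program:

```python
# A normal logic program as clauses: (head, positive body atoms, negated body atoms)
program = [
    ("a", [], []),        # fact: a.
    ("b", ["a"], []),     # b :- a.
    ("c", ["b"], ["d"]),  # c :- b, not d.
]

def tp(interpretation):
    """Single step operator T_p: heads of clauses whose bodies hold under I."""
    return {head
            for head, pos, neg in program
            if all(p in interpretation for p in pos)
            and all(n not in interpretation for n in neg)}

# Iterate T_p from the empty interpretation until the fixed point is reached,
# mirroring what the recurrent RBF network does
I = set()
while True:
    nxt = tp(I)
    if nxt == I:
        break
    I = nxt
```

For this program the iteration settles on {a, b, c}: a is a fact, a supports b, and b together with the absence of d supports c.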

  2. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  3. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  4. A novel approach to error function minimization for feedforward neural networks

    International Nuclear Information System (INIS)

    Sinkus, R.

    1995-01-01

Feedforward neural networks with error backpropagation are widely applied to pattern recognition. One general problem encountered with this type of neural network is the uncertainty of whether the minimization procedure has converged to a global minimum of the cost function. To overcome this problem, a novel approach to minimizing the error function is presented. It allows one to monitor the approach to the global minimum and, as an outcome, several ambiguities related to the choice of free parameters of the minimization procedure are removed. (orig.)

  5. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  6. Exponential stability of Cohen-Grossberg neural networks with a general class of activation functions

    International Nuclear Information System (INIS)

    Wan Anhua; Wang Miansen; Peng Jigen; Qiao Hong

    2006-01-01

In this Letter, the dynamics of the Cohen-Grossberg neural network model are investigated. The activation functions are only assumed to be Lipschitz continuous, which provides a much wider application domain for neural networks than previous results. By means of the extended nonlinear measure approach, new and relaxed sufficient conditions for the existence, uniqueness and global exponential stability of the equilibrium of the neural networks are obtained. Moreover, an estimate for the exponential convergence rate of the neural networks is precisely characterized. Our results improve on existing ones.

  7. Modelling and prediction for chaotic fir laser attractor using rational function neural network.

    Science.gov (United States)

    Cho, S

    2001-02-01

Many real-world systems, such as irregular ECG signals, the volatility of currency exchange rates and heated fluid reactions, exhibit the highly complex nonlinear characteristic known as chaos. These chaotic systems cannot be treated satisfactorily using linear system theory due to their high dimensionality and irregularity. This research focuses on prediction and modelling of a chaotic FIR (Far InfraRed) laser system for which the underlying equations are not given. This paper proposes a method for predicting and modelling a chaotic FIR laser time series using a rational function neural network. Three network architectures, the TDNN (Time Delayed Neural Network), the RBF (radial basis function) network and the RF (rational function) network, are also presented. Comparisons of these networks' performance show the improvements introduced by the RF network in terms of reduced network complexity and better predictive ability.
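The one-step-ahead prediction task can be sketched on a toy chaotic series. The logistic map below is a stand-in for the FIR laser data (which is not reproduced here), and the polynomial predictor is a simplified stand-in for the rational function network; both are assumptions for illustration.

```python
# A chaotic series from the logistic map x' = r*x*(1-x), r = 3.9
x = [0.4]
for _ in range(300):
    x.append(3.9 * x[-1] * (1 - x[-1]))

# One-step-ahead predictor using polynomial features of the previous value,
# trained by plain least-mean-squares
w = [0.0, 0.0]  # weights for [x_prev, x_prev^2]
b = 0.0
lr = 0.1

def pred(a):
    return b + w[0] * a + w[1] * a * a

for _ in range(200):
    for t in range(1, len(x)):
        err = pred(x[t - 1]) - x[t]
        b -= lr * err
        w[0] -= lr * err * x[t - 1]
        w[1] -= lr * err * x[t - 1] ** 2

mse = sum((pred(x[t - 1]) - x[t]) ** 2 for t in range(1, len(x))) / (len(x) - 1)
```

The logistic map happens to be exactly quadratic in its previous value, so this tiny model predicts it almost perfectly; for real laser data, richer time-delay embeddings and the rational nonlinearity of the RF network take the place of these two features.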

  8. Nonlinear transfer function encodes synchronization in a neural network from the mammalian brain.

    Science.gov (United States)

    Menendez de la Prida, L; Sanchez-Andres, J V

    1999-09-01

Synchronization is one of the mechanisms by which the brain encodes information. The observed synchronization of neuronal activity has, however, several levels of fluctuation, which presumably regulate local features of specific areas. This means that biological neural networks should have an intrinsic mechanism able to synchronize the neuronal activity but also to preserve the firing capability of individual cells. Here, we investigate the input-output relationship of a biological neural network from the developing mammalian brain, i.e., the hippocampus. We show that the probability of occurrence of synchronous output activity (which consists of stereotyped population bursts recorded throughout the hippocampus) is encoded by a sigmoidal transfer function of the input frequency. Under this scheme, low-frequency inputs will not produce any coherent output, while high-frequency inputs will determine a synchronous pattern of output activity (population bursts). We analyze the effect of the network size (N) on the parameters of the transfer function (threshold and slope). We found that sigmoidal functions realistically simulate the synchronous output activity of hippocampal neural networks. This outcome is particularly important in the application of results from neural network models to neurobiology.
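The sigmoidal transfer function described above has a simple parametric form; the threshold and slope values below are hypothetical stand-ins for the fitted parameters (which in the paper depend on network size N):

```python
import math

# Hypothetical transfer-function parameters: threshold (Hz) and slope
theta, k = 10.0, 2.0

def burst_probability(freq):
    """Sigmoidal input-output transfer: P(synchronous population burst | input frequency)."""
    return 1 / (1 + math.exp(-(freq - theta) / k))
```

Inputs well below the threshold yield almost no coherent output, inputs well above it almost always trigger a population burst, and the slope k sets how sharp that transition is.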

  9. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  10. Application of Functional Link Artificial Neural Network for Prediction of Machinery Noise in Opencast Mines

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Nanda

    2011-01-01

    Full Text Available Functional link-based neural network models were applied to predict the noise of opencast mining machinery. The paper analyzes the prediction capabilities of functional link neural network based noise prediction models vis-à-vis existing statistical models. In order to find the actual noise status in opencast mines, some of the popular noise prediction models, for example, ISO-9613-2, CONCAWE, VDI, and ENM, have been applied in mining and allied industries to predict machinery noise by considering various attenuation factors. The functional link artificial neural network (FLANN), polynomial perceptron network (PPN), and Legendre neural network (LeNN) were used to predict the machinery noise in opencast mines. The case study is based on data collected from an opencast coal mine of Orissa, India. From the present investigations, it can be concluded that the FLANN model gives better noise prediction than the PPN and LeNN models.
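
A FLANN has no hidden layer: it applies a fixed functional expansion to each input and trains only a single linear layer on the expanded features. The trigonometric expansion below is one common choice and is only a sketch under that assumption; the weights are placeholders, not values from the study:

```python
import math

def functional_expansion(x):
    """Trigonometric functional expansion used in FLANN-style models:
    a scalar input x is mapped to [x, sin(pi x), cos(pi x),
    sin(2 pi x), cos(2 pi x)]."""
    return [x,
            math.sin(math.pi * x), math.cos(math.pi * x),
            math.sin(2 * math.pi * x), math.cos(2 * math.pi * x)]

def flann_output(x, weights, bias):
    # Single linear layer over the expanded features; only `weights`
    # and `bias` are trainable.
    feats = functional_expansion(x)
    return bias + sum(w * f for w, f in zip(weights, feats))
```

Because the nonlinearity lives entirely in the fixed expansion, training reduces to fitting a linear model, which is what makes FLANN-type predictors cheap compared with multilayer networks.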

  11. A Squeezed Artificial Neural Network for the Symbolic Network Reliability Functions of Binary-State Networks.

    Science.gov (United States)

    Yeh, Wei-Chang

    Network reliability is an important index to the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in the ANN to evaluate SNRFs. According to the experimental results on the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach, with at least a 16.6% improvement in the median absolute deviation at the cost of an extra 2 s on average for all experiments.
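
The Monte Carlo step that SqANN builds on — estimating the reliability of a binary-state network by sampling which links survive — can be sketched as follows. The bridge network and the edge reliability p are illustrative, not taken from the paper's benchmarks:

```python
import random

def mc_reliability(edges, p, source, sink, samples=20000, seed=0):
    """Crude Monte Carlo estimate of two-terminal network reliability:
    each edge works independently with probability p; we estimate the
    probability that `source` and `sink` remain connected."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        up = [e for e in edges if rng.random() < p]
        # depth-first search over the surviving edges
        adj = {}
        for u, v in up:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        seen, stack = {source}, [source]
        while stack:
            node = stack.pop()
            for nb in adj.get(node, []):
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        hits += sink in seen
    return hits / samples

# A classic 5-edge bridge network between nodes 0 and 3:
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
estimate = mc_reliability(edges, p=0.9, source=0, sink=3)
```

In the SqANN pipeline, estimates like this one would label the training points of the designed matrix before the ANN is fitted to evaluate the symbolic reliability function.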

  12. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  13. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement for building a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in the establishment of initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Niche-dependent development of functional neuronal networks from embryonic stem cell-derived neural populations

    Directory of Open Access Journals (Sweden)

    Siebler Mario

    2009-08-01

    Full Text Available Abstract Background The present work was performed to investigate the ability of two different embryonic stem (ES) cell-derived neural precursor populations to generate functional neuronal networks in vitro. The first ES cell-derived neural precursor population was cultivated as free-floating neural aggregates, which are known to form a developmental niche comprising different types of neural cells, including neural precursor cells (NPCs), progenitor cells, and even further matured cells. This niche by itself provides a variety of different growth factors and extracellular matrix proteins that influence the proliferation and differentiation of neural precursor and progenitor cells. The second population was cultivated adherently in monolayer cultures to control the extracellular environment most stringently. This population comprises highly homogeneous NPCs, which are supposed to represent an attractive way to provide well-defined neuronal progeny. However, the ability of these different ES cell-derived immature neural cell populations to generate functional neuronal networks has not been assessed so far. Results While both precursor populations were shown to differentiate into sufficient quantities of mature NeuN+ neurons that also express GABA or vesicular-glutamate-transporter-2 (vGlut2), only aggregate-derived neuronal populations exhibited synchronously oscillating network activity 2–4 weeks after initiating the differentiation, as detected by microelectrode array technology. Neurons derived from homogeneous NPCs within monolayer cultures merely showed uncorrelated spiking activity, even when differentiated for up to 12 weeks. We demonstrated that these neurons exhibited sparsely ramified neurites and an embryonic vGlut2 distribution, suggesting an inhibited terminal neuronal maturation. 
In comparison, neurons derived from heterogeneous populations within neural aggregates appeared as fully mature with a dense neurite network and punctuated

  15. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It therefore naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  16. Application of RBF neural network improved by peak density function in intelligent color matching of wood dyeing

    International Nuclear Information System (INIS)

    Guan, Xuemei; Zhu, Yuren; Song, Wenlong

    2016-01-01

    According to the characteristics of wood dyeing, we propose a predictive model of pigment formula for wood dyeing based on a Radial Basis Function (RBF) neural network. In practical application, however, it is found that the number of neurons in the hidden layer of the RBF neural network is difficult to determine. In general, we need to test several times according to experience and prior knowledge, which lacks a strict, theoretically grounded design procedure, and we also do not know whether the RBF neural network will converge. This paper proposes a peak density function to determine the number of neurons in the hidden layer. In contrast to existing approaches, the centers and the widths of the radial basis functions are initialized by extracting the features of the samples, so the uncertainty caused by random numbers when initializing the training parameters and the topology of the RBF neural network is eliminated. The average relative error of the original RBF neural network is 1.55% in 158 epochs. However, the average relative error of the RBF neural network improved by the peak density function is only 0.62% in 50 epochs. Therefore, the convergence rate and approximation precision of the RBF neural network are improved significantly.
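
The forward pass of the Gaussian RBF network at the core of this model can be sketched as follows. Per the paper's idea, the centers and widths would be initialized from features of the training samples rather than at random; in this illustration they are simply fixed by hand:

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Output of a 1-D Gaussian RBF network: a weighted sum of
    Gaussian bumps, one per hidden neuron."""
    out = bias
    for c, s, w in zip(centers, widths, weights):
        out += w * math.exp(-((x - c) ** 2) / (2 * s ** 2))
    return out

# Hypothetical network with three hidden neurons (placeholder values):
centers = [0.0, 0.5, 1.0]
widths = [0.2, 0.2, 0.2]
weights = [1.0, -0.5, 2.0]
y = rbf_forward(0.0, centers, widths, weights)
```

Near x = 0 the first bump dominates, so the output is close to its weight of 1.0; choosing how many such bumps to use is exactly the hidden-layer sizing problem the peak density function addresses.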

  17. One-way hash function based on hyper-chaotic cellular neural network

    International Nuclear Information System (INIS)

    Yang Qunting; Gao Tiegang

    2008-01-01

    The design of an efficient one-way hash function with good performance is a hot spot in modern cryptography research. In this paper, a hash function construction method based on a cellular neural network with hyper-chaos characteristics is proposed. First, a chaotic sequence is obtained by iterating the cellular neural network with the Runge–Kutta algorithm, and then the chaotic sequence is iterated with the message. The hash code is obtained through the corresponding transform of the latter chaotic sequence. Simulation and analysis demonstrate that the new method has the merits of convenience, high sensitivity to initial values, good hash performance, and especially strong stability. (general)
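
The general construction — iterate a chaotic system, perturb its trajectory with the message, and read the hash off the trajectory — can be illustrated with a toy sketch. The paper uses a hyper-chaotic cellular neural network integrated with Runge–Kutta; here the simple logistic map stands in as the chaotic system, so this is an assumption-laden illustration of the idea, not the authors' scheme, and it is not cryptographically secure:

```python
def chaotic_hash(message, rounds=8):
    """Toy hash by chaotic iteration: each message byte perturbs the
    state of the logistic map x -> 4x(1-x); after a few iterations a
    byte of the trajectory is folded into a 128-bit digest."""
    x = 0.367879  # fixed initial value; sensitivity to it is the point
    digest = 0
    for byte in message.encode("utf-8"):
        # perturb the trajectory with the message byte
        x = (x + byte / 255.0) % 1.0 or 0.5
        for _ in range(rounds):
            x = 4.0 * x * (1.0 - x)
        digest = (digest << 8 | int(x * 255)) & (2 ** 128 - 1)
    return digest
```

The desirable properties the abstract lists map onto this sketch: determinism gives reproducible hash codes, while the chaotic map supplies the high sensitivity to initial values and to the message.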

  18. Multi scale analysis of a function by neural networks elementary derivatives functions

    International Nuclear Information System (INIS)

    Chikhi, A.; Gougam, A.; Chafa, F.

    2006-01-01

    Recently, the wavelet network has been introduced as a special neural network supported by wavelet theory. Such networks constitute a tool for function approximation problems, as has already been proved in the reference. Our present work deals with this model, treating the multi-scale analysis of a function. We have used a linear expansion of a given function in wavelets, neglecting the usual translation parameters. We investigate two training operations: the first consists of an optimization of the output synaptic layer; the second optimizes the output function with respect to the scale parameters. We notice a temporary merging of the scale parameters leading to some interesting results: new elementary derivative units emerge, representing a new elementary task, which is the derivative of the output task

  19. Reconfigurable Flight Control Design using a Robust Servo LQR and Radial Basis Function Neural Networks

    Science.gov (United States)

    Burken, John J.

    2005-01-01

    This viewgraph presentation reviews the use of a Robust Servo Linear Quadratic Regulator (LQR) and a Radial Basis Function (RBF) Neural Network in reconfigurable flight control designs that adapt to an aircraft part failure. The method uses a robust LQR servomechanism design with model reference adaptive control and RBF neural networks. During the failure the LQR servomechanism behaved well, and using the neural networks improved the tracking.

  20. Artificial neural networks in NDT

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed

    2001-01-01

    Artificial neural networks, simply known as neural networks, have attracted considerable interest in recent years, largely because of a growing recognition of the potential of these computational paradigms as powerful alternative models to conventional pattern recognition or function approximation techniques. The neural networks approach is having a profound effect on almost all fields and has been utilised in fields where experimental inter-disciplinary work is being carried out. Being a multidisciplinary subject with a broad knowledge base, Nondestructive Testing (NDT) or Nondestructive Evaluation (NDE) is no exception. This paper explains typical applications of neural networks in NDT/NDE. Three promising types of neural networks are highlighted, namely, back-propagation, binary Hopfield, and Kohonen's self-organising maps. (Author)

  1. Transiently chaotic neural networks with piecewise linear output functions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, S.-S. [Department of Mathematics, National Taiwan Normal University, Taipei, Taiwan (China); Shih, C.-W. [Department of Applied Mathematics, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan (China)], E-mail: cwshih@math.nctu.edu.tw

    2009-01-30

    Admitting both a transient chaotic phase and a convergent phase, the transiently chaotic neural network (TCNN) provides superior performance to the classical networks in solving combinatorial optimization problems. We derive concrete parameter conditions for these two essential dynamic phases of the TCNN with a piecewise linear output function. The confirmation of chaotic dynamics in the system results from a successful application of the Marotto theorem, which was recently clarified. Numerical simulation applying the TCNN with a piecewise linear output function is carried out to find the optimal solution of a travelling salesman problem. It is demonstrated that the performance is even better than that of the previous TCNN model with a logistic output function.

  2. Radial basis function (RBF) neural network control for mechanical systems design, analysis and Matlab simulation

    CERN Document Server

    Liu, Jinkun

    2013-01-01

    Radial Basis Function (RBF) Neural Network Control for Mechanical Systems is motivated by the need for systematic design approaches to stable adaptive control system design using neural network approximation-based techniques. The main objectives of the book are to introduce the concrete design methods and MATLAB simulation of stable adaptive RBF neural control strategies. In this book, a broad range of implementable neural network control design methods for mechanical systems are presented, such as robot manipulators, inverted pendulums, single-link flexible-joint robots, motors, etc. Advanced neural network controller design methods and their stability analysis are explored. The book provides readers with the fundamentals of neural network control system design.   This book is intended for researchers in the fields of neural adaptive control, mechanical systems, Matlab simulation, engineering design, robotics and automation. Jinkun Liu is a professor at Beijing University of Aeronautics and Astronauti...

  3. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
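
The constrained least absolute deviation application mentioned above can be illustrated in miniature. The sketch below is not the authors' network model; it is a simple Euler simulation of gradient-like dynamics for the 1-D piecewise-linear cost sum_i |x - b_i|, where the hard-limiting sign activation plays the role of the subgradient and the equilibrium is the median of the data:

```python
def solve_lad(points, steps=2000, lr=0.01):
    """Euler simulation of dx/dt = -sum_i sign(x - b_i), a
    recurrent-network-style flow that minimizes the piecewise-linear
    cost sum_i |x - b_i|; its minimizer is the median of `points`."""
    def sign(v):
        # hard-limiting activation (subgradient of |v|)
        return (v > 0) - (v < 0)
    x = 0.0
    for _ in range(steps):
        x -= lr * sum(sign(x - b) for b in points)
    return x

x_star = solve_lad([1.0, 2.0, 10.0])  # converges near the median, 2.0
```

Because the update uses only signs, the state moves at a bounded rate toward the median and then chatters within one step size of it, which loosely mirrors the finite-time convergence property the abstract emphasizes.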

  4. A prediction method for the wax deposition rate based on a radial basis function neural network

    Directory of Open Access Journals (Sweden)

    Ying Xie

    2017-06-01

    Full Text Available The radial basis function neural network is a popular supervised learning tool based on machine learning technology. Its high precision having been proven, the radial basis function neural network has been applied in many areas. The accumulation of deposited materials in the pipeline may lead to the need for increased pumping power, a decreased flow rate, or even the total blockage of the line, with losses of production and capital investment, so research on predicting the wax deposition rate is significant for the safe and economical operation of an oil pipeline. This paper adopts the radial basis function neural network to predict the wax deposition rate by considering four main influencing factors, selected by the gray correlational analysis method: the pipe wall temperature gradient, the pipe wall wax crystal solubility coefficient, the pipe wall shear stress, and the crude oil viscosity. MATLAB software is employed to establish the RBF neural network. Compared with the previous literature, favorable consistency exists between the predicted outcomes and the experimental results, with a relative error of 1.5%. It can be concluded that the prediction method for the wax deposition rate based on the RBF neural network is feasible.

  5. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451
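
The equilibrium energy that Hopfield introduced for symmetrically connected networks, which this abstract contrasts with the nonequilibrium landscape-flux picture, can be written down directly. The two-neuron weights below are an illustrative toy, not from the paper:

```python
def hopfield_energy(state, W, theta=None):
    """Hopfield's energy for a symmetric network:
    E = -1/2 * sum_ij W[i][j] s_i s_j + sum_i theta_i s_i.
    With symmetric W, asynchronous updates can only lower E, so
    stored memories sit at the bottoms of energy basins."""
    n = len(state)
    theta = theta or [0.0] * n
    e = 0.0
    for i in range(n):
        for j in range(n):
            e -= 0.5 * W[i][j] * state[i] * state[j]
        e += theta[i] * state[i]
    return e

# Two neurons with symmetric excitatory coupling: the aligned state
# has lower energy than the anti-aligned one.
W = [[0.0, 1.0], [1.0, 0.0]]
e_aligned = hopfield_energy([1, 1], W)
e_opposed = hopfield_energy([1, -1], W)
```

The abstract's point is that this picture breaks down for asymmetric W: no such Lyapunov energy exists in general, which is why the authors add a flux component on top of the landscape gradient.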

  6. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.

  7. Functional Stem Cell Integration into Neural Networks Assessed by Organotypic Slice Cultures.

    Science.gov (United States)

    Forsberg, David; Thonabulsombat, Charoensri; Jäderstad, Johan; Jäderstad, Linda Maria; Olivius, Petri; Herlenius, Eric

    2017-08-14

    Re-formation or preservation of functional, electrically active neural networks has been proffered as one of the goals of stem cell-mediated neural therapeutics. A primary issue for a cell therapy approach is the formation of functional contacts between the implanted cells and the host tissue. Therefore, it is of fundamental interest to establish protocols that allow us to delineate a detailed time course of grafted stem cell survival, migration, differentiation, integration, and functional interaction with the host. One option for in vitro studies is to examine the integration of exogenous stem cells into an existing active neural network in ex vivo organotypic cultures. Organotypic cultures leave the structural integrity essentially intact while still allowing the microenvironment to be carefully controlled. This allows detailed studies over time of cellular responses and cell-cell interactions, which are not readily performed in vivo. This unit describes procedures for using organotypic slice cultures as ex vivo model systems for studying neural stem cell and embryonic stem cell engraftment and communication with CNS host tissue. © 2017 by John Wiley & Sons, Inc.

  8. SYNTHESIS AND REDUCED LOGIC GATE REALIZATION OF MULTI-VALUED LOGIC FUNCTIONS USING NEURAL NETWORK DEPLOYMENT ALGORITHM

    Directory of Open Access Journals (Sweden)

    A. K. CHOWDHURY

    2016-02-01

    Full Text Available In this paper an evolutionary technique for synthesizing Multi-Valued Logic (MVL) functions using the Neural Network Deployment Algorithm (NNDA) is presented. The algorithm is combined with back-propagation learning capability and neural MVL operators. This research observes the anomalous characteristics of MVL neural operators and their role in synthesis. The advantages of the NNDA-MVL algorithm are demonstrated by the realization of synthesized many-valued functions with fewer MVL operators. The characteristic feature set consists of MVL gate count, network link count, network propagation delay, and accuracy achieved in training. In brief, this paper depicts an effort toward reduced network size for synthesized MVL functions. Trained MVL operators improve the basic architecture by reducing MIN gates and interlink connections by 52.94% and 23.38%, respectively.

  9. Application of neural network to CT

    International Nuclear Information System (INIS)

    Ma, Xiao-Feng; Takeda, Tatsuoki

    1999-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multilayer neural network. Multilayer neural networks are extensively investigated and practically applied to the solution of various problems such as inverse problems or time series prediction problems. By learning an input-output mapping from a set of examples, neural networks can be regarded as synthesizing an approximation of a multidimensional function (that is, solving the problem of hypersurface reconstruction, including smoothing and interpolation). From this viewpoint, neural networks are well suited to the solution of CT image reconstruction. Though the conventionally used objective function of a neural network is composed of a sum of squared errors of the output data, we can define an objective function composed of a sum of residues of an integral equation. By employing an appropriate line integral for this integral equation, we can construct a neural network that can be used for CT. We applied this method to some model problems and obtained satisfactory results. As it is not necessary to discretize the integral equation with this reconstruction method, its application to problems of complicated geometrical shapes is also feasible. Moreover, in neural networks, interpolation is performed quite smoothly; as a result, inverse mapping can be achieved smoothly even in cases that include experimental and numerical errors. However, use of the conventional back propagation technique for optimization leads to an expensive computation cost. To overcome this drawback, 2nd-order optimization methods or parallel computing will be applied in the future. (J.P.N.)

  10. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuroscience. It draws its methods in large degree from statistical physics, and its potential applications lie mainly in computer science and engineering. Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives a historical presentation of the development of neural networks and of the interest in performing complex tasks. Then, an exhaustive overview of data management and network computation methods is given: supervised learning and the associative memory problem, the capacity of networks, Perceptron networks, functional link networks, Madaline (Multiple Adalines) networks, back-propagation networks, reduced Coulomb energy (RCE) networks, unsupervised learning, and competitive learning and vector quantization. An example of application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors at LEAR at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  11. Self-tuning control of a nuclear reactor using a Gaussian function neural network

    International Nuclear Information System (INIS)

    Park, M.G.; Cho, N.Z.

    1995-01-01

    A self-tuning control method is described for a nuclear reactor system that requires only a set of input-output measurements. The use of an artificial neural network in nonlinear model-based adaptive control, both as a plant model and a controller, is investigated. A neural network called a Gaussian function network is used for one-step-ahead predictive control to track the desired plant output. The effectiveness of the controller is demonstrated by the application of the method to the power tracking control of the Korea Multipurpose Research Reactor

  12. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    Science.gov (United States)

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network-based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called the connectome-convolutional neural network (CCNN). Our results on simulated datasets and on a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. It follows from this flexibility that our proposed CCNN model can be easily adapted to a wide range of connectome-based classification or regression tasks by varying which connectivity descriptor combinations are used to train the network.

  13. Could LC-NE-Dependent Adjustment of Neural Gain Drive Functional Brain Network Reorganization?

    Directory of Open Access Journals (Sweden)

    Carole Guedj

    2017-01-01

    Full Text Available The locus coeruleus-norepinephrine (LC-NE) system is thought to act at synaptic, cellular, microcircuit, and network levels to facilitate cognitive functions through at least two different processes, not mutually exclusive. Accordingly, as a reset signal, the LC-NE system could trigger brain network reorganizations in response to salient information in the environment and/or adjust the neural gain within its target regions to optimize behavioral responses. Here, we provide evidence of the co-occurrence of these two mechanisms at the whole-brain level, in resting-state conditions following a pharmacological stimulation of the LC-NE system. We propose that these two mechanisms are interdependent, such that the LC-NE-dependent adjustment of the neural gain inferred from the clustering coefficient could drive functional brain network reorganizations through coherence in the gamma rhythm. Via the temporal dynamics of gamma-range band-limited power, the release of NE could adjust the neural gain, promoting interactions only within the neuronal populations whose amplitude envelopes are correlated, thus making it possible to reorganize neuronal ensembles, functional networks, and, ultimately, behavioral responses. Thus, our proposal offers a unified framework integrating the putative influence of the LC-NE system on both local- and long-range adjustments of brain dynamics underlying behavioral flexibility.

  14. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)
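
    The spike-timing-dependent plasticity mentioned above can be sketched with a toy update rule: a bounded conductance (the synaptic weight) is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise. This is a generic textbook-style model, not a model of any particular memristive device; the parameters a_plus, a_minus, and tau are illustrative assumptions.

```python
import math

def stdp_update(w, dt, a_plus=0.05, a_minus=0.05, tau=20.0):
    """Simplified spike-timing-dependent plasticity rule.

    dt = t_post - t_pre (ms). Pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses; magnitude decays with |dt|.
    """
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, 0.0), 1.0)   # conductance bounded, as in a memristor

w = 0.5
w_pot = stdp_update(w, dt=5.0)    # causal pairing -> potentiation
w_dep = stdp_update(w, dt=-5.0)   # anti-causal pairing -> depression
```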

  15. Multistability of delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions

    International Nuclear Information System (INIS)

    Huang Yu-Jiao; Hu Hai-Gen

    2015-01-01

    In this paper, the multistability issue is discussed for delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions. Based on a fixed point theorem and a stability definition, sufficient criteria are established for the existence and stability of multiple equilibria of complex-valued recurrent neural networks. The number of stable equilibria is larger than that of real-valued recurrent neural networks, which can be used to achieve high-capacity associative memories. One numerical example is provided to show the effectiveness and superiority of the presented results. (paper)

  16. Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing

    2015-05-01

    This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions that ensure that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others unstable. The importance of the derived results is that they reveal that the discontinuous neural networks can have greater storage capacity than the continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located not only in saturated regions but also in unsaturated regions, due to the non-monotonic structure of the discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
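
    The storage-capacity claim can be made concrete with a trivial calculation: for an n-neuron network of this type the construction yields 5^n equilibria, 3^n of them locally stable, so already n = 3 gives 125 equilibria with 27 stable.

```python
def equilibrium_counts(n):
    """Total and locally stable equilibrium counts (5^n, 3^n) for the
    n-dimensional discontinuous networks considered in the paper."""
    return 5 ** n, 3 ** n

total, stable = equilibrium_counts(3)
```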

  17. The neural network approach to parton fitting

    International Nuclear Information System (INIS)

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-01-01

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits

  18. Classification of ion mobility spectra by functional groups using neural networks

    Science.gov (United States)

    Bell, S.; Nazarov, E.; Wang, Y. F.; Eiceman, G. A.

    1999-01-01

    Neural networks were trained using whole ion mobility spectra from a standardized database of 3137 spectra for 204 chemicals at various concentrations. Performance of the network was measured by the success of classification into ten chemical classes. Eleven stages for evaluation of spectra and of spectral pre-processing were employed, and minimums were established for response thresholds and spectral purity. After optimization of the database, network, and pre-processing routines, the fraction of successful classifications by functional group was 0.91 throughout a range of concentrations. Network classification relied on a combination of features, including drift times, number of peaks, relative intensities, and other factors apparently including peak shape. The network was opportunistic, exploiting different features within different chemical classes. Application of neural networks in a two-tier design, where chemicals were first identified by class and then individually, eliminated all but one false positive out of 161 test spectra. These findings establish that ion mobility spectra, even with low resolution instrumentation, contain sufficient detail to permit the development of automated identification systems.

  19. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron, or in performing the next layer of a neural network computation, involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
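
    The substitution described above, a maximum of sums in place of a sum of products, fits in a few lines; the inputs, weights, and thresholds below are arbitrary illustrative values.

```python
def linear_neuron(x, w, theta=0.0):
    """Classical neuron: weighted sum of inputs, then a threshold."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if s >= theta else 0

def morphological_neuron(x, w, theta=0.0):
    """Morphological neuron: maximum of sums replaces the sum of products,
    so the computation is nonlinear even before thresholding."""
    s = max(xi + wi for xi, wi in zip(x, w))
    return 1 if s >= theta else 0

x = [0.2, 0.9, 0.1]
w = [0.5, -0.3, 0.0]
lin = linear_neuron(x, w, theta=0.0)          # sum of products = -0.17
morph = morphological_neuron(x, w, theta=0.5) # max of sums = 0.7
```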

  20. Classifying the molecular functions of Rab GTPases in membrane trafficking using deep convolutional neural networks.

    Science.gov (United States)

    Le, Nguyen-Quoc-Khanh; Ho, Quang-Thai; Ou, Yu-Yen

    2018-06-13

    Deep learning has been increasingly used to solve a number of problems with state-of-the-art performance in a wide variety of fields. In biology, deep learning can be applied to reduce feature extraction time and achieve high levels of performance. In our present work, we apply deep learning via two-dimensional convolutional neural networks and position-specific scoring matrices to classify Rab protein molecules, which are main regulators in membrane trafficking for transferring proteins and other macromolecules throughout the cell. The functional loss of specific Rab molecular functions has been implicated in a variety of human diseases, e.g., choroideremia, intellectual disabilities, cancer. Therefore, creating a precise model for classifying Rabs is crucial in helping biologists understand the molecular functions of Rabs and design drug targets according to such specific human disease information. We constructed a robust deep neural network for classifying Rabs that achieved an accuracy of 99%, 99.5%, 96.3%, and 97.6% for each of four specific molecular functions. Our approach demonstrates superior performance to traditional artificial neural networks. Therefore, from our proposed study, we provide both an effective tool for classifying Rab proteins and a basis for further research that can improve the performance of biological modeling using deep neural networks. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.
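
    The abstract does not detail the coupled neural-network/objective-function representation, so the sketch below is only a hedged stand-in for the general idea it builds on: handling a constraint by minimizing a penalized objective with plain gradient descent. The problem, penalty weight, and step size are illustrative assumptions.

```python
def solve_penalized(lr=0.005, mu=100.0, steps=5000):
    """Minimize f(x) = (x - 3)^2 subject to x <= 1 by gradient descent
    on the quadratic-penalty objective f(x) + mu * max(0, x - 1)^2.
    The penalty solution approaches the constrained optimum x = 1
    as mu grows."""
    x = 0.0
    for _ in range(steps):
        grad = 2.0 * (x - 3.0) + 2.0 * mu * max(0.0, x - 1.0)
        x -= lr * grad
    return x

x_star = solve_penalized()   # lands just above the constraint boundary x = 1
```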

  2. Multiple Time Series Forecasting Using Quasi-Randomized Functional Link Neural Networks

    Directory of Open Access Journals (Sweden)

    Thierry Moudiki

    2018-03-01

    Full Text Available We are interested in obtaining forecasts for multiple time series, by taking into account the potential nonlinear relationships between their observations. For this purpose, we use a specific type of regression model on an augmented dataset of lagged time series. Our model is inspired by dynamic regression models (Pankratz 2012), with the response variable’s lags included as predictors, and is known as a Random Vector Functional Link (RVFL) neural network. RVFL neural networks have been successfully applied in the past to solving regression and classification problems. The novelty of our approach is to apply an RVFL model to multivariate time series, under two separate regularization constraints on the regression parameters.
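
    A minimal RVFL-style forecaster can be sketched in pure Python: fixed random hidden features plus a direct link from the lagged input, with only the output weights trained. The toy series, node count, and learning rate are illustrative assumptions, not the authors' settings, and the regularization constraints of the paper are omitted.

```python
import math
import random

random.seed(0)

def rvfl_features(x, hidden):
    """Direct link (bias + raw lagged value) plus fixed random nonlinear features."""
    return [1.0, x] + [math.tanh(a * x + b) for a, b in hidden]

# Fixed, untrained random hidden nodes -- the hallmark of an RVFL network.
hidden = [(random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)) for _ in range(5)]

# Toy series: x[t+1] = 0.8 * x[t] + a small nonlinearity.
series = [1.0]
for _ in range(60):
    series.append(0.8 * series[-1] + 0.1 * math.sin(series[-1]))

pairs = list(zip(series[:-1], series[1:]))      # (lagged input, target)
w = [0.0] * (2 + len(hidden))                   # only output weights are trained

for _ in range(2000):
    for x, y in pairs:
        z = rvfl_features(x, hidden)
        err = sum(wi * zi for wi, zi in zip(w, z)) - y
        w = [wi - 0.01 * err * zi for wi, zi in zip(w, z)]

def predict(x):
    z = rvfl_features(x, hidden)
    return sum(wi * zi for wi, zi in zip(w, z))
```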

  3. Artificial neural networks contribution to the operational security of embedded systems. Artificial neural networks contribution to fault tolerance of on-board functions in space environment

    International Nuclear Information System (INIS)

    Vintenat, Lionel

    1999-01-01

    A good quality often attributed to artificial neural networks is fault tolerance. In general presentation works, this property is almost always introduced as 'natural', i.e. obtained without any specific precaution during learning. Besides, the space environment is known to be aggressive towards on-board hardware, inducing various abnormal operations. In particular, digital components suffer from the upset phenomenon, i.e. misplaced switches of memory flip-flops. These two observations lead to the question: would neural chips constitute an interesting and robust solution for implementing some on-board functions of spacecraft? First, the various aspects of the problem are detailed: artificial neural networks and their fault tolerance, neural chips, the space environment and the resulting failures. Following this presentation, a particular technique for implementing neural chips is selected because of its simplicity, and especially because it requires few memory flip-flops: random pulse streams. An original method for star recognition inside a field-of-view is then proposed for the on-board 'attitude computation' function. This method relies on a winner-takes-all competition network and on a Kohonen self-organized map. A hardware implementation of these two neural models is then proposed using random pulse streams. Thanks to this realization, difficulties related to that particular implementation technique can be highlighted on the one hand, and a first evaluation of its practical fault tolerance can be carried out on the other. (author) [fr

  4. Global convergence of periodic solution of neural networks with discontinuous activation functions

    International Nuclear Information System (INIS)

    Huang Lihong; Guo Zhenyuan

    2009-01-01

    In this paper, without assuming boundedness and monotonicity of the activation functions, we establish some sufficient conditions ensuring the existence and global asymptotic stability of periodic solutions of neural networks with discontinuous activation functions, by using the Yoshizawa-like theorem and constructing a proper Lyapunov function. The obtained results improve and extend previous works.

  5. Multistability of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2015-11-01

    The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than by ones with Mexican-hat-type activation functions. In addition, unlike most existing multistability results for neural networks with monotonic activation functions, the obtained 3^n locally stable equilibrium points are located in both saturated and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    Science.gov (United States)

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
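
    The Box-Cox transformation at the heart of this model is easy to state; the sketch below implements the standard transform and its inverse (the λ → 0 limit is the natural logarithm). It illustrates the transform itself, not the paper's identification algorithm.

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a positive value y; lam -> 0 gives log(y)."""
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def box_cox_inverse(z, lam):
    """Inverse transform, mapping the transformed value back to y."""
    if abs(lam) < 1e-12:
        return math.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)

z = box_cox(10.0, 0.5)          # 2 * (sqrt(10) - 1)
y = box_cox_inverse(z, 0.5)     # round-trips back to 10
```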

  7. Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets.

    Science.gov (United States)

    Sengupta, Abhronil; Shim, Yong; Roy, Kaushik

    2016-12-01

    Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused on mimicking either the neuron or the synapse functionality alone. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit- and system-level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings of  ∼  100× in comparison to a corresponding digital/analog CMOS neuron implementation.

  8. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network has been introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, however, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic cost and its derivative take continuous values, which makes it possible to exploit the fast convergence of the logarithmic cost. Due to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can thus be considered an efficient way of reducing the training time of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
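
    The abstract does not give the exact logarithmic cost used, so the example below is a standard generic illustration (not the paper's cost function) of why log-type costs converge faster than squared error: for a sigmoid unit, the cross-entropy gradient stays proportional to the error even where the squared-error gradient vanishes in saturation.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def grad_squared(a, t):
    """d/da of 0.5 * (sigmoid(a) - t)^2: vanishes when sigmoid saturates."""
    p = sigmoid(a)
    return (p - t) * p * (1.0 - p)

def grad_log(a, t):
    """d/da of the logarithmic (cross-entropy) cost: remains proportional
    to the error even in the saturated region, so updates stay large."""
    return sigmoid(a) - t

# A badly mis-classified, saturated unit: target 1, large negative input.
a, t = -6.0, 1.0
g_sq, g_log = grad_squared(a, t), grad_log(a, t)
```

    Here the logarithmic-cost gradient is several hundred times larger in magnitude, which is the mechanism behind the reduced iteration counts reported above.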

  9. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  10. Artificial Astrocytes Improve Neural Network Performance

    Science.gov (United States)

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  11. Artificial astrocytes improve neural network performance.

    Directory of Open Access Journals (Sweden)

    Ana B Porto-Pazos

    Full Text Available Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  12. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  13. Discrete-time BAM neural networks with variable delays

    Science.gov (United States)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion places no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  14. Discrete-time BAM neural networks with variable delays

    International Nuclear Information System (INIS)

    Liu Xinge; Tang Meilan; Martin, Ralph; Liu Xinbi

    2007-01-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion places no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  15. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  16. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, as the objective function for the training process of the neural network, we employed the residuals of the integral equation or the differential equations. This differs from conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems both methods gave satisfactory results, and the methods are considered promising for certain kinds of problems. (author)

  17. Mutual Connectivity Analysis (MCA) Using Generalized Radial Basis Function Neural Networks for Nonlinear Functional Connectivity Network Recovery in Resting-State Functional MRI.

    Science.gov (United States)

    DSouza, Adora M; Abidin, Anas Zainul; Nagarajan, Mahesh B; Wismüller, Axel

    2016-03-29

    We investigate the applicability of a computational framework, called mutual connectivity analysis (MCA), for directed functional connectivity analysis in both synthetic and resting-state functional MRI data. This framework comprises first evaluating nonlinear cross-predictability between every pair of time series and then recovering the underlying network structure using community detection algorithms. We obtain the nonlinear cross-prediction score between time series using Generalized Radial Basis Function (GRBF) neural networks. These cross-prediction scores characterize the underlying functionally connected networks within the resting brain, which can be extracted using non-metric clustering approaches, such as the Louvain method. We first test our approach on synthetic models with known directional influence and network structure. Our method is able to capture the directional relationships between time series (with an area under the ROC curve = 0.92 ± 0.037) as well as the underlying network structure (Rand index = 0.87 ± 0.063) with high accuracy. Furthermore, we test this method for network recovery on resting-state fMRI data, where results are compared to the motor cortex network recovered from a motor stimulation sequence, resulting in a strong agreement between the two (Dice coefficient = 0.45). We conclude that our MCA approach is effective in analyzing non-linear directed functional connectivity and in revealing underlying functional network structure in complex systems.
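
    A drastically simplified stand-in for the GRBF cross-prediction score — a linear lag-1 predictor scored by explained variance — conveys the idea of filling a pairwise cross-predictability matrix before clustering. The toy series are fabricated for illustration; the actual MCA framework uses nonlinear GRBF predictors.

```python
def cross_predictability(x, y):
    """How well x[t] linearly predicts y[t+1], as 1 - normalized error.

    A simplified linear stand-in for the GRBF cross-prediction score.
    """
    xs, ys = x[:-1], y[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    slope = num / den
    sse = sum((my + slope * (a - mx) - b) ** 2 for a, b in zip(xs, ys))
    sst = sum((b - my) ** 2 for b in ys)
    return 1.0 - sse / sst

# y is driven by x with a one-step lag; z is unrelated to x at lag 1.
x = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
y = [0.5, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1]
z = [0.5, 0.2, 0.5, 0.8, 0.5, 0.2, 0.5, 0.8]

score_xy = cross_predictability(x, y)   # high: x drives y
score_xz = cross_predictability(x, z)   # near zero: no coupling
```

    Filling such scores for every ordered pair yields the directed affinity matrix on which a community detection algorithm (e.g. the Louvain method) would operate.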

  18. Inverting radiometric measurements with a neural network

    Science.gov (United States)

    Measure, Edward M.; Yee, Young P.; Balding, Jeff M.; Watkins, Wendell R.

    1992-02-01

    A neural network scheme for retrieving remotely sensed vertical temperature profiles was applied to observed ground-based radiometer measurements. The neural network used microwave radiance measurements and surface measurements of temperature and pressure as inputs. Because the microwave radiometer is capable of measuring 4 oxygen channels at 5 different elevation angles (9, 15, 25, 40, and 90 degs), 20 microwave measurements are potentially available. Because these measurements have considerable redundancy, we experimented with a neural network accepting as inputs microwave measurements taken at 53.88 GHz, 40 deg; 57.45 GHz, 40 deg; and 57.45 GHz, 90 deg. The primary test site was located at White Sands Missile Range (WSMR), NM. Results are compared with measurements made simultaneously with balloon-borne radiosonde instruments and with radiometric temperature retrievals made using more conventional retrieval algorithms. The neural network was trained using a Widrow-Hoff delta rule procedure. Functions of date (to capture seasonal dependence in the retrieval process) and functions of time (to capture diurnal effects) were used as additional inputs to the neural network.
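
    One common way to feed "functions of date" and "functions of time" to a network — whether or not it is what the authors used — is a cyclic sine/cosine encoding, so the ends of the period are seen as adjacent:

```python
import math

def cyclic_features(value, period):
    """Encode a periodic quantity (day of year, hour of day) as a point
    on the unit circle, so e.g. Dec 31 and Jan 1 map to nearby inputs."""
    angle = 2.0 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

season = cyclic_features(355, 365)     # late December
diurnal = cyclic_features(14, 24)      # 2 pm

# Day 365 wraps around to day 0: the two encodings nearly coincide.
s0 = cyclic_features(0, 365)
s365 = cyclic_features(365, 365)
```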

  19. ProLanGO: Protein Function Prediction Using Neural Machine Translation Based on a Recurrent Neural Network.

    Science.gov (United States)

    Cao, Renzhi; Freitas, Colton; Chan, Leong; Sun, Miao; Jiang, Haiqing; Chen, Zhangxin

    2017-10-17

    With the development of next-generation sequencing techniques, it is fast and cheap to determine protein sequences but relatively slow and expensive to extract useful information from them because of the limitations of traditional biological experimental techniques. Protein function prediction has been a long-standing challenge to fill the gap between the huge number of protein sequences and their known functions. In this paper, we propose a novel method that converts the protein function prediction problem into a language translation problem: we define a protein sequence language, "ProLan", and a protein function language, "GOLan", and build a neural machine translation model based on recurrent neural networks to translate "ProLan" into "GOLan". We blindly tested our method by participating in the latest third Critical Assessment of Function Annotation (CAFA 3) in 2016, and also evaluated its performance on selected proteins whose functions were released after the CAFA competition. The good performance on the training and testing datasets demonstrates that our newly proposed method is a promising direction for protein function prediction. In summary, we are the first to convert the protein function prediction problem into a language translation problem and to apply a neural machine translation model to protein function prediction.

  20. Polarity-specific high-level information propagation in neural networks.

    Science.gov (United States)

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  1. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.
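
The rectified polynomial activations described above can be illustrated with a minimal dense associative memory update in the spirit of the duality the abstract mentions; the network size, seed, interaction power, and number of corrupted bits are arbitrary choices for illustration.

```python
import numpy as np

def rect_poly(x, n):
    """Rectified polynomial activation: max(x, 0)**n (n = 1 is ReLU)."""
    return np.maximum(x, 0.0) ** n

def update_bit(memories, state, i, n):
    """One asynchronous update of bit i: compare the summed activations
    with bit i set to +1 versus -1 (Krotov-Hopfield style)."""
    s_plus, s_minus = state.copy(), state.copy()
    s_plus[i], s_minus[i] = 1.0, -1.0
    diff = np.sum(rect_poly(memories @ s_plus, n)) - \
           np.sum(rect_poly(memories @ s_minus, n))
    return 1.0 if diff > 0 else -1.0

rng = np.random.default_rng(0)
memories = rng.choice([-1.0, 1.0], size=(3, 40))   # 3 stored patterns
state = memories[0].copy()
state[:5] *= -1                                    # corrupt 5 bits
for _ in range(2):                                 # two asynchronous sweeps
    for i in range(len(state)):
        state[i] = update_bit(memories, state, i, n=3)
```

With a higher power n, each memory's contribution is sharpened, which is what allows more patterns to be stored than in the quadratic (n = 2) Hopfield case.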

  2. Neural Network Algorithm for Particle Loading

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given

  3. Development and function of human cerebral cortex neural networks from pluripotent stem cells in vitro.

    Science.gov (United States)

    Kirwan, Peter; Turner-Bridger, Benita; Peter, Manuel; Momoh, Ayiba; Arambepola, Devika; Robinson, Hugh P C; Livesey, Frederick J

    2015-09-15

    A key aspect of nervous system development, including that of the cerebral cortex, is the formation of higher-order neural networks. Developing neural networks undergo several phases with distinct activity patterns in vivo, which are thought to prune and fine-tune network connectivity. We report here that human pluripotent stem cell (hPSC)-derived cerebral cortex neurons form large-scale networks that reflect those found in the developing cerebral cortex in vivo. Synchronised oscillatory networks develop in a highly stereotyped pattern over several weeks in culture. An initial phase of increasing frequency of oscillations is followed by a phase of decreasing frequency, before giving rise to non-synchronous, ordered activity patterns. hPSC-derived cortical neural networks are excitatory, driven by activation of AMPA- and NMDA-type glutamate receptors, and can undergo NMDA-receptor-mediated plasticity. Investigating single neuron connectivity within PSC-derived cultures, using rabies-based trans-synaptic tracing, we found two broad classes of neuronal connectivity: most neurons have small numbers of presynaptic inputs, whereas a small set of hub-like neurons have large numbers of inputs (>40). These data demonstrate that the formation of hPSC-derived cortical networks mimics in vivo cortical network development and function, demonstrating the utility of in vitro systems for mechanistic studies of human forebrain neural network biology. © 2015. Published by The Company of Biologists Ltd.

  4. Shaping Early Reorganization of Neural Networks Promotes Motor Function after Stroke

    Science.gov (United States)

    Volz, L. J.; Rehme, A. K.; Michely, J.; Nettekoven, C.; Eickhoff, S. B.; Fink, G. R.; Grefkes, C.

    2016-01-01

    Neural plasticity is a major factor driving cortical reorganization after stroke. We here tested whether repetitively enhancing motor cortex plasticity by means of intermittent theta-burst stimulation (iTBS) prior to physiotherapy might promote recovery of function early after stroke. Functional magnetic resonance imaging (fMRI) was used to elucidate underlying neural mechanisms. Twenty-six hospitalized, first-ever stroke patients (time since stroke: 1–16 days) with hand motor deficits were enrolled in a sham-controlled design and pseudo-randomized into 2 groups. iTBS was administered prior to physiotherapy on 5 consecutive days either over ipsilesional primary motor cortex (M1-stimulation group) or parieto-occipital vertex (control-stimulation group). Hand motor function, cortical excitability, and resting-state fMRI were assessed 1 day prior to the first stimulation and 1 day after the last stimulation. Recovery of grip strength was significantly stronger in the M1-stimulation compared to the control-stimulation group. Higher levels of motor network connectivity were associated with better motor outcome. Consistently, control-stimulated patients featured a decrease in intra- and interhemispheric connectivity of the motor network, which was absent in the M1-stimulation group. Hence, adding iTBS to prime physiotherapy in recovering stroke patients seems to interfere with motor network degradation, possibly reflecting alleviation of post-stroke diaschisis. PMID:26980614

  5. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data ...

  6. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

    In this paper, a method for fault location in power distribution network is presented. The proposed method uses artificial neural network. In order to train the neural network, a series of specific characteristic are extracted from the recorded fault signals in relay. These characteristics...... components of the sequences as well as three-phase signals could be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the obtained inputs for the training of the neural network strongly depend on the fault angle, fault...... resistance, and fault location, the training data should be selected such that these differences are properly presented so that the neural network does not face any issues for identification. Therefore, selecting the signal processing function, data spectrum and subsequently, statistical parameters...

  7. Optimization of the kernel functions in a probabilistic neural network analyzing the local pattern distribution.

    Science.gov (United States)

    Galleske, I; Castellanos, J

    2002-05-01

    This article proposes a procedure for the automatic determination of the elements of the covariance matrix of the Gaussian kernel function of probabilistic neural networks. Two matrices, a rotation matrix and a matrix of variances, can be calculated by analyzing the local environment of each training pattern; their combination forms the covariance matrix of that training pattern. This automation has two advantages: first, it frees the neural network designer from specifying the complete covariance matrix, and second, it results in a network with better generalization ability than the original model. A variation of the famous two-spiral problem and real-world examples from the UCI Machine Learning Repository show that this model not only achieves a better classification rate than the original probabilistic neural network but can also outperform other well-known classification techniques.
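
For orientation, a bare-bones probabilistic neural network with spherical Gaussian kernels looks as follows; the paper's contribution is to replace the single shared width `sigma` with a full covariance matrix estimated per training pattern, which this sketch deliberately does not implement.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Probabilistic neural network with one spherical Gaussian kernel
    per training pattern; returns the class with the highest average
    kernel density at x."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])
label = pnn_predict(X, y, np.array([0.1, 0.0]))   # near the first cluster
```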

  8. Radial basis function neural networks with sequential learning MRAN and its applications

    CERN Document Server

    Sundararajan, N; Wei Lu Ying

    1999-01-01

    This book presents in detail the newly developed sequential learning algorithm for radial basis function neural networks, which realizes a minimal network. This algorithm, created by the authors, is referred to as Minimal Resource Allocation Networks (MRAN). The book describes the application of MRAN in different areas, including pattern recognition, time series prediction, system identification, control, communication and signal processing. Benchmark problems from these areas have been studied, and MRAN is compared with other algorithms. In order to make the book self-contained, a review of t

  9. Stock market index prediction using neural networks

    Science.gov (United States)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index was used as a benchmark in our experiments, where Radial Basis Function based neural networks were designed to model these indices over the period from January 1988 to December 1992. The proposed model achieved notable success, with over 90% prediction accuracy observed on monthly Dow Jones Industrial Index predictions. The model also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for predicting stock market indices.

  10. Neural networks within multi-core optic fibers.

    Science.gov (United States)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  11. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approach the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  12. A common functional neural network for overt production of speech and gesture.

    Science.gov (United States)

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. A one-layer recurrent neural network for constrained nonsmooth optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-10-01

    This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.

  14. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.

  15. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, artificial neural networks (ANNs) have made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using genetic algorithms (GAs) to optimize the topology of the ANN, but they suffer from some limitations: (1) the computation time required to train the ANN several times to reach the required average weights, (2) the slowness of the GA optimization process, and (3) fitness noise in the optimization of the ANN. This research suggests new techniques to overcome these limitations and find optimal neural network architectures for learning particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case of real complex systems. The proposed system has proved its significant performance compared to two common methods used in diagnostic applications.

  16. Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2014-01-01

    The presented paper aims to analyze the influence of the selection of transfer function and training algorithms on neural network flood runoff forecasts. Nine of the most significant flood events, caused by extreme rainfall, were selected from 10 years of measurement on a small headwater catchment in the Czech Republic, and flood runoff forecasting was investigated using an extensive set of multilayer perceptrons with one hidden layer of neurons. The analyzed artificial neural network models with 11 different activation functions in the hidden layer were trained using 7 local optimization algorithms. The results show that the Levenberg-Marquardt algorithm was superior to the remaining tested local optimization methods. When comparing the 11 nonlinear transfer functions used in hidden layer neurons, the RootSig function was superior to the rest of the analyzed activation functions.

  17. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  18. Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm. (On-Line Harmonics Estimation Application

    Directory of Open Access Journals (Sweden)

    Eyad K Almaita

    2017-03-01

    Keywords: Energy efficiency, Power quality, Radial basis function, neural networks, adaptive, harmonic. Article History: Received Dec 15, 2016; Received in revised form Feb 2nd 2017; Accepted 13rd 2017; Available online. How to Cite This Article: Almaita, E.K. and Shawawreh, J.Al (2017) Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm (On-Line Harmonics Estimation Application). International Journal of Renewable Energy Development, 6(1), 9-17. http://dx.doi.org/10.14710/ijred.6.1.9-17

  19. Hermite Functional Link Neural Network for Solving the Van der Pol-Duffing Oscillator Equation.

    Science.gov (United States)

    Mall, Susmita; Chakraverty, S

    2016-08-01

    A Hermite polynomial-based functional link artificial neural network (FLANN) is proposed here to solve the Van der Pol-Duffing oscillator equation. A single-layer Hermite neural network (HeNN) model is used, where the hidden layer is replaced by an expansion block of the input pattern using Hermite orthogonal polynomials. A feedforward neural network model with the unsupervised error backpropagation principle is used for modifying the network parameters and minimizing the computed error function. The Van der Pol-Duffing and Duffing oscillator equations may not be solvable exactly. Here, approximate solutions of these types of equations have been obtained by applying the HeNN model for the first time. Three mathematical example problems and two real-life application problems of the Van der Pol-Duffing oscillator equation, extracting the features of an early mechanical failure signal and weak signal detection, are solved using the proposed HeNN method. The HeNN approximate solutions have been compared with results obtained by the well-known Runge-Kutta method. Computed results are depicted in terms of graphs. After training the HeNN model, we may use it as a black box to obtain numerical results at any arbitrary point in the domain. Thus, the proposed HeNN method is efficient. The results reveal that this method is reliable and can be applied to other nonlinear problems too.
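
The Hermite expansion block of a FLANN can be sketched as follows. Here a toy least-squares fit of cos(πx) replaces the paper's unsupervised backpropagation training and ODE residual, so the training scheme shown is an assumption for illustration; only the expansion block itself follows the FLANN construction.

```python
import numpy as np

def hermite_features(x, n):
    """First n physicists' Hermite polynomials H_0..H_{n-1} via the
    recurrence H_{k+1} = 2x H_k - 2k H_{k-1}: the FLANN expansion block."""
    H = [np.ones_like(x), 2.0 * x]
    for k in range(1, n - 1):
        H.append(2.0 * x * H[k] - 2.0 * k * H[k - 1])
    return np.stack(H[:n], axis=1)

x = np.linspace(-1.0, 1.0, 200)
Phi = hermite_features(x, 8)               # single "layer" of basis features
target = np.cos(np.pi * x)                 # stand-in for an ODE solution
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
err = np.max(np.abs(Phi @ w - target))     # worst-case fit error
```

Because the model is linear in the expanded features, training reduces to adjusting a single weight vector, which is what makes the single-layer FLANN architecture cheap.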

  20. Neuronal spike sorting based on radial basis function neural networks

    Directory of Open Access Journals (Sweden)

    Taghavi Kani M

    2011-02-01

    Background: Studying the behavior of a society of neurons, extracting the communication mechanisms of the brain with other tissues, finding treatments for some nervous system diseases and designing neuroprosthetic devices all require an algorithm to sort neural spikes automatically. However, sorting neural spikes is a challenging task because of their low signal-to-noise ratio (SNR). The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system. Methods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF) neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminative; the RBF neural network was used to solve this problem. Results: After the learning process, our proposed algorithm classified any arbitrary spike. The results showed that although the proposed Radial Basis Spike Sorter (RBSS) reached the same error rate as previous methods, its computational cost was much lower. Further strengths of the proposed algorithm are its good speed and low computational complexity. Conclusion: The proposed algorithm seems well suited to procedures that require real-time processing and spike sorting.
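
A minimal sketch of the detection stage (thresholding at a multiple of a robust noise estimate) might look like this. The MAD-based noise estimate and the parameters are common choices in spike sorting, not necessarily those of the paper, and the RBF sorting stage is omitted.

```python
import numpy as np

def detect_spikes(signal, k=5.0):
    """Flag samples crossing k times a robust (MAD-based) estimate of
    the noise s.d.; return indices where the threshold is first crossed."""
    sigma = np.median(np.abs(signal)) / 0.6745
    above = signal > k * sigma
    return np.flatnonzero(above & ~np.roll(above, 1))

rng = np.random.default_rng(3)
sig = 0.1 * rng.standard_normal(1000)      # background noise
for t in (100, 400, 800):                  # inject three spikes
    sig[t] += 2.0
times = detect_spikes(sig)
```

Windows cut around the detected times would then be passed to the feature extraction and RBF sorting stages.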

  1. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  2. Learning Errors by Radial Basis Function Neural Networks and Regularization Networks

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman; Vidnerová, Petra

    2009-01-01

    Vol. 1, No. 2 (2009), pp. 49-57 ISSN 2005-4262 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords: neural network * RBF networks * regularization * learning Subject RIV: IN - Informatics, Computer Science http://www.sersc.org/journals/IJGDC/vol2_no1/5.pdf

  3. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model for studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks, as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of between-network analyses in examining neural models of geriatric depression.

  4. Machine learning of radial basis function neural network based on Kalman filter: Introduction

    Directory of Open Access Journals (Sweden)

    Vuković Najdan L.

    2014-01-01

    This paper analyzes machine learning of radial basis function neural network based on Kalman filtering. Three algorithms are derived: linearized Kalman filter, linearized information filter and unscented Kalman filter. We emphasize basic properties of these estimation algorithms, demonstrate how their advantages can be used for optimization of network parameters, derive mathematical models and show how they can be applied to model problems in engineering practice.

  5. Vestigial preference functions in neural networks and túngara frogs.

    OpenAIRE

    Phelps, S. M.; Ryan, M. J.; Rand, A. S.

    2001-01-01

    Although there is a growing interest in understanding how perceptual mechanisms influence behavioral evolution, few studies have addressed how perception itself is shaped by evolutionary forces. We used a combination of artificial neural network models and behavioral experiments to investigate how evolutionary history influenced the perceptual processes used in mate choice by female túngara frogs. We manipulated the evolutionary history of artificial neural network models and observed an emer...

  6. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    An original application of a radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The property of massive parallelism of neural networks is employed for this. The application of the RBF neural network for state estimation is investigated by testing its applicability on an IEEE 14 bus ...

  7. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
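
A minimal sketch of the first phase, using a winner-take-all competitive rule to place RBF centres, is shown below; the two-cluster toy data and the learning-rate/epoch settings are assumptions for illustration, and the second (RBF regression) phase is omitted.

```python
import numpy as np

def scl_centres(X, n_centres=2, lr=0.1, epochs=50, seed=0):
    """Simple competitive learning: for each sample, move only the
    nearest ('winning') centre a fraction lr toward it."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_centres, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X:
            win = np.argmin(np.linalg.norm(centres - x, axis=1))
            centres[win] += lr * (x - centres[win])
    return centres

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),    # cluster near (0, 0)
               rng.normal(5.0, 0.1, (20, 2))])   # cluster near (5, 5)
centres = scl_centres(X)                         # one centre per cluster
```

The learned centres would then be fixed and used as the RBF kernel locations in the second phase of the QSAR model.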

  8. Storage capacity and retrieval time of small-world neural networks

    International Nuclear Information System (INIS)

    Oshima, Hiraku; Odagaki, Takashi

    2007-01-01

    To understand the influence of structure on the function of neural networks, we study the storage capacity and the retrieval time of Hopfield-type neural networks for four network structures: regular, small-world and random networks generated by the Watts-Strogatz (WS) model, and the neural network of the nematode Caenorhabditis elegans. Using computer simulations, we find that (1) as the randomness of the network is increased, its storage capacity is enhanced; (2) the retrieval time of WS networks does not depend on the network structure, but the retrieval time of the C. elegans neural network is longer than that of WS networks; (3) the storage capacity of the C. elegans network is smaller than that of networks generated by the WS model, though the neural network of C. elegans is considered to be a small-world network.
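A minimal numeric sketch of the ingredients of such a simulation: a Watts-Strogatz-style rewired ring lattice, Hebbian storage restricted to existing links, and asynchronous retrieval from a corrupted cue. All sizes and the rewiring probability below are arbitrary and far smaller than in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, p = 100, 20, 0.3  # neurons, neighbours per node, rewiring probability

# Watts-Strogatz-style adjacency: ring lattice with random rewiring.
A = np.zeros((N, N), dtype=bool)
for i in range(N):
    for j in range(1, k // 2 + 1):
        A[i, (i + j) % N] = A[(i + j) % N, i] = True
for i in range(N):
    for j in np.flatnonzero(A[i]):
        if j > i and rng.random() < p:
            new = rng.integers(N)
            if new != i and not A[i, new]:
                A[i, j] = A[j, i] = False
                A[i, new] = A[new, i] = True

# Hebbian storage of random +/-1 patterns, with weights only on existing edges.
P = 2
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N * A  # zero weight where there is no link

# Retrieval: corrupt 10% of a stored pattern, then update asynchronously.
state = patterns[0].copy()
flip = rng.choice(N, 10, replace=False)
state[flip] *= -1
for sweep in range(5):
    for i in rng.permutation(N):
        h = W[i] @ state
        if h != 0:
            state[i] = 1 if h > 0 else -1

overlap = abs(state @ patterns[0]) / N  # 1.0 means perfect recall
```

Varying `p` between 0 (regular lattice) and 1 (random graph) and measuring the largest `P` that still yields high overlap reproduces the kind of capacity-versus-randomness comparison the record reports.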

  9. Research on Fault Diagnosis Method Based on Rule Base Neural Network

    Directory of Open Access Journals (Sweden)

    Zheng Ni

    2017-01-01

    The relationship between a fault phenomenon and its cause is usually nonlinear, which limits the accuracy of fault location, and neural networks are effective at dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of a BP neural network is built and the learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods for the node function and the membership function are also given. Simulation results confirm the effectiveness of this method.

  10. Solving differential equations with unknown constitutive relations as recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.; Tartakovsky, Alexandre M.

    2017-12-08

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. The use of techniques from the recent deep learning literature enables training of functions whose behavior manifests over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose and require modified training strategies.
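The core idea, learning an unknown rate term from a measured time series under a discretized ODE, can be illustrated without TensorFlow at all. The sketch below replaces the recurrent network with a tiny linear-in-parameters model fitted by least squares; the Monod-type "true" rate, step size and basis are invented for illustration and are not the paper's bioreactor model.

```python
import numpy as np

# Ground-truth sink term (unknown to the fitting step): a Monod-type rate.
def true_rate(x):
    return 0.8 * x / (0.5 + x)

# Simulate the "measured" time series with forward Euler.
dt, T = 0.01, 500
x = np.empty(T)
x[0] = 2.0
for t in range(T - 1):
    x[t + 1] = x[t] - dt * true_rate(x[t])

# Learn the rate: fit g(x) = sum_k w_k * x**k so that the discretized ODE
# x[t+1] ~ x[t] - dt * g(x[t]) matches the measurements (least squares).
def basis(x):
    return np.vstack([x ** k for k in range(5)]).T  # polynomial features

targets = (x[:-1] - x[1:]) / dt          # finite-difference estimate of the rate
Phi = basis(x[:-1])
w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

# Compare the learned rate with the hidden truth on the visited range.
grid = np.linspace(x.min(), x.max(), 50)
err = np.max(np.abs(basis(grid) @ w - true_rate(grid)))
```

In the paper the discretized ODE is instead unrolled as the recurrence of an RNN so that gradients flow through thousands of time steps; the least-squares version above only works because the toy model is linear in its parameters.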

  11. Neutron spectrometry with artificial neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A.; Iniguez de la Torre Bayo, M.P.; Barquero, R.; Arteaga A, T.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources, reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions, as well as spectra composed of a few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
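The training setup in this record (fold spectra through a response matrix to get count rates, then train a network to map count rates back to spectra) can be sketched with a synthetic stand-in. The response matrix, layer sizes and learning rate below are invented; this is not the UTA4 matrix or the authors' Matlab network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: a fixed response matrix R maps a 31-group spectrum to
# 7 sphere count rates; the network learns the inverse map, counts -> spectrum.
n_spheres, n_groups, n_train = 7, 31, 200
R = rng.uniform(size=(n_spheres, n_groups))
spectra = rng.uniform(size=(n_train, n_groups))
spectra /= spectra.sum(axis=1, keepdims=True)   # normalised total fluence
counts = spectra @ R.T                          # expected count rates

# One-hidden-layer MLP trained by plain batch gradient descent.
H, lr = 32, 0.5
W1 = rng.normal(0, 0.5, (n_spheres, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, n_groups)); b2 = np.zeros(n_groups)
losses = []
for epoch in range(2000):
    h = np.tanh(counts @ W1 + b1)
    out = h @ W2 + b2
    err = out - spectra
    losses.append((err ** 2).mean())
    g_out = 2 * err / err.size            # d(mean squared error)/d(out)
    g_h = (g_out @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    W2 -= lr * (h.T @ g_out);   b2 -= lr * g_out.sum(0)
    W1 -= lr * (counts.T @ g_h); b1 -= lr * g_h.sum(0)
```

Mapping 7 count rates to 31 groups is under-determined, which is exactly the ill-conditioning the record mentions; the network resolves it by learning the statistics of the training spectra rather than inverting the matrix.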

  12. A TLD dose algorithm using artificial neural networks

    International Nuclear Information System (INIS)

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-01-01

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of the functional link network (FLN). A neural network is an information-processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms in that it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input) many times. Trained in this way, the algorithm is eventually capable of producing its own unique solution to similar (but not identical) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.

  13. Optical resonators and neural networks

    Science.gov (United States)

    Anderson, Dana Z.

    1986-08-01

    It may be possible to implement neural network models using continuous-field optical architectures. These devices offer the inherent parallelism of propagating waves and an information density in principle dictated by the wavelength of light and the quality of the bulk optical elements. Few components are needed to construct a relatively large equivalent network. Various associative memories based on optical resonators have been demonstrated in the literature; a ring resonator design is discussed in detail here. Information is stored in a holographic medium and recalled through a competitive process in the gain medium supplying energy to the ring resonator. The resonator memory is the first realized example of a neural network function implemented with this kind of architecture.

  14. A neural network approach to job-shop scheduling.

    Science.gov (United States)

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, exemplified by job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling-salesman-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both the quality of the solution and the network complexity.

  15. Changes in neural network homeostasis trigger neuropsychiatric symptoms.

    Science.gov (United States)

    Winkelmann, Aline; Maggio, Nicola; Eller, Joanna; Caliskan, Gürsel; Semtner, Marcus; Häussler, Ute; Jüttner, René; Dugladze, Tamar; Smolinsky, Birthe; Kowalczyk, Sarah; Chronowska, Ewa; Schwarz, Günter; Rathjen, Fritz G; Rechavi, Gideon; Haas, Carola A; Kulik, Akos; Gloveli, Tengis; Heinemann, Uwe; Meier, Jochen C

    2014-02-01

    The mechanisms that regulate the strength of synaptic transmission and intrinsic neuronal excitability are well characterized; however, the mechanisms that promote disease-causing neural network dysfunction are poorly defined. We generated mice with targeted neuron type-specific expression of a gain-of-function variant of the neurotransmitter receptor for glycine (GlyR) that is found in hippocampectomies from patients with temporal lobe epilepsy. In this mouse model, targeted expression of gain-of-function GlyR in terminals of glutamatergic cells or in parvalbumin-positive interneurons persistently altered neural network excitability. The increased network excitability associated with gain-of-function GlyR expression in glutamatergic neurons resulted in recurrent epileptiform discharge, which provoked cognitive dysfunction and memory deficits without affecting bidirectional synaptic plasticity. In contrast, decreased network excitability due to gain-of-function GlyR expression in parvalbumin-positive interneurons resulted in an anxiety phenotype, but did not affect cognitive performance or discriminative associative memory. Our animal model unveils neuron type-specific effects on cognition, formation of discriminative associative memory, and emotional behavior in vivo. Furthermore, our data identify a presynaptic disease-causing molecular mechanism that impairs homeostatic regulation of neural network excitability and triggers neuropsychiatric symptoms.

  16. A one-layer recurrent neural network for constrained nonconvex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2015-01-01

    In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network converges to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any state of the proposed neural network converges to its equilibrium point set, which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution of the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performance of the proposed neural network.
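The penalty-based dynamics this record describes amount to a gradient flow on f(x) + σ·max(0, g(x)), integrated by the network's analog circuitry. A minimal Euler-discretized sketch on an invented nonconvex problem (not one of the paper's four examples) shows the mechanism: with σ large enough, the state first enters the feasible region and then settles at a KKT point.

```python
import numpy as np

# Nonconvex objective f(x) = (x1^2 - 1)^2 + x2^2 with one inequality
# constraint g(x) = 0.5 - x1 <= 0 (i.e. x1 >= 0.5); the constrained
# minimum is at x = (1, 0).
def f_grad(x):
    return np.array([4 * x[0] * (x[0] ** 2 - 1), 2 * x[1]])

def g(x):
    return 0.5 - x[0]

g_grad = np.array([-1.0, 0.0])
sigma = 5.0   # penalty parameter; must dominate the objective gradient

# Euler-discretized "neuron state" dynamics of the exact penalty energy:
# dx/dt = -grad f(x) - sigma * subgradient of max(0, g(x)).
x = np.array([-0.2, 1.0])   # infeasible start near the wrong local minimum
dt = 0.01
for step in range(5000):
    sub = g_grad if g(x) > 0 else np.zeros(2)
    x = x - dt * (f_grad(x) + sigma * sub)
```

Starting near the infeasible local minimum at x1 = -1, the penalty term pushes the state across the constraint boundary, after which the plain objective gradient carries it to (1, 0).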

  17. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Most modern analyses in high-energy physics use signal-versus-background classification techniques from machine learning, and neural networks in particular. Deep learning neural networks are the most promising modern technique for separating signal from background and can nowadays be widely and successfully implemented as part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top-quark analysis.

  18. Linear programming based on neural networks for radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Xingen Wu; Limin Luo

    2000-01-01

    In this paper, we propose a neural network model for linear programming that is designed to optimize radiotherapy treatment planning (RTP). This kind of neural network can be easily implemented by using a kind of 'neural' electronic system in order to obtain an optimization solution in real time. We first give an introduction to the RTP problem and construct a non-constraint objective function for the neural network model. We adopt a gradient algorithm to minimize the objective function and design the structure of the neural network for RTP. Compared to traditional linear programming methods, this neural network model can reduce the time needed for convergence, the size of problems (i.e., the number of variables to be searched) and the number of extra slack and surplus variables needed. We obtained a set of optimized beam weights that result in a better dose distribution as compared to that obtained using the simplex algorithm under the same initial condition. The example presented in this paper shows that this model is feasible in three-dimensional RTP. (author)

  19. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.

    Science.gov (United States)

    Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang

    2011-05-01

    A recurrent neural network is proposed for solving the non-smooth convex optimization problem with the convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, the Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network is convergent to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show the satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.

  20. Behaviour in 0 of the Neural Networks Training Cost

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1998-01-01

    We study the behaviour in zero of the derivatives of the cost function used when training non-linear neural networks. It is shown that a fair number of first, second and higher order derivatives vanish in zero, validating the belief that 0 is a peculiar and potentially harmful location. These calculations are related to practical and theoretical aspects of neural network training.

  1. Application of radial basis function neural network to predict soil sorption partition coefficient using topological descriptors.

    Science.gov (United States)

    Sabour, Mohammad Reza; Moftakhari Anasori Movahed, Saman

    2017-02-01

    The soil sorption partition coefficient log Koc is an indispensable parameter that can be used in assessing the environmental risk of organic chemicals. In order to predict the soil sorption partition coefficient for different and even unknown compounds in a fast and accurate manner, a radial basis function neural network (RBFNN) model was developed. Eight topological descriptors of 800 organic compounds were used as inputs to the model. These 800 organic compounds were chosen from a large and very diverse data set. A generalized regression neural network (GRNN) was utilized as the function in this neural network model due to its capability to adapt very quickly; hence, it can be used to predict log Koc for new chemicals as well. Of the total data set, 560 organic compounds were used for training and 240 to test the efficiency of the model. The obtained results indicate that the model performs very well. The correlation coefficients (R²) for the training and test sets were 0.995 and 0.933, respectively. The root-mean-square errors (RMSE) were 0.2321 for the training set and 0.413 for the test set. As the results for both the training and test sets are extremely satisfactory, the proposed neural network model can be employed not only to predict log Koc for known compounds, but also to predict this value for new products that enter the market each year. Copyright © 2016 Elsevier Ltd. All rights reserved.
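A GRNN of the kind used in this record is, at its core, Nadaraya-Watson kernel regression: a Gaussian basis function sits on every training sample and the prediction is the kernel-weighted average of the training targets, so "training" is instantaneous. The descriptor data below is synthetic, not the 800-compound set.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 2-descriptor regression data standing in for descriptor/log Koc pairs.
X_train = rng.uniform(-2, 2, size=(300, 2))
y_train = X_train[:, 0] ** 2 - X_train[:, 1] + 0.05 * rng.normal(size=300)

def grnn_predict(X_query, X_train, y_train, spread=0.3):
    """GRNN prediction: Gaussian kernel around every training sample,
    output is the kernel-weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * spread ** 2))
    return (K @ y_train) / K.sum(axis=1)

X_test = rng.uniform(-1.5, 1.5, size=(100, 2))
y_test = X_test[:, 0] ** 2 - X_test[:, 1]
pred = grnn_predict(X_test, X_train, y_train)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
```

The only free parameter is the kernel spread, which is why GRNNs "adapt very quickly" as the record notes: adding a new compound just adds one more kernel centre, with no retraining.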

  2. Neural networks for sensor validation and plant-wide monitoring

    International Nuclear Information System (INIS)

    Eryurek, E.

    1991-08-01

    The feasibility of using neural networks to characterize one or more variables as a function of other related variables has been studied. Neural networks, or parallel distributed processing, are found to be highly suitable for the development of relationships among various parameters. Sensor failure detection is studied, and it is shown that neural network models can be used to estimate sensor readings during the absence of a sensor. (author). 4 refs.; 3 figs

  3. Patterns of cortical oscillations organize neural activity into whole-brain functional networks evident in the fMRI BOLD signal

    Directory of Open Access Journals (Sweden)

    Jennifer C Whitman

    2013-03-01

    Recent findings from electrophysiology and multimodal neuroimaging have elucidated the relationship between the patterns of cortical oscillations evident in EEG/MEG and the functional brain networks evident in the BOLD signal. Much of the existing literature emphasizes how high-frequency cortical oscillations are thought to coordinate neural activity locally, while low-frequency oscillations play a role in coordinating activity between more distant brain regions. However, the assignment of different frequencies to different spatial scales is an oversimplification. A more informative approach is to explore the arrangements by which these low- and high-frequency oscillations work in concert, coordinating neural activity into whole-brain functional networks. When relating such networks to the BOLD signal, we must consider that the patterns of cortical oscillations change at the same speed as cognitive states, which often last less than a second. Consequently, the slower BOLD signal may often reflect the summed neural activity of several transient network configurations. This temporal mismatch can be circumvented if we use spatial maps to assess correspondence between oscillatory networks and BOLD networks.

  4. Multi-modular neural networks for the classification of e+e- hadronic events

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Some multi-modular neural network methods for classifying e+e- hadronic events are presented. We compare the performances of the following neural networks: MLP (multilayer perceptron), MLP and LVQ (learning vector quantization) trained sequentially, and MLP and RBF (radial basis function) trained sequentially. We introduce an MLP-RBF cooperative neural network. Our last study is a multi-MLP neural network. (orig.)

  5. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as the outlier probability and regularization parameters. We suggest adapting the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...

  6. Functional Connectivity with Distinct Neural Networks Tracks Fluctuations in Gain/Loss Framing Susceptibility

    Science.gov (United States)

    Smith, David V.; Sip, Kamila E.; Delgado, Mauricio R.

    2016-01-01

    Multiple large-scale neural networks orchestrate a wide range of cognitive processes. For example, interoceptive processes related to self-referential thinking have been linked to the default-mode network (DMN), whereas exteroceptive processes related to cognitive control have been linked to the executive-control network (ECN). Although the DMN and ECN have been postulated to exert opposing effects on cognition, it remains unclear how connectivity with these spatially overlapping networks contributes to fluctuations in behavior. While previous work has suggested the medial prefrontal cortex (MPFC) is involved in behavioral change following feedback, these observations could be linked to interoceptive processes tied to the DMN or exteroceptive processes tied to the ECN, because the MPFC is positioned in both networks. To address this problem, we employed independent component analysis combined with dual-regression functional connectivity analysis. Participants made a series of financial decisions framed as monetary gains or losses. In some sessions, participants received feedback from a peer observing their choices; in other sessions, feedback was not provided. Following feedback, framing susceptibility, indexed as the increase in gambling behavior in loss frames compared to gain frames, was heightened in some participants and diminished in others. We examined whether these individual differences were linked to differences in connectivity by contrasting sessions containing feedback against those that did not contain feedback. We found two key results. As framing susceptibility increased, the MPFC increased connectivity with the DMN; in contrast, the temporal-parietal junction decreased connectivity with the ECN. Our results highlight how functional connectivity patterns with distinct neural networks contribute to idiosyncratic behavioral changes. PMID:25858445

  7. Critical heat flux prediction by using radial basis function and multilayer perceptron neural networks: A comparison study

    International Nuclear Information System (INIS)

    Vaziri, Nima; Hojabri, Alireza; Erfani, Ali; Monsefi, Mehrdad; Nilforooshan, Behnam

    2007-01-01

    Critical heat flux (CHF) is an important parameter for the design of nuclear reactors. Although many experimental and theoretical studies have been performed, there is no single correlation for predicting CHF, because it is influenced by many parameters. These parameters are based on fixed inlet, local and fixed outlet conditions. Artificial neural networks (ANNs) have been applied to a wide variety of areas such as prediction, approximation, modeling and classification. In this study, two types of neural networks, radial basis function (RBF) and multilayer perceptron (MLP), are trained with the experimental CHF data and their performances are compared. RBF predicts CHF with root mean square (RMS) errors of 0.24%, 7.9% and 0.16%, and MLP predicts CHF with RMS errors of 1.29%, 8.31% and 2.71%, in fixed inlet conditions, local conditions and fixed outlet conditions, respectively. The results show that neural networks with the RBF structure have superior performance in CHF data prediction over MLP neural networks. The parametric trends of CHF obtained by the trained ANNs are also evaluated and the results reported.

  8. Neutron spectrometry using artificial neural networks

    International Nuclear Information System (INIS)

    Vega-Carrillo, Hector Rene; Martin Hernandez-Davila, Victor; Manzanares-Acuna, Eduardo; Mercado Sanchez, Gema A.; Pilar Iniguez de la Torre, Maria; Barquero, Raquel; Palacios, Francisco; Mendez Villafane, Roberto; Arteaga Arteaga, Tarcicio; Manuel Ortiz Rodriguez, Jose

    2006-01-01

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources, reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions, as well as spectra composed of a few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing were carried out using the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem.

  9. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

    In experimental particle physics, pattern recognition problems, specifically for neural network methods, occur frequently in track finding or feature extraction. Track finding is a combinatorial optimization problem: given a set of points in Euclidean space, one attempts the reconstruction of particle trajectories, subject to smoothness constraints. The basic ingredients in a neural network are the N binary neurons and the synaptic strengths connecting them. In our case the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates for each neuron the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments involved in the track reconstruction. An algorithm based on a Hopfield neural network has been developed and tested on the track coordinates measured by a silicon microstrip tracking system.

  10. Nano-topography Enhances Communication in Neural Cells Networks

    KAUST Repository

    Onesto, V.

    2017-08-23

    Neural cells are the smallest building blocks of the central and peripheral nervous systems. Information in neural networks and cell-substrate interactions have heretofore been studied separately. Understanding whether surface nano-topography can direct nerve cell assembly into computationally efficient networks may provide new tools and criteria for tissue engineering and regenerative medicine. In this work, we used information theory approaches and functional multineuron calcium imaging (fMCI) techniques to examine how information flows in neural networks cultured on surfaces with controlled topography. We found that the substrate roughness Sa affects network topology. In the low nanometer range, Sa = 0-30 nm, information increases with Sa. Moreover, we found that the energy density of a network of cells correlates with the topology of that network. This reinforces the view that information, energy and surface nano-topography are tightly interconnected and should not be neglected when studying cell-cell interaction in neural tissue repair and regeneration.

  11. Two-Stage Approach to Image Classification by Deep Neural Networks

    Science.gov (United States)

    Ososkov, Gennady; Goncharov, Pavel

    2018-02-01

    The paper demonstrates the advantages of deep learning networks over ordinary neural networks in their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are then compared as classifiers. Most of the effort in working with deep learning networks goes into the painstaking work of optimizing their structures and components, such as activation functions and weights, as well as the procedures for minimizing the loss function, in order to improve performance and speed up learning. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional neural networks are also used to solve a topical problem in protein genetics, the classification of durum wheat. The results of our comparative study demonstrate the undoubted advantage of deep networks, as well as the denoising power of autoencoders. In our work we use both GPUs and cloud services to speed up the calculations.

  12. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied to classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  13. Artificial Neural Networks and the Mass Appraisal of Real Estate

    Directory of Open Access Journals (Sweden)

    Gang Zhou

    2018-03-01

    With the rapid development of computing, artificial intelligence and big data technology, artificial neural networks have become one of the most powerful machine learning algorithms. In practice, most applications of artificial neural networks use the back-propagation neural network and its variations. Beyond back-propagation, various neural networks have been developed to improve the performance of the standard models. Though neural networks are a well-known method in real estate research, there is enormous room for future research to enhance their function. Some scholars combine genetic algorithms, geospatial information, support vector machine models or particle swarm optimization with artificial neural networks to appraise real estate, which improves on the existing appraisal technology. The mass appraisal of real estate in this paper includes real estate valuation in transactions and tax base valuation for real estate holdings. In this study we focus on the theoretical development of artificial neural networks for the mass appraisal of real estate, the evolution of artificial neural network models and algorithm improvements, artificial neural network practice and application, and we review the existing literature on artificial neural networks and the mass appraisal of real estate. Finally, we provide some suggestions for the mass appraisal of China's real estate.

  14. Multi-stability and almost periodic solutions of a class of recurrent neural networks

    International Nuclear Information System (INIS)

    Liu Yiguang; You Zhisheng

    2007-01-01

    This paper studies multi-stability and the existence of almost periodic solutions for a class of recurrent neural networks with bounded activation functions. After introducing a sufficient condition ensuring multi-stability, many criteria guaranteeing the existence of almost periodic solutions are derived using Mawhin's coincidence degree theory. All the criteria are constructed without assuming that the activation functions are smooth, monotonic or Lipschitz continuous, or that the networks contain periodic variables (such as periodic coefficients, periodic inputs or periodic activation functions), so all criteria can be easily extended to fit many concrete forms of neural networks, such as Hopfield neural networks or cellular neural networks. Finally, various simulations are employed to illustrate the criteria.

  15. Cat Swarm Optimization Based Functional Link Artificial Neural Network Filter for Gaussian Noise Removal from Computed Tomography Images

    Directory of Open Access Journals (Sweden)

    M. Kumar

    2016-01-01

    Full Text Available Gaussian noise is one of the dominant noises, which degrades the quality of acquired Computed Tomography (CT image data. It creates difficulties in pathological identification or diagnosis of any disease. Gaussian noise elimination is desirable to improve the clarity of a CT image for clinical, diagnostic, and postprocessing applications. This paper proposes an evolutionary nonlinear adaptive filter approach, using Cat Swarm Functional Link Artificial Neural Network (CS-FLANN to remove the unwanted noise. The structure of the proposed filter is based on the Functional Link Artificial Neural Network (FLANN and the Cat Swarm Optimization (CSO is utilized for the selection of optimum weight of the neural network filter. The applied filter has been compared with the existing linear filters, like the mean filter and the adaptive Wiener filter. The performance indices, such as peak signal to noise ratio (PSNR, have been computed for the quantitative analysis of the proposed filter. The experimental evaluation established the superiority of the proposed filtering technique over existing methods.
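    The core of a FLANN filter is a single layer of adaptive weights applied to a functionally expanded input, so the idea in the abstract can be sketched compactly. The expansion order, the toy training pairs, and the LMS update below are illustrative assumptions, not values from the paper; in particular, plain LMS stands in here for the Cat Swarm Optimization the paper actually uses to select the weights.

```python
import math

def flann_expand(x, order=2):
    """Trigonometric functional expansion of a scalar input x:
    [x, sin(pi*x), cos(pi*x), ..., sin(order*pi*x), cos(order*pi*x)]."""
    features = [x]
    for k in range(1, order + 1):
        features.append(math.sin(k * math.pi * x))
        features.append(math.cos(k * math.pi * x))
    return features

def lms_step(weights, features, target, mu=0.1):
    """One least-mean-squares update of the single-layer FLANN weights
    (a stand-in for the paper's CSO weight search)."""
    y = sum(w * f for w, f in zip(weights, features))
    err = target - y
    return [w + mu * err * f for w, f in zip(weights, features)], err

# Toy usage: learn a nonlinear input-output map from three (x, target) pairs.
pairs = [(0.1, 0.2), (0.5, 0.9), (0.9, 0.3)]
weights = [0.0] * 5            # order=2 expansion of a scalar gives 5 features
for _ in range(2000):
    for x, t in pairs:
        weights, err = lms_step(weights, flann_expand(x), t)
```

    Because the expansion is fixed and only the output weights adapt, the filter stays a linear-in-the-parameters model, which is what makes population-based optimizers such as CSO practical for tuning it.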

  16. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    Science.gov (United States)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
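    The equivalence claim can be illustrated on a toy ARX system (the coefficients, the two-tone input, and the gradient-descent settings below are assumptions for illustration; the paper treats polynomial-activation networks and more general ARMA models): a single linear-activation neuron trained on the one-step prediction error recovers the simulated coefficients, just as conventional least squares would.

```python
import math

# Toy ARX system (coefficients assumed for illustration):
#   y[t] = 0.5*y[t-1] - 0.3*y[t-2] + 1.0*u[t],  with u a known two-tone input.
N = 150
u = [math.sin(0.7 * t) + 0.5 * math.sin(2.3 * t) for t in range(N)]
y = [0.0, 0.0]
for t in range(2, N):
    y.append(0.5 * y[t - 1] - 0.3 * y[t - 2] + u[t])

# A single linear-activation neuron trained by batch gradient descent on the
# mean squared one-step prediction error; its weights converge to the ARX
# parameters, mirroring the least-squares solution.
w = [0.0, 0.0, 0.0]            # estimates of [a1, a2, b0]
lr = 0.1
for _ in range(4000):
    g = [0.0, 0.0, 0.0]
    for t in range(2, N):
        x = (y[t - 1], y[t - 2], u[t])
        err = sum(wi * xi for wi, xi in zip(w, x)) - y[t]
        for i in range(3):
            g[i] += err * x[i]
    w = [wi - lr * gi / (N - 2) for wi, gi in zip(w, g)]
```

    The two-tone input matters: a single sinusoid would make the three regressors linearly dependent in steady state and the parameters unidentifiable.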

  17. Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.

    Science.gov (United States)

    Li, Shuai; Li, Yangming

    2013-10-28

    The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, however, their computational burden increases intensively as the sampling period decreases, and they cannot satisfy continuous real-time calculation requirements. For the special case of the Sylvester equation defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recurrent neural network recently proposed by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution exactly. Advances in complex-valued neural networks suggest extending the existing real-valued ZNN for the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation is investigated, and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
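    The role of the sign-bi-power activation can be seen already in a scalar analogue of the ZNN design (the coefficients a(t) and b(t), the gain, and the Euler integration below are assumptions for illustration; the paper treats the matrix-valued, complex case): one defines the error e = a·x − b and imposes de/dt = −γ·φ(e), then solves for dx/dt.

```python
import math

def sbp(e, rho=0.5):
    """Sign-bi-power activation: (|e|^rho + |e|^(1/rho)) * sgn(e)."""
    s = (e > 0) - (e < 0)
    return (abs(e) ** rho + abs(e) ** (1.0 / rho)) * s

# Scalar time-varying equation a(t)*x(t) = b(t), tracked by a ZNN with
# explicit Euler integration (toy problem, assumed coefficients).
gamma, dt = 10.0, 1e-3
x, t = 0.0, 0.0
for _ in range(3000):                      # integrate for 3 seconds
    a = 2.0 + math.sin(t)                  # a(t) stays away from zero
    b = math.cos(t)
    da = math.cos(t)                       # a'(t)
    db = -math.sin(t)                      # b'(t)
    e = a * x - b
    # From de/dt = -gamma*sbp(e) and e = a*x - b: a*dx + da*x - db = -gamma*sbp(e)
    xdot = (-gamma * sbp(e) - da * x + db) / a
    x += dt * xdot
    t += dt

residual = abs((2.0 + math.sin(t)) * x - math.cos(t))
```

    Note that the design feeds the derivatives a'(t) and b'(t) forward, so the error dynamics contain no drift term and the residual is limited only by the discretization.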

  18. Artificial neural networks applied to forecasting time series.

    Science.gov (United States)

    Montaño Moreno, Juan J; Palmer Pol, Alfonso; Muñoz Gracia, Pilar

    2011-04-01

    This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved to be useful in time series forecasting, as well as a standard procedure for the practical application of ANN to this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by the four neural network models analyzed is less than 10%. In accordance with the interpretation criteria of this performance, it can be concluded that the neural network models show a close fit regarding their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP. The GRNN model has the worst performance. Finally, we analyze the advantages and limitations of ANN, possible solutions to these limitations, and provide an orientation towards future research.

  19. A one-layer recurrent neural network for constrained nonsmooth invex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2014-02-01

    Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.
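    The exact-penalty mechanism behind the network can be sketched on a one-variable toy problem (the objective, constraint, and penalty parameter below are assumptions, and plain subgradient descent stands in for the recurrent dynamics): with a penalty parameter larger than the relevant multiplier, the constrained minimizer becomes the unconstrained minimizer of the nonsmooth penalized objective.

```python
# Toy problem (assumed): minimize f(x) = (x - 2)^2 subject to x <= 1.
# Exact penalty: P(x) = f(x) + sigma*max(0, x - 1). With sigma = 5 > |f'(1)|,
# the unconstrained minimum of P is the constrained solution x = 1.
sigma, step = 5.0, 1e-3
x = 3.0                          # start infeasible
for _ in range(20000):
    grad = 2.0 * (x - 2.0)       # gradient of the smooth objective
    if x > 1.0:                  # subgradient of the nonsmooth penalty term
        grad += sigma
    x -= step * grad
```

    The iterate chatters slightly around the kink at x = 1, which is exactly the behavior the paper's finite-time convergence analysis addresses for the continuous-time network.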

  20. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields, including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  1. Optimization of multilayer neural network parameters for speaker recognition

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine to which group of reference speakers from the voice database the voice of an unknown (wanted) speaker belongs. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) requires setting parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find the parameters giving the neural network the highest precision and the shortest validation time. The input data of the neural network are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing and validation sets in the ratio 70:15:15. The result of the research described in this article is a different parameter setting for the multilayer neural network for four speakers.

  2. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in evaluating the multiple integrals. Therefore, a dual neural network method is proposed in this paper for calculating multiple integrals. The dual neural network consists of two neural networks: neural network A is used to learn the integrand function, and neural network B is used to represent its antiderivative. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
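    The derivative relationship between the two networks can be sketched for a one-hidden-layer tanh network with toy weights (an assumption for illustration; the paper derives B from the trained A in the same spirit): since d/dz log cosh(z) = tanh(z), a log-cosh network B with matched weights is an antiderivative of the tanh network A.

```python
import math

# Toy weights (w_i, a_i, b_i), assumed for illustration:
#   A(x) = sum_i w_i * tanh(a_i*x + b_i)          (learns the integrand)
#   B(x) = sum_i (w_i/a_i) * log(cosh(a_i*x + b_i))  (its antiderivative)
W = [(0.7, 1.3, -0.2), (-0.4, 0.8, 0.5), (1.1, -1.7, 0.1)]

def net_A(x):
    return sum(w * math.tanh(a * x + b) for w, a, b in W)

def net_B(x):
    return sum((w / a) * math.log(math.cosh(a * x + b)) for w, a, b in W)

# A central difference of B recovers A, confirming B' = A.
h = 1e-5
deriv = (net_B(0.3 + h) - net_B(0.3 - h)) / (2 * h)
```

    Once B is available in closed form, a definite integral of A reduces to B evaluated at the endpoints, which is the step that sidesteps the multiple-integration difficulty.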

  3. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    neural networks, such as learning, adapting and copying by means of parallel ... to provide robust recognition of hand-printed English text. Engine idle and misfiring .... and s represents the bounded activation function of a neuron. It is typically ...

  4. A Recurrent Neural Network for Nonlinear Fractional Programming

    Directory of Open Access Journals (Sweden)

    Quan-Ju Zhang

    2012-01-01

    Full Text Available This paper presents a novel continuous-time recurrent neural network model that performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to further demonstrate the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.

  5. Applying a Novel Cost Function to Hopfield Neural Network for Defects Boundaries Detection of Wood Image

    Directory of Open Access Journals (Sweden)

    Qi Dawei

    2010-01-01

    Full Text Available A modified Hopfield neural network with a novel cost function is presented for detecting wood defect boundaries in images. Unlike traditional methods, the boundary detection problem is formulated here as an optimization process that seeks the boundary points minimizing a cost function. An initial boundary is first estimated by the Canny algorithm. Each pixel gray value is treated as the state of a neuron of the Hopfield neural network, and the states are updated until the cost function reaches its minimum. The designed cost function ensures that few neurons are activated except those corresponding to actual boundary points, and that the activated neurons are positioned at the points with the greatest change in gray value. The experiments were implemented in Matlab. The results show that image noise is effectively removed, and that our method obtains cleaner and more vivid boundaries than traditional methods.

  6. Evaluation of artificial neural network techniques for flow forecasting in the River Yangtze, China

    Directory of Open Access Journals (Sweden)

    C. W. Dawson

    2002-01-01

    Full Text Available While engineers have been quantifying rainfall-runoff processes since the mid-19th century, it is only in the last decade that artificial neural network models have been applied to the same task. This paper evaluates two neural networks in this context: the popular multilayer perceptron (MLP) and the radial basis function (RBF) network. Using six-hourly rainfall-runoff data for the River Yangtze at Yichang (upstream of the Three Gorges Dam) for the period 1991 to 1993, it is shown that both neural network types can simulate river flows beyond the range of the training set. In addition, an evaluation of alternative RBF transfer functions demonstrates that the popular Gaussian function, often used in RBF networks, is not necessarily the ‘best’ function for river flow forecasting. Comparisons are also made between these neural networks and conventional statistical techniques: stepwise multiple linear regression, autoregressive moving-average models and a zero-order forecasting approach. Keywords: Artificial neural network, multilayer perceptron, radial basis function, flood forecasting
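    The comparison of RBF transfer functions can be sketched with a tiny exact-interpolation example (the 1-D centers and targets below are toy assumptions; the paper fits river-flow data): both the Gaussian and the multiquadric basis yield a solvable interpolation system, which is why choosing between them is ultimately an empirical question.

```python
import math

def gaussian(r, c=1.0):
    """Gaussian transfer function of the center distance r."""
    return math.exp(-(r / c) ** 2)

def multiquadric(r, c=1.0):
    """Multiquadric transfer function, an alternative to the Gaussian."""
    return math.sqrt(r * r + c * c)

def solve(A, y):
    """Naive Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return w

centers = [0.0, 1.0, 2.0, 3.0]
targets = [math.sin(c) for c in centers]     # toy data to interpolate

def rbf_fit_predict(phi, x):
    """Fit exact-interpolation RBF weights for basis phi, predict at x."""
    A = [[phi(abs(ci - cj)) for cj in centers] for ci in centers]
    w = solve(A, targets)
    return sum(wi * phi(abs(x - ci)) for wi, ci in zip(w, centers))
```

    Between the centers the two bases extrapolate quite differently (the Gaussian decays, the multiquadric grows), which is the behavior the paper's transfer-function evaluation probes.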

  7. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions.

    Science.gov (United States)

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions for investigating neuropsychological disorders within this approach. Though further efforts are required to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage.

  8. Neural-Network Quantum States, String-Bond States, and Chiral Topological States

    Science.gov (United States)

    Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio

    2018-01-01

    Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
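    The restricted-Boltzmann-machine ansatz referred to in the abstract has a compact closed form once the hidden units are traced out. A minimal sketch with toy real-valued parameters follows (the parameter values are assumptions, and the paper's states are generally complex-valued):

```python
import math
from itertools import product

# Toy RBM quantum state on 3 visible spins with 2 hidden units:
#   psi(s) = exp(sum_i a_i*s_i) * prod_j 2*cosh(b_j + sum_i W[i][j]*s_i)
a = [0.1, -0.2, 0.05]
b = [0.3, -0.1]
W = [[0.2, -0.4], [0.1, 0.3], [-0.5, 0.2]]

def psi(s):
    """Unnormalized wave-function amplitude for spin configuration s."""
    visible = math.exp(sum(ai * si for ai, si in zip(a, s)))
    hidden = 1.0
    for j in range(len(b)):
        theta = b[j] + sum(W[i][j] * s[i] for i in range(len(s)))
        hidden *= 2.0 * math.cosh(theta)
    return visible * hidden

# Squared norm over all 2^3 spin configurations s_i = +/-1.
norm2 = sum(psi(s) ** 2 for s in product([-1, 1], repeat=3))
```

    Making the weights complex turns this into the string-bond-compatible ansatz the paper analyzes; the fully connected W is what gives the state its nonlocal geometry.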

  9. Two-Stage Approach to Image Classification by Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Ososkov Gennady

    2018-01-01

    Full Text Available The paper demonstrates the advantages of deep learning networks over ordinary neural networks through their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are then compared as classifiers. Most of the effort in working with deep learning networks goes into the painstaking optimization of their structures and components, such as activation functions and weights, as well as the procedures for minimizing the loss function, in order to improve performance and speed up learning. It is also shown that deep autoencoders, after special training, develop a remarkable ability to denoise images. Convolutional neural networks are also applied to a topical problem in protein genetics, using the example of durum wheat classification. The results of our comparative study demonstrate the clear advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPUs and cloud services to speed up the calculations.

  10. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks, as a major soft-computing technology, have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred the development of training algorithms for other networks, such as radial basis function, recurrent, feedback, and unsupervised Kohonen self-organizing networks. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods, such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review of the development history of artificial neural networks is presented, and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks are introduced alongside support vector machines, and the limitations of ANNs are identified. The future of artificial neural network development in tandem with support vector machines is discussed in conjunction with further applications to food science and engineering, soil and water relationships for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks are reviewed as well, especially in the fields of agricultural and biological engineering.

  11. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum, anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that convolutional neural networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the four-dimensional data into a two-dimensional image, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification, which uses hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although training typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, the most popular neural networks, such as VGG, GoogLeNet and AlexNet, are trained on RGB spatial image data. Our approach aims to build a neural network based on directional reflectance spectra to help us understand the problem from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among the neural network parameters.

  12. Neural networks and its application in biomedical engineering

    International Nuclear Information System (INIS)

    Husnain, S.K.; Bhatti, M.I.

    2002-01-01

    An artificial neural network (ANN) is an information-processing system that has certain performance characteristics in common with biological neural networks. A neural network is characterized by the connections between the neurons, the method of determining the weights on the connections, and its activation functions, while a biological neuron has three types of components that are of particular interest in understanding an artificial neuron: its dendrites, soma, and axon. The action of the chemical transmitter modifies the incoming signal. The study of neural networks is an extremely interdisciplinary field. Computer-based diagnosis is an increasingly used method that tries to improve the quality of health care. Neural network systems have been developed extensively in the last ten years in the hope that medical diagnosis, and therefore medical care, would improve dramatically. The addition of a symbolic processing layer enhances ANNs in a number of ways. It is, for instance, possible to supplement a purely diagnostic network with a level that makes recommendations, in order to more closely simulate the nervous system. (author)

  13. A fuzzy Hopfield neural network for medical image segmentation

    International Nuclear Information System (INIS)

    Lin, J.S.; Cheng, K.S.; Mao, C.W.

    1996-01-01

    In this paper, an unsupervised parallel segmentation approach using a fuzzy Hopfield neural network (FHNN) is proposed. The main purpose is to embed fuzzy clustering into neural networks so that on-line learning and parallel implementation for medical image segmentation are feasible. The idea is to cast the clustering problem as a minimization problem where the criterion for the optimum segmentation is chosen as the minimization of the Euclidean distance between samples and class centers. In order to generate feasible results, a fuzzy c-means clustering strategy is included in the Hopfield neural network to eliminate the need for finding weighting factors in the energy function, which is formulated based on a basic concept commonly used in pattern classification, called the within-class scatter matrix principle. The suggested fuzzy c-means clustering strategy has also been proven to be convergent and to allow the network to learn more effectively than the conventional Hopfield neural network. The fuzzy Hopfield neural network based on the within-class scatter matrix shows promising results in comparison with the hard c-means method.
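    The fuzzy c-means strategy embedded in the network alternates the standard membership and center updates; a minimal sketch on toy 1-D data follows (the data, the fuzzifier m = 2, and the iteration count are assumptions for illustration, with no Hopfield dynamics involved):

```python
# Toy 1-D data with two well-separated clusters (assumed for illustration).
data = [0.0, 0.2, 0.4, 9.6, 9.8, 10.0]
centers = [1.0, 8.0]
m = 2.0                          # fuzzifier
for _ in range(50):
    # Membership of sample k in cluster i:
    #   u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
    U = []
    for x in data:
        d = [abs(x - c) + 1e-9 for c in centers]   # guard against d = 0
        U.append([1.0 / sum((d[i] / d[j]) ** 2 for j in range(2))
                  for i in range(2)])
    # Center update: v_i = sum_k u_ik^m * x_k / sum_k u_ik^m
    centers = [
        sum(U[k][i] ** m * data[k] for k in range(len(data)))
        / sum(U[k][i] ** m for k in range(len(data)))
        for i in range(2)
    ]
```

    In the FHNN, these memberships play the role of graded neuron states, which is how the fuzzy strategy removes the hand-tuned weighting factors from the energy function.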

  14. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
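    The spike-timing-dependent plasticity at the center of these models is commonly formalized with an exponential pair-based window; a minimal sketch with assumed constants (not taken from the article):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window (toy constants, assumed).
    dt_ms = t_post - t_pre: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, both decaying with |dt|."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)
    return -a_minus * math.exp(dt_ms / tau)

# Apply the window to a synapse for a few spike pairings.
w = 0.5
for dt in [5.0, 10.0, -5.0]:
    w += stdp_dw(dt)
```

    The slight asymmetry (a_minus > a_plus) is a common modeling choice that keeps runaway potentiation in check, one of the stability issues the review discusses.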

  15. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    Science.gov (United States)

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights, and use them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover search algorithm finds the optimal parameter setting iteratively, thus making very efficient neural network learning possible. The quantum neurons and weights, together with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and promising future applications. Simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Representation of linguistic form and function in recurrent neural networks

    NARCIS (Netherlands)

    Kadar, Akos; Chrupala, Grzegorz; Alishahi, Afra

    2017-01-01

    We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture

  17. Using domain-specific basic functions for the analysis of supervised artificial neural networks

    NARCIS (Netherlands)

    van der Zwaag, B.J.

    2003-01-01

    Since the early development of artificial neural networks, researchers have tried to analyze trained neural networks in order to gain insight into their behavior. For certain applications and in certain problem domains this has been successful, for example by the development of so-called rule

  18. Sinc-function based Network

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1998-01-01

    The purpose of this paper is to describe a neural network (SNN) that is based on Shannon's ideas of reconstructing a real continuous function from its samples. The basic function used in this network is the sinc function. Two learning algorithms are described. A simple one called IM...
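    Shannon's reconstruction idea underlying the SNN can be sketched directly: a band-limited signal sampled above its Nyquist rate is recovered by a weighted sum of shifted sinc functions (the toy signal and the truncated sample range below are assumptions for illustration):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    if abs(x) < 1e-12:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

# Shannon reconstruction from unit-rate samples: f(t) = sum_n f(n)*sinc(t - n).
# Toy signal at 0.1 cycles/sample, well below the Nyquist limit of 0.5.
f = lambda t: math.sin(2 * math.pi * 0.1 * t)
samples = {n: f(n) for n in range(-200, 201)}
t = 0.5                                      # evaluate between sample points
approx = sum(v * sinc(t - n) for n, v in samples.items())
```

    In the SNN, these sinc terms become the basic functions of the network and the sample values become trainable weights; truncating the sum, as above, is what makes a finite network an approximation rather than an exact reconstruction.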

  19. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models-also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.
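
    The target form of the transformation can be sketched numerically. The system below is a generic Lotka-Volterra system integrated with a simple Euler scheme; the coefficients are illustrative and not derived from any particular recurrent network:

```python
import numpy as np

# Generic Lotka-Volterra system: dx_i/dt = x_i * (lam_i + sum_j A[i,j] * x_j).
# The paper's construction maps tanh/logistic recurrent networks into this
# form in a higher dimension; here we only integrate the target form.
lam = np.array([0.5, -0.5])
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def lv_step(x, dt=1e-3):
    # One explicit Euler step of the Lotka-Volterra vector field.
    return x + dt * x * (lam + A @ x)

x = np.array([1.0, 0.5])
for _ in range(1000):
    x = lv_step(x)
print(x)  # state after integrating one time unit; stays positive
```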

  20. Global Robust Stability of Switched Interval Neural Networks with Discrete and Distributed Time-Varying Delays of Neural Type

    Directory of Open Access Journals (Sweden)

    Huaiqin Wu

    2012-01-01

    Full Text Available By combining the theories of switched systems and interval neural networks, the mathematical model of switched interval neural networks with discrete and distributed time-varying delays of neural type is presented. A set of interval-parameter-uncertainty neural networks with discrete and distributed time-varying delays of neural type is used as the individual subsystems, and an arbitrary switching rule is assumed to coordinate the switching between these networks. By applying the augmented Lyapunov-Krasovskii functional approach and linear matrix inequality (LMI) techniques, a delay-dependent criterion is obtained, in terms of LMIs, that ensures such switched interval neural networks are globally asymptotically robustly stable. The unknown gain matrix is determined by solving these delay-dependent LMIs. Finally, an illustrative example is given to demonstrate the validity of the theoretical results.

  1. Neural network construction via back-propagation

    International Nuclear Information System (INIS)

    Burwick, T.T.

    1994-06-01

    A method is presented that combines back-propagation with multi-layer neural network construction. Back-propagation is used to adjust not only the weights but also the signal functions. Going from one network to an equivalent one that has additional linear units, the non-linearity of these units, and thus their effective presence, is then introduced via back-propagation (weight-splitting). The back-propagated error causes the network to include new units in order to minimize the error function. We also show how this formalism allows the network to escape local minima.

  2. Predicting carbonate permeabilities from wireline logs using a back-propagation neural network

    International Nuclear Information System (INIS)

    Wiener, J.M.; Moll, R.F.; Rogers, J.A.

    1991-01-01

    This paper explores the applicability of neural networks to the determination of carbonate permeability from wireline logs. Resistivity, interval transit time, neutron porosity, and bulk density logs from Texaco's Stockyard Creek oil field were used as input to a specially designed neural network to predict core permeabilities in this carbonate reservoir. Also of interest was the comparison of the neural network's results with those of standard statistical techniques. The process of developing the neural network for this problem showed that a good understanding of the data is required when creating the training set from which the network learns. The network was trained to learn core permeabilities from raw and transformed log data using a hyperbolic tangent transfer function and a sum-of-squares global error function. It also required two hidden layers to solve this particular problem.
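
    The stated design choices (hyperbolic tangent transfer function, sum-of-squares error, two hidden layers) can be sketched on toy data. Everything below, including the stand-in inputs and target, is hypothetical and only mirrors those choices:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(64, 4))        # stand-in for 4 wireline log inputs
y = np.sin(X.sum(axis=1, keepdims=True))    # stand-in for core permeability
baseline = float(np.mean(y**2))             # error of predicting all zeros
n = len(X)

# Two hidden layers with tanh transfer functions, trained by back-propagation
# on a sum-of-squares error.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 8)); b2 = np.zeros(8)
W3 = rng.normal(scale=0.5, size=(8, 1)); b3 = np.zeros(1)
lr = 0.05

for _ in range(2000):
    h1 = np.tanh(X @ W1 + b1)               # first hidden layer
    h2 = np.tanh(h1 @ W2 + b2)              # second hidden layer
    out = h2 @ W3 + b3
    g3 = out - y                            # gradient of 0.5*SSE w.r.t. out
    g2 = (g3 @ W3.T) * (1 - h2**2)          # back-propagate through tanh
    g1 = (g2 @ W2.T) * (1 - h1**2)
    W3 -= lr * (h2.T @ g3) / n; b3 -= lr * g3.mean(axis=0)
    W2 -= lr * (h1.T @ g2) / n; b2 -= lr * g2.mean(axis=0)
    W1 -= lr * (X.T @ g1) / n;  b1 -= lr * g1.mean(axis=0)

h1 = np.tanh(X @ W1 + b1)
h2 = np.tanh(h1 @ W2 + b2)
mse = float(np.mean((h2 @ W3 + b3 - y)**2))
print(mse, "vs baseline", baseline)         # training error falls below baseline
```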

  3. A multivariate extension of mutual information for growing neural networks.

    Science.gov (United States)

    Ball, Kenneth R; Grant, Christopher; Mundy, William R; Shafer, Timothy J

    2017-11-01

    Recordings of neural network activity in vitro are increasingly being used to assess the development of neural network activity and the effects of drugs, chemicals, and disease states on neural network function. The high-content nature of the data derived from such recordings can be used to infer the effects of compounds or disease states on a variety of important neural functions, including network synchrony. Historically, the synchrony of networks in vitro has been assessed by correlation coefficients (e.g. Pearson's correlation), by statistics estimated from cross-correlation histograms between pairs of active electrodes, and/or by pairwise mutual information and related measures. The present study examines the application of Normalized Multiinformation (NMI) as a scalar measure of shared information content in a multivariate network that is robust with respect to changes in network size. Theoretical simulations are designed to investigate NMI as a measure of complexity and synchrony in a developing network relative to several alternative approaches. The NMI approach is applied to these simulations and also to data collected during exposure of in vitro neural networks to neuroactive compounds during the first 12 days in vitro, and compared to other common measures, including correlation coefficients and mean firing rates of neurons. NMI is shown to be more sensitive to developmental effects than first-order synchronous and nonsynchronous measures of network complexity. Finally, NMI is a scalar measure of global (rather than pairwise) mutual information in a multivariate network, and hence relies on fewer assumptions for cross-network comparisons than historical approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
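
    A discrete sketch of multiinformation and one plausible normalization is shown below. The exact normalization used in the paper is an assumption here, chosen so that fully synchronized channels score 1:

```python
import numpy as np
from collections import Counter

def entropy(samples):
    # Shannon entropy (bits) of discrete samples (values or tuples).
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def multiinformation(columns):
    # I(X1;...;Xn) = sum_i H(Xi) - H(X1,...,Xn), for discrete columns.
    joint = list(zip(*columns))
    return sum(entropy(c) for c in columns) - entropy(joint)

def nmi(columns):
    # One plausible normalization (an assumption, not necessarily the
    # paper's): divide by the maximum attainable multiinformation,
    # (n - 1) * min_i H(Xi), so identical channels give 1.
    h = [entropy(c) for c in columns]
    denom = (len(columns) - 1) * min(h)
    return multiinformation(columns) / denom if denom > 0 else 0.0

# Two perfectly synchronized binary spike trains -> NMI of 1.
a = [0, 1, 0, 1, 1, 0, 1, 0]
print(nmi([a, a]))  # -> 1.0
```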

  4. CONSTRUCTION COST PREDICTION USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Smita K Magdum

    2017-10-01

    Full Text Available Construction cost prediction is important for construction firms to compete and grow in the industry. Accurate construction cost prediction in the early stages of a project is important for project feasibility studies and successful completion. There are many factors that affect cost prediction. This paper presents construction cost prediction as a multiple regression model with the costs of six materials as independent variables. The objective of this paper is to develop neural network and multilayer perceptron based models for construction cost prediction. Different NN and MLP models are developed with varying hidden layer sizes and numbers of hidden nodes. Four artificial neural network models and twelve multilayer perceptron models are compared. MLP and NN give better results than the statistical regression method. Compared to NN, MLP works better on the training dataset but fails on the testing dataset. Five activation functions are tested to identify a suitable function for the problem. The 'elu' transfer function gives better results than the other transfer functions.
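
    For reference, the 'elu' activation reported to perform best has the standard definition sketched below (a definition sketch, not the paper's implementation):

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU: x for x >= 0, alpha * (exp(x) - 1) for x < 0.
    return np.where(x >= 0, x, alpha * np.expm1(x))

# Negative inputs saturate smoothly toward -alpha; positive inputs pass through.
print(elu(np.array([-2.0, 0.0, 2.0])))
```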

  5. Probability Density Estimation Using Neural Networks in Monte Carlo Calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo

    2008-01-01

    The Monte Carlo neutronics analysis requires the capability to estimate a tally distribution, such as an axial power distribution or a flux gradient in a fuel rod. This problem can be regarded as estimating a probability density function from an observation set. We apply the neural-network-based density estimation method to an observation and sampling-weight set produced by the Monte Carlo calculations. The neural network method is compared with the histogram and functional expansion tally methods for estimating a non-smooth density, a fission source distribution, and an absorption rate gradient in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)

  6. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological, and computational topics, grouped into chapters devoted to the discussion of novelties and innovations in the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and bio-inspired memristor-based networks. Providing insights into the latest research interests of a pool of international experts from different research fields, the volume is valuable to all those with any interest in a holistic approach to implementing believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  7. Synaptic energy drives the information processing mechanisms in spiking neural networks.

    Science.gov (United States)

    El Laithy, Karim; Bogdan, Martin

    2014-04-01

    Flow of energy and free-energy minimization underpin almost every aspect of naturally occurring physical mechanisms. Inspired by this fact, this work establishes an energy-based framework that spans the multi-scale range of biological neural systems and integrates synaptic dynamics, synchronous spiking activity, and neural states into one consistent working paradigm. Following a bottom-up approach, a hypothetical energy function is proposed for dynamic synaptic models based on theoretical thermodynamic principles and Hopfield networks. We show that a synapse exposes stable operating points in terms of its excitatory postsynaptic potential as a function of its synaptic strength. We postulate that synapses in a network operating at these stable points can drive the network to an internal state of synchronous firing. The presented analysis is related to the widely investigated temporally coherent activities (cell assemblies) over a certain range of time scales (binding-by-synchrony). This introduces a novel explanation of the observed (poly)synchronous activities within networks in terms of synaptic (coupling) functionality. At the network level, the transitions from one firing scheme to the other express discrete sets of neural states. The neural states exist as long as the network sustains the internal synaptic energy.
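
    The Hopfield-style energy such frameworks build on can be made concrete: for binary states and a symmetric, zero-diagonal coupling matrix, asynchronous sign updates never increase the energy. This is a textbook property sketched below, not the paper's synaptic energy function itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
W = rng.normal(size=(n, n))
W = (W + W.T) / 2            # symmetric couplings
np.fill_diagonal(W, 0)       # no self-connections
s = rng.choice([-1.0, 1.0], size=n)

def energy(s):
    # Hopfield-style energy: E(s) = -0.5 * s^T W s.
    return float(-0.5 * s @ W @ s)

energies = [energy(s)]
for _ in range(5):
    for i in range(n):       # asynchronous sign updates
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    energies.append(energy(s))

print(energies[0], "->", energies[-1])  # energy is non-increasing
```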

  8. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
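
    The spectral quantity examined in this record can be computed directly. The sketch below builds the normalized Laplacian of a small toy graph and extracts its eigenvalue spectrum; the connectome data of the paper are not used:

```python
import numpy as np

# Normalized Laplacian L = I - D^{-1/2} A D^{-1/2} for a small undirected
# toy graph (the paper applies this to macaque, cat, and C. elegans networks).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals)  # spectrum lies in [0, 2]; the smallest eigenvalue is 0
```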

  9. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  10. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. 
As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  11. Fundamental study on the interpretation technique for 3-D MT data using neural networks. 2; Neural network wo mochiita sanjigen MT ho data kaishaku gijutsu ni kansuru kisoteki kenkyu. 2

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, K; Kobayashi, T [OYO Corp., Tokyo (Japan); Mogi, T [Kyushu University, Fukuoka (Japan). Faculty of Engineering; Spichak, V

    1997-10-22

    The behavior of neural networks with respect to noise and the constitution of an optimum network are studied for the construction of a 3-D MT data interpretation system using neural networks. In the study, the relationship between the noise level of the educational data and the noise level of the neural network to be constructed is examined. It is found that the neural network is effective in interpreting data whose noise level is the same as that of the educational data; that it cannot correctly interpret data it has not met in the educational stage, even if such data is free of noise; that the optimum number of neurons in a hidden layer is approximately 40 in a network architecture using the current system; and that the neuron gain function enhances recognition capability when a logistic function is used in the hidden layer and a linear function is used in the output layer. 2 refs., 7 figs., 2 tabs.

  12. Gas Turbine Engine Control Design Using Fuzzy Logic and Neural Networks

    Directory of Open Access Journals (Sweden)

    M. Bazazzadeh

    2011-01-01

    Full Text Available This paper presents a successful approach to designing a Fuzzy Logic Controller (FLC) for a specific jet engine. First, a suitable mathematical model for the jet engine is presented with the aid of SIMULINK. Then, by applying different reasonable fuel flow functions to the engine model, some important engine-transient operation parameters (such as thrust, compressor surge margin, turbine inlet temperature, etc.) are obtained. These parameters provide a valuable database, which is used to train a neural network. In the second step, by designing and training a feedforward multilayer perceptron neural network on this database, a number of different reasonable fuel flow functions for various engine acceleration operations are determined. These functions are used to define the desired fuzzy fuel functions. Indeed, the neural networks are used as an effective method to define the optimum fuzzy fuel functions. In the next step, we propose an FLC using the engine simulation model and the neural network results. The proposed control scheme is verified by computer simulation using the designed engine model. The simulation results of the engine model with the FLC illustrate that the proposed controller achieves the desired performance and stability.

  13. Generating Seismograms with Deep Neural Networks

    Science.gov (United States)

    Krischer, L.; Fichtner, A.

    2017-12-01

    The recent surge of successful uses of deep neural networks in computer vision, speech recognition, and natural language processing, mainly enabled by the availability of fast GPUs and extremely large data sets, is starting to see many applications across all natural sciences. In seismology these are largely confined to classification and discrimination tasks. In this contribution we explore the use of deep neural networks for another class of problems: so-called generative models. Generative modelling is a branch of statistics concerned with generating new observed data samples, usually by drawing from some underlying probability distribution. Samples with specific attributes can be generated by conditioning on input variables. In this work we condition on seismic source (mechanism and location) and receiver (location) parameters to generate multi-component seismograms. The deep neural networks are trained on synthetic data calculated with Instaseis (http://instaseis.net, van Driel et al. (2015)) and waveforms from the global ShakeMovie project (http://global.shakemovie.princeton.edu, Tromp et al. (2010)). The underlying radially symmetric or smoothly three-dimensional Earth structures result in comparatively small waveform differences between similar events or at close receivers, and the networks learn to interpolate between training data samples. Of particular importance is the chosen misfit functional. Generative adversarial networks (Goodfellow et al. (2014)) implement a system in which two networks compete: the generator network creates samples and the discriminator network distinguishes these from the true training examples. Both are trained in an adversarial fashion until the discriminator can no longer distinguish between generated and real samples. We show how this can be applied to seismograms and in particular how it compares to networks trained with more conventional misfit metrics. 
Last but not least we attempt to shed some light on the black-box nature of

  14. Neural networks for perception human and machine perception

    CERN Document Server

    Wechsler, Harry

    1991-01-01

    Neural Networks for Perception, Volume 1: Human and Machine Perception focuses on models for understanding human perception in terms of distributed computation and examples of PDP models for machine perception. This book addresses both theoretical and practical issues related to the feasibility of both explaining human perception and implementing machine perception in terms of neural network models. The book is organized into two parts. The first part focuses on human perception. Topics include a network model of object recognition in human vision, the self-organization of functional architecture in t

  15. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much is still unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are further developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  16. Global Approximations to Cost and Production Functions using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Efthymios G. Tsionas

    2009-06-01

    Full Text Available The estimation of cost and production functions in economics relies on standard specifications which are less than satisfactory in numerous situations. However, instead of fitting the data with a pre-specified model, Artificial Neural Networks (ANNs) let the data itself serve as evidence to support the model's estimation of the underlying process. In this context, the proposed approach combines the strengths of economics, statistics, and machine learning research, and the paper proposes global approximations to arbitrary cost and production functions, respectively, given by ANNs. Suggestions on implementation are proposed, and the empirical application relies on standard techniques. All relevant measures, such as Returns to Scale (RTS) and Total Factor Productivity (TFP), may be computed routinely.

  17. Neural network post-processing of grayscale optical correlator

    Science.gov (United States)

    Lu, Thomas T; Hughlett, Casey L.; Zhoua, Hanying; Chao, Tien-Hsin; Hanan, Jay C.

    2005-01-01

    In this paper we present the use of a radial basis function neural network (RBFNN) as a post-processor to assist the optical correlator in identifying objects and rejecting false alarms. Image-plane features near the correlation peaks are extracted and fed to the neural network for analysis. The approach is capable of handling a large number of object variations and filter sets. Preliminary experimental results are presented and the performance is analyzed.
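
    A minimal RBF network of the kind described, with Gaussian hidden units and output weights fit by least squares, can be sketched as follows; the features, centers, widths, and labels below are hypothetical stand-ins for the correlation-plane features:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(100, 2))              # stand-in peak features
y = (X[:, 0]**2 + X[:, 1]**2 < 0.5).astype(float)  # toy object/false-alarm label

# Gaussian hidden units with centers picked from the data; the linear output
# weights are fit in closed form by least squares.
centers = X[rng.choice(len(X), size=10, replace=False)]
width = 0.5

def design(X):
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=2)
    return np.exp(-d2 / (2 * width**2))

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
accuracy = float(((design(X) @ w > 0.5) == (y > 0.5)).mean())
print(accuracy)  # training accuracy of the sketch
```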

  18. Sequential and parallel image restoration: neural network implementations.

    Science.gov (United States)

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high dimension convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
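
    The convex quadratic objective referred to above, and an iterative minimization of the kind mapped onto such networks, can be sketched as plain gradient descent; the blur operator, regularizer weight, and step size below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20
H = np.eye(n) + 0.1 * rng.normal(size=(n, n))  # stand-in blur operator
x_true = rng.normal(size=n)
y = H @ x_true + 0.01 * rng.normal(size=n)     # blurred, noisy observation
lam = 0.01                                      # regularization weight

# Gradient descent on the convex objective J(x) = ||Hx - y||^2 + lam*||x||^2.
x = np.zeros(n)
step = 0.05
for _ in range(500):
    grad = 2 * H.T @ (H @ x - y) + 2 * lam * x
    x -= step * grad

residual = float(np.linalg.norm(H @ x - y))
print(residual)  # small residual after descent
```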

  19. State-dependent, bidirectional modulation of neural network activity by endocannabinoids.

    Science.gov (United States)

    Piet, Richard; Garenne, André; Farrugia, Fanny; Le Masson, Gwendal; Marsicano, Giovanni; Chavis, Pascale; Manzoni, Olivier J

    2011-11-16

    The endocannabinoid (eCB) system and the cannabinoid CB1 receptor (CB1R) play key roles in the modulation of brain functions. Although the actions of eCBs and CB1Rs are well described at the synaptic level, little is known about their modulation of neural activity at the network level. Using microelectrode arrays, we have examined the role of CB1R activation in the modulation of the electrical activity of rat and mouse cortical neural networks in vitro. We find that exogenous activation of CB1Rs expressed on glutamatergic neurons decreases the spontaneous activity of cortical neural networks. Moreover, we observe that the net effect of the CB1R antagonist AM251 inversely correlates with the initial level of activity in the network: blocking CB1Rs increases network activity when basal network activity is low, whereas it depresses spontaneous activity when its initial level is high. Our results reveal a complex role of CB1Rs in shaping spontaneous network activity, and suggest that the outcome of endogenous neuromodulation on network function might be state dependent.

  20. Comparison of multiple linear regression and artificial neural network in developing the objective functions of the orthopaedic screws.

    Science.gov (United States)

    Hsu, Ching-Chi; Lin, Jinn; Chao, Ching-Kong

    2011-12-01

    Optimizing orthopaedic screws can greatly improve their biomechanical performance. However, a methodical design optimization approach requires a long time to search for the best design. Thus, surrogate objective functions for the orthopaedic screws should be accurately developed. To our knowledge, no study has evaluated the strengths and limitations of surrogate methods in developing the objective functions of orthopaedic screws. Three-dimensional finite element models for both tibial locking screws and spinal pedicle screws were constructed and analyzed. Then, the learning data were prepared according to the arrangement of a Taguchi orthogonal array, and the verification data were selected using randomized selection. Finally, the surrogate objective functions were developed using either multiple linear regression or an artificial neural network. The applicability and accuracy of these surrogate methods were evaluated and discussed. The multiple linear regression method successfully constructed the objective function of the tibial locking screws, but it failed to develop the objective function of the spinal pedicle screws. The artificial neural network method showed a greater capacity for prediction in developing the objective functions for the tibial locking screws and the spinal pedicle screws than the multiple linear regression method. The artificial neural network method may be a useful option for developing the objective functions of orthopaedic screws with greater structural complexity. The surrogate objective functions of the orthopaedic screws could effectively decrease the time and effort required for the design optimization process. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Sejarah, Penerapan, dan Analisis Resiko dari Neural Network: Sebuah Tinjauan Pustaka

    Directory of Open Access Journals (Sweden)

    Cristina Cristina

    2018-05-01

    Full Text Available A neural network is a form of artificial intelligence that has the ability to learn, grow, and adapt in a dynamic environment. Neural network research began in 1890, when the great American psychologist William James published the book "Principles of Psychology". James was the first to publish a number of facts related to the structure and function of the brain. The history of neural network development is divided into four epochs: the Camelot era, the Depression, the Renaissance, and the Neoconnectionism era. Neural networks used today are not 100 percent accurate; however, they are still used because they perform better than alternative computing models. Uses of neural networks include pattern recognition, signal analysis, robotics, and expert systems. Risk analysis of a neural network is first performed using hazard and operability studies (HAZOPS). Determining the neural network requirements well will help in determining its contribution to system hazards and in validating the control or mitigation of any hazards. After the first stage (HAZOPS) is complete and the second stage determines the requirements, the next stage is design. Neural networks undergo repeated design-train-test development. At the design stage, the hazard analysis should consider the design aspects of the development, including neural network architecture, size, intended use, and so on. The analysis continues through the implementation, test, installation and inspection, and operation stages, and ends at the maintenance stage.

  2. A comparison of neural network architectures for the prediction of MRR in EDM

    Science.gov (United States)

    Jena, A. R.; Das, Raja

    2017-11-01

    The aim of this research work is to predict the material removal rate (MRR) of a work-piece in electrical discharge machining (EDM). Here, an effort has been made to predict the material removal rate through a back-propagation neural network (BPN) and a radial basis function neural network (RBFN) for a work-piece of AISI D2 steel. The input parameters for the architecture are discharge current (Ip), pulse duration (Ton), and duty cycle (τ), taken into consideration to obtain the material removal rate of the work-piece as output. It has been observed that the radial basis function neural network is comparatively faster than the back-propagation neural network, but the back-propagation neural network yields more realistic values. Therefore, BPN may be considered the better process in this architecture for consistent prediction, saving the time and money of conducting experiments.

  3. Comparing Two Methods of Neural Networks to Evaluate Dead Oil Viscosity

    Directory of Open Access Journals (Sweden)

    Meysam Dabiri-Atashbeyk

    2018-01-01

    Full Text Available Reservoir characterization and asset management require comprehensive information about formation fluids. In fact, it is not possible to find accurate solutions to many petroleum engineering problems without having accurate pressure-volume-temperature (PVT) data. Traditionally, fluid information has been obtained by capturing samples and then measuring the PVT properties in a laboratory. In recent years, neural networks have been applied to a large number of petroleum engineering problems. In this paper, a multi-layer perceptron neural network and a radial basis function network (both optimized by a genetic algorithm) were used to evaluate the dead oil viscosity of crude oil, and it was found that the dead oil viscosity estimated by the multi-layer perceptron neural network was more accurate than the one obtained by the radial basis function network.

  4. The application of artificial neural networks to TLD dose algorithm

    International Nuclear Information System (INIS)

    Moscovitch, M.

    1997-01-01

    We review the application of feed-forward neural networks to the development of multi-element thermoluminescence dosimetry (TLD) dose algorithms. A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on a neural network is a fundamentally different approach from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with a given response of a multi-element dosimeter (input) many times. The algorithm, being trained that way, eventually is able to produce its own unique solution to similar (but not exactly the same) dose calculation problems. For personnel dosimetry, the output consists of the desired dose components: deep dose, shallow dose, and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. For this application, a neural network architecture was developed based on the concept of the functional link network (FLN). The FLN concept allowed an increase in the dimensionality of the input space and construction of a neural network without any hidden layers. This simplifies the problem and results in a relatively simple and reliable dose calculation algorithm. Overall, the neural network dose algorithm approach has been shown to significantly improve the precision and accuracy of dose calculations. (authors)
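    The functional-link idea can be sketched in a few lines: the raw inputs are expanded with nonlinear product terms, after which a single linear layer (no hidden layer) suffices. The four "element responses" and the dose target below are synthetic placeholders, not real TLD data.

```python
import numpy as np

# Functional-link expansion: augment 4 raw inputs with a bias term and
# all pairwise products, then fit one linear layer in closed form.
# Synthetic stand-ins for dosimeter element responses and a dose value.
rng = np.random.default_rng(1)
X = rng.uniform(0.5, 1.5, size=(200, 4))        # 4 toy TLD element responses
y = 0.8 * X[:, 0] * X[:, 1] + 0.2 * X[:, 2]     # toy "deep dose" target

def functional_link(X):
    # Bias + linear terms + pairwise products (the dimensionality increase).
    cols = [np.ones(len(X))] + [X[:, i] for i in range(4)]
    for i in range(4):
        for j in range(i + 1, 4):
            cols.append(X[:, i] * X[:, j])
    return np.stack(cols, axis=1)

F = functional_link(X)                           # (200, 1 + 4 + 6) features
w, *_ = np.linalg.lstsq(F, y, rcond=None)        # single linear layer
rmse = float(np.sqrt(np.mean((F @ w - y) ** 2)))
```

Because the expanded feature space here happens to contain the target exactly, the linear layer recovers it with essentially zero error, illustrating why no hidden layer is needed once the input space is enriched.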

  5. Neural network to diagnose lining condition

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at the iron and steel works. The authors describe the neural network structure and software that are designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented. The authors note the low learning and classification errors of the proposed neural networks. To realize the proposed neural network, the specialized software has been developed.

  6. Optimization of operation schemes in boiling water reactors using neural networks

    International Nuclear Information System (INIS)

    Ortiz S, J. J.; Castillo M, A.; Pelta, D. A.

    2012-10-01

    In previous works, we presented the results of a recurrent neural network used to find the best combination of several groups of fuel cells, fuel loads and control bar patterns. These partial solutions to each fuel management problem had previously been optimized by diverse optimization techniques. The neural network chooses the partial solutions so that their combination corresponds to a good configuration of the reactor according to an objective function. The values of the variables involved in this objective function are obtained by simulating the combination of partial solutions with Simulate-3. In the present work, a multilayer neural network that learned to predict some Simulate-3 results was used, so it was possible to substitute the neural network for the simulator in the objective function and thus accelerate the response time of the whole system. The preliminary results shown in this work are encouraging enough to continue efforts in this direction and to improve the response quality of the system. (Author)

  7. Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks.

    Science.gov (United States)

    Yang, Shuai; Yu, Juan; Hu, Cheng; Jiang, Haijun

    2018-08-01

    In this paper, without separating the complex-valued neural networks into two real-valued systems, the quasi-projective synchronization of fractional-order complex-valued neural networks is investigated. First, two new fractional-order inequalities are established by using the theory of complex functions, Laplace transform and Mittag-Leffler functions, which generalize traditional inequalities with the first-order derivative in the real domain. Additionally, different from hybrid control schemes given in the previous work concerning the projective synchronization, a simple and linear control strategy is designed in this paper and several criteria are derived to ensure quasi-projective synchronization of the complex-valued neural networks with fractional-order based on the established fractional-order inequalities and the theory of complex functions. Moreover, the error bounds of quasi-projective synchronization are estimated. Especially, some conditions are also presented for the Mittag-Leffler synchronization of the addressed neural networks. Finally, some numerical examples with simulations are provided to show the effectiveness of the derived theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    Science.gov (United States)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is essential to avoid severe outcomes. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique to detect abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique, since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed against a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results are promising for the neural classifier in terms of the performance measures.

  9. Stability analysis for cellular neural networks with variable delays

    International Nuclear Information System (INIS)

    Zhang Qiang; Wei Xiaopeng; Xu Jin

    2006-01-01

    Some sufficient conditions for the global exponential stability of cellular neural networks with variable delay are obtained by means of a method based on delay differential inequality. The method, which does not make use of Lyapunov functionals, is simple and effective for the stability analysis of neural networks with delay. Some previously established results in the literature are shown to be special cases of the presented result.

  10. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
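    A minimal hybrid discrete-time example along these lines couples two Rulkov map neurons by fast threshold modulation: a synaptic term switches on only while the presynaptic fast variable is above a firing threshold. The parameters are illustrative defaults, not values from the paper.

```python
import numpy as np

# Two Rulkov map neurons with FTM-style coupling: each neuron's update
# includes a synaptic current g*(v_rev - x) only while the *other*
# neuron's fast variable exceeds the firing threshold. All parameters
# are generic textbook choices, not taken from the reviewed models.
alpha, mu, sigma = 4.1, 0.001, 0.3   # Rulkov fast-slow parameters
g, v_rev, theta = 0.1, 1.0, 0.0      # coupling gain, reversal, threshold

x = np.array([-1.0, -1.2])           # fast (membrane-like) variables
y = np.array([-2.8, -2.8])           # slow variables
trace = np.empty((2000, 2))
for n in range(2000):
    firing = x > theta               # state transition: firing or not
    syn = g * (v_rev - x) * firing[::-1]   # input gated by the partner
    # Simultaneous map update; RHS uses the old (x, y).
    x, y = alpha / (1.0 + x ** 2) + y + syn, y - mu * (x - sigma)
    trace[n] = x
```

The boolean gate `firing[::-1]` is the discrete-time analogue of the state-dependent switch in the evolution equations: the vector field changes form whenever a presynaptic neuron crosses the threshold.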

  11. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  12. Classification of Company Performance using Weighted Probabilistic Neural Network

    Science.gov (United States)

    Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi

    2018-05-01

    Company performance can be judged by looking at a company's financial status, whether it is in a good or bad state. Classification of company performance can be achieved by parametric or non-parametric approaches. The neural network is one of the non-parametric methods. One of the Artificial Neural Network (ANN) models is the Probabilistic Neural Network (PNN). A PNN consists of four layers: the input layer, pattern layer, summation layer, and output layer. The distance function used is the Euclidean distance, and each class shares the same values as its weights. This study uses a PNN modified in the weighting process between the pattern layer and the summation layer to involve the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling the company's performance with the WPNN model has a very high accuracy, reaching 100%.
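    The modification described above can be sketched as a pattern layer whose Gaussian kernels use the squared Mahalanobis distance with a per-class covariance. The two "company performance" classes below are synthetic Gaussian clusters, not the study's financial data.

```python
import numpy as np

# PNN with Mahalanobis weighting: the pattern layer places a Gaussian
# kernel at every training point, the summation layer sums kernels per
# class, and the output layer takes the argmax. Synthetic 2-D clusters
# stand in for "good" and "bad" company indicators.
rng = np.random.default_rng(2)
good = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 60)
bad = rng.multivariate_normal([3, 3], [[1.0, -0.5], [-0.5, 1.0]], 60)
Xtr = np.vstack([good, bad])
ytr = np.array([0] * 60 + [1] * 60)

def wpnn_predict(x, Xtr, ytr, smooth=1.0):
    scores = []
    for c in (0, 1):
        Xc = Xtr[ytr == c]
        icov = np.linalg.inv(np.cov(Xc.T))          # class covariance weight
        d = Xc - x
        d2 = np.einsum('ij,jk,ik->i', d, icov, d)   # squared Mahalanobis
        scores.append(np.exp(-d2 / (2 * smooth ** 2)).sum())  # summation layer
    return int(np.argmax(scores))

acc = np.mean([wpnn_predict(x, Xtr, ytr) == y for x, y in zip(Xtr, ytr)])
```

Swapping `icov` for the identity matrix recovers the plain Euclidean PNN, which is exactly the change the WPNN makes between the pattern and summation layers.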

  13. Transient analysis for PWR reactor core using neural networks predictors

    International Nuclear Information System (INIS)

    Gueray, B.S.

    2001-01-01

    In this study, transient analysis for a Pressurized Water Reactor core has been performed. A lumped parameter approximation is preferred for that purpose, to describe the reactor core together with the mechanisms which play an important role in dynamic analysis. The dynamic behavior of the reactor core during transients is analyzed considering the transient initiating events, which are an essential part of Safety Analysis Reports. Several transients are simulated based on the employed core model. Simulation results are in accord with physical expectations. A neural network is developed to predict the future response of the reactor core in advance. The neural network is trained using the simulation results of a number of representative transients. The structure of the neural network is optimized by proper selection of transfer functions for the neurons. The trained neural network is used to predict the future responses following an early observation of the changes in system variables. The estimated behaviour using the neural network is in good agreement with the simulation results for various types of transients. The results of this study indicate that the designed neural network can be used as an estimator of the time-dependent behavior of the reactor core under transient conditions.

  14. Practical neural network recipes in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed.

  15. Quantum generalisation of feedforward neural networks

    Science.gov (United States)

    Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.

    2017-09-01

    We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.

  16. A study on neural network representation of reactor power control procedures

    International Nuclear Information System (INIS)

    Moon, Byung Soo; Park, J. C.; Kim, Y. T.; Yang, S. U.; Lee, H. C.; Hwang, I. A.; Hwang, H. S.

    1997-12-01

    A neural algorithm to carry out the curve readings and arithmetic computations necessary for reactor power control is described in this report. The curve readings are for functions of the form z=f(x,y) and require fairly good interpolations. One of the functions is the total power defect as a function of reactor power and boron concentration. The second is the new position of the control rod as a function of the current rod position and the increment of total power defect needed for the required power change. The curves involving the xenon effect are also considered separately. We represented these curves by cubic spline interpolations first and then converted them to fuzzy systems so that they perform the identical interpolations as the splines. The resulting fuzzy systems are then converted to artificial neural networks similar to the RBF type neural network. These networks still carry the same O(h^4) accuracy as the cubic spline interpolating functions. Also included is a description of an important result on how to find the spline interpolation coefficients without solving the matrix equation, when the function is a polynomial of the form f(t)=t^m. This result provides a systematic way of representing continuous functions by fuzzy systems and hence by artificial neural networks without any training. (author). 10 refs., 2 tabs., 10 figs
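    The fourth-order accuracy mentioned above can be illustrated with piecewise cubic Hermite interpolation, a self-contained cousin of the cubic spline that is also O(h^4) accurate when exact derivatives are supplied; sin(t) merely stands in for a power-defect curve, and nothing here is taken from the report itself.

```python
import numpy as np

# Piecewise cubic Hermite interpolation on a uniform knot grid.
# Halving the knot spacing h should shrink the max error by roughly
# h^4 scaling, i.e. a factor of ~16.
def hermite_interp(t, knots, f, df):
    i = np.clip(np.searchsorted(knots, t) - 1, 0, len(knots) - 2)
    h = knots[i + 1] - knots[i]
    s = (t - knots[i]) / h
    h00 = 2 * s**3 - 3 * s**2 + 1          # Hermite basis polynomials
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return (h00 * f(knots[i]) + h10 * h * df(knots[i])
            + h01 * f(knots[i + 1]) + h11 * h * df(knots[i + 1]))

t = np.linspace(0.0, np.pi, 1000)          # dense evaluation grid

def max_err(n_knots):
    knots = np.linspace(0.0, np.pi, n_knots)
    return np.max(np.abs(hermite_interp(t, knots, np.sin, np.cos) - np.sin(t)))

e_coarse, e_fine = max_err(9), max_err(17)  # knot spacing h, then h/2
```

Each local cubic here plays the role of one fuzzy rule (or one RBF-like unit) in the report's construction: the interpolant is a fixed, training-free combination of basis functions.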

  17. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem™ (SPW™), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  18. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  19. A note on exponential convergence of neural networks with unbounded distributed delays

    Energy Technology Data Exchange (ETDEWEB)

    Chu Tianguang [Intelligent Control Laboratory, Center for Systems and Control, Department of Mechanics and Engineering Science, Peking University, Beijing 100871 (China)]. E-mail: chutg@pku.edu.cn; Yang Haifeng [Intelligent Control Laboratory, Center for Systems and Control, Department of Mechanics and Engineering Science, Peking University, Beijing 100871 (China)

    2007-12-15

    This note examines issues concerning global exponential convergence of neural networks with unbounded distributed delays. Sufficient conditions are derived by exploiting exponentially fading memory property of delay kernel functions. The method is based on comparison principle of delay differential equations and does not need the construction of any Lyapunov functionals. It is simple yet effective in deriving less conservative exponential convergence conditions and more detailed componentwise decay estimates. The results of this note and [Chu T. An exponential convergence estimate for analog neural networks with delay. Phys Lett A 2001;283:113-8] suggest a class of neural networks whose globally exponentially convergent dynamics is completely insensitive to a wide range of time delays from arbitrary bounded discrete type to certain unbounded distributed type. This is of practical interest in designing fast and reliable neural circuits. Finally, an open question is raised on the nature of delay kernels for attaining exponential convergence in an unbounded distributed delayed neural network.

  20. A note on exponential convergence of neural networks with unbounded distributed delays

    International Nuclear Information System (INIS)

    Chu Tianguang; Yang Haifeng

    2007-01-01

    This note examines issues concerning global exponential convergence of neural networks with unbounded distributed delays. Sufficient conditions are derived by exploiting exponentially fading memory property of delay kernel functions. The method is based on comparison principle of delay differential equations and does not need the construction of any Lyapunov functionals. It is simple yet effective in deriving less conservative exponential convergence conditions and more detailed componentwise decay estimates. The results of this note and [Chu T. An exponential convergence estimate for analog neural networks with delay. Phys Lett A 2001;283:113-8] suggest a class of neural networks whose globally exponentially convergent dynamics is completely insensitive to a wide range of time delays from arbitrary bounded discrete type to certain unbounded distributed type. This is of practical interest in designing fast and reliable neural circuits. Finally, an open question is raised on the nature of delay kernels for attaining exponential convergence in an unbounded distributed delayed neural network.

  1. Representation of neural networks as Lotka-Volterra systems

    International Nuclear Information System (INIS)

    Moreau, Yves; Vandewalle, Joos; Louies, Stephane; Brenig, Leon

    1999-01-01

    We study changes of coordinates that allow the representation of the ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models--also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form, where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network.

  2. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network, Richard... ...Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  3. A stochastic learning algorithm for layered neural networks

    International Nuclear Information System (INIS)

    Bartlett, E.B.; Uhrig, R.E.

    1992-01-01

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
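    The basic random-optimization step is easy to sketch: perturb the weights with a Gaussian search vector, keep the candidate only when the cost drops, and adapt the standard deviation along the way (a crude stand-in for the OPDF search; the stratified sampling and dynamic node architecture are omitted). The linear-model task and all settings are illustrative.

```python
import numpy as np

# Random-optimization training: Gaussian perturbations of the weight
# vector, greedy acceptance, and a step size that grows on success and
# shrinks on failure. A tiny linear model stands in for the network.
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 2))
y = 2.0 * X[:, 0] - X[:, 1] + 0.5               # data from known weights

def cost(w):
    return float(np.mean((X @ w[:2] + w[2] - y) ** 2))

w = np.zeros(3)
sigma, best = 1.0, cost(w)
for _ in range(2000):
    cand = w + rng.normal(0.0, sigma, size=3)   # Gaussian search vector
    c = cost(cand)
    if c < best:
        w, best, sigma = cand, c, min(sigma * 1.1, 2.0)   # widen on success
    else:
        sigma = max(sigma * 0.99, 0.01)                    # narrow on failure
```

The adaptive sigma is the simplest possible version of "seeking out" a better search PDF; the paper's method replaces this scalar heuristic with a full OPDF search plus stratified sampling.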

  4. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  5. Application of improved PSO-RBF neural network in the synthetic ammonia decarbonization

    Directory of Open Access Journals (Sweden)

    Yongwei LI

    2017-12-01

    Full Text Available The synthetic ammonia decarbonization is a typical complex industrial process, which has the characteristics of time variation, nonlinearity and uncertainty, so an on-line control model is difficult to establish. An improved PSO-RBF neural network control algorithm is proposed to solve the problems of low precision and poor robustness in the complex process of synthetic ammonia decarbonization. The particle swarm optimization algorithm and the RBF neural network are combined. The improved particle swarm algorithm is used to optimize the RBF network's hidden-layer basis function centers and widths and the output layer's connection weights, constructing an RBF neural network model optimized by the improved PSO algorithm. The improved PSO-RBF neural network control model is applied to the key carbonization process and compared with the traditional fuzzy neural network. The simulation results show that the improved PSO-RBF neural network control method used in the synthetic ammonia decarbonization process has higher control accuracy and system robustness, which provides an effective way to solve the modeling and optimization control of a complex industrial process.
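    A hedged sketch of the hybrid scheme: a plain PSO searches over the RBF centers and a shared width, while the output-layer weights are solved by least squares inside the fitness function. The target curve, swarm size and PSO constants are illustrative, not the plant model or the improved PSO variant from the paper.

```python
import numpy as np

# PSO tuning an RBF network: each particle encodes 5 centers plus a
# shared width; fitness fits the output weights by least squares and
# returns the resulting mean squared error on a toy curve.
rng = np.random.default_rng(4)
t = np.linspace(-3, 3, 120)
target = np.sin(t)                          # stand-in for the process curve

def fitness(p):
    centers, width = p[:5], abs(p[5]) + 0.1
    H = np.exp(-(t[:, None] - centers) ** 2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(H, target, rcond=None)
    return float(np.mean((H @ w - target) ** 2))

n, dim = 15, 6
pos = rng.uniform(-3, 3, (n, dim))
vel = np.zeros((n, dim))
pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pval)]
for _ in range(40):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([fitness(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pval)]
best_mse = float(pval.min())
```

Solving the linear output weights inside the fitness keeps the swarm's search space small, which is the usual motivation for hybrid PSO-RBF schemes.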

  6. Constructing general partial differential equations using polynomial and neural networks.

    Science.gov (United States)

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim to improve the polynomial derivative term series ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Efficient second order Algorithms for Function Approximation with Neural Networks. Application to Sextic Potentials

    International Nuclear Information System (INIS)

    Gougam, L.A.; Taibi, H.; Chikhi, A.; Mekideche-Chafa, F.

    2009-01-01

    The problem of determining an analytical description for a set of data arises in numerous sciences and applications, and can be referred to as data modeling or system identification. Neural networks are a convenient means of representation because they are known to be universal approximators that can learn data. The desired task is usually obtained by a learning procedure which consists in adjusting the synaptic weights. For this purpose, many learning algorithms have been proposed to update these weights. Convergence of these learning algorithms is a crucial criterion for neural networks to be useful in different applications. The aim of the present contribution is to use a training algorithm for feed-forward wavelet networks used for function approximation. The training is based on the minimization of the least-squares cost function. The minimization is performed by iterative second-order gradient-based methods. We make use of the Levenberg-Marquardt algorithm to train the architecture of the chosen network; the training procedure starts with a simple gradient method, which is followed by a BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithm. The performances of the two algorithms are then compared. Our method is then applied to determine the energy of the ground state associated with a sextic potential. In fact, the Schrödinger equation does not always admit an exact solution, and one generally has to solve it numerically. To this end, the sextic potential is, firstly, approximated with the above outlined wavelet network and, secondly, implemented into a numerical scheme. Our results are in good agreement with the ones found in the literature.
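    The Levenberg-Marquardt iteration referred to above can be sketched as damped Gauss-Newton steps on a least-squares cost. A two-parameter exponential model stands in for the wavelet network so the example stays self-contained.

```python
import numpy as np

# Minimal Levenberg-Marquardt loop: solve the damped normal equations,
# accept the step only if the cost drops, and adjust the damping so the
# method interpolates between Gauss-Newton and gradient descent.
t = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * t)                       # data from known parameters

def residual(p):
    return p[0] * np.exp(p[1] * t) - y

def jacobian(p):
    return np.stack([np.exp(p[1] * t),
                     p[0] * t * np.exp(p[1] * t)], axis=1)

p, lam = np.array([1.0, 0.0]), 1e-2
for _ in range(50):
    r, J = residual(p), jacobian(p)
    A = J.T @ J + lam * np.eye(2)                # damped normal equations
    step = np.linalg.solve(A, -J.T @ r)
    if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.5             # accept: toward Gauss-Newton
    else:
        lam *= 2.0                               # reject: toward gradient descent
```

The damping parameter plays the same bridging role the abstract assigns to the progression from a simple gradient method to full second-order steps.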

  8. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest of AI is recognition algorithms. In this paper, one of the most common algorithms, the Convolutional Neural Network (CNN), will be introduced for image recognition. Understanding its theory and structure is of great significance for every scholar who is interested in this field. A Convolutional Neural Network is an artificial neural network which combines the mathematical operation of convolution with neural networks. The hierarchical structure of CNNs provides reliable computation speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, combining the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-study and learn in depth. Basically, BP provides backward feedback for enhancing reliability, and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary in the end.
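    The interplay of back-propagation and gradient descent can be shown on a network far smaller than a CNN: a one-hidden-layer net fitted to a 1-D curve (convolution layers are left out to keep the sketch short). The learning rate and layer sizes are arbitrary.

```python
import numpy as np

# Back-propagation (chain rule) plus plain gradient descent on a tiny
# 1-8-1 tanh network fitting y = x^2.
rng = np.random.default_rng(5)
x = np.linspace(-1, 1, 64)[:, None]
y = x ** 2                                   # target curve

W1, b1 = rng.normal(0, 1, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)                 # forward pass
    out = h @ W2 + b2
    grad_out = 2 * (out - y) / len(x)        # dMSE/dout
    grad_W2 = h.T @ grad_out                 # backward pass (chain rule)
    grad_b2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T
    grad_pre = grad_h * (1 - h ** 2)         # tanh derivative
    grad_W1 = x.T @ grad_pre
    grad_b1 = grad_pre.sum(0)
    for p, g in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        p -= lr * g                          # gradient descent update
mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
```

A CNN trains in exactly this way; the convolution and pooling layers only change how the forward pass and its chain-rule derivatives are computed.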

  9. Improve 3D laser scanner measurements accuracy using a FFBP neural network with Widrow-Hoff weight/bias learning function

    Science.gov (United States)

    Rodríguez-Quiñonez, J. C.; Sergiyenko, O.; Hernandez-Balbuena, D.; Rivas-Lopez, M.; Flores-Fuentes, W.; Basaca-Preciado, L. C.

    2014-12-01

    Many laser scanners depend on their mechanical construction to guarantee their measurement accuracy; however, current computational technologies allow us to improve these measurements by mathematical methods implemented in neural networks. In this article we introduce current laser scanner technologies, give a description of our 3D laser scanner, and correct its measurement error with a previously trained feed-forward back-propagation (FFBP) neural network with a Widrow-Hoff weight/bias learning function. A comparative analysis with other learning functions, such as the Kohonen algorithm and the gradient descent with momentum algorithm, is presented. Finally, computational simulations are conducted to verify the performance and method uncertainty of the proposed system.
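    The Widrow-Hoff (LMS) learning rule named in the title updates weights in proportion to the instantaneous error, w <- w + mu * e * x. A minimal sketch, with a hypothetical linear distortion standing in for the scanner's error model:

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([0.8, -0.3])   # hypothetical linear distortion to learn
w = np.zeros(2)
mu = 0.05                        # learning rate

for _ in range(2000):
    x = rng.normal(size=2)       # input sample
    d = true_w @ x               # desired (reference) output
    e = d - w @ x                # instantaneous error
    w += mu * e * x              # Widrow-Hoff / LMS update
```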

  10. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities is discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinnings of models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics who have an interest in app...

  11. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network guarantees to get the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  12. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalman predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  13. Reconstruction of neutron spectra through neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2003-01-01

    A neural network has been used to reconstruct neutron spectra from the count rates of the detectors of a Bonner sphere spectrometric system. A group of 56 neutron spectra was selected to calculate the count rates they would produce in a Bonner sphere system; the network was trained with these count rates and spectra. To test the performance of the network, 12 spectra were used: 6 were taken from the training group, 3 were obtained from mathematical functions, and the other 3 correspond to real spectra. Comparing the original spectra with those reconstructed by the network, we find that the network performs poorly when reconstructing monoenergetic spectra, which we attribute to the characteristics of the spectra used for training; for the other groups of spectra, however, the results of the network agree with the expected ones. (Author)

  14. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  15. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
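    The core idea of the paper, detecting communities of units with similar connection patterns, can be sketched with a naive cosine-similarity grouping; this is a deliberate simplification of the paper's network-analysis method, applied to planted toy weights:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical trained weights: 6 hidden units (rows), two planted modules
W = np.vstack([rng.normal(loc=m, scale=0.05, size=(3, 4))
               for m in (1.0, -1.0)])

def communities(W, thresh=0.9):
    """Greedy grouping of units whose incoming-weight vectors are similar."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    S = Wn @ Wn.T                      # cosine similarity of patterns
    labels = [-1] * len(W)
    nxt = 0
    for i in range(len(W)):
        if labels[i] == -1:
            labels[i] = nxt
            for j in range(i + 1, len(W)):
                if labels[j] == -1 and S[i, j] > thresh:
                    labels[j] = nxt
            nxt += 1
    return labels
```

    A real implementation would use a modularity-maximizing community detection algorithm rather than this greedy thresholding, but the notion of "similar connection pattern" is the same.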

  16. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note gives an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  17. Changes in the interaction of resting-state neural networks from adolescence to adulthood.

    Science.gov (United States)

    Stevens, Michael C; Pearlson, Godfrey D; Calhoun, Vince D

    2009-08-01

    This study examined how the mutual interactions of functionally integrated neural networks during resting-state fMRI differed between adolescence and adulthood. Independent component analysis (ICA) was used to identify functionally connected neural networks in 100 healthy participants aged 12-30 years. Hemodynamic timecourses that represented integrated neural network activity were analyzed with tools that quantified system "causal density" estimates, which indexed the proportion of significant Granger causality relationships among system nodes. Mutual influences among networks decreased with age, likely reflecting stronger within-network connectivity and more efficient between-network influences with greater development. Supplemental tests showed that this normative age-related reduction in causal density was accompanied by fewer significant connections to and from each network, regional increases in the strength of functional integration within networks, and age-related reductions in the strength of numerous specific system interactions. The latter included paths between lateral prefrontal-parietal circuits and "default mode" networks. These results contribute to an emerging understanding that activity in widely distributed networks thought to underlie complex cognition influences activity in other networks. (c) 2009 Wiley-Liss, Inc.
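    Granger causality, on which the "causal density" index above is based, asks whether the past of one timecourse improves prediction of another beyond that series' own past. A toy sketch with synthetic timecourses (not fMRI data; the driving strength and lag are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)               # driving timecourse
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()   # y is driven by x's past

def var_ratio(a, b):
    """Residual variance of a[t] ~ a[t-1] divided by that of
    a[t] ~ a[t-1] + b[t-1]; a ratio >> 1 means b Granger-causes a."""
    A = a[1:]
    D1 = np.column_stack([a[:-1], np.ones(len(A))])
    D2 = np.column_stack([a[:-1], b[:-1], np.ones(len(A))])
    r1 = A - D1 @ np.linalg.lstsq(D1, A, rcond=None)[0]
    r2 = A - D2 @ np.linalg.lstsq(D2, A, rcond=None)[0]
    return r1.var() / r2.var()

ratio_xy = var_ratio(y, x)   # large: x's past helps predict y
ratio_yx = var_ratio(x, y)   # near 1: y's past does not help predict x
```

    Causal density then counts the proportion of directed pairs whose ratio is statistically significant (e.g., by an F-test).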

  18. Global robust stability of delayed recurrent neural networks

    International Nuclear Information System (INIS)

    Cao Jinde; Huang Deshuang; Qu Yuzhong

    2005-01-01

    This paper is concerned with the global robust stability of a class of delayed interval recurrent neural networks which contain time-invariant uncertain parameters whose values are unknown but bounded in given compact sets. A new sufficient condition is presented for the existence, uniqueness, and global robust stability of equilibria for interval neural networks with time delays by constructing a Lyapunov functional and using a matrix-norm inequality. An error in an earlier publication is corrected, and an example is given to show the effectiveness of the obtained results

  19. Exponential stability of neural networks with asymmetric connection weights

    International Nuclear Information System (INIS)

    Yang Jinxiang; Zhong Shouming

    2007-01-01

    This paper investigates the exponential stability of a class of neural networks with asymmetric connection weights. By dividing the network state variables into various parts according to the characteristics of the neural networks, some new sufficient conditions for exponential stability are derived by constructing a Lyapunov function and using the method of variation of constants. The new conditions are associated with the initial values and are described by some blocks of the interconnection matrix, and do not depend on the other blocks. Examples are given to further illustrate the theory

  20. Artificial Neural Network Analysis System

    Science.gov (United States)

    2001-02-27

    Report documentation fragments: Contract No. DASG60-00-M-0201; purchase request "Foot in the Door-01"; title "Artificial Neural Network Analysis System"; author Powell, Bruce C; report period 28-10-2000 to 27-02-2001; company Atlantic...

  1. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie

    2014-01-01

    A model based on an improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. In this model, the hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared errors (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA), with a two-tone signal and broadband signals as input, have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance.
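    The Chebyshev-basis hidden layer can be sketched as follows: the recurrence T_{k+1}(u) = 2u T_k(u) - T_{k-1}(u) generates the basis, and with fixed features the output weights reduce to a linear least-squares fit. The target function here is a made-up stand-in for a CDPA response, and the sketch omits the Elman recurrence:

```python
import numpy as np

def chebyshev_features(u, order=6):
    """Chebyshev basis T_0..T_{order-1} via the three-term recurrence."""
    u = np.clip(u, -1.0, 1.0)            # the basis lives on [-1, 1]
    T = [np.ones_like(u), u]
    for _ in range(order - 2):
        T.append(2 * u * T[-1] - T[-2])  # T_{k+1} = 2u T_k - T_{k-1}
    return np.stack(T, axis=-1)

x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3 * x) + 0.3 * x**2           # made-up nonlinear response
H = chebyshev_features(x)                # fixed hidden-layer outputs
w, *_ = np.linalg.lstsq(H, y, rcond=None)
sse = np.sum((H @ w - y) ** 2)           # sum of squared errors
```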

  2. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  3. A neural network to predict symptomatic lung injury

    International Nuclear Information System (INIS)

    Munley, M.T.; Lo, J.Y.

    1999-01-01

    A nonlinear neural network that simultaneously uses pre-radiotherapy (RT) biological and physical data was developed to predict symptomatic lung injury. The input data were pre-RT pulmonary function, three-dimensional treatment plan doses and demographics. The output was a single value between 0 (asymptomatic) and 1 (symptomatic) to predict the likelihood that a particular patient would become symptomatic. The network was trained on data from 97 patients for 400 iterations with the goal to minimize the mean-squared error. Statistical analysis was performed on the resulting network to determine the model's accuracy. Results from the neural network were compared with those given by traditional linear discriminant analysis and the dose-volume histogram reduction (DVHR) scheme of Kutcher. Receiver-operator characteristic (ROC) analysis was performed on the resulting network, which had Az=0.833±0.04. (Az is the area under the ROC curve.) Linear discriminant multivariate analysis yielded an Az=0.813±0.06. The DVHR method had Az=0.521±0.08. The network was also used to rank the significance of the input variables. Future studies will be conducted to improve network accuracy and to include functional imaging data. (author)
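    The reported Az values are areas under the ROC curve, which can be computed directly from classifier scores via the Mann-Whitney statistic. A small sketch with invented scores (not the study's data):

```python
import numpy as np

def roc_az(scores_pos, scores_neg):
    """Az (area under ROC) as the Mann-Whitney probability that a
    positive case outscores a negative one (ties count half)."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (sp > sn).sum() + 0.5 * (sp == sn).sum()
    return wins / (sp.size * sn.size)

# invented network outputs in [0, 1] for symptomatic vs asymptomatic cases
az = roc_az([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1])
```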

  4. Automatic recognition of holistic functional brain networks using iteratively optimized convolutional neural networks (IO-CNN) with weak label initialization.

    Science.gov (United States)

    Zhao, Yu; Ge, Fangfei; Liu, Tianming

    2018-07-01

    fMRI data decomposition techniques have advanced significantly, from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, the interpretation of those decomposed networks remains an open question due to the lack of functional brain atlases, the lack of correspondence of decomposed or reconstructed networks across different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent works using 3D CNNs for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mistakenly labelled training brain networks. However, training data preparation is one of the biggest obstacles in these supervised deep learning models for functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and sometimes even introduces label mistakes. Especially for mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work, we tackled both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns the functional brain network recognition task into a fully automatic large-scale classification procedure. Our extensive experiments based on fMRI data from 1099 brains in ABIDE-II showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Stacked Heterogeneous Neural Networks for Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Florin Leon

    2010-01-01

    A hybrid model for time series forecasting is proposed. It is a stacked neural network, containing one standard multilayer perceptron with bipolar sigmoid activation functions, and another with an exponential activation function in the output layer. As shown by the case studies, the proposed stacked hybrid neural model performs well on a variety of benchmark time series. The combination of weights of the two stack components that leads to optimal performance is also studied.

  6. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been subjects of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, prototype learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype and adaptive learning theories. We refer to our new model as the "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbors.

  7. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis
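    The RBF network at the core of the ROM maps inputs through Gaussian bumps; once centers and width are fixed by the training process, fitting the output weights is a linear least-squares problem. A static sketch with arbitrary centers, width, and target (the recurrent part is omitted):

```python
import numpy as np

centers = np.linspace(-1.0, 1.0, 9)      # arbitrary RBF centers
width = 0.3                              # arbitrary Gaussian width

def rbf_design(x):
    """Each column is one Gaussian bump evaluated at the inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

x = np.linspace(-1.0, 1.0, 80)
y = np.tanh(4 * x)                       # toy nonlinear response
w, *_ = np.linalg.lstsq(rbf_design(x), y, rcond=None)
max_err = np.max(np.abs(rbf_design(x) @ w - y))
```

    The recurrent variant feeds delayed outputs back in as extra inputs, which is what turns this static map into a time-dependent reduced-order model.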

  8. Optimal Hierarchical Modular Topologies for Producing Limited Sustained Activation of Neural Networks

    OpenAIRE

    Kaiser, Marcus; Hilgetag, Claus C.

    2010-01-01

    An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable regimes of network activation, typically arising from a limited parameter range. In this range of limited sustained activity (LSA), the activity of neural populations in the network persists between the extremes of either quickly dying out or activating the whole network. Hierarchical modular networks were previously found to show...

  9. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  10. Novel stability criteria for uncertain delayed Cohen-Grossberg neural networks using discretized Lyapunov functional

    International Nuclear Information System (INIS)

    Souza, Fernando O.; Palhares, Reinaldo M.; Ekel, Petr Ya.

    2009-01-01

    This paper deals with the stability analysis of delayed uncertain Cohen-Grossberg neural networks (CGNN). The proposed methodology consists in obtaining new robust stability criteria formulated as linear matrix inequalities (LMIs) via Lyapunov-Krasovskii theory. In particular, one stability criterion is derived from the selection of a parameter-dependent Lyapunov-Krasovskii functional, which, combined with Gu's discretization technique and a simple strategy that decouples the system matrices from the functional matrices, assures a less conservative stability condition. Two computer simulations are presented to support the improved theoretical results.

  11. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
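    The pseudo-inverse baseline in this comparison can be sketched directly: 1-D convolution is multiplication by a Toeplitz matrix H, so the signal is recovered as pinv(H) @ y. The kernel and signal below are arbitrary examples:

```python
import numpy as np

h = np.array([0.5, 1.0, 0.5])                       # blur kernel
s = np.array([0.0, 0.0, 1.0, 0.0, 2.0, 0.0, 0.0])   # true signal
y = np.convolve(s, h, mode="full")                  # observation, length 9

# build H so that y = H @ s (tall Toeplitz convolution matrix)
H = np.zeros((len(y), len(s)))
for j in range(len(s)):
    H[j:j + len(h), j] = h

s_hat = np.linalg.pinv(H) @ y                       # pseudo-inverse recovery
```

    With noise-free data and a full-column-rank H the recovery is exact; the interest of LMS or a trained network lies in noisy or ill-conditioned cases.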

  12. Intrinsic connectivity of neural networks in the awake rabbit.

    Science.gov (United States)

    Schroeder, Matthew P; Weiss, Craig; Procissi, Daniel; Disterhoft, John F; Wang, Lei

    2016-04-01

    The way in which the brain is functionally connected into different networks has emerged as an important research topic in order to understand normal neural processing and signaling. Since some experimental manipulations are difficult or unethical to perform in humans, animal models are better suited to investigate this topic. Rabbits are a species that can undergo MRI scanning in an awake and conscious state with minimal preparation and habituation. In this study, we characterized the intrinsic functional networks of the resting New Zealand White rabbit brain using BOLD fMRI data. Group independent component analysis revealed seven networks similar to those previously found in humans, non-human primates and/or rodents including the hippocampus, default mode, cerebellum, thalamus, and visual, somatosensory, and parietal cortices. For the first time, the intrinsic functional networks of the resting rabbit brain have been elucidated demonstrating the rabbit's applicability as a translational animal model. Without the confounding effects of anesthetics or sedatives, future experiments may employ rabbits to understand changes in neural connectivity and brain functioning as a result of experimental manipulation (e.g., temporary or permanent network disruption, learning-related changes, and drug administration). Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Performance of an artificial neural network for vertical root fracture detection: an ex vivo study.

    Science.gov (United States)

    Kositbowornchai, Suwadee; Plermkamon, Supattra; Tangkosol, Tawan

    2013-04-01

    To develop an artificial neural network for vertical root fracture detection. A probabilistic neural network design was used to clarify whether a tooth root was sound or had a vertical root fracture. Two hundred images (50 sound and 150 with vertical root fractures) derived from digital radiography--used to train and test the artificial neural network--were divided into three groups according to the number of training and test data sets: 80/120, 105/95 and 130/70, respectively. Both training and test data were evaluated using grey-scale data per line passing through the root. These data were normalized to reduce the grey-scale variance and fed as input data to the neural network. The variance of function in recognition data was varied between 0 and 1 to select the best performance of the neural network. The performance of the neural network was evaluated using a diagnostic test. After testing data under several variances of function, we found the highest sensitivity (98%), specificity (90.5%) and accuracy (95.7%) occurred in Group three, for which the variance of function in recognition data was between 0.025 and 0.005. The neural network designed in this study has sufficient sensitivity, specificity and accuracy to be a model for vertical root fracture detection. © 2012 John Wiley & Sons A/S.
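    A probabilistic neural network is essentially a Parzen-window classifier: one Gaussian kernel per training example, with the kernel width (the "variance of function" tuned in the study) as the main free parameter. A minimal sketch on invented 2-D features, not the radiographic grey-scale data:

```python
import numpy as np

def pnn_classify(x, train, labels, sigma=0.5):
    """Probabilistic neural network: average Gaussian kernel response
    per class; sigma is the smoothing parameter to be tuned."""
    train = np.asarray(train, dtype=float)
    labels = np.asarray(labels)
    k = np.exp(-np.sum((train - x) ** 2, axis=1) / (2 * sigma**2))
    classes = sorted(set(labels.tolist()))
    scores = [k[labels == c].mean() for c in classes]
    return classes[int(np.argmax(scores))]

# invented 2-D features for "sound" (0) vs "fractured" (1) roots
train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
labels = [0, 0, 1, 1]
pred = pnn_classify(np.array([0.85, 0.85]), train, labels)
```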

  14. Action Potential Modulation of Neural Spin Networks Suggests Possible Role of Spin

    CERN Document Server

    Hu, H P

    2004-01-01

    In this paper we show that nuclear spin networks in neural membranes are modulated by action potentials through J-coupling, dipolar coupling and chemical shielding tensors and perturbed by microscopically strong and fluctuating internal magnetic fields produced largely by paramagnetic oxygen. We suggest that these spin networks could be involved in brain functions since said modulation inputs information carried by the neural spike trains into them, said perturbation activates various dynamics within them and the combination of the two likely produce stochastic resonance thus synchronizing said dynamics to the neural firings. Although quantum coherence is desirable and may indeed exist, it is not required for these spin networks to serve as the subatomic components for the conventional neural networks.

  15. Probing many-body localization with neural networks

    Science.gov (United States)

    Schindler, Frank; Regnault, Nicolas; Neupert, Titus

    2017-06-01

    We show that a simple artificial neural network trained on entanglement spectra of individual states of a many-body quantum system can be used to determine the transition between a many-body localized and a thermalizing regime. Specifically, we study the Heisenberg spin-1/2 chain in a random external field. We employ a multilayer perceptron with a single hidden layer, which is trained on labeled entanglement spectra pertaining to the fully localized and fully thermal regimes. We then apply this network to classify spectra belonging to states in the transition region. For training, we use a cost function that contains, in addition to the usual error and regularization parts, a term that favors a confident classification of the transition region states. The resulting phase diagram is in good agreement with the one obtained by more conventional methods and can be computed for small systems. In particular, the neural network outperforms conventional methods in classifying individual eigenstates pertaining to a single disorder realization. It allows us to map out the structure of these eigenstates across the transition with spatial resolution. Furthermore, we analyze the network operation using the dreaming technique to show that the neural network correctly learns by itself the power-law structure of the entanglement spectra in the many-body localized regime.

  16. Adaptive Filtering Using Recurrent Neural Networks

    Science.gov (United States)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum- variance filters. In that they do not require statistical models of noise, the neural- network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
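    For contrast with the neural filter, the standard Kalman filter that the method generalizes can be sketched in its simplest scalar form (static state, Gaussian measurement noise; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
true_state = 3.0                 # constant state to estimate
R = 0.5**2                       # measurement-noise variance
x_hat, P = 0.0, 1.0              # initial estimate and its variance

for _ in range(200):
    z = true_state + rng.normal(scale=0.5)   # noisy measurement
    K = P / (P + R)                          # Kalman gain
    x_hat += K * (z - x_hat)                 # measurement update
    P *= (1 - K)                             # variance update
```

    The recurrent-network approach replaces the linear, Gaussian assumptions behind the gain K with a learned nonlinear update.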

  17. Characterization of Radar Signals Using Neural Networks

    Science.gov (United States)

    1990-12-01

    [Extraction fragments from the report's code appendix and bibliography. Legible portions include function headers ("load.input.patterns: This function determines whether..."; "XSE.last.layer: The function determines whether to backpropagate the parameter by the sigmoidal or linear update") and two references: "Sigmoidal Function," Mathematics of Control, Signals and Systems, 2:303-314 (March 1989); Dayhoff, Judith E. Neural Network Architectures. New York: Van...]

  18. Development of neural network simulating power distribution of a BWR fuel bundle

    International Nuclear Information System (INIS)

    Tanabe, A.; Yamamoto, T.; Shinfuku, K.; Nakamae, T.

    1992-01-01

    A neural network model is developed to simulate a precise nuclear physics analysis code for quick scoping survey calculations. The relation between enrichment and local power distribution of BWR fuel bundles was learned using a two-layer neural network (ENET). A new model introduces a burnable neutron absorber (gadolinia), added to several fuel rods to decrease the initial reactivity of a fresh bundle. A second-stage three-layer neural network (GNET) is added on top of the first-stage network ENET; GNET learns the local power distribution difference caused by gadolinia. Using this method, it becomes possible to survey the gradients of the sigmoid functions and the back-propagation constants in reasonable time. Using 99 learning patterns at zero burnup, a good error convergence curve is obtained after many trials. The neural network model is able to simulate unlearned cases fairly well, in addition to the learned cases. The computing time of this neural network model is about 100 times shorter than that of the precise analysis model. (author)

  19. Robustness analysis of uncertain dynamical neural networks with multiple time delays.

    Science.gov (United States)

    Senan, Sibel

    2015-10-01

    This paper studies the problem of global robust asymptotic stability of the equilibrium point for the class of dynamical neural networks with multiple time delays with respect to the class of slope-bounded activation functions and in the presence of the uncertainties of system parameters of the considered neural network model. By using an appropriate Lyapunov functional and exploiting the properties of the homeomorphism mapping theorem, we derive a new sufficient condition for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for the class of neural networks with multiple time delays. The obtained stability condition basically relies on testing some relationships imposed on the interconnection matrices of the neural system, which can be easily verified by using some certain properties of matrices. An instructive numerical example is also given to illustrate the applicability of our result and show the advantages of this new condition over the previously reported corresponding results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed.

  1. Neural Networks and Micromechanics

    Science.gov (United States)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proven methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low-cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  2. A novel delay-dependent criterion for delayed neural networks of neutral type

    International Nuclear Information System (INIS)

    Lee, S.M.; Kwon, O.M.; Park, Ju H.

    2010-01-01

    This Letter considers a robust stability analysis method for delayed neural networks of neutral type. By constructing a new Lyapunov functional, a novel delay-dependent criterion for the stability is derived in terms of LMIs (linear matrix inequalities). A less conservative stability criterion is derived by using nonlinear properties of the activation function of the neural networks. Two numerical examples are illustrated to show the effectiveness of the proposed method.

  3. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by the current and a former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  4. Design Of the Approximation Function of a Pedometer based on Artificial Neural Network for the Healthy Life Style Promotion in Diabetic Patients

    OpenAIRE

    Vega Corona, Antonio; Zárate Banda, Magdalena; Barron Adame, Jose Miguel; Martínez Celorio, René Alfredo; Andina de la Fuente, Diego

    2008-01-01

    The present study describes the design of an Artificial Neural Network to synthesize the Approximation Function of a Pedometer for the Healthy Life Style Promotion. Experimentally, the approximation function is synthesized using three basic digital pedometers of low cost, these pedometers were calibrated with an advanced pedometer that calculates calories consumed and computes distance travelled with personal stride input. The synthesized approximation function by means of the designed neural...

  5. Towards Finding the Global Minimum of the D-Wave Objective Function for Improved Neural Network Regressions

    Science.gov (United States)

    Dorband, J. E.

    2017-12-01

    The D-Wave 2X has successfully been used for regression analysis to derive carbon flux data from OCO-2 CO2 concentrations using neural networks. The samples returned from the D-Wave should represent the minimum of an objective function presented to it. A minimum function value that is as accurate as possible is needed for this analysis. Samples from the D-Wave are near the minimum, but are seldom the global minimum of the function due to quantum noise. Two methods for improving the accuracy of the minimized values represented by the samples returned from the D-Wave are presented. The first method finds a new sample with a minimum value near each returned D-Wave sample. The second method uses all the returned samples to find a more global minimum sample. We present three use cases performed using the former method. In the first use case, it is demonstrated that an objective function with random qubit and coupler coefficients had an improved minimum. In the second use case, the samples corrected by the first method improve the training of a Boltzmann machine neural network. The third use case demonstrates that the first method can improve virtual qubit accuracy. The latter method was also applied to the first use case.
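
The first method described above, finding a new sample with a lower objective value near each returned sample, can be illustrated by a greedy single-spin-flip descent on an Ising objective E(s) = sᵀJs + hᵀs. The coupler matrix J, the field h, and the "returned sample" below are random placeholders standing in for real D-Wave output.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
J = np.triu(rng.normal(size=(n, n)), 1)   # upper-triangular couplers
h = rng.normal(size=n)                    # local fields

def energy(s):
    return s @ J @ s + h @ s

def polish(s):
    """Greedily flip single spins while any flip lowers the energy."""
    s = s.copy()
    improved = True
    while improved:
        improved = False
        for i in range(n):
            cand = s.copy()
            cand[i] = -cand[i]            # trial flip of spin i
            if energy(cand) < energy(s):
                s, improved = cand, True
    return s

sample = rng.choice([-1, 1], size=n)      # stand-in for a D-Wave sample
better = polish(sample)
```

The polished sample is a local minimum under single flips, so it is at least as good as the raw sample; it still need not be the global minimum, which is the limitation the record discusses.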

  6. Stability analysis for stochastic BAM nonlinear neural network with delays

    Science.gov (United States)

    Lv, Z. W.; Shu, H. S.; Wei, G. L.

    2008-02-01

    In this paper, stochastic bidirectional associative memory neural networks with constant or time-varying delays are considered. Based on a Lyapunov-Krasovskii functional and stochastic stability analysis theory, we derive several sufficient conditions that guarantee global asymptotic stability in the mean square. Our investigation shows that the stochastic bidirectional associative memory neural networks are globally asymptotically stable in the mean square if there are solutions to some linear matrix inequalities (LMIs). Hence, the global asymptotic stability of the stochastic bidirectional associative memory neural networks can be easily checked by the Matlab LMI toolbox. A numerical example is given to demonstrate the usefulness of the proposed global asymptotic stability criteria.

  7. Stability analysis for stochastic BAM nonlinear neural network with delays

    International Nuclear Information System (INIS)

    Lv, Z W; Shu, H S; Wei, G L

    2008-01-01

    In this paper, stochastic bidirectional associative memory neural networks with constant or time-varying delays are considered. Based on a Lyapunov-Krasovskii functional and stochastic stability analysis theory, we derive several sufficient conditions that guarantee global asymptotic stability in the mean square. Our investigation shows that the stochastic bidirectional associative memory neural networks are globally asymptotically stable in the mean square if there are solutions to some linear matrix inequalities (LMIs). Hence, the global asymptotic stability of the stochastic bidirectional associative memory neural networks can be easily checked by the Matlab LMI toolbox. A numerical example is given to demonstrate the usefulness of the proposed global asymptotic stability criteria

  8. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

    Full Text Available In the last decade we can observe an increasing number of applications based on Artificial Intelligence that are designed to solve problems from different areas of human activity. The reason why there is so much interest in these technologies is that classical solution methods either do not exist or are not suitable because they lack robustness. These technologies are often used in applications like Business Intelligence that make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is relative simplicity and the possibility of self-learning based on a set of pattern situations. The most commonly used algorithm for the learning phase is back-propagation of error (BPE). The basis of BPE is the minimization of an error function representing the sum of squared errors on the outputs of the neural net, over all patterns of the learning set. However, when performing BPE for the first time, one finds that the handling of the learning factor must be completed by a suitable method. The stability of the learning process and the rate of convergence depend on the selected method. In the article two functions are derived: one function for managing the learning process when the error function value is relatively large, and a second function for when the value of the error function approaches the global minimum. The aim of the article is to introduce the BPE algorithm in compact matrix form for multilayer neural networks, to derive the learning factor handling method, and to present the results.
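
The general idea of managing the learning factor during error minimization can be sketched with a classic "bold driver" rule: grow the factor while the error keeps falling, and shrink it (undoing the failed step) when the error rises. This is an illustrative scheme on a linear single-layer net, not the two specific management functions derived in the article.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true                            # noiseless targets for clarity

w = np.zeros(3)
w_prev = w.copy()
lr, prev = 1e-3, np.inf
for _ in range(200):
    E = 0.5 * np.sum((X @ w - y) ** 2)    # sum-of-squared-errors function
    if E <= prev:
        w_prev, prev = w.copy(), E
        lr *= 1.05                        # error fell: grow the learning factor
    else:
        w = w_prev.copy()                 # error rose: undo the step
        lr *= 0.5                         # ... and shrink the factor
    w = w - lr * (X.T @ (X @ w - y))      # plain gradient (BPE) step
```

The accepted error sequence is monotone non-increasing by construction, which is the stability property the abstract attributes to a well-chosen learning-factor method.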

  9. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for parameterization of global solar radiation. The available data from twenty-one stations are used for training the neural network and the data from ten other stations are used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data were not used in the training, in order to demonstrate the performance of the neural network at unknown stations in parameterizing solar radiation. The results indicate a good agreement between the parameterized solar radiation values and the actual measured values

  10. Identification of Complex Dynamical Systems with Neural Networks (2/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with 3 layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical systems in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parall...

  11. Identification of Complex Dynamical Systems with Neural Networks (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with 3 layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical systems in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parall...

  12. Application of artificial neural networks in particle physics

    International Nuclear Information System (INIS)

    Kolanoski, H.

    1995-04-01

    The application of Artificial Neural Networks in Particle Physics is reviewed. Most common is the use of feed-forward nets for event classification and function approximation. This network type is best suited for a hardware implementation and special VLSI chips are available which are used in fast trigger processors. Also discussed are fully connected networks of the Hopfield type for pattern recognition in tracking detectors. (orig.)

  13. Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

    Directory of Open Access Journals (Sweden)

    Yih-Lon Lin

    2013-01-01

    Full Text Available If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the given Boolean function is linearly separable but has a small geometric margin, or if it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing the given Boolean function. In past research using this approach, the control template parameters and thresholds were restricted to a given finite set of integers, which is unnecessary for the template design. In this study, we remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.
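
The linear-separability question that drives the decomposition above can be checked directly: a Boolean function is realizable by a single uncoupled cell (or a single maximal-margin classifier) only if it is linearly separable. The sketch below uses a plain perceptron on ±1-coded inputs as a separability test, not the paper's SVM machinery; by the perceptron convergence theorem it finds a separating template when one exists.

```python
import numpy as np
from itertools import product

def linearly_separable(truth_table, max_epochs=100):
    """Perceptron test for 2-input Boolean functions in ±1 coding."""
    X = np.array([(x1, x2, 1.0) for x1, x2 in product([-1, 1], repeat=2)])
    y = np.array(truth_table, dtype=float)
    w = np.zeros(3)                       # weights + threshold
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:        # misclassified: perceptron update
                w += yi * xi
                errors += 1
        if errors == 0:
            return True
    return False

sep_and = linearly_separable([-1, -1, -1, 1])   # AND is separable
sep_xor = linearly_separable([-1, 1, 1, -1])    # XOR is not
```

XOR is the standard non-separable case and is exactly the kind of function that forces a sequence of templates rather than a single one.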

  14. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  15. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  16. Functional MRI studies of the neural mechanisms of human brain attentional networks

    International Nuclear Information System (INIS)

    Hao Jing; Li Kuncheng; Chen Qi; Wang Yan; Peng Xiaozhe; Zhou Xiaolin

    2005-01-01

    Objective: To identify the neural mechanisms of the anterior attention network (AAN) and posterior attention network (PAN), and to investigate the possible interaction between them with event-related functional MRI (ER-fMRI). Methods: Eight right-handed healthy volunteers participated in the experiment, designed around inhibition of return (IOR) in visual orienting and the Stroop color-word interference effect. The fMRI data were collected on a Siemens 1.5 T Sonata MRI system and analyzed by AFNI to generate the activation map. Results: The data sets from 6 of 8 subjects were used in the study. The functional localizations of the Stroop and IOR effects, which manifest the function of the AAN and PAN respectively, were consistent with previous imaging research. On cued locations, the left inferior parietal lobule (IPL), area MT/V5, right dorsolateral prefrontal cortex (DLPFC) and left anterior cingulate cortex (ACC) were significantly activated. On uncued locations, the right superior parietal lobule (SPL) and bilateral area MT/V5 were significantly activated. Conclusion: The AAN exerts control over the PAN, while its function can in turn be modulated by the PAN; there are interactions between the AAN and PAN. In addition, it is also shown that ER-fMRI is a feasible method for revising preexisting cognitive models and theory. (authors)

  17. Approximate solutions of dual fuzzy polynomials by feed-back neural networks

    Directory of Open Access Journals (Sweden)

    Ahmad Jafarian

    2012-11-01

    Full Text Available Recently, artificial neural networks (ANNs) have been extensively studied and used in different areas such as pattern recognition, associative memory, combinatorial optimization, etc. In this paper, we investigate the ability of fuzzy neural networks to approximate the solution of a dual fuzzy polynomial of the form $a_{1}x+\cdots+a_{n}x^n = b_{1}x+\cdots+b_{n}x^n+d$, where $a_{j}, b_{j}, d \in E^1$ (for $j=1,\ldots,n$). The operation of fuzzy neural networks is based on Zadeh's extension principle. For this purpose we train a fuzzified neural network with a back-propagation-type learning algorithm which has five layers and whose connection weights are crisp numbers. This neural network takes a crisp input signal and then calculates its corresponding fuzzy output. The presented method can give a real approximate solution for a given polynomial by using a cost function which is defined on the level sets of the fuzzy output and the target output. Simulation results are presented to demonstrate the efficiency and effectiveness of the proposed approach.

  18. An improved superconducting neural circuit and its application for a neural network solving a combinatorial optimization problem

    International Nuclear Information System (INIS)

    Onomi, T; Nakajima, K

    2014-01-01

    We have proposed a superconducting Hopfield-type neural network for solving the N-Queens problem, which is one of the combinatorial optimization problems. The sigmoid-shaped function of a neuron output is represented by the output of a coupled-SQUIDs gate consisting of a single-junction and a double-junction SQUID. One of the important factors for improving the network performance is improving the threshold characteristic of the neuron circuit. In this paper, we report an improved design of coupled-SQUID gates for a superconducting neural network. A step-like function with a steep threshold at the rising edge is desirable for a neuron circuit that is to solve a combinatorial optimization problem. A neuron circuit is therefore composed of two coupled-SQUIDs gates in a cascade connection in order to obtain such characteristics. The designed neuron circuit is fabricated in a 2.5 kA/cm² Nb/AlOx/Nb process, and its operation is experimentally demonstrated. Moreover, we discuss the performance of the neural network using the improved neuron circuits and delayed negative self-connections.

  19. Robust fixed-time synchronization for uncertain complex-valued neural networks with discontinuous activation functions.

    Science.gov (United States)

    Ding, Xiaoshuai; Cao, Jinde; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar

    2017-06-01

    This paper is concerned with the fixed-time synchronization for a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization not only claims that the considered master-slave system realizes synchronization within a finite time segment, but also requires a uniform upper bound for such time intervals for all initial synchronization errors. To accomplish the target of fixed-time synchronization, a novel feedback control procedure is designed for the slave neural networks. By means of the Filippov discontinuity theories and Lyapunov stability theories, some sufficient conditions are established for the selection of control parameters to guarantee synchronization within a fixed time, while an upper bound of the settling time is acquired as well, which can be modulated to predefined values independently of initial conditions. Additionally, criteria of a modified controller for assurance of fixed-time anti-synchronization are also derived for the same system. An example is included to illustrate the proposed methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    Science.gov (United States)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
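
The simulation pipeline described above can be reproduced in miniature: the matrix ODE dX/dt = -γ Aᵀ(AX − I) (the gradient-based network with a linear activation function) is flattened to a vector ODE, which is the role the Kronecker-product step plays, and handed to SciPy's `solve_ivp` as a stand-in for MATLAB's "ode45". The matrix A and gain γ are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
gamma = 10.0                              # design parameter of the network
n = A.shape[0]

def rhs(t, xvec):
    """Vectorized form of the matrix ODE dX/dt = -gamma * A^T (A X - I)."""
    X = xvec.reshape(n, n)
    return (-gamma * A.T @ (A @ X - np.eye(n))).ravel()

sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(n * n), rtol=1e-8, atol=1e-10)
X_inf = sol.y[:, -1].reshape(n, n)        # network state at the final time
```

Because AᵀA is positive definite here, the state converges exponentially to A⁻¹, matching the theoretical analysis the record cites; different activation functions would change the transient, not this equilibrium.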

  1. Modeling and prediction of Turkey's electricity consumption using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Kavaklioglu, Kadir; Ozturk, Harun Kemal; Canyurt, Olcay Ersel; Ceylan, Halim

    2009-01-01

    Artificial Neural Networks are proposed to model and predict the electricity consumption of Turkey. A multilayer perceptron with the backpropagation training algorithm is used as the neural network topology. Tangent-sigmoid and pure-linear transfer functions are selected for the hidden and output layer processing elements, respectively. These input-output network models are a result of relationships that exist among electricity consumption and several other socioeconomic variables. Electricity consumption is modeled as a function of economic indicators such as population, gross national product, imports and exports. It is also modeled using the export-import ratio and time input only. Performance comparison among different models is made based on absolute and percentage mean square error. Electricity consumption of Turkey is predicted until 2027 using data from 1975 to 2006 along with the other economic indicators. The results show that electricity consumption can be modeled using Artificial Neural Networks, and the models can be used to predict future electricity consumption. (author)

  2. Neural networks at the Tevatron

    International Nuclear Information System (INIS)

    Badgett, W.; Burkett, K.; Campbell, M.K.; Wu, D.Y.; Bianchin, S.; DeNardi, M.; Pauletta, G.; Santi, L.; Caner, A.; Denby, B.; Haggerty, H.; Lindsey, C.S.; Wainer, N.; Dall'Agata, M.; Johns, K.; Dickson, M.; Stanco, L.; Wyss, J.L.

    1992-10-01

    This paper summarizes neural network applications at the Fermilab Tevatron, including the first online hardware application in high energy physics (muon tracking): the CDF and D0 neural network triggers; offline quark/gluon discrimination at CDF; and a new tool for top-to-multijets recognition at CDF

  3. Practical Application of Neural Networks in State Space Control

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon

    In the present thesis we address some problems in discrete-time state space control of nonlinear dynamical systems and attempt to solve them using generic nonlinear models based on artificial neural networks. The main aim of the work is to examine how well such control algorithms perform when...... theoretic notions followed by a detailed description of the topology, neuron functions and learning rules of the two types of neural networks treated in the thesis, the multilayer perceptron and the neurofuzzy networks. In both cases, a Least Squares second-order gradient method is used to train the networks, although some modifications are needed for the method to apply to the multilayer perceptron network. In connection with the multilayer perceptron networks it is also pointed out how instantaneous, sample-by-sample linearized state space models can be extracted from a trained network, thus opening......

  4. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  5. A class of convergent neural network dynamics

    Science.gov (United States)

    Fiedler, Bernold; Gedeon, Tomáš

    1998-01-01

    We consider a class of systems of differential equations in R^n which exhibits convergent dynamics. We find a Lyapunov function and show that every bounded trajectory converges to the set of equilibria. Our result generalizes the results of Cohen and Grossberg (1983) for convergent neural networks. It replaces the symmetry assumption on the matrix of weights by an assumption on the structure of the connections in the neural network. We prove the convergence result also for a large class of Lotka-Volterra systems. These are naturally defined on the closed positive orthant. We show that there are no heteroclinic cycles on the boundary of the positive orthant for the systems in this class.
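
The convergence mechanism generalized above can be seen numerically in the classical symmetric case the record starts from: for Hopfield dynamics with a symmetric weight matrix, the Cohen-Grossberg energy decreases along trajectories, so every bounded trajectory settles at an equilibrium. The network size, weights and time step below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                         # symmetric weights (classical case)
b = rng.normal(size=n)

def energy(u):
    """Hopfield energy for tanh neurons; note ∫0^v tanh⁻¹ = u·v − log cosh u."""
    v = np.tanh(u)
    integral = np.sum(u * v - np.log(np.cosh(u)))
    return -0.5 * v @ W @ v - b @ v + integral

u = rng.normal(size=n)                    # initial membrane potentials
E0 = energy(u)
dt = 0.01
energies = []
for _ in range(2000):
    v = np.tanh(u)
    u = u + dt * (-u + W @ v + b)         # Euler step of du/dt = -u + Wv + b
    energies.append(energy(u))
```

Along the trajectory the recorded energies settle monotonically (up to integration error) to a limit, illustrating the Lyapunov-function argument; the cited result is precisely about keeping this conclusion while dropping the symmetry of W.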

  6. Knowledge-based approach for functional MRI analysis by SOM neural network using prior labels from Talairach stereotaxic space

    Science.gov (United States)

    Erberich, Stephan G.; Willmes, Klaus; Thron, Armin; Oberschelp, Walter; Huang, H. K.

    2002-04-01

    Among the methods proposed for the analysis of functional MR we have previously introduced a model-independent analysis based on the self-organizing map (SOM) neural network technique. The SOM neural network can be trained to identify the temporal patterns in voxel time-series of individual functional MRI (fMRI) experiments. The separated classes consist of activation, deactivation and baseline patterns corresponding to the task-paradigm. While the classification capability of the SOM is not only based on the distinctness of the patterns themselves but also on their frequency of occurrence in the training set, a weighting or selection of voxels of interest should be considered prior to the training of the neural network to improve pattern learning. Weighting of interesting voxels by means of autocorrelation or F-test significance levels has been used successfully, but still a large number of baseline voxels is included in the training. The purpose of this approach is to avoid the inclusion of these voxels by using three different levels of segmentation and mapping from Talairach space: (1) voxel partitions at the lobe level, (2) voxel partitions at the gyrus level and (3) voxel partitions at the cell level (Brodmann areas). The results of the SOM classification based on these mapping levels in comparison to training with all brain voxels are presented in this paper.
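
The SOM training step at the heart of the analysis above can be sketched compactly: each training vector pulls its best-matching unit, and that unit's grid neighbours, toward itself. The "time courses" below are synthetic activation/deactivation/baseline patterns rather than fMRI voxels, and the map size and annealing schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 10                                     # time points per "voxel"
protos = np.array([np.linspace(0, 1, T),   # activation-like pattern
                   np.linspace(1, 0, T),   # deactivation-like pattern
                   np.zeros(T)])           # baseline
data = np.repeat(protos, 50, axis=0) + 0.05 * rng.standard_normal((150, T))

k = 6                                      # units on a 1-D map
weights = rng.normal(scale=0.1, size=(k, T))
grid = np.arange(k)

def quantization_error(w):
    d2 = ((data[:, None, :] - w[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

qe_before = quantization_error(weights)
epochs = 30
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)                      # shrinking step size
    sigma = max(0.1, 2.0 * (1 - epoch / epochs))         # shrinking radius
    for x in data[rng.permutation(len(data))]:
        bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)       # pull BMU + neighbours
qe_after = quantization_error(weights)
```

After training, distinct map units specialize to the distinct temporal patterns, which is the separation into activation, deactivation and baseline classes the record describes; restricting `data` to anatomically selected voxels is exactly the weighting step the authors propose.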

  7. Periodic bidirectional associative memory neural networks with distributed delays

    Science.gov (United States)

    Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde

    2006-05-01

    Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to the general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method and Young's inequality technique. These results are helpful for designing a globally exponentially stable and periodically oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.

  8. Probabilistic neural network algorithm for using radon emanations as an earthquake precursor

    International Nuclear Information System (INIS)

    Gupta, Dhawal; Shahani, D.T.

    2014-01-01

    Investigations throughout the world over the past two decades provide evidence indicating that significant variations of radon and other soil gases occur in association with major geophysical events such as earthquakes. Traditional statistical algorithms use regression to remove the effect of meteorological parameters from the raw radon signal, and anomalies are calculated either from the periodicity of seasonal variations or from a periodicity computed using the Fast Fourier Transform. With neural networks the regression step is avoided: a neural network model can be found which learns the behavior of radon with respect to meteorological parameters, so that changing emission patterns can be adapted to by the model on its own. The output of this neural model is the estimated radon value, which is used to decide whether anomalous behavior of radon has occurred and a valid precursor may be identified. A neural network model developed using a Radial Basis Function (RBF) network gave a prediction rate of 87.7%, but was accompanied by a large number of false alarms. The present paper deals with an improved neural network algorithm using Probabilistic Neural Networks that requires neither an explicit regression step nor the use of any specific period. This neural network model reduces the false alarms to zero while giving the same prediction rate as the RBF network. (author)
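
    A sketch of the probabilistic neural network (PNN) idea in Specht's Parzen-window form, applied to hypothetical two-feature residuals (the feature choice and data are illustrative assumptions, not the study's):

```python
import numpy as np

def pnn_classify(x, classes, sigma=0.5):
    """Probabilistic neural network: one Gaussian Parzen kernel per training
    example, averaged per class; pick the class with the largest estimated
    density (Specht's PNN with a single smoothing parameter sigma)."""
    scores = []
    for examples in classes:
        d2 = np.sum((examples - x) ** 2, axis=1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
# Hypothetical features: (radon residual, temperature residual) after
# removing meteorological effects -- normal days vs precursor-like days.
normal    = rng.normal([0.0, 0.0], 0.3, size=(100, 2))
anomalous = rng.normal([2.0, 0.0], 0.3, size=(20, 2))

c_normal = pnn_classify(np.array([0.1, -0.1]), [normal, anomalous])  # 0 (normal)
c_anom   = pnn_classify(np.array([2.1,  0.2]), [normal, anomalous])  # 1 (anomalous)
print(c_normal, c_anom)
```

    Because the PNN stores the training examples directly, no regression step or explicit periodicity is needed, matching the motivation stated in the abstract.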

  9. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with comprehensive knowledge of the basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks are discussed in detail through illustrative examples, methods and generic applications. The extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank.

  10. Adiabatic superconducting cells for ultra-low-power artificial neural networks

    Directory of Open Access Journals (Sweden)

    Andrey E. Schegolev

    2016-10-01

    Full Text Available We propose the concept of using superconducting quantum interferometers for the implementation of neural network algorithms with extremely low power dissipation. These adiabatic elements are Josephson cells with sigmoid- and Gaussian-like activation functions. We optimize their parameters for application in three-layer perceptron and radial basis function networks.

  11. Advanced approach to numerical forecasting using artificial neural networks

    Directory of Open Access Journals (Sweden)

    Michael Štencl

    2009-01-01

    Full Text Available The current global market is driven by many factors, such as the information age; given the time pressure and the amount of information distributed over many data channels, it is practically impossible to analyze all kinds of incoming information flows and transform them into data with classical methods. New requirements can be met by using other methods. Once trained on patterns, artificial neural networks can be used for forecasting, and they are able to work with extremely big data sets in reasonable time. The patterns used for the learning process are samples of past data. This paper compares a Radial Basis Function neural network with a Multi-Layer Perceptron network using the back-propagation learning algorithm on a prediction task. The task works with a simplified numerical time series and includes forty observations with a prediction for the next five observations. The main topic of the article is the identification of the main differences between the neural network architectures used, together with numerical forecasting. The detected differences are then verified on a practical comparative example.
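
    A compact sketch of the RBF side of such a comparison under the stated setup (forty observations, forecast of the next five); the series itself and all hyperparameters are hypothetical illustrations:

```python
import numpy as np

# Simplified numerical time series: 45 points, train on the first 40,
# forecast the next 5 (the series choice is a hypothetical illustration).
t = np.arange(45)
series = np.sin(0.3 * t)

lag = 3
X = np.array([series[i:i + lag] for i in range(40 - lag)])
y = series[lag:40]

# Radial basis function network: Gaussian hidden layer with fixed centers
# (the training inputs), linear output weights fit by least squares.
centers, width = X, 1.0
def hidden(inp):
    d2 = np.sum((inp[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * width ** 2))

w, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)

# Iterated one-step-ahead forecast for the next five observations.
window = list(series[37:40])
preds = []
for _ in range(5):
    p = (hidden(np.array([window[-lag:]])) @ w).item()
    preds.append(p)
    window.append(p)

err = np.max(np.abs(np.array(preds) - series[40:45]))
print(err)  # small multi-step forecast error
```

    An MLP with back-propagation would replace the `hidden`/`lstsq` pair with iterative gradient training, which is the architectural difference the article examines.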

  12. Evolving RBF neural networks for adaptive soft-sensor design.

    Science.gov (United States)

    Alexandridis, Alex

    2013-12-01

    This work presents an adaptive framework for building soft-sensors based on radial basis function (RBF) neural network models. The adaptive fuzzy means algorithm is utilized in order to evolve an RBF network, which approximates the unknown system based on input-output data from it. The methodology gradually builds the RBF network model, based on two separate levels of adaptation: On the first level, the structure of the hidden layer is modified by adding or deleting RBF centers, while on the second level, the synaptic weights are adjusted with the recursive least squares with exponential forgetting algorithm. The proposed approach is tested on two different systems, namely a simulated nonlinear DC Motor and a real industrial reactor. The results show that the produced soft-sensors can be successfully applied to model the two nonlinear systems. A comparison with two different adaptive modeling techniques, namely a dynamic evolving neural-fuzzy inference system (DENFIS) and neural networks trained with online backpropagation, highlights the advantages of the proposed methodology.

  13. PID Neural Network Based Speed Control of Asynchronous Motor Using Programmable Logic Controller

    Directory of Open Access Journals (Sweden)

    MARABA, V. A.

    2011-11-01

    Full Text Available This paper deals with the structure and characteristics of a PID Neural Network controller for single-input single-output systems. The PID Neural Network is a new kind of controller that combines the advantages of artificial neural networks and the classic PID controller. It functions by updating the controller parameters according to the value extracted from the system output, pursuant to the rules of the back-propagation algorithm used in artificial neural networks. Parameters obtained by applying the PID Neural Network training algorithm to the speed model of an asynchronous motor exhibiting second-order linear behavior were used in the real-time speed control of the motor. A programmable logic controller (PLC) was used as the real-time controller. The real-time control results show that the reference speed is successfully maintained under various load conditions.

  14. Global asymptotical ω-periodicity of a fractional-order non-autonomous neural networks.

    Science.gov (United States)

    Chen, Boshan; Chen, Jiejie

    2015-08-01

    We study the global asymptotic ω-periodicity of fractional-order non-autonomous neural networks. Firstly, based on the Caputo fractional-order derivative, it is shown that ω-periodic or autonomous fractional-order neural networks cannot generate exactly ω-periodic signals. Next, by using the contraction mapping principle, we discuss the existence and uniqueness of the S-asymptotically ω-periodic solution for a class of fractional-order non-autonomous neural networks. Then, by using a fractional-order differential and integral inequality technique, we study the global Mittag-Leffler stability and global asymptotical periodicity of the fractional-order non-autonomous neural networks. This shows that all paths of the networks, starting from arbitrary points and responding to persistent, nonconstant ω-periodic external inputs, asymptotically converge to the same nonconstant ω-periodic function, which may not be a solution. Copyright © 2015 Elsevier Ltd. All rights reserved.
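
    For reference, the standard definition behind the abstract's key notion (taken from the general literature, not quoted from the paper): a bounded continuous function x is S-asymptotically ω-periodic if

```latex
\lim_{t\to\infty}\bigl\lVert x(t+\omega) - x(t) \bigr\rVert = 0 .
```

    Such a function need not be periodic itself; it only becomes ω-periodic in the limit, which is consistent with the abstract's remark that the limiting ω-periodic function may not be a solution.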

  15. A neural network approach to the orienteering problem

    Energy Technology Data Exchange (ETDEWEB)

    Golden, B.; Wang, Q.; Sun, X.; Jia, J.

    1994-12-31

    In the orienteering problem, we are given a transportation network in which a start point and an end point are specified, and the other points have associated scores. Given a fixed amount of time, the goal is to determine a path from start to end through a subset of locations that maximizes the total path score. This problem has received a considerable amount of attention in the last ten years. The orienteering problem is a variant of the TSP. This paper applies a modified, continuous Hopfield neural network to attack this NP-hard optimization problem, designing an effective energy function and learning algorithm. Unlike some applications of neural networks to optimization problems, this approach is shown to perform quite well.
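
    The record does not reproduce the paper's energy function; as orientation, a continuous Hopfield network descends an energy of the generic form

```latex
E = -\tfrac{1}{2}\sum_{i}\sum_{j} w_{ij}\, v_i v_j - \sum_{i} \theta_i v_i ,
```

    where the \(v_i\) are neuron outputs. For the orienteering problem, the weights \(w_{ij}\) and biases \(\theta_i\) would encode penalty terms for path feasibility, the time budget, and the collected scores; the specific terms used in the paper are not given in this record.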

  16. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator.

    Science.gov (United States)

    Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy

    2013-01-01

    The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.

  17. Synchronization of chaotic neural networks via output or state coupling

    International Nuclear Information System (INIS)

    Lu Hongtao; Leeuwen, C. van

    2006-01-01

    We consider the problem of global exponential synchronization between two identical chaotic neural networks that are linearly and unidirectionally coupled. We formulate a general framework for the synchronization problem in which one chaotic neural network, working as the driving system (or master), sends its output or state values to the other, which serves as the response system (or slave). We use Lyapunov functions to establish general theoretical conditions for designing the coupling matrix. Neither symmetry nor negative (positive) definiteness of the coupling matrix is required; under less restrictive conditions, the two coupled chaotic neural networks can achieve global exponential synchronization regardless of their initial states. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.
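
    A generic master-slave formulation consistent with the abstract (a sketch, not the paper's exact equations) couples the drive state \(x\) and response state \(y\) as

```latex
\dot{x} = f(x), \qquad \dot{y} = f(y) + C\,(x - y),
```

    for state coupling, or \(\dot{y} = f(y) + C\,(g(x) - g(y))\) for output coupling, where the coupling matrix \(C\) is designed through a Lyapunov function so that the error \(e = x - y\) decays exponentially to zero.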

  18. Neural Network Predictive Control for Vanadium Redox Flow Battery

    Directory of Open Access Journals (Sweden)

    Hai-Feng Shen

    2013-01-01

    Full Text Available The vanadium redox flow battery (VRB is a nonlinear system with unknown dynamics and disturbances. The flowrate of the electrolyte is an important control mechanism in the operation of a VRB system, and too low or too high a flowrate is unfavorable for the safety and performance of the VRB. This paper presents a neural network predictive control scheme to enhance the overall performance of the battery. A radial basis function (RBF network is employed to approximate the dynamics of the VRB system. The genetic algorithm (GA is used to obtain the optimum initial values of the RBF network parameters, and the gradient descent algorithm is used to optimize the objective function of the predictive controller. Compared with a constant flowrate, the simulation results show that the flowrate optimized by the neural network predictive controller can increase the power delivered by the battery during discharge and decrease the power consumed during charge.

  19. Global asymptotic stability to a generalized Cohen-Grossberg BAM neural networks of neutral type delays.

    Science.gov (United States)

    Zhang, Zhengqiu; Liu, Wenbin; Zhou, Dongming

    2012-01-01

    In this paper, we first discuss the existence of a unique equilibrium point of generalized Cohen-Grossberg BAM neural networks with neutral-type delays by means of homeomorphism theory and an inequality technique. Then, by applying the existence result for an equilibrium point and constructing a Lyapunov functional, we study the global asymptotic stability of the equilibrium solution of the above Cohen-Grossberg BAM neural networks of neutral type. In our results, the boundedness hypotheses on the activation functions assumed in existing papers on Cohen-Grossberg neural networks of neutral type are removed. Finally, we give an example to demonstrate the validity of our global asymptotic stability result for the above neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Dynamic analysis of stochastic bidirectional associative memory neural networks with delays

    International Nuclear Information System (INIS)

    Zhao Hongyong; Ding Nan

    2007-01-01

    In this paper, a stochastic bidirectional associative memory neural network model with delays is considered. By constructing Lyapunov functionals and using the stochastic analysis method and an inequality technique, we give some sufficient criteria ensuring almost sure exponential stability, pth moment exponential stability and mean value exponential stability. The obtained criteria can be used as theoretical guidance to stabilize neural networks in practical applications when stochastic noise is taken into consideration.

  1. Hierarchical modular structure enhances the robustness of self-organized criticality in neural networks

    International Nuclear Information System (INIS)

    Wang Shengjun; Zhou Changsong

    2012-01-01

    One of the most prominent architecture properties of neural networks in the brain is the hierarchical modular structure. How does the structure property constrain or improve brain function? It is thought that operating near criticality can be beneficial for brain function. Here, we find that networks with modular structure can extend the parameter region of coupling strength over which critical states are reached compared to non-modular networks. Moreover, we find that one aspect of network function—dynamical range—is highest for the same parameter region. Thus, hierarchical modularity enhances robustness of criticality as well as function. However, too much modularity constrains function by preventing the neural networks from reaching critical states, because the modular structure limits the spreading of avalanches. Our results suggest that the brain may take advantage of the hierarchical modular structure to attain criticality and enhanced function. (paper)

  2. The effect of the neural activity on topological properties of growing neural networks.

    Science.gov (United States)

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed; it is a source of the complex spatiotemporal patterns of the network's development, and the process of creation and deletion of connections continues throughout the whole life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the neural network growth process from disconnected neurons to fully connected networks. For a quantitative investigation of the influence of the network's activity on its topological properties, we compare it with a random growth network that does not depend on the network's activity. Using methods from random graph theory for the analysis of the network's connection structure, we show that growth in neural networks results in the formation of a well-known "small-world" network.
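
    The "small-world" signature can be checked with standard graph metrics; the sketch below uses a Watts-Strogatz graph as a hypothetical stand-in for the grown networks (assuming the networkx package is available):

```python
import networkx as nx

# Hypothetical stand-in graphs: the paper grows networks from an
# activity-dependent model; here a rewired ring lattice illustrates the
# "small-world" signature that such growth was shown to produce.
n, k, p = 200, 6, 0.1
sw = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
er = nx.gnm_random_graph(n, sw.number_of_edges(), seed=42)

C_sw = nx.average_clustering(sw)            # clustering of the small-world graph
C_er = nx.average_clustering(er)            # clustering of an equivalent random graph
L_sw = nx.average_shortest_path_length(sw)  # characteristic path length

# Small-world signature: clustering far above the random graph,
# while the average shortest path stays short.
print(C_sw, C_er, L_sw)
```

    A graph counts as small-world when its clustering coefficient greatly exceeds that of a degree-matched random graph while its characteristic path length remains comparably short.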

  3. On the complexity of neural network classifiers: a comparison between shallow and deep architectures.

    Science.gov (United States)

    Bianchini, Monica; Scarselli, Franco

    2014-08-01

    Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications that face very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to studying how the depth of feedforward neural networks affects their ability to implement high-complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared by deriving upper and lower bounds on their complexity and studying how the complexity depends on the number of hidden units and the activation function used. The obtained results support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.

  4. Enhancing neural-network performance via assortativity

    International Nuclear Information System (INIS)

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-01-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  5. Prediction of metal corrosion using feed-forward neural networks

    International Nuclear Information System (INIS)

    Mahjani, M.G.; Jalili, S.; Jafarian, M.; Jaberi, A.

    2004-01-01

    The reliable prediction of corrosion behavior is a fundamental requirement for the effective control of corrosion. Since real-world corrosion rarely involves exactly the same conditions that have previously been tested, the corrosion literature does not always provide the necessary answers. In order to provide a methodology for predicting corrosion in real and complex situations, artificial neural networks can be utilized. The feed-forward artificial neural network (FFANN) is an information-processing paradigm inspired by the way the densely interconnected, parallel structure of the human brain processes information. The aim of the present work is to predict corrosion behavior in critical conditions, such as industrial applications, based on laboratory experimental data. The electrochemical behavior of stainless steel under different conditions was studied using the polarization technique and Tafel curves. Back-propagation neural network models were developed to predict the corrosion behavior. The trained networks produce predicted values in good agreement with the experimental data and can generally be claimed to be successful in modeling the corrosion behavior. The results are presented in two tables: table 1 gives the corrosion behavior of stainless steel as a function of pH and CuSO4 concentration, and table 2 gives the corrosion behavior of stainless steel as a function of electrode surface area and CuSO4 concentration. (authors)

  6. A gentle introduction to artificial neural networks.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-10-01

    The artificial neural network (ANN) is a flexible and powerful machine learning technique. However, it is underutilized in clinical medicine because of its technical challenges. The article introduces some basic ideas behind ANNs and shows how to build an ANN using R in a step-by-step framework. In topology and function, an ANN is analogous to the human brain: input signals are transmitted from input to output nodes, weighted according to their respective importance before reaching the output nodes, and the combined signal is then processed by an activation function. I simulated a simple example to illustrate how to build a simple ANN model using the nnet() function. This function allows for one hidden layer with a varying number of units in that layer. The basic structure of the ANN can be visualized with the plug-in plot.nnet() function. The plot function is powerful in that it allows for a variety of adjustments to the appearance of the neural network. Prediction with an ANN can be performed with the predict() function, similar to that of conventional generalized linear models. Finally, the predictive power of the ANN is examined using a confusion matrix and average accuracy. It appears that the ANN is slightly better than a conventional linear model.
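
    An analogous workflow in Python (the article itself uses R's nnet(), plot.nnet() and predict(); scikit-learn is an assumed substitute here, and the data set is synthetic): fit a one-hidden-layer network, then evaluate with a confusion matrix and accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# Synthetic binary-classification data standing in for clinical features.
X, y = make_classification(n_samples=400, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One hidden layer with a small number of units, as nnet() allows.
clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

# Prediction and evaluation via confusion matrix and average accuracy.
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
print(confusion_matrix(y_te, pred))
print(acc)
```

    The confusion matrix counts true/false positives and negatives, from which the average accuracy reported in the article's evaluation step is derived.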

  7. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, the Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, were investigated while the neural network topology and other parameters were kept fixed. The early results indicate that the application of this hybrid system seems to be well suited to the forecasting of foreign exchange rates. The neural networks and Genetic Algorithm were programmed using MATLAB®.

  8. Identification-based chaos control via backstepping design using self-organizing fuzzy neural networks

    International Nuclear Information System (INIS)

    Peng Yafu; Hsu, C.-F.

    2009-01-01

    This paper proposes an identification-based adaptive backstepping control (IABC) scheme for chaotic systems. The IABC system comprises a neural backstepping controller and a robust compensation controller. The neural backstepping controller, containing a self-organizing fuzzy neural network (SOFNN) identifier, is the principal controller, and the robust compensation controller is designed to dispel the effect of the minimum approximation error introduced by the SOFNN identifier. The SOFNN identifier is used to estimate the chaotic dynamic function online, with structure and parameter learning phases. The structure learning phase consists of the growing and pruning of fuzzy rules, so the SOFNN identifier avoids the time-consuming trial-and-error tuning procedure for determining the neural structure of the fuzzy neural network. The parameter learning phase adjusts the interconnection weights of the neural network to achieve favorable approximation performance. Finally, simulation results verify that the proposed IABC can achieve favorable tracking performance.

  9. Neural Network Based Load Frequency Control for Restructuring ...

    African Journals Online (AJOL)

    Neural Network Based Load Frequency Control for Restructuring Power Industry. ... an artificial neural network (ANN) application of load frequency control (LFC) of a Multi-Area power system by using a neural network controller is presented.

  10. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision involved in a prediction is very important, because a prediction helps determine the forex value at a given time in the future and can thus reduce the risk of loss. The purpose of this research is to predict the forex business using a neural network model with one-minute time-series data, in order to determine the prediction accuracy and thereby reduce the risk of running a forex business. The research method comprises data collection followed by training, learning and testing using a neural network. After evaluation, the results of this research show that applying the neural network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk of running a forex business. Keywords: prediction, forex, neural network.

  11. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  12. Sustained NMDA receptor hypofunction induces compromised neural systems integration and schizophrenia-like alterations in functional brain networks.

    Science.gov (United States)

    Dawson, Neil; Xiao, Xiaolin; McDonald, Martin; Higham, Desmond J; Morris, Brian J; Pratt, Judith A

    2014-02-01

    Compromised functional integration between cerebral subsystems and dysfunctional brain network organization may underlie the neurocognitive deficits seen in psychiatric disorders. Applying topological measures from network science to brain imaging data allows the quantification of complex brain network connectivity. While this approach has recently been used to further elucidate the nature of brain dysfunction in schizophrenia, the value of applying this approach in preclinical models of psychiatric disease has not been recognized. For the first time, we apply both established and recently derived algorithms from network science (graph theory) to functional brain imaging data from rats treated subchronically with the N-methyl-D-aspartic acid (NMDA) receptor antagonist phencyclidine (PCP). We show that subchronic PCP treatment induces alterations in the global properties of functional brain networks akin to those reported in schizophrenia. Furthermore, we show that subchronic PCP treatment induces compromised functional integration between distributed neural systems, including between the prefrontal cortex and hippocampus, that have established roles in cognition through, in part, the promotion of thalamic dysconnectivity. We also show that subchronic PCP treatment promotes the functional disintegration of discrete cerebral subsystems and also alters the connectivity of neurotransmitter systems strongly implicated in schizophrenia. Therefore, we propose that sustained NMDA receptor hypofunction contributes to the pathophysiology of dysfunctional brain network organization in schizophrenia.

  13. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  14. Neural network CT image reconstruction method for small amount of projection data

    International Nuclear Information System (INIS)

    Ma, X.F.; Fukuhara, M.; Takeda, T.

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. Whereas the conventionally used objective function for such a neural network is composed of a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available, in comparison with the well-developed medical applications
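
    A toy version of the key idea, with the sum of squared residuals of the line-integral equation as the objective (plain gradient descent over pixel values stands in for the network training; the geometry, an 8x8 phantom with row and column rays, is a hypothetical illustration):

```python
import numpy as np

# Tiny 2-D phantom to reconstruct.
n = 8
phantom = np.zeros((n, n))
phantom[2:6, 3:5] = 1.0

# Line integrals along rows and columns: each projection value is the
# sum of pixel values along one ray.
rays = []
for i in range(n):
    row = np.zeros((n, n)); row[i, :] = 1; rays.append(row.ravel())
    col = np.zeros((n, n)); col[:, i] = 1; rays.append(col.ravel())
A = np.array(rays)              # ray matrix: one row per line integral
p = A @ phantom.ravel()         # measured projection data (few projections)

# Objective = sum of squared residuals of the integral equation A x = p,
# minimized by gradient descent on the image x.
x = np.zeros(n * n)
for _ in range(500):
    x -= 0.01 * A.T @ (A @ x - p)

residual = np.sum((A @ x - p) ** 2)
print(residual)  # driven near zero
```

    With so few rays the system is underdetermined, so the residual objective constrains but does not uniquely determine the image; this mirrors the small-projection-data regime the paper targets.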

  15. Neural network CT image reconstruction method for small amount of projection data

    CERN Document Server

    Ma, X F; Takeda, T

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. Whereas the conventionally used objective function for such a neural network is composed of a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available, in comparison with the well-developed medical applications.

  16. Neural network approach for the calculation of potential coefficients in quantum mechanics

    Science.gov (United States)

    Ossandón, Sebastián; Reyes, Camilo; Cumsille, Patricio; Reyes, Carlos M.

    2017-05-01

    A numerical method based on artificial neural networks is used to solve the inverse Schrödinger equation for a multi-parameter class of potentials. First, the finite element method was used to solve the direct problem repeatedly for different parametrizations of the chosen potential function. Then, using the computed eigenvalues as a training set, a direct radial basis neural network was trained to map potential parameters to eigenvalues. This relationship was later inverted and refined by training an inverse radial basis neural network, allowing the calculation of the unknown parameters and thus an estimate of the potential function. Three numerical examples are presented to demonstrate the effectiveness of the method. The results show that the proposed method has the advantage of using fewer computational resources without a significant loss of accuracy.
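
The inverse-map idea can be sketched in miniature (a hedged toy example: a synthetic eigenvalue curve stands in for the finite element Schrödinger solver, only the inverse radial basis network is fitted, and exact kernel interpolation replaces the paper's training procedure):

```python
import numpy as np

def rbf_fit(x, y, s=0.15):
    """Exact-interpolation Gaussian RBF: solve for the kernel weights."""
    phi = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * s ** 2))
    return np.linalg.solve(phi + 1e-8 * np.eye(len(x)), y)

def rbf_eval(centers, w, x, s=0.15):
    phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * s ** 2))
    return phi @ w

# Synthetic stand-in for the direct problem: an eigenvalue as a smooth,
# monotone function of one potential parameter (a finite element solver
# would supply these training pairs in the paper's setting).
a_train = np.linspace(0.0, 2.0, 21)
E_train = a_train + 0.1 * a_train ** 2

w_inv = rbf_fit(E_train, a_train)             # inverse map: eigenvalue -> parameter

a_unknown = 1.37                              # parameter to recover
E_obs = a_unknown + 0.1 * a_unknown ** 2      # "measured" eigenvalue
a_est = rbf_eval(E_train, w_inv, np.array([E_obs]))[0]
print(round(a_est, 2))
```

The small ridge term added to the kernel matrix guards against the ill-conditioning typical of Gaussian RBF interpolation.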

  17. A new neural network model for solving random interval linear programming problems.

    Science.gov (United States)

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem, involving random interval variable coefficients, is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Global asymptotic stability of Cohen-Grossberg neural networks with constant and variable delays

    International Nuclear Information System (INIS)

    Wu Wei; Cui Baotong; Huang Min

    2007-01-01

    Global asymptotic stability of Cohen-Grossberg neural networks with constant and variable delays is studied. Some sufficient conditions are proposed to guarantee the global asymptotic convergence of the neural networks by using different Lyapunov functionals. Our criteria represent an extension of existing results in the literature. A comparison between our results and previous results shows that ours establish a new set of stability criteria for delayed Cohen-Grossberg neural networks. These conditions are less restrictive than those given in the earlier references.

  19. The artificial neural networks: An approach to artificial intelligence; A ``biological`` approach to artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Taraglio, Sergio; Zanela, Andrea [ENEA, Casaccia (Italy). Dipt. Innovazione

    1997-05-01

    Artificial neural networks attempt to simulate the functionality of the nervous system through a complex network of simple computing elements. This work presents an introduction to neural networks and some of their possible applications, especially in the field of Artificial Intelligence.

  20. PREDICTING CUSTOMER CHURN IN BANKING INDUSTRY USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Alisa Bilal Zorić

    2016-03-01

    Full Text Available The aim of this article is to present a case study of the usage of one of the data mining methods, the neural network, in knowledge discovery from databases in the banking industry. Data mining is the automated process of analysing, organizing or grouping a large set of data from different perspectives and summarizing it into useful information using special algorithms. Data mining can help resolve banking problems by finding regularity, causality and correlation in business information that is not visible at first sight because it is hidden in large amounts of data. In this paper, we used one of the data mining methods, the neural network, within the software package Alyuda NeuroIntelligence to predict customer churn in a bank. The focus on customer churn is to determine the customers who are at risk of leaving and to analyse whether those customers are worth retaining. A neural network is a statistical learning model inspired by biological neural networks and is used to estimate or approximate functions that can depend on a large number of inputs which are generally unknown. Although the method itself is complicated, there are tools that enable the use of neural networks without much prior knowledge of how they operate. The results show that clients who use more bank services (products) are more loyal, so the bank should focus on those clients who use fewer than three products and offer them products according to their needs. Similar results are obtained for different network topologies.
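
A minimal stand-in for such a churn model (assumptions: synthetic data in which clients with fewer than three products churn far more often, echoing the paper's finding; a tiny hand-written one-hidden-layer network replaces the Alyuda tool):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
products = rng.integers(1, 6, n)              # bank products used: 1..5
balance = rng.normal(0.0, 1.0, n)             # standardized balance (pure noise here)
churn = ((products < 3) & (rng.random(n) < 0.9)).astype(float)

X = np.column_stack([products / 5.0, balance])
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, 8); b2 = 0.0
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(3000):                         # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)
    p = sig(h @ W2 + b2)
    g = (p - churn) / n                       # d(cross-entropy)/d(logit)
    gh = np.outer(g, W2) * (1 - h ** 2)       # backprop into the hidden layer
    W2 -= 0.5 * h.T @ g; b2 -= 0.5 * g.sum()
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(0)

acc = ((p > 0.5) == (churn > 0.5)).mean()
print(round(acc, 2))                          # well above the base rate
```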

  1. Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control

    Science.gov (United States)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1997-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed such that, once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or the nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  2. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    An artificial neural network has been designed, trained and tested to unfold neutron spectra and simultaneously calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in designing, training and testing the artificial neural network. The robust design of artificial neural networks methodology was used to design the network; this methodology ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here for the first time a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)
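
A hedged sketch of joint unfolding (every quantity here is a synthetic stand-in: the sphere response matrix, dose coefficients, and the low-dimensional family of spectra are invented, and a single least-squares layer replaces the trained networks):

```python
import numpy as np

rng = np.random.default_rng(2)
n_spheres, n_bins = 7, 12

# Synthetic stand-ins: the real response matrix and fluence-to-dose
# coefficients are tabulated quantities, invented here for illustration.
A = rng.uniform(0.1, 1.0, (n_spheres, n_bins))    # sphere response matrix
h = rng.uniform(0.5, 2.0, n_bins)                 # dose per unit fluence

B = np.abs(rng.normal(0.0, 1.0, (n_bins, 3)))     # low-dimensional spectrum family
def sample_spectra(m):
    return np.abs(rng.normal(1.0, 0.3, (m, 3))) @ B.T

S = sample_spectra(100)                           # training spectra
C = S @ A.T                                       # simulated count rates
T = np.column_stack([S, S @ h])                   # joint target: spectrum + dose

W, *_ = np.linalg.lstsq(C, T, rcond=None)         # one-layer "network"

s_new = sample_spectra(1)                         # unseen spectrum
out = (s_new @ A.T) @ W                           # unfold spectrum and dose at once
dose_true = (s_new @ h)[0]
print(round(abs(out[0, -1] - dose_true) / dose_true, 6))
```

Mapping count rates to the spectrum and the dose with one shared model is the "simultaneous" aspect the abstract emphasizes; the low-dimensional spectrum family plays the role of the prior information the real training set provides.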

  3. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    International Nuclear Information System (INIS)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R.

    2006-01-01

    An artificial neural network has been designed, trained and tested to unfold neutron spectra and simultaneously calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in designing, training and testing the artificial neural network. The robust design of artificial neural networks methodology was used to design the network; this methodology ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here for the first time a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  4. A neural network approach to the study of dynamics and structure of molecular systems

    International Nuclear Information System (INIS)

    Getino, C.; Sumpter, B.G.; Noid, D.W.

    1994-01-01

    Neural networks are used to study intramolecular energy flow in molecular systems (tetratomics to macromolecules), developing new techniques for efficient analysis of data obtained from molecular-dynamics and quantum mechanics calculations. Neural networks can map phase space points to intramolecular vibrational energies along a classical trajectory (an example of a complicated coordinate transformation), producing reasonably accurate values for any region of the multidimensional phase space of a tetratomic molecule. Neural network energy flow predictions are found to significantly extend the molecular-dynamics method to longer time-scales and extensive averaging of trajectories for macromolecular systems. The pattern recognition abilities of neural networks can be used to discern phase space features. Neural networks can also expand model calculations by interpolation of costly quantum mechanical ab initio data, used to develop semiempirical potential energy functions.

  5. Motion control of servo cylinder using neural network

    International Nuclear Information System (INIS)

    Hwang, Un Kyoo; Cho, Seung Ho

    2004-01-01

    In this paper, a neural network controller that can be implemented in parallel with a PD controller is suggested for motion control of a hydraulic servo cylinder. By applying a self-excited oscillation method, the design parameters of the open-loop transfer function of the servo cylinder system are identified. Based on these parameters, the PD gains are determined for the desired closed-loop characteristics. The neural network is incorporated with PD control in order to compensate for the inherent nonlinearities of the hydraulic servo system. As an application example, motion control using the combined PD-NN scheme was performed and showed superior performance compared with PD control alone.

  6. A comparison between wavelet based static and dynamic neural network approaches for runoff prediction

    Science.gov (United States)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.; Khan, Mudasser Muneer

    2016-04-01

    In order to predict runoff accurately from a rainfall event, multilayer perceptron neural network models are commonly used in hydrology. Furthermore, wavelet coupled multilayer perceptron neural network (MLPNN) models have also been found superior to simple neural network models that are not coupled with wavelets. However, MLPNN models are considered static, memoryless networks and lack the ability to examine the temporal dimension of data. Recurrent neural network models, on the other hand, have the ability to learn from the preceding conditions of the system and are hence considered dynamic models. This study for the first time explores the potential of wavelet coupled time lagged recurrent neural network (TLRNN) models for runoff prediction using rainfall data. The Discrete Wavelet Transformation (DWT) is employed in this study to decompose the input rainfall data using six of the most commonly used wavelet functions. The performance of the simple and the wavelet coupled static MLPNN models is compared with their counterpart dynamic TLRNN models. The study found that the dynamic wavelet coupled TLRNN models can be considered an alternative to the static wavelet MLPNN models. The study also investigated the effect of memory depth on the performance of the static and dynamic neural network models. The memory depth refers to how much past information (lagged data) is required; it is not known a priori. The db8 wavelet function is found to yield the best results with the static MLPNN models and with the TLRNN models having small memory depths. The performance of the wavelet coupled TLRNN models with large memory depths is found to be insensitive to the selection of the wavelet function, as all wavelet functions have similar performance.
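
The wavelet-coupled input idea can be illustrated with a simplified, undecimated Haar split (assumptions: synthetic rainfall/runoff data, a linear model in place of the MLPNN/TLRNN, and `depth` playing the role of memory depth):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.gamma(2.0, 1.0, 300)                  # synthetic rainfall series
y = 0.5 * x + 0.3 * np.roll(x, 1) + 0.2 * np.roll(x, 2)  # synthetic runoff

# Simplified, undecimated Haar split: approximation = pairwise mean,
# detail = pairwise difference (the paper uses decimated DWT with six
# mother wavelets; db8 performed best there).
a = (x + np.roll(x, 1)) / 2
d = (x - np.roll(x, 1)) / 2

def fit_rmse(depth):
    """Least-squares fit of runoff on `depth` lags of both subseries."""
    cols = [np.roll(a, k) for k in range(depth)] + \
           [np.roll(d, k) for k in range(depth)]
    F = np.column_stack(cols + [np.ones(len(x))])[5:]  # drop roll edge rows
    t = y[5:]
    w, *_ = np.linalg.lstsq(F, t, rcond=None)
    return np.sqrt(np.mean((F @ w - t) ** 2))

e1, e2 = fit_rmse(1), fit_rmse(2)
print(e2 < e1)                                # deeper memory fits better here
```

The deeper memory wins here only because the synthetic runoff depends on two past rainfall values; the abstract's point is precisely that the right depth is not known a priori.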

  7. Adaptive Neural Network Sliding Mode Control for Quad Tilt Rotor Aircraft

    Directory of Open Access Journals (Sweden)

    Yanchao Yin

    2017-01-01

    Full Text Available A novel neural network sliding mode control based on the multicommunity bidirectional drive collaborative search algorithm (M-CBDCS) is proposed to design a flight controller for performing attitude tracking control of a quad tilt rotor aircraft (QTRA). Firstly, the attitude dynamic model of the QTRA concerning propeller tension, channel arm, and moment of inertia is formulated, and the equivalent sliding mode control law is stated. Secondly, an adaptive control algorithm is presented to eliminate the approximation error, where a radial basis function (RBF) neural network is used to regulate the equivalent sliding mode control law online, and the novel M-CBDCS algorithm is developed to uniformly update the unknown neural network weights and essential model parameters adaptively. The nonlinear approximation error is obtained and serves as a novel leakage term in the adaptations to guarantee sliding surface convergence and eliminate the chattering phenomenon, which benefits the overall attitude control performance of the QTRA. Finally, appropriate comparisons among the novel adaptive neural network sliding mode control, the classical neural network sliding mode control, and dynamic inverse PID control are examined, and comparative simulations are included to verify the efficacy of the proposed control method.

  8. Experiments in Neural-Network Control of a Free-Flying Space Robot

    Science.gov (United States)

    Wilson, Edward

    1995-01-01

    Four important generic issues are identified and addressed in some depth in this thesis as part of the development of an adaptive neural network based control system for an experimental free flying space robot prototype. The first issue concerns the importance of true system level design of the control system. A new hybrid strategy is developed here, in depth, for the beneficial integration of neural networks into the total control system. A second important issue in neural network control concerns incorporating a priori knowledge into the neural network. In many applications, it is possible to get a reasonably accurate controller using conventional means. If this prior information is used purposefully to provide a starting point for the optimizing capabilities of the neural network, it can provide much faster initial learning. In a step towards addressing this issue, a new generic Fully Connected Architecture (FCA) is developed for use with backpropagation. A third issue is that neural networks are commonly trained using a gradient based optimization method such as backpropagation; but many real world systems have Discrete Valued Functions (DVFs) that do not permit gradient based optimization. One example is the on-off thrusters that are common on spacecraft. A new technique is developed here that now extends backpropagation learning for use with DVFs. The fourth issue is that the speed of adaptation is often a limiting factor in the implementation of a neural network control system. This issue has been strongly resolved in the research by drawing on the above new contributions.

  9. Optical neural network system for pose determination of spinning satellites

    Science.gov (United States)

    Lee, Andrew; Casasent, David

    1990-01-01

    An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.
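
A miniature version of the Hopfield-style tracker (two detections per frame, invented similarity scores, and a simple penalty weight in place of the paper's full energy design): one neuron per candidate correspondence, a quadratic energy that rewards similarity and penalizes multiple assignments, and gradient-descent neural evolution.

```python
import numpy as np

# Similarity scores between two detections in frame t and two in frame
# t+1 (invented numbers; the true correspondence is 0->0, 1->1).
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])
lam = 2.0                                     # assignment-constraint penalty
U = np.zeros((2, 2))                          # neuron internal states
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(500):
    V = sig(U)                                # one neuron per candidate track
    rows = V.sum(1, keepdims=True) - 1        # one-track-per-detection terms
    cols = V.sum(0, keepdims=True) - 1
    dE = -S + lam * (rows + cols)             # gradient of the quadratic energy
    U -= 0.5 * dE                             # gradient-descent neural evolution

print((sig(U) > 0.5).astype(int))             # thresholded assignment matrix
```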

  10. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation has spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers) and parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  11. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

    Full Text Available The aims of the present study were: to identify the factors which are able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers; to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons); and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between the preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports.

  12. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  13. Accelerator and feedback control simulation using neural networks

    International Nuclear Information System (INIS)

    Nguyen, D.; Lee, M.; Sass, R.; Shoaee, H.

    1991-05-01

    Unlike present constant-model feedback systems, neural networks can adapt as the dynamics of the process change with time. Using a process model, the ''Accelerator'' network is first trained to simulate the dynamics of the beam for a given beam line. This ''Accelerator'' network is then used to train a second ''Controller'' network which performs the control function. In simulation, the networks are used to adjust corrector magnets to control the launch angle and position of the beam, keeping it on the desired trajectory when the incoming beam is perturbed. 4 refs., 3 figs
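
The two-network scheme might be sketched as follows (assumptions: a linear synthetic beam response, a least-squares "Accelerator" model, and gradient descent through the frozen model in place of a trained "Controller" network):

```python
import numpy as np

rng = np.random.default_rng(4)

# Unknown beam response: orbit readings at two monitors as a linear
# function of two corrector-magnet settings (synthetic stand-in).
R_true = np.array([[1.0, 0.4],
                   [0.3, 0.9]])
def beam(u):                                   # the "real machine"
    return u @ R_true.T

# 1) Train the "Accelerator" model on recorded (setting, response) pairs.
U = rng.normal(0, 1, (100, 2))
Y = beam(U)
R_model, *_ = np.linalg.lstsq(U, Y, rcond=None)

# 2) Steer through the frozen model: find corrector settings that bring
#    a perturbed incoming beam back to the reference orbit (target = 0).
target = np.zeros(2)
perturb = np.array([0.5, -0.3])                # incoming-beam perturbation
u = np.zeros(2)
for _ in range(200):
    y = perturb + u @ R_model                  # model-predicted orbit
    u -= 0.2 * (y - target) @ R_model.T        # gradient through the model

print(np.round(perturb + beam(u), 3))          # residual orbit error
```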

  14. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the concept of the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach, focussing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  15. Structure-function relationships during segregated and integrated network states of human brain functional connectivity.

    Science.gov (United States)

    Fukushima, Makoto; Betzel, Richard F; He, Ye; van den Heuvel, Martijn P; Zuo, Xi-Nian; Sporns, Olaf

    2018-04-01

    Structural white matter connections are thought to facilitate integration of neural information across functionally segregated systems. Recent studies have demonstrated that changes in the balance between segregation and integration in brain networks can be tracked by time-resolved functional connectivity derived from resting-state functional magnetic resonance imaging (rs-fMRI) data and that fluctuations between segregated and integrated network states are related to human behavior. However, how these network states relate to structural connectivity is largely unknown. To obtain a better understanding of structural substrates for these network states, we investigated how the relationship between structural connectivity, derived from diffusion tractography, and functional connectivity, as measured by rs-fMRI, changes with fluctuations between segregated and integrated states in the human brain. We found that the similarity of edge weights between structural and functional connectivity was greater in the integrated state, especially at edges connecting the default mode and the dorsal attention networks. We also demonstrated that the similarity of network partitions, evaluated between structural and functional connectivity, increased and the density of direct structural connections within modules in functional networks was elevated during the integrated state. These results suggest that, when functional connectivity exhibited an integrated network topology, structural connectivity and functional connectivity were more closely linked to each other and direct structural connections mediated a larger proportion of neural communication within functional modules. Our findings point out the possibility of significant contributions of structural connections to integrative neural processes underlying human behavior.

  16. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  17. The fundamentals of fuzzy neural network and application in nuclear monitoring

    International Nuclear Information System (INIS)

    Feng Diqing; Lei Ming

    1995-01-01

    The authors present a fuzzy modeling method using a fuzzy neural network with the back-propagation algorithm. The new method can identify the fuzzy model of a nonlinear system automatically. The fuzzy neural network is used to generate fuzzy rules and membership functions. The feasibility and inference performance of the method are examined using numerical data and the XOR problem. The FNN improves accuracy and reliability, reduces design time and minimizes the system cost of fuzzy design. The FNN can be used for estimation of human injury in nuclear explosions and can be simplified to a rule neural network (RNN), which is used for pole extraction of signals. Preliminary simulations show that the FNN has broad prospects in nuclear monitoring.
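
A minimal fuzzy-rule network on the XOR problem mentioned in the abstract (a sketch under assumptions: Gaussian memberships fixed at the input corners, product inference, and rule consequents fitted by least squares instead of full back-propagation):

```python
import numpy as np

# Four fuzzy rules, one centered on each corner of the XOR input square:
# Gaussian membership functions, product inference, weighted-sum output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])            # XOR targets
centers = X.copy()                            # rule centers
s = 0.3                                       # membership width

def firing(Z):
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    f = np.exp(-d2 / (2 * s ** 2))            # rule firing strengths
    return f / f.sum(1, keepdims=True)        # normalized (Sugeno-style)

F = firing(X)
w, *_ = np.linalg.lstsq(F, y, rcond=None)     # rule consequents (trained layer)
print(np.round(F @ w))                        # reproduces XOR: [0. 1. 1. 0.]
```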

  18. Application of CMAC Neural Network to Solar Energy Heliostat Field Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Neng-Sheng Pai

    2013-01-01

    Full Text Available Solar energy heliostat fields comprise numerous sun tracking platforms. As a result, fault detection is a highly challenging problem. Accordingly, the present study proposes a cerebellar model arithmetic computer (CMAC) neural network for automatically diagnosing faults within the heliostat field in accordance with the rotational speed, vibration, and temperature characteristics of the individual heliostat transmission systems. As compared with the radial basis function (RBF) neural network and back-propagation (BP) neural network in heliostat field fault diagnosis, the experimental results show that the proposed neural network has a low training time, good robustness, and reliable diagnostic performance. As a result, it provides an ideal solution for fault diagnosis in modern, large-scale heliostat fields.

  19. Finite time convergent learning law for continuous neural networks.

    Science.gov (United States)

    Chairez, Isaac

    2014-02-01

    This paper addresses the design of a discontinuous finite time convergent learning law for neural networks with continuous dynamics. The neural network was used here to obtain a non-parametric model for uncertain systems described by a set of ordinary differential equations. The source of uncertainties was the presence of some external perturbations and poor knowledge of the nonlinear function describing the system dynamics. A new adaptive algorithm based on discontinuous algorithms was used to adjust the weights of the neural network. The adaptive algorithm was derived by means of a non-standard Lyapunov function that is lower semi-continuous and differentiable in almost the whole space. A compensator term was included in the identifier to reject some specific perturbations using a nonlinear robust algorithm. Two numerical examples demonstrated the improvements achieved by the learning algorithm introduced in this paper compared to classical schemes with continuous learning methods. The first one dealt with a benchmark problem used in the paper to explain how the discontinuous learning law works. The second one used the methane production model to show the benefits in engineering applications of the learning law proposed in this paper. Copyright © 2013 Elsevier Ltd. All rights reserved.
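
The flavour of a discontinuous (sign-based) learning law can be shown on a scalar identification problem (a toy sketch, not the paper's Lyapunov-based design; the gain and input range are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
w_true = 1.7                                  # unknown plant parameter
w = 0.0                                       # identifier weight
gamma = 0.01                                  # discontinuous adaptation gain

for _ in range(2000):
    x = rng.uniform(0.5, 1.5)                 # persistently exciting input
    e = (w - w_true) * x                      # output identification error
    w -= gamma * np.sign(e) * x               # sign (discontinuous) update

print(round(w, 1))                           # chatters in a small band near 1.7
```

The constant-magnitude step is what gives the characteristic finite-time approach to the true parameter, at the cost of the small chattering band the leakage/compensator terms in the paper are designed to tame.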

  20. Neural Network Approach to Locating Cryptography in Object Code

    Energy Technology Data Exchange (ETDEWEB)

    Jason L. Wright; Milos Manic

    2009-09-01

    Finding and identifying cryptography is a growing concern in the malware analysis community. In this paper, artificial neural networks are used to classify functional blocks from a disassembled program as being either cryptography related or not. The resulting system, referred to as NNLC (Neural Net for Locating Cryptography) is presented and results of applying this system to various libraries are described.

  1. Neural modeling of prefrontal executive function

    Energy Technology Data Exchange (ETDEWEB)

    Levine, D.S. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    Brain executive function is based in a distributed system whereby prefrontal cortex is interconnected with other cortical and subcortical loci. Executive function is divided roughly into three interacting parts: affective guidance of responses; linkage among working memory representations; and forming complex behavioral schemata. Neural network models of each of these parts are reviewed and fit into a preliminary theoretical framework.

  2. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W → qq̄, where W-bosons are produced in pp̄ reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from classification applications in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product, our application clearly demonstrates the need for using 'intelligent' variables in instances when the number of training instances is limited. (orig.)

  3. Exponential p-stability of delayed Cohen-Grossberg-type BAM neural networks with impulses

    International Nuclear Information System (INIS)

    Xia Yonghui; Huang Zhenkun; Han Maoan

    2008-01-01

    An impulsive Cohen-Grossberg-type bidirectional associative memory (BAM) neural network with distributed delays is studied. Some new sufficient conditions are established for the existence and global exponential stability of a unique equilibrium without strict conditions imposed on the self-regulation functions. The approach is based on the Lyapunov-Krasovskii functional and homeomorphism theory. When applied to BAM neural networks, our results generalize some previously known results. It is believed that these results are significant and useful for the design and applications of Cohen-Grossberg-type bidirectional associative memory networks.

  4. New backpropagation algorithm with type-2 fuzzy weights for neural networks

    CERN Document Server

    Gaxiola, Fernando; Valdez, Fevrier

    2016-01-01

    In this book a neural network learning method with type-2 fuzzy weight adjustment is proposed. The mathematical analysis of the proposed learning method architecture and the adaptation of type-2 fuzzy weights are presented. The proposed method is based on research into recent methods that handle weight adaptation, especially fuzzy weights. The internal operation of the neuron is changed to work with two internal calculations for the activation function, to obtain two results as outputs of the proposed method. Simulation results and a comparative study among monolithic neural networks, neural networks with type-1 fuzzy weights and neural networks with type-2 fuzzy weights are presented to illustrate the advantages of the proposed method. The proposed approach is based on recent methods that handle adaptation of weights using fuzzy logic of type-1 and type-2. It is applied to cases of prediction for the Mackey-Glass (for τ=17) and Dow-Jones time series, and recognition of persons with iris bi...

  5. Inversion of a lateral log using neural networks

    International Nuclear Information System (INIS)

    Garcia, G.; Whitman, W.W.

    1992-01-01

    In this paper a technique using neural networks is demonstrated for the inversion of a lateral log. The lateral log is simulated by a finite difference method, which in turn is used as an input to a backpropagation neural network. An initial-guess earth model is generated from the neural network, which is then input to a Marquardt inversion. The neural network reacts to gross and subtle data features in actual logs and produces a response inferred from the knowledge stored in the network during a training process. The neural network inversion of lateral logs is tested on synthetic and field data. Tests using field data resulted in a final earth model whose simulated lateral log is in good agreement with the actual log data.

  6. Nuclear reactors project optimization based on neural network and genetic algorithm

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Schirru, Roberto; Martinez, Aquilino S.

    1997-01-01

    This work presents a prototype of a system for nuclear reactor core design optimization based on genetic algorithms and artificial neural networks. A neural network is modeled and trained to predict the flux and the neutron multiplication factor from the enrichment, lattice pitch and cladding thickness, with an average error of less than 2%. The values predicted by the neural network are used by a genetic algorithm in its heuristic search, guided by an objective function that rewards high flux values and penalizes multiplication factors far from the required value. By associating this quick prediction - which may substitute for the reactor physics calculation code - with the global optimization capacity of the genetic algorithm, a quick and effective system for nuclear reactor core design optimization was obtained. (author). 11 refs., 8 figs., 3 tabs
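
    The coupling described above (a fast surrogate supplying objective values to a genetic algorithm that rewards high flux and penalizes off-target multiplication factors) can be sketched as follows. The analytic `surrogate` below is an invented stand-in for the trained network and the reactor physics code; its coefficients, the parameter names and the GA settings are all illustrative assumptions.

```python
import random

# Sketch of surrogate-assisted core design search: a cheap predictor stands
# in for the reactor physics code, and a genetic algorithm maximizes an
# objective that rewards high flux and penalizes k_eff far from 1.0.
# The analytic "surrogate" and all coefficients are invented stand-ins.

random.seed(1)

def surrogate(enrich, pitch, clad):
    # hypothetical smooth responses over normalized [0, 1] parameters
    flux = enrich * (1.2 - pitch) * (1.0 - 0.3 * clad)
    k_eff = 0.8 + 0.5 * enrich - 0.2 * clad
    return flux, k_eff

def fitness(ind):
    flux, k = surrogate(*ind)
    return flux - 5.0 * abs(k - 1.0)     # reward flux, penalize off-target k

def evolve(pop_size=40, gens=60, pm=0.2):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]               # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 3)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:               # Gaussian mutation
                i = random.randrange(3)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
flux, k = surrogate(*best)
```

    Because each fitness evaluation is only a surrogate call, the search stays cheap; in the prototype described above, the trained network plays this role in place of the full reactor physics calculation.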

  7. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as these. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks (a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data; and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrates the applicability of the Polychronous Spiking Network to financial data forecasting, and this in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.

  8. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.

  9. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural-networks-based technique. Our method for constructing a collision-free path for a robot moving among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next section of the robot's path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed techniques are presented.

  10. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  11. A Simple Quantum Neural Net with a Periodic Activation Function

    OpenAIRE

    Daskin, Ammar

    2018-01-01

    In this paper, we propose a simple neural net that requires only $O(n\log_2 k)$ qubits and $O(nk)$ quantum gates: here, $n$ is the number of input parameters, and $k$ is the number of weights applied to these parameters in the proposed neural net. We describe the network in terms of a quantum circuit, and then draw its equivalent classical neural net, which involves $O(k^n)$ nodes in the hidden layer. Then, we show that the network uses a periodic activation function of cosine values o...

  12. Autonomous dynamics in neural networks: the dHAN concept and associative thought processes

    Science.gov (United States)

    Gros, Claudius

    2007-02-01

    The neural activity of the human brain is dominated by self-sustained activities. External sensory stimuli influence this autonomous activity but they do not drive the brain directly. Most standard artificial neural network models are however input driven and do not show spontaneous activities. It constitutes a challenge to develop organizational principles for controlled, self-sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. An associative thought process is characterized, within this approach, by a time series of transient attractors. Each transient state corresponds to a stored item of information, a memory. The subsequent transient states are characterized by large associative overlaps, which are identical to acquired patterns. Memory states, the acquired patterns, therefore have a dual functionality. In this approach the self-sustained neural activity has a central functional role. The network acquires a discrimination capability, as external stimuli need to compete with the autonomous activity. Noise in the input is readily filtered out. Hebbian learning of external patterns occurs simultaneously with the ongoing associative thought process. The autonomous dynamics needs a long-term working-point optimization which, within the dHAN concept, acquires a dual functionality: it stabilizes the time development of the associative thought process and limits runaway synaptic growth, which otherwise occurs generically in neural networks with self-induced activities and Hebbian-type learning rules.

  13. Non-Linear State Estimation Using Pre-Trained Neural Networks

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Andersen, Nils Axel; Ravn, Ole

    2010-01-01

    effecting the transformation. This function is approximated by a neural network using offline training. The training is based on Monte Carlo sampling. A way to obtain parametric distributions of flexible shape to be used easily with these networks is also presented. The method can also be used to improve...... other parametric methods around regions with strong non-linearities by including them inside the network....

  14. Interpretable neural networks with BP-SOM

    NARCIS (Netherlands)

    Weijters, A.J.M.M.; Bosch, van den A.P.J.; Pobil, del A.P.; Mira, J.; Ali, M.

    1998-01-01

    Artificial Neural Networks (ANNs) are used successfully in industry and commerce. This is not surprising, since neural networks are especially competitive for complex tasks for which insufficient domain-specific knowledge is available. However, interpretation of models induced by ANNs is often

  15. Global stability of stochastic high-order neural networks with discrete and distributed delays

    International Nuclear Information System (INIS)

    Wang Zidong; Fang Jianan; Liu Xiaohui

    2008-01-01

    High-order neural networks can be considered as an expansion of Hopfield neural networks, and have stronger approximation properties, faster convergence rates, greater storage capacity, and higher fault tolerance than lower-order neural networks. In this paper, the global asymptotic stability analysis problem is considered for a class of stochastic high-order neural networks with discrete and distributed time-delays. Based on a Lyapunov-Krasovskii functional and the stochastic stability analysis theory, several sufficient conditions are derived, which guarantee the global asymptotic convergence of the equilibrium point in the mean square. It is shown that the stochastic high-order delayed neural networks under consideration are globally asymptotically stable in the mean square if two linear matrix inequalities (LMIs) are feasible, where the feasibility of LMIs can be readily checked by the Matlab LMI toolbox. It is also shown that the main results in this paper cover some recently published works. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria.

  16. Synchronization of Switched Neural Networks With Communication Delays via the Event-Triggered Control.

    Science.gov (United States)

    Wen, Shiping; Zeng, Zhigang; Chen, Michael Z Q; Huang, Tingwen

    2017-10-01

    This paper addresses the issue of synchronization of switched delayed neural networks with communication delays via event-triggered control. For synchronizing coupled switched neural networks, we propose a novel event-triggered control law which could greatly reduce the number of control updates for synchronization tasks of coupled switched neural networks involving embedded microprocessors with limited on-board resources. The control signals are driven by properly defined events, which depend on the measurement errors and current-sampled states. By using a delay system method, a novel model of synchronization error system with delays is proposed with the communication delays and event-triggered control in the unified framework for coupled switched neural networks. The criteria are derived for the event-triggered synchronization analysis and control synthesis of switched neural networks via the Lyapunov-Krasovskii functional method and free weighting matrix approach. A numerical example is elaborated on to illustrate the effectiveness of the derived results.

  17. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm

    International Nuclear Information System (INIS)

    Yu, Lean; Wang, Shouyang; Lai, Kin Keung

    2008-01-01

    In this study, an empirical mode decomposition (EMD) based neural network ensemble learning paradigm is proposed for world crude oil spot price forecasting. For this purpose, the original crude oil spot price series were first decomposed into a finite, and often small, number of intrinsic mode functions (IMFs). Then a three-layer feed-forward neural network (FNN) model was used to model each of the extracted IMFs, so that the tendencies of these IMFs could be accurately predicted. Finally, the prediction results of all IMFs are combined with an adaptive linear neural network (ALNN), to formulate an ensemble output for the original crude oil price series. For verification and testing, two main crude oil price series, West Texas Intermediate (WTI) crude oil spot price and Brent crude oil spot price, are used to test the effectiveness of the proposed EMD-based neural network ensemble learning methodology. Empirical results obtained demonstrate the attractiveness of the proposed EMD-based neural network ensemble learning paradigm. (author)
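
    The decompose-predict-combine flow above can be sketched in a few lines. True EMD sifting is involved, so in this sketch a moving-average split into "detail" and "trend" components stands in for the IMFs, a one-step linear autoregression stands in for each feed-forward network, and a plain sum stands in for the adaptive linear combiner (ALNN); everything below is an illustrative simplification, not the paper's method.

```python
import math

# Sketch of the decompose-predict-combine flow. True EMD sifting is
# involved, so a moving-average split into "detail" and "trend" components
# stands in for the IMFs, a one-step linear autoregression stands in for
# each feed-forward network, and a plain sum stands in for the ALNN.

def moving_average(series, w=5):
    half = w // 2
    return [sum(series[max(0, i - half): i + half + 1]) /
            len(series[max(0, i - half): i + half + 1])
            for i in range(len(series))]

def decompose(series):
    trend = moving_average(series)
    detail = [x - t for x, t in zip(series, trend)]
    return [detail, trend]                   # stand-in "IMFs"

def ar1_forecast(component):
    # least-squares fit of x[t] = a * x[t-1] + b, then a one-step prediction
    xs, ys = component[:-1], component[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) or 1e-12
    a = cov / var
    b = my - a * mx
    return a * component[-1] + b

def ensemble_forecast(series):
    # predict each component separately, then combine
    return sum(ar1_forecast(c) for c in decompose(series))

# toy "price" series: trend plus oscillation
series = [50 + 10 * math.sin(i / 3) + 0.1 * i for i in range(60)]
pred = ensemble_forecast(series)
```

    The point of the decomposition is that each component is easier to model than the raw series; the real paradigm replaces each stand-in here with its learned counterpart.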

  18. Adaptive nonlinear control using input normalized neural networks

    International Nuclear Information System (INIS)

    Leeghim, Henzeh; Seo, In Ho; Bang, Hyo Choong

    2008-01-01

    An adaptive feedback linearization technique combined with the neural network is addressed to control uncertain nonlinear systems. The neural network-based adaptive control theory has been widely studied. However, the stability analysis of the closed-loop system with the neural network is rather complicated and difficult to understand, and sometimes unnecessary assumptions are involved. As a result, unnecessary assumptions for stability analysis are avoided by using the neural network with an input normalization technique. The ultimate boundedness of the tracking error is simply proved by the Lyapunov stability theory. A new simple update law as an adaptive nonlinear control is derived by the simplification of the input normalized neural network, assuming the variation of the uncertain term is sufficiently small.

  19. Chinese Sentence Classification Based on Convolutional Neural Network

    Science.gov (United States)

    Gu, Chengwei; Wu, Ming; Zhang, Chuang

    2017-10-01

    Sentence classification is one of the significant issues in Natural Language Processing (NLP). Feature extraction is often regarded as the key point for natural language processing. Traditional ways based on machine learning, such as the Naive Bayesian model, cannot take high-level features into consideration. A neural network for sentence classification can make use of contextual information to achieve better results in sentence classification tasks. In this paper, we focus on classifying Chinese sentences, and we propose a novel Convolutional Neural Network (CNN) architecture for Chinese sentence classification. In particular, whereas most previous methods use a softmax classifier for prediction, we embed a linear support vector machine as a substitute for softmax in the deep neural network model, minimizing a margin-based loss to get a better result. We also use tanh as the activation function instead of ReLU. The CNN model improves the results of Chinese sentence classification tasks. Experimental results on a Chinese news title database validate the effectiveness of our model.

  20. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural...... network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high speed performance, but the neural network cannot be used to extract...... knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model....

  1. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  2. A new criterion to global exponential periodicity for discrete-time BAM neural network with infinite delays

    International Nuclear Information System (INIS)

    Zhou Tiejun; Liu Yuehua; Li Xiaoping; Liu Yirong

    2009-01-01

    The discrete-time bidirectional associative memory (BAM) neural network with periodic coefficients and infinite delays is studied. Rather than employing the continuation theorem of coincidence degree theory as in other works, a sufficient criterion is obtained by constructing a suitable Lyapunov function and using a fixed point theorem and some analysis techniques; this criterion ensures the existence and global exponential stability of a periodic solution for this type of discrete-time BAM neural network. The obtained result is less restrictive on the BAM neural networks than previously known criteria. Furthermore, it can be applied to BAM neural networks whose signal transfer functions are neither bounded nor differentiable. In addition, an example and its numerical simulation are given to illustrate the effectiveness of the obtained result.

  3. A novel neural network for multi project programming with limited resources

    International Nuclear Information System (INIS)

    Liping, Z.; Jianhua, W.; Fenfang, Z.; Guojian, H.

    1996-01-01

    This paper discusses the theory of multi-project programming and how to use an Artificial Neural Network model to solve this problem. To obtain a globally optimal solution, simulated annealing is used in our scheme. To improve the convergence of the argument matrix during the optimization of the target function, the Lagrange operator is replaced with the inverse of the temperature in simulated annealing. Combined with the Hopfield network algorithm, the problem is solved quickly and satisfactorily. Experimental results show that it is very effective to use an Artificial Neural Network to solve this problem.

  4. Information content of neural networks with self-control and variable activity

    International Nuclear Information System (INIS)

    Bolle, D.; Amari, S.I.; Dominguez Carreta, D.R.C.; Massolo, G.

    2001-01-01

    A self-control mechanism for the dynamics of neural networks with variable activity is discussed using a recursive scheme for the time evolution of the local field. It is based upon the introduction of a self-adapting time-dependent threshold as a function of both the neural and pattern activity in the network. This mechanism leads to an improvement of the information content of the network as well as an increase of the storage capacity and the basins of attraction. Different architectures are considered and the results are compared with numerical simulations.

  5. Exponential Synchronization of Networked Chaotic Delayed Neural Network by a Hybrid Event Trigger Scheme.

    Science.gov (United States)

    Fei, Zhongyang; Guan, Chaoxu; Gao, Huijun

    2018-06-01

    This paper is concerned with the exponential synchronization of a master-slave chaotic delayed neural network under an event-trigger control scheme. The model is established in a networked control framework, where both external disturbance and network-induced delay are taken into consideration. The desired aim is to synchronize the master and slave systems with limited communication capacity and network bandwidth. In order to save network resources, we adopt a hybrid event-trigger approach, which not only reduces the number of data packets sent out, but also excludes the Zeno phenomenon. By using an appropriate Lyapunov functional, a sufficient criterion for stability is proposed for the error system with an extended dissipativity performance index. Moreover, the hybrid event-trigger scheme and the controller are codesigned for the network-based delayed neural network to guarantee exponential synchronization between the master and slave systems. The effectiveness and potential of the proposed results are demonstrated through a numerical example.
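
    The event-trigger idea (transmit a new sample only when the measurement error, judged against the current state plus a small absolute margin, grows too large; the absolute margin is one common way of excluding Zeno behavior) can be sketched on a scalar toy system. The plant, gains and thresholds below are invented for illustration and are not the paper's network model.

```python
# Sketch of a mixed event-trigger rule: the controller's held sample is
# refreshed only when the measurement error exceeds a fraction of the
# current state plus a small absolute margin (the absolute term is one
# common way of excluding Zeno behavior). The scalar plant, gains and
# thresholds below are invented toys, not the paper's network model.

def simulate(sigma=0.2, eps=0.01, dt=0.01, steps=1000):
    x = 1.0                  # synchronization-error state
    x_hat = x                # last transmitted sample held by the controller
    events = 0
    for _ in range(steps):
        if abs(x - x_hat) > sigma * abs(x) + eps:   # trigger condition
            x_hat = x                               # event: transmit sample
            events += 1
        u = -3.0 * x_hat                            # sampled-data feedback
        x += dt * (x + u)                           # unstable plant + control
    return x, events

x_final, n_events = simulate()
```

    The error state decays into a small neighborhood of zero while the sample is transmitted at only a fraction of the simulation steps, which is the resource saving the event-trigger approach is after.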

  6. Globally exponential stability of neural network with constant and variable delays

    International Nuclear Information System (INIS)

    Zhao Weirui; Zhang Huanshui

    2006-01-01

    This Letter presents new sufficient conditions for the globally exponential stability of neural networks with delays. We show that these results generalize recently published globally exponential stability results. In particular, several different globally exponential stability conditions in the literature, which were proved using different Lyapunov functionals, are generalized and unified by using the same Lyapunov functional and the technique of integral inequalities. A comparison between our results and previous results shows that our results establish a new set of stability criteria for delayed neural networks. These conditions are less restrictive than those given in the earlier references.

  7. Lukasiewicz-Topos Models of Neural Networks, Cell Genome and Interactome Nonlinear Dynamic Models

    CERN Document Server

    Baianu, I C

    2004-01-01

    A categorical and Lukasiewicz-Topos framework for Lukasiewicz Algebraic Logic models of nonlinear dynamics in complex functional systems such as neural networks, genomes and cell interactomes is proposed. Lukasiewicz Algebraic Logic models of genetic networks and signaling pathways in cells are formulated in terms of nonlinear dynamic systems with n-state components that allow for the generalization of previous logical models of both genetic activities and neural networks. An algebraic formulation of variable 'next-state functions' is extended to a Lukasiewicz Topos with an n-valued Lukasiewicz Algebraic Logic subobject classifier description that represents non-random and nonlinear network activities as well as their transformations in developmental processes and carcinogenesis.

  8. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.

  9. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using an artificial neural network are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, together with the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  10. Refrigerant flow through electronic expansion valve: Experiment and neural network modeling

    International Nuclear Information System (INIS)

    Cao, Xiang; Li, Ze-Yu; Shao, Liang-Liang; Zhang, Chun-Lu

    2016-01-01

    Highlights: • Experimental data from different sources were used in comparison of EEV models. • Artificial neural network in EEV modeling is superior to literature correlations. • Artificial neural network with 4-4-1 structure and S function is recommended. • Artificial neural network is flexible for EEV mass flow rate and opening prediction. - Abstract: The electronic expansion valve (EEV) plays a crucial role in controlling the refrigerant mass flow rate of refrigeration or heat pump systems for energy savings. However, complexities in the two-phase throttling process and geometry make accurate modeling of EEV flow characteristics more difficult. This paper developed an artificial neural network (ANN) model using refrigerant inlet and outlet pressures, inlet subcooling, and EEV opening as ANN inputs, and refrigerant mass flow rate as ANN output. Both linear and nonlinear transfer functions in the hidden layer were used and compared to each other. Experimental data from multiple sources, including in-house experiments of one EEV with R410A, were used for ANN training and test. In addition, literature correlations were compared with the ANN as well. Results showed that the ANN model with nonlinear transfer function worked well in all cases and is much more accurate than the literature correlations. In all cases, the nonlinear ANN predicted refrigerant mass flow rates within ±0.4% average relative deviation (A.D.) and 2.7% standard deviation (S.D.), meanwhile it predicted the EEV opening at 0.1% A.D. and 2.1% S.D.

  11. Artificial neural network application for predicting soil distribution coefficient of nickel

    International Nuclear Information System (INIS)

    Falamaki, Amin

    2013-01-01

    The distribution (or partition) coefficient (K_d) is an applicable parameter for modeling contaminant and radionuclide transport as well as risk analysis. Selection of this parameter may cause significant error in predicting the impacts of contaminant migration or site-remediation options. In this regard, various models have been presented to predict K_d values for different contaminants, especially heavy metals and radionuclides. In this study, an artificial neural network (ANN) is used to present a simplified model for predicting the K_d of nickel. The main objective is to develop a more accurate model with a minimal number of parameters, which can be determined experimentally or selected by review of different studies. In addition, the effects of training as well as the type of the network are considered. The K_d values of Ni are strongly dependent on the pH of the soil, and mathematical relationships between pH and the K_d of nickel have been presented recently. In this study, the same database as these presented models was used to verify that neural networks may be a more useful tool for predicting K_d. Two different types of ANN, multilayer perceptron and radial basis function, were used to investigate the effect of the network geometry on the results. In addition, each network was trained with 80 and 90% of the data and tested on the remaining 20 and 10% of the data. Then the results of the networks were compared with the results of the mathematical models. Although the networks were trained with only 80 and 90% of the data, the results show that all the networks predict with higher accuracy relative to the mathematical models, which were derived from 100% of the data. More training of a network increases its accuracy. The multilayer perceptron network used in this study predicts better than the radial basis function network. - Highlights: ► Simplified models for predicting the K_d of nickel presented using artificial neural networks. ► Multilayer perceptron and radial basis function used to predict the K_d of nickel in
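
    A radial basis function network of the kind compared above can be sketched as Gaussian units along the pH axis with output weights obtained by least squares; with one center per sample the fit interpolates the data exactly. The pH-to-K_d pairs below are invented toy values, not the study's database.

```python
import math

# Sketch of a radial basis function network: Gaussian units centered along
# the pH axis, output weights solved by least squares (normal equations via
# Gaussian elimination). The pH -> K_d pairs are invented toy values, not
# the study's database.

def rbf_features(ph, centers, width=1.0):
    return [math.exp(-((ph - c) / width) ** 2) for c in centers]

def fit_rbf(phs, kds, centers):
    A = [rbf_features(p, centers) for p in phs]
    n, m = len(centers), len(phs)
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    aty = [sum(A[k][i] * kds[k] for k in range(m)) for i in range(n)]
    for i in range(n):                       # elimination with partial pivot
        p = max(range(i, n), key=lambda r: abs(ata[r][i]))
        ata[i], ata[p] = ata[p], ata[i]
        aty[i], aty[p] = aty[p], aty[i]
        for r in range(i + 1, n):
            f = ata[r][i] / ata[i][i]
            ata[r] = [a - f * b for a, b in zip(ata[r], ata[i])]
            aty[r] -= f * aty[i]
    w = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        s = sum(ata[i][j] * w[j] for j in range(i + 1, n))
        w[i] = (aty[i] - s) / ata[i][i]
    return w

def predict(ph, centers, w):
    return sum(wi * fi for wi, fi in zip(w, rbf_features(ph, centers)))

phs = [2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5]
kds = [5.0, 12.0, 30.0, 80.0, 200.0, 480.0, 900.0]   # toy, not measured data
centers = phs            # one center per sample -> the fit interpolates
w = fit_rbf(phs, kds, centers)
```

    Unlike a multilayer perceptron, only the linear output weights are learned here; the Gaussian centers and width are fixed design choices, which is what makes the fit a linear least-squares problem.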

  12. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchical structured neocognitron, high order correlator, network with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features, such as edges and profiles, of images as the data form for input. Other systems use raw data as input signals to the networks. We will present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low-level functions into a high-level cognitive system, achieving invariances and other problems. Perspectives of applications of some human vision models and neural network models are analyzed.

  13. Application of neural networks to seismic active control

    International Nuclear Information System (INIS)

    Tang, Yu.

    1995-01-01

    An exploratory study on seismic active control using an artificial neural network (ANN) is presented in which a single-degree-of-freedom (SDF) structural system is controlled by a trained neural network. A feed-forward neural network and the backpropagation training method are used in the study. In backpropagation training, the learning rate is determined by ensuring the decrease of the error function at each training cycle. The training patterns for the neural net are generated randomly. Then, the trained ANN is used to compute the control force according to the control algorithm. The control strategy proposed herein is to apply the control force at every time step to destroy the build-up of the system response. The ground motions considered in the simulations are the N21E and N69W components of the Lake Hughes No. 12 record from the San Fernando Valley earthquake in California on February 9, 1971. Significant reduction of the structural response by one order of magnitude is observed. Also, it is shown that the proposed control strategy has the ability to reduce the peak that occurs during the first few cycles of the time history. These promising results assert the potential of applying ANNs to active structural control under seismic loads.

  14. Neural feedback linearization adaptive control for affine nonlinear systems based on neural network estimator

    Directory of Open Access Journals (Sweden)

    Bahita Mohamed

    2011-01-01

    Full Text Available In this work, we introduce an adaptive neural network controller for a class of nonlinear systems. The approach uses two Radial Basis Function (RBF) networks. The first RBF network is used to approximate the ideal control law, which cannot be implemented since the dynamics of the system are unknown. The second RBF network is used for on-line estimation of the control gain, which is a nonlinear and unknown function of the states. The updating laws for the combined estimator and controller are derived through Lyapunov analysis. Asymptotic stability is established with the tracking errors converging to a neighborhood of the origin. Finally, the proposed method is applied to control and stabilize the inverted pendulum system.
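
    The core idea of an RBF approximator can be shown compactly. The sketch below fits a Gaussian radial basis function network to a stand-in nonlinear function by batch least squares; the paper instead adapts the weights on-line through Lyapunov-derived laws, so this is only the static approximation step, with the target function, centers, and width chosen by us for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # "Unknown" nonlinearity standing in for the ideal control law / gain.
    return np.sin(x) + 0.5 * x

centers = np.linspace(-3.0, 3.0, 15)   # fixed Gaussian centers
width = 0.5

def phi(x):
    # Gaussian RBF feature matrix, shape (N, 15).
    return np.exp(-((x[:, None] - centers)**2) / (2.0 * width**2))

# Fit only the linear output weights by least squares on sampled data.
x_train = rng.uniform(-3.0, 3.0, 100)
w, *_ = np.linalg.lstsq(phi(x_train), f(x_train), rcond=None)

# Check the approximation on the interior of the training interval.
x_test = np.linspace(-2.5, 2.5, 50)
err = float(np.max(np.abs(phi(x_test) @ w - f(x_test))))
```

The same feature map underlies the on-line scheme: the Lyapunov-derived update laws adjust `w` continuously instead of solving for it in one batch.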

  15. A fuzzy neural network for sensor signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2000-01-01

    In this work, a fuzzy neural network is used to estimate the relevant sensor signal using other sensor signals. Noise components in the input signals to the fuzzy neural network are removed through the wavelet denoising technique. Principal component analysis (PCA) is used to reduce the dimension of the input space without losing a significant amount of information. A lower-dimensional input space will also usually reduce the time necessary to train a fuzzy neural network. The principal component analysis also simplifies the selection of the input signals to the fuzzy neural network. The fuzzy neural network parameters are optimized by two learning methods. A genetic algorithm is used to optimize the antecedent parameters of the fuzzy neural network, and a least-squares algorithm is used to solve for the consequent parameters. The proposed algorithm was verified through application to the pressurizer water level and hot-leg flowrate measurements in pressurized water reactors.
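
    The PCA preprocessing step described above can be sketched as follows; the correlated "sensor" channels are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Four correlated "sensor" channels driven by one underlying process
# (synthetic stand-ins for plant measurements).
t = np.linspace(0.0, 10.0, 500)
latent = np.sin(t)
X = np.column_stack([a * latent + rng.normal(0.0, 0.05, t.size)
                     for a in (1.0, 0.8, -0.6, 0.4)])

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)        # variance fraction per component

# Keep only the leading component(s) as the reduced network input.
k = 1
Z = Xc @ Vt[:k].T                      # reduced input space, shape (500, 1)
```

Here one component carries nearly all the variance, so a model fed `Z` trains on a 1-D input instead of the original 4-D one, which is exactly the training-time saving the abstract describes.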

  16. Multistability in bidirectional associative memory neural networks

    International Nuclear Information System (INIS)

    Huang Gan; Cao Jinde

    2008-01-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, of which 2^n are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons in the two layers, respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  17. Multistability in bidirectional associative memory neural networks

    Science.gov (United States)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, of which 2^n are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons in the two layers, respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  18. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we train neural networks in a supervised manner to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show, remarkably, that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
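
    The discrete winding number formula mentioned above can be evaluated directly. The sketch below computes it for the standard SSH-type chiral Hamiltonian h(k) = (t1 + t2 cos k, t2 sin k); the model choice and parameter names are ours, not necessarily those used in the Letter.

```python
import numpy as np

def winding_number(t1, t2, n=400):
    """Discrete winding number of h(k) = (t1 + t2*cos k, t2*sin k)."""
    k = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    hx, hy = t1 + t2 * np.cos(k), t2 * np.sin(k)
    theta = np.angle(hx + 1j * hy)
    # Phase increments around the closed loop in the Brillouin zone,
    # wrapped into (-pi, pi] so branch cuts do not spoil the sum.
    dtheta = np.diff(np.append(theta, theta[0]))
    dtheta = (dtheta + np.pi) % (2.0 * np.pi) - np.pi
    return int(round(dtheta.sum() / (2.0 * np.pi)))

print(winding_number(0.5, 1.0), winding_number(1.0, 0.5))  # prints: 1 0
```

The network's learned rule is reportedly this same discretized phase-accumulation formula: the topological phase (t2 > t1) gives winding 1 and the trivial phase (t1 > t2) gives 0.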

  19. ELeaRNT: Evolutionary Learning of Rich Neural Network Topologies

    National Research Council Canada - National Science Library

    Matteucci, Matteo

    2006-01-01

    In this paper we present ELeaRNT, an evolutionary strategy which evolves rich neural network topologies in order to find an optimal domain-specific non-linear function approximator with good generalization performance...

  20. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    A hybrid of the two called Elman-Jordan (or Multi-recurrent) neural network is also being used. In this study, we evaluated the performance of these neural networks on three established benchmark time series prediction problems. Results from the experiments showed that the Jordan neural network performed significantly ...

  1. ISC feedforward control of gasoline engine. Adaptive system using neural network; Jidoshayo gasoline engine no ISC feedforward seigyo. Neural network wo mochiita tekioka

    Energy Technology Data Exchange (ETDEWEB)

    Kinugawa, N; Morita, S; Takiyama, T [Osaka City University, Osaka (Japan)

    1997-10-01

    For fuel economy and a good driver's feeling, it is necessary to keep the idle speed at a constant low value. However, a low idle speed carries the risk of engine stall when the engine torque is disturbed by the alternator and other electrical loads. In this paper, an adaptive feedforward idle-speed control system against electrical loads was investigated. This system was based on the reversed transfer functions of the object system, and a neural network was used to adapt the system for aging. This neural network was also used for creating the feedforward table map. Good experimental results were obtained. 2 refs., 11 figs.

  2. Quantum neural networks: Current status and prospects for development

    Science.gov (United States)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.

  3. Applications of self-organizing neural networks in virtual screening and diversity selection.

    Science.gov (United States)

    Selzer, Paul; Ertl, Peter

    2006-01-01

    Artificial neural networks provide a powerful technique for the analysis and modeling of nonlinear relationships between molecular structures and pharmacological activity. Many network types, including Kohonen and counterpropagation, also provide an intuitive method for the visual assessment of correspondence between the input and output data. This work shows how a combination of neural networks and radial distribution function molecular descriptors can be applied in various areas of industrial pharmaceutical research. These applications include the prediction of biological activity, the selection of screening candidates (cherry picking), and the extraction of representative subsets from large compound collections such as combinatorial libraries. The methods described have also been implemented as an easy-to-use Web tool, allowing chemists to perform interactive neural network experiments on the Novartis intranet.

  4. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost nonlinear neural networks provide us with improved reconstruction and prediction capabilities for the near wall velocity fields. Based on these results advantages and drawbacks of both approaches are discussed with an outlook toward the development of near wall models for turbulence modeling and control
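
    The POD step the paper compares against can be sketched with a plain SVD of a snapshot matrix; the "flow" data below are synthetic two-mode snapshots, chosen by us for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic snapshot matrix: 200 time snapshots of a 64-point "velocity"
# field built from two spatial modes plus small noise (illustrative only).
x = np.linspace(0.0, np.pi, 64)
t = np.linspace(0.0, 20.0, 200)
snapshots = (np.outer(np.sin(t), np.sin(x))
             + 0.3 * np.outer(np.cos(2.0 * t), np.sin(2.0 * x))
             + 0.01 * rng.normal(size=(t.size, x.size)))

# POD = SVD of the centered snapshot matrix; columns of Vt are the modes.
Xc = snapshots - snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

def recon_error(k):
    # Reconstruction error when keeping the k leading POD modes.
    approx = (U[:, :k] * S[:k]) @ Vt[:k]
    return float(np.linalg.norm(Xc - approx))

e1, e2 = recon_error(1), recon_error(2)   # error drops once both modes are kept
```

This rank-k projection is exactly what a linear single-hidden-layer network reproduces; the paper's point is that making the hidden layer nonlinear improves on this linear-optimal reconstruction at modest extra cost.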

  5. Neural Networks In Mining Sciences - General Overview And Some Representative Examples

    Science.gov (United States)

    Tadeusiewicz, Ryszard

    2015-12-01

    The many difficult problems that must now be addressed in mining sciences make us search for ever newer and more efficient computer tools that can be used to solve those problems. Among the numerous tools of this type are the neural networks presented in this article - which, although not yet widely used in mining sciences, are certainly worth consideration. Neural networks are a technique which belongs to so-called artificial intelligence, and originates from the attempts to model the structure and functioning of biological nervous systems. Initially constructed and tested exclusively out of scientific curiosity, as computer models of parts of the human brain, neural networks have become a surprisingly effective calculation tool in many areas: in technology, medicine, economics, and even social sciences. Unfortunately, they are relatively rarely used in mining sciences and mining technology. The article is intended to convince the readers that neural networks can be very useful also in mining sciences. It contains information on how modern neural networks are built, how they operate and how one can use them. The preliminary discussion presented in this paper can help the reader form an opinion on whether this is a tool with handy properties, useful to them, and what purposes it might serve. Of course, the brief introduction to neural networks contained in this paper will not be enough for the readers who are convinced by the arguments presented here and want to use neural networks. They will still need a considerable portion of detailed knowledge before they can begin to independently create and build such networks, and use them in practice. However, an interested reader who decides to try out the capabilities of neural networks will also find here links to references that will allow a fast start with neural networks, and then efficient work with this handy tool. This will be easy, because there are currently quite a few ready-made computer

  6. Delay-Dependent Stability Criteria of Uncertain Periodic Switched Recurrent Neural Networks with Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Xing Yin

    2011-01-01

    uncertain periodic switched recurrent neural networks with time-varying delays. When the uncertain discrete-time recurrent neural network is a periodic system, it is expressed as a switched neural network over the finite set of switching states. Based on the switched quadratic Lyapunov functional approach (SQLF) and the free-weighting matrix approach (FWM), some linear matrix inequality criteria are found to guarantee the delay-dependent asymptotic stability of these systems. Two examples illustrate the effectiveness of the proposed criteria.

  7. Application of neural networks in CRM systems

    Directory of Open Access Journals (Sweden)

    Bojanowska Agnieszka

    2017-01-01

    Full Text Available The central aim of this study is to investigate how to apply artificial neural networks in Customer Relationship Management (CRM). The paper presents several business applications of neural networks in software systems designed to aid CRM, e.g. in deciding on the profitability of building a relationship with a given customer. Furthermore, a framework for a neural-network based CRM software tool is developed. Building beneficial relationships with customers is generating considerable interest among various businesses, and is often mentioned as one of the crucial objectives of enterprises, next to their key aim: to bring satisfactory profit. There is a growing tendency among businesses to invest in CRM systems, which together with an organisational culture of a company aid managing customer relationships. It is the sheer amount of gathered data, as well as the need for constant updating and analysis of this breadth of information, that may imply the suitability of neural networks for the application in question. Neural networks exhibit considerably higher computational capabilities than sequential calculations because the solution to a problem is obtained without the need for developing a special algorithm. In the majority of the presented CRM applications, neural networks serve as a tool for optimising managerial decision-taking.

  8. New exponential stability criteria for stochastic BAM neural networks with impulses

    International Nuclear Information System (INIS)

    Sakthivel, R; Samidurai, R; Anthoni, S M

    2010-01-01

    In this paper, we study the global exponential stability of time-delayed stochastic bidirectional associative memory neural networks with impulses and Markovian jumping parameters. A generalized activation function is considered, and traditional assumptions on the boundedness, monotonicity and differentiability of activation functions are removed. We obtain a new set of sufficient conditions in terms of linear matrix inequalities, which ensures the global exponential stability of the unique equilibrium point for stochastic BAM neural networks with impulses. The Lyapunov function method with the Itô differential rule is employed for achieving the required result. Moreover, a numerical example is provided to show that the proposed result improves the allowable upper bound of delays over some existing results in the literature.

  9. New exponential stability criteria for stochastic BAM neural networks with impulses

    Science.gov (United States)

    Sakthivel, R.; Samidurai, R.; Anthoni, S. M.

    2010-10-01

    In this paper, we study the global exponential stability of time-delayed stochastic bidirectional associative memory neural networks with impulses and Markovian jumping parameters. A generalized activation function is considered, and traditional assumptions on the boundedness, monotonicity and differentiability of activation functions are removed. We obtain a new set of sufficient conditions in terms of linear matrix inequalities, which ensures the global exponential stability of the unique equilibrium point for stochastic BAM neural networks with impulses. The Lyapunov function method with the Itô differential rule is employed for achieving the required result. Moreover, a numerical example is provided to show that the proposed result improves the allowable upper bound of delays over some existing results in the literature.

  10. Synchronization of Switched Interval Networks and Applications to Chaotic Neural Networks

    Directory of Open Access Journals (Sweden)

    Jinde Cao

    2013-01-01

    Full Text Available This paper investigates the synchronization problem of switched delay networks with interval parameter uncertainty. Based on the theory of switched systems and the drive-response technique, a mathematical model of the switched interval drive-response error system is established. Without constructing Lyapunov-Krasovskii functionals, by introducing the matrix measure method for the first time to switched time-varying delay networks and combining it with the Halanay inequality technique, synchronization criteria are derived for switched interval networks under an arbitrary switching rule, which are easy to verify in practice. Moreover, as an application, the proposed scheme is then applied to chaotic neural networks. Finally, numerical simulations are provided to illustrate the effectiveness of the theoretical results.
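
    A concrete instance of the matrix measure used in such criteria may help. For the 2-norm, mu2(A) = lambda_max((A + A^T)/2), and mu2(A) < 0 implies that trajectories of x' = Ax contract in the 2-norm; the example matrix below is ours, for illustration only.

```python
import numpy as np

def mu2(A):
    """Matrix measure (logarithmic norm) induced by the 2-norm."""
    return float(np.max(np.linalg.eigvalsh((A + A.T) / 2.0)))

A = np.array([[-3.0, 1.0],
              [0.0, -2.0]])

print(mu2(A))   # negative, so x' = Ax is contracting in the 2-norm
```

Unlike an eigenvalue test, the matrix measure bounds the instantaneous growth rate of the norm of solutions, which is what makes it combine cleanly with the Halanay inequality for delayed systems.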

  11. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  12. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  13. Stability results for stochastic delayed recurrent neural networks with discrete and distributed delays

    Science.gov (United States)

    Chen, Guiling; Li, Dingshi; Shi, Lin; van Gaans, Onno; Verduyn Lunel, Sjoerd

    2018-03-01

    We present new conditions for asymptotic stability and exponential stability of a class of stochastic recurrent neural networks with discrete and distributed time-varying delays. Our approach is based on fixed point theory and does not resort to any Liapunov function or Liapunov functional. Our results require neither the boundedness, monotonicity and differentiability of the activation functions nor the differentiability of the time-varying delays. In particular, a class of neural networks without stochastic perturbations is also considered. Examples are given to illustrate our main results.

  14. Differential neural network configuration during human path integration

    Science.gov (United States)

    Arnold, Aiden E. G. F; Burles, Ford; Bray, Signe; Levy, Richard M.; Iaria, Giuseppe

    2014-01-01

    Path integration is a fundamental skill for navigation in both humans and animals. Despite recent advances in unraveling the neural basis of path integration in animal models, relatively little is known about how path integration operates at a neural level in humans. Previous attempts to characterize the neural mechanisms used by humans to visually path integrate have suggested a central role of the hippocampus in allowing accurate performance, broadly resembling results from animal data. However, in recent years both the central role of the hippocampus and the perspective that animals and humans share similar neural mechanisms for path integration has come into question. The present study uses a data driven analysis to investigate the neural systems engaged during visual path integration in humans, allowing for an unbiased estimate of neural activity across the entire brain. Our results suggest that humans employ common task control, attention and spatial working memory systems across a frontoparietal network during path integration. However, individuals differed in how these systems are configured into functional networks. High performing individuals were found to more broadly express spatial working memory systems in prefrontal cortex, while low performing individuals engaged an allocentric memory system based primarily in the medial occipito-temporal region. These findings suggest that visual path integration in humans over short distances can operate through a spatial working memory system engaging primarily the prefrontal cortex and that the differential configuration of memory systems recruited by task control networks may help explain individual biases in spatial learning strategies. PMID:24808849

  15. Intelligent neural network and fuzzy logic control of industrial and power systems

    Science.gov (United States)

    Kuljaca, Ognjen

    The main role played by neural network and fuzzy logic intelligent control algorithms today is to identify and compensate unknown nonlinear system dynamics. There are a number of methods developed, but often the stability analysis of neural network and fuzzy control systems was not provided. This work addresses those problems for several algorithms. Some more complicated control algorithms, including backstepping and adaptive critics, will be designed. Nonlinear fuzzy control with nonadaptive fuzzy controllers is also analyzed. An experimental method for determining the describing function of a SISO fuzzy controller is given. The adaptive neural network tracking controller for an autonomous underwater vehicle is analyzed. A novel stability proof is provided. The implementation of the backstepping neural network controller for the coupled motor drives is described. Analysis and synthesis of adaptive critic neural network control is also provided in the work. Novel tuning laws for the system with an action-generating neural network and an adaptive fuzzy critic are given. Stability proofs are derived for all those control methods. It is shown how these control algorithms and approaches can be used in practical engineering control. Adaptive fuzzy logic control is analyzed. A simulation study is conducted to analyze the behavior of the adaptive fuzzy system under different environment changes. A novel stability proof for adaptive fuzzy logic systems is given. Also, an adaptive elastic fuzzy logic control architecture is described and analyzed. A novel membership function is used for the elastic fuzzy logic system. The stability proof is provided. Adaptive elastic fuzzy logic control is compared with adaptive nonelastic fuzzy logic control. The work described in this dissertation serves as a foundation on which analysis of particular representative industrial systems will be conducted.
Also, it gives a good starting point for analysis of learning abilities of

  16. Mode Choice Modeling Using Artificial Neural Networks

    OpenAIRE

    Edara, Praveen Kumar

    2003-01-01

    Artificial intelligence techniques have produced excellent results in many diverse fields of engineering. Techniques such as neural networks and fuzzy systems have found their way into transportation engineering. In recent years, neural networks are being used instead of regression techniques for travel demand forecasting purposes. The basic reason lies in the fact that neural networks are able to capture complex relationships and learn from examples and also able to adapt when new data becom...

  17. Recovery of Dynamics and Function in Spiking Neural Networks with Closed-Loop Control.

    Science.gov (United States)

    Vlachos, Ioannis; Deniz, Taşkin; Aertsen, Ad; Kumar, Arvind

    2016-02-01

    There is a growing interest in developing novel brain stimulation methods to control disease-related aberrant neural activity and to address basic neuroscience questions. Conventional methods for manipulating brain activity rely on open-loop approaches that usually lead to excessive stimulation and, crucially, do not restore the original computations performed by the network. Thus, they are often accompanied by undesired side-effects. Here, we introduce delayed feedback control (DFC), a conceptually simple but effective method, to control pathological oscillations in spiking neural networks (SNNs). Using mathematical analysis and numerical simulations we show that DFC can restore a wide range of aberrant network dynamics either by suppressing or enhancing synchronous irregular activity. Importantly, DFC, besides steering the system back to a healthy state, also recovers the computations performed by the underlying network. Finally, using our theory we identify the role of single neuron and synapse properties in determining the stability of the closed-loop system.
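
    Delayed feedback control itself can be illustrated on a much simpler system than a spiking network. In the sketch below, an oscillator with weak negative damping (a growing oscillation, standing in for pathological activity) is stabilized by the DFC signal u(t) = K(x(t - tau) - x(t)); the gain, delay, and plant are our illustrative choices, not the paper's SNN model.

```python
import numpy as np

def simulate(K, T=60.0, dt=0.01, tau=np.pi / 2.0):
    """Oscillator x'' = 0.1 x' - x + u with DFC u = K*(x(t-tau) - x(t))."""
    n, d = int(T / dt), int(tau / dt)
    x, v = np.zeros(n), np.zeros(n)
    x[:d + 1] = 1.0                        # constant initial history
    for i in range(d, n - 1):
        u = K * (x[i - d] - x[i])          # delayed feedback control signal
        a = 0.1 * v[i] - x[i] + u          # negative damping: grows when K = 0
        v[i + 1] = v[i] + dt * a
        x[i + 1] = x[i] + dt * v[i + 1]    # semi-implicit Euler step
    return x

free = simulate(0.0)   # uncontrolled: oscillation amplitude grows
ctrl = simulate(0.5)   # DFC on: oscillation is suppressed
```

For these parameters the uncontrolled amplitude grows by more than an order of magnitude over the run, while the controlled trajectory decays toward zero. Note the property the paper exploits: the control vanishes (u = 0) once the target state is reached, so a restored healthy state is left unperturbed.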

  18. Image recovery using diffusion equation embedded neural network

    International Nuclear Information System (INIS)

    Torkamani-Azar, F.

    2001-01-01

    Artificial neural networks, with their inherent parallelism, have been shown to perform well in many processing applications. In this paper, a new self-organizing approach for the recovery of gray-level images degraded by additive noise, based on embedding the diffusion equation in a neural network (without using a priori knowledge about the image point spread function, noise or original image), is described, which enhances and restores gray levels of degraded images and is intended for application in low-level processing. Two learning features have been proposed which would be effective in the practical implementation of such a network. The recovery procedure needs some parameter estimation, such as different error goals. While the required computation is not excessive, the procedure does not require too many iterations and convergence is very fast. In addition, through the simulation the new network showed a superior ability to give a better-quality result with a minimal sum of squared errors.
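
    The diffusion-equation ingredient can be shown on its own, outside any network. The sketch below iterates the explicit 1D heat-equation step on a noisy signal (one "image" row); the step size and iteration count are our illustrative choices, not the paper's learned parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Clean 1D signal (one "image" row) plus additive Gaussian noise.
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 256))
noisy = clean + rng.normal(0.0, 0.3, clean.size)

# Iterate the explicit diffusion step u <- u + dt * laplacian(u).
# dt <= 0.5 keeps the 1D explicit scheme stable; boundaries are periodic.
u = noisy.copy()
dt = 0.2
for _ in range(30):
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
    u = u + dt * lap

err_before = float(np.mean((noisy - clean)**2))
err_after = float(np.mean((u - clean)**2))
```

Diffusion smooths high-frequency noise much faster than the slowly varying signal, which is why embedding this dynamic in a network can denoise without a model of the noise or the point spread function.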

  19. Attractor neural networks with resource-efficient synaptic connectivity

    Science.gov (United States)

    Pehlevan, Cengiz; Sengupta, Anirvan

    Memories are thought to be stored in the attractor states of recurrent neural networks. Here we explore how resource constraints interplay with the memory storage function to shape the synaptic connectivity of attractor networks. We propose that, given a set of memories in the form of population activity patterns, the neural circuit chooses a synaptic connectivity configuration that minimizes a resource usage cost. We argue that the total synaptic weight (l1-norm) in the network measures the resource cost because synaptic weight is correlated with synaptic volume, which is a limited resource, and is proportional to neurotransmitter release and post-synaptic current, both of which cost energy. Using numerical simulations and replica theory, we characterize optimal connectivity profiles in resource-efficient attractor networks. Our theory explains several experimental observations on cortical connectivity profiles: 1) connectivity is sparse, because synapses are costly; 2) bidirectional connections are overrepresented; and 3) bidirectional connections are stronger, because attractor states need strong recurrence.

  20. Global exponential stability for reaction-diffusion recurrent neural networks with multiple time varying delays

    International Nuclear Information System (INIS)

    Lou, X.; Cui, B.

    2008-01-01

    In this paper we consider the problem of exponential stability for recurrent neural networks with multiple time varying delays and reaction-diffusion terms. The activation functions are supposed to be bounded and globally Lipschitz continuous. By means of Lyapunov functional, sufficient conditions are derived, which guarantee global exponential stability of the delayed neural network. Finally, a numerical example is given to show the correctness of our analysis. (author)

  1. Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation

    Directory of Open Access Journals (Sweden)

    M. Agatonović

    2012-12-01

    Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for the networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.

  2. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, the far most common measure...
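To illustrate what is meant by different error measures, here is a minimal sketch of two common choices (mean squared error and mean absolute error) on illustrative data; the paper's specific error functions for marine-structure simulation are not reproduced here.

```python
# Two common ANN training error measures; data values are illustrative.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.1, 1.9, 3.3]
print(mse(y_true, y_pred))  # ≈ 0.0367
print(mae(y_true, y_pred))  # ≈ 0.1667
```

MSE penalizes large deviations quadratically while MAE weights all deviations linearly, which is one reason the choice of error measure changes what a trained network emphasizes.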

  3. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others]

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  4. Prediction of Aerodynamic Coefficient using Genetic Algorithm Optimized Neural Network for Sparse Data

    Science.gov (United States)

    Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Wind tunnels use scale models to characterize aerodynamic coefficients. Wind tunnel testing can be slow and costly due to high personnel overhead and intensive power utilization. Although manual curve fitting can be done, it is highly efficient to use a neural network to define the complex relationship between variables. Numerical simulation of complex vehicles on the wide range of conditions required for flight simulation requires static and dynamic data. Static data at low Mach numbers and angles of attack may be obtained with simpler Euler codes. Static data of stalled vehicles, where zones of flow separation are usually present at higher angles of attack, require Navier-Stokes simulations, which are costly due to the large processing time required to attain convergence. Preliminary dynamic data may be obtained with simpler methods based on correlations and vortex methods; however, accurate prediction of the dynamic coefficients requires complex and costly numerical simulations. A reliable and fast method of predicting complex aerodynamic coefficients for flight simulation is presented using a neural network. The training data for the neural network are derived from numerical simulations and wind-tunnel experiments. The aerodynamic coefficients are modeled as functions of the flow characteristics and the control surfaces of the vehicle. The basic coefficients of lift, drag and pitching moment are expressed as functions of angle of attack and Mach number. The modeled and training aerodynamic coefficients show good agreement. This method shows excellent potential for rapid development of aerodynamic models for flight simulation. Genetic Algorithms (GA) are used to optimize a previously built Artificial Neural Network (ANN) that reliably predicts aerodynamic coefficients. Results indicate that the GA provided an efficient method of optimizing the ANN model to predict aerodynamic coefficients. The reliability of the ANN using the GA includes prediction of aerodynamic
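As a rough illustration of the GA-optimization idea in this abstract, the sketch below evolves the two parameters of a one-neuron linear model toward data generated by y = 2x + 1. The population size, operators, and data are assumptions for illustration, not the authors' actual GA/ANN setup.

```python
import random

# Minimal GA sketch: evolve (w, b) of a one-neuron linear model y = w*x + b
# toward illustrative data; selection + averaging crossover + Gaussian mutation.
random.seed(0)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # generated from w=2, b=1

def fitness(ind):
    w, b = ind
    return -sum((w * x + b - y) ** 2 for x, y in zip(xs, ys))  # higher is better

pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # elitist selection
    children = []
    while len(children) < 30:
        (w1, b1), (w2, b2) = random.sample(parents, 2)
        w = 0.5 * (w1 + w2) + random.gauss(0, 0.1)   # crossover + mutation
        b = 0.5 * (b1 + b2) + random.gauss(0, 0.1)
        children.append((w, b))
    pop = parents + children

best = max(pop, key=fitness)
print(best)  # should approach (2.0, 1.0)
```

In the paper's setting the genome would encode network weights or hyperparameters rather than two scalars, but the select-recombine-mutate loop is the same.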

  5. Design and Modeling of RF Power Amplifiers with Radial Basis Function Artificial Neural Networks

    OpenAIRE

    Ali Reza Zirak; Sobhan Roshani

    2016-01-01

    A radial basis function (RBF) artificial neural network model for a designed high efficiency radio frequency class-F power amplifier (PA) is presented in this paper. The presented amplifier is designed at 1.8 GHz operating frequency with 12 dB of gain and 36 dBm of 1dB output compression point. The obtained power added efficiency (PAE) for the presented PA is 76% under 26 dBm input power. The proposed RBF model uses input and DC power of the PA as inputs variables and considers output power a...

  6. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
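The χ²-style comparison of original versus network-calculated doses mentioned above can be sketched as follows; the dose values here are illustrative, not the paper's data.

```python
# Chi-square comparison of network-calculated vs. original (expected) values;
# all numbers below are illustrative.
def chi_square(calculated, original):
    return sum((c - o) ** 2 / o for c, o in zip(calculated, original))

original = [10.0, 20.0, 30.0]
calculated = [11.0, 19.0, 31.0]
print(chi_square(calculated, original))  # 1/10 + 1/20 + 1/30 ≈ 0.1833
```

A small χ² value indicates that the network's dose estimates closely track the reference doses.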

  7. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A.; Gallego, E.; Lorente, A.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  8. Fragility in dynamic networks: application to neural networks in the epileptic cortex.

    Science.gov (United States)

    Sritharan, Duluxan; Sarma, Sridevi V

    2014-10-01

    Epilepsy is a network phenomenon characterized by atypical activity at the neuronal and population levels during seizures, including tonic spiking, increased heterogeneity in spiking rates, and synchronization. The etiology of epilepsy is unclear, but a common theme among proposed mechanisms is that structural connectivity between neurons is altered. It is hypothesized that epilepsy arises not from random changes in connectivity, but from specific structural changes to the most fragile nodes or neurons in the network. In this letter, the minimum energy perturbation on functional connectivity required to destabilize linear networks is derived. Perturbation results are then applied to a probabilistic nonlinear neural network model that operates at a stable fixed point. That is, if a small stimulus is applied to the network, the activation probabilities of each neuron respond transiently but eventually recover to their baseline values. When the perturbed network is destabilized, the activation probabilities shift to larger or smaller values or oscillate when a small stimulus is applied. Finally, the structural modifications to the neural network that achieve the functional perturbation are derived. Simulations of the unperturbed and perturbed networks qualitatively reflect neuronal activity observed in epilepsy patients, suggesting that the changes in network dynamics due to destabilizing perturbations, including the emergence of an unstable manifold or a stable limit cycle, may be indicative of neuronal or population dynamics during seizure. That is, the epileptic cortex is always on the brink of instability and minute changes in the synaptic weights associated with the most fragile node can suddenly destabilize the network to cause seizures. Finally, the theory developed here and its interpretation of epileptic networks enables the design of a straightforward feedback controller that first detects when the network has destabilized and then applies linear state

  9. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.

  10. PERFORMANCE EVALUATION OF VARIANCES IN BACKPROPAGATION NEURAL NETWORK USED FOR HANDWRITTEN CHARACTER RECOGNITION

    OpenAIRE

    Vairaprakash Gurusamy; K. Nandhini

    2017-01-01

    A Neural Network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform "intelligent" tasks similar to those performed by the human brain. Backpropagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. The term back pro...
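The Widrow-Hoff (LMS/delta) rule that backpropagation generalizes can be sketched on a single linear unit; the learning rate, data, and epoch count below are illustrative assumptions.

```python
# Widrow-Hoff (delta) rule on a single linear unit; backpropagation extends
# this update to multiple layers and nonlinear differentiable transfer functions.
def widrow_hoff(samples, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - (w * x + b)   # prediction error
            w += lr * err * x            # delta rule weight update
            b += lr * err                # bias update
    return w, b

# Illustrative data from y = 2x + 1; the learned (w, b) approach (2, 1).
w, b = widrow_hoff([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
print(round(w, 3), round(b, 3))
```

Each update nudges the weights proportionally to the error and the input, which is exactly the gradient step on the squared error of that sample.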

  11. Artificial neural networks for plasma spectroscopy analysis

    International Nuclear Information System (INIS)

    Morgan, W.L.; Larsen, J.T.; Goldstein, W.H.

    1992-01-01

    Artificial neural networks have been applied to a variety of signal processing and image recognition problems. Of the several common neural models the feed-forward, back-propagation network is well suited for the analysis of scientific laboratory data, which can be viewed as a pattern recognition problem. The authors present a discussion of the basic neural network concepts and illustrate its potential for analysis of experiments by applying it to the spectra of laser produced plasmas in order to obtain estimates of electron temperatures and densities. Although these are high temperature and density plasmas, the neural network technique may be of interest in the analysis of the low temperature and density plasmas characteristic of experiments and devices in gaseous electronics

  12. Multimodal functional network connectivity: an EEG-fMRI fusion in network space.

    Directory of Open Access Journals (Sweden)

    Xu Lei

    Full Text Available EEG and fMRI recordings measure the functional activity of multiple coherent networks distributed in the cerebral cortex. Identifying network interaction from the complementary neuroelectric and hemodynamic signals may help to explain the complex relationships between different brain regions. In this paper, multimodal functional network connectivity (mFNC) is proposed for the fusion of EEG and fMRI in network space. First, functional networks (FNs) are extracted using spatial independent component analysis (ICA) in each modality separately. Then the interactions among FNs in each modality are explored by Granger causality analysis (GCA). Finally, fMRI FNs are matched to EEG FNs in the spatial domain using network-based source imaging (NESOI). Investigations of both synthetic and real data demonstrate that mFNC has the potential to reveal the underlying neural networks of each modality separately and in their combination. With mFNC, comprehensive relationships among FNs might be unveiled for the deep exploration of neural activities and metabolic responses in a specific task or neurological state.

  13. Neural network for nonsmooth pseudoconvex optimization with general convex constraints.

    Science.gov (United States)

    Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping

    2018-05-01

    In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Finite-Time Stabilization and Adaptive Control of Memristor-Based Delayed Neural Networks.

    Science.gov (United States)

    Wang, Leimin; Shen, Yi; Zhang, Guodong

    Finite-time stability problem has been a hot topic in control and system engineering. This paper deals with the finite-time stabilization issue of memristor-based delayed neural networks (MDNNs) via two control approaches. First, in order to realize the stabilization of MDNNs in finite time, a delayed state feedback controller is proposed. Then, a novel adaptive strategy is applied to the delayed controller, and finite-time stabilization of MDNNs can also be achieved by using the adaptive control law. Some easily verified algebraic criteria are derived to ensure the stabilization of MDNNs in finite time, and the estimation of the settling time functional is given. Moreover, several finite-time stability results as our special cases for both memristor-based neural networks (MNNs) without delays and neural networks are given. Finally, three examples are provided for the illustration of the theoretical results.

  15. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks based on gradient descent has the significant drawback of slow convergence. A Gauss-Newton-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.
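The core RLS recursion underlying such Gauss-Newton-type training can be sketched for a linear-in-weights model; this is the standard RLS update, not the paper's exact recurrent-network algorithm, and the data and initialization are illustrative.

```python
import numpy as np

# Standard recursive least squares (RLS) update for a linear-in-weights model.
def rls_fit(X, d, lam=1.0, delta=1e4):
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                 # inverse-correlation matrix estimate
    for x, y in zip(X, d):
        Px = P @ x
        k = Px / (lam + x @ Px)           # gain vector
        w = w + k * (y - w @ x)           # weight update on prediction error
        P = (P - np.outer(k, Px)) / lam   # covariance update (P symmetric)
    return w

# Illustrative noiseless data from y = 3x + 0.5 with features [x, 1].
X = np.array([[x, 1.0] for x in np.linspace(0.0, 1.0, 50)])
d = 3.0 * X[:, 0] + 0.5
w = rls_fit(X, d)
print(w)  # close to [3.0, 0.5]
```

Unlike plain gradient descent, each RLS step uses second-order (correlation) information, which is the source of the faster convergence the abstract refers to.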

  16. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  17. Fault detection and diagnosis for complex multivariable processes using neural networks

    International Nuclear Information System (INIS)

    Weerasinghe, M.

    1998-06-01

    Development of a reliable fault diagnosis method for large-scale industrial plants is laborious and often difficult to achieve due to the complexity of the targeted systems. The main objective of this thesis is to investigate the application of neural networks to the diagnosis of non-catastrophic faults in an industrial nuclear fuel processing plant. The proposed methods were initially developed by application to a simulated chemical process prior to further validation on real industrial data. The diagnosis of faults at a single operating point is first investigated. Statistical data conditioning methods of data scaling and principal component analysis are investigated to facilitate fault classification and reduce the complexity of neural networks. Successful fault diagnosis was achieved with significantly smaller networks than using all process variables as network inputs. Industrial processes often manufacture at various operating points, but demonstrated applications of neural networks for fault diagnosis usually only consider a single (primary) operating point. Developing a standard neural network scheme for fault diagnosis at all operating points would be usually impractical due to the unavailability of suitable training data for less frequently used (secondary) operating points. To overcome this problem, the application of a single neural network for the diagnosis of faults operating at different points is investigated. The data conditioning followed the same techniques as used for the fault diagnosis of a single operating point. The results showed that a single neural network could be successfully used to diagnose faults at operating points other than that it is trained for, and the data conditioning significantly improved the classification. Artificial neural networks have been shown to be an effective tool for process fault diagnosis. However, a main criticism is that details of the procedures taken to reach the fault diagnosis decisions are embedded in

  18. Stellar Image Interpretation System using Artificial Neural Networks: Unipolar Function Case

    Directory of Open Access Journals (Sweden)

    F. I. Younis

    2001-01-01

    Full Text Available An artificial neural network based system for interpreting astronomical images has been developed. The system is based on feed-forward Artificial Neural Networks (ANNs) with error back-propagation learning. Knowledge about images of stars, cosmic ray events and noise found in images is used to prepare two sets of input patterns to train and test our approach. The system has been developed and implemented to scan astronomical digital images in order to segregate stellar images from other entities. It has been coded in C language for users of personal computers. An astronomical image of a star cluster is undertaken as a test case, segregating stellar images from other objects. The obtained results are found to be in very good agreement with those derived from the DAOPHOTII package, which is widely used in the astronomical community. It is shown that our system is simpler, much faster and more reliable. Moreover, no prior knowledge, or initial data from the frame to be analysed, is required.

  19. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    Full Text Available In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of a signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists in minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents mathematical proof about the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation; thus making the algorithm suitable for real-time implementations.
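The second stage described above, estimating Fourier coefficients once the period is known, can be sketched by discrete projection; the signal, period, and sample count below are illustrative assumptions, and the paper's neural estimation is replaced here by the closed-form projection it approximates.

```python
import math

# Recover first-harmonic Fourier coefficients of a sampled periodic signal by
# discrete projection, assuming the period T has already been estimated.
def fourier_coeffs(samples, T, dt, k=1):
    """Return (a_k, b_k) for s(t) ≈ a_k·cos(2πkt/T) + b_k·sin(2πkt/T)."""
    a = (2 * dt / T) * sum(s * math.cos(2 * math.pi * k * i * dt / T)
                           for i, s in enumerate(samples))
    b = (2 * dt / T) * sum(s * math.sin(2 * math.pi * k * i * dt / T)
                           for i, s in enumerate(samples))
    return a, b

T, N = 2.0, 1000
dt = T / N
signal = [0.5 * math.cos(2 * math.pi * i * dt / T)
          + 1.5 * math.sin(2 * math.pi * i * dt / T) for i in range(N)]
a1, b1 = fourier_coeffs(signal, T, dt)
print(a1, b1)  # close to 0.5 and 1.5
```

Sampling exactly one period makes the discrete sinusoids orthogonal, so the projection recovers the coefficients essentially exactly.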

  20. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  1. Neural-network-directed alignment of optical systems using the laser-beam spatial filter as an example

    Science.gov (United States)

    Decker, Arthur J.; Krasowski, Michael J.; Weiland, Kenneth E.

    1993-01-01

    This report describes an effort at NASA Lewis Research Center to use artificial neural networks to automate the alignment and control of optical measurement systems. Specifically, it addresses the use of commercially available neural network software and hardware to direct alignments of the common laser-beam-smoothing spatial filter. The report presents a general approach for designing alignment records and combining these into training sets to teach optical alignment functions to neural networks and discusses the use of these training sets to train several types of neural networks. Neural network configurations used include the adaptive resonance network, the back-propagation-trained network, and the counter-propagation network. This work shows that neural networks can be used to produce robust sequencers. These sequencers can learn by example to execute the step-by-step procedures of optical alignment and also can learn adaptively to correct for environmentally induced misalignment. The long-range objective is to use neural networks to automate the alignment and operation of optical measurement systems in remote, harsh, or dangerous aerospace environments. This work also shows that when neural networks are trained by a human operator, training sets should be recorded, training should be executed, and testing should be done in a manner that does not depend on intellectual judgments of the human operator.

  2. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 till the present. The neural network Fortran code used is available for download.

  3. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    Science.gov (United States)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and a grey algorithm to forecast and study the radar wind field. To reduce the residual error of the wind field prediction made with the grey algorithm, a BP neural network is trained on the grey algorithm's residual sequence; the trained network model then forecasts the residual sequence, and the predicted residuals are used to correct the grey algorithm's forecast sequence. The test data show that the grey algorithm modified by the BP neural network can effectively reduce the residual error and improve the prediction precision.
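The grey-prediction step in such hybrid schemes is usually a GM(1,1) model; the sketch below implements a standard GM(1,1) forecaster on an illustrative geometric series. The BP residual-correction network itself is not shown, and this is a generic GM(1,1), not necessarily the paper's exact formulation.

```python
import numpy as np

# Standard grey GM(1,1) forecaster: fit an exponential trend to the
# accumulated series and difference it back to forecast future values.
def gm11_forecast(x0, steps=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating series
    z = 0.5 * (x1[1:] + x1[:-1])              # background (mean) sequence
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]                   # forecast of future x0 values

x0 = [2 * 1.1 ** k for k in range(5)]         # illustrative geometric series
print(gm11_forecast(x0, 1)[0])                # close to 2 * 1.1**5 ≈ 3.221
```

In the hybrid scheme, the gap between `x0` and the GM(1,1) fit (the residual sequence) is what the BP network is trained to predict and subtract out.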

  4. Implementing size-optimal discrete neural networks requires analog circuitry

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-01

    This paper starts by overviewing results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions the authors show that implementing Boolean functions can be done using neurons having an identity transfer function. Because in this case the size of the network is minimized, it follows that size-optimal solutions for implementing Boolean functions can be obtained using analog circuitry. Conclusions and several comments on the required precision end the paper.
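As a concrete instance of implementing a Boolean function with threshold gates, the kind of circuit whose size the paper bounds, here is a depth-2 circuit for XOR; the particular weights and thresholds are illustrative.

```python
# A depth-2 threshold-gate circuit computing XOR; weights/thresholds illustrative.
def gate(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor(a, b):
    h_or = gate([a, b], [1, 1], 1)          # fires if a OR b
    h_and = gate([a, b], [1, 1], 2)         # fires if a AND b
    return gate([h_or, h_and], [1, -1], 1)  # OR but not AND

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

XOR is not computable by a single threshold gate, which is why depth (or, as the paper argues, analog precision) must be traded against circuit size.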

  5. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for Quark-Gluon discrimination using calorimeters data, but unfortunately I didn't manage to get the calorimeters data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data was not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  6. A study on neural network representation of reactor power control procedures 2

    International Nuclear Information System (INIS)

    Moon, Byung Soo; Park, Jea Chang; Kim, Young Taek; Lee, Hee Cho; Yang, Sung Uoon; Hwang, Hee Sun; Hwang, In Ah

    1998-12-01

    The major results of this study are as follows; the first is the algorithm developed through this study for computing the spline interpolation coefficients without solving the matrix equation involved. This is expected to be used in various numerical analysis problems. If this algorithm can be extended to functions of two independent variables in the future, then it could be a big help for the finite element method used in solving various boundary value problems. The second is the method developed to reduce systematically the number of output fuzzy sets for fuzzy systems representing functions of two variables. This may be considered as an indication that the neural network representation of functions has advantages over other conventional methods. The third result is an artificial neural network system developed for automating the manual procedures being used to change the reactor power level by adding boric acid or water to the reactor coolant. This along with the neural networks developed earlier can be used in nuclear power plants as an operator aid after a verification process. (author). 8 refs., 13 tabs., 5 figs

  7. A study on neural network representation of reactor power control procedures 2

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Byung Soo; Park, Jea Chang; Kim, Young Taek; Lee, Hee Cho; Yang, Sung Uoon; Hwang, Hee Sun; Hwang, In Ah

    1998-12-01

    The major results of this study are as follows: the first is the algorithm developed through this study for computing the spline interpolation coefficients without solving the matrix equation involved. This is expected to be used in various numerical analysis problems. If this algorithm can be extended to functions of two independent variables in the future, then it could be a big help for the finite element method used in solving various boundary value problems. The second is the method developed to systematically reduce the number of output fuzzy sets for fuzzy systems representing functions of two variables. This may be considered as an indication that the neural network representation of functions has advantages over other conventional methods. The third result is an artificial neural network system developed for automating the manual procedures used to change the reactor power level by adding boric acid or water to the reactor coolant. This, along with the neural networks developed earlier, can be used in nuclear power plants as an operator aid after a verification process. (author). 8 refs., 13 tabs., 5 figs.
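
    The tridiagonal matrix equation that the study's algorithm avoids can be made concrete with a minimal SciPy sketch (illustrative data only, not from the report); `CubicSpline` sets up and solves exactly that kind of system internally to obtain the piecewise coefficients:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sample a smooth function at a few knots (illustrative data only).
x = np.linspace(0.0, 2.0 * np.pi, 9)
y = np.sin(x)

# CubicSpline solves the tridiagonal matrix equation for the
# piecewise polynomial coefficients -- the step the study's
# algorithm is designed to avoid.
cs = CubicSpline(x, y)

print(cs.c.shape)  # coefficient array: (4, 8), one cubic per interval
print(float(abs(cs(np.pi / 3) - np.sin(np.pi / 3))))  # small error
```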

  8. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  9. A neural network approach to burst detection.

    Science.gov (United States)

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.

  10. File access prediction using neural networks.

    Science.gov (United States)

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap between memory and disk access times. To address this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors using neural networks that, with proper tuning, significantly improve the accuracy, success-per-reference, and effective-success-rate-per-reference. In particular, we verified that incorrect predictions are reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration, compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve the misprediction rate and effective-success-rate-per-reference beyond the standard configuration. Simulations on distributed file system (DFS) traces reveal that the exact fit radial basis function (RBF) network gives better predictions on high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation performs best on systems with good computational capability. Probabilistic and competitive predictors are the most suitable for workstations with limited resources, and the former predictor is more efficient than the latter for servers handling the most system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than the simple perceptron, last successor, stable successor, and best k out of m predictors.
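
    The next-file prediction task the abstract compares RBF and MLP networks on can be pictured with a toy sketch. The snippet below is a minimal scikit-learn stand-in, not the paper's setup: the access trace is invented, and the `lbfgs` solver substitutes for Levenberg-Marquardt training, which scikit-learn does not provide:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy file-access trace with a repeating pattern (illustrative only;
# the paper used real DFS traces).
trace = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]
k = 2  # number of past accesses used as features
X = [trace[i:i + k] for i in range(len(trace) - k)]
y = [trace[i + k] for i in range(len(trace) - k)]

# MLP predictor; 'lbfgs' stands in for the paper's LM training.
clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    random_state=0, max_iter=500)
clf.fit(X, y)
print(clf.predict([[1, 2]]))  # predicted next file after accessing 1 then 2
```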

  11. Neural network classifier of attacks in IP telephony

    Science.gov (United States)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and this article brings a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is the multilayer perceptron neural network. The article describes the inner structure of the neural network used and gives information about its implementation. The learning set for this neural network is based on real attack data collected from an IP telephony honeypot called Dionaea. We prepare the learning set from real attack data after collecting, cleaning, and aggregating this information. After proper learning, the neural network is capable of classifying the 6 most commonly used types of VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks that are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precaution steps against attacks.

  12. Single-Iteration Learning Algorithm for Feed-Forward Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Cogswell, R.; Protopopescu, V.

    1999-07-31

    A new methodology for neural learning is presented, whereby only a single iteration is required to train a feed-forward network with near-optimal results. To this aim, a virtual input layer is added to the multi-layer architecture. The virtual input layer is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This algorithm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network.
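
    The key idea, determining inter-layer weights in one shot from an SVD-based least-squares solve rather than iterative training, can be illustrated with a simplified single-hidden-layer sketch. This is a hypothetical extreme-learning-machine-style variant on toy data, not the paper's alternating-direction SVD with a virtual input layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) on [0, pi].
X = np.linspace(0.0, np.pi, 40).reshape(-1, 1)
y = np.sin(X)

# Fixed random hidden layer (the paper instead inserts a virtual
# input layer and determines all weights via alternating SVDs).
W_in = rng.normal(size=(1, 20))
b = rng.normal(size=20)
H = np.tanh(X @ W_in + b)  # hidden-layer activations

# Output weights in a single step: lstsq uses the SVD internally,
# so no iterative gradient descent is needed.
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ W_out
print(float(np.max(np.abs(pred - y))))  # near-zero fit error
```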

  13. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available One of the most important tasks in the machine learning area is classification, and neural networks are very important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in this paper. After summarizing the theoretical construction of the model, we conduct a simple experiment and a practical weather forecasting experiment, which show that the recognizer's accuracy reaches 100%, a promising result.

  14. Functional electrical stimulation controlled by artificial neural networks: pilot experiments with simple movements are promising for rehabilitation applications.

    Science.gov (United States)

    Ferrante, Simona; Pedrocchi, Alessandra; Iannò, Marco; De Momi, Elena; Ferrarin, Maurizio; Ferrigno, Giancarlo

    2004-01-01

    This study falls within the ambit of research on functional electrical stimulation for the design of rehabilitation training for spinal cord injured patients. In this context, a crucial issue is the control of the stimulation parameters in order to optimize the patterns of muscle activation and to increase the duration of the exercises. An adaptive control system (NEURADAPT) based on artificial neural networks (ANNs) was developed to control the knee joint in accordance with desired trajectories by stimulating quadriceps muscles. This strategy includes an inverse neural model of the stimulated limb in the feedforward line and a neural network trained on-line in the feedback loop. NEURADAPT was compared with a linear closed-loop proportional integrative derivative (PID) controller and with a model-based neural controller (NEUROPID). Experiments on two subjects (one healthy and one paraplegic) show the good performance of NEURADAPT, which is able to reduce the time lag introduced by the PID controller. In addition, control systems based on ANN techniques do not require complicated calibration procedures at the beginning of each experimental session. After the initial learning phase, the ANN, thanks to its generalization capacity, is able to cope with a certain range of variability of skeletal muscle properties.

  15. Global exponential stability for discrete-time neural networks with variable delays

    International Nuclear Information System (INIS)

    Chen Wuhua; Lu Xiaomei; Liang Dongying

    2006-01-01

    This Letter provides new exponential stability criteria for discrete-time neural networks with variable delays. The main technique is to reduce the exponential convergence estimation of the neural network solution to that of one component of the corresponding solution by constructing a Lyapunov function based on an M-matrix. By introducing a tuning-parameter diagonal matrix, the delay-independent and delay-dependent exponential stability conditions are unified in the same mathematical formula. The effectiveness of the new results is illustrated by three examples.

  16. Introduction to neural networks with electric power applications

    International Nuclear Information System (INIS)

    Wildberger, A.M.; Hickok, K.A.

    1990-01-01

    This is an introduction to the general field of neural networks with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in neural networks and to recognize those which might impact on electric power engineering. Beginning with a brief discussion of natural and artificial neurons, the characteristics of neural networks in general and how they learn, neural networks are compared with other modeling tools such as simulation and expert systems in order to provide guidance in selecting appropriate applications. In the power industry, possible applications include plant control, dispatching, and maintenance scheduling. In particular, neural networks are currently being investigated for enhancements to the Thermal Performance Advisor (TPA) which General Physics Corporation (GP) has developed to improve the efficiency of electric power generation

  17. Real-space mapping of topological invariants using artificial neural networks

    Science.gov (United States)

    Carvalho, D.; García-Martínez, N. A.; Lado, J. L.; Fernández-Rossier, J.

    2018-03-01

    Topological invariants allow one to characterize Hamiltonians, predicting the existence of topologically protected in-gap modes. Those invariants can be computed by tracing the evolution of the occupied wave functions under twisted boundary conditions. However, those procedures do not allow one to calculate a topological invariant by evaluating the system locally, and thus require information about the wave functions in the whole system. Here we show that artificial neural networks can be trained to identify the topological order by evaluating a local projection of the density matrix. We demonstrate this for two different models, a one-dimensional topological superconductor and a two-dimensional quantum anomalous Hall state, both with spatially modulated parameters. Our neural network correctly identifies the different topological domains in real space, predicting the location of in-gap states. By combining a neural network with a calculation of the electronic states that uses the kernel polynomial method, we show that the local evaluation of the invariant can be carried out by evaluating a local quantity, in particular for systems without translational symmetry consisting of tens of thousands of atoms. Our results show that supervised learning is an efficient methodology to characterize the local topology of a system.

  18. Controlling the dynamics of multi-state neural networks

    International Nuclear Information System (INIS)

    Jin, Tao; Zhao, Hong

    2008-01-01

    In this paper, we first analyze the distribution of local fields (DLF) which is induced by the memory patterns in the Q-Ising model. It is found that the structure of the DLF is closely correlated with the network dynamics and the system performance. However, the design rule adopted in the Q-Ising model, like the other rules adopted for multi-state neural networks with associative memories, cannot be applied to directly control the DLF for a given set of memory patterns, and thus cannot be applied to further study the relationships between the structure of the DLF and the dynamics of the network. We then extend a design rule, which was presented recently for designing binary-state neural networks, to make it suitable for designing general multi-state neural networks. This rule is able to control the structure of the DLF as expected. We show that controlling the DLF not only can affect the dynamic behaviors of the multi-state neural networks for a given set of memory patterns, but also can improve the storage capacity. With the change of the DLF, the network shows very rich dynamic behaviors, such as the 'chaos phase', the 'memory phase', and the 'mixture phase'. These dynamic behaviors are also observed in the binary-state neural networks; therefore, our results imply that they may be the universal behaviors of feedback neural networks

  19. Face recognition based on improved BP neural network

    Directory of Open Access Journals (Sweden)

    Yue Gaili

    2017-01-01

    Full Text Available In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA, and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract the features of the histogram-equalized image and obtain its principal components. The BP neural network is then trained on the training samples. An improved BP weight adjustment method is used to train the network, because the conventional BP algorithm has the disadvantages of slow convergence and easily falling into local minima during training. Finally, the trained BP neural network classifies and identifies the face images in the test samples, and the recognition rate is obtained. Through a simulation experiment on face images from the ORL database, the analysis results show that the improved BP neural network face recognition method can effectively improve the recognition rate of face recognition.
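
    The pipeline shape described above (preprocess, PCA feature extraction, then a BP-trained network) can be sketched with scikit-learn. The digits dataset stands in for the ORL faces, histogram equalization is omitted for these small images, and the hyperparameters are assumptions, not the paper's:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Stand-in for the ORL face images: the scikit-learn digits set.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA extracts the principal components; the MLP plays the role of
# the BP-trained classifier.
pca = PCA(n_components=30).fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500,
                    random_state=0).fit(pca.transform(X_tr), y_tr)

print(clf.score(pca.transform(X_te), y_te))  # test recognition rate
```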

  20. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  1. Bayesian and neural networks for preliminary ship design

    DEFF Research Database (Denmark)

    Clausen, H. B.; Lützen, Marie; Friis-Hansen, Andreas

    2001-01-01

    000 ships is acquired and various methods for derivation of empirical relations are employed. A regression analysis is carried out to fit functions to the data. Further, the data are used to learn Bayesian and neural networks to encode the relations between the characteristics. On the basis...

  2. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions and can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN than in a classical feedforward neural network to obtain a good result. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  3. Assessing Rainfall Erosivity with Artificial Neural Networks for the Ribeira Valley, Brazil

    Directory of Open Access Journals (Sweden)

    Reginald B. Silva

    2010-01-01

    Full Text Available Soil loss is one of the main causes of pauperization and alteration of agricultural soil properties. Various empirical models (e.g., USLE) are used to predict soil losses from climate variables which in general have to be derived from spatial interpolation of point measurements. Alternatively, Artificial Neural Networks may be used as a powerful option to obtain site-specific climate data from independent factors. This study aimed to develop an artificial neural network to estimate rainfall erosivity in the Ribeira Valley and Coastal region of the State of São Paulo. In the development of the Artificial Neural Networks, the input variables were latitude, longitude, and annual rainfall, and the output variable was a mathematical equation of the activation function for use in the study area. It was found, among other things, that Artificial Neural Networks can be used to interpolate rainfall erosivity values for the Ribeira Valley and Coastal region of the State of São Paulo with a satisfactory degree of precision in the estimation of erosion. The equation's performance has been demonstrated by comparison with the mathematical equation of the activation function adjusted to the specific conditions of the study area.

  4. Memory in Neural Networks and Glasses

    NARCIS (Netherlands)

    Heerema, M.

    2000-01-01

    The thesis models a neural network in a way that is, at essential points, biologically realistic. In a biological context, the changes of the synapses of the neural network are most often described by what is called `Hebb's learning rule'. On careful analysis it is, in fact, nothing but a

  5. Simulation Study on the Application of the Generalized Entropy Concept in Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2018-04-01

    Full Text Available Artificial neural networks are currently one of the most commonly used classifiers, and over recent years they have been successfully used in many practical applications, including banking and finance, health and medicine, and engineering and manufacturing. A large number of error functions have been proposed in the literature to achieve better predictive power. However, only a few works employ Tsallis statistics, although the method itself has been successfully applied in other machine learning techniques. This paper undertakes the effort to examine the q-generalized function based on Tsallis statistics as an alternative error measure in neural networks. In order to validate different performance aspects of the proposed function and to enable identification of its strengths and weaknesses, an extensive simulation was prepared based on an artificial benchmarking dataset. The results indicate that the Tsallis entropy error function can be successfully introduced in neural networks, yielding satisfactory results and handling class imbalance, noise in the data, or the use of non-informative predictors.
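
    A q-generalized error measure of the kind the abstract describes can be built from the Tsallis q-logarithm, which recovers the natural logarithm as q approaches 1. The sketch below is one plausible form (a q-generalized cross-entropy); the paper's exact functional form may differ:

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm: recovers np.log(x) as q -> 1."""
    if q == 1.0:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_cross_entropy(y_true, y_pred, q):
    """q-generalized cross-entropy error, -sum(y * ln_q(p)).

    A sketch of a Tsallis-statistics error measure; the paper's
    exact definition may differ from this form.
    """
    eps = 1e-12
    return float(-np.sum(y_true * q_log(np.clip(y_pred, eps, 1.0), q)))

y = np.array([0.0, 1.0, 0.0])   # one-hot target
p = np.array([0.1, 0.8, 0.1])   # predicted class probabilities

# At q = 1 the loss reduces to the ordinary cross-entropy -log(0.8).
print(abs(q_cross_entropy(y, p, 1.0) - (-np.log(0.8))) < 1e-9)
print(q_cross_entropy(y, p, 0.5))  # a q != 1 variant of the loss
```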

  6. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a kind of sparse reconstruction problem. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of selecting the element in the set-valued map of the differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, some numerical experiments show the effectiveness of the proposed network.
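
    The underlying idea, replacing the nonsmooth l1 term with a smooth approximation so that ordinary gradients exist everywhere, can be sketched in a few lines. This is plain gradient descent on hypothetical data; it omits the projection step and the circuit-level differential-inclusion dynamics of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse recovery toy problem: x_true has 2 nonzeros out of 20.
A = rng.normal(size=(15, 20))
x_true = np.zeros(20)
x_true[3], x_true[11] = 1.5, -2.0
b = A @ x_true

lam, mu = 0.01, 1e-3   # l1 weight and smoothing parameter
x = np.zeros(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(20000):
    # Gradient of 0.5*||Ax - b||^2 + lam * sum(sqrt(x^2 + mu^2)),
    # the smoothed surrogate of l1-regularized least squares.
    grad = A.T @ (A @ x - b) + lam * x / np.sqrt(x ** 2 + mu ** 2)
    x = x - step * grad

print(np.argsort(np.abs(x))[-2:])  # should recover indices 3 and 11
```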

  7. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely recurrent neural network with rprop update algorithm and is applied for wave forecasting. Measured ocean waves off...

  8. Stimulus-dependent suppression of chaos in recurrent neural networks

    International Nuclear Information System (INIS)

    Rajan, Kanaka; Abbott, L. F.; Sompolinsky, Haim

    2010-01-01

    Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, but they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a nonmonotonic function of stimulus frequency, revealing a 'resonant' frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.

  9. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market has high-profit and high-risk features, so research on stock market analysis and prediction has received much attention. The stock price trend is a complex nonlinear function, so the price has a certain predictability. This article mainly uses an improved BP neural network (BPNN) to set up the stock market prediction model, and…

  10. Self-organized critical neural networks

    International Nuclear Information System (INIS)

    Bornholdt, Stefan; Roehl, Torsten

    2003-01-01

    A mechanism for self-organization of the degree of connectivity in model neural networks is studied. Network connectivity is regulated locally on the basis of an order parameter of the global dynamics, which is estimated from an observable at the single synapse level. This principle is studied in a two-dimensional neural network with randomly wired asymmetric weights. In this class of networks, network connectivity is closely related to a phase transition between ordered and disordered dynamics. A slow topology change is imposed on the network through a local rewiring rule motivated by activity-dependent synaptic development: Neighbor neurons whose activity is correlated, on average develop a new connection while uncorrelated neighbors tend to disconnect. As a result, robust self-organization of the network towards the order-disorder transition occurs. Convergence is independent of initial conditions, robust against thermal noise, and does not require fine tuning of parameters.

  11. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  12. Training feed-forward neural networks with gain constraints

    Science.gov (United States)

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
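
    The core construction above, adding a penalty on violated gain bounds to the training objective, can be sketched for a one-input tanh network whose input-output gain has a simple chain-rule form. The data, penalty weight, and crude numerical-gradient training loop below are all assumptions for illustration, not the paper's adaptive procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-input, one-output tanh network: y = w2 . tanh(w1 * x + b1) + b2
def forward(p, x):
    w1, b1, w2, b2 = p[:4], p[4:8], p[8:12], p[12]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def gain(p, x):
    # Analytic input-output gain dy/dx of the network above.
    w1, b1, w2 = p[:4], p[4:8], p[8:12]
    h = np.tanh(np.outer(x, w1) + b1)
    return (1.0 - h ** 2) @ (w1 * w2)

# Data whose true slope is 2, while we constrain the model gain <= 1.
x = np.linspace(-1.0, 1.0, 20)
y = 2.0 * x
g_max, lam = 1.0, 10.0

def objective(p):
    err = forward(p, x) - y
    violation = np.maximum(gain(p, x) - g_max, 0.0)  # gain constraint
    return np.mean(err ** 2) + lam * np.mean(violation ** 2)

# Plain gradient descent with central-difference gradients (a crude
# stand-in for analytic gradients with adaptive penalty weighting).
p = rng.normal(scale=0.5, size=13)
for _ in range(2000):
    g = np.zeros_like(p)
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = 1e-6
        g[i] = (objective(p + dp) - objective(p - dp)) / 2e-6
    p -= 0.05 * g

print(float(gain(p, x).max()))  # pulled down toward the bound g_max = 1
```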

  13. Tensor Basis Neural Network v. 1.0 (beta)

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-28

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.

  14. Intranasal oxytocin modulates neural functional connectivity during human social interaction.

    Science.gov (United States)

    Rilling, James K; Chen, Xiangchuan; Chen, Xu; Haroon, Ebrahim

    2018-02-10

    Oxytocin (OT) modulates social behavior in primates and many other vertebrate species. Studies in non-primate animals have demonstrated that, in addition to influencing activity within individual brain areas, OT influences functional connectivity across networks of areas involved in social behavior. Previously, we used fMRI to image brain function in human subjects during a dyadic social interaction task following administration of either intranasal oxytocin (INOT) or placebo, and analyzed the data with a standard general linear model. Here, we conduct an extensive re-analysis of these data to explore how OT modulates functional connectivity across a neural network that animal studies implicate in social behavior. OT induced widespread increases in functional connectivity in response to positive social interactions among men and widespread decreases in functional connectivity in response to negative social interactions among women. Nucleus basalis of Meynert, an important regulator of selective attention and motivation with a particularly high density of OT receptors, had the largest number of OT-modulated connections. Regions known to receive mesolimbic dopamine projections such as the nucleus accumbens and lateral septum were also hubs for OT effects on functional connectivity. Our results suggest that the neural mechanism by which OT influences primate social cognition may include changes in patterns of activity across neural networks that regulate social behavior in other animals. © 2018 Wiley Periodicals, Inc.

  15. Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with discrete and distributed time-varying delays

    International Nuclear Information System (INIS)

    Ali, M. Syed

    2014-01-01

    In this paper, the global asymptotic stability problem of Markovian jumping stochastic Cohen-Grossberg neural networks with discrete and distributed time-varying delays (MJSCGNNs) is considered. A novel LMI-based stability criterion is obtained by constructing a new Lyapunov functional to guarantee the asymptotic stability of MJSCGNNs. Our results can be easily verified and they are also less restrictive than previously known criteria and can be applied to Cohen-Grossberg neural networks, recurrent neural networks, and cellular neural networks. Finally, the proposed stability conditions are demonstrated with numerical examples.

  16. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    Full Text Available One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA on a planar substrate, each electrode being covered and surrounded by a local circularly confined network (“island” of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of the neurophobic, polyimide-coated substrate with a triblock copolymer enhances the neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV, which is two weeks later than in unpatterned networks.

  17. Extending unified-theory-of-reinforcement neural networks to steady-state operant behavior.

    Science.gov (United States)

    Calvin, Olivia L; McDowell, J J

    2016-06-01

    The unified theory of reinforcement has been used to develop models of behavior over the last 20 years (Donahoe et al., 1993). Previous research has focused on the theory's concordance with the respondent behavior of humans and animals. In this experiment, neural networks were developed from the theory to extend the unified theory of reinforcement to operant behavior on single-alternative variable-interval schedules. This area of operant research was selected because previously developed neural networks could be applied to it without significant alteration. Previous research with humans and animals indicates that the pattern of their steady-state behavior is hyperbolic when plotted against the obtained rate of reinforcement (Herrnstein, 1970). A genetic algorithm was used in the first part of the experiment to determine parameter values for the neural networks, because values that were used in previous research did not result in a hyperbolic pattern of behavior. After finding these parameters, hyperbolic and other similar functions were fitted to the behavior produced by the neural networks. The form of the neural network's behavior was best described by an exponentiated hyperbola (McDowell, 1986; McLean and White, 1983; Wearden, 1981), which was derived from the generalized matching law (Baum, 1974). In post-hoc analyses the addition of a baseline rate of behavior significantly improved the fit of the exponentiated hyperbola and removed systematic residuals. The form of this function was consistent with human and animal behavior, but the estimated parameter values were not. Copyright © 2016 Elsevier B.V. All rights reserved.
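    The exponentiated hyperbola named above has a standard closed form. As a rough sketch (parameter names and values here are illustrative, not taken from the experiment), it reduces to Herrnstein's hyperbola when its exponent is 1:

```python
def herrnstein_hyperbola(R, k, Re):
    # Herrnstein (1970): response rate as a hyperbolic function of
    # obtained reinforcement rate R, with asymptote k and half-rate Re.
    return k * R / (R + Re)

def exponentiated_hyperbola(R, k, Re, a, b0=0.0):
    # Exponentiated hyperbola (McDowell, 1986), with an optional baseline
    # rate b0 as in the post-hoc analyses described above.
    return k * R ** a / (R ** a + Re ** a) + b0

# With a = 1 and no baseline, the exponentiated form reduces to
# Herrnstein's original hyperbola.
for R in (5.0, 20.0, 80.0):
    assert abs(exponentiated_hyperbola(R, 100.0, 40.0, 1.0) -
               herrnstein_hyperbola(R, 100.0, 40.0)) < 1e-12
```

In a fitting exercise, k, Re, a, and b0 would be free parameters estimated from the network's steady-state response rates.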

  18. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera always suffers from image instability on the moving vehicle due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach mixed with linear quadratic regulator control for a quarter-car active suspension system to stabilize the image captured area of the camera. An active suspension system provides extra force through the actuator which allows it to suppress vertical vibration of sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function neural network is proposed to construct the map between the state error and the compensation component, which can correct the optimal state-feedback control law. The weights matrix of the radial basis function neural network is adaptively tuned online. Then, the closed-loop stability and asymptotic convergence performance are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
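    As an illustration of the idea only (not the authors' implementation), a radial basis function compensator can be sketched as a weighted sum of Gaussian features of the tracking error, with the weights adapted online. Here a plain least-mean-squares update stands in for the Lyapunov-based adaptive law, and the disturbance function, centers, and learning rate are invented for the demo:

```python
import math
import random

def rbf_features(e, centers, width=0.5):
    # Gaussian radial basis functions evaluated at the state error e
    return [math.exp(-((e - c) / width) ** 2) for c in centers]

def train_compensator(disturbance, centers, lr=0.2, epochs=2000):
    # Online LMS update of the output weights: the network learns to
    # reproduce the unknown disturbance from the observed error signal.
    w = [0.0] * len(centers)
    random.seed(0)
    for _ in range(epochs):
        e = random.uniform(-1.0, 1.0)
        phi = rbf_features(e, centers)
        err = sum(wi * p for wi, p in zip(w, phi)) - disturbance(e)
        for i in range(len(w)):
            w[i] -= lr * err * phi[i]
    return w

centers = [-1.0, -0.5, 0.0, 0.5, 1.0]
w = train_compensator(lambda e: math.sin(2 * e), centers)
phi = rbf_features(0.3, centers)
approx = sum(wi * p for wi, p in zip(w, phi))
assert abs(approx - math.sin(0.6)) < 0.25   # rough approximation achieved
```

In the paper's setting the learned output would be added to the computed LQR control action as a compensation term.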

  19. Topology and computational performance of attractor neural networks

    International Nuclear Information System (INIS)

    McGraw, Patrick N.; Menzinger, Michael

    2003-01-01

    To explore the relation between network structure and function, we studied the computational performance of Hopfield-type attractor neural nets with regular lattice, random, small-world, and scale-free topologies. The random configuration is the most efficient for storage and retrieval of patterns by the network as a whole. However, in the scale-free case retrieval errors are not distributed uniformly among the nodes. The portion of a pattern encoded by the subset of highly connected nodes is more robust and efficiently recognized than the rest of the pattern. The scale-free network thus achieves a very strong partial recognition. The implications of these findings for brain function and social dynamics are suggestive.
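    A minimal Hopfield-type attractor network illustrates the storage and retrieval being benchmarked. This sketch uses a fully connected topology and a single stored pattern, not the lattice, small-world, or scale-free variants studied in the paper:

```python
def train_hopfield(patterns):
    # Hebbian outer-product rule with zero self-coupling
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, steps=5):
    # Synchronous updates with a sign activation
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
W = train_hopfield([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]              # corrupt one unit
assert recall(W, noisy) == pattern  # the attractor restores the pattern
```

Replacing the all-to-all weight matrix with a lattice, small-world, or scale-free adjacency mask is the experiment the abstract describes.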

  20. Complex-valued neural networks advances and applications

    CERN Document Server

    Hirose, Akira

    2013-01-01

    Presents the latest advances in complex-valued neural networks by demonstrating the theory in a wide range of applications The complex-valued neural network is a rapidly developing neural network framework that utilizes complex arithmetic, exhibiting specific characteristics in its learning, self-organizing, and processing dynamics. Such networks are highly suitable for processing complex amplitude, composed of amplitude and phase, which is one of the core concepts in physical systems to deal with electromagnetic, light, sonic/ultrasonic waves as well as quantum waves, namely, electron and

  1. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... an OCR using Neural Network classifier preceded by a set of preprocessing .... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantage and disadvantages of each technique. In [9],. Khemiri ...

  2. Vibration control of uncertain multiple launch rocket system using radial basis function neural network

    Science.gov (United States)

    Li, Bo; Rui, Xiaoting

    2018-01-01

    Poor dispersion characteristics of rockets due to the vibration of the Multiple Launch Rocket System (MLRS) have restricted MLRS development for several decades. Vibration control is a key technique to improve the dispersion characteristics of rockets. For a mechanical system such as the MLRS, the major difficulty in designing an appropriate control strategy that can achieve the desired vibration control performance is to guarantee the robustness and stability of the control system under the occurrence of uncertainties and nonlinearities. To approach this problem, a computed torque controller integrated with a radial basis function neural network is proposed to achieve high-precision vibration control for the MLRS. In this paper, the vibration response of a computed torque controlled MLRS is described. The azimuth and elevation mechanisms of the MLRS are driven by permanent magnet synchronous motors and are assumed to be rigid. First, the dynamic model of the motor-mechanism coupling system is established using the Lagrange method and field-oriented control theory. Then, in order to deal with the nonlinearities, a computed torque controller is designed to control the vibration of the MLRS when it is firing a salvo of rockets. Furthermore, to compensate for the lumped uncertainty due to parametric variations and un-modeled dynamics in the design of the computed torque controller, a radial basis function neural network estimator is developed to adapt the uncertainty based on Lyapunov stability theory. Finally, the simulation results demonstrate the effectiveness of the proposed control system and show that the proposed controller is robust with regard to the uncertainty.

  3. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. The paper is intended for investigators in visual pattern recognition, Artificial Neural Networks and related disciplines. It also describes the MemBrain application environment, a powerful and easy-to-use neural network editor and simulator supporting ANNs.

  4. Evaluating portland cement concrete degradation by sulphate exposure through artificial neural networks modeling

    International Nuclear Information System (INIS)

    Oliveira, Douglas Nunes de; Bourguignon, Lucas Gabriel Garcia; Tolentino, Evandro; Costa, Rodrigo Moyses; Tello, Cledola Cassia Oliveira de

    2015-01-01

    A concrete is durable if it has accomplished the desired service life in the environment in which it is exposed. The durability of concrete materials can be limited as a result of adverse performance of its cement-paste matrix or aggregate constituents under either chemical or physical attack. Among other aggressive chemical exposures, the sulphate attack is an important concern. Water, soils and gases, which contain sulphate, represent a potential threat to the durability of concrete structures. Sulphate attack in concrete leads to the conversion of the hydration products of cement to ettringite, gypsum, and other phases, and also it leads to the destabilization of the primary strength generating calcium silicate hydrate (C-S-H) gel. The formation of ettringite and gypsum is common in cementitious systems exposed to most types of sulphate solutions. The present work presents the application of neural networks for estimating the deterioration of various concrete mixtures due to exposure to sulphate solutions. A neural networks model was constructed, trained and tested using the available database. In general, artificial neural networks can be successfully used in function approximation problems to approximate the data-generating function. Once the data-generating function is learned, the artificial neural network is tested using data not presented to the network during training. This paper intends to provide the technical requirements related to the production of a durable concrete to be used in the structures of the Brazilian near-surface repository of radioactive wastes. (author)

  6. Influence of the Training Methods in the Diagnosis of Multiple Sclerosis Using Radial Basis Functions Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ángel Gutiérrez

    2015-04-01

    Full Text Available The data available in the average clinical study of a disease is very often small. This is one of the main obstacles in the application of neural networks to the classification of biological signals used for diagnosing diseases. A rule of thumb states that the number of parameters (weights) that can be used for training a neural network should be around 15% of the available data, to avoid overlearning. This condition puts a limit on the dimension of the input space. Different authors have used different approaches to solve this problem, like eliminating redundancy in the data, preprocessing the data to find centers for the radial basis functions, or extracting a small number of features that were used as inputs. It is clear that the classification would be better the more features we could feed into the network. The approach utilized in this paper is to increase the number of training elements with randomly expanding training sets. This way the number of original signals does not constrain the dimension of the input set of the radial basis network. We then train the network with two methods: one that minimizes the error function using the gradient descent algorithm, and one that uses the particle swarm optimization technique. A comparison between the two methods showed that, for the same number of iterations, particle swarm optimization was faster but learned to recognize only the sick people; the gradient method, while slower, was in general better at identifying both groups.
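    The two training strategies compared above can be contrasted on a toy one-dimensional error function. The function, swarm size, and all hyperparameters below are illustrative, not taken from the study:

```python
import random

def f(x):
    # Toy error surface standing in for the network's training error
    return (x - 2.0) ** 2

def gradient_descent(lr=0.1, iters=50):
    # Deterministic descent using the analytic gradient of f
    x = 0.0
    for _ in range(iters):
        x -= lr * 2.0 * (x - 2.0)
    return x

def pso(iters=50, particles=10):
    # Basic particle swarm: inertia plus cognitive and social pulls
    random.seed(1)
    xs = [random.uniform(-5, 5) for _ in range(particles)]
    vs = [0.0] * particles
    best = list(xs)                 # per-particle best positions
    gbest = min(xs, key=f)          # global best position
    for _ in range(iters):
        for i in range(particles):
            r1, r2 = random.random(), random.random()
            vs[i] = (0.7 * vs[i]
                     + 1.5 * r1 * (best[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(best[i]):
                best[i] = xs[i]
        gbest = min(best, key=f)
    return gbest

assert abs(gradient_descent() - 2.0) < 1e-3
assert abs(pso() - 2.0) < 0.1
```

Both reach the minimum here; the abstract's comparison concerns speed and generalization on real RBF-network training, which this sketch does not reproduce.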

  7. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that such networks can generalize to inputs not provided during training and can reach similar or better decoding performance than previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
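    Decoding-as-classification can be illustrated on the three-qubit repetition code, a much smaller stand-in for a surface code. The lookup table below plays the role the trained feedforward classifier plays in the paper: it maps each syndrome (the classifier input) to the most likely correction (the class label):

```python
# Stabilizer checks of the three-qubit bit-flip code: Z1Z2 and Z2Z3.
def syndrome(error):
    return (error[0] ^ error[1], error[1] ^ error[2])

# Classification target: map each syndrome to the most likely
# single-qubit correction.
decoder = {(0, 0): (0, 0, 0),
           (1, 0): (1, 0, 0),
           (1, 1): (0, 1, 0),
           (0, 1): (0, 0, 1)}

# Every weight-0 or weight-1 error is corrected exactly.
for error in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    correction = decoder[syndrome(error)]
    residual = tuple(e ^ c for e, c in zip(error, correction))
    assert residual == (0, 0, 0)
```

For a distance-d surface code the syndrome and label spaces are far larger, which is why a learned classifier replaces the explicit table.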

  8. Artificial Neural Networks For Hadron Hadron Cross-sections

    International Nuclear Information System (INIS)

    ELMashad, M.; ELBakry, M.Y.; Tantawy, M.; Habashy, D.M.

    2011-01-01

    In recent years artificial neural networks (ANN) have emerged as a mature and viable framework with many applications in various areas. Artificial neural networks theory is sometimes used to refer to a branch of computational science that uses neural networks as models to either simulate or analyze complex phenomena and/or study the principles of operation of neural networks analytically. In this work a model of hadron-hadron collision using the ANN technique is presented; the hadron-hadron based ANN model calculates the cross sections of hadron-hadron collision. The results amply demonstrate the feasibility of this new technique in extracting the collision features and prove its effectiveness.

  9. Foreign currency rate forecasting using neural networks

    Science.gov (United States)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP as well as DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecasted solely using past exchange rates. This relies on the belief that past and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
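    Using rate differences rather than raw rates as network inputs is a simple feature construction. A sketch with an invented window length and made-up rates:

```python
def difference_features(rates, lags=3):
    # Build (input, target) pairs: inputs are the last `lags` day-to-day
    # rate differences, the target is the next difference.
    diffs = [b - a for a, b in zip(rates, rates[1:])]
    samples = []
    for t in range(lags, len(diffs)):
        samples.append((diffs[t - lags:t], diffs[t]))
    return samples

rates = [1.50, 1.52, 1.51, 1.55, 1.54, 1.56]
samples = difference_features(rates, lags=3)
assert len(samples) == 2
inputs, target = samples[0]
assert len(inputs) == 3
assert abs(target - (-0.01)) < 1e-9   # next difference: 1.54 - 1.55
```

Training on differences keeps the inputs roughly stationary, which is the motivation the abstract gives for preferring them over raw prices.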

  10. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
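    The SOM's quantization step can be sketched in one dimension. For brevity this version updates only the winning node on each sample, omitting the neighborhood update of a full SOM; the data and node count are invented for the demo:

```python
import random

def train_som(data, n_nodes=4, epochs=200, lr=0.5):
    # Winner-take-all quantization: move the winning node toward each
    # sample. A full SOM would also pull the winner's lattice neighbors,
    # which is what gives the topology-preserving map described above.
    random.seed(0)
    nodes = [random.uniform(0, 1) for _ in range(n_nodes)]
    for _ in range(epochs):
        x = random.choice(data)
        winner = min(range(n_nodes), key=lambda k: abs(nodes[k] - x))
        nodes[winner] += lr * (x - nodes[winner])
    return nodes

# Two clusters of scalar "image samples"; nodes should settle near both.
nodes = train_som([0.1, 0.12, 0.9, 0.88])
assert min(abs(n - 0.11) for n in nodes) < 0.05
assert min(abs(n - 0.89) for n in nodes) < 0.05
```

In the face-recognition pipeline the samples are local image patches and each node's index becomes the quantized input passed to the convolutional network.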

  11. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
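    The reported metrics follow from the standard confusion-matrix and rank-statistic definitions. A small illustration with made-up labels and scores (not the study's data):

```python
def sensitivity_specificity(labels, preds):
    # labels: 1 = referable retinopathy, 0 = not; preds: model decisions
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    tn = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 0)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    # Probability a random positive outscores a random negative
    # (ties count half); equivalent to the area under the ROC curve.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]
sens, spec = sensitivity_specificity(labels, preds)
assert abs(sens - 2 / 3) < 1e-9          # two of three positives detected
assert abs(spec - 2 / 3) < 1e-9          # two of three negatives rejected
assert abs(auc(labels, scores) - 8 / 9) < 1e-9
```

Sweeping the 0.5 decision threshold traces out the ROC curve whose area the abstract reports.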

  12. Aspects of artificial neural networks and experimental noise

    NARCIS (Netherlands)

    Derks, E.P.P.A.

    1997-01-01

    About a decade ago, artificial neural networks (ANN) were introduced to chemometrics for solving problems in analytical chemistry. ANN are based on the functioning of the brain and can be used for modeling complex relationships within chemical data. An ANN-model can be obtained by learning or

  13. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Science.gov (United States)

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  14. Thermoelastic steam turbine rotor control based on neural network

    Science.gov (United States)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. Such neural networks are algorithms which can be implemented on PLC controllers. This allows for the application of neural networks to control steam turbine stress in industrial power plants.

  15. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the neural network attractive. A neural network is an information processing system modeled on the structure of the dynamic process. It can solve the complex/nonlinear problems quickly once trained by operating on problems using an interconnected number...

  16. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? (Publication date: February 2008)

  17. Hardware Implementation of Artificial Neural Network for Data Ciphering

    Directory of Open Access Journals (Sweden)

    Sahar L. Kadoory

    2016-10-01

    Full Text Available This paper introduces the design and realization of multiple block ciphering techniques on an FPGA (Field Programmable Gate Array). Back-propagation neural networks have been built for the substitution, permutation and XOR ciphering blocks using the Neural Network Toolbox in MATLAB. They are trained to encrypt the data after obtaining the suitable weights, biases, activation function and layout. Afterward, they are described using VHDL and implemented on a Xilinx Spartan-3E FPGA using two approaches: serial and parallel versions. The simulation results were obtained with Xilinx ISE 9.2i software. The numerical precision is chosen carefully when implementing the neural network on the FPGA. Results obtained from the hardware designs show accurate numeric values for ciphering the data. As expected, the synthesis results indicate that the serial version requires fewer area resources than the parallel version, while the data throughput of the parallel version is higher than that of the serial version by a factor of 1.13-1.5. Also, a slight difference can be observed in the maximum frequency.
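    An XOR block realized by a tiny network can be illustrated with hand-picked weights rather than trained ones; this 2-2-1 threshold network (an illustration of the idea, not the paper's trained design) computes XOR exactly:

```python
def step(x):
    # Hard-threshold activation, as in a simple perceptron unit
    return 1 if x >= 0 else 0

def xor_net(a, b):
    # 2-2-1 network with fixed weights and biases:
    # h1 fires for (a OR b), h2 fires for (a AND b),
    # output fires for (h1 AND NOT h2) == (a XOR b).
    h1 = step(a + b - 0.5)
    h2 = step(a + b - 1.5)
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        assert xor_net(a, b) == (a ^ b)
```

A back-propagation network, as used in the paper, would learn equivalent weights; once fixed, such a net maps directly to comparators and adders in VHDL.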

  18. A non-penalty recurrent neural network for solving a class of constrained optimization problems.

    Science.gov (United States)

    Hosseini, Alireza

    2016-01-01

    In this paper, we explain a methodology to analyze convergence of some differential inclusion-based neural networks for solving nonsmooth optimization problems. For a general differential inclusion, we show that if its right-hand-side set-valued map satisfies some conditions, then the solution trajectory of the differential inclusion converges to the optimal solution set of the corresponding optimization problem. Based on the obtained methodology, we introduce a new recurrent neural network for solving nonsmooth optimization problems. The objective function does not need to be convex on R^n, nor does the new neural network model require any penalty parameter. We compare our new method with some penalty-based and non-penalty-based models. Moreover, for differentiable cases, we present the circuit diagram of the new neural network. Copyright © 2015 Elsevier Ltd. All rights reserved.
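    One plausible instance of this class of models (not necessarily the authors' exact dynamics) is the projection neural network dx/dt = P(x - grad f(x)) - x, where P projects onto the feasible set. An Euler-discretized sketch on a box constraint, with an invented objective:

```python
def project_box(x, lo, hi):
    # Projection onto the feasible interval [lo, hi]
    return max(lo, min(hi, x))

def recurrent_network(grad, lo, hi, x0=0.0, dt=0.1, steps=500):
    # Euler discretization of the projection dynamics
    #   dx/dt = P(x - grad_f(x)) - x,
    # a standard non-penalty recurrent network: no penalty parameter is
    # needed because feasibility is enforced by the projection itself.
    x = x0
    for _ in range(steps):
        x += dt * (project_box(x - grad(x), lo, hi) - x)
    return x

# Minimize f(x) = (x - 3)^2 over [0, 2]; the constrained optimum is x = 2.
x_star = recurrent_network(lambda x: 2.0 * (x - 3.0), 0.0, 2.0)
assert abs(x_star - 2.0) < 1e-3
```

Equilibria of these dynamics coincide with solutions of the projection equation, which is the variational characterization such papers build on.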

  19. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn inherent space relations of the degraded pixels. The present approach simulates the degradation by a low pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference from existing approaches lies in the fact that the space relations are taken from different scales, thus providing relational space data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. Considering different window sizes around a pixel simulates the multiscale operation. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle image.

  20. Human embryonic stem cell-derived neurons adopt and regulate the activity of an established neural network

    Science.gov (United States)

    Weick, Jason P.; Liu, Yan; Zhang, Su-Chun

    2011-01-01

    Whether hESC-derived neurons can fully integrate with and functionally regulate an existing neural network remains unknown. Here, we demonstrate that hESC-derived neurons receive unitary postsynaptic currents both in vitro and in vivo and adopt the rhythmic firing behavior of mouse cortical networks via synaptic integration. Optical stimulation of hESC-derived neurons expressing Channelrhodopsin-2 elicited both inhibitory and excitatory postsynaptic currents and triggered network bursting in mouse neurons. Furthermore, light stimulation of hESC-derived neurons transplanted to the hippocampus of adult mice triggered postsynaptic currents in host pyramidal neurons in acute slice preparations. Thus, hESC-derived neurons can participate in and modulate neural network activity through functional synaptic integration, suggesting they are capable of contributing to neural network information processing both in vitro and in vivo. PMID:22106298

  1. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  2. Neural Network to Solve Concave Games

    OpenAIRE

    Liu, Zixin; Wang, Nengfa

    2014-01-01

    The issue of using a neural network method to solve concave games is considered. Combined with variational inequality, Ky Fan inequality, and projection equation, concave games are transformed into a neural network model. On the basis of Lyapunov stability theory, some stability results are also given. Finally, simulation results for two classic games are given to illustrate the theoretical results.

  3. Neural networks mediating sentence reading in the deaf

    Directory of Open Access Journals (Sweden)

    Elizabeth Ann Hirshorn

    2014-06-01

    Full Text Available The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative role of deafness and English knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included – deaf signers, oral deaf and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing greatest bilateral superior temporal gyrus (STG recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex re-organization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami was greater in deaf signers and in oral deaf as compared to hearing. In contrast, connectivity from left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape its functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups are discussed.

  4. Topology influences performance in the associative memory neural networks

    International Nuclear Information System (INIS)

    Lu Jianquan; He Juan; Cao Jinde; Gao Zhiqiang

    2006-01-01

    To explore how topology affects performance within Hopfield-type associative memory neural networks (AMNNs), we studied the computational performance of neural networks with regular lattice, random, small-world, and scale-free structures. In this Letter, we found that the memory performance of neural networks obtained through asynchronous updating from 'larger' nodes to 'smaller' nodes is better than that obtained through asynchronous updating in random order, especially for the scale-free topology. The computational performance of associative memory neural networks linked by the above-mentioned network topologies with the same numbers of nodes (neurons) and edges (synapses) was studied respectively. As topologies become more random and less locally ordered, the performance of the associative memory neural network improves considerably. By comparison, we show that the regular lattice and the random network form two extremes in terms of pattern stability and retrievability. For a network, its pattern stability and retrievability can be largely enhanced by adding a random component or some shortcuts to its structured component. According to the conclusions of this Letter, we can design associative memory neural networks with high performance and minimal interconnect requirements.

  5. Static Voltage Stability Analysis by Using SVM and Neural Network

    Directory of Open Access Journals (Sweden)

    Mehdi Hajian

    2013-01-01

    Full Text Available Voltage stability is an important problem in power system networks. In this paper, static voltage stability is considered, and the application of Neural Networks (NN) and Support Vector Machines (SVM) to estimating the voltage stability margin (VSM) and predicting voltage collapse is investigated. Voltage stability is treated in two parts. The first part calculates the static voltage stability margin with a Radial Basis Function Neural Network (RBFNN); the advantage of this method is its high accuracy in online detection of the VSM. In the second part, voltage collapse analysis of the power system is performed by a Probabilistic Neural Network (PNN) and an SVM. The results indicate that the training time and the number of training samples required by the SVM are smaller than those of the NN. A new model of training samples for the detection system, using the normal distribution load curve at each load feeder, is used. Voltage stability is assessed by the well-known L and VSM indices. To demonstrate the validity of the proposed methods, the IEEE 14-bus grid and the actual network of Yazd Province are used.

  6. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model has been widely used as the bushing model in vehicle suspension systems, it cannot express the nonlinear characteristics of a bushing in terms of amplitude and frequency. An artificial neural network model has been suggested to account for the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model represents the linear stiffness and damping effects, and the artificial neural network takes the hysteretic responses into account. A rubber test was performed to capture the bushing characteristics, in which sine excitations with different frequencies and amplitudes were applied. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model is more robust than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models, and the hybrid model results were almost identical to those of the linear model under several maneuvers.
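    A minimal sketch of the hybrid idea: identify the linear stiffness and damping terms first, then fit a small network to the leftover hysteretic residual. The synthetic data, the RBF network, and every coefficient below are illustrative assumptions, not the paper's measured bushing data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic bushing data (a stand-in for the rubber test in the abstract):
    # displacement x, velocity v, and a measured force with a nonlinear part.
    t = np.linspace(0, 4 * np.pi, 400)
    x, v = np.sin(t), np.cos(t)
    force = 120.0 * x + 8.0 * v + 30.0 * np.tanh(3.0 * x) + rng.normal(0, 0.5, t.size)

    # 1) Linear part: identify stiffness k and damping c by least squares.
    A = np.column_stack([x, v])
    (k, c), *_ = np.linalg.lstsq(A, force, rcond=None)
    residual = force - (k * x + c * v)

    # 2) Neural part: a small RBF network fitted to the nonlinear residual.
    centers = np.linspace(-1, 1, 15)

    def rbf_features(x):
        return np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.1)

    w, *_ = np.linalg.lstsq(rbf_features(x), residual, rcond=None)

    def hybrid_force(x_new, v_new):
        return k * x_new + c * v_new + rbf_features(np.atleast_1d(x_new)) @ w

    pred = hybrid_force(x, v)
    print("RMS error:", np.sqrt(np.mean((pred - force) ** 2)))
    ```

    Solving the RBF output weights by least squares keeps the sketch short; the paper's model uses iterative neural network training instead.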

  7. Development of Artificial Neural Network Model for Diesel Fuel Properties Prediction using Vibrational Spectroscopy.

    Science.gov (United States)

    Bolanča, Tomislav; Marinović, Slavica; Ukić, Sime; Jukić, Ante; Rukavina, Vinko

    2012-06-01

    This paper describes the development of artificial neural network models which can be used to correlate and predict diesel fuel properties from several FTIR-ATR absorbances and Raman intensities as input variables. Multilayer feed-forward and radial basis function neural networks have been used for rapid and simultaneous prediction of cetane number, cetane index, density, viscosity, distillation temperatures at 10% (T10), 50% (T50) and 90% (T90) recovery, and contents of total aromatics and polycyclic aromatic hydrocarbons of commercial diesel fuels. In this study, two-phase training procedures for multilayer feed-forward networks were applied. While the first-phase training algorithm was always backpropagation, two second-phase training algorithms were compared: conjugate gradient and quasi-Newton. In the case of the radial basis function network, the radial layer was trained using the K-means radial assignment algorithm and three different radial spread algorithms: explicit, isotropic and K-nearest neighbour. The number of hidden layer neurons and the number of experimental data points used for the training set were optimized for both neural networks in order to ensure good predictive ability while reducing unnecessary experimental work. This work shows that the developed artificial neural network models can determine the main properties of diesel fuels simultaneously, based on a single and fast IR or Raman measurement.

  8. Comments on "The multisynapse neural network and its application to fuzzy clustering".

    Science.gov (United States)

    Yu, Jian; Hao, Pengwei

    2005-05-01

    In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was applied incorrectly, so FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems either.

  9. Neural substrate expansion for the restoration of brain function

    Directory of Open Access Journals (Sweden)

    Han-Chiao Isaac Chen

    2016-01-01

    Full Text Available Restoring neurological and cognitive function in individuals who have suffered brain damage is one of the principal objectives of modern translational neuroscience. Electrical stimulation approaches, such as deep-brain stimulation, have achieved the most clinical success, but they ultimately may be limited by the computational capacity of the residual cerebral circuitry. An alternative strategy is brain substrate expansion, in which the computational capacity of the brain is augmented through the addition of new processing units and the reconstitution of network connectivity. This latter approach has been explored to some degree using both biological and electronic means but thus far has not demonstrated the ability to reestablish the function of large-scale neuronal networks. In this review, we contend that fulfilling the potential of brain substrate expansion will require a significant shift from current methods that emphasize direct manipulations of the brain (e.g., injections of cellular suspensions and the implantation of multi-electrode arrays) to the generation of more sophisticated neural tissues and neural-electric hybrids in vitro that are subsequently transplanted into the brain. Drawing from neural tissue engineering, stem cell biology, and neural interface technologies, this strategy makes greater use of the manifold techniques available in the laboratory to create biocompatible constructs that recapitulate brain architecture and thus are more easily recognized and utilized by brain networks.

  10. Nonlinear signal processing using neural networks: Prediction and system modelling

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A.; Farber, R.

    1987-06-01

    The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Second, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
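    The prediction setup can be illustrated on a textbook chaotic series. The logistic map, network size, and training schedule below are illustrative stand-ins, not the simulations of Lapedes and Farber; the point is only that a small backpropagation network captures a nonlinear map a linear (LMS-style) predictor cannot:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Chaotic time series from the logistic map x_{t+1} = 4 x_t (1 - x_t).
    x = np.empty(500)
    x[0] = 0.3
    for t in range(499):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    X, y = x[:-1, None], x[1:]

    # Linear predictor (least squares baseline, in the spirit of LMS).
    a, b = np.polyfit(X[:, 0], y, 1)
    lin_err = np.mean((a * X[:, 0] + b - y) ** 2)

    # Small feed-forward net trained by plain backpropagation.
    H = 16
    W1, b1 = rng.normal(0, 1.0, (1, H)), np.zeros(H)
    W2, b2 = rng.normal(0, 1.0, (H, 1)), np.zeros(1)
    lr = 0.05
    for epoch in range(2000):
        h = np.tanh(X @ W1 + b1)               # hidden activations
        p = (h @ W2 + b2)[:, 0]                # network output
        g = 2.0 * (p - y) / len(y)             # dMSE/doutput
        gW2, gb2 = h.T @ g[:, None], g.sum(keepdims=True)
        gh = g[:, None] @ W2.T * (1 - h ** 2)  # backprop through tanh
        gW1, gb1 = X.T @ gh, gh.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

    net_err = np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2)[:, 0] - y) ** 2)
    print(f"linear MSE: {lin_err:.4f}  network MSE: {net_err:.4f}")
    ```

    For the logistic map the best linear one-step predictor is nearly useless (the series is almost uncorrelated), while the network learns the parabola directly, which is the qualitative gap the abstract reports.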

  11. Global exponential stability of fuzzy cellular neural networks with delays and reaction-diffusion terms

    International Nuclear Information System (INIS)

    Wang Jian; Lu Junguo

    2008-01-01

    In this paper, we study the global exponential stability of fuzzy cellular neural networks with delays and reaction-diffusion terms. By constructing a suitable Lyapunov functional and utilizing some inequality techniques, we obtain a sufficient condition for the uniqueness and global exponential stability of the equilibrium solution for a class of fuzzy cellular neural networks with delays and reaction-diffusion terms. The condition constrains the network parameters independently of the delay parameter. It is also easy to check and plays an important role in the design and application of globally exponentially stable fuzzy neural circuits.

  12. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    Science.gov (United States)

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  13. An introduction to neural network methods for differential equations

    CERN Document Server

    Yadav, Neha; Kumar, Manoj

    2015-01-01

    This book introduces a variety of neural network methods for solving differential equations arising in science and engineering. The emphasis is placed on a deep understanding of the neural network techniques, which are presented in a mostly heuristic and intuitive manner. This approach will enable the reader to understand the working, efficiency and shortcomings of each neural network technique for solving differential equations. The objective of this book is to provide the reader with a sound understanding of the foundations of neural networks, and a comprehensive introduction to neural network methods for solving differential equations together with recent developments in the techniques and their applications. The book comprises four major sections. Section I consists of a brief overview of differential equations and the relevant physical problems arising in science and engineering. Section II illustrates the history of neural networks starting from their beginnings in the 1940s through to the renewed...
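    As a hedged illustration of the kind of method such books cover, the sketch below solves y' = -y, y(0) = 1 with a neural trial solution that builds in the initial condition by construction (psi(x) = 1 + x·N(x)). The tiny network, the collocation grid, and the crude finite-difference training loop are illustrative choices, not the book's specific algorithms:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    H = 8
    params = rng.normal(0, 0.5, 3 * H)          # [w, b, v] of a 1-H-1 tanh net
    xs = np.linspace(0, 1, 25)                  # collocation points on [0, 1]

    def residual_loss(p):
        w, b, v = p[:H], p[H:2 * H], p[2 * H:]
        t = np.tanh(np.outer(xs, w) + b)        # hidden activations
        N = t @ v                               # network output N(x)
        dN = ((1 - t ** 2) * w) @ v             # analytic dN/dx
        psi, dpsi = 1 + xs * N, N + xs * dN     # trial solution and derivative
        return np.mean((dpsi + psi) ** 2)       # ODE residual of y' + y = 0

    # Crude training: finite-difference gradient descent on the residual.
    lr, eps = 0.05, 1e-5
    for step in range(3000):
        base = residual_loss(params)
        grad = np.empty_like(params)
        for i in range(params.size):
            bumped = params.copy()
            bumped[i] += eps
            grad[i] = (residual_loss(bumped) - base) / eps
        params -= lr * grad

    w, b, v = params[:H], params[H:2 * H], params[2 * H:]
    psi = 1 + xs * (np.tanh(np.outer(xs, w) + b) @ v)
    print("max |psi - exp(-x)|:", np.abs(psi - np.exp(-xs)).max())
    ```

    In practice the gradients would be computed analytically or by automatic differentiation; the finite-difference loop only keeps the sketch dependency-free.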

  14. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way human neurons process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are among the most important factors influencing neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights in order to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing their performance with that of classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of these algorithms for training neural networks, we examine them on the classification of the four opposite binary XOR clusters and on continuous real data sets such as Iris and Ecoli.
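    The XOR benchmark mentioned above can be reproduced with a minimal evolution strategy standing in for the paper's algorithms. The network size, mutation scale, and population below are illustrative assumptions, not the authors' configurations:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # XOR: the classic non-linearly-separable benchmark.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([0, 1, 1, 0], float)

    def forward(p, X):
        W1, b1 = p[:8].reshape(2, 4), p[8:12]      # 2-4-1 net, tanh hidden layer
        W2, b2 = p[12:16], p[16]
        return 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))

    def loss(p):
        return np.mean((forward(p, X) - y) ** 2)

    # A minimal (1 + lambda) evolution strategy in place of gradient training:
    # keep the best weight vector, mutate it, and accept only improvements.
    best = rng.normal(0, 1.0, 17)
    for gen in range(500):
        kids = best + rng.normal(0, 0.3, (20, 17))  # 20 mutated offspring
        losses = [loss(k) for k in kids]
        if min(losses) < loss(best):
            best = kids[int(np.argmin(losses))]

    print("predictions:", forward(best, X).round(2), " loss:", round(loss(best), 4))
    ```

    Because the update needs only loss evaluations, the same loop works for any of the weight-optimization algorithms the paper compares; only the mutation/selection step changes.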

  15. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  16. Biological neural networks as model systems for designing future parallel processing computers

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only through the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest of the mammalian neural networks to understand and model. While there is still a long way to go to understand even this most simple neural network in sufficient detail for extrapolation to computers and robots, a start was made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.

  17. Global exponential stability of mixed discrete and distributively delayed cellular neural network

    International Nuclear Information System (INIS)

    Yao Hong-Xing; Zhou Jia-Yan

    2011-01-01

    This paper concerns the analysis of the global exponential stability of a class of recurrent neural networks with mixed discrete and distributed delays. It first proves the existence and uniqueness of the equilibrium point; then, by employing a Lyapunov-Krasovskii functional and the Young inequality, it gives a sufficient condition for the global exponential stability of cellular neural networks with mixed discrete and distributed delays. In addition, an example is provided to illustrate the applicability of the result. (general)

  18. An Artificial Neural Network for Data Forecasting Purposes

    Directory of Open Access Journals (Sweden)

    Catalina Lucia COCIANU

    2015-01-01

    Full Text Available Considering the fact that markets are generally influenced by different external factors, stock market prediction is one of the most difficult tasks of time series analysis. The research reported in this paper aims to investigate the potential of artificial neural networks (ANN) in solving the forecast task in the most general case, when the time series are non-stationary. We used a feed-forward neural architecture: the nonlinear autoregressive network with exogenous inputs. The network training function used to update the weight and bias parameters corresponds to the gradient-descent-with-adaptive-learning-rate variant of the backpropagation algorithm. The results obtained using this technique are compared with those resulting from some ARIMA models. We used the mean square error (MSE) measure to evaluate the performances of these two models. The comparative analysis leads to the conclusion that the proposed model can be successfully applied to forecast financial data.

  19. Representation of neutron noise data using neural networks

    International Nuclear Information System (INIS)

    Korsah, K.; Damiano, B.; Wood, R.T.

    1992-01-01

    This paper describes a neural network-based method of representing neutron noise spectra using a model developed at the Oak Ridge National Laboratory (ORNL). The backpropagation neural network learned to represent neutron noise data in terms of four descriptors, and the network response matched calculated values to within 3.5 percent. These preliminary results are encouraging, and further research is directed towards the application of neural networks in a diagnostics system for the identification of the causes of changes in structural spectral resonances. This work is part of our current investigation of advanced technologies such as expert systems and neural networks for neutron noise data reduction, analysis, and interpretation. The objective is to improve the state of the art of noise analysis as a diagnostic tool for nuclear power plants and other mechanical systems.

  20. Exponential H(infinity) synchronization of general discrete-time chaotic neural networks with or without time delays.

    Science.gov (United States)

    Qi, Donglian; Liu, Meiqin; Qiu, Meikang; Zhang, Senlin

    2010-08-01

    This brief studies exponential H(infinity) synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H(infinity) control theory, and using a Lyapunov-Krasovskii (or Lyapunov) functional, state feedback controllers are established that not only guarantee exponentially stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of external disturbance on the synchronization error to a minimal H(infinity) norm constraint. The proposed controllers can be obtained by solving convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network so that their H(infinity) synchronization controllers can be designed in a unified way. Finally, some illustrative examples with their simulations have been utilized to demonstrate the effectiveness of the proposed methods.

  1. A new delay-independent condition for global robust stability of neural networks with time delays.

    Science.gov (United States)

    Samli, Ruya

    2015-06-01

    This paper studies the problem of robust stability of dynamical neural networks with discrete time delays under the assumptions that the network parameters of the neural system are uncertain and norm-bounded, and the activation functions are slope-bounded. By employing the results of Lyapunov stability theory and matrix theory, new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for delayed neural networks are presented. The results reported in this paper can be easily tested by checking some special properties of symmetric matrices associated with the parameter uncertainties of neural networks. We also present a numerical example to show the effectiveness of the proposed theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Dynamics in a delayed-neural network

    International Nuclear Information System (INIS)

    Yuan Yuan

    2007-01-01

    In this paper, we consider a neural network of four identical neurons with time-delayed connections. Some parameter regions are given for global and local stability and for synchronization, using the theory of functional differential equations. The root distributions in the corresponding characteristic transcendental equation are analyzed; pitchfork, Hopf and equivariant Hopf bifurcations are investigated by revealing the center manifolds and normal forms. Numerical simulations show agreement with the theoretical results.

  3. The interchangeability of learning rate and gain in backpropagation neural networks

    NARCIS (Netherlands)

    Thimm, G.; Moerland, P.; Fiesler, E.

    1996-01-01

    The backpropagation algorithm is widely used for training multilayer neural networks. In this publication the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights.
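    The forward-pass half of this equivalence is easy to verify numerically: a network whose sigmoids have gain g produces exactly the same outputs as a gain-1 network whose weights and biases are multiplied by g (the publication additionally relates the learning rates of the two networks during training). The 3-4-1 architecture and random parameters below are purely illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def sigmoid(z, gain=1.0):
        return 1 / (1 + np.exp(-gain * z))

    # Forward pass of a 3-4-1 network whose activations use gain g ...
    x = rng.normal(size=3)
    W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
    W2, b2 = rng.normal(size=4), rng.normal()
    g = 2.5

    h = sigmoid(x @ W1 + b1, gain=g)
    out_gain_net = sigmoid(h @ W2 + b2, gain=g)

    # ... equals a gain-1 network whose weights and biases are scaled by g.
    h2 = sigmoid(x @ (g * W1) + g * b1)
    out_unit_net = sigmoid(h2 @ (g * W2) + g * b2)

    print(out_gain_net, out_unit_net)   # identical up to float rounding
    ```

    This is just the identity sigma(g·(Wx + b)) = sigma((gW)x + gb) applied layer by layer.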

  4. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks.  Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concepts of meta-cognition equipped with a self-regulated learning have been known to be the best human learning strategy. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  5. Global exponential stability for nonautonomous cellular neural networks with delays

    International Nuclear Information System (INIS)

    Zhang Qiang; Wei Xiaopeng; Xu Jin

    2006-01-01

    In this Letter, by utilizing Lyapunov functional method and Halanay inequalities, we analyze global exponential stability of nonautonomous cellular neural networks with delay. Several new sufficient conditions ensuring global exponential stability of the network are obtained. The results given here extend and improve the earlier publications. An example is given to demonstrate the effectiveness of the obtained results

  6. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection and to treat the fault correction process as a delayed one. On the other hand, artificial neural network models, as a data-driven approach, try to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are made on a real data set.

  7. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that takes this probabilistic nature into account. The advantage of the proposed implementation is that it is fully digital and can therefore be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
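    One common way to realize such probabilistic firing digitally (a plausible reading of the abstract, not the authors' exact circuit) is to compare the target firing probability with a fresh pseudo-random number on each clock cycle, so that the time-averaged spike train recovers the probability:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # A stochastic spiking neuron in the stochastic-computing style: each
    # clock cycle it emits a binary spike with probability sigmoid(net input).
    def spike_train(net_input, cycles=10000):
        p = 1 / (1 + np.exp(-net_input))      # target firing probability
        return rng.random(cycles) < p         # digital Bernoulli spikes

    for u in (-2.0, 0.0, 1.5):
        spikes = spike_train(u)
        print(f"input {u:+.1f}: firing rate {spikes.mean():.3f} "
              f"(target {1 / (1 + np.exp(-u)):.3f})")
    ```

    In an FPGA the random draw would come from a hardware pseudo-random generator and the comparison from a digital comparator, which is what makes the scheme fully digital.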

  8. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  9. Direct adaptive control using feedforward neural networks

    OpenAIRE

    Cajueiro, Daniel Oliveira; Hemerly, Elder Moreira

    2003-01-01

    ABSTRACT: This paper proposes a new scheme for direct neural adaptive control that works efficiently employing only one neural network, used for simultaneously identifying and controlling the plant. The idea behind this structure of adaptive control is to compensate the control input obtained by a conventional feedback controller. The neural network training process is carried out by using two different techniques: backpropagation and extended Kalman filter algorithm. Additionally, the conver...

  10. Particle identification with neural networks using a rotational invariant moment representation

    International Nuclear Information System (INIS)

    Sinkus, R.

    1997-01-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions, which are invariant under rotation. The multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies. (orig.)
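    The described preprocessing (principal component projection followed by rescaling by the component variances, i.e. whitening) can be sketched as follows; the synthetic correlated feature matrix stands in for the Zernike moments:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic correlated 'moment' features standing in for Zernike moments.
    X = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))

    # Principal component analysis: center, diagonalize the covariance ...
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    eigval, eigvec = np.linalg.eigh(cov)

    # ... project onto the components and rescale by each component's std,
    # so every network input is decorrelated and of order one.
    Z = (Xc @ eigvec) / np.sqrt(eigval)

    print("per-input variance:", Z.var(axis=0, ddof=1).round(3))
    ```

    Feeding the network inputs of comparable, order-one scale is what keeps the sigmoid units out of saturation during training, which is the motivation given in the abstract.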

  11. Neural networks in signal processing

    International Nuclear Information System (INIS)

    Govil, R.

    2000-01-01

    Nuclear Engineering has matured during the last decade. In research and design, control, supervision, maintenance and production, mathematical models and theories are used extensively. In all such applications, signal processing is embedded in the process. Artificial Neural Networks (ANNs), because of their nonlinear, adaptive nature, are well suited to applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques which can model an underlying process from example data, and they can also adapt their model parameters to statistical changes over time. Algorithms in the framework of neural networks in signal processing have found new application potential in the field of Nuclear Engineering. This paper reviews the fundamentals of neural networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigenstructure-based processing and generalization structures. (orig.)

  12. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  13. On the approximation by single hidden layer feedforward neural networks with fixed weights

    OpenAIRE

    Guliyev, Namig J.; Ismailov, Vugar E.

    2017-01-01

    International audience; Feedforward neural networks have wide applicability in various disciplines of science due to their universal approximation property. Some authors have shown that single hidden layer feedforward neural networks (SLFNs) with fixed weights still possess the universal approximation property provided that the approximated functions are univariate. But this phenomenon places no restrictions on the number of neurons in the hidden layer. The more this number, the more the p...

  14. Computation of optimal transport and related hedging problems via penalization and neural networks

    OpenAIRE

    Eckstein, Stephan; Kupper, Michael

    2018-01-01

    This paper presents a widely applicable approach to solving (multi-marginal, martingale) optimal transport and related problems via neural networks. The core idea is to penalize the optimization problem in its dual formulation and reduce it to a finite dimensional one which corresponds to optimizing a neural network with smooth objective function. We present numerical examples from optimal transport, martingale optimal transport, portfolio optimization under uncertainty and generative adversa...

  15. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    Science.gov (United States)

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, a quantization process, and communication time delays is investigated. The information communication channel between the master and slave chaotic neural networks consists of several remote sensors, with each sensor able to access only partial knowledge of the output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Such a communication process and control strategy are therefore much more energy-saving than the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, allowable length of sampling intervals, and upper bound of network-induced delays are derived to ensure the quantized synchronization of the master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  16. Bio-inspired spiking neural network for nonlinear systems control.

    Science.gov (United States)

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNNs) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs use temporal spike trains to encode inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which controllers developed by conventional techniques perform unsatisfactorily or are difficult to implement. SNN-based controllers exploit their capacity for online learning and self-adaptation to evolve when transferred from simulations to the real world. The inherently binary and temporal way in which SNNs encode information facilitates their hardware implementation compared to analog neurons, and they often require fewer neurons than controllers based on conventional artificial neural networks. In this work, these neuronal systems are imitated to control nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed, with particular attention paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to train the controller. The efficiency of the proposed network has been verified on two examples of dynamic-system control. Simulations show that the proposed SNN-based control exhibits superior performance compared to other approaches based on neural networks and SNNs.
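
    The spike-train encoding this record relies on can be illustrated with a minimal leaky integrate-and-fire neuron, a standard SNN building block (the time constant, threshold, and input currents below are illustrative assumptions, not the paper's controller):

    ```python
    def lif_spike_train(i_in, t_ms=100.0, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
        """Leaky integrate-and-fire neuron; returns the spike times in ms."""
        v, spikes = 0.0, []
        for k in range(int(t_ms / dt)):
            v += dt / tau * (-v + i_in)   # leaky integration of the input current
            if v >= v_th:                 # threshold crossing emits a spike
                spikes.append(k * dt)
                v = v_reset               # membrane potential resets after a spike
        return spikes

    # Rate coding: a stronger input current yields a denser spike train.
    low, high = lif_spike_train(1.2), lif_spike_train(3.0)
    print(len(low), len(high))
    ```

    The continuous input value is thus carried by the firing rate, which is the "binary and temporal" codification that makes SNNs attractive for hardware implementation.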

  17. Global Exponential Stability of Periodic Oscillation for Nonautonomous BAM Neural Networks with Distributed Delay

    Directory of Open Access Journals (Sweden)

    Hongli Liu

    2009-01-01

    Full Text Available We derive a new criterion for checking the global stability of periodic oscillation of bidirectional associative memory (BAM) neural networks with periodic coefficients and distributed delay, and find that the criterion relies on the Lipschitz constants of the signal transmission functions, the weights of the neural network, and the delay kernels. The proposed approach transforms the original interacting network into a matrix analysis problem that is easy to check, thereby significantly reducing the computational complexity and making the analysis of periodic oscillation feasible even for large-scale networks.

  18. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    Full Text Available In this paper, the adaptive competitive learning (ACL) neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. The algorithm uses a newly proposed criterion, referred to as the ACL criterion, which evaluates the different clustering structures produced by the ACL neural network for an input data set and then selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient in terms of its complexity, as it contains the minimum number of neurons; the synaptic weight vectors of these neurons represent well-separated, compact, and balanced clusters in the input data set. The performance of the ACL algorithm in clustering an input data set and determining its number of clusters is evaluated and compared with that of a recently proposed algorithm in the literature. Results show that the ACL algorithm is more accurate and robust than the other algorithm in both determining the number of clusters and allocating input feature vectors to these clusters, especially for sparsely distributed data sets.
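
    The competitive ("winner-take-all") update at the heart of such algorithms can be sketched as follows; this is a plain fixed-size version without the ACL criterion's automatic selection of the number of clusters, and the data, learning rate, and prototype count are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two well-separated, compact clusters in 2-D.
    data = np.vstack([rng.normal([0.0, 0.0], 0.1, (100, 2)),
                      rng.normal([3.0, 3.0], 0.1, (100, 2))])
    rng.shuffle(data)

    # Two prototype (synaptic weight) vectors, trained by winner-take-all.
    w = rng.normal(1.5, 0.5, (2, 2))
    lr = 0.1
    for _ in range(5):
        for x in data:
            j = np.argmin(np.linalg.norm(w - x, axis=1))  # competition: nearest wins
            w[j] += lr * (x - w[j])                       # winner moves toward x

    print(np.round(w, 2))
    ```

    After training, each prototype settles near one cluster center; what the ACL criterion adds on top of this basic rule is a way to score such clusterings and pick the number of prototypes automatically.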

  19. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  20. Robust synchronization of delayed neural networks based on adaptive control and parameters identification

    International Nuclear Information System (INIS)

    Zhou Jin; Chen Tianping; Xiang Lan

    2006-01-01

    This paper investigates the synchronization dynamics of delayed neural networks with all parameters unknown. By combining adaptive control and linear feedback with an updated law, some simple yet generic criteria for determining robust synchronization based on parameter identification of uncertain chaotic delayed neural networks are derived using the invariance principle of functional differential equations. It is shown that the approaches developed here further extend the ideas and techniques presented in the recent literature, and that they are also simple to implement in practice. Furthermore, the theoretical results are applied to a typical chaotic delayed Hopfield neural network, and numerical simulations also demonstrate the effectiveness and feasibility of the proposed technique.
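
    The adaptive-control idea in this record — a feedback gain that grows with the synchronization error until the slave locks onto the master — can be sketched for a single Hopfield-style neuron without delays. The system, adaptation rate, and step sizes are illustrative assumptions, far simpler than the delayed chaotic networks treated in the paper.

    ```python
    import numpy as np

    dt, steps = 0.01, 5000
    w, gamma = 2.0, 1.0          # synaptic weight, gain adaptation rate
    x, y, k = 0.5, -1.0, 0.0     # master state, slave state, adaptive gain

    for _ in range(steps):
        e = y - x                            # synchronization error
        dx = -x + w * np.tanh(x)             # master dynamics
        dy = -y + w * np.tanh(y) - k * e     # slave with adaptive linear feedback
        dk = gamma * e**2                    # updated law: gain grows with error
        x, y, k = x + dt * dx, y + dt * dy, k + dt * dk

    print(round(abs(y - x), 6), round(k, 2))
    ```

    The gain `k` increases only while the error is nonzero, so it settles at a finite value once synchronization is achieved; this is the mechanism the invariance-principle argument makes rigorous for the delayed, uncertain case.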