WorldWideScience

Sample records for BAM neural networks

  1. Dynamics of Almost Periodic BAM Neural Networks with Neutral Delays

    OpenAIRE

    Yaqin Li

    2014-01-01

    The paper investigates the almost periodic oscillatory properties of neutral-type BAM neural networks with time-varying delays. By employing the contraction mapping principle and constructing a suitable Lyapunov functional, several sufficient conditions are established for the existence, uniqueness, and global exponential stability of the almost periodic solution of the system. The results are new, and a simple example is given to illustrate their effectiveness.

  2. Dynamics of Almost Periodic BAM Neural Networks with Neutral Delays

    Directory of Open Access Journals (Sweden)

    Yaqin Li

    2014-01-01

    Full Text Available The paper investigates the almost periodic oscillatory properties of neutral-type BAM neural networks with time-varying delays. By employing the contraction mapping principle and constructing a suitable Lyapunov functional, several sufficient conditions are established for the existence, uniqueness, and global exponential stability of the almost periodic solution of the system. The results are new, and a simple example is given to illustrate their effectiveness.
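
    For orientation, the systems analyzed in these records are variants of the two-layer BAM model x'(t) = -a x(t) + P f(y(t - tau)) + I, y'(t) = -b y(t) + Q g(x(t - tau)) + J. Below is a minimal simulation sketch of the plain constant-delay version (not the neutral, time-varying-delay system treated in the paper), using forward Euler and a delay buffer; all parameter values are made up for illustration.

      import numpy as np

      def simulate_bam(a, b, P, Q, I, J, tau, dt=1e-3, T=10.0, f=np.tanh, g=np.tanh):
          n, m = len(I), len(J)
          d = int(round(tau / dt))                 # delay expressed in steps
          x_hist = np.zeros((d + 1, n))            # constant zero initial history
          y_hist = np.zeros((d + 1, m))
          x, y = x_hist[-1].copy(), y_hist[-1].copy()
          for _ in range(int(T / dt)):
              x_del, y_del = x_hist[0], y_hist[0]  # states tau seconds in the past
              x_new = x + dt * (-a * x + P @ f(y_del) + I)
              y_new = y + dt * (-b * y + Q @ g(x_del) + J)
              x_hist = np.vstack([x_hist[1:], x_new])   # slide the delay buffer
              y_hist = np.vstack([y_hist[1:], y_new])
              x, y = x_new, y_new
          return x, y

      # Illustrative 2+2-neuron BAM that settles to a unique equilibrium.
      x_eq, y_eq = simulate_bam(a=np.array([1.0, 1.2]), b=np.array([1.1, 0.9]),
                                P=np.array([[0.2, -0.1], [0.3, 0.1]]),
                                Q=np.array([[0.1, 0.2], [-0.2, 0.1]]),
                                I=np.array([0.5, -0.3]), J=np.array([0.2, 0.4]),
                                tau=0.5)
      print("state after 10 s:", x_eq, y_eq)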

  3. Stability analysis of discrete-time BAM neural networks based on standard neural network models

    Institute of Scientific and Technical Information of China (English)

    ZHANG Sen-lin; LIU Mei-qin

    2005-01-01

    To facilitate stability analysis of discrete-time bidirectional associative memory (BAM) neural networks, they were converted into novel neural network models, termed standard neural network models (SNNMs), which interconnect linear dynamic systems and bounded static nonlinear operators. By combining a number of different Lyapunov functionals with the S-procedure, some useful criteria for the global asymptotic stability and global exponential stability of the equilibrium points of SNNMs were derived. These stability conditions were formulated as linear matrix inequalities (LMIs), so the global stability of discrete-time BAM neural networks could be analyzed using the stability results for the SNNMs. Compared to existing stability analysis methods, the proposed approach is easy to implement, less conservative, and applicable to other recurrent neural networks.
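
    The "stability conditions formulated as linear matrix inequalities" reduce, in practice, to semidefinite feasibility tests. A hedged sketch of such a test, assuming the cvxpy package and using the plain discrete-time Lyapunov inequality A^T P A - P < 0 on an illustrative linear part (the actual SNNM criteria involve larger block LMIs):

      import cvxpy as cp
      import numpy as np

      A = np.array([[0.5, 0.2],
                    [-0.1, 0.3]])                 # hypothetical linear dynamics
      n = A.shape[0]
      P = cp.Variable((n, n), symmetric=True)
      eps = 1e-6
      constraints = [P >> eps * np.eye(n),                      # P positive definite
                     A.T @ P @ A - P << -eps * np.eye(n)]       # decrease condition
      prob = cp.Problem(cp.Minimize(0), constraints)            # pure feasibility problem
      prob.solve()
      print("LMI feasible (stable):", prob.status == cp.OPTIMAL)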

  4. GLOBAL DYNAMICS OF DELAYED BIDIRECTIONAL ASSOCIATIVE MEMORY (BAM) NEURAL NETWORKS

    Institute of Scientific and Technical Information of China (English)

    ZHOU Jin; LIU Zeng-rong; XIANG Lan

    2005-01-01

    Without assuming smoothness, monotonicity, or boundedness of the activation functions, some novel criteria for the existence and global exponential stability of the equilibrium point of delayed bidirectional associative memory (BAM) neural networks are established by applying Lyapunov functional methods and matrix-algebraic techniques. It is shown that the new conditions, presented in terms of a nonsingular M-matrix determined by the network parameters, the connection matrix, and the Lipschitz constants of the activation functions, are not only simple and practical but also easier to check and less conservative than those imposed by similar results in the recent literature.

  5. LMI-based approach for global asymptotic stability analysis of continuous BAM neural networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Sen-lin; LIU Mei-qin

    2005-01-01

    Studies on the stability of the equilibrium points of continuous bidirectional associative memory (BAM) neural networks have yielded many useful results. A novel neural network model, called the standard neural network model (SNNM), is introduced. Using a state affine transformation, BAM neural networks are converted to SNNMs. Some sufficient conditions for the global asymptotic stability of continuous BAM neural networks are derived from studies of the SNNMs' stability. These conditions are formulated as easily verifiable linear matrix inequalities (LMIs) whose conservativeness is relatively low. The proposed approach extends known stability results and can also be applied to other forms of recurrent neural networks (RNNs).

  6. Global Stability of Almost Periodic Solution of a Class of Neutral-Type BAM Neural Networks

    Directory of Open Access Journals (Sweden)

    Tetie Pan

    2012-01-01

    Full Text Available A class of BAM neural networks with variable coefficients and neutral delays is investigated. By employing a fixed-point theorem, the exponential dichotomy, and differential inequality techniques, we obtain some sufficient conditions to ensure the existence and global exponential stability of the almost periodic solution. This is the first investigation of the almost periodic solution of neutral-type BAM neural networks; the results are new and extend previously known results.

  7. Global Stability of Almost Periodic Solution of a Class of Neutral-Type BAM Neural Networks

    OpenAIRE

    Tetie Pan; Bao Shi; Jian Yuan

    2012-01-01

    A class of BAM neural networks with variable coefficients and neutral delays is investigated. By employing a fixed-point theorem, the exponential dichotomy, and differential inequality techniques, we obtain some sufficient conditions to ensure the existence and global exponential stability of the almost periodic solution. This is the first investigation of the almost periodic solution of neutral-type BAM neural networks; the results are new and extend previously known results.

  8. Stability analysis of extended discrete-time BAM neural networks based on LMI approach

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    We propose a new approach for analyzing the global asymptotic stability of extended discrete-time bidirectional associative memory (BAM) neural networks. Using the Euler rule, we discretize continuous-time BAM neural networks into extended discrete-time BAM neural networks with non-threshold activation functions. We present some conditions under which the neural networks have unique equilibrium points. To judge the global asymptotic stability of the equilibrium points, we introduce a new neural network model, the standard neural network model (SNNM). For SNNMs, we derive sufficient conditions for the global asymptotic stability of the equilibrium points, formulated as linear matrix inequalities (LMIs). We transform the discrete-time BAM into an SNNM and apply the general SNNM result to determine the global asymptotic stability of the discrete-time BAM. The proposed approach extends known stability results, is less conservative, can be verified easily, and can also be applied to other forms of recurrent neural networks.
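
    For reference, the Euler rule mentioned above replaces x'(t) = -A x(t) + W1 f(y(t)) + I by x[k+1] = x[k] + h(-A x[k] + W1 f(y[k]) + I), and likewise for y. A small sketch under that assumption, with made-up matrices and step size:

      import numpy as np

      def euler_discretize_bam(A, B, W1, W2, I, J, h, f=np.tanh, g=np.tanh):
          """Return the one-step map of the Euler-discretized BAM."""
          Ex, Ey = np.eye(len(I)) - h * A, np.eye(len(J)) - h * B
          def step(x, y):
              return Ex @ x + h * (W1 @ f(y) + I), Ey @ y + h * (W2 @ g(x) + J)
          return step

      step = euler_discretize_bam(A=np.diag([1.0, 1.2]), B=np.diag([0.9, 1.1]),
                                  W1=np.array([[0.2, -0.1], [0.3, 0.1]]),
                                  W2=np.array([[0.1, 0.2], [-0.2, 0.1]]),
                                  I=np.array([0.5, -0.3]), J=np.array([0.2, 0.4]), h=0.01)
      x, y = np.zeros(2), np.zeros(2)
      for _ in range(5000):                      # iterate the discrete-time BAM
          x, y = step(x, y)
      print("discrete-time equilibrium estimate:", x, y)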

  9. Passivity of memristor-based BAM neural networks with different memductance and uncertain delays.

    Science.gov (United States)

    Anbuvithya, R; Mathiyalagan, K; Sakthivel, R; Prakash, P

    2016-08-01

    This paper addresses the passivity problem for a class of memristor-based bidirectional associative memory (BAM) neural networks with uncertain time-varying delays. In particular, the proposed memristive BAM neural network is formulated with two different types of memductance functions. By constructing a proper Lyapunov-Krasovskii functional and using differential inclusion theory, a new set of sufficient conditions is obtained in terms of linear matrix inequalities which guarantees passivity of the considered neural networks. Finally, two numerical examples are given to illustrate the effectiveness of the proposed theoretical results. PMID:27468321

  11. Global asymptotic stability of delay BAM neural networks with impulses

    Energy Technology Data Exchange (ETDEWEB)

    Lou Xuyang [Research Center of Control Science and Engineering, Southern Yangtze University, 1800 Lihu Road, Wuxi, Jiangsu 214122 (China); Cui Baotong [Research Center of Control Science and Engineering, Southern Yangtze University, 1800 Lihu Road, Wuxi, Jiangsu 214122 (China)]. E-mail: btcui@sohu.com

    2006-08-15

    The global asymptotic stability of delayed bi-directional associative memory neural networks with impulses is studied by constructing suitable Lyapunov functionals. Sufficient conditions, which are independent of the delay, are obtained for the global asymptotic stability of the neural networks. Some illustrative examples are given to demonstrate the effectiveness of the obtained results.

  12. Global Exponential Stability of Delayed Cohen-Grossberg BAM Neural Networks with Impulses on Time Scales

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2009-01-01

    Full Text Available Based on the theory of calculus on time scales, the homeomorphism theory, the Lyapunov functional method, and some analysis techniques, sufficient conditions are obtained for the existence, uniqueness, and global exponential stability of the equilibrium point of Cohen-Grossberg bidirectional associative memory (BAM) neural networks with distributed delays and impulses on time scales. This is the first application of time-scale calculus to unify discrete-time and continuous-time Cohen-Grossberg BAM neural networks with impulses under the same framework.

  13. Almost Periodic Solutions for Neutral-Type BAM Neural Networks with Delays on Time Scales

    OpenAIRE

    Yongkun Li; Li Yang

    2013-01-01

    Using the existence of the exponential dichotomy of linear dynamic equations on time scales, a fixed point theorem and the theory of calculus on time scales, we obtain some sufficient conditions for the existence and exponential stability of almost periodic solutions for a class of neutral-type BAM neural networks with delays on time scales. Finally, a numerical example illustrates the feasibility of our results and also shows that the continuous-time neural network and its dis...

  14. Periodic Solution to BAM-type Cohen-Grossberg Neural Network with Time-varying Delays

    Institute of Scientific and Technical Information of China (English)

    An-ping Chen; Qun-hua Gu

    2011-01-01

    By using the continuation theorem of Mawhin's coincidence degree theory and the Lyapunov functional method, some sufficient conditions are obtained to ensure the existence, uniqueness, and global exponential stability of the periodic solution to BAM-type Cohen-Grossberg neural networks with time-varying delays.

  15. Delay-Dependent Exponential Stability Criterion for BAM Neural Networks with Time-Varying Delays

    Institute of Scientific and Technical Information of China (English)

    Wei-Wei Su; Yi-Ming Chen

    2008-01-01

    By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, a delay-dependent stability criterion is derived to ensure the exponential stability of bi-directional associative memory (BAM) neural networks with time-varying delays. The proposed condition can be checked easily with the LMI Control Toolbox in MATLAB. A numerical example is given to demonstrate the effectiveness of our results.

  16. Global Exponential Stability of Periodic Oscillation for Nonautonomous BAM Neural Networks with Distributed Delay

    Directory of Open Access Journals (Sweden)

    Ma Zhongjun

    2009-01-01

    Full Text Available We derive a new criterion for checking the global stability of periodic oscillation of bidirectional associative memory (BAM) neural networks with periodic coefficients and distributed delay, and find that the criterion relies on the Lipschitz constants of the signal transmission functions, the weights of the neural network, and the delay kernels. The proposed approach transforms the original interacting network into a matrix-analysis problem that is easy to check, thereby significantly reducing the computational complexity and making the analysis of periodic oscillation feasible even for large-scale networks.

  17. Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays.

    Science.gov (United States)

    Li, Hongfei; Jiang, Haijun; Hu, Cheng

    2016-03-01

    In this paper, we investigate a class of memristor-based BAM neural networks with time-varying delays. Under the framework of Filippov solutions, boundedness and ultimate boundedness of solutions of memristor-based BAM neural networks are guaranteed by the chain rule and inequality techniques. Moreover, a new method involving a Yoshizawa-like theorem is employed to establish the existence of a periodic solution. By applying the theory of set-valued maps and functional differential inclusions, a suitable Lyapunov functional and some new testable algebraic criteria are derived to ensure the uniqueness and global exponential stability of the periodic solution of memristor-based BAM neural networks. The obtained results expand and complement some previous work on memristor-based BAM neural networks. Finally, a numerical example is provided to show the applicability and effectiveness of our theoretical results. PMID:26752438

  19. STABILITY OF DISCRETE-TIME COHEN-GROSSBERG BAM NEURAL NETWORKS WITH DELAYS

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, we study the existence and stability of an equilibrium of discrete-time Cohen-Grossberg BAM neural networks with delays. We obtain several sufficient conditions ensuring the existence and stability of an equilibrium of such systems, using a discrete Halanay-type inequality and vector Lyapunov methods. In addition, we show that the proposed sufficient conditions are independent of the delay parameter. An example is given to demonstrate the effectiveness of the results obtained.

  20. Global robust asymptotic stability of variable-time impulsive BAM neural networks.

    Science.gov (United States)

    Saylı, Mustafa; Yılmaz, Enes

    2014-12-01

    In this paper, the global robust asymptotic stability of the equilibrium point for a more general class of bidirectional associative memory (BAM) neural networks with variable impulse times is addressed. Unlike most existing studies, the present study focuses on the case of non-fixed impulse times. By means of the B-equivalence method, which was introduced in Akhmet (2003, 2005, 2009, 2010), Akhmet and Perestyuk (1990) and Akhmet and Turan (2009), we reduce these networks to a fixed-time impulsive neural network system. Sufficient conditions ensuring the existence, uniqueness and global robust asymptotic stability of the equilibrium point are obtained by employing an appropriate Lyapunov function and a linear matrix inequality (LMI). Finally, we give one illustrative example to show the effectiveness of the theoretical results.

  1. New LMI-Based Passivity Criteria for Neutral-Type BAM Neural Networks with Randomly Occurring Uncertainties

    Science.gov (United States)

    Sakthivel, R.; Anbuvithya, R.; Mathiyalagan, K.; Arunkumar, A.; Prakash, P.

    2013-12-01

    In this paper, we study the passivity analysis of a class of neutral-type BAM neural networks with time-varying delays, randomly occurring uncertainties, and generalized activation functions. A linear matrix inequality (LMI) approach, together with the construction of a proper Lyapunov-Krasovskii functional involving triple integrals and an augmented-type constraint, is used to derive a new set of sufficient conditions for the required result. More precisely, we first derive the passivity condition for BAM neural networks without uncertainties and then extend the result to the case with randomly occurring uncertainties. In particular, the presented results depend not only on the discrete delay but also on the distributed time-varying delay. The obtained passivity conditions are formulated in terms of linear matrix inequalities that can be easily solved using the MATLAB LMI toolbox. Finally, the effectiveness of the proposed passivity criterion is demonstrated through a numerical example.

  2. EXISTENCE AND EXPONENTIAL STABILITY OF ALMOST PERIODIC SOLUTIONS TO BAM RECURRENT NEURAL NETWORKS WITH TRANSMISSION DELAYS AND CONTINUOUSLY DISTRIBUTED DELAYS

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    In this paper, a class of bidirectional associative memory (BAM) recurrent neural networks with delays is studied. By a fixed point theorem and a Lyapunov functional, some new sufficient conditions for the existence, uniqueness and global exponential stability of the almost periodic solutions are established. These conditions are easy to verify, and our results complement previously known results.

  3. Delay-Dependent Exponential Stability for Discrete-Time BAM Neural Networks with Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Yonggang Chen

    2008-01-01

    Full Text Available This paper considers the delay-dependent exponential stability of discrete-time BAM neural networks with time-varying delays. By constructing a new Lyapunov functional, an improved delay-dependent exponential stability criterion is derived in terms of a linear matrix inequality (LMI). Moreover, in order to reduce conservativeness, some slack matrices are introduced. Two numerical examples are presented to show the effectiveness and reduced conservativeness of the proposed method.

  4. On global exponential stability and existence of periodic solutions for BAM neural networks with distributed delays and reaction-diffusion terms

    Energy Technology Data Exchange (ETDEWEB)

    Lou Xuyang [Research Center of Control Science and Engineering, Southern Yangtze University, 1800 Lihu Road, Wuxi, Jiangsu 214122 (China)], E-mail: louxuyang28945@163.com; Cui Baotong [Research Center of Control Science and Engineering, Southern Yangtze University, 1800 Lihu Road, Wuxi, Jiangsu 214122 (China)], E-mail: btcui@sohu.com; Wu Wei [Research Center of Control Science and Engineering, Southern Yangtze University, 1800 Lihu Road, Wuxi, Jiangsu 214122 (China)

    2008-05-15

    Both the exponential stability and the existence of periodic solutions are considered for a class of bi-directional associative memory (BAM) neural networks with distributed delays and reaction-diffusion terms, by constructing a suitable Lyapunov functional and using the Young inequality technique. General sufficient conditions are given ensuring the global exponential stability and the existence of periodic solutions of BAM neural networks with distributed delays and reaction-diffusion terms. Earlier results are extended and improved, and an illustrative example is given to demonstrate the effectiveness of the results in this paper.

  5. Stability analysis of BAM neural networks with time-varying delays

    Institute of Scientific and Technical Information of China (English)

    ZHANG Huaguang; WANG Zhanshan

    2007-01-01

    Some new criteria for the global asymptotic stability of the equilibrium point of bi-directional associative memory neural networks with time-varying delays are presented. The obtained results take the form of linear matrix inequalities, which can be solved efficiently. A comparison with some previously reported results in the literature demonstrates that the results in this paper provide one more set of criteria for determining the stability of bi-directional associative memory neural networks with delays.

  6. Fold-Hopf bifurcation in a simplified four-neuron BAM (bidirectional associative memory) neural network with two delays

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The bidirectional associative memory (BAM) neural network with four neurons and two delays is considered in the present paper. A linear stability analysis for the trivial equilibrium is first employed to provide a possible critical point at which a zero eigenvalue and a pair of pure imaginary eigenvalues occur in the corresponding characteristic equation. A fold-Hopf bifurcation is proved to happen at this critical point by nonlinear analysis. The coupling strength and the delay are considered as bifurcation parameters to investigate the dynamical behaviors derived from the fold-Hopf bifurcation. Various dynamical behaviors are qualitatively classified in the neighborhood of the fold-Hopf bifurcation point by using the center manifold reduction (CMR) together with the normal form. The bifurcating periodic solutions are expressed analytically in an approximate form. The validity of the results is shown by their consistency with the numerical simulation.

  7. Global asymptotic stability of BAM neural networks with distributed delays and reaction-diffusion terms

    Energy Technology Data Exchange (ETDEWEB)

    Cui Baotong [Research Center of Control Science and Engineering, Southern Yangtze University, 1800 Lihu Rd., Wuxi, Jiangsu 214122 (China)] e-mail: btcui@sohu.com; Lou Xuyang [Research Center of Control Science and Engineering, Southern Yangtze University, 1800 Lihu Rd., Wuxi, Jiangsu 214122 (China)

    2006-03-01

    The global asymptotic stability of bi-directional associative memory neural networks with distributed delays and reaction-diffusion terms is studied by using analysis techniques and a Lyapunov functional. A sufficient condition is proposed. Two numerical examples are given to show the correctness of our analysis.

  8. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the framework of neurobiology, introducing the multi-layer perceptron, learning, and its use as a data classifier. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools for event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  9. Inversion of oceanic chlorophyll concentrations by neural networks

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Neural networks (NNs) for the inversion of chlorophyll concentrations from remote sensing reflectance measurements were designed and trained on a subset of the SeaBAM data set. The remaining SeaBAM data were then used to evaluate the performance of the NNs, which was compared with that of the SeaBAM empirical algorithms. The NNs achieved better inversion accuracy than the empirical algorithms over most of the chlorophyll concentration range, especially in the intermediate and high chlorophyll regions and in Case II waters. Systematic overestimation existed in the very low chlorophyll (<0.031 mg/m3) region, and little improvement was obtained by changing the size of the training data set.
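
    A rough sketch of this kind of inversion network, assuming scikit-learn's MLPRegressor and synthetic stand-in data (the study itself trained on a subset of SeaBAM reflectances):

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 2000
      refl = rng.uniform(0.001, 0.02, size=(n, 5))        # 5 toy reflectance bands
      # made-up band-ratio relation standing in for the real bio-optical signal
      log_chl = -1.5 * np.log10(refl[:, 1] / refl[:, 4]) + rng.normal(0, 0.05, n)

      X_tr, X_te, y_tr, y_te = train_test_split(refl, log_chl, random_state=0)
      net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
      net.fit(X_tr, y_tr)
      print("R^2 on held-out data:", net.score(X_te, y_te))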

  10. Application of MBAM Neural Network in CNC Machine Fault Diagnosis

    Institute of Scientific and Technical Information of China (English)

    宋刚; 胡德金

    2004-01-01

    In order to improve bidirectional associative memory (BAM) performance, a modified BAM model (MBAM) is used to enhance the neural network's memory capacity and error-correction capability. Theoretical analysis and experimental results show that MBAM performs much better than the original BAM. The MBAM is applied to computer numerical control (CNC) machine fault diagnosis; it not only completes fault diagnosis correctly but also has fairly high error-correction capability for disturbed input information sequences. Moreover, the MBAM model is a convenient and effective method for solving the problem of CNC electric system fault diagnosis.
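
    For context, the unmodified Kosko BAM that MBAM builds on stores bipolar pattern pairs with an outer-product rule and recalls them by iterating between the two layers, which is what gives the error-correction behavior mentioned above. A minimal sketch with hypothetical (symptom, fault-code) pairs:

      import numpy as np

      bipolar = lambda v: np.where(v >= 0, 1, -1)      # hard threshold to {-1, +1}

      def bam_recall(W, x, steps=10):
          """Iterate x -> y -> x ... until the pair stabilizes."""
          y = bipolar(W.T @ x)
          for _ in range(steps):
              x_new = bipolar(W @ y)
              y_new = bipolar(W.T @ x_new)
              if np.array_equal(x_new, x) and np.array_equal(y_new, y):
                  break
              x, y = x_new, y_new
          return x, y

      # Two hypothetical (symptom pattern, fault code) pairs, stored by outer products.
      X = np.array([[1, 1, 1, -1, -1, -1], [1, -1, 1, 1, -1, 1]])
      Y = np.array([[1, 1, -1, -1], [1, -1, 1, -1]])
      W = sum(np.outer(x, y) for x, y in zip(X, Y))

      noisy = X[0].copy(); noisy[5] *= -1              # one corrupted symptom bit
      x_rec, y_rec = bam_recall(W, noisy)
      print("recovered symptoms:", x_rec)              # equals X[0]
      print("recovered fault code:", y_rec)            # equals Y[0]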

  11. Neural Network Applications

    NARCIS (Netherlands)

    Vonk, E.; Jain, L.C.; Veelenturf, L.P.J.

    1995-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  12. A parallel processing VLSI BAM engine.

    Science.gov (United States)

    Hasan, S R; Siong, N K

    1997-01-01

    In this paper, emerging parallel/distributed architectures are explored for the digital VLSI implementation of the adaptive bidirectional associative memory (BAM) neural network. A single-instruction-stream, multiple-data-stream (SIMD) parallel processing architecture is developed for the adaptive BAM neural network, taking advantage of the inherent parallelism in BAM. This novel neural processor architecture is named the sliding feeder BAM array processor (SLiFBAM). The SLiFBAM processor can be viewed as a two-stroke neural processing engine; it has four operating modes: learn pattern, evaluate pattern, read weight, and write weight. The design of a SLiFBAM VLSI processor chip is also described. Using 2-μm scalable CMOS technology, a SLiFBAM processor chip with 4+4 neurons and eight modules of 256x5-bit local weight-storage SRAM was integrated on a 6.9x7.4 mm² prototype die. The system architecture is highly flexible and modular, enabling the construction of larger BAM networks of up to 252 neurons using multiple SLiFBAM chips.

  13. Holographic neural networks

    OpenAIRE

    Manger, R

    1998-01-01

    Holographic neural networks are a new and promising type of artificial neural networks. This article gives an overview of the holographic neural technology and its possibilities. The theoretical principles of holographic networks are first reviewed. Then, some other papers are presented, where holographic networks have been applied or experimentally evaluated. A case study dealing with currency exchange rate prediction is described in more detail.

  14. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer of a neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide nonlinearity. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
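
    A minimal sketch of the operation described here: a morphological layer replaces the weighted sum sum_j w_ij x_j with the max-plus form max_j (x_j + w_ij) (or min-plus), so the computation is nonlinear before any thresholding. Weights and inputs below are arbitrary illustrative values.

      import numpy as np

      def morphological_layer(x, W, mode="max"):
          """x: (n,) input, W: (m, n) additive weights -> (m,) output."""
          s = x[None, :] + W                 # add instead of multiply
          return s.max(axis=1) if mode == "max" else s.min(axis=1)

      x = np.array([0.2, -1.0, 0.7])
      W = np.array([[0.0, 0.5, -0.3],
                    [1.0, -2.0, 0.1]])
      print(morphological_layer(x, W))       # per neuron: max over (x_j + w_ij)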

  15. Chaotic diagonal recurrent neural network

    Institute of Scientific and Technical Information of China (English)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and design its structure and learning algorithm. The multilayer feedforward neural network, the diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks.

  16. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  17. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  18. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  19. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.
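
    A hedged sketch of the two ideas, assuming scikit-learn: cross-validation selects the network size, and an ensemble of similarly configured networks (differing only in initialization) is averaged to reduce the remaining generalization error. Data and candidate sizes are synthetic placeholders.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV, train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.ensemble import VotingClassifier

      X, y = make_classification(n_samples=800, n_features=10, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # 1) cross-validation over architecture (hidden-layer width)
      search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=0),
                            {"hidden_layer_sizes": [(5,), (10,), (20,)]}, cv=5)
      search.fit(X_tr, y_tr)
      best = search.best_params_["hidden_layer_sizes"]

      # 2) ensemble of similar networks differing only in random initialization
      members = [(f"net{i}", MLPClassifier(hidden_layer_sizes=best, max_iter=1000,
                                           random_state=i)) for i in range(5)]
      ensemble = VotingClassifier(members, voting="soft").fit(X_tr, y_tr)
      print("single net:", search.score(X_te, y_te),
            "ensemble:", ensemble.score(X_te, y_te))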

  20. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  1. The Future of Neural Networks

    OpenAIRE

    Lakra, Sachin; T. V. Prasad; G. Ramakrishna

    2012-01-01

    The paper describes some recent developments in neural networks and discusses the applicability of neural networks in the development of a machine that mimics the human brain. The paper mentions a new architecture, the pulsed neural network, which is being considered as the next generation of neural networks. The paper also explores the use of memristors in the development of a brain-like computer called the MoNETA. A new model, multi/infinite dimensional neural networks, is a recent developme...

  2. Neural Networks in Data Mining

    OpenAIRE

    Priyanka Gaur

    2012-01-01

    The application of neural networks in data mining is very wide. Although neural networks may have complex structure, long training times, and representations of results that are not easy to interpret, they tolerate noisy data well and achieve high accuracy, which makes them preferable in data mining. In this paper, data mining based on neural networks is researched in detail, and the key technologies and ways to achieve data mining based on neural networks are also researched.

  3. Neural networks and graph theory

    Institute of Scientific and Technical Information of China (English)

    许进; 保铮

    2002-01-01

    The relationships between artificial neural networks and graph theory are considered in detail. The applications of artificial neural networks to many difficult problems of graph theory, especially NP-complete problems, and the applications of graph theory to artificial neural networks are discussed. For example, graph theory is used to study the pattern classification problem for discrete-type feedforward neural networks and the stability analysis of feedback artificial neural networks.

  4. Introduction to neural networks

    International Nuclear Information System (INIS)

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuroscience. It draws its methods in large degree from statistical physics, and its potential applications lie mainly in computer science and engineering. Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives a historical presentation of the development of neural networks and of the interest in performing complex tasks. Then, an exhaustive overview of data management and network computation methods is given: supervised learning and the associative memory problem, the capacity of networks, Perceptron networks, functional link networks, Madaline (Multiple Adalines) networks, back-propagation networks, reduced Coulomb energy (RCE) networks, unsupervised learning, and competitive learning and vector quantization. An example of application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors at LEAR at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  5. Neural networks in seismic discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Dowla, F.U.

    1995-01-01

    Neural networks are powerful and elegant computational tools that can be used in the analysis of geophysical signals. At Lawrence Livermore National Laboratory, we have developed neural networks to solve problems in seismic discrimination, event classification, and seismic and hydrodynamic yield estimation. Other researchers have used neural networks for seismic phase identification. We are currently developing neural networks to estimate depths of seismic events using regional seismograms. In this paper different types of network architecture and representation techniques are discussed. We address the important problem of designing neural networks with good generalization capabilities. Examples of neural networks for treaty verification applications are also described.

  6. Rule Extraction:Using Neural Networks or for Neural Networks?

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Zhou

    2004-01-01

    In research on rule extraction from neural networks, fidelity describes how well the rules mimic the behavior of a neural network, while accuracy describes how well the rules generalize. This paper identifies the fidelity-accuracy dilemma. It argues that rule extraction using neural networks and rule extraction for neural networks should be distinguished according to their different goals, and that fidelity and accuracy, respectively, should be excluded from the corresponding rule-quality evaluation frameworks.

  7. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note provides an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  8. Exponential synchronization of general chaotic delayed neural networks via hybrid feedback

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper investigates the exponential synchronization problem for some chaotic delayed neural networks based on the proposed general neural network model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, and which covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs). By virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique, some exponential synchronization criteria are derived. Using the drive-response concept, hybrid feedback controllers are designed to synchronize two identical chaotic neural networks based on those synchronization criteria. Finally, detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.
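
    A toy illustration of the drive-response idea, simplified to a delay-free (and not necessarily chaotic) Hopfield-type network with a purely linear error-feedback controller u = -k(y - x); the weights and gain are made-up values, not those of the paper.

      import numpy as np

      W = np.array([[2.0, -1.2],
                    [-1.8, 1.5]])
      k = 8.0                                  # feedback gain, chosen large enough
      dt, steps = 1e-3, 20000

      def f(u):                                # network right-hand side
          return -u + W @ np.tanh(u)

      x = np.array([0.4, -0.6])                # drive system state
      y = np.array([-1.0, 1.2])                # response system state
      for _ in range(steps):
          e = y - x
          x = x + dt * f(x)
          y = y + dt * (f(y) - k * e)          # response driven by error feedback
      print("final synchronization error:", np.linalg.norm(y - x))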

  9. Electronic Neural Networks

    Science.gov (United States)

    Lambe, John; Moopen, Alexander; Thakoor, Anilkumar P.

    1988-01-01

    Memory based on neural network models is content-addressable and fault-tolerant. System includes electronic equivalent of synaptic network; in particular, a matrix of programmable binary switching elements over which data are distributed. Switches programmed in parallel by outputs of serial-input/parallel-output shift registers. Input and output terminals of bank of high-gain nonlinear amplifiers connected in nonlinear-feedback configuration by switches and by memory-prompting shift registers.

  10. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  11. Quantum Neural Networks

    CERN Document Server

    Gupta, S; Gupta, Sanjay

    2002-01-01

    This paper initiates the study of quantum computing within the constraints of using a polylogarithmic (O(log^k n), k ≥ 1) number of qubits and a polylogarithmic number of computation steps. The current research in the literature has focussed on using a polynomial number of qubits. A new mathematical model of computation called Quantum Neural Networks (QNNs) is defined, building on Deutsch's model of quantum computational network. The model introduces a nonlinear and irreversible gate, similar to the speculative operator defined by Abrams and Lloyd. The precise dynamics of this operator are defined and, while giving examples in which nonlinear Schrödinger's equations are applied, we speculate on its possible implementation. The many practical problems associated with the current model of quantum computing are alleviated in the new model. It is shown that QNNs of logarithmic size and constant depth have the same computational power as threshold circuits, which are used for modeling neural network...

  12. Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Kapil Nahar

    2012-12-01

    Full Text Available An artificial neural network is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example.

  14. Neural networks for triggering

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B. (Fermi National Accelerator Lab., Batavia, IL (USA)); Campbell, M. (Michigan Univ., Ann Arbor, MI (USA)); Bedeschi, F. (Istituto Nazionale di Fisica Nucleare, Pisa (Italy)); Chriss, N.; Bowers, C. (Chicago Univ., IL (USA)); Nesti, F. (Scuola Normale Superiore, Pisa (Italy))

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  15. Global stability of bidirectional associative memory neural networks with continuously distributed delays

    Institute of Scientific and Technical Information of China (English)

    张强; 马润年; 许进

    2003-01-01

    Global asymptotic stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with continuously distributed delays is studied. Under two mild assumptions on the activation functions, two sufficient conditions ensuring global stability of such networks are derived by utilizing a Lyapunov functional and some inequality analysis techniques. The results here extend some previous results. A numerical example is given showing the validity of our method.

  16. Space-Time Neural Networks

    Science.gov (United States)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.

  17. via dynamic neural networks

    Directory of Open Access Journals (Sweden)

    J. Reyes-Reyes

    2000-01-01

    Full Text Available In this paper, an adaptive technique is suggested to provide the passivity property for a class of partially known SISO nonlinear systems. A simple Dynamic Neural Network (DNN), containing only two neurons and no hidden layers, is used to identify the unknown nonlinear system. By means of a Lyapunov-like analysis, the new learning law for this DNN, guaranteeing both successful identification and passivation effects, is derived. Based on this adaptive DNN model, an adaptive feedback controller, serving a wide class of nonlinear systems with an a priori incomplete model description, is designed. Two typical examples illustrate the effectiveness of the suggested approach.

  18. Interacting neural networks.

    Science.gov (United States)

    Metzler, R; Kinzel, W; Kanter, I

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training, each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as decision-making algorithms in a model of a closed market (the El Farol Bar problem, or the Minority Game); in this game, a set of agents who have to make a binary decision is considered, and each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random. PMID:11088736
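
    A rough sketch of the minority-game setup described above, with perceptron agents acting on the recent history of minority decisions and updated by a simple perceptron-style rule toward the minority side; population size, memory length and learning rate are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)
      n_agents, mem, rounds, lr = 51, 8, 2000, 0.05
      W = rng.normal(size=(n_agents, mem))          # one weight vector per agent
      history = rng.choice([-1, 1], size=mem).astype(float)
      wins = np.zeros(n_agents)

      for t in range(rounds):
          decisions = np.sign(W @ history)
          decisions[decisions == 0] = 1
          minority = -np.sign(decisions.sum()) or 1  # side chosen by fewer agents
          wins += decisions == minority
          # perceptron-style update toward the minority decision
          W += lr * (minority - decisions)[:, None] * history[None, :]
          history = np.roll(history, -1); history[-1] = minority

      # at most (n_agents - 1) / (2 * n_agents) of the agents can win a given round
      print("mean success rate per agent:", wins.mean() / rounds)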

  19. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  20. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  1. Artificial neural networks in NDT

    International Nuclear Information System (INIS)

    Artificial neural networks, simply known as neural networks, have attracted considerable interest in recent years, largely because of a growing recognition of the potential of these computational paradigms as powerful alternative models to conventional pattern recognition or function approximation techniques. The neural network approach is having a profound effect on almost all fields and has been utilised in fields where experimental interdisciplinary work is being carried out. Being a multidisciplinary subject with a broad knowledge base, Nondestructive Testing (NDT) or Nondestructive Evaluation (NDE) is no exception. This paper explains typical applications of neural networks in NDT/NDE. Three promising types of neural networks are highlighted, namely, back-propagation, binary Hopfield and Kohonen's self-organising maps. (Author)

  2. Neural networks in astronomy.

    Science.gov (United States)

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo

    2003-01-01

    In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread also in the astronomical community, which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases which is foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects is, however, posing unprecedented data mining and visualization problems which will find a rather natural and user-friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and is therefore structured as follows: after giving a short introduction to the subject, we summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  3. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  4. Medical diagnosis using neural network

    CERN Document Server

    Kamruzzaman, S M; Siddiquee, Abu Bakar; Mazumder, Md Ehsanul Hoque

    2010-01-01

    This research searches for alternatives for the resolution of complex medical diagnoses where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm with backpropagation offers an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to obtain an optimal size of the neural network. The MFNNCA was tested on several benchmark classification problems including cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce optimal neural networ...

  5. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad yet in-depth introduction to neural networks and machine learning in a statistical framework, this book offers a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered, with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  6. Neural Networks Of VLSI Components

    Science.gov (United States)

    Eberhardt, Silvio P.

    1991-01-01

    Concept for design of electronic neural network calls for assembly of very-large-scale integrated (VLSI) circuits of few standard types. Each VLSI chip, which contains both analog and digital circuitry, used in modular or "building-block" fashion by interconnecting it in any of variety of ways with other chips. Feedforward neural network in typical situation operates under control of host computer and receives inputs from, and sends outputs to, other equipment.

  7. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: 2008-Feb.

  8. Neural Networks for Fingerprint Recognition

    OpenAIRE

    Baldi, Pierre; Chauvin, Yves

    1993-01-01

    After collecting a data base of fingerprint images, we design a neural network algorithm for fingerprint recognition. When presented with a pair of fingerprint images, the algorithm outputs an estimate of the probability that the two images originate from the same finger. In one experiment, the neural network is trained using a few hundred pairs of images and its performance is subsequently tested using several thousand pairs of images originated from a subset of the database corresponding to...

  9. Neural Networks and Photometric Redshifts

    OpenAIRE

    Tagliaferri, Roberto; Longo, Giuseppe; Andreon, Stefano; Capozziello, Salvatore; Donalek, Ciro; Giordano, Gerardo

    2002-01-01

    We present a neural network based approach to the determination of photometric redshifts. The method was tested on the Sloan Digital Sky Survey Early Data Release (SDSS-EDR), reaching an accuracy comparable to and, in some cases, better than SED template fitting techniques. Different neural network architectures have been tested, and the combination of a Multi Layer Perceptron with 1 hidden layer (22 neurons) operated in a Bayesian framework, with a Self Organizing Map used to estimate the accuracy...

  10. Correlational Neural Networks.

    Science.gov (United States)

    Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman

    2016-02-01

    Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches. PMID:26654210

  11. Attracting and Invariant Sets of Cohen-Grossberg-type BAM Networks

    Institute of Scientific and Technical Information of China (English)

    王金华; 向红军; 魏叶梅

    2012-01-01

    A class of Cohen-Grossberg-type BAM neural networks with distributed delays is studied. By applying matrix theory and inequality techniques, we obtain some results about the attracting and invariant sets of the considered system. Moreover, an example is given to demonstrate the feasibility of the obtained results.
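
    For readers unfamiliar with this model class, one representative Cohen-Grossberg-type BAM system with distributed delays can be written as follows. This is a generic textbook-style form, not necessarily the exact system studied in the record above:

      \begin{aligned}
      \dot{x}_i(t) &= -a_i\bigl(x_i(t)\bigr)\Bigl[b_i\bigl(x_i(t)\bigr) - \sum_{j=1}^{m} c_{ji}\, f_j\Bigl(\textstyle\int_0^{\infty} k_{ji}(s)\, y_j(t-s)\, \mathrm{d}s\Bigr) - I_i\Bigr], \quad i=1,\dots,n, \\
      \dot{y}_j(t) &= -\tilde{a}_j\bigl(y_j(t)\bigr)\Bigl[\tilde{b}_j\bigl(y_j(t)\bigr) - \sum_{i=1}^{n} d_{ij}\, g_i\Bigl(\textstyle\int_0^{\infty} \tilde{k}_{ij}(s)\, x_i(t-s)\, \mathrm{d}s\Bigr) - J_j\Bigr], \quad j=1,\dots,m,
      \end{aligned}

    where the a's are amplification functions, the b's are well-behaved functions, f_j and g_i are activation functions, the kernels k and k-tilde model the distributed delays, and I_i, J_j are external inputs. Attracting and invariant sets are then typically characterized through bounds on the activation functions and delay kernels.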

  12. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  13. Phase Transitions of Neural Networks

    OpenAIRE

    Kinzel, Wolfgang

    1997-01-01

    The cooperative behaviour of interacting neurons and synapses is studied using models and methods from statistical physics. The competition between training error and entropy may lead to discontinuous properties of the neural network. This is demonstrated for a few examples: Perceptron, associative memory, learning from examples, generalization, multilayer networks, structure recognition, Bayesian estimate, on-line training, noise estimation and time series generation.

  14. Multigradient for Neural Networks for Equalizers

    Directory of Open Access Journals (Sweden)

    Chulhee Lee

    2003-06-01

    Full Text Available Recently, a new training algorithm, multigradient, has been published for neural networks, and it is reported that the multigradient outperforms backpropagation when neural networks are used as classifiers. When neural networks are used as equalizers in communications, they can be viewed as classifiers. In this paper, we apply the multigradient algorithm to train the neural networks that are used as equalizers. Experiments show that the neural networks trained using the multigradient noticeably outperform the neural networks trained by backpropagation.
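
    As a hedged illustration of the "equalizer as classifier" view described above, the sketch below builds a toy BPSK equalization data set and trains a plain logistic-regression decision rule by gradient descent. The channel taps, window length, and learning rate are illustrative assumptions, and ordinary gradient training stands in for the multigradient algorithm, which the abstract does not specify.

      import numpy as np

      rng = np.random.default_rng(0)

      # BPSK symbols sent through a simple dispersive channel with additive noise.
      symbols = rng.choice([-1.0, 1.0], size=2000)
      channel = np.array([1.0, 0.5, 0.2])              # illustrative impulse response
      received = np.convolve(symbols, channel)[:len(symbols)]
      received += 0.2 * rng.standard_normal(len(symbols))

      # Equalization as classification: from a window of received samples,
      # decide which symbol was transmitted.
      order, delay = 5, 2
      X = np.array([received[i:i + order] for i in range(len(received) - order)])
      y = (symbols[delay:delay + len(X)] > 0).astype(float)

      w, b, lr = np.zeros(order), 0.0, 0.1
      for _ in range(200):                             # plain gradient descent
          p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
          w -= lr * X.T @ (p - y) / len(y)
          b -= lr * np.mean(p - y)

      decisions = (X @ w + b > 0).astype(float)
      print("symbol error rate:", np.mean(decisions != y))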

  15. Video Compression Using Neural Network

    Directory of Open Access Journals (Sweden)

    Sangeeta Mishra

    2012-08-01

    Full Text Available Apart from the existing technology on image compression represented by the series of JPEG, MPEG and H.26x standards, new technologies such as neural networks and genetic algorithms are being developed to explore the future of image coding. Successful applications of neural networks based on the basic back-propagation algorithm are now well established, as are other aspects of neural network involvement in this technology. In this paper, different training algorithms were implemented: gradient descent backpropagation, gradient descent with momentum, gradient descent with adaptive learning rate, gradient descent with momentum and adaptive learning rate, and the Levenberg-Marquardt algorithm. The original video clip is 25 MB and after compression it becomes 21.3 MB, giving a compression ratio of 85.2% and a compression factor of 1.174. It was observed that the picture appears the same size after compression, but differs in clarity.
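
    As a quick check of the figures quoted above, the ratio and factor follow directly from the two file sizes given in the abstract; the sketch below simply reproduces that arithmetic.

      # Reproduce the compression figures quoted in the abstract (sizes in MB).
      original_mb = 25.0
      compressed_mb = 21.3

      compression_ratio = compressed_mb / original_mb    # fraction of the original size kept
      compression_factor = original_mb / compressed_mb   # how many times smaller the output is

      print(f"compression ratio : {compression_ratio:.1%}")   # -> 85.2%
      print(f"compression factor: {compression_factor:.3f}")  # -> 1.174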

  16. Relations Between Wavelet Network and Feedforward Neural Network

    Institute of Scientific and Technical Information of China (English)

    刘志刚; 何正友; 钱清泉

    2002-01-01

    A comparison of construction forms and base functions is made between feedforward neural network and wavelet network. The relations between them are studied from the constructions of wavelet functions or dilation functions in wavelet network by different activation functions in feedforward neural network. It is concluded that some wavelet function is equal to the linear combination of several neurons in feedforward neural network.

  17. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  18. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  19. Neural Networks for Flight Control

    Science.gov (United States)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  20. Bam Earthquake in Iran

    CERN Multimedia

    2004-01-01

    Following their request for help from members of international organisations, the permanent Mission of the Islamic Republic of Iran has given the following bank account number, where you can donate money to help the victims of the Bam earthquake. Re: Bam earthquake 235 - UBS 311264.35L Bubenberg Platz 3001 BERN

  1. Neural Network Adaptations to Hardware Implementations

    OpenAIRE

    Moerland, Perry; Fiesler, Emile

    1997-01-01

    In order to take advantage of the massive parallelism offered by artificial neural networks, hardware implementations are essential.However, most standard neural network models are not very suitable for implementation in hardware and adaptations are needed. In this section an overview is given of the various issues that are encountered when mapping an ideal neural network model onto a compact and reliable neural network hardware implementation, like quantization, handling nonuniformities and ...

  2. Neural Network Adaptations to Hardware Implementations

    OpenAIRE

    Moerland, Perry; Fiesler, Emile; Beale, R

    1997-01-01

    In order to take advantage of the massive parallelism offered by artificial neural networks, hardware implementations are essential. However, most standard neural network models are not very suitable for implementation in hardware and adaptations are needed. In this section an overview is given of the various issues that are encountered when mapping an ideal neural network model onto a compact and reliable neural network hardware implementation, like quantization, handling nonuniformities and...

  3. Building a Chaotic Proved Neural Network

    CERN Document Server

    Bahi, Jacques M; Salomon, Michel

    2011-01-01

    Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit chaotic behavior.

  4. Neural Network based Consumption Forecasting

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    2016-01-01

    This paper describes a Neural Network based method for consumption forecasting. This work has been financed by the ENCOURAGE project. The aim of the ENCOURAGE project is to develop embedded intelligence and integration technologies that will directly optimize energy use in buildings and enable...

  5. Artificial neural networks in medicine

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aids, biochemical analysis, medical image analysis and drug development.

  6. Medical Imaging with Neural Networks

    International Nuclear Information System (INIS)

    The objective of this paper is to provide an overview of the recent developments in the use of artificial neural networks in medical imaging. The areas of medical imaging that are covered include: ultrasound, magnetic resonance, nuclear medicine and radiology (including computerized tomography). (authors)

  7. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.;

    2000-01-01

    A web-based software model (http://fuzzy.iau.dtu.dk/aphasia.nsf) was developed as an example for classification of aphasia using neural networks. Two multilayer perceptrons were used to classify the type of aphasia (Broca, Wernicke, anomic, global) according to the results in some subtests...

  8. Model Of Neural Network With Creative Dynamics

    Science.gov (United States)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  9. Analysis of Neural Networks through Base Functions

    NARCIS (Netherlands)

    Zwaag, van der B.J.; Slump, C.H.; Spaanenburg, L.

    2002-01-01

    Problem statement. Despite their success story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  10. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalman predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  11. Neural Networks and Photometric Redshifts

    CERN Document Server

    Tagliaferri, R; Andreon, S; Capozziello, S; Donalek, C; Giordano, G; Tagliaferri, Roberto; Longo, Giuseppe; Andreon, Stefano; Capozziello, Salvatore; Donalek, Ciro; Giordano, Gerardo

    2002-01-01

    We present a neural network based approach to the determination of photometric redshifts. The method was tested on the Sloan Digital Sky Survey Early Data Release (SDSS-EDR), reaching an accuracy comparable to and, in some cases, better than SED template fitting techniques. Different neural network architectures have been tested and the combination of a Multi Layer Perceptron with 1 hidden layer (22 neurons) operated in a Bayesian framework, with a Self Organizing Map used to estimate the accuracy of the results, turned out to be the most effective. In the best experiment, the implemented network reached an accuracy of 0.020 (interquartile error) in the range 0

  12. Photon spectrometry utilizing neural networks

    International Nuclear Information System (INIS)

    Having in mind the time spent on the uneventful work of characterizing the radiation beams used in an ionizing radiation metrology laboratory, the Metrology Service of the Centro Regional de Ciencias Nucleares do Nordeste - CRCN-NE verified the applicability of artificial intelligence (artificial neural networks) to perform spectrometry in photon fields. For this, a multilayer neural network was developed, as an application for the classification of energy patterns, associated with a thermoluminescent dosimetric system (TLD-700 and TLD-600). A set of dosimeters was initially exposed to various well known medium energies, between 40 keV and 1.2 MeV, coinciding with the beams determined by the ISO 4037 standard, for a dose of 10 mSv in the quantity Hp(10), on a chest phantom (ISO slab phantom), with the purpose of generating a set of training data for the neural network. Subsequently, a new set of dosimeters irradiated at unknown energies was presented to the network in order to test the method. The methodology used in this work proved suitable for application to the classification of energy beams, achieving 100% correct classification. (authors)

  13. Fuzzy logic systems are equivalent to feedforward neural networks

    Institute of Scientific and Technical Information of China (English)

    李洪兴

    2000-01-01

    Fuzzy logic systems and feedforward neural networks are equivalent in essence. First, interpolation representations of fuzzy logic systems are introduced and several important conclusions are given. Then three important kinds of neural networks are defined, i.e. linear neural networks, rectangle wave neural networks and nonlinear neural networks. Then it is proved that nonlinear neural networks can be represented by rectangle wave neural networks. Based on the results mentioned above, the equivalence between fuzzy logic systems and feedforward neural networks is proved, which will be very useful for theoretical research or applications on fuzzy logic systems or neural networks by means of combining fuzzy logic systems with neural networks.

  14. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  15. Learning with heterogeneous neural networks

    OpenAIRE

    Belanche Muñoz, Luis Antonio

    2011-01-01

    This chapter studies a class of neuron models that computes a user-defined similarity function between inputs and weights. The neuron transfer function is formed by composition of an adapted logistic function with the quasi-linear mean of the partial input-weight similarities. The neuron model is capable of dealing directly with mixtures of continuous as well as discrete quantities, among other data types and there is provision for missing values. An artificial neural network using these n...

  16. Process Neural Networks Theory and Applications

    CERN Document Server

    He, Xingui

    2010-01-01

    "Process Neural Networks - Theory and Applications" proposes the concept and model of a process neural network for the first time, showing how it expands the mapping relationship between the input and output of traditional neural networks, and enhancing the expression capability for practical problems, with broad applicability to solving problems relating to process in practice. Some theoretical problems such as continuity, functional approximation capability, and computing capability, are strictly proved. The application methods, network construction principles, and optimization alg

  17. The LILARTI neural network system

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  18. Practical neural network recipes in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assum

  19. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  20. Neural networks and MIMD-multiprocessors

    Science.gov (United States)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.

  1. Neural-Network Computer Transforms Coordinates

    Science.gov (United States)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  2. Salience-Affected Neural Networks

    CERN Document Server

    Remmelzwaal, Leendert A; Ellis, George F R

    2010-01-01

    We present a simple neural network model which combines a locally-connected feedforward structure, as is traditionally used to model inter-neuron connectivity, with a layer of undifferentiated connections which model the diffuse projections from the human limbic system to the cortex. This new layer makes it possible to model global effects such as salience, at the same time as the local network processes task-specific or local information. This simple combination network displays interactions between salience and regular processing which correspond to known effects in the developing brain, such as enhanced learning as a result of heightened affect. The cortex biases neuronal responses to affect both learning and memory, through the use of diffuse projections from the limbic system to the cortex. Standard ANNs do not model this non-local flow of information represented by the ascending systems, which are a significant feature of the structure of the brain, and although they do allow associational learning with...

  3. Fast Algorithms for Convolutional Neural Networks

    OpenAIRE

    Lavin, Andrew; Gray, Scott

    2015-01-01

    Deep convolutional neural networks take GPU days of compute time to train on large data sets. Pedestrian detection for self driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT based convolution is fast for large filters, but state of the art convolutional neural networks use small, 3x3 filters. We ...
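
    A minimal sketch of the one-dimensional Winograd minimal-filtering idea F(2,3) that this line of work builds on: two outputs of a 3-tap filter are computed with 4 multiplications instead of 6, and the result is checked against direct convolution. This is a generic illustration, not the tiled 2-D implementation of the record.

      import numpy as np

      def winograd_f23(d, g):
          # Two outputs of a 3-tap filter g over a 4-sample tile d,
          # using 4 multiplications (Winograd minimal filtering F(2,3)).
          m1 = (d[0] - d[2]) * g[0]
          m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
          m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
          m4 = (d[1] - d[3]) * g[2]
          return np.array([m1 + m2 + m3, m2 - m3 - m4])

      def direct_f23(d, g):
          # Same two outputs computed directly (6 multiplications).
          return np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                           d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])

      rng = np.random.default_rng(0)
      d = rng.standard_normal(4)   # input tile
      g = rng.standard_normal(3)   # 3-tap filter
      assert np.allclose(winograd_f23(d, g), direct_f23(d, g))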

  4. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  5. Information Theory for Analyzing Neural Networks

    OpenAIRE

    Sørngård, Bård

    2014-01-01

    The goal of this thesis was to investigate how information theory could be used to analyze artificial neural networks. For this purpose, two problems, a classification problem and a controller problem, were considered. The classification problem was solved with a feedforward neural network trained with backpropagation; the controller problem was solved with a continuous-time recurrent neural network optimized with evolution. Results from the classification problem show that mutual information ...

  6. Sequential optimizing investing strategy with neural networks

    OpenAIRE

    Ryo Adachi; Akimichi Takemura

    2010-01-01

    In this paper we propose an investing strategy based on neural network models combined with ideas from game-theoretic probability of Shafer and Vovk. Our proposed strategy uses parameter values of a neural network with the best performance until the previous round (trading day) for deciding the investment in the current round. We compare performance of our proposed strategy with various strategies including a strategy based on supervised neural network models and show that our procedure is co...

  7. Artificial neural networks in nuclear medicine

    International Nuclear Information System (INIS)

    An analysis of the accessible literature on the diagnostic applicability of artificial neural networks in coronary artery disease and pulmonary embolism suggests that their performance is comparable to the diagnoses of experienced doctors dealing with nuclear medicine. Differences in the employed models of artificial neural networks indicate a constant search for the optimal parameters, which could guarantee the ultimate accuracy of neural network activity. The diagnostic potential of systems containing artificial neural networks proves this computational tool to be an independent and/or additional device for supporting a doctor's diagnosis of coronary artery disease and pulmonary embolism. (author)

  8. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  9. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    neural networks, J of computer aided civil and infrastructural engineering, (UK), 13, 113-120. Deo, MC and Naidu, CS (1999) Real time wave forecasting using neural networks, Ocean Engineering, 26, 191-203. Deo, MC, Gondane, DS and Kumar, VS (2002...) An application of artificial neural networks in tide-forecasting. Ocean Engineering, 29, pp 1003-1022. Mandal, S; Subba Rao and Chackraborty (2002) Hindcasting cyclonic waves using neural network. International Conference SHOT 2002, IIT Kharagpur, 18...

  10. Neural networks for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States)] [and others]

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and Germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
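
    Since the abstract notes that both network paradigms have a linear response and that an unknown spectrum is a linear superposition of known spectra, the core unmixing step can be illustrated with an ordinary least-squares fit. The sketch below uses synthetic placeholder spectra and does not reproduce the OLAM training of the record.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic "library" of reference gamma-ray spectra, one column per isotope
      # (placeholder data; a real system would use measured calibration spectra).
      n_channels, n_isotopes = 256, 3
      library = np.abs(rng.standard_normal((n_channels, n_isotopes)))

      # Unknown sample = linear superposition of the references, plus noise.
      true_mix = np.array([0.6, 0.1, 0.3])
      unknown = library @ true_mix + 0.01 * rng.standard_normal(n_channels)

      # Estimate the mixture from the whole spectrum, not just the photo-peaks.
      coeffs, *_ = np.linalg.lstsq(library, unknown, rcond=None)
      print(np.round(coeffs, 3))   # close to [0.6, 0.1, 0.3]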

  11. Neural Network Controlled Visual Saccades

    Science.gov (United States)

    Johnson, Jeffrey D.; Grogan, Timothy A.

    1989-03-01

    The paper to be presented will discuss research on a computer vision system controlled by a neural network capable of learning through classical (Pavlovian) conditioning. Through the use of unconditional stimuli (reward and punishment) the system will develop scan patterns of eye saccades necessary to differentiate and recognize members of an input set. By foveating only those portions of the input image that the system has found to be necessary for recognition the drawback of computational explosion as the size of the input image grows is avoided. The model incorporates many features found in animal vision systems, and is governed by understandable and modifiable behavior patterns similar to those reported by Pavlov in his classic study. These behavioral patterns are a result of a neuronal model, used in the network, explicitly designed to reproduce this behavior.

  12. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning for models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  13. Video Traffic Prediction Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Miloš Oravec

    2008-10-01

    Full Text Available In this paper, we consider video stream prediction for application in services like video-on-demand, videoconferencing, video broadcasting, etc. The aim is to predict the video stream for an efficient bandwidth allocation of the video signal. Efficient prediction of traffic generated by multimedia sources is an important part of traffic and congestion control procedures at the network edges. As a tool for the prediction, we use neural networks: multilayer perceptron (MLP), radial basis function (RBF) networks and backpropagation through time (BPTT) neural networks. At first, we briefly introduce the theoretical background of neural networks, the prediction methods and the difference between them. We also propose video time-series processing using moving averages. Simulation results for each type of neural network together with final comparisons are presented. For comparison purposes, conventional (non-neural) prediction is also included. The purpose of our work is to construct suitable neural networks for variable bit rate video prediction and evaluate them. We use video traces from [1].
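
    A minimal sketch of the kind of preprocessing and windowing the abstract describes: smoothing a synthetic bit-rate series with a moving average and building input/target pairs for one-step-ahead prediction. The network training itself (MLP, RBF or BPTT) is omitted, and all parameter values below are illustrative assumptions.

      import numpy as np

      def moving_average(x, window):
          # Simple moving average used to smooth the video bit-rate series.
          kernel = np.ones(window) / window
          return np.convolve(x, kernel, mode="valid")

      def make_windows(series, n_lags):
          # Turn a 1-D series into (inputs, targets) for one-step-ahead prediction.
          X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
          y = series[n_lags:]
          return X, y

      # Synthetic variable-bit-rate trace (placeholder for a real video trace).
      rng = np.random.default_rng(0)
      trace = 5.0 + np.sin(np.arange(500) / 10.0) + 0.3 * rng.standard_normal(500)

      smoothed = moving_average(trace, window=5)
      X, y = make_windows(smoothed, n_lags=8)
      print(X.shape, y.shape)   # (488, 8) (488,)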

  14. Neural Networks for Emotion Classification

    CERN Document Server

    Sun, Yafei

    2011-01-01

    It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. This thesis describes a neural network-based approach for emotion classification. We learn a classifier that can recognize six basic emotions with an average accuracy of 77% over the Cohn-Kanade database. The novelty of this work is that instead of empirically selecting the parameters of the neural network, i.e. the learning rate, activation function parameter, momentum number, the number of nodes in one layer, etc., we developed a strategy that can automatically select a comparatively better combination of these parameters. We also introduce another way to perform back propagation. Instead of using the partial derivatives of the error function, we use an optimization algorithm, namely Powell's direction set method, to minimize the error function. We were also interested in constructing an authentic emotion database. This...
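
    The derivative-free idea mentioned above can be sketched as follows: a tiny one-hidden-layer network is trained by handing its error function to Powell's direction-set method (here via scipy.optimize) instead of running backpropagation. The toy XOR data and the network size are illustrative assumptions, not the emotion-classification setup of the record.

      import numpy as np
      from scipy.optimize import minimize

      # Toy data (XOR); the record itself works on facial-expression features.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0, 1, 1, 0], dtype=float)

      n_in, n_hid = 2, 4
      n_params = n_hid * (n_in + 1) + (n_hid + 1)   # all weights and biases

      def forward(params, X):
          W1 = params[:n_hid * n_in].reshape(n_hid, n_in)
          b1 = params[n_hid * n_in:n_hid * (n_in + 1)]
          w2 = params[n_hid * (n_in + 1):-1]
          b2 = params[-1]
          h = np.tanh(X @ W1.T + b1)
          return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

      def error(params):                             # mean squared error
          return np.mean((forward(params, X) - y) ** 2)

      rng = np.random.default_rng(0)
      res = minimize(error, rng.standard_normal(n_params), method="Powell")
      print(round(error(res.x), 4), np.round(forward(res.x, X), 2))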

  15. Artificial neural networks in neurosurgery.

    Science.gov (United States)

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali

    2015-03-01

    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the relevant published articles that focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of the key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANNs in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) use in the biomechanical assessment of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery.

  16. The Laplacian spectrum of neural networks

    Directory of Open Access Journals (Sweden)

    Siemon de Lange

    2014-01-01

    Full Text Available The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these ‘conventional’ graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network’s structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks.

  17. Optimising the topology of complex neural networks

    CERN Document Server

    Jiang, Fei; Schoenauer, Marc

    2007-01-01

    In this paper, we study instances of complex neural networks, i.e. neural networks with complex topologies. We use Self-Organizing Map neural networks whose neighbourhood relationships are defined by a complex network, to classify handwritten digits. We show that topology has a small impact on performance and robustness to neuron failures, at least at long learning times. Performance may however be increased (by almost 10%) by artificial evolution of the network topology. In our experimental conditions, the evolved networks are more random than their parents, but display a more heterogeneous degree distribution.

  18. Optimizing neural network forecast by immune algorithm

    Institute of Scientific and Technical Information of China (English)

    YANG Shu-xia; LI Xiang; LI Ning; YANG Shang-dong

    2006-01-01

    Considering multi-factor influences, a forecasting model was built. The structure of a BP neural network was designed, and an immune algorithm was applied to optimize its network structure and weights. After training on data of power demand in China from 1980 to 2005, a nonlinear network model of the relationship between power demand and the factors that influence it was obtained, and the proposed method was thus verified. Meanwhile, the results were compared to those of a neural network optimized by a genetic algorithm. The results show that this method is superior to a neural network optimized by a genetic algorithm and is an effective approach to time series forecasting.

  19. A new formulation for feedforward neural networks.

    Science.gov (United States)

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    Feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, in this paper, two training methods involving a derivative-based (a variation of backpropagation) and a derivative-free optimization algorithms are employed. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization.

  20. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.
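
    A minimal, hedged illustration of the mapping the network was trained to perform: given measurements from four equally spaced layers, recover the intercept and slope of a straight track. The sketch assumes each measurement is simply the track's transverse position at that layer (ignoring the left/right drift ambiguity), so the mapping is linear and can be fitted by least squares; the actual ETANN-based setup is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      layer_z = np.array([0.0, 1.0, 2.0, 3.0])     # layer positions along the chamber

      # Synthetic straight tracks: x(z) = intercept + slope * z, measured in 4 layers.
      n_tracks = 2000
      intercepts = rng.uniform(-1, 1, n_tracks)
      slopes = rng.uniform(-0.5, 0.5, n_tracks)
      measurements = intercepts[:, None] + slopes[:, None] * layer_z
      measurements += 0.01 * rng.standard_normal(measurements.shape)   # noise

      # Fit a linear map from the 4 measurements (plus bias) to (intercept, slope),
      # playing the role of the trained network in the record.
      A = np.hstack([measurements, np.ones((n_tracks, 1))])
      targets = np.column_stack([intercepts, slopes])
      W, *_ = np.linalg.lstsq(A, targets, rcond=None)

      test_track = np.array([0.2 + 0.1 * z for z in layer_z])
      print(np.hstack([test_track, 1.0]) @ W)       # approximately [0.2, 0.1]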

  1. Drift chamber tracking with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  2. Coherence resonance in bursting neural networks

    Science.gov (United States)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  3. Coherence resonance in bursting neural networks.

    Science.gov (United States)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal-a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  4. Radiation Behavior of Analog Neural Network Chip

    Science.gov (United States)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1) 1-b, launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  5. Adaptive Neurons For Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.

  6. Self-organization of neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Clark, J.W.; Winston, J.V.; Rafelski, J.

    1984-05-14

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (brainwashing) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena. 18 references, 2 figures.

  7. Self-organization of neural networks

    Science.gov (United States)

    Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann

    1984-05-01

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (“brainwashing”) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.

  8. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  9. Secure Key Exchange using Neural Network

    OpenAIRE

    Vineeta Soni

    2014-01-01

    Any cryptographic system is used to exchange confidential information securely over a public channel without any leakage of information to unauthorized users. Neural networks can be used to generate a common secret key, because the processes involved in a cryptographic system require large computational power and are very complex. Moreover, Diffie-Hellman key exchange suffers from the man-in-the-middle attack. To overcome this problem, neural networks can be used. Two neural netwo...
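
    One well-known concrete instance of neural key exchange (not necessarily the exact scheme of the record) is the mutual synchronization of two tree parity machines: both parties receive the same public random inputs, exchange only their single-bit outputs, and update their weights only on rounds where the outputs agree, until the weight matrices coincide and can serve as the shared key. A hedged sketch with illustrative parameters:

      import numpy as np

      K, N, L = 3, 4, 3   # hidden units, inputs per hidden unit, weight bound

      class TreeParityMachine:
          def __init__(self, rng):
              self.w = rng.integers(-L, L + 1, size=(K, N))

          def output(self, x):
              self.sigma = np.sign(np.sum(self.w * x, axis=1))
              self.sigma[self.sigma == 0] = -1          # break ties
              return int(np.prod(self.sigma))

          def update(self, x, tau):
              # Hebbian rule: only hidden units that agree with the output move.
              for k in range(K):
                  if self.sigma[k] == tau:
                      self.w[k] = np.clip(self.w[k] + x[k] * tau, -L, L)

      rng = np.random.default_rng(42)
      alice, bob = TreeParityMachine(rng), TreeParityMachine(rng)

      rounds = 0
      while not np.array_equal(alice.w, bob.w):
          x = rng.choice([-1, 1], size=(K, N))          # public random input
          ta, tb = alice.output(x), bob.output(x)
          if ta == tb:                                  # update only on agreement
              alice.update(x, ta)
              bob.update(x, tb)
          rounds += 1

      print("synchronized after", rounds, "rounds; shared key:", alice.w.flatten())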

  10. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  11. Rule Extraction using Artificial Neural Networks

    CERN Document Server

    Kamruzzaman, S M

    2010-01-01

    Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can gain a better understanding of the solution. This paper presents an efficient algorithm to extract rules from artificial neural networks. We use a two-phase training algorithm for backpropagation learning. In the first phase, the number of hidden nodes of the network is determined automatically in a constructive fashion by adding nodes one after another based on the performance of the network on training data. In the second phase, the number of relevant input units of the network is determined using a pruning algorithm. The ...
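
    The constructive first phase described above can be sketched as follows: hidden nodes are added one at a time and the size that last improved held-out accuracy is kept. The data set, scikit-learn estimator and thresholds below are illustrative stand-ins; the pruning phase and the rule-extraction step of the record are omitted.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      # Toy data standing in for the business-application data of the record.
      X, y = make_classification(n_samples=600, n_features=10, random_state=0)
      X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

      best_acc, best_h = 0.0, 1
      for h in range(1, 16):                      # grow the hidden layer node by node
          clf = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
          clf.fit(X_tr, y_tr)
          acc = clf.score(X_va, y_va)
          if acc > best_acc + 1e-3:               # remember this size only if it improves
              best_acc, best_h = acc, h

      print("selected hidden nodes:", best_h, "validation accuracy:", round(best_acc, 3))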

  12. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  13. Wavelet Neural Networks for Adaptive Equalization

    Institute of Scientific and Technical Information of China (English)

    JIANG Minghu; DENG Beixing; GIELEN Georges; ZHANG Bo

    2003-01-01

    A structure based on Wavelet neural networks (WNNs) is proposed for nonlinear channel equalization in a digital communication system. The construction algorithm of the Minimum error probability (MEP) is presented and applied as a performance criterion to update the parameter matrix of the wavelet networks. Our experimental results show that the proposed wavelet-network-based equalizer can significantly improve the neural modeling accuracy, performs quite well in compensating for the nonlinear distortion introduced by the channel, and outperforms conventional neural networks in terms of signal-to-noise ratio and channel non-linearity.

  14. Sunspot prediction using neural networks

    Science.gov (United States)

    Villarreal, James; Baffes, Paul

    1990-01-01

    The earliest systematic observation of sunspot activity is attributed to the Chinese in 1382, during the Ming Dynasty (1368 to 1644), when spots on the sun were noticed by looking at the sun through thick forest-fire smoke. Not until after the 18th century did sunspot levels become more than a source of wonderment and curiosity. Since 1834 reliable sunspot data has been collected by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Naval Observatory. Recently, considerable effort has been placed on the study of the effects of sunspots on the ecosystem and the space environment. The efforts of the Artificial Intelligence Section of the Mission Planning and Analysis Division of the Johnson Space Center involving the prediction of sunspot activity using neural network technologies are described.

  15. Subspace learning of neural networks

    CERN Document Server

    Cheng Lv, Jian; Zhou, Jiliu

    2010-01-01

    Preface; Chapter 1. Introduction: 1.1 Introduction; 1.1.1 Linear Neural Networks; 1.1.2 Subspace Learning; 1.2 Subspace Learning Algorithms; 1.2.1 PCA Learning Algorithms; 1.2.2 MCA Learning Algorithms; 1.2.3 ICA Learning Algorithms; 1.3 Methods for Convergence Analysis; 1.3.1 SDT Method; 1.3.2 DCT Method; 1.3.3 DDT Method; 1.4 Block Algorithms; 1.5 Simulation Data Set and Notation; 1.6 Conclusions; Chapter 2. PCA Learning Algorithms with Constant Learning Rates: 2.1 Oja's PCA Learning Algorithms; 2.1.1 The Algorithms; 2.1.2 Convergence Issue; 2.2 Invariant Sets; 2.2.1 Properties of Invariant Sets; 2.2.2 Conditions for Invariant Sets; 2.

  16. Introduction to artificial neural networks.

    Science.gov (United States)

    Grossi, Enzo; Buscema, Massimo

    2007-12-01

    The coupling of computer science and theoretical bases such as nonlinear dynamics and chaos theory allows the creation of 'intelligent' agents, such as artificial neural networks (ANNs), able to adapt themselves dynamically to problems of high complexity. ANNs are able to reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on individual basis and not as average trends. These tools can offer specific advantages with respect to classical statistical techniques. This article is designed to acquaint gastroenterologists with concepts and paradigms related to ANNs. The family of ANNs, when appropriately selected and used, permits the maximization of what can be derived from available data and from complex, dynamic, and multidimensional phenomena, which are often poorly predictable in the traditional 'cause and effect' philosophy. PMID:17998827

  17. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    With the very high event rates projected for experiments at the SSC and LHC, it is important to investigate new approaches to on-line pattern recognition. The use of neural networks for pattern recognition in high energy physics detectors has been an area of very active research. The authors discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  18. Exponential Stability for Delayed Cellular Neural Networks

    Institute of Scientific and Technical Information of China (English)

    YANG Jin-xiang; ZHONG Shou-ming; YAN Ke-yu

    2005-01-01

    The exponential stability of delayed cellular neural networks (DCNNs) is investigated. By dividing the network state variables into parts according to the characteristics of the neural networks, some new sufficient conditions for exponential stability are derived by constructing a Liapunov function. It is shown that the conditions differ from previous ones. The new conditions, which are associated with some initial value, are represented by blocks of the interconnection matrix.

  19. Learning Processes of Layered Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; Fujiki, Nahomi M.

    1995-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  20. Research of The Deeper Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao You Rong

    2016-01-01

    Neural networks (NNs) have powerful computational abilities and can be used in a variety of applications; however, training these networks is still a difficult problem. Many neural models with different network structures have been constructed. In this report, a deeper neural network (DNN) architecture is proposed. The training algorithm of the deeper neural network involves searching for the global optimal point on the actual error surface. Before the training algorithm is designed, the error surface of the deeper neural network is analyzed from simple to complicated cases, and the features of the error surface are obtained. Based on these characteristics, the initialization method and training algorithm of DNNs are designed. For the initialization, a block-uniform design method is proposed, which separates the error surface into blocks and finds the optimal block using the uniform design method. For the training algorithm, an improved gradient-descent method is proposed, which adds a penalty term to the cost function of the standard gradient-descent method. This algorithm gives the network a strong approximating ability and keeps the network state stable. All of these improve the practicality of the neural network.
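
    The record's training algorithm adds a penalty term to the gradient-descent cost function, but the exact penalty is not specified. The sketch below therefore uses an ordinary L2 weight penalty on a plain linear model as a stand-in, purely to show where such a term enters the cost and its gradient; both choices are assumptions, not the paper's method.

```python
import numpy as np

def penalized_cost(w, X, y, lam):
    """Squared-error cost plus a penalty term on the weights.
    The L2 form of the penalty is an assumption for illustration."""
    residual = X @ w - y
    return 0.5 * np.mean(residual ** 2) + 0.5 * lam * np.sum(w ** 2)

def gradient_step(w, X, y, lam, lr):
    residual = X @ w - y
    grad = X.T @ residual / len(y) + lam * w   # gradient of cost + penalty
    return w - lr * grad

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 200)

w = rng.normal(0, 0.1, 5)
for _ in range(500):
    w = gradient_step(w, X, y, lam=0.01, lr=0.1)
print("final penalized cost:", float(penalized_cost(w, X, y, 0.01)))
```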

  1. Coronary Artery Diagnosis Aided by Neural Network

    Science.gov (United States)

    Stefko, Kamil

    2007-01-01

    Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. Application of an optimised feed-forward multi-layer back-propagation neural network (MLBP) for detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from traditional ECG exercise tests confirmed by coronary arteriography results. Each record of the training database included a description of the state of a patient, providing input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after effort (48 floating point values) formed the main component of the input data for the neural network. Coronary arteriography results (verifying the existence or absence of more than 50% stenosis of the particular coronary vessels) were used as the correct neural network training output pattern. More than 96% of cases were correctly recognised by the specially optimised and thoroughly verified neural network. The leave-one-out method was used for neural network verification, so all 580 data records could be used for training as well as for verification of the neural network.
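
    The record verifies the network with the leave-one-out method so that all 580 records serve for both training and testing. The sketch below shows the generic leave-one-out loop; the nearest-neighbour classifier is only a stand-in for the paper's MLBP network, and the synthetic two-class data are invented for the example.

```python
import numpy as np

def leave_one_out_accuracy(X, y, fit_predict):
    """Leave-one-out: train on all records but one, test on the held-out record."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        hits += fit_predict(X[mask], y[mask], X[i]) == y[i]
    return hits / len(y)

def nearest_neighbour(X_train, y_train, x):
    # Stand-in classifier, not the paper's MLBP network.
    return y_train[np.argmin(np.sum((X_train - x) ** 2, axis=1))]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(2, 1, (40, 4))])
y = np.array([0] * 40 + [1] * 40)
print("leave-one-out accuracy:", leave_one_out_accuracy(X, y, nearest_neighbour))
```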

  2. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  3. Mobility Prediction in Wireless Ad Hoc Networks using Neural Networks

    CERN Document Server

    Kaaniche, Heni

    2010-01-01

    Mobility prediction allows estimating the stability of paths in a mobile wireless Ad Hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural network based method for mobility prediction in Ad Hoc networks. This method consists of a multi-layer and recurrent neural network using back propagation through time algorithm for training.

  4. Neural network for sonogram gap filling

    DEFF Research Database (Denmark)

    Klebæk, Henrik; Jensen, Jørgen Arendt; Hansen, Lars Kai

    1995-01-01

    In duplex imaging both an anatomical B-mode image and a sonogram are acquired, and the time for data acquisition is divided between the two images. This gives problems when rapid B-mode image display is needed, since there is not time for measuring the velocity data. Gaps then appear ... a neural network for predicting the mean frequency of the velocity signal and its variance. The neural network then predicts the evolution of the mean and variance in the gaps, and the sonogram and audio signal are reconstructed from these. The technique is applied on in-vivo data from the carotid artery. The neural network is trained on part of the data and the network is pruned by the optimal brain damage procedure in order to reduce the number of parameters in the network, and thereby reduce the risk of overfitting. The neural predictor is compared to using a linear filter for the mean and variance time ...

  5. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the jet data from miniaodsim (ak4 chs). The jet data was not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the jet data.

  6. Multispectral-image fusion using neural networks

    Science.gov (United States)

    Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.

    1990-08-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulation results and a description of the prototype system are presented.

  7. Multispectral image fusion using neural networks

    Science.gov (United States)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  8. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks: (1) task-independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks (HNNs) with much fewer parameters than conventional HMMs and other hybrids can obtain comparable performance, and for the broad class task it is illustrated how the HNN can be applied as a purely transition-based system, where acoustic context dependent transition probabilities are estimated by neural networks.

  9. Neural network based temporal video segmentation.

    Science.gov (United States)

    Cao, X; Suganthan, P N

    2002-01-01

    The organization of video information in video databases requires automatic temporal segmentation with minimal user interaction. As neural networks are capable of learning the characteristics of various video segments and clustering them accordingly, in this paper, a neural network based technique is developed to segment the video sequence into shots automatically and with a minimum number of user-defined parameters. We propose to employ growing neural gas (GNG) networks and integrate multiple frame difference features to efficiently detect shot boundaries in the video. Experimental results are presented to illustrate the good performance of the proposed scheme on real video sequences. PMID:12370954

  10. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
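
    The record's kernel-based construction is not reproduced here; a common, simpler way to make a network target a conditional quantile is to minimise the tilted absolute (pinball) loss, whose minimiser is the requested quantile. The sketch below fits a single linear unit with that loss on synthetic heteroscedastic data; the loss choice, data and learning settings are assumptions for illustration only.

```python
import numpy as np

def pinball_loss(residual, tau):
    """Tilted absolute loss; its minimiser is the tau-quantile of the residuals."""
    return np.mean(np.maximum(tau * residual, (tau - 1) * residual))

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 500)
y = 2 * x + rng.normal(0, 0.2 + 0.5 * x)     # heteroscedastic noise

# Single linear unit trained by (sub)gradient descent on the pinball loss;
# a hidden layer could be trained the same way.
tau, lr = 0.9, 0.05
w, b = 0.0, 0.0
for _ in range(3000):
    residual = y - (w * x + b)
    g = np.where(residual > 0, -tau, 1 - tau)  # d(loss)/d(prediction)
    w -= lr * np.mean(g * x)
    b -= lr * np.mean(g)

print(f"estimated 0.9-quantile line: y = {w:.2f} x + {b:.2f}")
print("pinball loss:", float(pinball_loss(y - (w * x + b), tau)))
```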

  11. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments.

  12. Neural Network for Estimating Conditional Distribution

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Kulczycki, P.

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given.

  13. Diagnosis method utilizing neural networks

    International Nuclear Information System (INIS)

    Studies have been made on the technique of neural networks, which will be used to identify a cause of a small anomalous state in the reactor coolant system of the ATR (Advanced Thermal Reactor). Three phases of analysis were carried out in this study. First, simulation for 100 seconds was made to determine how the plant parameters respond after the occurrence of a transient decrease in reactivity, flow rate and temperature of feed water, and an increase in the steam flow rate and steam pressure, which would produce a decrease of water level in a steam drum of the ATR. Next, the simulation data were analysed utilizing an autoregressive model. From this analysis, a total of 36 coherency functions up to 0.5 Hz in each transient were computed among nine important and detectable plant parameters: neutron flux, flow rate of coolant, steam or feed water, water level in the steam drum, pressure and opening area of the control valve in a steam pipe, feed water temperature and electrical power. Last, learning of neural networks composed of 96 input, 4-9 hidden and 5 output layer units was done by use of the generalized delta rule, namely a back-propagation algorithm. These convergent computations were continued until the difference between the desired outputs (1 for the direct cause, 0 for the four other ones) and the actual outputs reached less than 10%. (1) Coherency functions were not governed by the decreasing rate of reactivity in the range of 0.41x10^-2 dollar/s to 1.62x10^-2 dollar/s, by the decreasing depth of the feed water temperature in the range of 3 deg C to 10 deg C, or by a change of 10% or less in the three other causes. Change in coherency functions depended only on the type of cause. (2) The direct cause could be discriminated from the other four with an output level of 0.94±0.01. A maximum of 0.06 output height was found among the other four causes. (3) Calculation load, which is represented as the product of learning times and the number of hidden units, did not depend on the numbers

  14. A COMPREHENSIVE EVOLUTIONARY APPROACH FOR NEURAL NETWORK ENSEMBLES AUTOMATIC DESIGN

    OpenAIRE

    Bukhtoyarov, V.; Semenkin, E.

    2010-01-01

    A new comprehensive approach for neural network ensemble design is proposed. It consists of a method for the automatic design of neural networks and a method for the automatic formation of an ensemble solution on the basis of the separate neural networks' solutions. It is demonstrated that the proposed approach is no less effective than a number of other approaches for neural network ensemble design.

  15. Neural networks for NOx-emission

    International Nuclear Information System (INIS)

    The government wants to restrict nitrogen oxide emissions. However, continuous measurement of these emissions is expensive and maintenance-sensitive. A prediction model based on the use of neural networks might be a reliable and efficient alternative

  16. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglova

    2008-11-01

    This paper deals with path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural-networks-based technique. Our method for constructing a collision-free path for the robot moving among obstacles is based on two neural networks. The first neural network is used to determine the "free" space using ultrasound range finder data. The second neural network "finds" a safe direction for the next section of the robot's path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed techniques are presented.

  17. Neural Network Based 3D Surface Reconstruction

    Directory of Open Access Journals (Sweden)

    Vincy Joseph

    2009-11-01

    This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach.

  18. TIME SERIES FORECASTING USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2013-05-01

    Recent studies have shown the classification and prediction power of neural networks. It has been demonstrated that an NN can approximate any continuous function. Neural networks have been used successfully for forecasting financial data series. The classical methods used for time series prediction, like Box-Jenkins or ARIMA, assume that there is a linear relationship between inputs and outputs. Neural networks have the advantage that they can approximate nonlinear functions. In this paper we compared the performance of different feed-forward and recurrent neural networks and training algorithms for predicting the EUR/RON and USD/RON exchange rates. We used data series with daily exchange rates starting from 2005 until 2013.
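
    The exchange-rate series themselves are not available here, so the sketch below only illustrates the data preparation such a comparison relies on: turning daily rates into lag/target pairs, splitting chronologically rather than randomly, and computing the naive "yesterday's rate" baseline that any candidate network should beat. The window length, split ratio and synthetic series are assumptions.

```python
import numpy as np

def lagged_dataset(rates, n_lags):
    """Turn a daily rate series into (previous n_lags days -> next day) pairs."""
    X = np.stack([rates[i:i + n_lags] for i in range(len(rates) - n_lags)])
    return X, rates[n_lags:]

rng = np.random.default_rng(4)
rates = 4.4 + np.cumsum(rng.normal(0, 0.01, 2000))   # synthetic stand-in for EUR/RON

X, y = lagged_dataset(rates, n_lags=5)
split = int(0.8 * len(y))                            # chronological split, no shuffling
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]

# Naive baseline: predict that tomorrow's rate equals today's rate.
naive_mae = np.mean(np.abs(X_test[:, -1] - y_test))
print("test samples:", len(y_test), " naive MAE:", float(naive_mae))
```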

  19. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied. A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  20. SAR ATR Based on Convolutional Neural Network

    OpenAIRE

    Tian Zhuangzhuang; Zhan Ronghui; Hu Jiemin; Zhang Jun

    2016-01-01

    This study presents a new method of Synthetic Aperture Radar (SAR) image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve the network's ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition...

  1. Applications of Pulse-Coupled Neural Networks

    CERN Document Server

    Ma, Yide; Wang, Zhaobin

    2011-01-01

    "Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Sci

  2. Neural networks, D0, and the SSC

    International Nuclear Information System (INIS)

    We outline several exploratory studies involving neural network simulations applied to pattern recognition in high energy physics. We describe the D0 data acquisition system and a natural means by which algorithms derived from neural network techniques may be incorporated into recently developed hardware associated with the D0 MicroVAX farm nodes. Such applications to the event filtering needed by SSC detectors look interesting. 10 refs., 11 figs

  3. An Introduction to Convolutional Neural Networks

    OpenAIRE

    O'Shea, Keiron; Nash, Ryan

    2015-01-01

    The field of machine learning has taken a dramatic twist in recent times, with the rise of the Artificial Neural Network (ANN). These biologically inspired computational models are able to far exceed the performance of previous forms of artificial intelligence in common machine learning tasks. One of the most impressive forms of ANN architecture is that of the Convolutional Neural Network (CNN). CNNs are primarily used to solve difficult image-driven pattern recognition tasks and with their p...

  4. Parameterizing Stellar Spectra Using Deep Neural Networks

    OpenAIRE

    Li, Xiangru; Pan, Ruyang

    2016-01-01

    This work investigates the spectrum parameterization problem using deep neural networks (DNNs). The proposed scheme consists of the following procedures: first, the configuration of a DNN is initialized using a series of autoencoder neural networks; second, the DNN is fine-tuned using a gradient descent scheme; third, stellar parameters ($T_{eff}$, log$~g$, and [Fe/H]) are estimated using the obtained DNN. This scheme was evaluated on both real spectra from SDSS/SEGUE and synthetic spectra ca...

  5. Hindcasting cyclonic waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Chakravarty, N.V.

    ... for computing extreme wave conditions or design wave statistics. As far as the Indian seas are concerned, recorded wave data are available for short periods at some places along the coasts. Estimation of wave parameters by numerical wave forecasting schemes ... Some applications of neural networks (NN) in wave forecasting are carried out by Deo and Naidu (1999) and Prabaharan (2001). Londhe and Deo (2001) have worked on wave propagation using neural networks. This paper describes hindcasting of wave...

  6. Density functional and neural network analysis

    DEFF Research Database (Denmark)

    Jalkanen, K. J.; Suhai, S.; Bohr, Henrik

    1997-01-01

    ... dichroism (VCD) intensities. The large changes due to hydration in the structures, in the relative stability of conformers, and in the VA and VCD spectra observed experimentally are reproduced by the DFT calculations. Furthermore, a neural network was constructed for reproducing the inverse scattering data (inferring the structural coordinates from spectroscopic data) that the DFT method could produce. Finally, the neural network performances are used to monitor a sensitivity or dependence analysis of the importance of secondary structures.

  7. Pattern Recognition Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Santaji Ghorpade

    2010-12-01

    Face recognition has been identified as one of the attractive research areas and it has drawn the attention of many researchers due to its varied applications such as security systems, medical systems, entertainment, etc. Face recognition is the preferred mode of identification by humans: it is natural, robust and non-intrusive. A wide variety of systems requires reliable personal recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that the rendered services are accessed only by a legitimate user and no one else. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones, and ATMs. In the absence of robust personal recognition schemes, these systems are vulnerable to the wiles of an impostor. In this paper we have developed and illustrated a recognition system for human faces using a novel Kohonen self-organizing map (SOM), or Self-Organizing Feature Map (SOFM), based retrieval system. The SOM has a good feature-extracting property due to its topological ordering. The facial analytics results for the 400 images of the AT&T database show that the face recognition rate using the SOM neural network algorithm is 85.5% for 40 persons.
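
    The record's retrieval system is built on a Kohonen self-organizing map; a minimal SOM training loop is sketched below on toy feature vectors. The 8x8 grid, the shrinking learning-rate and neighbourhood schedules, and the random data are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.random((500, 3))           # toy feature vectors (real use: face image features)

grid = 8                              # assumed 8x8 map
weights = rng.random((grid, grid, 3))
coords = np.dstack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"))

n_iter = 2000
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    # Best-matching unit: the node whose weight vector is closest to the input.
    d = np.sum((weights - x) ** 2, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Shrinking learning rate and neighbourhood radius.
    lr = 0.5 * (1 - t / n_iter)
    sigma = max(grid / 2 * (1 - t / n_iter), 0.5)
    dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
    h = np.exp(-dist2 / (2 * sigma ** 2))[:, :, None]
    # Pull the BMU and its neighbours toward the input (topological ordering).
    weights += lr * h * (x - weights)

print("trained SOM weight grid shape:", weights.shape)
```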

  8. Neural network segmentation of magnetic resonance images

    Science.gov (United States)

    Frederick, Blaise

    1990-07-01

    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network; by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. A neural network classifier for image segmentation was implemented on a Sun 4/60 and was tested on the task of classifying tissues of canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier. The classifier's performance was evaluated as a function of network size, number of network layers and length of training. A single-layer neural network performed quite well at

  9. Deep Learning with Darwin: Evolutionary Synthesis of Deep Neural Networks

    OpenAIRE

    Shafiee, Mohammad Javad; Mishra, Akshaya; Wong, Alexander

    2016-01-01

    Taking inspiration from biological evolution, we explore the idea of "Can deep neural networks evolve naturally over successive generations into highly efficient deep neural networks?" by introducing the notion of synthesizing new highly efficient, yet powerful deep neural networks over successive generations via an evolutionary process from ancestor deep neural networks. The architectural traits of ancestor deep neural networks are encoded using synaptic probability models, which can be view...

  10. Hopfield neural network based on ant system

    Institute of Scientific and Technical Information of China (English)

    洪炳镕; 金飞虎; 郭琦

    2004-01-01

    The Hopfield neural network is a single-layer, fully connected recurrent neural network. The Hopfield network requires some control parameters to be carefully selected, otherwise the network is apt to converge to a local minimum. An ant system is a nature-inspired metaheuristic algorithm. It has been applied to several combinatorial optimization problems such as the Traveling Salesman Problem, scheduling problems, etc. This paper shows that an ant system may be used to tune the network control parameters by a group of cooperating ants. The major advantage of this approach is that the network parameters are adjusted automatically, avoiding a blind search for the set of control parameters. This network was tested on two TSP problems, with 5 cities and 10 cities. The results have shown an obvious improvement.
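
    The ant-system tuning itself is not reproduced here; the sketch below only illustrates the underlying Hopfield dynamics that such control parameters act on: Hebbian storage of bipolar patterns, asynchronous threshold updates, and the energy function that these updates never increase. Pattern count, network size and noise level are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(6)

# Store two random bipolar patterns with the Hebb rule (zero diagonal).
patterns = rng.choice([-1, 1], size=(2, 25))
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def energy(s):
    """Hopfield energy; asynchronous updates never increase it."""
    return -0.5 * s @ W @ s

# Start from a noisy copy of the first pattern and update neurons one at a time.
state = patterns[0].copy()
flip = rng.choice(25, size=6, replace=False)
state[flip] *= -1

for _ in range(5):
    for i in rng.permutation(25):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered pattern 0:", np.array_equal(state, patterns[0]))
print("final energy:", float(energy(state)))
```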

  11. Fastest learning in small world neural networks

    OpenAIRE

    Simard, D.; Nadeau, L; Kröger, H.

    2004-01-01

    We investigate supervised learning in neural networks. We consider a multi-layered feed-forward network with back propagation. We find that the network of small-world connectivity reduces the learning error and learning time when compared to the networks of regular or random connectivity. Our study has potential applications in the domain of data-mining, image processing, speech recognition, and pattern recognition.

  12. Option Pricing Using Bayesian Neural Networks

    CERN Document Server

    Pires, Michael Maio

    2007-01-01

    Options have provided a field of much study because of the complexity involved in pricing them. The Black-Scholes equations were developed to price options, but they are only valid for European-style options. There is added complexity when trying to price American-style options, and this is why the use of neural networks has been proposed. Neural networks are able to predict outcomes based on past data. The inputs to the networks here are stock volatility, strike price and time to maturity, with the output of the network being the call option price. Two Bayesian neural network techniques are used: Automatic Relevance Determination (for Gaussian approximation) and a hybrid Monte Carlo method, both used with multi-layer perceptrons.

  13. Application of Partially Connected Neural Network

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper focuses mainly on the application of a Partially Connected Backpropagation Neural Network (PCBP) instead of the typical Fully Connected Neural Network (FCBP). The initial neural network is fully connected; after training with sample data using cross-entropy as the error function, a clustering method is employed to cluster the weights from the inputs to the hidden layer and from the hidden to the output layer, and connections that are relatively unnecessary are deleted, so that the initial network becomes a PCBP network. The PCBP can then be used for prediction or data mining by training it with data that comes from a database. At the end of this paper, several experiments are conducted to illustrate the effects of PCBP using the Iris data set.
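
    The record prunes "relatively unnecessary" connections after training by clustering the weights; the exact clustering criterion is not given, so the sketch below substitutes a simple magnitude threshold and keeps a binary mask so that pruned connections stay at zero during any further training. Both the criterion and the random weight matrices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Fully connected weight matrices of a small trained network (random stand-ins).
W_in_hidden = rng.normal(0, 1, (4, 8))
W_hidden_out = rng.normal(0, 1, (8, 3))

def prune(W, keep_fraction):
    """Zero out the smallest-magnitude weights and return the mask that
    keeps them at zero during any further training."""
    threshold = np.quantile(np.abs(W), 1 - keep_fraction)
    mask = (np.abs(W) >= threshold).astype(float)
    return W * mask, mask

W_in_hidden, mask1 = prune(W_in_hidden, keep_fraction=0.5)
W_hidden_out, mask2 = prune(W_hidden_out, keep_fraction=0.5)
print("connections kept:", int(mask1.sum() + mask2.sum()),
      "of", mask1.size + mask2.size)

# During further training the gradients would be multiplied by the same masks,
# e.g. W_in_hidden -= lr * (grad * mask1), so the network stays partially connected.
```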

  14. Artificial astrocytes improve neural network performance.

    Directory of Open Access Journals (Sweden)

    Ana B Porto-Pazos

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  15. BAM annual report 1989

    International Nuclear Information System (INIS)

    The volume reports on the activities of the Federal Institute of Materials Research and Testing (BAM) and its departments of metals and metal constructions, building, organic materials, chemical safety technology, special fields of materials testing, and techniques which are not dependent on the type of material. Research activities of a more general nature are described, and the work of the project groups for computerized tomography, high-quality ceramics, and the database on hazardous materials (DGG) is described. (orig./MM) With 116 figs., 12 tabs., statistical annex

  16. Pattern Classification using Simplified Neural Networks

    CERN Document Server

    Kamruzzaman, S M

    2010-01-01

    In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. This paper presents an approach for classifying patterns from simplified NNs. Although the predictive accuracy of ANNs is often higher than that of other methods or human experts, it is often said that ANNs are practically "black boxes", due to the complexity of the networks. In this paper, we have attempted to open up these black boxes by reducing the complexity of the network. The factor that makes this possible is the pruning algorithm. By eliminating redundant weights, redundant input and hidden units are identified and removed from the network. Using the pruning algorithm, we have been able to prune networks such that only a few input units, hidden units and connections are left, yielding a simplified network. Experimental results on several benchmark problems show the effectiveness of the proposed approach, with good generalization ability.

  17. Weight discretization paradigm for optical neural networks

    Science.gov (United States)

    Fiesler, Emile; Choudry, Amar; Caulfield, H. John

    1990-08-01

    Neural networks are a primary candidate architecture for optical computing. One of the major problems in using neural networks for optical computers is that the information holders, the interconnection strengths (or weights), are normally real-valued (continuous), whereas optics (light) is only capable of representing a few distinguishable intensity levels (discrete). In this paper a weight discretization paradigm is presented for back(ward error) propagation neural networks which can work with a very limited number of discretization levels. The number of interconnections in a (fully connected) neural network grows quadratically with the number of neurons in the network. Optics can handle a large number of interconnections because light beams do not interfere with each other; a vast number of light beams can therefore be used per unit of area. However, the number of different values one can represent in a light beam is very limited. A flexible, portable (machine-independent) neural network software package capable of weight discretization is presented. The development of the software and some experiments were done on personal computers. The major part of the testing, which requires a lot of computation, was done using a CRAY X-MP/24 supercomputer.
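
    As a rough illustration of working with only a few distinguishable weight levels, the sketch below snaps a weight matrix to a small set of evenly spaced values. This uniform scheme is an assumption made for illustration; the paper's actual discretization paradigm for backpropagation training is more involved.

```python
import numpy as np

def discretize(W, n_levels):
    """Snap every weight to the nearest of n_levels evenly spaced values
    between the smallest and largest weight in the matrix."""
    lo, hi = W.min(), W.max()
    levels = np.linspace(lo, hi, n_levels)
    idx = np.argmin(np.abs(W[..., None] - levels), axis=-1)
    return levels[idx]

rng = np.random.default_rng(8)
W = rng.normal(0, 1, (6, 6))
W3 = discretize(W, n_levels=3)        # e.g. only three representable intensity levels
print("distinct values after discretization:", np.unique(W3))
print("max rounding error:", float(np.max(np.abs(W - W3))))
```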

  18. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered from a mild set of assumptions. A number of applications within...

  19. Comparing artificial and biological dynamical neural networks

    Science.gov (United States)

    McAulay, Alastair D.

    2006-05-01

    Modern computers can be made more friendly and otherwise improved by making them behave more like humans. Perhaps we can learn how to do this from biology in which human brains evolved over a long period of time. Therefore, we first explain a commonly used biological neural network (BNN) model, the Wilson-Cowan neural oscillator, that has cross-coupled excitatory (positive) and inhibitory (negative) neurons. The two types of neurons are used for frequency modulation communication between neurons which provides immunity to electromagnetic interference. We then evolve, for the first time, an artificial neural network (ANN) to perform the same task. Two dynamical feed-forward artificial neural networks use cross-coupling feedback (like that in a flip-flop) to form an ANN nonlinear dynamic neural oscillator with the same equations as the Wilson-Cowan neural oscillator. Finally we show, through simulation, that the equations perform the basic neural threshold function, switching between stable zero output and a stable oscillation, that is a stable limit cycle. Optical implementation with an injected laser diode and future research are discussed.

  20. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads;

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...

  1. Electronic device aspects of neural network memories

    Science.gov (United States)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  2. Improving neural network performance on SIMD architectures

    Science.gov (United States)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods for increasing neural network performance on SIMD architectures. The use of SIMD extensions, which are available on a number of modern CPUs, is one way to speed up neural network processing. In our experiments, we use ARM NEON as an example SIMD architecture. The first method uses the half-precision float data type for matrix computations. The second method uses a fixed-point data type for the same purpose. The third method considers a vectorized implementation of the activation functions. For each method we set up a series of experiments for convolutional and fully connected networks designed for the image recognition task.
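
    The original work targets ARM NEON in C; the plain-numpy sketch below only illustrates the arithmetic behind the second method, a fixed-point data type: weights and activations are scaled to integers, multiplied in integer arithmetic, and shifted back. The Q-format with 8 fractional bits is an assumed choice.

```python
import numpy as np

FRAC_BITS = 8                      # assumed Q-format: 8 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Convert floating-point values to scaled 32-bit integers."""
    return np.round(x * SCALE).astype(np.int32)

def fixed_matmul(A_fx, B_fx):
    """Integer matrix multiply; the product carries 2*FRAC_BITS fractional bits,
    so shift right once to return to the original format."""
    return (A_fx.astype(np.int64) @ B_fx.astype(np.int64)) >> FRAC_BITS

rng = np.random.default_rng(9)
A = rng.normal(0, 1, (4, 8)).astype(np.float32)
B = rng.normal(0, 1, (8, 3)).astype(np.float32)

C_float = A @ B
C_fixed = fixed_matmul(to_fixed(A), to_fixed(B)).astype(np.float64) / SCALE
print("max abs error vs float:", float(np.max(np.abs(C_float - C_fixed))))
```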

  3. Neural-networks-based Modelling and a Fuzzy Neural Networks Controller of MCFC

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Molten Carbonate Fuel Cells (MCFC) offer a highly efficient and clean power generation technology which will soon be widely utilized. The temperature characteristics of the MCFC stack are briefly analyzed. A radial basis function (RBF) neural network identification technique is applied to set up a nonlinear temperature model of the MCFC stack, and the identification structure, algorithm and model training process are given in detail. A fuzzy controller of the MCFC stack is designed. In order to improve its online control ability, a neural network trained on the I/O data of the fuzzy controller is designed. The neural network can memorize and extend the inference rules of the fuzzy controller and substitute for the fuzzy controller to control the MCFC stack online. A detailed design of the controller is given. The validity of the MCFC stack model based on neural networks and the superior performance of the fuzzy neural network controller are demonstrated by simulations.

  4. Applying neural networks in autonomous systems

    Science.gov (United States)

    Thornbrugh, Allison L.; Layne, J. D.; Wilson, James M., III

    1992-03-01

    Autonomous and teleautonomous operations have been defined in a variety of ways by different groups involved with remote robotic operations. For example, Conway describes architectures for producing intelligent actions in teleautonomous systems. Applying neural nets in such systems is similar to applying them in general. However, for autonomy, learning or learned behavior may become a significant system driver. Thus, artificial neural networks are being evaluated as components in fully autonomous and teleautonomous systems. Feed-forward networks may be trained to perform adaptive signal processing, pattern recognition, data fusion, and function approximation -- as in control subsystems. Certain components of particular autonomous systems become more amenable to implementation using a neural net due to a match between the net's attributes and desired attributes of the system component. Criteria have been developed for distinguishing such applications and then implementing them. The success of hardware implementation is a crucial part of this application evaluation process. Three basic applications of neural nets -- autoassociation, classification, and function approximation -- are used to exemplify this process and to highlight procedures that are followed during the requirements, design, and implementation phases. This paper assumes some familiarity with basic neural network terminology and concentrates upon the use of different neural network types while citing references that cover the underlying mathematics and related research.

  5. Dynamic pricing by hopfield neural network

    Institute of Scientific and Technical Information of China (English)

    Lusajo M Minga; FENG Yu-qiang(冯玉强); LI Yi-jun(李一军); LU Yang(路杨); Kimutai Kimeli

    2004-01-01

    The increase in the number of shopbot users in e-commerce has driven sellers to be more flexible in their pricing strategies. Sellers see the importance of automated price setting, which provides efficient services to the large number of buyers who use shopbots. This paper studies the characteristic of decreasing energy with time in a continuous model of a Hopfield neural network, that is, the decrease of errors in the network with respect to time. This characteristic shows that it is possible to use a Hopfield neural network to obtain the main factor of dynamic pricing, the least variable cost, from production function principles. The least variable cost is obtained by reducing or increasing the input combination factors and then comparing the network output with the desired output, where the difference between the network output and the desired output decreases in the same manner as the Hopfield neural network energy. The Hopfield neural network will simplify the rapid change of prices in e-commerce during transactions that depend on the demand quantity in a demand-sensitive pricing model.

  6. Neutron spectrometry using artificial neural networks

    International Nuclear Information System (INIS)

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions as well as few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing was carried out using the Matlab(R) program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem

  7. Neutron spectrometry with artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico); Iniguez de la Torre Bayo, M.P. [Universidad de Valladolid, Valladolid (Spain); Barquero, R. [Hospital Universitario Rio Hortega, Valladolid (Spain); Arteaga A, T. [Envases de Zacatecas, S.A. de C.V., Zacatecas (Mexico)]. e-mail: rvega@cantera.reduaz.mx

    2005-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner sphere spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors; spectra from mathematical functions; as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner sphere spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner sphere count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  8. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 till the present. The neural network Fortran code used is available for download.

  9. Fuzzy neural network with fast backpropagation learning

    Science.gov (United States)

    Wang, Zhiling; De Sario, Marco; Guerriero, Andrea; Mugnuolo, Raffaele

    1995-03-01

    Neural filters with a multilayer backpropagation network have been proved able to realize almost any linear or non-linear filter. Because of the slow convergence of such networks, however, their fields of application have been limited. In this paper, fuzzy logic is introduced to adjust the learning rate and momentum parameter depending upon the output errors and training times. This greatly improves the convergence of the network. Test curves are shown to demonstrate the fast performance of the filters.
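
    The paper's fuzzy rule base is not given, so the sketch below uses a crude stand-in schedule to show where an error- and epoch-dependent adjustment of the learning rate and momentum would plug into a backpropagation loop; the specific formulas are assumptions, not the authors' fuzzy rules.

```python
import numpy as np

def adapt_hyperparams(error, epoch, max_epochs):
    """Stand-in for the fuzzy adjustment: large error early in training gives
    a larger learning rate; small error or late epochs give a smaller rate and
    more momentum. A real system would evaluate fuzzy membership rules here."""
    progress = epoch / max_epochs
    lr = 0.5 * min(error, 1.0) * (1.0 - 0.8 * progress) + 0.01
    momentum = 0.5 + 0.4 * progress
    return lr, momentum

# Example of how the schedule reacts at different stages of training.
for epoch, err in [(0, 0.9), (50, 0.3), (95, 0.05)]:
    lr, mom = adapt_hyperparams(err, epoch, max_epochs=100)
    print(f"epoch {epoch:3d}  error {err:.2f}  ->  lr {lr:.3f}  momentum {mom:.2f}")

# Inside a backpropagation loop the weight update would then be
#   velocity = momentum * velocity - lr * gradient
#   weights += velocity
```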

  10. Neural network plasticity in the human brain

    OpenAIRE

    Rizk, Sviatlana

    2013-01-01

    The human brain is highly organized within networks. Functionally related neural-assemblies communicate by oscillating synchronously. Intrinsic brain activity contains information on healthy and damaged brain functioning. This thesis investigated the relationship between functional networks and behavior. Furthermore, we assessed functional network plasticity after brain damage and as a result of brain stimulation. In different groups of patients we observed reduced functional connectivity bet...

  11. Molding the Knowledge in Modular Neural Networks

    OpenAIRE

    Spaanenburg, L.; Achterop, S.; Slump, C. H.; Zwaag, van der, M.B.

    2002-01-01

    Problem description. The learning of monolithic neural networks becomes harder with growing network size. Likewise the knowledge obtained while learning becomes harder to extract. Such disadvantages are caused by a lack of internal structure, that by its presence would reduce the degrees of freedom in evolving to a training target. A suitable internal structure with respect to modular network construction as well as to nodal discrimination is required. Details on the grouping and selection of...

  12. Modular neural networks and reinforcement learning

    OpenAIRE

    Raicevic, Peter

    2004-01-01

    We investigate the effect of modular architecture in an artificial neural network for a reinforcement learning problem. Using the supervised backpropagation algorithm to solve a two-task problem, the network performance can be increased by using networks with modular structures. However, using a modular architecture to solve a two-task reinforcement learning problem will not increase the performance compared to a non-modular structure. We show that by combining a modular structure with a modu...

  13. Stability of Stochastic Neutral Cellular Neural Networks

    Science.gov (United States)

    Chen, Ling; Zhao, Hongyong

    In this paper, we study a class of stochastic neutral cellular neural networks. By constructing a suitable Lyapunov functional and employing the nonnegative semi-martingale convergence theorem, we give some sufficient conditions ensuring the almost sure exponential stability of the networks. The results obtained are helpful for designing stable networks when stochastic noise is taken into consideration. Finally, two examples are provided to show the correctness of our analysis.

  14. Network Traffic Prediction based on Particle Swarm BP Neural Network

    OpenAIRE

    Yan Zhu; Guanghua Zhang; Jing Qiu

    2013-01-01

    The traditional BP neural network algorithm has shortcomings such as easily falling into local minima and slow convergence. Particle swarm optimization is an evolutionary computation technique based on swarm intelligence which cannot guarantee global convergence. The Artificial Bee Colony algorithm is a global optimization algorithm with many advantages: it is simple, convenient and strongly robust. In this paper, a new BP neural network based on the Artificial Bee Colony algorithm and part...

  15. Estimates on compressed neural networks regression.

    Science.gov (United States)

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing

    2015-03-01

    When the number of neural elements n of a neural network is larger than the sample size m, the overfitting problem arises, since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A which does not need to satisfy the Restricted Isometry Property (RIP) condition. By applying probability inequalities and approximation properties of feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound on the excess error is given.
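
    A minimal sketch of the compressed-domain idea, under stated assumptions: inputs are projected with a random Gaussian matrix A (no RIP-style condition is checked), and a simple ridge regression stands in for the feedforward-network learner analysed in the record.

```python
import numpy as np

rng = np.random.default_rng(10)

n_samples, n_features, n_compressed = 60, 200, 20    # more features than samples
X = rng.normal(size=(n_samples, n_features))
true_w = np.zeros(n_features)
true_w[:5] = [3, -2, 1.5, 4, -1]                     # sparse ground truth
y = X @ true_w + rng.normal(0, 0.1, n_samples)

# Random Gaussian projection into a lower-dimensional domain.
A = rng.normal(size=(n_features, n_compressed)) / np.sqrt(n_compressed)
Z = X @ A                                            # learn in the compressed domain

# Ridge regression stands in for the feedforward-network learner of the record.
lam = 1e-2
w_z = np.linalg.solve(Z.T @ Z + lam * np.eye(n_compressed), Z.T @ y)
pred = Z @ w_z
print("training MSE in compressed domain:", float(np.mean((pred - y) ** 2)))
```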

  16. Source Fault of the Dec.26, 2003 Bam Earthquake (Mw6.5) in Southeastern Iran Inferred From Aftershock Observation Data by Temporal High-Sensitive-Seismograph Network

    Science.gov (United States)

    Suzuki, S.; Matsushima, T.; Ito, Y.; Hosseini, S. K.; Nakamura, T.; Arash, J.; Sadeghi, H.; Maleki, M.; Aghda, F.

    2004-05-01

    The Bam earthquake occurred in southeastern Iran at 05:26 a.m. local time on December 26, 2003 (epicenter: 29.010N, 58.266E, Mo=6.6x10**18 Nm, Mw=6.5; ref.1). The earthquake had a strike-slip mechanism (strike=175, dip=85, slip=153; ref.2) and the following source parameters: focal depth=4 km, fault dimension=20 km x 15 km, Dmax=1.0 m, stress drop=3.7 MPa (ref.2). The earthquake struck the ancient city of Bam and killed more than 40,000 people, about one third of the roughly 120,000 people living in and around Bam city. The main reason for such heavy damage may be the weak adobe and brick houses; even so, the damage was exceptionally large, and we are therefore investigating other causes. To this end we brought instruments from Japan and installed 9 high-sensitivity seismographs and one accelerograph in and around Bam city on February 6-8, 2004, and observed aftershocks continuously for one month. Reading the P and S arrival times of about 100 aftershocks occurring from February 6 to 10, we determined their preliminary hypocenters and magnitudes. The epicenters (errors < 500 m) are distributed mainly from northeastern Bam city southward over a length of about 20 km. This means that the fault of the main shock passed just under the eastern half of Bam city, where most houses and buildings were heavily damaged. This fault is about 4 km west of the Bam fault presented in the geological map (ref.3). A north-south vertical cross-section of the hypocentral distribution (errors likely < 1 km) shows that most depths are shallower than 14 km and that a seismic gap exists in the laterally middle part of the distribution, shallower than 6 km in depth. The shallow seismic gap may correspond to a main fracture zone as shown in the slip distribution proposed by Yamanaka (ref.2). This main fracture, occurring shallower than about 6 km in depth, must be one of the causes of the heavy damage in Bam. (References) ref.1: USGS, http://neic.usgs.gov/neis/FM/; ref.2: ERI, U. Tokyo

  17. Research on the Simulation of Neural Networks and Semaphores

    Science.gov (United States)

    Zhu, Haibo

    In recent years, much research has been devoted to the emulation of the Turing machine; unfortunately, few have enabled the exploration of SMPs. Given the current status of decentralized algorithms, security experts obviously desire the significant unification of wide-area networks and telephony, which embodies the confusing principles of steganography. In this paper, we present new empathic communication (Bam), demonstrating that digital-to-analog converters and checksums are largely incompatible.

  18. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    M Sinha; P K Kalra; K Kumar

    2000-04-01

    Proposed here is a new neuron model, a basis for the Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has the properties of the basic neuron model as well as of the higher-order neuron model (multiplicative aggregation function). It can adapt to the standard neuron, the higher-order neuron, or a combination of the two. This approach is found to estimate the orbit with accuracy significantly better than the Kalman Filter (KF) and the Feedforward Multilayer Neural Network (FMNN) (also simply referred to as Artificial Neural Network, ANN) with lambda-gamma learning. Typical simulation runs also bring out the superiority of the proposed scheme over the Kalman filter from the standpoint of computation time and the amount of data needed for the desired degree of estimation accuracy for the specific problem of orbit determination.

  19. Reconstruction of neutron spectra through neural networks

    International Nuclear Information System (INIS)

    A neural network has been used to reconstruct neutron spectra starting from the count rates of the detectors of the Bonner sphere spectrometer system. A group of 56 neutron spectra was selected to calculate the count rates that they would produce in a Bonner sphere system; the neural network was trained with these data and the spectra. To test the performance of the net, 12 spectra were used: 6 were taken from the group used for training, 3 were obtained from mathematical functions, and the other 3 correspond to real spectra. When comparing the original spectra with those reconstructed by the net, we find that the net performs poorly when reconstructing monoenergetic spectra, which we attribute to the characteristics of the spectra used for training the neural network; for the other groups of spectra, however, the results of the net agree with the expected ones. (Author)

  20. Hair Loss Diagnosis Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ahmad Esfandiari

    2012-09-01

    Full Text Available Hair is an appendage of the skin that plays an important role in the beauty of people's faces. On average, 50 to 80 hairs are shed naturally each day, and various factors contribute to hair loss. In this paper, using the eight influence attributes of gender, age, genetic factors, surgery, pregnancy, zinc deficiency, iron-deficiency anemia and the use of cosmetics, the amount of hair loss is predicted. This work has been performed using artificial neural networks. 60 percent of the collected data was used for training, 20 percent for validation and the remaining 20 percent for testing the neural networks. Various training algorithms have been used, and the results of their implementations have been compared. It appears that neural networks can successfully predict hair loss.
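
    As a rough illustration of the setup described above (eight input attributes and a 60/20/20 train/validation/test split), the sketch below trains a small one-hidden-layer network by plain gradient descent on synthetic data. The features, labels, network size and training algorithm are assumptions, not the paper's.

```python
# Hedged sketch: 60/20/20 split plus a tiny feedforward classifier on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 8))                 # eight influence attributes (synthetic)
y = (X @ rng.standard_normal(8) + 0.3 * rng.standard_normal(300) > 0).astype(float)

idx = rng.permutation(len(X))
n_tr, n_va = int(0.6 * len(X)), int(0.2 * len(X))
tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

# One hidden layer, sigmoid output, full-batch gradient descent on cross-entropy.
W1, b1 = 0.1 * rng.standard_normal((8, 10)), np.zeros(10)
w2, b2 = 0.1 * rng.standard_normal(10), 0.0
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5
for _ in range(2000):
    h = np.tanh(X[tr] @ W1 + b1)
    g = (sig(h @ w2 + b2) - y[tr]) / len(tr)      # d(loss)/d(output logit)
    gh = np.outer(g, w2) * (1 - h ** 2)           # backpropagated to the hidden layer
    w2 -= lr * h.T @ g;       b2 -= lr * g.sum()
    W1 -= lr * X[tr].T @ gh;  b1 -= lr * gh.sum(axis=0)

acc = lambda s: np.mean((sig(np.tanh(X[s] @ W1 + b1) @ w2 + b2) > 0.5) == y[s])
print("validation accuracy %.2f, test accuracy %.2f" % (acc(va), acc(te)))
```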

  1. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
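
    The dynamic equations referred to above generalize a well-known precursor; the sketch below integrates only that simplest case (a gradient model converging to the Moore-Penrose inverse of a full-column-rank matrix from the zero initial state) and is not the authors' generalized network for outer inverses with prescribed range and null space.

```python
# Hedged sketch: Euler integration of the classic matrix-valued ODE
#   dV/dt = -gamma * A^T (A V - I),
# whose equilibrium is the Moore-Penrose inverse when A has full column rank.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))          # full column rank with probability one
gamma, dt, steps = 10.0, 1e-3, 20000

V = np.zeros((3, 5))                     # zero initial state, as in the abstract
I = np.eye(5)
for _ in range(steps):
    V += dt * (-gamma) * A.T @ (A @ V - I)   # one explicit Euler step

print("max |V - pinv(A)| =", np.abs(V - np.linalg.pinv(A)).max())
```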

  2. Prediction of metal corrosion by neural networks

    Directory of Open Access Journals (Sweden)

    Z. Jančíková

    2013-07-01

    Full Text Available The contribution deals with the use of artificial neural networks for the prediction of atmospheric corrosion of steel. Atmospheric corrosion of metal materials exposed to atmospheric conditions depends on various factors such as local temperature, relative humidity, amount of precipitation, pH of rainfall, concentration of the main pollutants and exposure time. As these factors interact in a very complex way, exact relations for the mathematical description of atmospheric corrosion of various metals are not known so far. Classical analytical and mathematical functions are of limited use for describing this type of strongly non-linear system, which depends on various meteorological-chemical factors, on the interactions between them and on material parameters. Nowadays there is a certain chance to predict the corrosion loss of materials by artificial neural networks. Neural networks are used primarily for real systems which are characterized by high nonlinearity, considerable complexity and great difficulty of formal mathematical description.

  3. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin;

    2011-01-01

    neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The Artificial neural network model of this research uses slope characteristics as input and leads to the output in form of the probability of failure...... and factor of safety. It can be stated that the trained neural networks are capable of predicting the stability of slopes and safety factor of landslide hazard in study area with an acceptable level of confidence. Landslide hazard analysis and mapping can provide useful information for catastrophic loss...... failure" which is main concentration of the current research and "liquefaction failure". Shear failures along shear planes occur when the shear stress along the sliding surfaces exceed the effective shear strength. These slides have been referred to as landslide. An expert system based on artificial...

  4. Classification of radar clutter using neural networks.

    Science.gov (United States)

    Haykin, S; Deng, C

    1991-01-01

    A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented.

  5. Network Traffic Prediction based on Particle Swarm BP Neural Network

    Directory of Open Access Journals (Sweden)

    Yan Zhu

    2013-11-01

    Full Text Available The traditional BP neural network algorithm has some drawbacks: it easily falls into local minima and its convergence speed is slow. Particle swarm optimization is an evolutionary computation technique based on swarm intelligence, but it cannot guarantee global convergence. The Artificial Bee Colony algorithm is a global optimization algorithm with many advantages such as simplicity, convenience and strong robustness. In this paper, a new BP neural network based on the Artificial Bee Colony algorithm and the particle swarm optimization algorithm is proposed to optimize the weights and threshold values of the BP neural network. Network traffic prediction experiments show that the optimized BP network traffic prediction based on PSO-ABC has high prediction accuracy and stable prediction performance.
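
    As a hedged sketch of the general idea only (plain particle swarm optimization, without the Artificial Bee Colony component or the paper's hybrid details), the code below searches the weight vector of a tiny 1-5-1 feedforward predictor for one-step-ahead prediction of a synthetic traffic series. The series, network size and PSO constants are assumptions.

```python
# Hedged sketch: PSO searching the weights of a small feedforward predictor.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "traffic" series and one-step-ahead training pairs.
t = np.arange(300)
traffic = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(300)
X, y = traffic[:-1, None], traffic[1:]

def mse(w):
    """Unpack a flat particle into a 1-5-1 network and return its training MSE."""
    W1, b1, w2, b2 = w[:5].reshape(1, 5), w[5:10], w[10:15], w[15]
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ w2 + b2 - y) ** 2)

# Standard PSO over the 16 network parameters.
n_particles, dim, iters = 30, 16, 300
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w_in, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration constants
for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best training MSE found by PSO:", pbest_val.min())
```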

  6. Accident scenario diagnostics with neural networks

    International Nuclear Information System (INIS)

    Nuclear power plants are very complex systems. The diagnosis of transients or accident conditions is very difficult because a large amount of information, which is often noisy, intermittent, or even incomplete, needs to be processed in real time. To demonstrate their potential application to nuclear power plants, neural networks are used to monitor the accident scenarios simulated by the training simulator of TVA's Watts Bar Nuclear Power Plant. A self-organizing network is used to compress the original data to reduce the total number of training patterns. Different accident scenarios are closely related to different key parameters which distinguish one accident scenario from another. Therefore, the accident scenarios can be monitored by a set of small neural networks, called modular networks, each of which monitors only one assigned accident scenario, to obtain fast training and recall. Sensitivity analysis is applied to select proper input variables for the modular networks.

  7. Neural networks and particle physics

    CERN Document Server

    Peterson, Carsten

    1993-01-01

    1. Introduction: Structure of the Central Nervous System, Generics. 2. Feed-forward networks, Perceptrons, Function approximators. 3. Self-organisation, Feature Maps. 4. Feed-back Networks, The Hopfield model, Optimization problems, Deformable templates, Graph bisection.

  8. Optimal control learning with artificial neural networks

    International Nuclear Information System (INIS)

    This paper shows the capabilities of neural networks in optimal control applications for nonlinear dynamic systems. Our method derives from a classical method for the direct search of optimal control using gradient techniques. We show that the neural approach and the backpropagation paradigm are able to solve efficiently the equations expressing the necessary conditions for an optimizing solution. We take into account the known capabilities of multi-layered networks in function approximation, and for dynamic systems we generalize the indirect learning of an inverse-model adaptive architecture so that it is capable of defining an optimal control with respect to a temporal criterion. (orig.)

  9. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. From the weights of the trained neural networks, kernel windows are created and used for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  10. SAR ATR Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Tian Zhuangzhuang

    2016-06-01

    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition SAR datasets prove the validity of this method.

  11. Contractor Prequalification Based on Neural Networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jin-long; YANG Lan-rong

    2002-01-01

    Contractor prequalification involves the screening of contractors by a project owner, according to a given set of criteria, in order to determine their competence to perform the work if awarded the construction contract. This paper introduces the capabilities of neural networks in solving problems related to contractor prequalification. The neural network system for contractor prequalification has an input vector of 8 components and an output vector of 1 component. The output vector represents whether a contractor is qualified or not qualified to submit a bid on a project.

  12. Neural network approach to radiologic lesion detection

    International Nuclear Information System (INIS)

    An area of artificial intelligence that has gained recent attention is the neural network approach to pattern recognition. The authors explore the use of neural networks in radiologic lesion detection with what is known in the literature as the novelty filter. This filter uses a linear model; images of normal patterns become training vectors and are stored as columns of a matrix. An image of an abnormal pattern is introduced and the abnormality or novelty is extracted. A VAX 750 was used to encode the novelty filter, and two experiments have been examined

  13. Ferroelectric Memory Capacitors For Neural Networks

    Science.gov (United States)

    Thakoor, Sarita; Moopenn, Alexander W.; Stadler, Henry L.

    1991-01-01

    Thin-film ferroelectric capacitors proposed as nonvolatile analog memory devices. Intended primarily for use as synaptic connections in electronic neural networks. Connection strengths (synaptic weights) stored as nonlinear remanent polarizations of ferroelectric films. Ferroelectric memory and interrogation capacitors combined into memory devices in vertical or lateral configurations. Photoconductive layer modulated by light provides variable resistance to alter bias signal applied to memory capacitor. Features include nondestructive readout, simplicity, and resistance to ionizing radiation. Interrogated without destroying stored analog data. Also amenable to very-large-scale integration. Allows use of ac coupling, eliminating errors caused by dc offsets in amplifier circuits of neural networks.

  14. Spectral classification using convolutional neural networks

    CERN Document Server

    Hála, Pavel

    2014-01-01

    There is a great need for accurate and autonomous spectral classification methods in astrophysics. This thesis is about training a convolutional neural network (ConvNet) to recognize an object class (quasar, star or galaxy) from one-dimensional spectra only. The author developed several scripts and C programs for dataset preparation, preprocessing and postprocessing of the data. The EBLearn library (developed by Pierre Sermanet and Yann LeCun) was used to create the ConvNets. Application to a dataset of more than 60000 spectra yielded a success rate of nearly 95%. This thesis conclusively demonstrates the great potential of convolutional neural networks and deep learning methods in astrophysics.

  15. Livermore Big Artificial Neural Network Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-01

    LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library that is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  16. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  17. Speech Recognition Method Based on Multilayer Chaotic Neural Network

    Institute of Scientific and Technical Information of China (English)

    REN Xiaolin; HU Guangrui

    2001-01-01

    In this paper, speech recognition using neural networks is investigated. In particular, chaotic dynamics is introduced to the neurons, and a multilayer chaotic neural network (MLCNN) architecture is built. A learning algorithm is also derived to train the weights of the network. We apply the MLCNN to speech recognition and compare the performance of the network with those of a recurrent neural network (RNN) and a time-delay neural network (TDNN). Experimental results show that the MLCNN method outperforms the other neural network methods with respect to average recognition rate.

  18. Autonomous robot behavior based on neural networks

    Science.gov (United States)

    Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo

    1997-04-01

    The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this, the robot has to be able to find solutions to unknown situations, to learn from experience, that is, to acquire action procedures together with the corresponding knowledge of the workspace structure, and to recognize its working environment. The planning of intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding a solution, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some of the well-known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. An adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule and an initialization phase. The developed neural network combines the advantages of networks based on the Adaptive Resonance Theory and, by using the shadowed hidden layer, provides the ability to recognize slightly translated or rotated obstacles in any direction.

  19. Neutron spectrum unfolding using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico)]. E-mail: rvega@cantera.reduaz.mx

    2004-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources, and reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the corresponding spectrum was used as output during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer and 31 neurons in the output layer. After training, the network was tested with the Bonner spheres count rates produced by twelve neutron spectra. The network allows unfolding the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the test count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)
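
    For orientation only, here is a shape-level sketch of the topology described above (7 count-rate inputs, 56 hidden neurons, 31 output energy groups). The weights are random, and the training data and UTA4 response matrix are not reproduced, so the forward pass illustrates the architecture rather than a trained unfolding.

```python
# Shape-only sketch of a 7-56-31 feedforward unfolding network (untrained).
import numpy as np

rng = np.random.default_rng(5)
W1, b1 = 0.1 * rng.standard_normal((7, 56)), np.zeros(56)
W2, b2 = 0.1 * rng.standard_normal((56, 31)), np.zeros(31)

def unfold(count_rates):
    """Map 7 Bonner-sphere count rates to a 31-group spectrum estimate."""
    h = np.tanh(count_rates @ W1 + b1)
    return h @ W2 + b2

spectrum = unfold(rng.uniform(0, 1, 7))
print(spectrum.shape)        # (31,)
```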

  20. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In the present study, an artificial neural network method has been applied to predict the stability of berm breakwaters. Four neural network models are constructed based on the parameters which influence the stability of breakwater. Training...

  1. Wave transmission prediction of multilayer floating breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Hegde, A.V.

    In the present study, an artificial neural network method has been applied for wave transmission prediction of multilayer floating breakwater. Two neural network models are constructed based on the parameters which influence the wave transmission...

  2. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  3. Sparse neural networks with large learning diversity

    CERN Document Server

    Gripon, Vincent

    2011-01-01

    Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages, which is much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint on the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple, since it is based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory.

  4. Neural Networks for Wordform Recognition

    OpenAIRE

    Eineborg, Martin; Gambäck, Björn

    1994-01-01

    The paper outlines a method for automatic lexical acquisition using three-layered back-propagation networks. Several experiments have been carried out where the performance of different network architectures have been compared to each other on two tasks: overall part-of-speech (noun, adjective or verb) classification and classification by a set of 13 possible output categories. The best results for the simple task were obtained by networks consisting of 204-212 input neurons...

  5. Performance Comparison of Neural Networks for HRTFs Approximation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In order to approximate head-related transfer functions (HRTFs), this paper employs and compares three kinds of one-input neural network models, namely multi-layer perceptron (MLP) networks, radial basis function (RBF) networks and wavelet neural networks (WNN), so as to select the best network model for further HRTF approximation. Experimental results demonstrate that wavelet neural networks are more efficient and useful.
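
    One of the compared model families, the one-input RBF network, can be sketched in a few lines; the toy target curve, the number of centres and the kernel width below are assumptions and do not come from the paper's HRTF data.

```python
# Hedged sketch: a one-input RBF network fitted by least squares to a toy response curve.
import numpy as np

rng = np.random.default_rng(6)
f = np.linspace(0.0, 1.0, 200)                     # normalised frequency axis
target = np.sin(6 * np.pi * f) * np.exp(-2 * f)    # stand-in for an HRTF magnitude curve

centres = np.linspace(0.0, 1.0, 20)                # RBF centres (assumed)
width = 0.08                                       # Gaussian width (assumed)
Phi = np.exp(-((f[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

w, *_ = np.linalg.lstsq(Phi, target, rcond=None)   # output-layer weights
print("RBF approximation RMSE:", np.sqrt(np.mean((Phi @ w - target) ** 2)))
```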

  6. Dynamic Object Identification with SOM-based neural networks

    Directory of Open Access Journals (Sweden)

    Aleksey Averkin

    2014-03-01

    Full Text Available In this article a number of neural networks based on self-organizing maps, which can be successfully used for dynamic object identification, are described. Unique SOM-based modular neural networks with vector-quantized associative memory and recurrent self-organizing maps as modules are presented. The structured algorithms for learning and operation of such SOM-based neural networks are described in detail, and some experimental results and comparisons with other neural networks are given.

  7. Simplified Neural Network Design for Hand Written Digit Recognition

    OpenAIRE

    Muhammad Zubair Asghar; Hussain Ahmad; Shakeel Ahmad; Sheikh Muhammad Saqib; Bashir Ahmad; Muhammad Junaid Asghar

    2011-01-01

    A neural network is an abstraction of the central nervous system and works as a parallel processing system. Optimization, image processing, diagnosis and many other applications, which are difficult and time consuming when conventional methods are used for their implementation, are made very simple through neural networks. A neural network is a simplified version of the human brain. Like the human brain, neural networks also exhibit efficient performance on perceptive tasks like recognition of visual images ...

  8. Remote Sensing Image Segmentation with Probabilistic Neural Networks

    Institute of Scientific and Technical Information of China (English)

    LIU Gang

    2005-01-01

    This paper focuses on image segmentation with probabilistic neural networks (PNNs). Back-propagation neural networks (BPNNs) and multilayer perceptron neural networks (MLPs) are also considered in this study. In particular, this paper investigates the implementation of PNNs in image segmentation and the optimal processing of image segmentation with a PNN. A comparison between image segmentation with PNNs and with other neural networks is given. The experimental results show that PNNs can be successfully applied to image segmentation with good results.

  9. Neural network method for solving elastoplastic finite element problems

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A basic optimization principle of artificial neural networks, the Lagrange Programming Neural Network (LPNN) model, is presented for solving elastoplastic finite element problems. The nonlinear problems of mechanics are represented as a neural-network-based optimization problem by adopting a nonlinear function as the neuron transfer function. Finally, two simple elastoplastic problems are numerically simulated. The LPNN optimization results for the elastoplastic problems are found to be comparable to those of the traditional Hopfield neural network optimization model.

  10. Optimizing neural network models: motivation and case studies

    OpenAIRE

    Harp, S A; T. Samad

    2012-01-01

    Practical successes have been achieved  with neural network models in a variety of domains, including energy-related industry. The large, complex design space presented by neural networks is only minimally explored in current practice. The satisfactory results that nevertheless have been obtained testify that neural networks are a robust modeling technology; at the same time, however, the lack of a systematic design approach implies that the best neural network models generally  rem...

  11. Applications of Neural Networks in Spinning Prediction

    Institute of Scientific and Technical Information of China (English)

    程文红; 陆凯

    2003-01-01

    The neural network spinning prediction models (BP and RBF networks), trained with data from the mill, can predict yarn qualities and spinning performance. The input parameters of the models are as follows: yarn count, diameter, hauteur, bundle strength, spinning draft, spinning speed, traveler number and twist. The output parameters are: yarn evenness, thin places, tenacity and elongation, and ends-down. The predicted results match the testing data well.

  12. Multilingual Text Detection with Nonlinear Neural Network

    OpenAIRE

    Lin Li; Shengsheng Yu; Luo Zhong; Xiaozhen Li

    2015-01-01

    Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features and combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of the text features. We also develop a novel nonlinear network based on the traditional Convolutional Neural Network that is able to detect multilingual text regions in th...

  13. Weighted Learning for Feedforward Neural Networks

    Institute of Scientific and Technical Information of China (English)

    Rong-Fang Xu; Thao-Tsen Chen; Shie-Jue Lee

    2014-01-01

    In this paper, we propose two weighted learning methods for the construction of single hidden layer feedforward neural networks. Both methods incorporate weighted least squares. Our idea is to allow the training instances nearer to the query to offer bigger contributions to the estimated output. By minimizing the weighted mean square error function, optimal networks can be obtained. The results of a number of experiments demonstrate the effectiveness of our proposed methods.
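
    A minimal sketch of the weighted least-squares idea follows, assuming an ELM-style random hidden layer and a Gaussian distance weighting; both are assumptions, since the exact weighting scheme and network construction are not given in the abstract.

```python
# Hedged sketch: query-dependent weighted least squares for the output weights
# of a single-hidden-layer feedforward network.
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Fixed random hidden layer (an assumption, in the spirit of ELM-type networks).
W, b = rng.standard_normal((1, 25)), rng.standard_normal(25)
H = np.tanh(X @ W + b)

def predict(query, tau=1.0):
    # Instance weights: training points nearer the query contribute more.
    s = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * tau ** 2))
    # Weighted least squares: minimise sum_i s_i * (H_i beta - y_i)^2.
    Hw = H * s[:, None]
    beta = np.linalg.solve(H.T @ Hw + 1e-6 * np.eye(25), Hw.T @ y)
    return float(np.tanh(query @ W + b) @ beta)

print(predict(np.array([1.0])), "vs sin(1.0) =", np.sin(1.0))
```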

  14. Local learning algorithm for optical neural networks

    OpenAIRE

    QIAO, YONG; Psaltis, Demetri

    1992-01-01

    An anti-Hebbian local learning algorithm for two-layer optical neural networks is introduced. With this learning rule, the weight update for a certain connection depends only on the input and output of that connection and a global, scalar error signal. Therefore the backpropagation of error signals through the network, as required by the commonly used back error propagation algorithm, is avoided. It still guarantees, however, that the synaptic weights are updated in the error descent directio...

  15. Auto-associative nanoelectronic neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nogueira, C. P. S. M.; Guimarães, J. G. [Departamento de Engenharia Elétrica - Laboratório de Dispositivos e Circuito Integrado, Universidade de Brasília, CP 4386, CEP 70904-970 Brasília DF (Brazil)

    2014-05-15

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.

  16. Designing Deep Learning Neural Networks using Caffe

    OpenAIRE

    Kishore, Anurag; Jindal, Stuti; Singh, Sanjay

    2015-01-01

    This tutorial investigates various tools for designing Deep Learning Neural Networks (DLNN). Our exploration of many tools has revealed that Caffe is the fastest and most appropriate tool for designing DLNNs. We have given step by step procedure for installing and configuring Caffe and its dependencies for designing DLNN.

  17. Chaotic behavior of a layered neural network

    Energy Technology Data Exchange (ETDEWEB)

    Derrida, B.; Meir, R.

    1988-09-15

    We consider the evolution of configurations in a layered feed-forward neural network. Exact expressions for the evolution of the distance between two configurations are obtained in the thermodynamic limit. Our results show that the distance between two arbitrarily close configurations always increases, implying chaotic behavior, even in the phase of good retrieval.

  18. A Modified Algorithm for Feedforward Neural Networks

    Institute of Scientific and Technical Information of China (English)

    夏战国; 管红杰; 李政伟; 孟斌

    2002-01-01

    As one of the most popular learning algorithms for feedforward neural networks, the classic BP algorithm has many shortcomings. To overcome some of these shortcomings, a modified learning algorithm is proposed in this article. The simulation results illustrate that the modified algorithm is more effective and practical.

  19. Nonlinear Time Series Analysis via Neural Networks

    Science.gov (United States)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to make an effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history to adapt our trading system behaviour based on them.

  20. Neural Network Output Optimization Using Interval Analysis

    NARCIS (Netherlands)

    De Weerdt, E.; Chu, Q.P.; Mulder, J.A.

    2009-01-01

    The problem of output optimization within a specified input space of neural networks (NNs) with fixed weights is discussed in this paper. The problem is (highly) nonlinear when nonlinear activation functions are used. This global optimization problem is encountered in the reinforcement learning (RL)

  1. NEURAL NETWORK APPROACH FOR EYE DETECTION

    Directory of Open Access Journals (Sweden)

    Vijayalaxmi

    2012-05-01

    Full Text Available Driving support systems, such as car navigation systems, are becoming common, and they support the driver in several aspects. Non-intrusive detection of fatigue and drowsiness based on eye-blink count, together with eye-directed instruction control, helps prevent collisions caused by drowsy driving. Eye detection and tracking under various conditions, such as illumination, background, face alignment and facial expression, makes the problem complex. A Neural Network based algorithm is proposed in this paper to detect the eyes efficiently. In the proposed algorithm, the neural network is first trained to reject non-eye regions based on images with eye features and images with non-eye features, using Gabor filters and Support Vector Machines to reduce the dimension and classify efficiently. In the algorithm, the face is first segmented using the L*a*b color space, then the eyes are detected using HSV and the Neural Network approach. The algorithm was tested on nearly 100 images of different persons under different conditions and the results are satisfactory, with a success rate of 98%. The Neural Network was trained with 50 non-eye images and 50 eye images at different angles using Gabor filters. This paper is part of the research work on the "Development of a Non-Intrusive System for Real-time Monitoring and Prediction of Driver Fatigue and Drowsiness" project sponsored by the Department of Science & Technology, Govt. of India, New Delhi, at Vignan Institute of Technology and Sciences, Vignan Hills, Hyderabad.
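
    The Gabor feature-extraction step mentioned above can be illustrated with a short sketch; the kernel size, wavelength and orientations are assumed values, and the random patch merely stands in for an eye or non-eye training image.

```python
# Hedged sketch: a small bank of Gabor kernels and their responses on an image patch.
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma=3.0, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor filter (parameter values are illustrative)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = xs * np.cos(theta) + ys * np.sin(theta)
    y_t = -xs * np.sin(theta) + ys * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_t / wavelength + psi)

rng = np.random.default_rng(8)
patch = rng.uniform(0, 1, (21, 21))              # stand-in for a grey-level eye patch

bank = [gabor_kernel(21, wavelength=8.0, theta=k * np.pi / 4) for k in range(4)]
features = np.array([np.sum(patch * ker) for ker in bank])  # one response per orientation
print(features)                                  # feature vector passed on to the classifier
```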

  2. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

    Full Text Available This paper introduces some novel models for all steps of a face recognition system. In the step of face detection, we propose a hybrid model combining AdaBoost and Artificial Neural Network (ABANN to solve the process efficiently. In the next step, labeled faces detected by ABANN will be aligned by Active Shape Model and Multi Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on Multi Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: geometric feature based method and Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of human face. The model links many Neural Networks together, so we call it Multi Artificial Neural Network. MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on CallTech database show the feasibility of our proposed model.

  3. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model...

  4. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating. Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN. We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours, the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  5. Localizing Tortoise Nests by Neural Networks.

    Science.gov (United States)

    Barbuti, Roberto; Chessa, Stefano; Micheli, Alessio; Pucci, Rita

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660
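
    The input delay neural network (IDNN) used in this system consumes fixed-length windows of the accelerometer stream; the sketch below shows only that tapped-delay preprocessing step, with the window length and sampling chosen arbitrarily for illustration.

```python
# Hedged sketch: building delayed-input vectors for an IDNN from a 3-axis accelerometer stream.
import numpy as np

rng = np.random.default_rng(9)
accel = rng.standard_normal((1000, 3))            # x, y, z acceleration samples (synthetic)

def delayed_inputs(signal, delay):
    """Stack `delay` consecutive 3-axis samples into one flat input vector per position."""
    n = len(signal) - delay + 1
    return np.stack([signal[i:i + delay].ravel() for i in range(n)])

X = delayed_inputs(accel, delay=25)               # shape: (positions, 25 * 3)
print(X.shape)                                    # each row is one IDNN input vector
```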

  6. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai;

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  7. Neural Networks for protein Structure Prediction

    DEFF Research Database (Denmark)

    Bohr, Henrik

    1998-01-01

    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  8. Towards semen quality assessment using neural networks

    DEFF Research Database (Denmark)

    Linneberg, Christian; Salamon, P.; Svarer, C.;

    1994-01-01

    The paper presents the methodology and results from a neural net based classification of human sperm head morphology. The methodology uses a preprocessing scheme in which invariant Fourier descriptors are lumped into “energy” bands. The resulting networks are pruned using optimal brain damage. Pe...

  9. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the chi-squared test, where the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  10. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    Science.gov (United States)

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results of various configurations of deep learning structures with baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between the multiphase MRIs. Compared to other research, which involves additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance for Convolutional Neural Networks, based on sensitivity and specificity, compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from the Convolutional Neural Networks. PMID:26736358

  11. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    Science.gov (United States)

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results of various configurations of deep learning structures with baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between the multiphase MRIs. Compared to other research, which involves additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance for Convolutional Neural Networks, based on sensitivity and specificity, compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from the Convolutional Neural Networks.

  12. SOLVING INVERSE KINEMATICS OF REDUNDANT MANIPULATOR BASED ON NEURAL NETWORK

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    For redundant manipulators, a neural network is used to tackle the velocity inverse kinematics of robot manipulators. The neural networks utilized are multi-layered perceptrons with a back-propagation training algorithm. A weight table is used to save the weights obtained by solving the inverse kinematics under different optimization performance criteria. Simulations verify the effectiveness of using the neural network.

  13. Hidden Neural Networks: A Framework for HMM/NN Hybrids

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric; Krogh, Anders Stærmose

    1997-01-01

    This paper presents a general framework for hybrids of hidden Markov models (HMM) and neural networks (NN). In the new framework called hidden neural networks (HNN) the usual HMM probability parameters are replaced by neural network outputs. To ensure a probabilistic interpretation the HNN...

  14. Self-Organizing Multilayered Neural Networks of Optimal Complexity

    OpenAIRE

    Schetinin, V.

    2005-01-01

    The principles of self-organizing neural networks of optimal complexity are considered under an unrepresentative learning set. A method of self-organizing multi-layered neural networks is offered and used to train logical neural networks, which were applied to medical diagnostics.

  15. A brief review of feed-forward neural networks

    OpenAIRE

    SAZLI, Murat Hüsnü

    2006-01-01

    Artificial neural networks, or simply neural networks, find applications across a very wide spectrum. In this paper, following a brief presentation of the basic aspects of feed-forward neural networks, their most commonly used learning/training algorithm, the so-called back-propagation algorithm, is described.

  16. Extracting Knowledge from Supervised Neural Networks in Image Procsssing

    NARCIS (Netherlands)

    Zwaag, van der Berend Jan; Slump, Kees; Spaanenburg, Lambert; Jain, R.; Abraham, A.; Faucher, C.; Zwaag, van der B.J.

    2003-01-01

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a my

  17. Analysis of Neural Networks in Terms of Domain Functions

    NARCIS (Netherlands)

    Zwaag, van der Berend Jan; Slump, Cees; Spaanenburg, Lambert

    2002-01-01

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a my

  18. Recognition of Continuous Digits by Quantum Neural Networks

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper describes a new kind of neural network, the Quantum Neural Network (QNN), and its application to the recognition of continuous digits. QNN combines the advantages of neural modeling and fuzzy theoretic principles. Experimental results show that an error reduction of more than 15 percent is achieved on a speaker-independent continuous digit recognition task compared with BP networks.

  19. A Direct Feedback Control Based on Fuzzy Recurrent Neural Network

    Institute of Scientific and Technical Information of China (English)

    李明; 马小平

    2002-01-01

    A direct feedback control system based on a fuzzy recurrent neural network is proposed, and a method for training the weights of the fuzzy recurrent neural network was designed by applying a modified contraction-mapping genetic algorithm. Computer simulation results indicate that the fuzzy recurrent neural network controller has perfect dynamic and static performance.

  20. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designed...

  1. A Fuzzy Neural Network for Fault Pattern Recognition

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper combines fuzzy set theory with the ART neural network, and demonstrates some important properties of the fuzzy ART neural network algorithm. The results from an application to ball bearing diagnosis indicate that the fuzzy ART neural network achieves fast and stable recognition of fuzzy patterns.

  2. Hopfield Neural Network Approach to Clustering in Mobile Radio Networks

    Institute of Scientific and Technical Information of China (English)

    JiangYan; LiChengshu

    1995-01-01

    In this paper, the Hopfield neural network (NN) algorithm is developed for selecting gateways in cluster linkage. The linked cluster (LC) architecture is assumed to achieve distributed network control in multihop radio networks through local controllers, called clusterheads; the nodes connecting these clusterheads are defined to be gateways. In Hopfield NN models, the most critical issue being the determination of connection weights, we use the approach of Lagrange multipliers (LM) for its dynamic nature.

  3. From Designing A Single Neural Network to Designing Neural Network Ensembles

    Institute of Scientific and Technical Information of China (English)

    Liu Yong; Zou Xiu-fer

    2003-01-01

    This paper introduces a supervised learning model and surveys related research work. The paper is organised as follows. A supervised learning model is first described. The bias-variance trade-off is then discussed for the supervised learning model. Based on the bias-variance trade-off, both single neural network approaches and neural network ensemble approaches are reviewed, and problems with the existing approaches are indicated. Finally, the paper concludes by specifying potential future research directions.

  4. A Fuzzy Quantum Neural Network and Its Application in Pattern Recognition

    Institute of Scientific and Technical Information of China (English)

    MIAOFuyou; XIONGYan; CHENHuanhuan; WANGXingfu

    2005-01-01

    This paper proposes a fuzzy quantum neural network model combining a quantum neural network and fuzzy logic, which applies fuzzy logic to design the collapse rules of the quantum neural network and solves a character recognition problem. Theoretical analysis and experimental results show that the fuzzy quantum neural network achieves better recognition accuracy than the traditional neural network and the quantum neural network.

  5. Color control of printers by neural networks

    Science.gov (United States)

    Tominaga, Shoji

    1998-07-01

    A method is proposed for solving the mapping problem from the 3D color space to the 4D CMYK space of printer ink signals by means of a neural network. The CIE-L*a*b* color system is used as the device-independent color space. The color reproduction problem is considered as the problem of controlling an unknown static system with four inputs and three outputs. A controller determines the CMYK signals necessary to produce the desired L*a*b* values with a given printer. Our solution method for this control problem is based on a two-phase procedure which eliminates the need for UCR and GCR. The first phase determines a neural network as a model of the given printer, and the second phase determines the combined neural network system by combining the printer model and the controller in such a way that it represents an identity mapping in the L*a*b* color space. Then the network of the controller part realizes the mapping from the L*a*b* space to the CMYK space. Practical algorithms are presented in the form of multilayer feedforward networks. The feasibility of the proposed method is shown in experiments using a dye sublimation printer and an ink jet printer.
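
    A heavily simplified sketch of the two-phase procedure follows, assuming a linear toy "printer" with three ink channels (the K channel, UCR/GCR and the real network models are omitted): phase 1 identifies a printer model from sample patches, and phase 2 trains a controller so that controller followed by printer model approximates an identity map in the colour space.

```python
# Hedged sketch of the two-phase idea with linear toy models (not the paper's networks).
import numpy as np

rng = np.random.default_rng(3)

# Unknown "printer": a fixed linear map from 3 ink signals to 3 colour coordinates.
true_P = rng.standard_normal((3, 3)) + 3 * np.eye(3)
def printer(ink):                                  # stands in for measuring printed patches
    return ink @ true_P.T

# Phase 1: identify the printer model from sample patches (least squares).
ink_samples = rng.uniform(0, 1, (200, 3))
lab_samples = printer(ink_samples)
P_model, *_ = np.linalg.lstsq(ink_samples, lab_samples, rcond=None)   # ink @ P_model ~ colour

# Phase 2: train the controller so that (controller -> printer model) is an identity map.
C = 0.1 * rng.standard_normal((3, 3))
targets = rng.uniform(-3, 3, (500, 3))             # desired colour values
lr = 0.01
for _ in range(2000):
    err = targets @ C @ P_model - targets          # identity-map error through the frozen model
    C -= lr * targets.T @ err @ P_model.T / len(targets)

test = rng.uniform(-3, 3, (5, 3))
print(np.round(printer(test @ C) - test, 3))       # should be close to zero
```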

  6. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  7. a Heterosynaptic Learning Rule for Neural Networks

    Science.gov (United States)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects further remote synapses of the pre- and postsynaptic neuron. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.

  8. Fuzzy logic and neural network technologies

    Science.gov (United States)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  9. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.
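
    A minimal sketch of the general idea follows: a back-propagation network is trained on rule-derived feature vectors and then classifies observed packet features. The feature layout, data values, and the use of scikit-learn are assumptions for illustration, not the system described in this record.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical rule-derived ICMP features: [type, code, payload_length, packet_rate]
    X_rules = np.array([[8, 0, 56, 1], [0, 0, 56, 1], [8, 0, 1400, 200], [13, 0, 20, 50]], dtype=float)
    y_rules = np.array([0, 0, 1, 1])            # 0 = benign, 1 = matches an intrusion rule

    # Error back-propagation network trained on the rule knowledge set
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X_rules, y_rules)

    # Observed packet details are then classified by the trained network
    packet = np.array([[8, 0, 1300, 180]], dtype=float)
    print(clf.predict(packet))                  # class 1 would raise an anomaly alert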

  10. The Stellar parametrization using Artificial Neural Network

    CERN Document Server

    Giridhar, Sunetra; Kunder, Andrea; Muneer, S; Kumar, G Selva

    2012-01-01

    An update on recent methods for automated stellar parametrization is given. We present preliminary results of the ongoing program for rapid parametrization of field stars using medium resolution spectra obtained using Vainu Bappu Telescope at VBO, Kavalur, India. We have used Artificial Neural Network for estimating temperature, gravity, metallicity and absolute magnitude of the field stars. The network for each parameter is trained independently using a large number of calibrating stars. The trained network is used for estimating atmospheric parameters of unexplored field stars.

  11. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of a signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists of minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents a mathematical proof of the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation, thus making the algorithm suitable for real-time implementations.
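
    The period-estimation step can be illustrated with a single "neuron" whose transfer function is a sine, fitted by gradient descent on the mean-square error. The toy signal, initial values, and learning rate below are assumed for illustration, and convergence depends on the initial frequency guess; this is not the authors' exact algorithm.

    import numpy as np

    # Toy periodic signal (assumed example); the true angular frequency is 2.0
    t = np.linspace(0, 10, 500)
    y = 1.5 * np.sin(2.0 * t + 0.3) + 0.05 * np.random.randn(t.size)

    # Single neuron with a sine transfer function: y_hat = A * sin(w * t + phi)
    A, w, phi = 1.0, 1.8, 0.0                  # initial guesses
    lr = 1e-3
    for _ in range(20000):
        e = A * np.sin(w * t + phi) - y        # residual
        # gradients of the mean-square error with respect to A, w and phi
        dA = 2 * np.mean(e * np.sin(w * t + phi))
        dw = 2 * np.mean(e * A * np.cos(w * t + phi) * t)
        dphi = 2 * np.mean(e * A * np.cos(w * t + phi))
        A, w, phi = A - lr * dA, w - lr * dw, phi - lr * dphi

    period = 2 * np.pi / w                     # the estimated period feeds the second network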

  12. Distribution network planning algorithm based on Hopfield neural network

    Institute of Scientific and Technical Information of China (English)

    GAO Wei-xin; LUO Xian-jue

    2005-01-01

    This paper presents a new algorithm based on the Hopfield neural network to find the optimal solution for an electric distribution network. This algorithm transforms the distribution power network-planning problem into a directed graph-planning problem. The Hopfield neural network is designed to decide the in-degree of each node and is applied in combination with an energy function. The new algorithm does not need to encode city streets or normalize data, so the program is easier to implement. A case study applying the method to a district of 29 streets showed that an optimal solution for the planning of such a power system could be obtained in only 26 iterations. The energy function and algorithm developed in this work have the following advantages over many existing algorithms for electric distribution network planning: fast convergence and no need to encode all possible lines.
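
    For illustration only, the sketch below shows the generic Hopfield mechanism of asynchronously updating binary units so that a quadratic energy function decreases; the specific energy encoding the in-degree constraints of the planning problem is not reproduced here, and W and b are assumed inputs.

    import numpy as np

    def hopfield_minimise(W, b, n_iters=5000, seed=0):
        """Asynchronous 0/1 Hopfield updates that descend the energy
        E(s) = -0.5 * s @ W @ s - b @ s, for symmetric W with zero diagonal."""
        rng = np.random.default_rng(seed)
        n = len(b)
        s = rng.integers(0, 2, n).astype(float)          # random initial state
        for _ in range(n_iters):
            i = rng.integers(n)                          # pick one unit at random
            s[i] = 1.0 if W[i] @ s + b[i] > 0 else 0.0   # update it to lower the energy
        return s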

  13. Inference and contradictory analysis for binary neural networks

    Institute of Scientific and Technical Information of China (English)

    郭宝龙; 郭雷

    1996-01-01

    A weak-inference theory and a contradictory analysis for binary neural networks (BNNs) are presented. The analysis indicates that the essential reason why a neural network changes its states is the existence of superior contradiction inside the network, and that the process by which a neural network seeks a solution corresponds to eliminating the superior contradiction. Different from general constraint satisfaction networks, the solutions found by BNNs may contain inferior contradiction but not superior contradiction.

  14. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of the basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks are discussed in detail through illustrative examples, methods and generic applications. Extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank.

  15. Cancer classification based on gene expression using neural networks.

    Science.gov (United States)

    Hu, H P; Niu, Z J; Bai, Y P; Tan, X H

    2015-12-21

    Based on gene expression, we have classified 53 colon cancer patients with UICC II into two groups: relapse and no relapse. Samples were taken from each patient, and gene information was extracted. Of the 53 samples examined, 500 genes were considered proper through analyses by S-Kohonen, BP, and SVM neural networks. Classification accuracy obtained by S-Kohonen neural network reaches 91%, which was more accurate than classification by BP and SVM neural networks. The results show that S-Kohonen neural network is more plausible for classification and has a certain feasibility and validity as compared with BP and SVM neural networks.

  16. Membership generation using multilayer neural network

    Science.gov (United States)

    Kim, Jaeseok

    1992-01-01

    There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
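
    A minimal sketch of the idea, assuming PyTorch and toy data: a multilayer network with sigmoid output units is trained as a classifier, and after training its output activations are read directly as class membership values.

    import torch
    import torch.nn as nn

    # Toy two-dimensional feature vectors and three classes (assumed example data)
    X = torch.rand(300, 2)
    labels = (X[:, 0] * 3).long().clamp(max=2)
    targets = nn.functional.one_hot(labels, 3).float()

    # Multilayer network whose output layer uses sigmoid activations
    net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 3), nn.Sigmoid())
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy(net(X), targets)
        loss.backward()
        opt.step()

    # After training, the sigmoid outputs are read as class membership values in [0, 1]
    memberships = net(torch.tensor([[0.4, 0.7]]))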

  17. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed eSeyed-Allaei

    2015-03-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, that 20% of the neurons are inhibitory, and that 80% are excitatory. These common values are based on experiments and observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then I try to figure out what makes the common values desirable to nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks of best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks with different parameters. This phase diagram gives insight into the space of parameters: the excitatory-to-inhibitory ratio, the sparseness of connections, and the synaptic weights. It may serve as a guideline for deciding on the values of parameters in a simulation of a spiking neural network.

  18. Clustering in mobile ad hoc network based on neural network

    Institute of Scientific and Technical Information of China (English)

    CHEN Ai-bin; CAI Zi-xing; HU De-wen

    2006-01-01

    An on-demand distributed clustering algorithm based on a neural network was proposed. The system parameters and the combined weight for each node were computed, and cluster-heads were chosen using the weighted clustering algorithm; then a training set was created and a neural network was trained. In this algorithm, several system parameters were taken into account, such as the ideal node-degree, the transmission power, the mobility and the battery power of the nodes. The algorithm can be used directly to test whether a node is a cluster-head or not. Moreover, cluster re-creation can be sped up.

  19. Computational capabilities of recurrent NARX neural networks.

    Science.gov (United States)

    Siegelmann, H T; Horne, B G; Giles, C L

    1997-01-01

    Recently, fully connected recurrent neural networks have been proven to be computationally rich, at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t) = Psi(u(t-n_u), ..., u(t-1), u(t), y(t-n_y), ..., y(t-1)), where u(t) and y(t) represent the input and output of the network at time t, n_u and n_y are the input and output orders, and the function Psi is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use NARX models, rather than conventional recurrent networks, without any computational loss even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power. PMID:18255858
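
    The NARX formulation above can be sketched by building the tapped-delay regressor [u(t-n_u), ..., u(t), y(t-n_y), ..., y(t-1)] and fitting a multilayer perceptron to predict y(t); the toy system and the use of scikit-learn below are assumptions for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def narx_regressors(u, y, n_u=2, n_y=2):
        """Build regressors [u(t-n_u), ..., u(t), y(t-n_y), ..., y(t-1)] and targets y(t)."""
        X, T = [], []
        for t in range(max(n_u, n_y), len(y)):
            X.append(np.concatenate([u[t - n_u:t + 1], y[t - n_y:t]]))
            T.append(y[t])
        return np.array(X), np.array(T)

    # Toy input/output sequence from an assumed nonlinear system
    rng = np.random.default_rng(0)
    u = rng.uniform(-1, 1, 500)
    y = np.zeros(500)
    for t in range(2, 500):
        y[t] = 0.6 * y[t - 1] - 0.1 * y[t - 2] + np.tanh(u[t - 1])

    X, T = narx_regressors(u, y)
    psi = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0).fit(X, T)  # the mapping Psi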

  20. Mechanical stress in abdominal aortic aneurysms using artificial neural networks

    OpenAIRE

    Soudah Prieto, Eduardo; Rodriguez, Jose; López González, Roberto

    2015-01-01

    The combination of numerical modeling and artificial intelligence (AI) in bioengineering processes is a promising pathway for the further development of the bioengineering sciences. The objective of this work is to use Artificial Neural Networks (ANN) to reduce the long computational times needed in the analysis of shear stress in the Abdominal Aortic Aneurysm (AAA) by finite element methods (FEM). For that purpose two different neural networks are created. The first neural network (Mesh Neural Netw...

  1. The EEG signal prediction by using neural network

    OpenAIRE

    Babušiak, B.; Mohylová, J.

    2008-01-01

    The neural network is a computational model based on the abstraction of features of biological neural systems. Neural networks have many uses in technical fields. They have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents or autonomous robots. This paper describes the use of neural networks for ECG signal prediction. The ECG signal prediction can be used for automated detection of irregular heart...

  2. The EEG Signal Prediction by Using Neural Network

    OpenAIRE

    Branko Babusiak; Jitka Mohylova

    2008-01-01

    The neural network is a computational model based on the abstraction of features of biological neural systems. Neural networks have many uses in technical fields. They have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents or autonomous robots. This paper describes the use of neural networks for ECG signal prediction. The ECG signal prediction can be used for automated detection of irregular heartbeat – extr...

  3. Evolving Chart Pattern Sensitive Neural Network Based Forex Trading Agents

    CERN Document Server

    Sher, Gene I

    2011-01-01

    Though machine learning has been applied to the foreign exchange market for quite some time now, and neural networks have been shown to yield good results, in modern approaches neural network systems are optimized through traditional methods, and their input signals are vectors containing prices and other indicator elements. The aim of this paper is twofold: the presentation and testing of the application of topology and weight evolving artificial neural network (TWEANN) systems to automated currency trading, and the use of chart images as input to geometrical-regularity-aware, indirectly encoded neural network systems. This paper presents the benchmark results of neural network based automated currency trading systems evolved using TWEANNs, and compares the generalization capabilities of the directly encoded neural networks which use the standard price vector inputs, and the indirectly (substrate) encoded neural networks which use chart images as input. The TWEANN algorithm used to evolve these currency t...

  4. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for the global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameter. Moreover, the neural network has lower model complexity: the number of state variables is equal to the dimension of the optimization problem. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.

  5. Convolutional Neural Network Based dem Super Resolution

    Science.gov (United States)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples. There, a nonlocal algorithm was introduced for this purpose, and many experiments showed that the strategy is feasible. In that publication, the learning examples were defined as the partial original DEM and its related high-resolution measurements, since this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain. Yet this may cause problems of incompatibility and lack of robustness. To overcome them, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model is adopted. The first layer is used to detect features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, some learning DEMs are used to train it. Specifically, the designed network is optimized by minimizing the error between the output and its expected high resolution DEM. In practical applications, a testing DEM is input to the convolutional neural network and a super-resolution result is obtained. Many experiments show that the CNN based method can obtain better reconstructions than many classic interpolation methods.
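
    A minimal sketch of the described three-layer design, assuming PyTorch, single-channel DEM tiles, and hypothetical layer sizes; the actual architecture and training details of the paper may differ.

    import torch
    import torch.nn as nn

    # Three-layer network following the described design: feature detection,
    # feature compression/mapping, and reconstruction of the high-resolution DEM.
    class DEMSuperResolution(nn.Module):
        def __init__(self):
            super().__init__()
            self.detect = nn.Conv2d(1, 64, kernel_size=9, padding=4)        # detect local features
            self.compress = nn.Conv2d(64, 32, kernel_size=1)                # integrate/compress features
            self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)   # map back to a DEM

        def forward(self, x):       # x: upsampled low-resolution DEM, shape (N, 1, H, W)
            x = torch.relu(self.detect(x))
            x = torch.relu(self.compress(x))
            return self.reconstruct(x)

    model = DEMSuperResolution()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    def train_step(low_res_up, high_res):
        """Minimise the error between the network output and the expected high-resolution DEM."""
        opt.zero_grad()
        loss = loss_fn(model(low_res_up), high_res)
        loss.backward()
        opt.step()
        return loss.item()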

  6. Character Recognition Using Genetically Trained Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the
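
    The sketch below illustrates the general approach of evolving the weights of a small feed-forward network for 8 x 8 bitmap inputs with a simple genetic algorithm (selection plus Gaussian mutation); the population size, mutation scheme, and network dimensions are assumptions for illustration, not the Neural Network Designer implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID, N_OUT = 64, 10, 5            # 8x8 bitmap inputs, five character classes

    def unpack(genome):
        """Split a flat genome into the two weight matrices of the feed-forward net."""
        w1 = genome[:N_IN * N_HID].reshape(N_IN, N_HID)
        w2 = genome[N_IN * N_HID:].reshape(N_HID, N_OUT)
        return w1, w2

    def forward(genome, X):
        w1, w2 = unpack(genome)
        return np.tanh(np.tanh(X @ w1) @ w2)

    def fitness(genome, X, y):
        """Fraction of characters recognised correctly."""
        return (forward(genome, X).argmax(axis=1) == y).mean()

    def evolve(X, y, pop_size=50, generations=200, sigma=0.1):
        genome_len = N_IN * N_HID + N_HID * N_OUT
        pop = rng.normal(0, 0.5, (pop_size, genome_len))
        for _ in range(generations):
            scores = np.array([fitness(g, X, y) for g in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the fitter half
            children = parents + rng.normal(0, sigma, parents.shape)  # mutate to refill the population
            pop = np.vstack([parents, children])
        return max(pop, key=lambda g: fitness(g, X, y))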

  7. Classification of Epileptic EEG Signals using Time-Delay Neural Networks and Probabilistic Neural Networks

    Directory of Open Access Journals (Sweden)

    Ateke Goshvarpour

    2013-05-01

    The aim of this paper is to investigate the performance of time delay neural networks (TDNNs) and probabilistic neural networks (PNNs) trained with nonlinear features (Lyapunov exponents and entropy) on electroencephalogram (EEG) signals in a specific pathological state. For this purpose, two types of EEG signals (normal and partial epilepsy) are analyzed. To evaluate the performance of the classifiers, the mean square error (MSE) and elapsed time of each classifier are examined. The results show that a TDNN with 12 neurons in the hidden layer results in a lower MSE with a training time of about 19.69 seconds. According to the results, when the sigma values are lower than 0.56, the best performance of the proposed probabilistic neural network structure is achieved. The results of the present study show that applying nonlinear features to train these networks can serve as a useful tool in classifying EEG signals.

  8. Neural network correction of astrometric chromaticity

    CERN Document Server

    Gai, M

    2005-01-01

    In this paper we deal with the problem of chromaticity, i.e. the apparent position variation of stellar images with their spectral distribution, using neural networks to analyse and process astronomical images. The goal is to remove this relevant source of systematic error in the data reduction of high precision astrometric experiments, like Gaia. This task can be accomplished thanks to the capability of neural networks to solve a nonlinear approximation problem, i.e. to construct a hypersurface that approximates a given set of scattered data couples. Images are encoded by associating each of them with conveniently chosen moments, evaluated along the y axis. The technique proposed, in the current framework, reduces the initial chromaticity of a few milliarcseconds to values of a few microarcseconds.

  9. Automatic breast density classification using neural network

    Science.gov (United States)

    Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.

    2015-12-01

    According to studies, the risk of breast cancer is directly associated with breast density. Much research has been done on the automatic diagnosis of breast density using mammography. In the current study, artifacts of mammograms are removed by using image processing techniques; by using the method presented in this study, which includes detecting points on the pectoral muscle edges and estimating them using regression techniques, the pectoral muscle is detected with high accuracy in the mammography and the breast tissue is fully automatically extracted. In order to classify mammography images into three categories (Fatty, Glandular, Dense), a feature based on the difference of gray levels of hard tissue and soft tissue in mammograms has been used in addition to the statistical features, together with a neural network classifier with a hidden layer. The image database used in this research is the mini-MIAS database, and the maximum accuracy of the system in classifying images has been reported as 97.66% with 8 hidden layers in the neural network.

  10. Web Page Categorization Using Artificial Neural Networks

    CERN Document Server

    Kamruzzaman, S M

    2010-01-01

    Web page categorization is one of the challenging tasks in the world of ever increasing web technologies. There are many ways of categorizing web pages based on different approaches and features. This paper proposes a new dimension in the categorization of web pages using an artificial neural network (ANN) through extracting the features automatically. Here eight major categories of web pages have been selected for categorization; these are business & economy, education, government, entertainment, sports, news & media, job search, and science. The whole process of the proposed system is done in three successive stages. In the first stage, the features are automatically extracted through analyzing the source of the web pages. The second stage includes fixing the input values of the neural network; all the values remain between 0 and 1. The variations in those values affect the output. Finally the third stage determines the class of a certain web page out of eight predefined classes. This stage i...

  11. Artificial Neural Network for Displacement Vectors Determination

    Directory of Open Access Journals (Sweden)

    P. Bohmann

    1997-09-01

    An artificial neural network (NN) for displacement vector (DV) determination is presented in this paper. DVs are computed in areas which are essential for image analysis and computer vision, areas containing edges, lines, corners, etc. These special features are found by edge operators followed by filtering. The filtering is performed by a threshold function. The next step is DV computation by a 2D Hamming artificial neural network. The method of DV computation is based on full-search block matching algorithms. The pre-processing (edge finding) is the reason why the correlation function is very simple, the process of DV determination needs less computation, and the structure of the NN is simpler.

  12. Automatic breast density classification using neural network

    International Nuclear Information System (INIS)

    According to studies, the risk of breast cancer is directly associated with breast density. Much research has been done on the automatic diagnosis of breast density using mammography. In the current study, artifacts of mammograms are removed by using image processing techniques; by using the method presented in this study, which includes detecting points on the pectoral muscle edges and estimating them using regression techniques, the pectoral muscle is detected with high accuracy in the mammography and the breast tissue is fully automatically extracted. In order to classify mammography images into three categories (Fatty, Glandular, Dense), a feature based on the difference of gray levels of hard tissue and soft tissue in mammograms has been used in addition to the statistical features, together with a neural network classifier with a hidden layer. The image database used in this research is the mini-MIAS database, and the maximum accuracy of the system in classifying images has been reported as 97.66% with 8 hidden layers in the neural network

  13. Multi-Dimensional Recurrent Neural Networks

    CERN Document Server

    Graves, Alex; Schmidhuber, Juergen

    2007-01-01

    Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping, and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.

  14. Face Recognition using Eigenfaces and Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohamed Rizon

    2006-01-01

    In this study, we develop a computational model to identify the face of an unknown person by applying eigenfaces. Eigenfaces are applied to extract the basic features of human face images. The eigenfaces are then projected onto human faces to identify unique feature vectors. These significant feature vectors can be used to identify an unknown face by using a backpropagation neural network that utilizes the Euclidean distance for classification and recognition. The ORL database used for this investigation consists of 400 face images of 40 people, which were used for the learning. The eigenface computation, including an implementation of Jacobi's method for eigenvalues and eigenvectors, has been performed. The classification and recognition using the backpropagation neural network showed impressive positive results in classifying face images.
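
    A minimal sketch of the eigenface feature-extraction step, using an SVD in place of the Jacobi eigenvalue method mentioned in the record; the resulting feature vectors would then be fed to a backpropagation classifier, which is not reproduced here.

    import numpy as np

    def compute_eigenfaces(faces, k=40):
        """faces: (n_images, n_pixels) matrix of flattened grey-level face images."""
        mean_face = faces.mean(axis=0)
        centred = faces - mean_face
        # Principal directions of the centred data; SVD stands in for Jacobi's method
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return mean_face, vt[:k]                   # mean face and the top-k eigenfaces

    def project(face, mean_face, eigenfaces):
        """Feature vector obtained by projecting one face onto the eigenfaces."""
        return eigenfaces @ (face - mean_face)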

  15. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary. The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  16. Neural network prediction of solar cycle 24

    Institute of Scientific and Technical Information of China (English)

    A. Ajabshirizadeh; N. Masoumzadeh Jouzdani; Shahram Abbassi

    2011-01-01

    The ability to predict the future behavior of solar activity has become extremely important due to its effect on the environment near the Earth. Predictions of both the amplitude and timing of the next solar cycle will assist in estimating the various consequences of space weather. The level of solar activity is usually expressed by the international sunspot number (Rz). Several prediction techniques have been applied and have achieved varying degrees of success in the domain of solar activity prediction. We predict the solar index (Rz) in solar cycle 24 by using a neural network method. The neural network technique is used to analyze the time series of solar activity. According to our predictions of yearly sunspot number, the maximum of cycle 24 will occur in the year 2013 and will have an annual mean sunspot number of 65. Finally, we discuss our results in order to compare them with other suggested predictions.

  17. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  18. Improving Recurrent Neural Networks For Sequence Labelling

    OpenAIRE

    Dinarelli, Marco; Tellier, Isabelle

    2016-01-01

    In this paper we study different types of Recurrent Neural Networks (RNN) for sequence labeling tasks. We propose two new variants of RNNs integrating improvements for sequence labeling, and we compare them to the more traditional Elman and Jordan RNNs. We compare all models, either traditional or new, on four distinct tasks of sequence labeling: two on Spoken Language Understanding (ATIS and MEDIA); and two of POS tagging for the French Treebank (FTB) and the Penn Treebank (PTB) corpora. The...

  19. Deep convolutional neural networks for pedestrian detection

    OpenAIRE

    Tomè, Denis; Monti, Federico; Baroffio, Luca; Bondi, Luca; Tagliasacchi, Marco; Tubaro, Stefano

    2015-01-01

    Pedestrian detection is a popular research topic due to its paramount importance for a number of applications, especially in the fields of automotive, surveillance and robotics. Despite the significant improvements, pedestrian detection is still an open challenge that calls for more and more accurate algorithms. In the last few years, deep learning and in particular convolutional neural networks emerged as the state of the art in terms of accuracy for a number of computer vision tasks such as...

  20. Diagnosing process faults using neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Buescher, K.L.; Jones, R.D.; Messina, M.J.

    1993-11-01

    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.

  1. Differential Recurrent Neural Networks for Action Recognition

    OpenAIRE

    Veeriah, Vivek; Zhuang, Naifan; Qi, Guo-Jun

    2015-01-01

    The long short-term memory (LSTM) neural network is capable of processing complex sequential information since it utilizes special gating schemes for learning representations from long input sequences. It has the potential to model any sequential time-series data, where the current hidden state has to be considered in the context of the past hidden states. This property makes LSTM an ideal choice to learn the complex dynamics of various actions. Unfortunately, the conventional LSTMs do not co...

  2. Neural network with dynamically adaptable neurons

    Science.gov (United States)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse IO elements. In this manner, training time is decreased by as much as three orders of magnitude.

  3. Pedestrian Detection Using Convolutional Neural Networks

    OpenAIRE

    Molin, David

    2015-01-01

    Pedestrian detection is an important field with applications in active safety systems for cars as well as autonomous driving. Since autonomous driving and active safety are becoming technically feasible now, the interest in these applications has dramatically increased. The aim of this thesis is to investigate convolutional neural networks (CNN) for pedestrian detection. The reason for this is that CNN have recently been successfully applied to several different computer vision problems. The ma...

  4. Analysis of SSR Using Artificial Neural Networks

    OpenAIRE

    Nagabhushana, BS; Chandrasekharaiah, HS

    1996-01-01

    Artificial neural networks (ANNs) are being advantageously applied to power system analysis problems. They possess the ability to establish complicated input-output mappings through a learning process, without any explicit programming. In this paper, an ANN based method for subsynchronous resonance (SSR) analysis is presented. The designed ANN outputs a measure of the possibility of the occurrence of SSR and is fully trained to accommodate the variations of power system parameters over the en...

  5. Practical introduction to artificial neural networks

    OpenAIRE

    Bougrain, Laurent

    2004-01-01

    What are they? What are they for? How does one use them? This article aims to answer these three fundamental questions about artificial neural networks that every engineer interested in this machine learning technique asks themselves. We present the most useful architectures. We explain how to train them using supervised or unsupervised learning depending on the task we want to perform: regression, discrimination or clustering. What kind of data can one use and how should it be prepared? Finally, w...

  6. Context dependent learning in neural networks

    OpenAIRE

    Spreeuwers, L.J.; Zwaag, van der, Berend Jan; Heijden, van der, M.

    1995-01-01

    In this paper an extension to the standard error backpropagation learning rule for multi-layer feed forward neural networks is proposed that enables them to be trained for context dependent information. The context dependent learning is realised by using a different error function (called Average Risk: AVR) instead of the sum of squared errors (SQE) normally used in error backpropagation and by adapting the update rules. It is shown that for applications where this context dependent informa...

  7. Deep Learning in Neural Networks: An Overview

    OpenAIRE

    Schmidhuber, Juergen

    2014-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpr...

  8. Neural Networks with Complex and Quaternion Inputs

    OpenAIRE

    Rishiyur, Adityan

    2006-01-01

    This article investigates Kak neural networks, which can be instantaneously trained, for complex and quaternion inputs. The performance of the basic algorithm has been analyzed and shown how it provides a plausible model of human perception and understanding of images. The motivation for studying quaternion inputs is their use in representing spatial rotations that find applications in computer graphics, robotics, global navigation, computer vision and the spatial orientation of instruments. ...

  9. Adaptive Filtering Using Recurrent Neural Networks

    Science.gov (United States)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum- variance filters. In that they do not require statistical models of noise, the neural- network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  10. Turing Computation with Recurrent Artificial Neural Networks

    OpenAIRE

    Carmantini, Giovanni S; Graben, Peter beim; Desroches, Mathieu; Rodrigues, Serafim

    2015-01-01

    We improve the results by Siegelmann & Sontag (1995) by providing a novel and parsimonious constructive mapping between Turing Machines and Recurrent Artificial Neural Networks, based on recent developments of Nonlinear Dynamical Automata. The architecture of the resulting R-ANNs is simple and elegant, stemming from its transparent relation with the underlying NDAs. These characteristics yield promise for developments in machine learning methods and symbolic computation with continuous time d...

  11. Web Page Categorization Using Artificial Neural Networks

    OpenAIRE

    S. M. Kamruzzaman

    2010-01-01

    Web page categorization is one of the challenging tasks in the world of ever increasing web technologies. There are many ways of categorizing web pages based on different approaches and features. This paper proposes a new dimension in the categorization of web pages using an artificial neural network (ANN) through extracting the features automatically. Here eight major categories of web pages have been selected for categorization; these are business & economy, education, government, en...

  12. Artificial Neural Networks in Stellar Astronomy

    Directory of Open Access Journals (Sweden)

    R. K. Gulati

    2001-01-01

    The next generation of optical spectroscopic surveys, such as the Sloan Digital Sky Survey and the 2 degree field survey, will provide large stellar databases. New tools will be required to extract useful information from these. We show the applications of artificial neural networks to stellar databases. In another application of this method, we predict spectral and luminosity classes from the catalog of spectral indices. We assess the importance of such methods for stellar population studies.

  13. Prediction of metal corrosion by neural networks

    OpenAIRE

    Jančíková, Zora; Zimný, Ondřej; Koštial, Pavol

    2013-01-01

    The contribution deals with the use of artificial neural networks for the prediction of steel atmospheric corrosion. Atmospheric corrosion of metal materials exposed under atmospheric conditions depends on various factors such as local temperature, relative humidity, amount of precipitation, pH of rainfall, concentration of the main pollutants and exposition time. As these factors are very complex, exact relations for the mathematical description of atmospheric corrosion of various metals are...

  14. Prediction of metal corrosion by neural networks

    OpenAIRE

    Jančíková, Z.; Zimný, O.; Koštial, P.

    2013-01-01

    The contribution deals with the use of artificial neural networks for the prediction of steel atmospheric corrosion. Atmospheric corrosion of metal materials exposed under atmospheric conditions depends on various factors such as local temperature, relative humidity, amount of precipitation, pH of rainfall, concentration of the main pollutants and exposition time. As these factors are very complex, exact relations for the mathematical description of atmospheric corrosion of various metals are not known so...

  15. POWER SCALABLE IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORKS

    OpenAIRE

    Modi, Sankalp; Wilson, Peter; Brown, Andrew

    2005-01-01

    As the use of Artificial Neural Network(ANN) in mobile embedded devices gets more pervasive, power consumption of ANN hardware is becoming a major limiting factor. Although considerable research efforts are now directed towards low-power implementations of ANN, the issue of dynamic power scalability of the implemented design has been largely overlooked. In this paper, we discuss the motivation and basic principles for implementing power scaling in ANN Hardware. With the help of a simple examp...

  16. Neural Networks in Chemical Reaction Dynamics

    CERN Document Server

    Raff, Lionel; Hagan, Martin

    2011-01-01

    This monograph presents recent advances in neural network (NN) approaches and applications to chemical reaction dynamics. Topics covered include: (i) the development of ab initio potential-energy surfaces (PES) for complex multichannel systems using modified novelty sampling and feedforward NNs; (ii) methods for sampling the configuration space of critical importance, such as trajectory and novelty sampling methods and gradient fitting methods; (iii) parametrization of interatomic potential functions using a genetic algorithm accelerated with a NN; (iv) parametrization of analytic interatomic

  17. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
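
    A minimal sketch of the idea on a toy problem with a known solution (y' = -y), assuming scikit-learn: a network is trained to predict the local error of a fixed-step RK4 integrator and the prediction is added back as a correction. The NASA neural network programs and the molecular dynamics model of the record are not reproduced here.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Model problem with a known solution: y' = -y, so y(t + h) = y(t) * exp(-h)
    def f(y):
        return -y

    def rk4_step(y, h):
        k1 = f(y); k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2); k4 = f(y + h * k3)
        return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Training data: (state, step size) -> local error of the RK4 step
    rng = np.random.default_rng(0)
    states = rng.uniform(0.1, 2.0, 2000)
    steps = rng.uniform(0.05, 0.5, 2000)
    exact = states * np.exp(-steps)                      # exact solution one step ahead
    errors = exact - np.array([rk4_step(y, h) for y, h in zip(states, steps)])

    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
    net.fit(np.column_stack([states, steps]), errors)

    # Corrected integration step: RK4 result plus the learned error term
    def corrected_step(y, h):
        return rk4_step(y, h) + net.predict([[y, h]])[0]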

  18. A Bionic Neural Network for Fish-Robot Locomotion

    Institute of Scientific and Technical Information of China (English)

    Dai-bing Zhang; De-wen Hu; Lin-cheng Shen; Hai-bin Xie

    2006-01-01

    A bionic neural network for fish-robot locomotion is presented. The bionic neural network, inspired by the fish neural network, consists of one high level controller and one chain of central pattern generators (CPGs). Each CPG contains a nonlinear neural Zhang oscillator which shows properties similar to the sine-cosine model. Simulation results show that the bionic neural network presents a good performance in controlling the fish-robot to execute various motions such as startup, stop, forward swimming, backward swimming, turn right and turn left.

  19. Segmentation of magnetic resonance images using an artificial neural network.

    OpenAIRE

    Piraino, D. W.; Amartur, S. C.; Richmond, B. J.; Schils, J. P.; Thome, J. M.; Weber, P. B.

    1991-01-01

    Signal intensities from intermediate and T2 weighted spin echo images of the brain were used as inputs into an artificial neural network (ANN). The signal intensities were used to train the network to recognize anatomically-important segments. The ANN was a self-organizing map (SOM) neural network which develops a continuous topographical map of the signal intensities within the two images. The neural network segmented images demonstrated good correlation with white matter, gray matter, and c...

  20. Comparison of Training Methods for Deep Neural Networks

    OpenAIRE

    Glauner, Patrick O.

    2015-01-01

    This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, in...

  1. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  2. Multilingual Text Detection with Nonlinear Neural Network

    Directory of Open Access Journals (Sweden)

    Lin Li

    2015-01-01

    Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features and combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of the text features. We also develop a novel nonlinear network, based on the traditional Convolutional Neural Network, that is able to detect multilingual text regions in images. The proposed method is evaluated on standard benchmarks and a multilingual dataset and demonstrates improvement over previous work.

  3. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. PMID:25462637

  4. Neural network method for characterizing video cameras

    Science.gov (United States)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

    This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network with the error back-propagation learning rule for training is used as a nonlinear transformer to model a camera, realizing a mapping from the CIELAB color space to the RGB color space. With a SONY video camera, a D65 illuminant, a Pritchard spectroradiometer, 410 JIS color charts as training data and 36 charts as testing data, results show that the mean error on the training data is 2.9 and that on the testing data is 4.0 in a 256^3 RGB space.

  5. Neural Network Approach for Eye Detection

    CERN Document Server

    Vijayalaxmi,; Sreehari, S

    2012-01-01

    Driving support systems, such as car navigation systems, are becoming common, and they support the driver in several aspects. Non-intrusive methods of detecting fatigue and drowsiness based on eye-blink count and eye-directed instruction control help the driver to prevent collisions caused by drowsy driving. Eye detection and tracking under various conditions such as illumination, background, face alignment and facial expression make the problem complex. A neural network based algorithm is proposed in this paper to detect the eyes efficiently. In the proposed algorithm, the neural network is first trained to reject non-eye regions based on images with features of eyes and images with features of non-eyes, using Gabor filters and Support Vector Machines to reduce the dimension and classify efficiently. In the algorithm, the face is first segmented using the L*a*b color space, then the eyes are detected using HSV and the neural network approach. The algorithm is tested on nearly 100 images of different persons under...

  6. File access prediction using neural networks.

    Science.gov (United States)

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap between the access times of the memory and the disk. To solve this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors based on neural networks that, with proper tuning, significantly improve upon the accuracy, success-per-reference, and effective-success-rate-per-reference. In particular, we verified that incorrect prediction is reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve upon the misprediction rate and effective-success-rate-per-reference obtained with a standard configuration. Simulations on distributed file system (DFS) traces reveal that the exact fit radial basis function (RBF) gives better prediction in high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation outperforms it in systems having good computational capability. Probabilistic and competitive predictors are the most suitable for workstations having limited resources, and the former predictor is more efficient than the latter for servers handling the most system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than those of the simple perceptron, last successor, stable successor, and best k out of m predictors.

  7. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early time is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model with unused data within the range of the input parameters shows that the maximum absolute error for the model is about 20% and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
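
    A minimal sketch of the regression setup, assuming scikit-learn and purely illustrative mix records (the real model was trained on data sets gathered from the literature): inputs are mix proportions, maximum aggregate size and slump, and the output is the 28-day compressive strength.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    # Hypothetical mix records: [cement, water, fine agg., coarse agg., max aggregate size, slump]
    X = np.array([[350, 175, 700, 1100, 20, 75],
                  [400, 160, 650, 1150, 20, 50],
                  [300, 180, 720, 1080, 10, 100],
                  [450, 158, 620, 1180, 25, 40]], dtype=float)
    y = np.array([32.0, 45.0, 27.5, 52.0])     # 28-day compressive strength in MPa (illustrative values)

    scaler = StandardScaler().fit(X)
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    model.fit(scaler.transform(X), y)           # back-propagation training on the scaled mix data

    # Estimate strength for a new mix before the 28-day test result is available
    new_mix = scaler.transform([[380, 170, 680, 1120, 20, 60]])
    print(model.predict(new_mix))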

  8. Artificial neural network applications in ionospheric studies

    Directory of Open Access Journals (Sweden)

    L. R. Cander

    1998-06-01

    The ionosphere of Earth exhibits considerable spatial changes and has large temporal variability on various timescales related to the mechanisms of creation, decay and transport of space ionospheric plasma. Many techniques for modelling electron density profiles through the entire ionosphere have been developed in order to solve the "age-old problem" of ionospheric physics which has not yet been fully solved. A new way to address this problem is by applying artificial intelligence methodologies to the current large amounts of solar-terrestrial and ionospheric data. It is the aim of this paper to show, through the most recent examples, that the modern development of numerical models for ionospheric monthly median long-term prediction and daily hourly short-term forecasting may proceed successfully by applying artificial neural networks. The performance of these techniques is illustrated with different artificial neural networks developed to model and predict the temporal and spatial variations of the ionospheric critical frequency f0F2 and the Total Electron Content (TEC). Comparisons between results obtained by the proposed approaches and measured f0F2 and TEC data provide prospects for future applications of artificial neural networks in ionospheric studies.

  9. Clustering-based selective neural network ensemble

    Institute of Scientific and Technical Information of China (English)

    FU Qiang; HU Shang-xu; ZHAO Sheng-ying

    2005-01-01

    An effective ensemble should consist of a set of networks that are both accurate and diverse. We propose a novel clustering-based selective algorithm for constructing a neural network ensemble, where clustering technology is used to classify trained networks according to similarity and to optimally select the most accurate individual network from each cluster to make up the ensemble. Empirical studies on regression with four typical datasets showed that this approach yields a significantly smaller ensemble achieving better performance than traditional ones such as Bagging and Boosting. The bias-variance decomposition of the predictive error shows that the success of the proposed approach may lie in properly tuning the bias/variance trade-off to reduce the prediction error (the sum of bias^2 and variance).
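
    A minimal sketch of the clustering-based selection idea, assuming scikit-learn: a pool of networks is trained, the networks are clustered by the similarity of their validation predictions, the most accurate member of each cluster is kept, and the selected networks are averaged. The pool size, clustering method, and data splits are assumptions, not the authors' exact procedure.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.cluster import KMeans

    def selective_ensemble(X_train, y_train, X_val, y_val, n_nets=20, n_clusters=5):
        # Train a pool of networks; diversity here comes only from random initialisation
        nets = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=i).fit(X_train, y_train)
                for i in range(n_nets)]
        # Cluster the networks by the similarity of their validation predictions
        preds = np.array([net.predict(X_val) for net in nets])
        clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(preds)
        # From each cluster keep the most accurate network
        chosen = []
        for c in range(n_clusters):
            members = [i for i in range(n_nets) if clusters[i] == c]
            errors = [np.mean((preds[i] - y_val) ** 2) for i in members]
            chosen.append(nets[members[int(np.argmin(errors))]])
        # The ensemble output is the average of the selected networks
        return lambda X: np.mean([net.predict(X) for net in chosen], axis=0)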

  10. A new approach to artificial neural networks.

    Science.gov (United States)

    Baptista Filho, B D; Cabral, E L; Soares, A J

    1998-01-01

    A novel approach to artificial neural networks is presented. The philosophy of this approach is based on two aspects: the design of task-specific networks, and a new neuron model with multiple synapses. The synapses' connective strengths are modified through selective and cumulative processes conducted by axo-axonic connections from a feedforward circuit. This new concept was applied to the position control of a planar two-link manipulator exhibiting excellent results on learning capability and generalization when compared with a conventional feedforward network. In the present paper, the example shows only a network developed from a neuronal reflexive circuit with some useful artifices, nevertheless without the intention of covering all possibilities devised.

  11. Microscopic instability in recurrent neural networks

    Science.gov (United States)

    Yamanaka, Yuzuru; Amari, Shun-ichi; Shinomoto, Shigeru

    2015-03-01

    In a manner similar to the molecular chaos that underlies the stable thermodynamics of gases, a neuronal system may exhibit microscopic instability in individual neuronal dynamics while a macroscopic order of the entire population possibly remains stable. In this study, we analyze the microscopic stability of a network of neurons whose macroscopic activity obeys stable dynamics, expressing either a monostable, bistable, or periodic state. We reveal that the network exhibits a variety of dynamical states of microscopic instability within a given stable macroscopic dynamics. The presence of a variety of dynamical states in such a simple random network implies more abundant microscopic fluctuations in real neural networks, which consist of more complex and hierarchically structured interactions.

  12. Fuzzy Neural Network Based Traffic Prediction and Congestion Control in High-Speed Networks

    Institute of Scientific and Technical Information of China (English)

    费翔; 何小燕; 罗军舟; 吴介一; 顾冠群

    2000-01-01

    Congestion control is one of the key problems in high-speed networks, such as ATM. In this paper, a traffic prediction and preventive congestion control scheme based on a neural network approach is proposed. Traditional predictors using BP neural networks suffer from long convergence times and unsatisfactory error. The fuzzy neural network developed in this paper solves these problems satisfactorily. Simulations compare the no-feedback control scheme, the reactive control scheme, and the neural-network-based control scheme.

  13. PSO optimized Feed Forward Neural Network for offline Signature Classification

    Directory of Open Access Journals (Sweden)

    Pratik R. Hajare

    2015-07-01

    Full Text Available The paper is based on feed-forward neural network (FFNN) optimization by particle swarm optimization (PSO), used to provide initial weights and biases for training the neural network. Once the weights and biases are found using PSO, with the neural network trained for a specified number of epochs, the same weights and biases are used to train the neural network for the classification of benchmark problems. Further, the approach is tested for offline signature classification. A comparison is made between a normal FFNN with random weights and biases and an FFNN with particle-swarm-optimized weights and biases. First, the performance is tested on two benchmark databases for neural networks, the Breast Cancer Database and the Diabetic Database. Results show that the neural network performs better with initial weights and biases obtained by particle swarm optimization. The network converges faster with PSO-obtained initial weights and biases, and the classification accuracy is increased.
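
    A hedged sketch of PSO-based weight initialization is shown below. The swarm parameters, the network size, and the use of scikit-learn's breast cancer data as a stand-in for the Breast Cancer Database are assumptions; a subsequent back-propagation refinement step is only indicated, not implemented.

```python
# Hedged sketch of PSO-initialized feed-forward network training. A small
# numpy network is encoded as a flat parameter vector; PSO searches for a
# good starting point, which would then seed ordinary gradient training.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
n_in, n_hid = X.shape[1], 8
dim = n_in * n_hid + n_hid + n_hid + 1       # W1, b1, W2, b2

def forward(params, X):
    i = 0
    W1 = params[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = params[i:i + n_hid]; i += n_hid
    W2 = params[i:i + n_hid]; i += n_hid
    b2 = params[i]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(params):                          # cross-entropy loss on the data
    p = np.clip(forward(params, X), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Plain global-best PSO over the flattened weight vector.
rng = np.random.default_rng(0)
n_particles, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.normal(0, 0.5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

acc = ((forward(gbest, X) > 0.5) == y).mean()
print(f"accuracy with PSO-found weights (before backprop refinement): {acc:.3f}")
```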

  14. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high-speed performance, but the neural network cannot be used to extract knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model.

  15. Phase Synchronization in Small World Chaotic Neural Networks

    Institute of Scientific and Technical Information of China (English)

    WANG Qing-Yun; LU Qi-Shao

    2005-01-01

    To better understand the collective motion of real neural networks, we investigate collective phase synchronization of small-world chaotic Hindmarsh-Rose (HR) neural networks. By numerical simulations, we conclude that small-world chaotic HR neural networks can achieve collective phase synchronization. Furthermore, it is shown that phase synchronization of small-world chaotic HR neural networks depends on the coupling strength, the connection topology (which is determined by the probability p), and the coupling number. These phenomena are important for understanding the synchronization of real neural networks.

  16. Detection of Wildfires with Artificial Neural Networks

    Science.gov (United States)

    Umphlett, B.; Leeman, J.; Morrissey, M. L.

    2011-12-01

    Currently, fire detection for the National Oceanic and Atmospheric Administration (NOAA) using satellite data is accomplished with algorithms and error checking by human analysts. Artificial neural networks (ANNs) have been shown to be more accurate than algorithms or statistical methods for applications dealing with multiple datasets of complex observed data in the natural sciences. ANNs also deal well with multiple data sources that are not all equally reliable or equally informative to the problem. An ANN was tested to evaluate its accuracy in detecting wildfires utilizing polar orbiter numerical data from the Advanced Very High Resolution Radiometer (AVHRR). Datasets containing locations of known fires were gathered from NOAA's polar orbiting satellites via the Comprehensive Large Array-data Stewardship System (CLASS). The data were then calibrated and navigation-corrected using the Environment for Visualizing Images (ENVI). Fires were located with the aid of shapefiles generated via ArcGIS. Afterwards, several smaller ten-pixel by ten-pixel datasets were created for each fire (using the ENVI-corrected data). Several datasets were created for each fire in order to vary fire position and avoid training the ANN to look only at fires in the center of an image. Datasets containing no fires were also created. A basic pattern recognition neural network was established with the MATLAB neural network toolbox. The datasets were then randomly separated into categories used to train, validate, and test the ANN. To prevent overfitting of the data, the mean squared error (MSE) of the network was monitored and training was stopped when the MSE began to rise. Networks were tested using each channel of the AVHRR data independently, channels 3a and 3b combined, and all six channels. The number of hidden neurons for each input set was also varied between 5 and 350 in steps of 5 neurons. Each configuration was run 10 times, totaling about 4,200 individual network evaluations. Thirty

  17. Neural Network Model of Memory Retrieval.

    Science.gov (United States)

    Recanatesi, Stefano; Katkov, Mikhail; Romani, Sandro; Tsodyks, Misha

    2015-01-01

    Human memory can store large amounts of information. Nevertheless, recalling is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding (1) to single memory representations and (2) to intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall, and it reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013). PMID:26732491
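
    For orientation, the sketch below shows the Hopfield-style associative memory at the core of such retrieval models: Hebbian storage and iterative recall from a noisy cue. The pattern sizes and noise level are assumptions, and the paper's oscillating inhibition and item-to-item transition dynamics are not reproduced.

```python
# Hedged sketch of Hopfield-style associative recall: patterns are stored with
# a Hebbian rule and retrieved by iterating the network from a corrupted cue.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_patterns = 200, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian storage with zeroed self-connections.
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt a stored pattern and let the network clean it up.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=30, replace=False)
cue[flip] *= -1
retrieved = recall(cue)
overlap = (retrieved @ patterns[0]) / n_units
print(f"overlap with stored memory after retrieval: {overlap:.2f}")
```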

  18. Neural Network Model of memory retrieval

    Directory of Open Access Journals (Sweden)

    Stefano eRecanatesi

    2015-12-01

    Full Text Available Human memory can store large amounts of information. Nevertheless, recalling is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding (1) to single memory representations and (2) to intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall and reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).

  19. Stability of discrete Hopfield neural networks with delay

    Institute of Scientific and Technical Information of China (English)

    Ma Runnian; Lei Sheping; Liu Naigong

    2005-01-01

    The discrete Hopfield neural network with delay is an extension of the discrete Hopfield neural network. As is well known, the stability of neural networks is not only the most basic and important problem but also the foundation of the networks' applications. The stability of discrete Hopfield neural networks with delay is investigated mainly by using Lyapunov functions. Sufficient conditions are obtained for the networks with delay to converge towards a limit cycle of length 4. Also, some sufficient criteria are given to ensure that the networks have neither a stable state nor a limit cycle of length 2. The results obtained here generalize previous results on the stability of discrete Hopfield neural networks with and without delay.

  20. Facial expression recognition using constructive neural networks

    Science.gov (United States)

    Ma, Liying; Khorasani, Khashayar

    2001-08-01

    The computer-based recognition of facial expressions has been an active area of research for quite a long time. The ultimate goal is to realize intelligent and transparent communications between human beings and machines. Neural network (NN) based recognition methods have been found to be particularly promising, since an NN is capable of implementing a mapping from the feature space of face images to the facial expression space. However, finding a proper network size has always been a frustrating and time-consuming experience for NN developers. In this paper, we propose to use constructive one-hidden-layer feedforward neural networks (OHL-FNNs) to overcome this problem. The constructive OHL-FNN obtains in a systematic way a proper network size as required by the complexity of the problem being considered. Furthermore, the computational cost involved in network training can be considerably reduced when compared to standard back-propagation (BP) based FNNs. In our proposed technique, the 2-dimensional discrete cosine transform (2-D DCT) is applied over the entire difference face image for extracting relevant features for recognition. The lower-frequency 2-D DCT coefficients obtained are then used to train a constructive OHL-FNN. An input-side pruning technique previously proposed by the authors is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database consisting of images of 60 men, each having 5 facial expression images (neutral, smile, anger, sadness, and surprise). Images of 40 men are used for network training, and the remaining images are used for generalization and

  1. Neural network learning dynamics in a path integral framework

    OpenAIRE

    Balakrishnan, J.

    2003-01-01

    A path-integral formalism is proposed for studying the dynamical evolution in time of patterns in an artificial neural network in the presence of noise. An effective cost function is constructed which determines the unique global minimum of the neural network system. The perturbative method discussed also provides a way for determining the storage capacity of the network.

  2. Dynamic artificial neural networks with affective systems.

    Science.gov (United States)

    Schuman, Catherine D; Birdwell, J Douglas

    2013-01-01

    Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance.

  3. Applying neural networks to ultrasonographic texture recognition

    Science.gov (United States)

    Gallant, Jean-Francois; Meunier, Jean; Stampfler, Robert; Cloutier, Jocelyn

    1993-09-01

    A neural network was trained to classify ultrasound image samples of normal, adenomatous (benign tumor) and carcinomatous (malignant tumor) thyroid gland tissue. The samples themselves, as well as their Fourier spectrum, miscellaneous cooccurrence matrices and 'generalized' cooccurrence matrices, were successively submitted to the network, to determine if it could be trained to identify discriminating features of the texture of the image, and if not, which feature extractor would give the best results. Results indicate that the network could indeed extract some distinctive features from the textures, since it could accomplish a partial classification when trained with the samples themselves. But a significant improvement both in learning speed and performance was observed when it was trained with the generalized cooccurrence matrices of the samples.

  4. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    Science.gov (United States)

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and by comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, empirical research is performed to test the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting values from the stock market indices. PMID:27293423
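
    A hedged sketch of an Elman-style simple recurrent network for one-step-ahead prediction appears below. The synthetic random-walk series, network size, and training scheme (the context state is treated as a fixed input during updates, rather than full BPTT) are assumptions, and the paper's stochastic time effective weighting is omitted.

```python
# Hedged sketch of an Elman simple recurrent network for one-step-ahead
# prediction of a price-like series. Context units copy the previous hidden
# state and are held constant during each weight update (classic Elman
# training, not full backpropagation through time).
import numpy as np

rng = np.random.default_rng(0)
T = 600
series = np.cumsum(rng.normal(0, 1, T))            # synthetic random-walk "index"
series = (series - series.mean()) / series.std()

n_in, n_hid, lr = 1, 12, 0.01
Wx = rng.normal(0, 0.3, (n_hid, n_in))
Wh = rng.normal(0, 0.3, (n_hid, n_hid))
bh = np.zeros(n_hid)
Wy = rng.normal(0, 0.3, n_hid)
by = 0.0

for epoch in range(30):
    h_prev, sse = np.zeros(n_hid), 0.0
    for t in range(T - 1):
        x = np.array([series[t]])
        h = np.tanh(Wx @ x + Wh @ h_prev + bh)
        y_hat = Wy @ h + by
        err = y_hat - series[t + 1]
        sse += err ** 2
        # Gradients of the squared error with the context (h_prev) held constant.
        dh = err * Wy * (1 - h ** 2)
        Wx -= lr * np.outer(dh, x)
        Wh -= lr * np.outer(dh, h_prev)
        bh -= lr * dh
        Wy -= lr * err * h
        by -= lr * err
        h_prev = h
    if epoch % 10 == 0:
        print(f"epoch {epoch}: mean squared error {sse / (T - 1):.4f}")
```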

  7. An introduction to neural network methods for differential equations

    CERN Document Server

    Yadav, Neha; Kumar, Manoj

    2015-01-01

    This book introduces a variety of neural network methods for solving differential equations arising in science and engineering. The emphasis is placed on a deep understanding of the neural network techniques, which are presented in a mostly heuristic and intuitive manner. This approach will enable the reader to understand the working, efficiency and shortcomings of each neural network technique for solving differential equations. The objective of this book is to provide the reader with a sound understanding of the foundations of neural networks and a comprehensive introduction to neural network methods for solving differential equations, together with recent developments in the techniques and their applications. The book comprises four major sections. Section I consists of a brief overview of differential equations and the relevant physical problems arising in science and engineering. Section II illustrates the history of neural networks starting from their beginnings in the 1940s through to the renewed...
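
    As one illustration of the kind of method covered by such texts, the sketch below fits a trial solution built from a small network so that it satisfies a simple ODE at collocation points; the ODE, network size, and optimizer are assumptions and are not drawn from the book.

```python
# Hedged sketch: solve y' = -y, y(0) = 1 on [0, 2] with a neural trial solution
# y_trial(x) = 1 + x * N(x), where N is a one-hidden-layer tanh network.
# The squared ODE residual is minimized at collocation points.
import numpy as np
from scipy.optimize import minimize

n_hid = 10
x_col = np.linspace(0.0, 2.0, 40)

def unpack(p):
    w = p[:n_hid]; b = p[n_hid:2 * n_hid]; v = p[2 * n_hid:]
    return w, b, v

def N(p, x):
    w, b, v = unpack(p)
    return np.tanh(np.outer(x, w) + b) @ v

def dN_dx(p, x):
    w, b, v = unpack(p)
    t = np.tanh(np.outer(x, w) + b)
    return (1 - t ** 2) @ (v * w)

def residual_loss(p):
    y = 1 + x_col * N(p, x_col)
    dy = N(p, x_col) + x_col * dN_dx(p, x_col)
    return np.mean((dy + y) ** 2)          # enforce y' + y = 0

rng = np.random.default_rng(0)
p0 = rng.normal(0, 0.5, 3 * n_hid)
res = minimize(residual_loss, p0, method="BFGS")

x_test = np.linspace(0, 2, 5)
y_nn = 1 + x_test * N(res.x, x_test)
print("x       :", np.round(x_test, 2))
print("NN y(x) :", np.round(y_nn, 4))
print("exact   :", np.round(np.exp(-x_test), 4))
```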

  8. Modeling of Magneto-Rheological Damper with Neural Network

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    With the revival of magnetorheological technology research in the 1980s, its application in vehicles has increasingly focused on vibration suppression. Given the importance of magnetorheological damper modeling, nonparametric modeling with a neural network, which is a promising development for semi-active online control of vehicles with MR suspensions, has been carried out in this study. A two-layer neural network with 7 neurons in the hidden layer, 3 inputs and 1 output was established to simulate the behavior of the MR damper at different excitation currents. In the neural network model, the damping force is a function of displacement, velocity and the applied current. An MR damper for vehicles was fabricated and tested on an MTS machine; the data acquired were utilized for neural network training and validation. The application and validation show that the forces predicted by the neural network match the tested forces well, with a small variance, which demonstrates the effectiveness and precision of neural network modeling.
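
    A hedged sketch of this nonparametric damper model is given below, mirroring the stated structure (3 inputs, 7 hidden neurons, 1 output) but replacing the MTS measurements with an assumed phenomenological force formula plus noise.

```python
# Hedged sketch of nonparametric MR-damper modelling: a small network maps
# (displacement, velocity, applied current) to damping force. The "measured"
# data here come from an assumed formula; the study used MTS test-rig data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
disp = rng.uniform(-0.02, 0.02, n)          # m
vel = rng.uniform(-0.5, 0.5, n)             # m/s
current = rng.uniform(0.0, 2.0, n)          # A
# Assumed current-dependent damping plus a hysteresis-like tanh term (illustrative only).
force = (300 + 900 * current) * vel + 600 * current * np.tanh(80 * vel) \
        + 2000 * disp + rng.normal(0, 5, n)

X = np.column_stack([disp, vel, current])
X_tr, X_te, y_tr, y_te = train_test_split(X, force, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Mirror the paper's structure: one hidden layer with 7 neurons.
net = MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000, random_state=0)
net.fit(scaler.transform(X_tr), y_tr)
pred = net.predict(scaler.transform(X_te))
print("variance of force prediction error:", np.var(pred - y_te))
```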

  9. Flow of Information in Feed-Forward Deep Neural Networks

    OpenAIRE

    Khadivi, Pejman; Tandon, Ravi; Ramakrishnan, Naren

    2016-01-01

    Feed-forward deep neural networks have been used extensively in various machine learning applications. Developing a precise understanding of the underlying behavior of neural networks is crucial for their efficient deployment. In this paper, we use an information theoretic approach to study the flow of information in a neural network and to determine how the entropy of information changes between consecutive layers. Moreover, using the Information Bottleneck principle, we develop a constrained opt...

  10. Proceedings of intelligent engineering systems through artificial neural networks

    International Nuclear Information System (INIS)

    This book contains the edited versions of the technical presentation of ANNIE '91, the first international meeting on Artificial Neural Networks in Engineering. The conference covered the theory of Artificial Neural Networks and its contributions in the engineering domain and attracted researchers from twelve countries. The papers in this edited book are grouped into four categories: Artificial Neural Network Architectures; Pattern Recognition; Adaptive Control, Diagnosis and Process Monitoring; and Neuro-Engineering Systems

  11. Pixel-wise Segmentation of Street with Neural Networks

    OpenAIRE

    Bittel, Sebastian; Kaiser, Vitali; Teichmann, Marvin; Thoma, Martin

    2015-01-01

    Pixel-wise street segmentation of photographs taken from a driver's perspective is important for self-driving cars and can also support other object recognition tasks. A framework called SST was developed to examine the accuracy and execution time of different neural networks. The best result, an $F_1$-score of 89.5%, was achieved with a simple feedforward neural network trained to solve a regression task.

  12. Neural Networks in Economic Modelling: An Empirical Study.

    OpenAIRE

    Verkooijen, W.J.H.

    1996-01-01

    Abstract: This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a statistical technique that implements a model-free regression strategy. Model-free regression seems particularly useful in situations where economic theory cannot provide sensible model spec...

  13. Pattern recognition of state variables by neural networks

    International Nuclear Information System (INIS)

    An artificial intelligence system based on artificial neural networks can be used to classify predefined events and emergency procedures. These systems are being used in different areas. In nuclear reactor safety, the goal is the classification of events whose data can be processed and recognized by neural networks. In this work we present a preliminary, simple system that uses neural networks for pattern recognition, namely the recognition of the variables which define a situation. (author)

  14. Computational Neural Networks: A New Paradigm for Spatial Analysis

    OpenAIRE

    Fischer, M.M.

    1996-01-01

    In this paper a systematic introduction to computational neural network models is given in order to help spatial analysts learn about this exciting new field. The power of computational neural networks vis-à-vis conventional modelling is illustrated for an application field with noisy data of limited record length: spatial interaction modelling of telecommunication data in Austria. The computational appeal of neural networks for solving some fundamental spatial analysis problems is summarized...

  15. Neural Networks Applied to Thermal Damage Classification in Grinding Process

    OpenAIRE

    Spadotto, Marcelo M.; Aguiar, Paulo Roberto de; Sousa, Carlos C. P.; Bianchi, Eduardo C.

    2008-01-01

    The use of a multi-layer perceptron neural network with the back-propagation algorithm yielded very good results. Tests carried out in order to optimize the learning capacity of the neural networks were of utmost importance in the training phase, where the optimum values for the number of neurons in the hidden layer, the learning rate and the momentum for each structure were determined. Once the architecture of the neural network was established with those optimum values, the mean squar...

  16. Analysis of Heart Diseases Dataset using Neural Network Approach

    CERN Document Server

    Rani, K Usha

    2011-01-01

    One of the important techniques of data mining is classification. Many real-world problems in various fields such as business, science, industry and medicine can be solved by using the classification approach. Neural networks have emerged as an important tool for classification, and their advantages help achieve efficient classification of the given data. In this study a heart disease dataset is analyzed using a neural network approach. To increase the efficiency of the classification process, a parallel approach is also adopted in the training phase.

  17. Sensor Temperature Compensation Technique Simulation Based on BP Neural Network

    OpenAIRE

    Xiangwu Wei

    2013-01-01

    The neural network functions in MATLAB's BP neural network (BPNN) toolbox are applied to process data from a CYJ-101 pressure sensor and to solve the problem of sensor temperature compensation. The pressure sensor is treated as the primary sensor and the temperature sensor as the auxiliary sensor; the voltage signals from the two sensors are fed into the established BP neural network model, and the simulation is carried out in the MATLAB Neural Network Toolbox environment...

  18. Neural Networks for Modeling and Control of Particle Accelerators

    CERN Document Server

    Edelen, A L; Chase, B E; Edstrom, D; Milton, S V; Stabile, P

    2016-01-01

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  19. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available One of the most important tasks in machine learning is classification, and neural networks are very important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in this paper. After summarizing the theoretical construction of the model, we conduct a simple and practical weather forecasting experiment, which shows that the recognizer's accuracy reaches 100%, a promising result.

  20. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient-descent-optimized smoothing parameter value to provide efficient classification. However, this optimization consumes quite a long time, which can be a drawback. In this work, one-pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method tries to handle these differences by defining two functions for smoothing parameter calculation, with thresholding applied to determine which function is used. One of these functions is defined for datasets having a wide range of values; it provides balanced smoothing parameters for these datasets through a logarithmic function and by shifting the operating range to the lower boundary. The other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets, and the performance of the one-pass learning generalized classifier neural network is compared with that of the probabilistic neural network, the radial basis function neural network, extreme learning machines, and the standard and logarithmic learning generalized classifier neural network in the MATLAB environment. One-pass learning for the generalized classifier neural network provides classification more than a thousand times faster than the standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, the one-pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase classification performance.
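
    A hedged sketch of the one-pass idea follows: each class receives a smoothing parameter derived from its standard deviation in a single pass, with a threshold deciding between a direct and a logarithmically compressed value. The threshold, the two formulas, and the dataset are illustrative assumptions, not the paper's exact functions.

```python
# Hedged sketch of one-pass smoothing-parameter selection for a PNN/GCNN-style
# classifier. Classes with a widely spread standard deviation get a log-
# compressed smoothing parameter; otherwise the standard deviation is used.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
classes = np.unique(y_tr)

THRESHOLD = 1.0                          # assumed threshold
sigmas = {}
for c in classes:                        # one pass over the training data per class
    s = X_tr[y_tr == c].std()
    sigmas[c] = np.log1p(s) if s > THRESHOLD else s

def predict(x):
    scores = []
    for c in classes:
        Xc, sig = X_tr[y_tr == c], sigmas[c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sig ** 2))))
    return classes[int(np.argmax(scores))]

pred = np.array([predict(x) for x in X_te])
print(f"test accuracy with one-pass smoothing parameters: {(pred == y_te).mean():.3f}")
```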

  1. A walk in the statistical mechanical formulation of neural networks

    OpenAIRE

    Agliari, Elena; Barra, Adriano; Galluzzi, Andrea; Tantari, Daniele; Tavani, Flavia

    2014-01-01

    Neural networks are nowadays both powerful operational tools (e.g., for pattern recognition, data mining, error correction codes) and complex theoretical models on the focus of scientific investigation. As for the research branch, neural networks are handled and studied by psychologists, neurobiologists, engineers, mathematicians and theoretical physicists. In particular, in theoretical physics, the key instrument for the quantitative analysis of neural networks is statistical mechanics. From...

  2. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st- and 2nd-order interpolation, and we present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation possibilities of multilayer perceptrons, using various configurations of neural networks for 1st- and 2nd-order interpolation. The results are compared by means of tables.

  3. Dissipativity Analysis of Neural Networks with Time-varying Delays

    Institute of Scientific and Technical Information of China (English)

    Yan Sun; Bao-Tong Cui

    2008-01-01

    A new definition of dissipativity for neural networks is presented in this paper. By constructing proper Lyapunov functionals and using some analytic techniques, sufficient conditions are given to ensure the dissipativity of neural networks with or without time-varying parametric uncertainties and the integro-differential neural networks in terms of linear matrix inequalities. Numerical examples are given to illustrate the effectiveness of the obtained results.

  4. ANOMALY NETWORK INTRUSION DETECTION SYSTEM BASED ON DISTRIBUTED TIME-DELAY NEURAL NETWORK (DTDNN)

    OpenAIRE

    LAHEEB MOHAMMAD IBRAHIM

    2010-01-01

    In this research, a hierarchical off-line anomaly network intrusion detection system based on a Distributed Time-Delay Artificial Neural Network is introduced. This research aims to solve a hierarchical multi-class problem in which the type of attack (DoS, U2R, R2L and Probe attack) is detected by a dynamic neural network. The results indicate that dynamic neural nets (Distributed Time-Delay Artificial Neural Networks) can achieve a high detection rate, where the overall accuracy classification rate ...

  5. Neural network approach for differential diagnosis of interstitial lung diseases

    Science.gov (United States)

    Asada, Naoki; Doi, Kunio; MacMahon, Heber; Montner, Steven M.; Giger, Maryellen L.; Abe, Chihiro; Wu, Chris Y.

    1990-07-01

    A neural network approach was applied for the differential diagnosis of interstitial lung diseases. The neural network was designed to distinguish between 9 types of interstitial lung diseases based on 20 items of clinical and radiographic information. A database for training and testing the neural network was created with 10 hypothetical cases for each of the 9 diseases. The performance of the neural network was evaluated by ROC analysis. The optimal parameters for the current neural network were determined by selecting those yielding the highest ROC curves. In this case the neural network consisted of one hidden layer including 6 units and was trained with 200 learning iterations. When the decision performances of the neural network, chest radiologists, and senior radiology residents were compared, the neural network showed high performance, comparable to that of the chest radiologists and superior to that of the senior radiology residents. Our preliminary results suggested strongly that the neural network approach has potential utility in the computer-aided differential diagnosis of interstitial lung diseases.
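
    A small sketch in the spirit of the described network is shown below: 20 inputs, one hidden layer of 6 units, 9 disease classes, and 10 cases per disease; the cases are randomly generated stand-ins for the study's hypothetical cases.

```python
# Hedged sketch of a small diagnostic network: 20 clinical/radiographic inputs,
# one hidden layer of 6 units, 9 disease classes, 10 cases per disease.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, cases_per_class, n_features = 9, 10, 20

# Each disease gets its own prototype of the 20 findings, plus noise.
prototypes = rng.random((n_classes, n_features))
X = np.vstack([proto + rng.normal(0, 0.15, (cases_per_class, n_features))
               for proto in prototypes])
y = np.repeat(np.arange(n_classes), cases_per_class)

clf = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```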

  6. A hardware implementation of neural network with modified HANNIBAL architecture

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Bum youb; Chung, Duck Jin [Inha University, Inchon (Korea, Republic of)

    1996-03-01

    A digital hardware architecture for an artificial neural network with learning capability is described in this paper. It is a modified version of the hardware architecture known as HANNIBAL (Hardware Architecture for Neural Networks Implementing Backpropagation Algorithm Learning). To implement efficient neural network hardware, we analyzed various types of multipliers, which are the major function blocks of the neuro-processor cell. Based on this result, we designed efficient digital neural network hardware using a serial/parallel multiplier and tested its operation. We also analyzed the hardware efficiency with logic-level simulation. (author). 14 refs., 10 figs., 3 tabs.

  7. Neural network models: Insights and prescriptions from practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Samad, T. [Honeywell Technology Center, Minneapolis, MN (United States)

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  8. Neural network for solving convex quadratic bilevel programming problems.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network.

  9. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  10. Term Structure of Interest Rates Based on Artificial Neural Network

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In light of the nonlinear approximation capability of artificial neural networks (ANN), the term structure of interest rates is predicted using generalized regression neural network (GRNN) and back-propagation (BP) neural network models. The prediction performance is measured with US interest rate data. The GRNN and BP models are then compared with Vasicek's model and the Cox-Ingersoll-Ross (CIR) model. The comparison reveals that the neural network models outperform Vasicek's model and the CIR model, being more precise and closer to the real market situation.

  11. Digital Watermarking Algorithm Based on Wavelet Transform and Neural Network

    Institute of Scientific and Technical Information of China (English)

    WANG Zhenfei; ZHAI Guangqun; WANG Nengchao

    2006-01-01

    An effective blind digital watermarking algorithm based on neural networks in the wavelet domain is presented. First, the host image is decomposed through a wavelet transform. The significant wavelet coefficients are selected according to human visual system (HVS) characteristics, and watermark bits are added to them. A neural network is then trained to learn the characteristics of the embedded watermark in relation to these coefficients. Because of the learning and adaptive capabilities of neural networks, the trained network almost exactly recovers the watermark from the watermarked image. Experimental results and comparisons with other techniques prove the effectiveness of the new algorithm.

  12. Neural Network Inverse Adaptive Controller Based on Davidon Least Square

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The general neural network inverse adaptive controller has two flaws: the first is slow convergence; the second is its failure on non-minimum-phase systems. These defects limit the scope in which the neural network inverse adaptive controller can be used. We employ Davidon least squares to train the multi-layer feedforward neural network that approximates the inverse model of the plant, in order to expedite convergence. Then, by constructing a pseudo-plant, a neural network inverse adaptive controller is put forward which remains effective for nonlinear non-minimum-phase systems. The simulation results show the validity of this scheme.

  13. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.
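
    The general robustness-scan idea can be sketched as below: train a classifier, then measure how its accuracy degrades as controlled perturbations are applied to its inputs. The classifier, dataset, and Gaussian perturbation model are assumptions that stand in for the ATLAS neural network and detector-condition variations.

```python
# Hedged sketch of an input-variation robustness scan for a trained classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=12, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)

rng = np.random.default_rng(0)
for scale in [0.0, 0.05, 0.1, 0.2, 0.5]:
    # Gaussian smearing of the inputs stands in for detector-condition variations.
    X_var = X_te + rng.normal(0, scale, X_te.shape) * X_te.std(axis=0)
    acc = clf.score(X_var, y_te)
    print(f"input variation {scale:4.2f} -> accuracy {acc:.3f}")
```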

  14. NEURAL NETWORK TRAINING WITH PARALLEL PARTICLE SWARM OPTIMIZER

    Institute of Scientific and Technical Information of China (English)

    Qin Zheng; Liu Yu; Wang Yu

    2006-01-01

    Objective To reduce the execution time of neural network training. Methods A parallel particle swarm optimization algorithm based on the master-slave model is proposed to train radial basis function neural networks; it is implemented on a cluster using MPI libraries for inter-process communication. Results A high speed-up factor is achieved and execution time is greatly reduced. Moreover, the resulting neural network has good classification accuracy not only on training sets but also on test sets. Conclusion Since the fitness evaluation is computationally intensive, parallel particle swarm optimization shows great advantages in speeding up neural network training.

  15. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  16. Designing neural networks that process mean values of random variables

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Michael J. [AIT Austrian Institute of Technology, Innovation Systems Department, 1220 Vienna (Austria); Clark, John W. [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Centro de Ciências Matemáticas, Universidade de Madeira, 9000-390 Funchal (Portugal)

    2014-06-13

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence.

  17. Convolution neural networks for ship type recognition

    Science.gov (United States)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  18. An Intelligent technical analysis using neural network

    Directory of Open Access Journals (Sweden)

    Reza Raei

    2011-07-01

    Full Text Available Technical analysis has been one of the most popular methods for stock market prediction for the past few decades. Numerous technical analysis methods have been developed to study the behavior of different kinds of trading markets, such as currency, commodity or stock markets. In this paper, we propose two different methods for stock trading based on the volume adjusted moving average and the ease of movement indicator. These methods are used with and without generalized regression neural networks, and the results are compared with each other. The preliminary results on the historical stock prices of 20 firms indicate that there is no meaningful difference between the various models proposed in this paper.
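
    For reference, the two indicators named above can be computed as in the sketch below; the window lengths and volume scale follow common conventions and are assumptions rather than the paper's settings, and a plain volume-weighted moving average is used as a simple stand-in for the volume adjusted moving average.

```python
# Hedged sketch of the two indicators: a volume-weighted moving average as a
# simple stand-in for the volume adjusted moving average (VAMA), and the ease
# of movement (EMV) indicator, computed on synthetic price data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 250
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, n)))
high = close + rng.uniform(0.1, 1.0, n)
low = close - rng.uniform(0.1, 1.0, n)
volume = pd.Series(rng.integers(100_000, 500_000, n).astype(float))

# Volume-weighted moving average: prices weighted by traded volume.
window = 20
vama = (close * volume).rolling(window).sum() / volume.rolling(window).sum()

# Ease of movement: distance moved divided by a volume/range "box ratio".
mid = (high + low) / 2
distance_moved = mid.diff()
box_ratio = (volume / 1e6) / (high - low)
emv = (distance_moved / box_ratio).rolling(14).mean()

signals = pd.DataFrame({"close": close, "VAMA": vama, "EMV": emv})
print(signals.tail())
```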

  19. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: brightness and shape algorithms. These algorithms detect both the shape and the brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images, and computes its number of discharges. The neural network was implemented using the backpropagation algorithm, and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files, containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the event's number of discharges was correctly computed. The neural network used in this project achieved a

  20. Transient Stability Assessment using Artificial Neural Networks

    OpenAIRE

    Krishna, S; Padiyar, KR

    2000-01-01

    Online transient stability assessment (TSA) of a power system is not yet feasible due to the intensive computation involved. Artificial neural networks (ANN) have been proposed as one of the approaches to this problem because of their ability to quickly map nonlinear relationships between the input data and the output. In this paper a review of the previously published papers on TSA using ANN is presented. The paper also reports the results of the application of ANN to the problem of TSA of a...

  1. Evaluating Neural Network Predictors by Bootstrapping

    OpenAIRE

    Blake LeBaron; Andreas S. Weigend

    1994-01-01

    We present a new method, inspired by the bootstrap, whose goal it is to determine the quality and reliability of a neural network predictor. Our method leads to more robust forecasting along with a large amount of statistical information on forecast performance that we exploit. We exhibit the method in the context of multi-variate time series prediction on financial data from the New York Stock Exchange. It turns out that the variation due to different resamplings (i.e., splits between traini...

  2. Email Spam Filter using Bayesian Neural Networks

    Directory of Open Access Journals (Sweden)

    Nibedita Chakraborty

    2012-03-01

    Full Text Available Nowadays, e-mail has become one of the fastest and most economical forms of communication, but it is prone to misuse. One such misuse is the posting of unsolicited, unwanted e-mails known as spam or junk e-mails. This paper presents and discusses an implementation of a spam filtering system. The idea is to use a neural network which is trained to recognize different forms of frequently used words in spam mails. The Bayesian ANN is trained with finite sample sizes to approximate the ideal observer. This strategy can provide better spam filtering than existing static spam filters.

  3. Robotic velocity generation using neural network

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The fast-paced nature of robotic soccer requires real-time sensing coupled with quick decision making and acting. The robot must have a high response rate and precise motion, and it must be robust enough to withstand interference during an intense match. During matches, however, we find that the robot usually does not act exactly according to the commands from the host computer. In this paper, we analyze the reason and present a method that uses a BP neural network to output robot velocities directly, instead of a conventional path-planning strategy, in order to reduce the error between the actual motion and the ideal plan.

  4. Colored Noise Prediction Based on Neural Network

    Institute of Scientific and Technical Information of China (English)

    Gao Fei; Zhang Xiaohui

    2003-01-01

    A method for predicting colored noise, based on the prediction of nonlinear time series, is presented. By adopting three kinds of neural network prediction models, colored noise prediction is studied while varying the filter bandwidth of the stochastic noise and the sampling rate of the colored noise. The results show that colored noise can be predicted. The prediction error decreases as the sampling rate increases or the filter bandwidth narrows. If the parameters are selected properly, the prediction precision can meet the requirements of engineering implementation. The results offer a new reference for increasing the ability to detect weak signals in signal processing systems.
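
    A hedged sketch of the setup is shown below: white noise is low-pass filtered to produce colored noise, lagged samples form the network inputs, and a small network predicts the next sample. The filter order, cutoff, and lag count are illustrative assumptions.

```python
# Hedged sketch of colored-noise prediction with a small neural network.
import numpy as np
from scipy.signal import butter, lfilter
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
white = rng.normal(0, 1, 5000)
b, a = butter(4, 0.1)                 # 4th-order low-pass, normalized cutoff 0.1
colored = lfilter(b, a, white)

lags = 8                              # past samples used as network inputs
X = np.column_stack([colored[i:len(colored) - lags + i] for i in range(lags)])
y = colored[lags:]
split = int(0.8 * len(y))

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X[:split], y[:split])
pred = net.predict(X[split:])
nmse = np.mean((pred - y[split:]) ** 2) / np.var(y[split:])
print(f"normalized prediction error on held-out colored noise: {nmse:.4f}")
```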

  5. Supervised Learning in Multilayer Spiking Neural Networks

    CERN Document Server

    Sporea, Ioana

    2012-01-01

    The current article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms, as it can be applied to neurons firing multiple spikes and, in principle, to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris data set, as well as to complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike-train encoding.

  6. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    Full Text Available One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and surrounded by a local circularly confined network (“island”) of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of the neurophobic, polyimide-coated substrate with a triblock-copolymer coating enhances the neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV), which is two weeks later than in unpatterned networks.

  7. Predicate calculus for an architecture of multiple neural networks

    Science.gov (United States)

    Consoli, Robert H.

    1990-08-01

    Future projects with neural networks will require multiple individual network components. Current efforts along these lines are ad hoc. This paper relates the neural network to a classical device and derives a multi-part architecture from that model. Further, it provides a Predicate Calculus variant for describing the location and nature of the trainings and suggests Resolution Refutation as a method for determining the performance of the system as well as the location of needed trainings for specific proofs. 2. THE NEURAL NETWORK AND A CLASSICAL DEVICE Recently investigators have been making reports about architectures of multiple neural networks [2-4]. These efforts are appearing at an early stage in neural network investigations; they are characterized by architectures suggested directly by the problem space. Touretzky and Hinton suggest an architecture for processing logical statements [1]; the design of this architecture arises from the syntax of a restricted class of logical expressions and exhibits syntactic limitations. In similar fashion, a multiple neural network arises out of a control problem [2], from the sequence learning problem [3], and from the domain of machine learning [4]. But a general theory of multiple neural devices is missing. More general attempts to relate single or multiple neural networks to classical computing devices are not common, although an attempt is made to relate single neural devices to a Turing machine, and Sun et al. develop a multiple neural architecture that performs pattern classification.

  8. Sonar discrimination of cylinders from different angles using neural networks

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whitlow; Larsen, Jan;

    1999-01-01

    This paper describes an underwater object discrimination system applied to recognize cylinders of various compositions from different angles. The system is based on a new combination of simulated dolphin clicks, simulated auditory filters and artificial neural networks. The model demonstrates its...

  9. Wavelet Neural Network Based Traffic Prediction for Next Generation Network

    Institute of Scientific and Technical Information of China (English)

    Zhao Qigang; Li Qunzhan; He Zhengyou

    2005-01-01

    By using NetFlow traffic collection technology, traffic data for analysis were collected from a next generation network (NGN) operator. To build a wavelet-basis neural network (NN), the sigmoid activation function in the NN is replaced with a wavelet. The wavelet multiresolution analysis method is then used to decompose the traffic signal, and the decomposed component sequences are employed to train the NN. Using these methods, an NGN traffic prediction model is built to predict one day's traffic. The experimental results show that the wavelet NN traffic prediction method is more accurate than the method without wavelets in NGN traffic forecasting.
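
    The central idea above, replacing the sigmoid activation with a wavelet, can be sketched as follows. This is an illustrative single-hidden-layer forward pass rather than the authors' model; the Morlet-style mother wavelet, the layer sizes and the random parameters are assumptions.

      import numpy as np

      def morlet(t):
          # A Morlet-style mother wavelet used here as the hidden-unit activation.
          return np.cos(1.75 * t) * np.exp(-0.5 * t ** 2)

      def wavelet_net_forward(x, weights, translations, dilations, v, bias):
          """Single-hidden-layer wavelet network: each hidden unit applies the
          wavelet to a translated/dilated projection of the input."""
          z = (weights @ x - translations) / dilations   # one scalar per hidden unit
          return v @ morlet(z) + bias                    # linear output layer

      rng = np.random.default_rng(1)
      n_in, n_hidden = 4, 6                              # assumed sizes
      params = dict(
          weights=rng.standard_normal((n_hidden, n_in)),
          translations=rng.standard_normal(n_hidden),
          dilations=np.ones(n_hidden),
          v=rng.standard_normal(n_hidden),
          bias=0.0,
      )
      x = rng.standard_normal(n_in)                      # e.g. recent traffic samples
      print("predicted traffic value:", wavelet_net_forward(x, **params))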

  10. Neural Network Model Based Cluster Head Selection for Power Control

    Directory of Open Access Journals (Sweden)

    Krishan Kumar

    2011-01-01

    Full Text Available A mobile ad-hoc network faces the challenge of limited power in prolonging the lifetime of the network, because power is a valuable resource in mobile ad-hoc networks. The status of power consumption should be continuously monitored after network deployment. In this paper, we propose coverage-aware, neural-network-based power control routing with the objective of maximizing the network lifetime. Cluster head selection is performed using adaptive learning in neural networks, followed by coverage. The simulation results show that the proposed scheme can be used in a wide range of applications in mobile ad-hoc networks.

  11. Programmable synaptic chip for electronic neural networks

    Science.gov (United States)

    Moopenn, A.; Langenbacher, H.; Thakoor, A. P.; Khanna, S. K.

    1988-01-01

    A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32X32 array of 'long channel' NMOSFET binary connection elements implemented in a 3-micron bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a 'cascadable' building block for a multi-chip synaptic network as large as 512X512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of the synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

  12. A hybrid neural network model for consciousness

    Institute of Scientific and Technical Information of China (English)

    蔺杰; 金小刚; 杨建刚

    2004-01-01

    A new framework for consciousness is introduced based upon traditional artificial neural network models. This framework reflects explicit connections between two parts of the brain: one global working memory and distributed modular cerebral networks relating to specific brain functions. Accordingly this framework is composed of three layers, physical mnemonic layer and abstract thinking layer, which cooperate together through a recognition layer to accomplish information storage and cognition using algorithms of how these interactions contribute to consciousness: (1) the reception process whereby cerebral subsystems group distributed signals into coherent object patterns; (2) the partial recognition process whereby patterns from particular subsystems are compared or stored as knowledge; and (3) the resonant learning process whereby global workspace stably adjusts its structure to adapt to patterns' changes. Using this framework, various sorts of human actions can be explained, leading to a general approach for analyzing brain functions.

  13. A Convolutional Neural Network Neutrino Event Classifier

    CERN Document Server

    Aurisano, A; Rocco, D; Himmel, A; Messier, M D; Niner, E; Pawloski, G; Psihas, F; Sousa, A; Vahle, P

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  14. A convolutional neural network neutrino event classifier

    Science.gov (United States)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  15. Software failures prediction using RBF neural network

    Directory of Open Access Journals (Sweden)

    Vitaliy S. Yakovyna

    2015-06-01

    Full Text Available One of the promising techniques for software reliability prediction is based on nonparametric models, in particular artificial neural networks. In this paper, the influence of the number of input neurons of a network based on radial basis functions on the efficiency of predicting software failures, presented in the form of a time series, is studied. The software fault time series are constructed from the testing data of the Chromium and Chromium OS open source software systems, processed as normalized numbers of software failures in equal intervals and then converted to man-days. It is demonstrated that the closest prediction can be achieved using the Inverse Multiquadric activation function with 10…20 input layer neurons and 30 hidden neurons.
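
    As an illustration of the kind of model studied (not the authors' code), the sketch below fits an RBF network with an inverse multiquadric activation to a failure-count series by solving for the output weights with least squares. The stand-in failure series, the 10 input lags, the 30 hidden centres and the width parameter are all assumptions.

      import numpy as np

      def inverse_multiquadric(r2, beta=1.0):
          # Inverse multiquadric activation applied to squared distances r2.
          return 1.0 / np.sqrt(r2 + beta ** 2)

      rng = np.random.default_rng(2)
      failures = rng.poisson(lam=5.0, size=300).astype(float)   # stand-in failure counts

      # Lagged inputs: predict the next interval's failure count from the last 10.
      lags = 10
      X = np.array([failures[t - lags:t] for t in range(lags, len(failures))])
      y = failures[lags:]

      # Choose 30 hidden centres from the training inputs.
      centres = X[rng.choice(len(X), size=30, replace=False)]

      # Hidden layer: inverse multiquadric of squared distances to the centres.
      d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
      H = inverse_multiquadric(d2)

      # Output weights by linear least squares.
      w, *_ = np.linalg.lstsq(H, y, rcond=None)
      pred = H @ w
      print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))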

  16. A hybrid neural network model for consciousness

    Institute of Scientific and Technical Information of China (English)

    蔺杰; 金小刚; 杨建刚

    2004-01-01

    A new framework for consciousness is introduced based upon traditional artificial neural network models. This framework reflects explicit connections between two parts of the brain: one global working memory and distributed modular cerebral networks relating to specific brain functions. Accordingly this framework is composed of three layers, physical mnemonic layer and abstract thinking layer, which cooperate together through a recognition layer to accomplish information storage and cognition using algorithms of how these interactions contribute to consciousness: (1) the reception process whereby cerebral subsystems group distributed signals into coherent object patterns; (2) the partial recognition process whereby patterns from particular subsystems are compared or stored as knowledge; and (3) the resonant learning process whereby global workspace stably adjusts its structure to adapt to patterns' changes. Using this framework, various sorts of human actions can be explained, leading to a general approach for analyzing brain functions.

  17. Edge detection of range images using genetic neural networks

    Institute of Scientific and Technical Information of China (English)

    FAN Jian-ying; DU Ying; ZHOU Yang; WANG Yang

    2009-01-01

    Due to complexity and asymmetrical illumination, object images are difficult to segment effectively with routine methods. In this paper, an edge detection method for range images based on image features and a genetic-algorithm neural network is proposed. Fully considering the essential difference between an edge point and a noise point, some characteristic parameters are extracted from range maps as the input nodes of the network. First, a genetic neural network is designed and implemented: the neural network is trained by a genetic algorithm, so the method combines the global optimization of the genetic algorithm with the parallel computation of the neural network and therefore has good global properties. The experimental results show that this method yields much faster and more accurate detection results than the classical differential algorithm and has better anti-noise performance.

  18. Chinese word sense disambiguation based on neural networks

    Institute of Scientific and Technical Information of China (English)

    LIU Ting; LU Zhi-mao; LANG Jun; LI Sheng

    2005-01-01

    The input to the network is the key problem for Chinese word sense disambiguation using a neural network. This paper presents an input model for the neural network that calculates the mutual information between contextual words and the ambiguous word using statistical methodology, taking a certain number of contextual words on each side of the ambiguous word according to (-M, +N). The experiment adopts a three-layer BP neural network model and shows how the size of the training set and the values of M and N affect the performance of the neural network model. The experimental objects are six pseudowords, each having three word senses, constructed according to certain principles. The tested accuracy of our approach reaches 90.31% on a closed corpus and 89.62% on an open corpus. The experiment shows that the neural network model performs well on word sense disambiguation.
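
    A toy sketch of the statistical part described above (not the authors' system): estimating the pointwise mutual information between a context word and a sense of the ambiguous word from co-occurrence counts, the kind of score that would then be fed, for the words in the (-M, +N) window, into the network. The tiny counts are invented for illustration.

      import math
      from collections import Counter

      # Invented co-occurrence counts: (context_word, sense) pairs observed in a corpus.
      pairs = Counter({
          ("bank", "finance"): 40, ("money", "finance"): 35, ("river", "finance"): 2,
          ("bank", "river"): 5,    ("money", "river"): 1,    ("river", "river"): 30,
      })
      total = sum(pairs.values())

      def mutual_information(word, sense):
          """Pointwise mutual information I(word; sense) = log p(w, s) / (p(w) p(s))."""
          p_ws = pairs[(word, sense)] / total
          p_w = sum(c for (w, _), c in pairs.items() if w == word) / total
          p_s = sum(c for (_, s), c in pairs.items() if s == sense) / total
          return math.log(p_ws / (p_w * p_s)) if p_ws > 0 else float("-inf")

      for word in ("money", "river"):
          for sense in ("finance", "river"):
              print(word, sense, round(mutual_information(word, sense), 3))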

  19. A survey on RBF Neural Network for Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Henali Sheth

    2014-12-01

    Full Text Available Network security is a pressing issue nowadays. With advancing technology, intruders and hackers are adopting new methods to create attacks that harm network security. An intrusion detection system (IDS) is security software that inspects all incoming and outgoing network traffic and generates alerts if any attack or unusual behavior is found in the network. Various approaches are used for IDS, such as data mining, neural networks, genetic algorithms and statistical approaches. Among these, the neural network is a particularly suitable approach for IDS. This paper describes the RBF neural network approach for intrusion detection systems. RBF is a feed-forward, supervised neural network technique. The RBF approach has good classification ability, but its performance depends on its parameters. Based on the survey, we find that the RBF approach has some shortcomings; to overcome them, proper optimization of the RBF parameters is needed.

  20. USING A NEURAL NETWORK TO PREDICT ELECTRICITY GENERATION

    Science.gov (United States)

    The paper discusses using a neural network to predict electricity generation. Such predictions are important in developing forecasts of air pollutant release and in evaluating the effectiveness of alternative policies which may reduce pollution. A neural network model (NUMOD) that pr...

  1. Layered learning of soccer robot based on artificial neural network

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Discusses the application of artificial neural networks to MIROSOT, introduces a layered BP-network model of a soccer robot for learning basic behavior and cooperative behavior, and concludes from experimental results that the model is effective.

  2. Combining Neural Networks for Skin Detection

    CERN Document Server

    Doukim, Chelsia Amy; Chekima, Ali; Omatu, Sigeru

    2011-01-01

    Two types of combining strategies were evaluated namely combining skin features and combining skin classifiers. Several combining rules were applied where the outputs of the skin classifiers are combined using binary operators such as the AND and the OR operators, "Voting", "Sum of Weights" and a new neural network. Three chrominance components from the YCbCr colour space that gave the highest correct detection on their single feature MLP were selected as the combining parameters. A major issue in designing a MLP neural network is to determine the optimal number of hidden units given a set of training patterns. Therefore, a "coarse to fine search" method to find the number of neurons in the hidden layer is proposed. The strategy of combining Cb/Cr and Cr features improved the correct detection by 3.01% compared to the best single feature MLP given by Cb-Cr. The strategy of combining the outputs of three skin classifiers using the "Sum of Weights" rule further improved the correct detection by 4.38% compared t...

  3. Neural network analysis for hazardous waste characterization

    Energy Technology Data Exchange (ETDEWEB)

    Misra, M.; Pratt, L.Y.; Farris, C. [Colorado School of Mines, Golden, CO (United States)] [and others]

    1995-12-31

    This paper is a summary of our work in developing a system for interpreting electromagnetic (EM) and magnetic sensor information from the dig face characterization experimental cell at INEL to determine the depth and nature of buried objects. This project contained three primary components: (1) development and evaluation of several geophysical interpolation schemes for correcting missing or noisy data, (2) development and evaluation of several wavelet compression schemes for removing redundancies from the data, and (3) construction of two neural networks that used the results of steps (1) and (2) to determine the depth and nature of buried objects. This work is a proof-of-concept study that demonstrates the feasibility of this approach. The resulting system was able to determine the nature of buried objects correctly 87% of the time and was able to locate a buried object to within an average error of 0.8 feet. These statistics were gathered based on a large test set and so can be considered reliable. Considering the limited nature of this study, these results strongly indicate the feasibility of this approach, and the importance of appropriate preprocessing of neural network input data.

  4. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    An artificial neural network (ANN) has been designed to obtain neutron doses using only the count rates of a Bonner spheres spectrometer (BSS). Ambient, personal and effective neutron doses were included. One hundred and eighty-one neutron spectra were utilised to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the BSS and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the MATLAB environment. The impact of uncertainties in BSS count rates upon the dose quantities calculated with the ANN was investigated by modifying by ±5% the BSS count rates used in the training set. The use of ANNs in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (authors)

  5. Boundary Depth Information Using Hopfield Neural Network

    Science.gov (United States)

    Xu, Sheng; Wang, Ruisheng

    2016-06-01

    Depth information is widely used for the representation, reconstruction and modeling of 3D scenes. Generally, two kinds of methods can obtain depth information. One uses distance cues from a depth camera, but the results depend heavily on the device, and the accuracy degrades greatly as the distance to the object increases. The other uses binocular cues from matching to obtain the depth information; collecting depth information of different scenes by stereo matching methods has become increasingly mature and convenient. In the objective function, the data term ensures that the difference between matched pixels is small, and the smoothness term smooths neighbors with different disparities. Nonetheless, the smoothness term blurs the boundary depth information of the object, which becomes the bottleneck of stereo matching. This paper proposes a novel energy function for the boundary to keep the discontinuities and uses a Hopfield neural network to solve the optimization. We first extract the regions of interest, which are the boundary pixels in the original images. Then, we develop the boundary energy function to calculate the matching cost. Finally, we solve the optimization globally with the Hopfield neural network. The Middlebury stereo benchmark is used to test the proposed method, and results show that our boundary depth information is more accurate than that of other state-of-the-art methods and can be used to optimize the results of other stereo matching methods.
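
    To make the optimization step concrete, here is a generic sketch (not the paper's energy function) of how a discrete Hopfield network minimizes a quadratic energy E(s) = -0.5 s^T W s - b^T s by asynchronous updates; the weights and biases below are arbitrary placeholders standing in for the boundary-aware matching costs.

      import numpy as np

      def hopfield_minimize(W, b, steps=200, rng=None):
          """Asynchronously update binary units s_i in {-1, +1} so that the
          energy E(s) = -0.5 * s.T @ W @ s - b @ s never increases."""
          rng = rng or np.random.default_rng(0)
          n = len(b)
          s = rng.choice([-1.0, 1.0], size=n)
          for _ in range(steps):
              i = rng.integers(n)
              s[i] = 1.0 if W[i] @ s + b[i] >= 0 else -1.0   # local field decides the state
          return s

      def energy(s, W, b):
          return -0.5 * s @ W @ s - b @ s

      rng = np.random.default_rng(3)
      n = 10
      W = rng.standard_normal((n, n))
      W = 0.5 * (W + W.T)                 # symmetric coupling, as Hopfield dynamics require
      np.fill_diagonal(W, 0.0)            # no self-connections
      b = rng.standard_normal(n)          # stand-in for per-unit matching costs

      s = hopfield_minimize(W, b, rng=rng)
      print("final energy:", energy(s, W, b))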

  6. Parameterizing Stellar Spectra Using Deep Neural Networks

    CERN Document Server

    Li, Xiangru

    2016-01-01

    This work investigates the spectrum parameterization problem using deep neural networks (DNNs). The proposed scheme consists of the following procedures: first, the configuration of a DNN is initialized using a series of autoencoder neural networks; second, the DNN is fine-tuned using a gradient descent scheme; third, stellar parameters ($T_{eff}$, log$~g$, and [Fe/H]) are estimated using the obtained DNN. This scheme was evaluated on both real spectra from SDSS/SEGUE and synthetic spectra calculated from Kurucz's new opacity distribution function models. Test consistencies between our estimates and those provided by the spectroscopic parameter pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0048, 0.1477, and 0.1129 dex for log$~T_{eff}$, log$~g$, and [Fe/H] (64.85 K for $T_{eff}$), respectively. For the synthetic spectra, the MAE test accuracies are 0.0011, 0.0182, and 0.0112 dex for log$~T_{eff}$, log$~g$, and [Fe/H] (14.90 K for $T_{eff}$), respectively.
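
    The layer-wise initialization plus fine-tuning scheme can be outlined as follows. This is a schematic sketch, not the authors' pipeline: the layer sizes, learning rate and placeholder data are assumptions, and for brevity the "fine-tuning" here only refits the linear output layer, whereas the paper fine-tunes the whole network with gradient descent.

      import numpy as np

      rng = np.random.default_rng(4)

      def train_autoencoder(X, n_hidden, lr=0.01, epochs=200):
          """Train one tanh autoencoder layer (untied decoder) by gradient descent
          and return the encoder weights."""
          n_in = X.shape[1]
          W = rng.standard_normal((n_in, n_hidden)) * 0.1   # encoder
          V = rng.standard_normal((n_hidden, n_in)) * 0.1   # decoder
          for _ in range(epochs):
              H = np.tanh(X @ W)
              E = H @ V - X                                  # reconstruction error
              dV = H.T @ E / len(X)
              dH = (E @ V.T) * (1 - H ** 2)
              dW = X.T @ dH / len(X)
              W -= lr * dW
              V -= lr * dV
          return W

      # Placeholder "spectra" (200 samples x 50 flux bins) and 3 target parameters.
      X = rng.standard_normal((200, 50))
      y = rng.standard_normal((200, 3))       # stands in for (log Teff, log g, [Fe/H])

      # Greedy layer-wise pretraining of two encoder layers.
      W1 = train_autoencoder(X, 25)
      H1 = np.tanh(X @ W1)
      W2 = train_autoencoder(H1, 10)
      H2 = np.tanh(H1 @ W2)

      # For brevity, "fine-tuning" here only fits the linear output layer by least
      # squares; the paper fine-tunes all layers with gradient descent.
      Wout, *_ = np.linalg.lstsq(H2, y, rcond=None)
      pred = H2 @ Wout
      print("mean absolute error per parameter:", np.abs(pred - y).mean(axis=0))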

  7. Damage identification with probabilistic neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Klenke, S.E.; Paez, T.L.

    1995-12-01

    This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
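
    A minimal sketch of the first, classical PNN idea (Parzen-window class-conditional densities feeding a Bayesian decision), with invented one-dimensional response measures; it is not the authors' implementation, and the Gaussian kernel and its width are assumptions.

      import numpy as np

      def pnn_classify(x, exemplars_by_class, sigma=0.5, priors=None):
          """Probabilistic neural network: score each class by a Gaussian Parzen
          density over its exemplars and pick the class with the largest
          prior-weighted density (Bayesian decision)."""
          classes = list(exemplars_by_class)
          priors = priors or {c: 1.0 / len(classes) for c in classes}
          scores = {}
          for c in classes:
              E = np.asarray(exemplars_by_class[c], dtype=float)
              d2 = ((x - E) ** 2).sum(axis=-1) if E.ndim > 1 else (x - E) ** 2
              scores[c] = priors[c] * np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
          return max(scores, key=scores.get), scores

      # Invented scalar response measures for undamaged / damaged exemplars.
      exemplars = {
          "undamaged": [0.9, 1.0, 1.1, 1.05, 0.95],
          "damaged":   [1.6, 1.7, 1.8, 1.75],
      }
      label, scores = pnn_classify(1.65, exemplars)
      print(label, scores)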

  8. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    International Nuclear Information System (INIS)

    With the Bonner spheres spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least-squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence technology have been used. Approaches based on genetic algorithms and artificial neural networks (ANN) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANN, it still has some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning parameters of the ANN. In recent years, hybrid technologies combining ANN and genetic algorithms have been utilized. In this work, several ANN topologies were trained and tested using ANN and genetically evolved artificial neural networks with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. Here, a comparative study of both procedures has been carried out. (Author)

  9. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M. [Escuela Politecnica Superior, Departamento de Electrotecnia y Electronica, Avda. Menendez Pidal s/n, Cordoba (Spain); Martinez B, M. R.; Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Gallego D, E.; Lorente F, A. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, ETSI Industriales, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E., E-mail: morvymm@yahoo.com.m [CIEMAT, Laboratorio de Metrologia de Radiaciones Ionizantes, Avda. Complutense 22, 28040 Madrid (Spain)

    2011-02-15

    With the Bonner spheres spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least-squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence technology have been used. Approaches based on genetic algorithms and artificial neural networks (ANN) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANN, it still has some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning parameters of the ANN. In recent years, hybrid technologies combining ANN and genetic algorithms have been utilized. In this work, several ANN topologies were trained and tested using ANN and genetically evolved artificial neural networks with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. Here, a comparative study of both procedures has been carried out. (Author)

  10. Impact of Mutation Weights on Training Backpropagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Lamia Abed Noor Muhammed

    2014-07-01

    Full Text Available A neural network is a computational approach based on the simulation of biological neural networks. Training is governed by several parameters: the learning rate, the initialized weights, the network architecture, and so on. This paper focuses on one of these parameters, namely the weights. The aim is to shed light on mutating the weights during network training and its effect on the results. The experiment was done using a backpropagation neural network with one hidden layer. The results reveal the role of mutation in escaping from local minima and making the change
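
    A sketch of the idea being studied (not the paper's exact experiment): ordinary backpropagation on a one-hidden-layer network, with a small random mutation occasionally applied to the weights so training can jump out of a local minimum. The XOR task, mutation rate and mutation magnitude are assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      # XOR data, a classic task with local minima for small networks.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0.0, 1.0, 1.0, 0.0])

      W1 = rng.standard_normal((2, 3)) * 0.5
      b1 = np.zeros(3)
      W2 = rng.standard_normal(3) * 0.5
      b2 = 0.0
      lr, mutation_rate, mutation_scale = 0.5, 0.02, 0.3

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      for epoch in range(5000):
          # Forward and backward pass (standard backpropagation).
          H = sigmoid(X @ W1 + b1)
          out = sigmoid(H @ W2 + b2)
          err = out - y
          d_out = err * out * (1 - out)
          d_H = np.outer(d_out, W2) * H * (1 - H)
          W2 -= lr * H.T @ d_out / len(y)
          b2 -= lr * d_out.mean()
          W1 -= lr * X.T @ d_H / len(y)
          b1 -= lr * d_H.mean(axis=0)
          # Occasionally mutate the weights to help escape local minima.
          if rng.random() < mutation_rate:
              W1 += rng.standard_normal(W1.shape) * mutation_scale
              W2 += rng.standard_normal(W2.shape) * mutation_scale

      print("final MSE:", np.mean(err ** 2))
      print("outputs:", np.round(out, 3))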

  11. Adaptive training of feedforward neural networks by Kalman filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering; Tuerkcan, E. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands)

    1995-02-01

    Adaptive training of feedforward neural networks by Kalman filtering is described. Adaptive training is particularly important for estimation by a neural network in a real-time environment, where the trained network is used for system estimation while it continues to be trained with the information provided by the ongoing operation. As a result, the neural network adapts itself to a changing environment to perform its mission without recourse to re-training. The performance of the training method is demonstrated by means of actual process signals from a nuclear power plant. (orig.).
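
    The flavor of Kalman-filter training can be conveyed with a simplified sketch (not the authors' method): only the linear output weights of a network with a fixed random hidden layer are updated sample by sample with the standard Kalman/recursive-least-squares recursion, which gives the adaptive, re-training-free behavior described above. All signals, sizes and noise levels are invented.

      import numpy as np

      rng = np.random.default_rng(6)

      # Fixed random hidden layer; only the output weights w are estimated online.
      n_in, n_hidden = 3, 10
      W_hidden = rng.standard_normal((n_in, n_hidden))

      def features(x):
          return np.tanh(x @ W_hidden)

      # Kalman / recursive-least-squares state: weight estimate and its covariance.
      w = np.zeros(n_hidden)
      P = np.eye(n_hidden) * 100.0
      r = 0.1                                  # assumed measurement-noise variance

      def kalman_update(w, P, h, target):
          """One Kalman update of the output weights for a single sample."""
          k = P @ h / (h @ P @ h + r)          # Kalman gain
          w = w + k * (target - h @ w)         # correct with the prediction error
          P = P - np.outer(k, h) @ P           # update the error covariance
          return w, P

      # Stream of (input, target) pairs from some slowly drifting plant signal.
      for t in range(500):
          x = rng.standard_normal(n_in)
          target = np.sin(x.sum()) + 0.01 * t / 500.0   # mildly non-stationary target
          h = features(x)
          w, P = kalman_update(w, P, h, target)

      print("final weight norm:", np.linalg.norm(w))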

  12. Improved Local Weather Forecasts Using Artificial Neural Networks

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Jørgensen, Bo Nørregaard

    2015-01-01

    using an artificial neural network. The neural network used is a NARX network, which is known to model non-linear systems well. The predictions are compared to both a design reference year as well as commercial weather forecasts based upon numerical modelling. The results presented in this paper show...... that the network outperforms the commercial forecast for lower step aheads (network’s performance is in the range of the commercial forecast. However, the neural network approach is fast, fairly precise and allows for further expansion with higher resolution....

  13. Learning and coding in biological neural networks

    Science.gov (United States)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and

  14. Automated Modeling of Microwave Structures by Enhanced Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2006-12-01

    Full Text Available The paper describes the methodology of the automated creation of neural models of microwave structures. During the creation process, artificial neural networks are trained using a combination of particle swarm optimization and the quasi-Newton method to avoid critical training problems of conventional neural nets. In the paper, neural networks are used to approximate the behavior of a planar microwave filter (moment method, Zeland IE3D). In order to evaluate the efficiency of neural modeling, global optimizations are performed using both numerical models and neural ones. Both approaches are compared from the viewpoint of CPU-time demands and accuracy. In the conclusions, methodological recommendations for including neural networks in microwave design are formulated.

  15. A Tutorial on Deep Neural Networks for Intelligent Systems

    OpenAIRE

    Cuevas-Tello, Juan C.; Valenzuela-Rendon, Manuel; Nolazco-Flores, Juan A.

    2016-01-01

    Developing Intelligent Systems involves artificial intelligence approaches including artificial neural networks. Here, we present a tutorial of Deep Neural Networks (DNNs), and some insights about the origin of the term "deep"; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning for unlabeled data, is shown. Deep Belief Networks (DBNs), which a...

  16. Reinforcement learning of recurrent neural network for temporal coding

    OpenAIRE

    Kimura, Daichi; Hayakawa, Yoshinori

    2006-01-01

    We study a reinforcement learning for temporal coding with neural network consisting of stochastic spiking neurons. In neural networks, information can be coded by characteristics of the timing of each neuronal firing, including the order of firing or the relative phase differences of firing. We derive the learning rule for this network and show that the network consisting of Hodgkin-Huxley neurons with the dynamical synaptic kinetics can learn the appropriate timing of each neuronal firing. ...

  17. An Interval-valued Fuzzy Competitive Neural Network

    Institute of Scientific and Technical Information of China (English)

    DENG Guan-nan; ZOU Kai-qi

    2006-01-01

    Because interval values are quite natural in clustering, an interval-valued fuzzy competitive neural network is proposed. First, this paper gives several definitions of distance relating to interval numbers. It then describes the method of preprocessing the input data, the structure of the network and the learning algorithm of the interval-valued fuzzy competitive neural network, and analyses the principle of the learning algorithm. Finally, an experiment is used to test the validity of the network.

  18. The EEG Signal Prediction by Using Neural Network

    Directory of Open Access Journals (Sweden)

    Jitka Mohylova

    2008-01-01

    Full Text Available A neural network is a computational model based on abstracting the features of biological neural systems. Neural networks have many uses in technical fields: they have been applied successfully to speech recognition, image analysis and adaptive control, and to the construction of software agents or autonomous robots. This paper describes the use of neural networks for ECG signal prediction. ECG signal prediction can be used for automated detection of an irregular heartbeat (extrasystole). The automated detection system for unexpected abnormalities is also described in this paper.
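
    A toy sketch of the prediction-based detection idea (not the paper's system): a simple predictor estimates the next sample from the previous ones, and samples whose prediction error is unusually large are flagged as potential irregular beats. The synthetic signal, the linear autoregressive stand-in for the neural predictor, and the threshold are all assumptions.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic periodic "ECG-like" signal with a few injected irregular beats.
      t = np.arange(2000)
      signal = np.sin(2 * np.pi * t / 40.0)
      anomalies = [500, 1200, 1700]
      signal[anomalies] += 2.0                      # extrasystole-like spikes

      # Fit a simple autoregressive predictor (a stand-in for the neural predictor).
      lag = 10
      X = np.array([signal[i - lag:i] for i in range(lag, len(signal))])
      y = signal[lag:]
      w, *_ = np.linalg.lstsq(X, y, rcond=None)

      # Flag samples whose prediction error is far above the typical error.
      err = np.abs(X @ w - y)
      threshold = err.mean() + 6 * err.std()
      flagged = np.nonzero(err > threshold)[0] + lag
      print("flagged sample indices:", flagged)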

  19. Performance comparison of neural networks for undersea mine detection

    Science.gov (United States)

    Toborg, Scott T.; Lussier, Matthew; Rowe, David

    1994-03-01

    This paper describes the design of an undersea mine detection system and compares the performance of various neural network models for classification of features extracted from side-scan sonar images. Techniques for region of interest and statistical feature extraction are described. Subsequent feature analysis verifies the need for neural network processing. Several different neural and conventional pattern classifiers are compared including: k-Nearest Neighbors, Backprop, Quickprop, and LVQ. Results using the Naval Image Database from Coastal Systems Station (Panama City, FL) indicate neural networks have consistently superior performance over conventional classifiers. Concepts for further performance improvements are also discussed including: alternative image preprocessing and classifier fusion.

  20. Apprenticeship effect on a neural network

    International Nuclear Information System (INIS)

    Utilization of a neural network for determining the value of the impact parameter in heavy-ion reactions is a two-stage process: a training (apprenticeship) stage followed by a utilization stage. During the first stage, the network parameters are determined by means of a trial set for which the inputs and outputs are known. To generate the trial set, a numerical simulation code was used. In order to estimate the biases of this procedure, we generated trial sets with two different models: a dynamical transport model (QMD) coupled to a de-excitation code (GEMINI), and an event generator based on a statistical approach (EUGENE). The procedure was applied to determine the impact parameter in 40Ca + 40Ca reactions. The model dependence of the training procedure is studied thoroughly. In both cases the outputs provided by the network are smaller than 7 fm, which can be expected in the case of complete events. The network trained with QMD gives outputs for input parameters within a larger range of values, which is due to the fact that EUGENE does not contain 'Deep Inelastic'

  1. Deep Recurrent Neural Networks for Supernovae Classification

    CERN Document Server

    Charnock, Tom

    2016-01-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae. The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC dataset (around 10^4 supernovae) we obtain a type Ia vs non type Ia classification accuracy of 94.8%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and a SPCC figure-of-merit F1 = 0.64. We also apply a pre-trained model to obtain classification probabilities as a function of time, and show it can give early indications of supernovae type. Our method is competitive with existing algorithms and has appl...

  2. Neural networks to formulate special fats

    Directory of Open Access Journals (Sweden)

    Garcia, R. K.

    2012-09-01

    Full Text Available Neural networks are a branch of artificial intelligence based on the structure and development of biological systems, having as its main characteristic the ability to learn and generalize knowledge. They are used for solving complex problems for which traditional computing systems have a low efficiency. To date, applications have been proposed for different sectors and activities. In the area of fats and oils, the use of neural networks has focused mainly on two issues: the detection of adulteration and the development of fatty products. The formulation of fats for specific uses is the classic case of a complex problem where an expert or group of experts defines the proportions of each base, which, when mixed, provide the specifications for the desired product. Some conventional computer systems are currently available to assist the experts; however, these systems have some shortcomings. This article describes in detail a system for formulating fatty products, shortenings or special fats, from three or more components by using neural networks (MIX. All stages of development, including design, construction, training, evaluation, and operation of the network will be outlined.

  3. INDUCTION OF DECISION TREES BASED ON A FUZZY NEURAL NETWORK

    Institute of Scientific and Technical Information of China (English)

    Tang Bin; Hu Guangrui; Mao Xiaoquan

    2002-01-01

    Based on a fuzzy neural network, the letter presents an approach for the induction of decision trees. The approach makes use of the weights of fuzzy mappings in the fuzzy neural network which has been trained. It can realize the optimization of fuzzy decision trees by branch cutting, and improve the ratio of correctness and efficiency of the induction of decision trees.

  4. Computational Ecology: Artificial Neural Networks and Their Applications

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2011-04-01

    Full Text Available A book, Computational Ecology: Artificial Neural Networks and Their Applications, published in 2010, was introduced and reviewed. This book provides readers with deep insights on algorithms, codes, and applications of artificial neural networks in ecology. A science discipline, computational ecology, is clearly defined and outlined in the book.

  5. Expert System Based on Data Mining and Neural Networks

    Institute of Scientific and Technical Information of China (English)

    NI Zhi-wei; JIA Rui-yu

    2001-01-01

    On the basis of data mining and neural networks, this paper proposes a general framework for a neural network expert system and discusses the key techniques in this kind of system. We apply these ideas to an agricultural expert system to discover previously unknown useful knowledge, with satisfactory results.

  6. Artificial neural networks in predicting current in electric arc furnaces

    Science.gov (United States)

    Panoiu, M.; Panoiu, C.; Iordan, A.; Ghiormez, L.

    2014-03-01

    The paper presents a study of the possibility of using artificial neural networks for the prediction of the current and the voltage of Electric Arc Furnaces. Multi-layer perceptron and radial based functions Artificial Neural Networks implemented in Matlab were used. The study is based on measured data items from an Electric Arc Furnace in an industrial plant in Romania.

  7. A Neural Network Approach to the Classification of Autism.

    Science.gov (United States)

    Cohen, Ira L.; And Others

    1993-01-01

    Neural network technology was compared with simultaneous and stepwise linear discriminant analysis in terms of their ability to classify and predict persons (n=138) as having autism or mental retardation. The neural network methodology was superior in both classifying groups and in generalizing to new cases that were not part of the training…

  8. Multiple image sensor data fusion through artificial neural networks

    Science.gov (United States)

    With multisensor data fusion technology, the data from multiple sensors are fused in order to make a more accurate estimation of the environment through measurement, processing and analysis. Artificial neural networks are the computational models that mimic biological neural networks. With high per...

  9. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  10. Application of Neural Networks to House Pricing and Bond Rating

    NARCIS (Netherlands)

    Daniëls, H.A.M.; Kamp, B.; Verkooijen, W.J.H.

    1997-01-01

    Feed forward neural networks receive growing attention as a data modelling tool in economic classification problems. It is well known that controlling the design of a neural network can be cumbersome. Inaccuracies may lead to a manifold of problems in the application such as higher errors due to l

  11. FUZZY NEURAL NETWORK FOR OBJECT IDENTIFICATION ON INTEGRATED CIRCUIT LAYOUTS

    Directory of Open Access Journals (Sweden)

    A. A. Doudkin

    2015-01-01

    Full Text Available A fuzzy neural network model based on the neocognitron is proposed to identify layout objects in images of the topological layers of integrated circuits. Testing of the model on images of real chip layouts showed a higher degree of identification for the proposed neural network in comparison with the base neocognitron.

  12. Optimal Brain Surgeon on Artificial Neural Networks in

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Job, Jonas Hultmann; Klyver, Katrine;

    2012-01-01

    It is shown how the procedure known as optimal brain surgeon can be used to trim and optimize artificial neural networks in nonlinear structural dynamics. Besides optimizing the neural network, and thereby minimizing the computational cost in simulation, the surgery procedure can also serve as a quick...

  13. THE ARTIFICIAL NEURAL NETWORK OF FORECASTING OPEN MINING SLOPE STABILITY

    Institute of Scientific and Technical Information of China (English)

    魏春启; 白润才

    2000-01-01

    An artificial neural network model for forecasting open mining slope stability is established based on neural network theory and methods. The nonlinear mapping relation between the slope stability target and its influencing factors is described, and a method for forecasting open mining slope stability is put forward.

  14. Neural network-genetic programming for sediment transport

    Digital Repository Service at National Institute of Oceanography (India)

    Singh, A.K.; Deo, M.C.; SanilKumar, V.

    In this paper an alternative approach based on a combination of two soft computing tools, namely neural networks and genetic programming, is suggested. Such a combination was found to produce better results than the individual use of neural networks or genetic...

  15. Automatic generation of a neural network architecture using evolutionary computation

    NARCIS (Netherlands)

    Vonk, E.; Jain, L.C.; Veelenturf, L.P.J.; Johnson, R.

    1995-01-01

    This paper reports the application of evolutionary computation in the automatic generation of a neural network architecture. It is a usual practice to use trial and error to find a suitable neural network architecture. This is not only time consuming but may not generate an optimal solution for a gi

  16. Mapping Neural Network Derived from the Parzen Window Estimator

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Hartmann, U.

    1992-01-01

    The article presents a general theoretical basis for the construction of mapping neural networks. The theory is based on the Parzen Window estimator for...
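
    The truncated abstract points to Parzen-window-based mapping; a common concrete form of that idea is the kernel-weighted (Nadaraya-Watson style) estimator sketched below. This is an illustrative reading of the approach, not the article's derivation; the Gaussian kernel, bandwidth and data are assumptions.

      import numpy as np

      def parzen_map(x_query, X_train, Y_train, bandwidth=0.3):
          """Map x_query to an output as a Parzen-window weighted average of the
          training outputs (Nadaraya-Watson style estimator)."""
          d2 = ((X_train - x_query) ** 2).sum(axis=1)
          w = np.exp(-d2 / (2.0 * bandwidth ** 2))
          return (w[:, None] * Y_train).sum(axis=0) / w.sum()

      rng = np.random.default_rng(8)
      X_train = rng.uniform(-1, 1, size=(200, 2))
      Y_train = np.column_stack([np.sin(3 * X_train[:, 0]), X_train[:, 1] ** 2])

      print(parzen_map(np.array([0.2, -0.5]), X_train, Y_train))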

  17. Neural Network for Optimization of Existing Control Systems

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1995-01-01

    The purpose of this paper is to develop methods to use Neural Network based Controllers (NNC) as an optimization tool for existing control systems.

  18. Implementation of neural network based non-linear predictive control

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole;

    1999-01-01

    of non-linear systems. GPC is model based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis...

  19. FUZZY NEURAL NETWORK FOR OBJECT IDENTIFICATION ON INTEGRATED CIRCUIT LAYOUTS

    OpenAIRE

    Doudkin, A. A.

    2016-01-01

    A fuzzy neural network model based on the neocognitron is proposed to identify layout objects in images of the topological layers of integrated circuits. Testing of the model on images of real chip layouts showed a higher degree of identification for the proposed neural network in comparison with the base neocognitron.

  20. Neural network approach for solving the maximal common subgraph problem.

    Science.gov (United States)

    Shoukry, A; Aboutabl, M

    1996-01-01

    A new formulation of the maximal common subgraph problem (MCSP), that is implemented using a two-stage Hopfield neural network, is given. Relative merits of this proposed formulation, with respect to current neural network-based solutions as well as classical sequential-search-based solutions, are discussed.

  1. Quantum Neural Networks%量子神经网络

    Institute of Scientific and Technical Information of China (English)

    解光军; 庄镇泉

    2001-01-01

    In recent years, research on the combination of quantum theory and neural networks has attracted much attention. This paper reviews the development and status of this field. Some quantum neural network (QNN) models are discussed, and their applications and prospects are also given, which show that QNNs have great competence and potential in the field of computational intelligence.

  2. Island Model based Differential Evolution Algorithm for Neural Network Training

    Directory of Open Access Journals (Sweden)

    Htet Thazin Tike Thein

    Full Text Available There exist many approaches to training neural networks. In this system, training of a feed-forward neural network using island-model-based differential evolution is introduced. Differential Evolution (DE) has been used to determine optimal values for ANN ...
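
    A compact sketch of the core training loop: plain differential evolution over the flattened weight vector of a tiny network on a toy task. The island-model migration layer is omitted and all hyperparameters are assumed, so this is an illustration of the principle rather than the authors' system.

      import numpy as np

      rng = np.random.default_rng(9)

      # Tiny feed-forward network whose weights are packed into one flat vector.
      n_in, n_hidden = 2, 4
      n_params = n_in * n_hidden + n_hidden + n_hidden + 1

      def unpack(theta):
          i = 0
          W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
          b1 = theta[i:i + n_hidden]; i += n_hidden
          W2 = theta[i:i + n_hidden]; i += n_hidden
          b2 = theta[i]
          return W1, b1, W2, b2

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0.0, 1.0, 1.0, 0.0])                      # XOR as a toy task

      def loss(theta):
          W1, b1, W2, b2 = unpack(theta)
          out = np.tanh(X @ W1 + b1) @ W2 + b2
          return np.mean((out - y) ** 2)

      # Standard DE/rand/1/bin on the flattened weights with greedy selection.
      pop_size, F, CR = 30, 0.7, 0.9
      pop = rng.standard_normal((pop_size, n_params))
      fitness = np.array([loss(p) for p in pop])
      for generation in range(300):
          for i in range(pop_size):
              a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
              mutant = a + F * (b - c)
              cross = rng.random(n_params) < CR
              trial = np.where(cross, mutant, pop[i])
              f = loss(trial)
              if f < fitness[i]:
                  pop[i], fitness[i] = trial, f

      print("best training MSE:", fitness.min())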

  3. Using Neural Networks to Predict MBA Student Success

    Science.gov (United States)

    Naik, Bijayananda; Ragothaman, Srinivasan

    2004-01-01

    Predicting MBA student performance for admission decisions is crucial for educational institutions. This paper evaluates the ability of three different models--neural networks, logit, and probit to predict MBA student performance in graduate programs. The neural network technique was used to classify applicants into successful and marginal student…

  4. The use of neural networks for approximation of nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Korovin, Yu. A.; Maksimushkina, A. V., E-mail: AVMaksimushkina@mephi.ru [National Research Nuclear University MEPhI (Moscow Engineering Physics Institute) (Russian Federation)

    2015-12-15

    The article discusses the possibility of using neural networks for approximation or reconstruction of data such as the reaction cross sections. The quality of the approximation using fitting criteria is also evaluated. The activity of materials under irradiation is calculated from data obtained using neural networks.

  5. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  6. Design of Airport Rigid Runway Structures with Neural Networks

    OpenAIRE

    Covatariu, Gabriela; Zarojanu, H.; Ciongradi, I.; Budescu, Mihai

    2011-01-01

    Computing with neural networks lies between engineering and artificial intelligence. It uses classical engineering mathematical techniques and heuristic methods specific to artificial intelligence. This paper illustrates the use of neural networks to improve the computing method by increasing the accuracy in the design of concrete slabs for airport infrastructure. The results obtained using models developed with the finite element method were used to create neural netw...

  7. Estimating stock market movements with neural network approach

    OpenAIRE

    KARAATLI, Meltem; GÜNGÖR, İbrahim; DEMİR, Yusuf; KALAYCI, Şeref

    2005-01-01

    In developing countries like Turkey, many kinds of speculative movements, which cause sharp up and down movements in the stock market, make proper estimation of share prices difficult. Financial analysts use various methods to determine share prices; neural network analysis is one that has become popular and frequently used in recent years. In this paper, we estimate the Istanbul stock market index using a neural network approach. In the application, the performance of the neural netw...

  8. Stability for delayed reaction-diffusion neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Allegretto, W. [Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, T6G 2G1 (Canada)]. E-mail: wallegre@math.ualberta.ca; Papini, D. [Dipartimento di Ingegneria dell' Informazione, Universita degli Studi di Siena, via Roma 56, 53100 Siena (Italy)]. E-mail: papini@dii.unisi.it

    2007-01-15

    We consider a Hopfield neural network model with diffusive terms, non-decreasing and discontinuous neural activation functions, time-dependent delays and time-periodic coefficients. We provide conditions on interconnection matrices and delays which guarantee that for each periodic input the model has a unique periodic solution that is globally exponentially stable. Even in the case without diffusion, such conditions improve recent results on classical delayed Hopfield neural networks with discontinuous activation functions. Numerical examples illustrate the results.

  9. Implementing Neural Networks Using VLSI for Image Processing (compression)

    OpenAIRE

    Sindhu R; Dr Shilpa Mehta

    2015-01-01

    Biological systems process analog signals such as images and sound efficiently. To process information the way biological systems do, we make use of ANNs (Artificial Neural Networks). The focus of this paper is to review the implementation of a neural network architecture using analog components such as a Gilbert cell multiplier, a differential amplifier for the neuron activation function and a tan-sigmoid function circuit using MOS transistors. The neural architecture is trained usin...

  10. 23rd Workshop of the Italian Neural Networks Society (SIREN)

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2014-01-01

    This volume collects a selection of contributions which were presented at the 23rd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare, Salerno, Italy, during May 23-24, 2013. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book – as well as the workshop – is organized in two main components, a special session and a group of regular sessions featuring different aspects and points of view of artificial neural networks, artificial and natural intelligence, as well as psychological and cognitive theories for modeling human behaviors and human machine interactions, including Information Communication applications of compelling interest.

  11. Neural networks and their potential application in nuclear power plants

    International Nuclear Information System (INIS)

    A neural network is a data processing system consisting of a number of simple, highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks have emerged in the past few years as an area of unusual opportunity for research, development and application to a variety of real world problems. Indeed, neural networks exhibit characteristics and capabilities not provided by any other technology. Examples include reading Japanese Kanji characters and human handwriting, reading a typewritten manuscript aloud, compensating for alignment errors in robots, interpreting very noisy signals (e.g., electroencephalograms), modeling complex systems that cannot be modeled mathematically, and predicting whether proposed loans will be good or fail. This paper presents a brief tutorial on neural networks and describes research on the potential applications to nuclear power plants

  12. Artificial neural network based approach to transmission lines protection

    International Nuclear Information System (INIS)

    The aim of this paper is to present an accurate fault detection technique for high speed distance protection using artificial neural networks. The feed-forward multi-layer neural network with the use of supervised learning and the common training rule of error back-propagation is chosen for this study. Information available locally at the relay point is passed to a neural network in order for an assessment of the fault location to be made. However in practice there is a large amount of information available, and a feature extraction process is required to reduce the dimensionality of the pattern vectors, whilst retaining important information that distinguishes the fault point. The choice of features is critical to the performance of the neural networks learning and operation. A significant feature in this paper is that an artificial neural network has been designed and tested to enhance the precision of the adaptive capabilities for distance protection

  13. A fuzzy neural network evolved by particle swarm optimization

    Institute of Scientific and Technical Information of China (English)

    PENG Zhi-ping; PENG Hong

    2007-01-01

    A cooperative system of a fuzzy logic model and a fuzzy neural network (CSFLMFNN) is proposed, in which a fuzzy logic model is acquired from domain experts and a fuzzy neural network is generated and prewired according to the model. Then PSO-CSFLMFNN is constructed by introducing particle swarm optimization (PSO) into the cooperative system, instead of the commonly used evolutionary algorithms, to evolve the prewired fuzzy neural network. The evolutionary fuzzy neural network implements accurate fuzzy inference without rule matching. PSO-CSFLMFNN is applied to intelligent fault diagnosis for petrochemical engineering equipment, in which the cooperative system is proved to be effective. The application results show that the evolutionary fuzzy neural network remarkably outperforms the one evolved by a genetic algorithm in convergence rate and generalization precision.
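
    The abstract does not specify the PSO variant or how the prewired fuzzy neural network is encoded; the sketch below shows standard global-best PSO minimising a generic parameter-vector objective, which is the role the network's training error would play (all constants and the fitness function are assumptions).

        import numpy as np

        # Minimal global-best PSO sketch for tuning a parameter vector (e.g. the
        # weights of a prewired fuzzy neural network). Fitness, dimensions and
        # hyperparameters are illustrative assumptions, not the paper's values.
        rng = np.random.default_rng(1)

        def fitness(theta):
            # Placeholder objective: in the paper this would be the network's
            # training error on the fault-diagnosis data set.
            return np.sum((theta - 0.5) ** 2)

        dim, n_particles = 20, 30
        w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration constants
        x = rng.uniform(-1, 1, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([fitness(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()

        for it in range(200):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            f = np.array([fitness(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best fitness:", pbest_f.min())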

  14. New cooperative projection neural network for nonlinearly constrained variational inequality

    Institute of Scientific and Technical Information of China (English)

    XIA YouSheng

    2009-01-01

    This paper proposes a new cooperative projection neural network (CPNN), which combines automatically three individual neural network models with a common projection term. As a special case, the proposed CPNN can include three recent recurrent neural networks for solving monotone variational inequality problems with limit or linear constraints, respectively. Under the monotonicity condition of the corresponding Lagrangian mapping, the proposed CPNN is theoretically guaranteed to solve monotone variational inequality problems and a class of nonmonotone variational inequality problems with linear and nonlinear constraints. Unlike the extended projection neural network, the proposed CPNN has no limitation on the initial point for global convergence. Compared with other related cooperative neural networks and numerical optimization algorithms, the proposed CPNN has a low computational complexity and requires weak convergence conditions. An application in real-time grasping force optimization and examples demonstrate good performance of the proposed CPNN.

  15. Thermoelastic steam turbine rotor control based on neural network

    Science.gov (United States)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. Simultaneously, neural networks are algorithms which can be implemented on PLC controllers. This allows for the application of neural networks to control steam turbine stress in industrial power plants.

  16. Liquefaction Microzonation of Babol City Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Choobbasti, A.J.; Barari, Amin

    2012-01-01

    ... that will be less susceptible to damage during earthquakes. The scope of the present study is to prepare the liquefaction microzonation map for the Babol city based on the Seed and Idriss (1983) method using an artificial neural network. Artificial neural network (ANN) is one of the artificial intelligence (AI) approaches that can be classified as machine learning. Simplified methods have been practiced by researchers to assess the nonlinear liquefaction potential of soil. In order to address the collective knowledge built up in conventional liquefaction engineering, an alternative general regression neural network model is proposed in this paper. To meet this objective, an effort is made to introduce data from a total of 30 boreholes in an area of 7 km2, which include the results of field tests, into the neural network model, and the prediction of the artificial neural network is checked in some test boreholes; finally, the liquefaction...

  17. Role of neural network models for developing speech systems

    Indian Academy of Sciences (India)

    K Sreenivasa Rao

    2011-10-01

    This paper discusses the application of neural networks for developing different speech systems. Prosodic parameters of speech at the syllable level depend on positional, contextual and phonological features of the syllables. In this paper, neural networks are explored to model the prosodic parameters of the syllables from their positional, contextual and phonological features. The prosodic parameters considered in this work are duration and the sequence of pitch $(F_0)$ values of the syllables. These prosody models are further examined for applications such as text-to-speech synthesis, speech recognition, speaker recognition and language identification. Neural network models in a voice conversion system are explored for capturing the mapping functions between source and target speakers at source, system and prosodic levels. We have also used neural network models for characterizing the emotions present in speech. For identification of dialects in Hindi, neural network models are used to capture the dialect-specific information from spectral and prosodic features of speech.

  18. Simulation Model of Magnetic Levitation Based on NARX Neural Networks

    Directory of Open Access Journals (Sweden)

    Dragan Antić

    2013-04-01

    Full Text Available In this paper, we present an analysis of different training types for a nonlinear autoregressive neural network used for simulation of a magnetic levitation system. First, the model of this highly nonlinear system is described, and after that the Nonlinear AutoRegressive eXogenous (NARX) neural network model is given. Also, numerical optimization techniques for improved network training are described. It is verified that the NARX neural network can be successfully used to simulate a real magnetic levitation system if a suitable training procedure is chosen, and the two best training types, obtained from experimental results, are described in detail.
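
    Neither the plant data nor the training settings are reproduced here; the sketch below only illustrates the NARX idea of regressing the next output on lagged outputs and lagged inputs with a small feed-forward network, using a toy nonlinear system and assumed lag orders.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # NARX sketch: predict y(k) from lagged outputs and lagged inputs.
        # The toy system, lag orders and network size are illustrative assumptions.
        rng = np.random.default_rng(2)
        N = 2000
        u = rng.uniform(-1, 1, N)                      # excitation signal
        y = np.zeros(N)
        for k in range(2, N):                          # toy nonlinear plant
            y[k] = 0.6 * y[k-1] - 0.1 * y[k-2] + 0.5 * u[k-1] ** 3 + 0.1 * u[k-2]

        ny, nu = 2, 2                                   # output and input lag orders
        rows = range(max(ny, nu), N)
        X = np.array([[y[k-1], y[k-2], u[k-1], u[k-2]] for k in rows])
        t = np.array([y[k] for k in rows])

        narx = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        narx.fit(X[:1500], t[:1500])                    # series-parallel (open-loop) training
        print("one-step-ahead MSE:", np.mean((narx.predict(X[1500:]) - t[1500:]) ** 2))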

  19. Multispectral thermometry based on neural network

    Institute of Scientific and Technical Information of China (English)

    孙晓刚; 戴景民

    2003-01-01

    In order to overcome the effect of the assumed relation between emissivity and wavelength on the measurement of true temperature and spectral emissivity for most engineering materials, a neural network based method is proposed for data processing, while a blackbody furnace and three optical filters with known spectral transmittance curves were used to make up a true target. The experimental results show that the calculated temperatures are in good agreement with the temperature of the blackbody furnace, and the calculated spectral emissivity curves are in good agreement with the spectral transmittance curves of the filters. The proposed method has thus proved effective for solving the problem of true temperature and emissivity measurement without assuming a particular relation between emissivity and wavelength.

  20. Track filtering by robust neural network

    International Nuclear Information System (INIS)

    In the present paper we study the following problems of track information extraction by the artificial neural network (ANN) rotor model: providing an initial ANN configuration by an algorithm general enough to be applicable to any discrete detector in or out of a magnetic field; robustness to heavily contaminated raw data (up to 100% signal-to-noise ratio); and stability with respect to growing event multiplicity. These problems were addressed by corresponding innovations of our model, namely: by a special one-dimensional histogramming, by multiplying weights by a specially designed robust multiplier, and by replacing the simulated annealing schedule by ANN dynamics with an optimally fixed temperature. Our approach is valid for both circular and straight (non-magnetic) tracks and is tested on 2D simulated data contaminated by 100% noise points distributed uniformly. To be closer to reality in our simulation, we keep the parameters of the cylindrical spectrometer ARES. 12 refs.; 9 figs

  1. Robust Convolutional Neural Networks for Image Recognition

    Directory of Open Access Journals (Sweden)

    Hayder M. Albeahdili

    2015-11-01

    Full Text Available Recently, image recognition has become a vital task addressed by several methods. One of the most interesting methods is the Convolutional Neural Network (CNN), which is widely used for this purpose. However, since some tasks have small features that are an essential part of the task, classification using a CNN is not efficient, because most of those features diminish before reaching the final stage of classification. In this work, essential parameters that can influence model performance are analyzed and explored. Furthermore, different elegant contemporary models are recruited to introduce a new leveraging model. Finally, a new CNN architecture is proposed which achieves state-of-the-art classification results on different challenge benchmarks. The experiments are conducted on the MNIST, CIFAR-10, and CIFAR-100 datasets. Experimental results show that the proposed architecture outperforms and achieves superior results compared to the most contemporary approaches.
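
    The abstract does not give the proposed architecture; as a point of reference, the following is a minimal sketch of a small convolutional network for 32x32 RGB inputs (CIFAR-10-sized), written in PyTorch purely to illustrate the kind of model under discussion.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        # Minimal CNN sketch for 32x32 RGB images (CIFAR-10-sized inputs).
        # Channel counts and layer depths are illustrative, not the paper's design.
        class SmallCNN(nn.Module):
            def __init__(self, n_classes=10):
                super().__init__()
                self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
                self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
                self.fc1 = nn.Linear(64 * 8 * 8, 128)
                self.fc2 = nn.Linear(128, n_classes)

            def forward(self, x):
                x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 16x16
                x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16x16 -> 8x8
                x = x.flatten(1)
                x = F.relu(self.fc1(x))
                return self.fc2(x)

        model = SmallCNN()
        x = torch.randn(4, 3, 32, 32)                        # placeholder batch
        loss = F.cross_entropy(model(x), torch.tensor([0, 1, 2, 3]))
        loss.backward()                                      # one illustrative training step
        print(loss.item())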

  2. Mesh deformation based on artificial neural networks

    Science.gov (United States)

    Stadler, Domen; Kosel, Franc; Čelič, Damjan; Lipej, Andrej

    2011-09-01

    In this article a new mesh deformation algorithm based on artificial neural networks is introduced. This method is a point-to-point method, meaning that it does not use connectivity information for calculation of the mesh deformation. Two already known point-to-point methods, based on interpolation techniques, are also presented. In contrast to the two known interpolation methods, the new method does not require a summation over all boundary nodes for one displacement calculation. The consequence of this fact is a shorter computational time of mesh deformation, which is proven by different deformation tests. The quality of the deformed meshes with all three deformation methods was also compared. Finally, the generated and the deformed three-dimensional meshes were used in the computational fluid dynamics numerical analysis of a Francis water turbine. A comparison of the analysis results was made to prove the applicability of the new method in everyday computation.

  3. Neural network training as a dissipative process.

    Science.gov (United States)

    Gori, Marco; Maggini, Marco; Rossi, Alessandro

    2016-09-01

    This paper analyzes the practical issues and reports some results on a theory in which learning is modeled as a continuous temporal process driven by laws describing the interactions of intelligent agents with their own environment. The classic regularization framework is paired with the idea of temporal manifolds by introducing the principle of least cognitive action, which is inspired by the related principle of mechanics. The introduction of the counterparts of the kinetic and potential energy leads to an interpretation of learning as a dissipative process. As an example, we apply the theory to supervised learning in neural networks and show that the corresponding Euler-Lagrange differential equations can be connected to the classic gradient descent algorithm on the supervised pairs. We give preliminary experiments to confirm the soundness of the theory. PMID:27389569

  4. Delayed switching applied to memristor neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Frank Z.; Yang Xiao; Lim Guan [Future Computing Group, School of Computing, University of Kent, Canterbury (United Kingdom); Helian Na [School of Computer Science, University of Hertfordshire, Hatfield (United Kingdom); Wu Sining [Xyratex, Havant (United Kingdom); Guo Yike [Department of Computing, Imperial College, London (United Kingdom); Rashid, Md Mamunur [CERN, Geneva (Switzerland)

    2012-04-01

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the "delayed switching effect." In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.

  5. Evolving A-Type Artificial Neural Networks

    CERN Document Server

    Orr, Ewan

    2011-01-01

    We investigate Turing's notion of an A-type artificial neural network. We study a refinement of Turing's original idea, motivated by work of Teuscher, Bull, Preen and Copeland. Our A-types can process binary data by accepting and outputting sequences of binary vectors; hence we can associate a function to an A-type, and we say the A-type represents the function. There are two modes of data processing: clamped and sequential. We describe an evolutionary algorithm, involving graph-theoretic manipulations of A-types, which searches for A-types representing a given function. The algorithm uses both mutation and crossover operators. We implemented the algorithm and applied it to three benchmark tasks. We found that the algorithm performed much better than a random search. For two out of the three tasks, the algorithm with crossover performed better than a mutation-only version.

  6. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides this contribution we also suggest a new way to do pixel-wise annotation of SAR images that replaces a human expert manual segmentation process, which is both slow and troublesome. Our method for annotation relies on 3D CAD models of objects and scene, and converts these to labels for all pixels in a SAR image. Our algorithms are evaluated on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset which was released by the Defence Advanced Research Projects Agency during the 1990s. The method is not restricted to the type of targets imaged in MSTAR but can easily be extended to any SAR data where...

  7. Evolving neural networks through augmenting topologies.

    Science.gov (United States)

    Stanley, Kenneth O; Miikkulainen, Risto

    2002-01-01

    An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution. PMID:12180173

  8. A Neural Network for Generating Adaptive Lessons

    Directory of Open Access Journals (Sweden)

    Hassina Seridi-Bouchelaghem

    2005-01-01

    Full Text Available Traditional sequencing technologies developed in the field of intelligent tutoring systems have not found an immediate place in large-scale Web-based education. This study investigates the use of computational intelligence for adaptive lesson generation in a distance learning environment over the Web. An approach for adaptive pedagogical hypermedia document generation is proposed and implemented in a prototype called KnowledgeClass. This approach is based on a specialized artificial neural network model. The system allows automatic generation of individualised courses according to the learner's goal and previous knowledge, and can dynamically adapt the course according to the learner's success in acquiring knowledge. Several experiments showed the effectiveness of the proposed method.

  9. Sign Language Recognition using Neural Networks

    Directory of Open Access Journals (Sweden)

    Sabaheta Djogic

    2014-11-01

    Full Text Available – Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems are made for overcoming problems in communication with deaf people. This encouraged us to develop a system for the Bosnian sign language, since there is a need for such a system. The work is done with the use of digital image processing methods, providing a system that trains a multilayer neural network using a back-propagation algorithm. Images are processed by feature extraction methods, and the data set has been created by a masking method. Training is done using a cross-validation method for better performance; thus, an accuracy of 84% is achieved.

  10. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618
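
    Polychronous spiking networks are usually built from Izhikevich-type neurons; the sketch below simulates a single such neuron with standard regular-spiking parameters and a placeholder input current, and does not reproduce the paper's financial encoding or network topology.

        import numpy as np

        # Izhikevich spiking-neuron sketch (regular-spiking parameters), the usual
        # building block of polychronous networks. Input current is a placeholder.
        a, b, c, d = 0.02, 0.2, -65.0, 8.0
        dt, T = 0.5, 1000.0                     # time step and duration in ms
        steps = int(T / dt)
        v, u = -65.0, b * -65.0
        spikes = []
        for i in range(steps):
            I = 10.0 if i * dt > 100 else 0.0   # step input current (placeholder)
            v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                       # spike, then reset
                spikes.append(i * dt)
                v, u = c, u + d
        print("number of spikes:", len(spikes))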

  11. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  12. Financial time series prediction using spiking neural networks.

    Directory of Open Access Journals (Sweden)

    David Reid

    Full Text Available In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  13. Integration of Unascertained Method with Neural Networks and Its Application

    Directory of Open Access Journals (Sweden)

    Huawang Shi

    2011-11-01

    Full Text Available This paper presents the adoption of an artificial neural network (ANN) model and an unascertained system to assist decision-makers in financial early-warning forecasting in China. The artificial neural network (ANN) has outstanding machine learning, fault tolerance, parallel reasoning and nonlinear problem processing abilities. An unascertained system, which imitates the logical thinking of the human brain, is a kind of mathematical tool used to deal with imprecise and uncertain knowledge. By integrating the unascertained method with neural network technology, the reasoning process of the network coding can be tracked, and the output of the network can be given a physical explanation. An application case shows that combining unascertained systems with feedforward artificial neural networks yields more reasonable results and a greater advantage in nonlinear mapping, and can handle more complete types of data.

  14. Fault detection and diagnosis using neural network approaches

    Science.gov (United States)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
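
    As a sketch of the radial basis function approach mentioned above: Gaussian basis functions centred on selected training points are followed by a linear read-out fitted by least squares; the centres, width and synthetic "fault" data below are placeholders, not the paper's models.

        import numpy as np

        # Radial basis function network sketch for fault classification:
        # Gaussian hidden units plus a least-squares linear read-out.
        # Centres, width and the synthetic data are illustrative placeholders.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(300, 4))                               # process observables
        y = (np.linalg.norm(X[:, :2], axis=1) > 1.2).astype(float)  # placeholder fault label

        centres = X[rng.choice(len(X), 30, replace=False)]
        sigma = 1.0

        def rbf_features(data):
            d2 = ((data[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        Phi = rbf_features(X)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # linear read-out weights
        pred = (Phi @ w > 0.5).astype(float)
        print("training accuracy:", (pred == y).mean())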

  15. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: 1) Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. 2) Amongst numerous training algorithms, only the Recursive Prediction Error Method using a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System...

  16. Impulsive Neural Networks Algorithm Based on the Artificial Genome Model

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-05-01

    Full Text Available To describe gene regulatory networks, this article takes the framework of the artificial genome model and proposes an impulsive neural networks algorithm based on it. Firstly, gene expression and the cell division tree are applied to generate spiking neurons with specific attributes, the neural network structure, the connection weights and the specific learning rules of each neuron. Next, the gene segment duplication and divergence model is applied to design the evolutionary algorithm of impulsive neural networks at the level of the artificial genome. The dynamic changes of developmental gene regulatory networks are controlled during the whole evolutionary process. Finally, the food-collecting behavior of an autonomous intelligent agent, driven by the evolved network, is simulated. Experimental results demonstrate that the algorithm in this article has the ability to evolve large-scale impulsive neural networks.

  17. BWR fuel cycle optimization using neural networks

    International Nuclear Information System (INIS)

    Highlights: → OCONN, a new system to optimize all nuclear fuel management steps in a coupled way. → OCONN is based on an artificial recurrent neural network that finds the best combination of partial solutions to each fuel management step. → OCONN works with a fuel lattices' stock, a fuel reloads' stock and a control rod patterns' stock, previously obtained with different heuristic techniques. → Results show OCONN is able to find good combinations according to the global objective function. - Abstract: In nuclear fuel management activities for BWRs, four combinatorial optimization problems are solved: fuel lattice design, axial fuel bundle design, fuel reload design and control rod pattern design. Traditionally, these problems have been solved in separate ways due to their complexity and the required computational resources. In the specialized literature there are some attempts to solve fuel reload and control rod pattern design, or fuel lattice and axial fuel bundle design, in a coupled way. In this paper, the system OCONN, which solves all of these problems in a coupled way, is shown. This system is based on an artificial recurrent neural network that finds the best combination of partial solutions to each problem, in order to maximize a global objective function. The new system works with a fuel lattices' stock, a fuel reloads' stock and a control rod patterns' stock, previously obtained with different heuristic techniques. The system was tested to design an equilibrium cycle with a cycle length of 18 months. Results show that the new system is able to find good combinations. The cycle length is reached and safety parameters are fulfilled.

  18. Forecasting Zakat collection using artificial neural network

    Science.gov (United States)

    Sy Ahmad Ubaidillah, Sh. Hafizah; Sallehuddin, Roselina

    2013-04-01

    'Zakat', "that which purifies" or "alms", is the giving of a fixed portion of one's wealth to charity, generally to the poor and needy. It is one of the five pillars of Islam, and must be paid by all practicing Muslims who have the financial means (nisab). 'Nisab' is the minimum level to determine whether there is a 'zakat' to be paid on the assets. Today, in most Muslim countries, 'zakat' is collected through a decentralized and voluntary system. Under this voluntary system, 'zakat' committees are established, which are tasked with the collection and distribution of 'zakat' funds. 'Zakat' promotes a more equitable redistribution of wealth, and fosters a sense of solidarity amongst members of the 'Ummah'. The Malaysian government has established a 'zakat' center in every state to facilitate the management of 'zakat'. The center has to have a good 'zakat' management system to effectively execute its functions, especially in the collection and distribution of 'zakat'. Therefore, a good forecasting model is needed. The purpose of this study is to develop a forecasting model for Pusat Zakat Pahang (PZP) to predict the total amount of collection from 'zakat' of assets more precisely. In this study, two different Artificial Neural Network (ANN) models using two different learning algorithms are developed: Back Propagation (BP) and Levenberg-Marquardt (LM). Both models are developed and compared in terms of their accuracy performance. The best model is determined based on the lowest mean square error and the highest correlation values. Based on the results obtained from the study, the BP neural network is recommended as the forecasting model to forecast the collection from 'zakat' of assets for PZP.

  19. Implementing Neural Networks Using VLSI for Image Processing (compression

    Directory of Open Access Journals (Sweden)

    Sindhu R

    2015-04-01

    Full Text Available Biological systems process analog signals such as image and sound efficiently. To process information the way biological systems do, we make use of ANN (Artificial Neural Networks). The focus of this paper is to review the implementation of the neural network architecture using analog components like the Gilbert cell multiplier, a differential amplifier for the neuron activation function and a tan-sigmoid function circuit using MOS transistors. The neural architecture is trained using a back-propagation algorithm for compressing the image. This paper surveys the methods of implementing the neural network using VLSI. Different CMOS technologies (i.e. 180 nm, 45 nm, 32 nm) are used for implementing the circuits for arithmetic operations, and the MOS transistors work in the sub-threshold region. In this paper a review is made of how the VLSI architecture is used to implement neural networks and train them for compressing the image.

  20. Architecture Analysis of an FPGA-Based Hopfield Neural Network

    Directory of Open Access Journals (Sweden)

    Miguel Angelo de Abreu de Sousa

    2014-01-01

    Full Text Available Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field in order to approach several practical requirements, including decreasing training and operation times in high-performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular executions, and dynamic adaptation, and works on different types of FPGA-based neural networks have been presented in recent years. This paper aims to address different aspects of architectural characteristics analysis on a Hopfield Neural Network implemented in FPGA, such as maximum operating frequency and chip-area occupancy according to the network capacity. Also, the FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is presented in detail.
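
    For reference, a software sketch of the Hopfield model that such FPGA designs implement: Hebbian storage of bipolar patterns followed by iterative sign updates for recall (network size, pattern count and the corruption level are arbitrary here).

        import numpy as np

        # Software sketch of the Hopfield model: Hebbian storage of bipolar
        # patterns and iterative sign updates for associative recall.
        rng = np.random.default_rng(4)
        N, P = 64, 4
        patterns = rng.choice([-1, 1], size=(P, N))

        W = sum(np.outer(p, p) for p in patterns) / N     # Hebbian weight matrix
        np.fill_diagonal(W, 0)

        def recall(state, steps=20):
            state = state.astype(float).copy()
            for _ in range(steps):                        # synchronous updates
                state = np.sign(W @ state)
                state[state == 0] = 1
            return state

        probe = patterns[0].copy()
        flip = rng.choice(N, 8, replace=False)            # corrupt 8 of 64 bits
        probe[flip] *= -1
        print("recovered:", np.array_equal(recall(probe), patterns[0]))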

  1. Using software and hardware neural networks in a Higgs search

    International Nuclear Information System (INIS)

    The present investigation uses information from computer simulations to train neural networks to identify decays of heavy Higgs particles (mH>>mZ). Results are presented both for software and hardware analog neural networks. The hardware tests include the Intel ETANN and the CLNN32/CLNS64 (experimental, research prototype developed at Bellcore) chip-set implemented in VME-modules. The processing and learning times for the networks are discussed. ((orig.))

  2. Data assimilation: Particle filter and artificial neural networks

    International Nuclear Information System (INIS)

    The goal of this work is to present the performance of Multilayer Perceptron neural networks trained to emulate a Particle Filter in the context of data assimilation. Techniques for data assimilation are applied to the Lorenz system, which presents strong nonlinearity and a chaotic nature. The cross-validation method was used for training the network. Good results were obtained applying the multilayer perceptron neural network.
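
    The particle-filter analyses that the network emulates are not reproduced here; the sketch below only generates the Lorenz-63 trajectory and noisy observations that such an assimilation experiment is built on, using the standard chaotic parameter values.

        import numpy as np

        # Lorenz-63 test bed sketch: generate the chaotic trajectory and noisy
        # observations that a data-assimilation emulator would be trained on.
        # The particle-filter analyses (the network's training targets) are
        # not reproduced here; sigma/rho/beta are the standard chaotic values.
        sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
        dt, steps = 0.01, 5000

        def f(s):
            x, y, z = s
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        traj = np.empty((steps, 3))
        s = np.array([1.0, 1.0, 1.0])
        for k in range(steps):                       # 4th-order Runge-Kutta integration
            k1 = f(s); k2 = f(s + 0.5 * dt * k1)
            k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
            s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            traj[k] = s

        obs = traj[::10] + np.random.default_rng(5).normal(0, 1.0, traj[::10].shape)
        print(traj.shape, obs.shape)                 # model states and noisy observations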

  3. Decoupling Control Method Based on Neural Network for Missiles

    Institute of Scientific and Technical Information of China (English)

    ZHAN Li; LUO Xi-shuang; ZHANG Tian-qiao

    2005-01-01

    In order to make the static state feedback nonlinear decoupling control law for a kind of missile easy to implement in practice, an improvement is discussed. The improvement method is to introduce a BP neural network to approximate the decoupling control laws which are designed for different aerodynamic characteristic points, so a new decoupling control law based on the BP neural network is produced after the network training. The simulation results on an example illustrate that the obtained approach is feasible and effective.

  4. Neurons vs Weights Pruning in Artificial Neural Networks

    OpenAIRE

    Bondarenko, Andrey; Borisov, Arkady; Alekseeva, Ludmila

    2015-01-01

    Artificial neural networks (ANN) are well known for their good classification abilities. Recent advances in deep learning have prompted a second ANN renaissance. But neural networks pose some problems, like the choice of hyperparameters such as the number and sizes of neuron layers, which can greatly influence the classification rate. Thus pruning techniques were developed that can reduce network sizes, increase their generalization abilities and overcome overfitting. Pruning approaches, in contrast to growing neur...

  5. PREDICTION OF LEAF SPRING PARAMETERS USING ARTIFICIAL NEURAL NETWORKS

    OpenAIRE

    Dr.D.V.V.KRISHNA PRASAD; J.P.KARTHIK

    2013-01-01

    In this paper an attempt is made to predict the optimum design parameters using artificial neural networks. For this, static and dynamic analyses of various leaf spring configurations are carried out in ANSYS and used as training data for the neural network. The training data include the cross section of the leaf, the load on the leaf spring, stresses, displacement and natural frequencies. By creating a network using thickness and width of the leaf, load on the leaf spring as input parameters and stresses, ...

  6. DeXpression: Deep Convolutional Neural Network for Expression Recognition

    OpenAIRE

    Burkert, Peter; Trier, Felix; Afzal, Muhammad Zeshan; Dengel, Andreas; Liwicki, Marcus

    2015-01-01

    We propose a convolutional neural network (CNN) architecture for facial expression recognition. The proposed architecture is independent of any hand-crafted feature extraction and performs better than the earlier proposed convolutional neural network based approaches. We visualize the automatically extracted features which have been learned by the network in order to provide a better understanding. The standard datasets, i.e. Extended Cohn-Kanade (CKP) and the MMI Facial Expression Database, are us...

  7. Satellite-as-a-Sensor Neural Network Abnormality Classification Optimization

    OpenAIRE

    Hammond, Michelle; Jobman, 2d Lt Ryan

    2006-01-01

    Neural networks and classification networks are used in commercial and government industries for data mining and pattern trend analysis. The commercial banking industry uses neural networks to detect out-of-pattern spending habits of customers for identity-theft purposes. An example of government use is the monitoring of satellite state-of-health measurements for pattern changes indicating possible sensor abnormality or onboard hardware failure in a real-time environment.

  8. Neural network based speech synthesizer: A preliminary report

    Science.gov (United States)

    Villarreal, James A.; Mcintire, Gary

    1987-01-01

    A neural net based speech synthesis project is discussed. The novelty is that the reproduced speech was extracted from actual voice recordings. In essence, the neural network learns the timing, pitch fluctuations, connectivity between individual sounds, and speaking habits unique to that individual person. The parallel distributed processing network used for this project is the generalized backward propagation network which has been modified to also learn sequences of actions or states given in a particular plan.

  9. Application of neural networks in coastal engineering - An overview

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Manjunatha, Y.R.; Hegde, A.V.

    ... of the human brain. The biggest merit is its ability to deal with fuzzy information whose interrelation is ambiguous or whose functional relationship is not clear. A neural network has the capability of learning and adjusts to the outside environment. Training the network is a kind of learning process, and the learning is done with specified examples. A neural network possesses the ability to learn, and is able to memorize a large amount of various information and then to formalize it. Furthermore, the most...

  10. MOVING TARGETS PATTERN RECOGNITION BASED ON THE WAVELET NEURAL NETWORK

    Institute of Scientific and Technical Information of China (English)

    Ge Guangying; Chen Lili; Xu Jianjian

    2005-01-01

    Based on pattern recognition theory and neural network technology, an automatic moving-object detection and classification method integrating advanced wavelet analysis is discussed in detail. An algorithm for moving-target pattern recognition based on the combination of inter-frame difference and a wavelet neural network is presented. The experimental results indicate that the designed BP wavelet network using this algorithm can recognize and classify moving targets rapidly and effectively.

  11. Optimization Design based on BP Neural Network and GA Method

    Directory of Open Access Journals (Sweden)

    Bing Wang

    2013-12-01

    Full Text Available This study puts forward an optimization and control method for complicated systems. A neural network model of the system is first built from real data, and its parameters are then tuned by a genetic algorithm (GA), so that intelligent optimization and control of the complicated system is obtained. The method covers identification of the network configuration and the network training method; by adopting number coding and effectively reducing the network size and the network convergence time, the network training speed is increased. The study also provides the relevant MATLAB procedure implementing the optimization and control method; with only minor adjustments to the concrete problem, this procedure can solve practical optimization and control problems of complicated systems.
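
    The paper's MATLAB procedure is not reproduced; the sketch below shows the general idea of GA-based tuning of a small network's weights with real-number coding, on a toy objective (population size, operators and network size are assumptions).

        import numpy as np

        # Sketch of GA-based tuning of a small network's weights using real-number
        # coding, in the spirit of the approach described above. The toy objective,
        # network size and GA settings are illustrative assumptions.
        rng = np.random.default_rng(6)
        X = rng.uniform(-1, 1, (100, 2))
        t = np.sin(X[:, 0]) + 0.5 * X[:, 1]                # toy target function

        def mse(theta):
            W1 = theta[:10].reshape(2, 5); b1 = theta[10:15]
            W2 = theta[15:20].reshape(5, 1); b2 = theta[20]
            h = np.tanh(X @ W1 + b1)
            return np.mean(((h @ W2).ravel() + b2 - t) ** 2)

        pop_size, n_genes = 40, 21
        pop = rng.normal(0, 1, (pop_size, n_genes))
        for gen in range(100):
            fit = np.array([mse(ind) for ind in pop])
            parents = pop[np.argsort(fit)[:pop_size // 2]]          # truncation selection
            children = []
            for _ in range(pop_size - len(parents)):
                p1, p2 = parents[rng.integers(len(parents), size=2)]
                mask = rng.random(n_genes) < 0.5                    # uniform crossover
                child = np.where(mask, p1, p2)
                child += rng.normal(0, 0.1, n_genes) * (rng.random(n_genes) < 0.2)  # mutation
                children.append(child)
            pop = np.vstack([parents, children])

        print("best MSE:", min(mse(ind) for ind in pop))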

  12. Results from the First Flight of BAM

    CERN Document Server

    Tucker, G S; Halpern, M; Towlson, W

    1996-01-01

    A new instrument, BAM (Balloon-borne Anisotropy Measurement), designed to measure cosmic microwave background (CMB) anisotropy at medium angular scales was flown for the first time in July of 1995. BAM is unique in that it uses a cryogenic differential Fourier transform spectrometer coupled to a lightweight off-axis telescope. The very successful first flight of BAM demonstrates the potential of the instrument for obtaining high quality CMB anisotropy data.

  13. Comparison of Gompertz and neural network models of broiler growth.

    Science.gov (United States)

    Roush, W B; Dozier, W A; Branton, S L

    2006-04-01

    Neural networks offer an alternative to regression analysis for biological growth modeling. Very little research has been conducted to model animal growth using artificial neural networks. Twenty-five male chicks (Ross x Ross 308) were raised in an environmental chamber. Body weights were determined daily and feed and water were provided ad libitum. The birds were fed a starter diet (23% CP and 3,200 kcal of ME/kg) from 0 to 21 d, and a grower diet (20% CP and 3,200 kcal of ME/ kg) from 22 to 70 d. Dead and female birds were not included in the study. Average BW of 18 birds were used as the data points for the growth curve to be modeled. Training data consisted of alternate-day weights starting with the first day. Validation data consisted of BW at all other age periods. Comparison was made between the modeling by the Gompertz nonlinear regression equation and neural network modeling. Neural network models were developed with the Neuroshell Predictor. Accuracy of the models was determined by mean square error (MSE), mean absolute deviation (MAD), mean absolute percentage error (MAPE), and bias. The Gompertz equation was fit for the data. Forecasting error measurements were based on the difference between the model and the observed values. For the training data, the lowest MSE, MAD, MAPE, and bias were noted for the neural-developed neural network. For the validation data, the lowest MSE and MAD were noted with the genetic algorithm-developed neural network. Lowest bias was for the neural-developed network. As measured by bias, the Gompertz equation underestimated the values whereas the neural- and genetic-developed neural networks produced little or no overestimation of the observed BW responses. Past studies have attempted to interpret the biological significance of the estimates of the parameters of an equation. However, it may be more practical to ignore the relevance of parameter estimates and focus on the ability to predict responses.
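
    For reference, the Gompertz growth function and the four error measures named in the abstract can be written as follows; the parameter values are placeholders, not the fitted broiler values.

        import numpy as np

        # Gompertz growth curve and the error measures named above (MSE, MAD,
        # MAPE, bias). Parameter values are placeholders, not fitted values.
        def gompertz(t, A, b, k):
            # A: asymptotic body weight, b: integration constant, k: maturation rate
            return A * np.exp(-b * np.exp(-k * t))

        t = np.arange(0, 71)                       # age in days
        observed = gompertz(t, 4000.0, 4.5, 0.04)  # placeholder "data"
        predicted = gompertz(t, 3900.0, 4.4, 0.041)

        err = predicted - observed
        mse = np.mean(err ** 2)
        mad = np.mean(np.abs(err))
        mape = np.mean(np.abs(err / observed)) * 100
        bias = np.mean(err)
        print(mse, mad, mape, bias)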

  14. Spikes Synchronization in Neural Networks with Synaptic Plasticity

    CERN Document Server

    Borges, Rafael R; Batista, Antonio M; Caldas, Iberê L; Borges, Fernando S; Lameu, Ewandson L

    2015-01-01

    In this paper, we investigated neural spike synchronisation in a neural network with synaptic plasticity and external perturbation. In the simulations the neural dynamics is described by the Hodgkin-Huxley model, considering chemical (excitatory) synapses among neurons. Regarding spike synchronisation, it is expected that a perturbation produces non-synchronised regimes. However, in the literature there are works showing that the combination of synaptic plasticity and external perturbation may generate a synchronised regime. This article describes the effect of the synaptic plasticity on the synchronisation, where we consider a perturbation with a uniform distribution. This study is relevant to research on the control of neural disorders.

  15. Forecasting PM10 in metropolitan areas: Efficacy of neural networks

    International Nuclear Information System (INIS)

    Deterministic photochemical air quality models are commonly used for regulatory management and planning of urban airsheds. These models are complex, computer intensive, and hence are prohibitively expensive for routine air quality predictions. Stochastic methods are becoming increasingly popular as an alternative; these relegate decision making to artificial intelligence based on Neural Networks that are made of artificial neurons or 'nodes' capable of 'learning through training' via historic data. A Neural Network was used to predict particulate matter concentration at a regulatory monitoring site in Phoenix, Arizona; its development, efficacy as a predictive tool and performance vis-à-vis a commonly used regulatory photochemical model are described in this paper. It is concluded that Neural Networks are much easier, quicker and more economical to implement without compromising the accuracy of predictions. Neural Networks can be used to develop rapid air quality warning systems based on a network of automated monitoring stations. Highlights: ► Neural Network is an alternative technique to photochemical modelling. ► Neural Networks can be as effective as traditional photochemical air quality modelling. ► Neural Networks are much easier and quicker to implement in a health warning system. - Neural networks are as effective as photochemical modelling for air quality predictions, but are much easier, quicker and more economical to implement in air pollution (or health) warning systems.

  16. Discriminating lysosomal membrane protein types using dynamic neural network.

    Science.gov (United States)

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology which classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. In this paper, a neural-network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, which include seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman regression neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. The dynamic networks have memory, i.e., their output depends not only on the current input but also on previous outputs. Thus, the accuracy of the LRN classifier comes out to be the highest among all the artificial neural networks considered. The overall accuracy of jackknife cross-validation is 93.2% for the data set. These predicted results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, Outer membrane proteins, GPI-Anchored) and Globular proteins, and they also indicate that the protein sequence representation can better reflect the core feature of membrane proteins than the classical AA composition.
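
    As a sketch of two of the sequence representations mentioned above (amino acid composition and sequence length) followed by principal component analysis for dimensionality reduction; the example sequences are placeholders and the classifier stage is omitted.

        import numpy as np
        from sklearn.decomposition import PCA

        # Sketch of two of the sequence representations mentioned above: amino acid
        # composition and sequence length, followed by PCA for dimensionality
        # reduction. The example sequences are placeholders.
        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def features(seq):
            comp = [seq.count(a) / len(seq) for a in AMINO_ACIDS]   # AA composition
            return comp + [len(seq)]                                # plus sequence length

        seqs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
                "GSHMLEDPVDAFQEKLAAYQ",
                "MSTNPKPQRKTKRNTNRRPQDVKFPGG"]
        X = np.array([features(s) for s in seqs])

        pca = PCA(n_components=2)
        Z = pca.fit_transform(X)          # reduced feature vectors fed to the classifier
        print(Z.shape, pca.explained_variance_ratio_)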

  17. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Although the linear model has been widely used as the bushing model in vehicle suspension systems, it could not express the nonlinear characteristics of the bushing in terms of amplitude and frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. A linear model was employed to represent linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, where sine excitation with different frequencies and amplitudes is applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to those of the linear model under several maneuvers.
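
    A conceptual sketch of the hybrid idea, assuming the total bushing force is split into a linear stiffness/damping term plus a neural-network correction trained on the residual; the "measured" force, linear coefficients and network size below are invented placeholders, and a generic regressor stands in for the paper's network.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Conceptual sketch of the hybrid idea: a linear stiffness/damping term plus
        # a neural-network correction trained on the residual (nonlinear) force.
        # The test data, linear coefficients and network size are placeholders.
        t = np.linspace(0, 10, 2000)
        x = 0.002 * np.sin(2 * np.pi * 3 * t)              # displacement [m]
        v = np.gradient(x, t)                              # velocity [m/s]
        k_lin, c_lin = 2.0e5, 300.0                        # linear stiffness and damping
        f_measured = k_lin * x + c_lin * v + 50 * np.tanh(800 * x)  # toy "test rig" force

        f_linear = k_lin * x + c_lin * v
        residual = f_measured - f_linear                   # nonlinear part for the NN
        feats = np.column_stack([x / np.abs(x).max(), v / np.abs(v).max()])  # normalised inputs
        nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
        nn.fit(feats, residual)

        f_hybrid = f_linear + nn.predict(feats)
        print("RMS error [N]:", np.sqrt(np.mean((f_hybrid - f_measured) ** 2)))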

  18. Efficient Speech Recognition by Using Modular Neural Network

    Directory of Open Access Journals (Sweden)

    Dr.R.L.K.Venkateswarlu

    2011-05-01

    Full Text Available The modular approach and the neural network approach are well-known concepts in the research and engineering community. By combining the two, the modular neural network approach is very effective in searching for solutions to complex problems in various fields. The aim of this study is the distribution of the complexity of the ambiguous-word classification task over a set of modules. Each of these modules is a single neural network which is characterized by its high degree of specialization. The number of interfaces, and therewith the possibilities for filtering external acoustic-phonetic knowledge, increases in a modular architecture. A Modular Neural Network (MNN) for speech recognition is presented with speaker-dependent single-word recognition in this paper. Using this approach and taking computational effort into account, the system performance can be assessed. The active performance is found to be maximum for MFCC while training with Modular Neural Network classifiers, at 99.88%. The active performance is found to be maximum for LPCC while training with the Modular Neural Network classifier, at 99.77%. It is found that MFCC performance is superior to LPCC performance while training the speech data with the Modular Neural Network classifier.

  19. Applications of artificial neural network chips

    International Nuclear Information System (INIS)

    In a collaboration between CERN and the Royal Institute of Technology, Stockholm, a so-called Asynchronous Transfer Mode (ATM) test setup was developed. The main goal of the task was partly the experimental verification of the hardware design principles and methods, and partly the application of the test setup for testing neural-network-controlled self-routing, asynchronous event-building ATM networks. We took part in the first implementation of the IBM Zero Instruction Set Computer (ZISC036)[2] on a PC-486 ISA-bus card. This chip has been designed for cost-effective recognition and classification in real time. After building the PC interface card and testing the main functions of the built-in logic, a code for character recognition was developed for comparing its performance to other RBF-type methods. The results show that the ZISC036 performs quite well. The most attractive feature of the chip is its speed: if it is operated at 20 MHz, a 64-component evaluation is ready in 0.5 μs. (K.A.) 2 refs.; 1 fig

  20. DEM interpolation based on artificial neural networks

    Science.gov (United States)

    Jiao, Limin; Liu, Yaolin

    2005-10-01

    This paper proposes a systemic resolution scheme for Digital Elevation Model (DEM) interpolation based on Artificial Neural Networks (ANNs). We employ a BP network to fit the terrain surface, and then detect and eliminate the samples with gross errors. Self-organizing Feature Maps (SOFM) are used to cluster the elevation samples, so the study area is divided into more homogeneous tiles after clustering. The BP model is then employed to interpolate the DEM in each cluster. Because error samples are eliminated and clusters are built, the interpolation result is better. The case study indicates that the ANN interpolation scheme is feasible. It also shows that ANN can obtain a more accurate result than polynomial and spline interpolation. ANN interpolation does not need to determine the interpolation function beforehand, so human influence is lessened, and the ANN interpolation is more automatic and intelligent. At the end of the paper, we propose the idea of constructing an ANN surface model. This model can be used in multi-scale DEM visualization, DEM generalization, etc.